
Changing a React Input Value from Vanilla JavaScript

The short answer:

function setNativeValue(element, value) {
    const lastValue = element.value;
    element.value = value;
    const event = new Event("input", { bubbles: true });
    // React 15: mark the event as simulated so React processes it
    event.simulated = true;
    // React 16: update React's internal value tracker so it sees the change
    const tracker = element._valueTracker;
    if (tracker) {
        tracker.setValue(lastValue);
    }
    element.dispatchEvent(event);
}

var input = document.getElementById("ID OF ELEMENT");
setNativeValue(input, "VALUE YOU WANT TO SET");

Reference: https://stackoverflow.com/a/52486921/17360

The long answer:

React overrides the native JavaScript onChange behavior. Triggering an onChange event does nothing to change the input field’s value in React’s eyes: to React, the value is still unchanged, even though a user can clearly see the new value on the screen. The code above triggers the change so that React sees it as well.

When to Use the FromServices Attribute

I recently discovered the [FromServices] attribute, which has been a part of .Net Core since the first version.

The [FromServices] attribute allows method-level dependency injection in Asp.Net Core controllers.

Here’s an example:

public class UserController : Controller
{
    private readonly IApplicationSettings _applicationSettings;

    public UserController(IApplicationSettings applicationSettings)
    {
        _applicationSettings = applicationSettings;
    }

    public IActionResult Get([FromServices] IUserRepository userRepository, int userId)
    {
        //Do magic
    }
}

Why use method injection over constructor injection? The common explanation is that when only a single method needs a dependency, that dependency is a candidate for the [FromServices] attribute.

Steven from StackOverflow posted an answer arguing against the use of the [FromServices] attribute:

For me, the use of this type of method injection into controller actions is a bad idea, because:

– Such [FromServices] attribute can be easily forgotten, and you will only find out when the action is invoked (instead of finding out at application start-up, where you can verify the application’s configuration)

– The need for moving away from constructor injection for performance reasons is a clear indication that injected components are too heavy to create, while injection constructors should be simple, and component creation should, therefore, be very lightweight.

– The need for moving away from constructor injection to prevent constructors from becoming too large is an indication that your classes have too many dependencies and are becoming too complex. In other words, having many dependencies is an indication that the class violates the Single Responsibility Principle. The fact that your controller actions can easily be split over different classes is proof that such controller is not very cohesive and, therefore, an indication of a SRP violation.

So instead of hiding the root problem with the use of method injection, I advise the use of constructor injection as sole injection pattern here and make your controllers smaller. This might mean, however, that your routing scheme becomes different from your class structure, but this is perfectly fine, and completely supported by ASP.NET Core.

From a testability perspective, btw, it shouldn’t really matter if there sometimes is a dependency that isn’t needed. There are effective test patterns that fix this problem.

I agree with Steven; if you need to move dependencies from the constructor to the method because the controller is taking on too many dependencies, then it’s time to break up the controller. You’re almost certainly violating SRP.

The only use case I see for method injection is late-binding a dependency that isn’t ready at controller construction. Otherwise, it’s better to use constructor injection.

I say this because with constructor injection, the class knows at construction time whether its dependencies are available. With method injection, this isn’t the case: you don’t know whether the dependencies are available until the method is called.
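
If you do split the controller, the constructor-injected version stays small and fails fast at start-up. Here’s a minimal sketch (UserQueryController is an illustrative name, not from the original example):

public class UserQueryController : Controller
{
    private readonly IUserRepository _userRepository;

    public UserQueryController(IUserRepository userRepository)
    {
        // The container resolves this at construction, so a missing
        // registration surfaces immediately rather than at request time.
        _userRepository = userRepository;
    }

    public IActionResult Get(int userId)
    {
        //Do magic
    }
}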

C# 8 – Nullable Reference Types

Microsoft is adding a new feature to C# 8 called Nullable Reference Types. At first, this is confusing, because all reference types are nullable… so how is this different? Going forward, if the feature is enabled, reference types are non-nullable unless you explicitly annotate them as nullable.

Let me explain.

Nullable Reference Types

When Nullable Reference Types are enabled and the compiler believes a reference type has the potential of being null, it warns you. You’ll see warning messages in Visual Studio as well as build warnings.

To remove the warning, append a question mark to the reference type, marking it as nullable. For example:

public string? StringTest()
{
    string? notNull = null;
    return notNull;
}

Now the reference type behaves as it did before C# 8. 

This feature is enabled by adding #nullable enable to the top of any C# file or by adding <NullableReferenceTypes>true</NullableReferenceTypes> to the .csproj file. Out of the box it’s not enabled, which is a good thing: if it were enabled, any existing code-base would likely light up like a Christmas tree.
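
Here’s a small sketch of how the compiler’s flow analysis behaves once the feature is on (the exact warning numbers may vary by compiler version):

#nullable enable

public class Greeter
{
    public string Greet(string? name)
    {
        // Dereferencing 'name' here would draw a possible-null warning:
        // return name.ToUpper();

        if (name == null)
        {
            return "Hello, stranger";
        }

        // After the null check, the compiler treats 'name' as non-null.
        return $"Hello, {name.ToUpper()}";
    }
}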

The Null Debate

Why is Microsoft adding this feature now? Nulls have been part of the language since, well, the beginning. Honestly, I don’t know why. I’ve always used nulls; they’re a fact of life in C#. I didn’t realize not having nulls was an option… Maybe life will be better without them. We’ll find out.

Should you or should you not use nulls? I’ve summarized the ongoing debate as I understand it.

For

The argument for nulls is generally that an object has an unknown state, and that unknown state is represented with null. You see this with the bit data type in SQL Server, which has three values: null (not set), 0, and 1. You also see this in UIs, where sometimes it’s important to know whether a user touched a field. Someone might counter with, “Instead of null, why not create an unknown state type or a ‘not set’ state?” How is this different from null? You’d still have to check for this additional state, and now you’re creating an unknown state for each instance. Why not just use null and have a single, global notion of unknown?
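
C# already models this three-state idea for value types with Nullable<T>; a tiny example:

// A nullable bool has three states, mirroring SQL Server's bit column:
bool? marketingConsent = null;  // unknown: the user never touched the field
bool? optedIn = true;           // explicitly yes
bool? optedOut = false;         // explicitly no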

Against

The argument against nulls is that null is effectively another state that must be checked for every time you use a reference type. The net result is code like this:

var user = GetUser(username, password);

if(user != null)
{
    DoSomethingWithUser(user);
}
else
{
    SetUserNotFoundErrorMessage();
}

Suppose the GetUser method returned a user in all cases, including when the user is not found. If the code never returns null, then guarding against null is wasted effort, and removing the guards ideally simplifies the code. However, at some point you’ll still need to check for an empty user and display an error message: not using null doesn’t remove the need to handle the business case of a user not found.
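
One common no-null approach is the Null Object pattern. Here’s a minimal sketch (the names are illustrative, not from a particular library):

public class User
{
    // A single well-known instance stands in for "no user found."
    public static readonly User NotFound = new User();

    public string Name { get; set; } = string.Empty;

    public bool IsFound => !ReferenceEquals(this, NotFound);
}

// GetUser now returns User.NotFound instead of null:
var user = GetUser(username, password);

if (user.IsFound)
{
    DoSomethingWithUser(user);
}
else
{
    SetUserNotFoundErrorMessage();
}

Notice the branch survives: the check simply moves from null to IsFound, which is exactly the point above.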

Is this Feature a Good Idea?

The purpose of this feature is NOT to eliminate the use of nulls, but instead to ask the question: “Is there a better way?” And sometimes the answer is “No.” If we can eliminate the constant checking for nulls with a little forethought, and in turn simplify our code, I’m in. The good news is C# has made working with nulls trivial.

I do fear some will take a dogmatic stance and insist on eliminating nulls to the detriment of a system. That’s a fool’s errand, because nulls are integral to C#.

Is Nullable Reference Types a good idea? It is, if the end result is simpler and less error-prone code.

9 Guidelines for Creating Expressive Names

Naming is subjective and situational; it’s an art, and as with most art, we discover patterns. I’ve learned a lot from reading other people’s code. In this article, I’ve compiled 9 guidelines I wish others had followed in the code I’ve read.

When a software engineer opens a class, she should know, based on the names, the class’s responsibilities. Yes, naming is only one spoke of the wheel; physical and logical structure also play a significant role in understanding code, as does complexity. In this article, I’m focusing on naming because I feel it has the most significant impact on understanding the code.

Don’t Include Type Unless it Clarifies the Intent

A type can be anything from a programming type (string, int, decimal) to a grouping of responsibilities (Util, Helper, Validator, Event, etc.). Often it’s a classification which doesn’t express intent.

Let’s look at an example: the name StringHelper doesn’t express much. A string is a system type, and “Helper” is vague; StringHelper speaks more to the “how” than to the intent. If instead we change the name to DisplayNameFormatter, we get a clearer picture of intent. DisplayName is very specific, and Formatter expresses an outcome. Formatter may or may not be a type, but it doesn’t matter, because it expresses intent.
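
To make the contrast concrete, here’s a hypothetical sketch of the kind of class a name like DisplayNameFormatter might describe:

// The class name states the outcome; the method name states the action.
public class DisplayNameFormatter
{
    public string Format(string firstName, string lastName)
    {
        return $"{firstName} {lastName}";
    }
}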

There are always exceptions; for example, in ASP.Net MVC, controllers must end in “Controller” or the application doesn’t function. And in paradigms such as Domain-Driven Design (DDD), names like “Service,” “Repository,” “ValueType,” and “Model” have meaning and express responsibility.

For example, UserRepository implies that user data is retrieved from and saved to a data store.

Avoid Metaphors

Metaphors are cultural, and engineers from other cultures might not understand the intent.

Common metaphors in the United States:

  • Beating a dead horse
  • Chicken or the egg
  • Elephant in the room

Common metaphors in New Zealand:

  • Spit the dummy
  • Knackered
  • Hard yakka

Use Verbs

Steve Yegge wrote a (very long) blog post about using verbs over nouns.

His point is to use verbs: applications are composed of nouns, but nouns don’t do things, and a system of nothing but nouns is useless. Express action in the names of methods.

For example, UserAuthentication(noun).AuthenticateUser(action/verb) expresses the action of verifying a user’s credentials.

Be Descriptive

Be descriptive: the more detail, the better. Express the responsibility in the name.

Ask yourself, what is the one thing this class or function does well?

If you have difficulty finding a name, the class or function might have more than one responsibility and thus violate the Single Responsibility Principle.

Don’t Lean on Comments for Intent

Comments are a great way to provide additional context to the code, but don’t lean on them; the names of classes and methods should stand on their own.

In Refactoring: Improving the Design of Existing Code, Martin Fowler, Kent Beck, John Brant, William Opdyke, and Don Roberts write:

… comments are often used as a deodorant. It’s surprising how often you look at thickly commented code and notice that the comments are there because the code is bad.

Another wonderful quote from the Refactoring authors:

When you feel the need to write a comment, first try to refactor the code so that any comment becomes superfluous. (p. 88)

Many times, when code is refactored and encapsulated into a method, you’ll find other places where it’s possible to leverage the new method, places you never anticipated.

Sometimes when calling a method, the consumer needs to know something particular about it; if that particularity is part of the name, the consumer doesn’t need to review the source code.

Here’s an example of incorporating a comment into a name.

With comments:

// without tracking
var user = GetUserByUserId(userId);

Refactored to include the comment in the method name:

var userWithOutTracking = GetUserByUserIdWithoutTracking(userId);

Other engineers now know this method doesn’t use tracking, without having to read the source code or hunt for the comment.

Comments should be your last line of defense. When possible, lean on other ways to express intent, including physical and logical structure and names.

Refrain From Using Names with Ambiguous Meaning

Avoid names with ambiguous meanings. An ambiguous name’s meaning changes from project to project, which makes understanding intent harder for a new engineer.

Here’s a list of common ambiguous names:

  • Helper
  • Input
  • Item
  • Logic
  • Manager
  • Minder
  • Moniker
  • Nanny
  • Overseer
  • Processor
  • Shepherd
  • Supervisor
  • Thingy
  • Utility
  • Widget

Use the Same Language as the Business Domain

Use the same terminology in the code as in the business domain. This allows engineers and subject matter experts (SMEs) to communicate ideas easily because they share the same vocabulary. When there isn’t a shared vocabulary, translation happens, which invariably leads to misunderstandings.

In one project I worked on, the business started with “Pupil” and then switched to “Student.” The software engineers never updated the software to reflect the change in terminology. When new engineers joined the project most believed Pupil and Student were different concepts.

Use Industry Speak

When possible, use terminology that has meaning across the software industry.

Most software engineers, when they see something named “factory,” immediately think of the factory pattern.

Using established paradigms such as “Clean Architecture” and “Domain-Driven Design” facilitates idea sharing and creates a common language for engineers.

The worst possible naming is co-opting industry-wide terminology and giving it a different meaning.

When Naming Booleans…

A boolean name should read as a question whose answer is either true or false.

For example, isUserAuthenticated: the answer is either yes (true) or no (false).

Use words like:

  • Has
  • Does
  • Is
  • Can
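
For example (user and cart are hypothetical objects):

var hasLicense = user.Licenses.Count > 0;    // Has: does the user hold at least one license?
var canCheckout = cart.Items.Count > 0;      // Can: is the action currently allowed?
var isActive = user.Status == Status.Active; // Is: a yes/no state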

Avoid negated names, for example:

A negated variable name:

var IsNotDeleted = true; // this is confusing

if(!IsNotDeleted) { // it gets even more confusing when the value is negated
  //Do magic
}

Without a negated variable name:

var IsDeleted = true; // much clearer

if(!IsDeleted) { 
  //Do magic
}

In Closing

Choosing expressive names is crucial in communicating intent, design, and domain knowledge to the next engineer. Often we work on code to fix defects or incorporate new features, continually compiling the code in our heads to understand how it works. Naming gives us hints about what the previous engineer was thinking. Without this communication between past and future engineers, we handicap the growth of the application, potentially dooming the project to failure.

With or Without Curly Braces?

There’s a heated debate around single statements and whether they should have curly braces or not.

In C++, C#, Java, and JavaScript, a single statement without curly braces is valid; some take advantage of this feature, while others don’t.

For example:

if(ifTrue) 
  MowTheLawn();

for(var index = 0; index < 10; index++)
  ChopWood();

foreach(var dollar in money)
  BuyLollipop();

while(untilTheEnd)
  Read();

Arguments Against Single Line Curly-Braces

The argument against curly-braces is that the syntax is terser, there are fewer characters to type, and it’s valid syntax. Why not take advantage of it?

Arguments For Single Line Curly-Braces

The argument for curly-braces is consistency, fewer bugs, and code that’s more natural to mentally parse.

In an article titled Single-line ‘if’ statements, Jon Abrams explains how a defect in Apple’s TLS implementation was introduced as a result of a single-line if statement without curly-braces. Jon goes on to say that while omitting curly braces in single-line statements is terser, preventing defects is more important than terseness.
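
The failure mode looks something like this paraphrased sketch (not Apple’s actual code; SignatureIsValid and LogFailure are hypothetical):

void VerifyConnection(Message message)
{
    if (!SignatureIsValid(message))
        LogFailure();
        return; // indented as if guarded by the if, but it always executes,
                // so any validation that follows never runs
}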

Jon proposes a compromise, to allow single-line statements if they are truly on a single line:

if(ifTrue) MowTheLawn();

I echo Jon’s thoughts: omitting the curly-braces in single-line statements isn’t worth the benefit. It forces the software engineer to consider two variations of valid syntax. That may not seem so bad, but it’s taxing to make the determination each time you happen upon an if statement. The net effect is that the engineer saves a few keystrokes and passes the burden on to future readers to parse their code.

For C# software engineers, Microsoft has taken a side in its coding conventions, which call for curly braces.

When we use curly-braces in all cases, regardless of the number of lines, what’s in scope and what’s out of scope is very clear. This makes the code less error-prone and more consistent, and although some might argue the point, I find it easier to read.

Understanding Begins with Expressive Names

In 2018, I joined a large project halfway through its development. The original engineers had moved on, leaving behind convoluted and undocumented code. Working with this type of code is challenging because you can’t differentiate the plumbing from the business domain. It makes debugging difficult and changes unpredictable because you don’t know their impact. It’s like trying to edit a book without understanding the words.

Many engineers believe the measure of success is when the code compiles. I believe it’s when another engineer (or you in six months) understands the “why” of your code. The original engineers handicapped the future engineers by not documenting and by using obtuse names. The names are sometimes the only window into the previous engineer’s thought process.

Donald Knuth famously said:

Programs are meant to be read by humans and only incidentally for computers to execute. – Donald Knuth

Naming

Naming is hard because it requires labeling and defining where and how a piece fits in an application.

Phil Karlton, while at Netscape, observed:

There are only two hard things in Computer Science: cache invalidation and naming things.
— Phil Karlton

We see our code through the lens of words and names we use. Names create a language for the next engineer to comprehend. This language paints a picture of how the author bridged the business domain and the programming language.

Ludwig Wittgenstein, a philosopher of the first half of the 20th century, said:

The limits of my language mean the limits of my world. – Ludwig Wittgenstein

The language of our software is only as descriptive as the names we use. Vague names blur the software’s purpose; descriptive names bring clarity and understanding.

Imagine visiting a country where you don’t speak the language. A simple request, such as asking to use the bathroom, brings bewildered looks. The inability to communicate is frustrating, maybe even scary. An engineer feels the same when confronted with confusing, unclear, or, even worse, misleading names.

This feeling is best experienced.

Experience

Examine the first snippet of code: what does this code do? What’s the “why”?

Take your time.

public class StringHelper
{
    public string Get(string input1, string input2)
    {
        var result = string.Empty;
        if(!string.IsNullOrEmpty(input1) && !string.IsNullOrEmpty(input2))
        {
            result = $"{input1} {input2}";
        }
        return result;
    }
}

The above code is a simple concatenation of two strings. What the code doesn’t tell you is the “why.” The “why” is so important; without it, it’s difficult to change behavior and understand the impact. Of course, investigating the code’s usage will likely reveal its “why,” but that’s the point: you shouldn’t have to discover the code’s purpose. The author should have left clues; it’s their responsibility to do so.

Let’s revisit the code, but with a little “why” sprinkled in.

Again, take your time; observe the difference you feel when reading this code.

public class FirstAndLastNameFormatter
{
    public string Concatenate(string firstName, string lastName)
    {
        var fullName = string.Empty;
        if(!string.IsNullOrEmpty(firstName) && !string.IsNullOrEmpty(lastName))
        {
            fullName = $"{firstName} {lastName}";
        }
        return fullName;
    }
}

The “why” brings the code to life; there’s a story to read.

Communicate

Communicating the intent and the design to the next engineer allows software to live and grow, because if engineers can’t modify the software, it dies. This is a tragedy, even more so when it’s the result of poor design and a lack of expressiveness, since each is preventable with knowledge.

Do the next engineer a favor and be expressive in your code. Use descriptive names and capture the “why” because who knows, the next engineer might be you.

Codifying the Secret Sauce

Each application has its secret sauce, its reason for existing. Codifying the secret sauce is instrumental in writing maintainable and successful applications.

Wait. What is codifying? Patience, my friend, we’ll get there.

First let’s hypothesize:

You’ve just been promoted to Lead Software Engineer (congratulations!). Your CEO’s first task for you is creating a new product for the company: a ground-up accounting application. The executives feel having a custom accounting solution will give them an edge on the competition.

A few months have passed, and most of the cross-cutting concerns are developed (yay you!). The team is now focused on the yummy goodness of the application: the business domain (the secret sauce). This is where codifying the secret sauce begins.

Codifying is putting structure around an essential concept in the business domain.

In accounting, the price-to-earnings ratio (P/E ratio) is a measure of a company’s share price relative to its earnings. A high P/E ratio suggests high earnings growth in the future. The P/E ratio is calculated by taking the market value per share (share price) and dividing it by the earnings per share ((profit - dividends) / number of outstanding shares).

A simple and, I would argue, naive implementation:

public class Metric
{
    public string Name { get; set; }
    public decimal Value { get; set; }
    public int Order { get; set; }
}

public class AccountingSummary
{
    public Metric[] GetMetrics(decimal price, decimal earnings)
    {
        var priceEarningsRatio = price / earnings;

        var priceEarningsRatioMetric = new Metric
        {
            Name = "P/E Ratio",
            Value = priceEarningsRatio,
            Order = 0
        };

        return new[] { priceEarningsRatioMetric };
    }
}

If this is only used in one place, it’s fine. But what if you use the P/E ratio in other areas?

Like here in PriceEarnings.cs

var priceEarningsRatio = price/earnings;

And here in AccountSummary.cs

var priceEarningsRatio = price/earnings;

And over here in StockSummary.cs

var priceEarningsRatio = price/earnings;

The P/E Ratio is core to this application, but hardcoding it in various places loses its importance in a sea of code. It’s just another tree in the forest.

You’re also open to the risk of changing the ratio in one place but not the others, which may throw off downstream calculations. These types of errors are notoriously difficult to find.

Often testers will assume that if it works in one area, it’s correct in all areas. Why wouldn’t the application use the same code to generate the P/E Ratio everywhere? Isn’t this the point of Object-Oriented Programming?

I can imagine an error like this making it into production and not being discovered until a visit from an executive who demands to know why the SEC’s P/E Ratio calculations are different from what the company filed. That’s not a great place to be.

Let’s revisit our P/E ratio implementation and see how we can improve upon our first attempt.

In accounting systems, formulas are a thing; let’s put structure around formulas by adding an interface:

public interface IFormula<T>
{
    decimal Calculate(T model);
}

Each formula is now implemented against this interface, giving us consistency and predictability.

Here’s our improved P/E Ratio after implementing our interface:

We’ve added a PriceEarningsModel to pass the needed data into our Calculate method.

public class PriceEarningsModel
{
    public decimal Price { get; set; }
    public decimal Earnings { get; set; }
}

Using our PriceEarningsModel, we’ve created an implementation of the IFormula interface for the P/E Ratio.

public class PriceEarningsRatioFormula : IFormula<PriceEarningsModel>
{
    public decimal Calculate(PriceEarningsModel model)
    {
        return model.Price / model.Earnings;
    }
}

We’ve now codified the P/E Ratio; it’s a first-class concept in our application. We can use it anywhere, it’s testable, and a change impacts the entire application.
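
Being a first-class concept also means the formula is trivially unit-testable. A minimal test sketch (xUnit assumed):

using Xunit;

public class PriceEarningsRatioFormulaTests
{
    [Fact]
    public void Calculate_DividesPriceByEarnings()
    {
        var formula = new PriceEarningsRatioFormula();

        var ratio = formula.Calculate(new PriceEarningsModel { Price = 100m, Earnings = 4m });

        Assert.Equal(25m, ratio);
    }
}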

As a reminder, here’s the implementation we began with:

public class Metric
{
    public string Name { get; set; }
    public decimal Value { get; set; }
    public int Order { get; set; }
}

public class AccountingSummary
{
    public Metric[] GetMetrics(decimal price, decimal earnings)
    {
        var priceEarningsRatio = price / earnings;

        var priceEarningsRatioMetric = new Metric
        {
            Name = "P/E Ratio",
            Value = priceEarningsRatio,
            Order = 0
        };

        return new[] { priceEarningsRatioMetric };
    }
}

It’s simple and gets the job done. The problem is that the P/E Ratio, a core concept in our accounting application, doesn’t stand out. Engineers not familiar with the application or the business domain won’t understand its importance.

Our improved implementation uses our new P/E Ratio class. We inject the PriceEarningsRatioFormula class into our AccountingSummary class.

We replace our hardcoded P/E Ratio with our new PriceEarningsRatioFormula class.

public class AccountingSummary
{
    private readonly PriceEarningsRatioFormula _peRatio;

    public AccountingSummary(PriceEarningsRatioFormula peRatio)
    {
        _peRatio = peRatio;
    }

    public Metric[] GetMetrics(decimal price, decimal earnings)
    {
        var priceEarningsRatio = _peRatio.Calculate(new PriceEarningsModel
        {
            Price = price,
            Earnings = earnings
        });

        var priceEarningsRatioMetric = new Metric
        {
            Name = "P/E Ratio",
            Value = priceEarningsRatio,
            Order = 0
        };

        return new[] { priceEarningsRatioMetric };
    }
}

One could argue there’s a bit more lifting with the PriceEarningsRatioFormula than with the previous implementation, and I’d agree. There is a bit more ceremony, but the benefits are well worth the small increase in code.

First, we gain the ability to change the P/E Ratio calculation for the entire application in one place. We also have a single implementation to debug if defects arise.

Lastly, we’ve codified the concept of the PriceEarningsRatioFormula in the application. When a new engineer joins the team, they’ll know formulas are essential to the application and the business domain.

There are other methods of codifying (encapsulating) the domain, such as microservices and assemblies. Each approach has its pros and cons; you and your team will have to decide what’s best for your application.

Ensconcing key domain concepts in classes and interfaces creates reusable components and conceptual boundaries, making an application easier to reason about, reducing defects, and lowering the barriers to onboarding new engineers.

Garbage Collection Types in .Net Core

Memory management in modern languages is often an afterthought. For all intents and purposes, we write software with nary a thought about memory. This serves us well, but there are always exceptions…

In California, there are extensive financial reporting requirements for Local Education Agencies (LEAs); an LEA can be a county, a district, a charter, or a single school. Most LEAs create their own financial reports, usually centered around Excel, so it’s no surprise each report is different. To solve this problem, the California Board of Education commissioned software, Ed-Pro, to generate financial reports.

I was a part of the development team. 

My first stop was the testing logs. Ed-Pro’s logs pointed to high memory usage; perhaps there was a memory leak? An engineer observed that Ed-Pro’s calculations used a large amount of short-lived memory. If that memory wasn’t cleaned up quickly, it could look like a memory leak.

Ed-Pro is built on top of .Net Core, Microsoft’s multi-platform framework. In .Net Core, memory is divided into three generations: short-lived (Gen0), medium-lived (Gen1), and long-lived (Gen2). Gen0 holds short-lived data that quickly goes out of scope; Gen1 holds medium-lived data that hangs around a bit longer before it, too, goes out of scope; and Gen2 holds long-lived data that may live for the life of the application. Gen0 memory is constantly reclaimed, Gen1 less frequently than Gen0, and Gen2 even less frequently than Gen1.

The only sure way to understand the memory usage of Ed-Pro was to profile it; below is a screenshot using dotMemory by JetBrains.

As suspected, we found large amounts of Gen0 memory (the blue in the profile), so much so that it appeared Garbage Collection couldn’t keep up. To compensate for the large amount of memory, Garbage Collection oscillated between increasing the memory space (adding more memory for the application’s use) and cleaning it up. During the cleanup cycles, the application was unresponsive.

At first, we were stumped; isn’t the purpose of the GC to keep memory tidy? Two articles were instrumental in our understanding of how Garbage Collection works in .Net: Mark Vincze’s Troubleshooting high memory usage with ASP.Net Core on Kubernetes and Microsoft’s Fundamentals of Garbage Collection. Both are great reads and brought clarity to the memory usage in Ed-Pro.

Here’s a summary of what we learned: there are two types of Garbage Collection in .Net, Server Garbage Collection and Workstation Garbage Collection.

Server Garbage Collection makes a couple of assumptions: first, that ample memory is available, and second, that the processors are multi-core and fast. Both can be true, but we live in a world of virtual machines and Docker where it’s more likely that both assumptions are false.

Server Garbage Collection allows memory to build up; at some point, it does one of two things: it either increases the memory space, allowing memory to grow, or it frees up orphaned memory. When it chooses to free memory, Garbage Collection starts the process on a high-priority thread, a higher priority than the application’s. If the machine is fast, the cleanup shouldn’t be noticed; if it’s not, it’ll cause the application to halt until the cleanup is completed.

Workstation Garbage Collection operates differently. It continuously reclaims memory on a thread with the same priority as the application. This means it competes with the application for resources, which can cause application slowness. The upside is that the application’s memory usage can stay quite low, particularly when the application churns through large amounts of Gen0 memory.

By default, if .Net Core detects a server, it runs the Server Garbage Collection type, which was the case with our application. To run the Workstation Garbage Collection type, add the following snippet to your project file:

  <PropertyGroup> 
    <ServerGarbageCollection>false</ServerGarbageCollection>
  </PropertyGroup>
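
To confirm which mode a process actually picked up, the runtime exposes GCSettings; here’s a quick sanity-check sketch:

using System;
using System.Runtime;

class GcModeCheck
{
    static void Main()
    {
        // True when the Server Garbage Collection type is active.
        Console.WriteLine($"Server GC: {GCSettings.IsServerGC}");

        // How many times each generation has been collected so far.
        Console.WriteLine($"Gen0: {GC.CollectionCount(0)}, " +
                          $"Gen1: {GC.CollectionCount(1)}, " +
                          $"Gen2: {GC.CollectionCount(2)}");
    }
}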

We made this configuration change to Ed-Pro and, using dotMemory, profiled Ed-Pro’s memory with Workstation Garbage Collection enabled, loading the same screens as in the previous test. Here are the results:

The memory usage is significantly decreased. The Gen0 allocations are virtually non-existent. Beyond the differences in the graph, Server Garbage Collection memory usage topped 1 gigabyte, while Workstation Garbage Collection topped out at roughly 200 megabytes.

Every application is different. Our application used a ton of temporary data and thus a ton of Gen0 memory. Your application may lean on longer-lived memory in Gen1 or Gen2, in which case Server Garbage Collection makes a whole lot of sense. My advice is to profile your memory under different conditions to get an idea of how memory is used, and then decide which mode is best for your application.

You Are Not Your Code

It’s not personal.

Your code reflects neither your beliefs, nor your upbringing, nor your character.

Your thoughts and your opinions evolve, new ideas form, and you change. 

The you of today will be different from the you of tomorrow.

Embrace the difference; you and your code are better because of it.