Give a Safe Space to Express Ideas

When leading a team, it’s important to create an environment where everyone feels safe to express their ideas regardless of their experience level.

Early in my career, I was leading a team of six. One of the software engineers approached me with an idea I knew wouldn’t work. Instead of telling him he was wrong, I told him to set up a meeting with the rest of the team.

In the meeting, the team agreed his approach wouldn’t work, but they took aspects of it and incorporated them into the final solution. If I had told the software engineer no from the get-go, he wouldn’t have felt heard, and we would’ve had a less robust solution.

Scrum is Overrated

Most companies follow some type of Scrum process. Typically, this entails two- or three-week sprints. At the end of each sprint, changes are demoed, retrospectives are held, and the backlog is groomed. During each sprint, task completion time is captured, which allows management to project when work will reach completion.

Many of the Scrum projects I’ve been a part of emphasize “committing” to tasks or “taking ownership.” At the end of the sprint, many engineers are held accountable for incomplete tasks. Sprint velocity is another idea that is hammered home: we have to keep our velocity! It’s as if creating software is a race; it’s not. If engineers are held accountable to a metric, they’ll optimize for the metric, and that isn’t what you want.

Scrum creates an easy-to-understand framework for teams to follow, and it gives management the tools to predict the future. Teams that have practiced waterfall find Scrum easy to grok.

Many of Scrum’s practices aren’t needed. For example, most issue-tracking software allows managers to run reports on the frequency of ticket completion. With this information, managers can infer velocity instead of baking velocity into the process and making it a big deal. Taking ownership is a farce; we do it naturally, and making it explicit is insulting. On all the projects I’ve been a part of, each engineer has had a corner of the application that’s their space.

Other ways to improve software delivery:

  • If you need weekly deployments, schedule them. Deploy what’s ready.
  • Keep the backlog groomed; then engineers never run out of work.
  • In my opinion, retrospectives are the most essential non-development activity. Without them, you have no chance of becoming a better and more efficient organization.
  • Automate, automate, automate
  • Committing to a list of features is ridiculous. Rank the tasks and complete what you can. Fretting over why “task A” wasn’t completed is a waste of time. It’s clear the task was either too big or higher-priority work was taken on.
  • Demos are a waste of time unless the client cares and provides feedback. 
  • Daily meetings may or may not be needed. I prefer meeting every couple of days.

At the end of the day, it’s about providing value to the client in the most efficient way.

A Binary Search Implementation

The binary search algorithm quickly searches a large sorted array of numbers; it’s a classic example of divide and conquer.

public class BinarySearch
{
    // Note: the items array must be sorted in ascending order.
    public int Search(int[] items, int searchValue)
    {
        int left = 0;
        int right = items.Length - 1;

        while (left <= right)
        {
            // Computed as left + (right - left) / 2 to avoid integer
            // overflow on very large arrays.
            var middle = left + (right - left) / 2;

            // If the searchValue is in the middle, we found it!
            if (items[middle] == searchValue)
            {
                return middle;
            }

            // If the searchValue is less than the current middle, we set right to (middle - 1)
            // because the searchValue is in the lower half of the items.
            if (searchValue < items[middle])
            {
                right = middle - 1;
            }
            // If the searchValue is greater than the current middle, we set left to (middle + 1)
            // because the searchValue is in the upper half of the items.
            else
            {
                left = middle + 1;
            }
        } // Either we've found the item and returned it, or we've narrowed
          // the search boundaries and we search again.

        // Not found.
        return -1;
    }
}
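
A quick usage sketch (the sample values are mine, for illustration):

var search = new BinarySearch();
int[] sorted = { 2, 5, 8, 12, 16, 23, 38 };

var index = search.Search(sorted, 23);    // returns 5
var missing = search.Search(sorted, 7);   // returns -1, not found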

The Benefits of Using a Build Framework

Continuous Integration (CI) and Continuous Delivery (CD) are the norm on software projects these days. There are many build servers, such as Azure DevOps, TeamCity, Jenkins, and CruiseControl.NET. Most of these servers use proprietary languages to define build steps. But is codifying your build steps in a proprietary language a good thing?

Some applications are simple, with a few build steps; others are more complex, with many. When you define build steps in a proprietary language, the more complex the build (in sophistication or in number of steps), the more coupled to a build platform you become. This becomes an issue when you want to switch build platforms. For example, say you’re using JetBrains’ TeamCity in your on-premises datacenter, but the company decides to move to the cloud. Now you must rewrite your build scripts because TeamCity isn’t supported on the new cloud platform.

Instead of writing your build scripts in a proprietary language, consider using a build framework.

Build frameworks have two benefits:

  1. Allowing transportability between build platforms.
  2. Allowing you to version your build scripts alongside your application code.

Transportability between platforms gives you the flexibility of moving between build platforms with minimal effort. There will always be some configuration on a new build platform, but build frameworks keep the effort low.

In my opinion, the biggest benefit of build frameworks is the ability to check in and version your build scripts alongside your application code. Having the option to pull code from any point in your source control’s history and have that code build is well worth any downsides of a build framework.

There are two popular frameworks in the .Net space: Cake and Nuke Build. Both frameworks have been around for a while. I’ve used Nuke Build and enjoy it. I’ve heard great things about Cake, and I encourage you to look at both before deciding which is the best framework for your project.
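
To give a feel for the approach, here’s a minimal sketch of a Nuke build class; the targets and steps are illustrative, not a full pipeline:

using Nuke.Common;

class Build : NukeBuild
{
    // Entry point; Compile is the default target.
    public static int Main() => Execute<Build>(x => x.Compile);

    Target Clean => _ => _
        .Executes(() =>
        {
            // Delete build output directories here.
        });

    Target Compile => _ => _
        .DependsOn(Clean)
        .Executes(() =>
        {
            // Invoke the compiler here, e.g. via Nuke's .Net tooling.
        });
}

Because the build is plain C# checked in with the application, any build server that can run a .Net program can run it.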

So the next time you’re creating a new build definition for your application, consider using a build framework and checking it in source control with your application.

Tools and Resources I Commonly Use to Develop Software

Below is a collection of tools, libraries, and resources I commonly use.

My Computer Setup

I’ve tried many configurations, and at one point, I even had three monitors.

What I discovered is that two 27-inch high-resolution monitors (4K+) work best. I sometimes miss the third screen, but this is where the high resolution shines: I use split-screen instead.

I aim for a clutter-free workspace; it’s why I enjoy the iMac. It’s a beautiful computer with only a power cable.

27-inch 5K 2019 iMac – 40 GB of RAM, 512 GB SSD

It’s a compact, performant, capable computer; what else can I say?

Second Monitor – BenQ 27-inch 4K HDR SW271

As a hobbyist photographer, I need a good monitor, and the BenQ fits the bill with its excellent color and brightness. The icing on the cake is the HDR support.

Keyboard – Logitech Craft

The Craft keyboard is quiet, has backlit keys, and supports both Mac and Windows key layouts.

The biggest drawback is the price.

Mouse – Logitech MX Master 3

The MX Master series of mice has been phenomenal since the first version. Each iteration brings it closer to perfect.

Headphones – Beyerdynamic MMX 300 2nd Gen

I don’t know about you, but when I’m coding, I like a distraction-free space. In an office, that is nearly impossible, and I’m always the guy stuck next to the breakroom.

I’ve tried many brands, including three generations of Bose QCs (wired and wireless), the Sony MDR1AM2s, the Turtle Beach XOFOURs, and the Beyerdynamics.

For sound quality, wired is the way to go. Don’t get me wrong, wireless headphones sound good, but they can’t beat wired headphones.

The Beyerdynamics are not for everyone; the cans are huge, and some people have complained about a tight fit. But they have a great soundstage and good isolation without being noise-canceling.

Aeron Chair Remastered

Aeron Chairs are the gold standard of office chairs. I’ve worked in an office for years sitting in cheap chairs that hurt my tailbone and back.

The Aeron is a dream compared to those chairs. There may be cheaper chairs with the same level of comfort, but there’s no consensus on which ones compare to the Aeron.

XDesk (formerly NextDesk)

I had a dream of walking on a treadmill while coding, so I purchased the NextDesk and a walking treadmill. It was awesome.

The dream lasted about a year.

Software

Operating System

macOS Big Sur

In 2016, I switched from Windows to Mac, but since I develop in Microsoft technologies, I never truly left Windows.

Both operating systems have their appeal, but the integration between Apple’s products is hard to beat.

IDEs

JetBrains Rider

When JetBrains released Rider, I thought they were nuts to compete with Microsoft’s Visual Studio.

I was wrong.

Rider is faster and more innovative than Visual Studio.

JetBrains WebStorm

As with Rider, WebStorm is an excellent IDE; it feels natural if you’re used to other JetBrains IDEs.

JetBrains DataGrip

Another IDE from JetBrains, but this one is for databases.

If you haven’t looked at JetBrains, I highly recommend you do.

Text Editors

Azure Data Studio

A SQL editor from Microsoft built on top of Electron. Many applications built with Electron amaze me, and Azure Data Studio is one of them. To think, at its core, it’s just JavaScript and HTML.

Visual Studio Code

As with Azure Data Studio, Visual Studio Code is built with Electron and is my de facto text editor.

I have to mention Sublime Text 3: from a performance standpoint, nothing can touch Sublime Text.

Programming Libraries

Nuke Build

In the olden days, we’d set up our CI/CD pipeline using CruiseControl.NET with an MSBuild or NAnt script. You’d copy your script to the build server and be off to the races. The problem is that if your build pipeline changed, older versions of your application were no longer buildable.

This is where Nuke Build comes in. All of your build IP is checked in and versioned with the code, so you can roll back to an older version, and it’s still buildable.

XUnit

The two main testing frameworks in the .Net ecosystem are xUnit and NUnit. Both are great, but xUnit is simpler than NUnit, and as I mentioned at the start, I like simple.

Fluent Assertions

Be honest, you don’t test as often as you should. I didn’t think so; me neither.

Fluent Assertions provides English-like assertions, making asserts easier to write and easier to read.
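
A small, contrived sketch of the difference using xUnit:

using FluentAssertions;
using Xunit;

public class CalculatorTests
{
    [Fact]
    public void Adding_two_and_three_returns_five()
    {
        var result = 2 + 3;

        // Classic xUnit assertion:
        Assert.Equal(5, result);

        // Fluent Assertions reads like English:
        result.Should().Be(5);
    }
}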

Bogus

In most unit tests, passing in dummy data is the norm, and a good part of setting up a test is setting up that dummy data. Bogus eliminates the need to create dummy data from scratch; it provides several common data formats out of the box.
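
Here’s a minimal sketch of how Bogus works (the Person class and the rules are illustrative):

using Bogus;

var faker = new Faker<Person>()
    .RuleFor(p => p.FirstName, f => f.Name.FirstName())
    .RuleFor(p => p.Email, f => f.Internet.Email());

// Generates a Person filled with realistic-looking dummy data.
var person = faker.Generate();

public class Person
{
    public string FirstName { get; set; }
    public string Email { get; set; }
}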

MediatR

If you haven’t used MediatR, you’re missing out. It’s an excellent implementation of the Mediator Pattern. I use it in all of my applications.
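
As a minimal sketch of how it’s used (the Ping request and handler are my own illustration, not from a real project):

using MediatR;
using System.Threading;
using System.Threading.Tasks;

// A request whose response is a string.
public class Ping : IRequest<string> { }

// The handler MediatR resolves and invokes when a Ping is sent.
public class PingHandler : IRequestHandler<Ping, string>
{
    public Task<string> Handle(Ping request, CancellationToken cancellationToken)
        => Task.FromResult("Pong");
}

// With an IMediator instance (e.g. injected into a controller):
// var response = await mediator.Send(new Ping()); // "Pong"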

Miscellaneous

Spark (Email Client)

This is the best email client on the Mac.

Slack

What is there to say about Slack? It’s one of the best communication platforms out there.

Typora (Rich Markdown Editor)

Typora takes Markdown to the next level. If you haven’t used it, try it; you won’t regret it.

Notion (Note Taking)

Finding the perfect solution for note-taking is nearly impossible; Notion is the closest I’ve gotten in a single application.

Beyond Compare

Beyond Compare is an excellent text comparer. I don’t use it often, but when I do, it’s well worth it.

GitKraken

If you’re looking for an application to visualize Git, GitKraken is the application for you.

Learning Resources

Udemy

Udemy is an excellent resource for courses of any type. If you want to learn something, check here first.

Pluralsight

Five years ago, Pluralsight was the king of technology videos. While they still have a great selection, other services have surpassed them. If you’re looking for .Net-related content, check Pluralsight first; they’ll likely have a video.

Creative Live

Creative Live has a decent library of videos on drawing, photography, video production, etc. I’ve purchased courses on Final Cut Pro and photography.

Most of Creative Live’s videos are well produced and high in video quality.

O’Reilly Learning

For me, this is the best learning platform for Software Engineers. It has videos, live sessions, hands-on coding, the entire O’Reilly book library, and Manning books.

Before subscribing to O’Reilly, I’d buy books from Amazon and Manning; now I don’t, since most of them are available on the O’Reilly Learning platform.

Grady Booch on Architecture

A Series of Tweets from Grady Booch on software architecture:

https://twitter.com/Grady_Booch/status/1301810358819069952

A thread regarding the architecture of software-intensive systems.

There is more to the world of software-intensive systems than web-centric platforms at scale.

A good architecture is characterized by crisp abstractions, a good separation of concerns, a clear distribution of responsibilities, and simplicity. All else is details.

You cannot reduce the complexity of a software-intensive system; the best you can do is manage it.

In the fullness of time, all vibrant architectures must evolve.

Old software never dies; you must kill it.

Some architectures are intentional, some are accidental, most are emergent.

Meaningful architecture is a living, vibrant process of deliberation, design, and decision.

The relentless accretion of code over days, months, years and even decades quickly turns every successful new project into a legacy one.

Show me the organization of your team and I will show you the architecture of your system.

All well-structured software-intensive systems are full of patterns.

A software architect who does not code is like a cook who does not eat.

Focusing on patterns and cross-cutting concerns can yield an architecture that is smaller, simpler, and more understandable.

Design decisions encourage what a particular stakeholder can do as well as constrain what a stakeholder cannot.

In the beginning, the architecture of a software-intensive system is a statement of vision. In the end, the architecture of every such system is a reflection of the billions upon billions of small and large, intentional and accidental design decisions made along the way.

All architecture is design, but not all design is architecture.

Architecture represents the set of significant design decisions that shape the form and the function of a system, where significant is measured by cost of change.

Nvarchar vs. Varchar

Every engineer defining a new string column faces the decision: do I use nvarchar or varchar?

Since I discovered nvarchar, I’ve always used nvarchar. My thought is: why use a data type that may not support a text value? You likely won’t discover the incompatibility until it’s in production.

I hear the argument about space, but space is cheap and not worth worrying about. I know what you’re thinking: cheap or not, space matters when the hard drive is full, and I agree.

Starting with SQL Server 2008 R2, Unicode compression is applied to nchar and nvarchar fields (nvarchar(max) is excluded). The effectiveness of the compression varies with the data, but with English text there is roughly 50% compression, which puts nvarchar on par with varchar’s space needs (1).

Something else to consider: most programming languages use UTF-16 as their string type. So each time a varchar is loaded from the database, it’s converted to UTF-16 (nvarchar-ish).
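
A minimal illustration in C# (the sample string is arbitrary):

using System;

// .Net strings are UTF-16: each char is a 16-bit code unit.
string text = "héllo";
Console.WriteLine(text.Length);       // 5 chars
Console.WriteLine(sizeof(char) * 8);  // 16 bits per char

// A varchar value read through ADO.NET arrives as this same UTF-16 string type.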

This StackOverflow answer sums up nvarchar vs. varchar:

An nvarchar column can store any Unicode data. A varchar column is restricted to an 8-bit codepage. Some people think that varchar should be used because it takes up less space. I believe this is not the correct answer. Codepage incompatibilities are a pain, and Unicode is the cure for codepage problems. With cheap disk and memory nowadays, there is really no reason to waste time mucking around with code pages anymore.

All modern operating systems and development platforms use Unicode internally. By using nvarchar rather than varchar, you can avoid doing encoding conversions every time you read from or write to the database. Conversions take time, and are prone to errors. And recovery from conversion errors is a non-trivial problem.

If you are interfacing with an application that uses only ASCII, I would still recommend using Unicode in the database. The OS and database collation algorithms will work better with Unicode. Unicode avoids conversion problems when interfacing with other systems. And you will be preparing for the future. And you can always validate that your data is restricted to 7-bit ASCII for whatever legacy system you’re having to maintain, even while enjoying some of the benefits of full Unicode storage. (2)

My conclusion: the only time the data is a varchar is when it’s at rest.

References:

1. Unicode Compression implementation
2. What is the difference between varchar and nvarchar?

Changing a React Input Value from Vanilla JavaScript

The short answer:

function setNativeValue(element, value) {
    let lastValue = element.value;
    element.value = value;
    let event = new Event("input", { target: element, bubbles: true });
    // React 15
    event.simulated = true;
    // React 16
    let tracker = element._valueTracker;
    if (tracker) {
        tracker.setValue(lastValue);
    }
    element.dispatchEvent(event);
}

var input = document.getElementById("ID OF ELEMENT");
setNativeValue(input, "VALUE YOU WANT TO SET");

Reference: https://stackoverflow.com/a/52486921/17360

The long answer:

React overrides the native JavaScript onChange behavior. Triggering an onChange event does nothing to change the input field’s value in React’s eyes. To React, the value is still unchanged, even though a user can clearly see the new value on the screen. The code above triggers the change in React as well.

When to Use the FromServices Attribute

I recently discovered the [FromServices] attribute, which has been a part of .Net Core since the first version.

The [FromServices] attribute allows method-level dependency injection in Asp.Net Core controllers.

Here’s an example:

public class UserController : Controller
{
    private readonly IApplicationSettings _applicationSettings;

    public UserController(IApplicationSettings applicationSettings)
    {
        _applicationSettings = applicationSettings;
    }

    public IActionResult Get([FromServices] IUserRepository userRepository, int userId)
    {
        //Do magic
    }
}

Why use method injection over constructor injection? The common explanation is that when a dependency is needed by one method and isn’t used anywhere else, it’s a candidate for the [FromServices] attribute.

Steven from StackOverflow posted an answer arguing against the [FromServices] attribute:

For me, the use of this type of method injection into controller actions is a bad idea, because:

– Such [FromServices] attribute can be easily forgotten, and you will only find out when the action is invoked (instead of finding out at application start-up, where you can verify the application’s configuration)

– The need for moving away from constructor injection for performance reasons is a clear indication that injected components are too heavy to create, while injection constructors should be simple, and component creation should, therefore, be very lightweight.

– The need for moving away from constructor injection to prevent constructors from becoming too large is an indication that your classes have too many dependencies and are becoming too complex. In other words, having many dependencies is an indication that the class violates the Single Responsibility Principle. The fact that your controller actions can easily be split over different classes is proof that such controller is not very cohesive and, therefore, an indication of a SRP violation.

So instead of hiding the root problem with the use of method injection, I advise the use of constructor injection as sole injection pattern here and make your controllers smaller. This might mean, however, that your routing scheme becomes different from your class structure, but this is perfectly fine, and completely supported by ASP.NET Core.

From a testability perspective, btw, it shouldn’t really matter if there sometimes is a dependency that isn’t needed. There are effective test patterns that fix this problem.

I agree with Steven; if you need to move dependencies from your constructor to the method because the class is constructing too many dependencies, then it’s time to break up the controller. You’re almost certainly violating SRP.

The only use case I see for method injection is late-binding a dependency that isn’t ready at controller construction. Otherwise, it’s better to use constructor injection.

I say this because with constructor injection, the class knows at construction time whether its dependencies are available. With method injection, this isn’t the case; you don’t know if the dependencies are available until the method is called.

C# 8 – Nullable Reference Types

Microsoft is adding a new feature to C# 8 called Nullable Reference Types, which at first is confusing because all reference types are nullable… so how is this different? Going forward, if the feature is enabled, reference types are non-nullable unless you explicitly annotate them as nullable.

Let me explain.

Nullable Reference Types

When Nullable Reference Types are enabled and the compiler believes a reference type has the potential of being null, it warns you. You’ll see warning messages in Visual Studio and as build warnings.

To remove this warning, append a question mark to the reference type. For example:

public string? StringTest()
{
    // Both the local and the return type are annotated as nullable,
    // so the compiler no longer warns.
    string? maybeNull = null;
    return maybeNull;
}

Now the reference type behaves as it did before C# 8. 

This feature is enabled by adding #nullable enable to the top of any C# file or by adding <Nullable>enable</Nullable> to the .csproj file. Out of the box, it’s not enabled, which is a good thing; if it were, any existing codebase would likely light up like a Christmas tree.

The Null Debate

Why is Microsoft adding this feature now? Nulls have been part of the language since, well, the beginning. Honestly, I don’t know why. I’ve always used nulls; they’re a fact of life in C#. I didn’t realize not having nulls was an option… maybe life will be better without them. We’ll find out.

Should you or should you not use nulls? I’ve summarized the ongoing debate as I understand it.

For

The argument for nulls is generally that an object can have an unknown state, and this unknown state is represented with null. You see this with the bit data type in SQL Server, which has three values: null (not set), 0, and 1. You also see it in UIs, where sometimes it’s important to know whether a user touched a field. Someone might counter with, “Instead of null, why not create an unknown-state type or a ‘not set’ state?” How is this different from null? You’d still have to check for this additional state, and now you’re creating an unknown state for each type. Why not just use null and have a global unknown state?

Against

The argument against nulls is that null is effectively another state you must check for each time you use a reference type. The net result is code like this:

var user = GetUser(username, password);

if (user != null)
{
    DoSomethingWithUser(user);
}
else
{
    SetUserNotFoundErrorMessage();
}

If the GetUser method returned a user in all cases, including when the user is not found, the code would never return null, and guarding against null would be a waste; ideally, this simplifies the code. However, at some point you’ll still need to check for an empty user and display an error message. Not using null doesn’t remove the need to fulfill the business case of a user not found.
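
One way to express “no user found” without null is the Null Object pattern. Here’s a minimal sketch; the User, UserService, and LookUpUser shapes are mine, for illustration:

public class User
{
    public static readonly User NotFound = new User { Name = string.Empty };

    public string Name { get; set; }
    public bool IsFound => !ReferenceEquals(this, NotFound);
}

public class UserService
{
    public User GetUser(string username, string password)
    {
        // LookUpUser stands in for real data access; when the user is
        // absent, return the NotFound instance instead of null.
        var user = LookUpUser(username, password);
        return user ?? User.NotFound;
    }

    private User LookUpUser(string username, string password) => null; // stand-in
}

// Callers never guard against null but still handle the business case:
// var user = userService.GetUser(username, password);
// if (user.IsFound) DoSomethingWithUser(user); else SetUserNotFoundErrorMessage();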

Is this Feature a Good Idea?

The purpose of this feature is NOT to eliminate the use of nulls but to ask the question: “Is there a better way?” And sometimes the answer is “no.” But if we can eliminate the constant checking for nulls with a little forethought, which in turn simplifies our code, I’m in. The good news is C# has made working with nulls trivial.

I do fear some will take a dogmatic stance and insist on eliminating nulls to the detriment of a system. That is a fool’s errand, because nulls are integral to C#.

Is Nullable Reference Types a good idea? It is, if the end result is simpler and less error-prone code.