Chuck Conway

4 Practices to Lower Your Defect Rate

Writing software is a battle between complexity and simplicity. Striking a balance between the two is difficult. The trade-off is between long, unmaintainable methods and too much abstraction. Tilting too far in either direction impairs code readability and increases the likelihood of defects.

Are defects avoidable? NASA tries, but it also does truckloads of testing. Its software is literally mission critical -- a one-shot deal. For most organizations, this isn't the case, and large amounts of testing are costly and impractical. While there is no substitute for testing, it's possible to write defect-resistant code without testing.

In 20 years of coding and architecting applications, I've identified four practices that reduce defects. The first two practices limit the introduction of defects, and the last two practices expose defects. Each practice is a vast topic on its own, one on which many books have been written. I've distilled each practice into a couple of paragraphs and provided links to additional information where possible.

1. Write Simple Code

Simple should be easy, but it's not. Writing simple code is hard.

Some will read this and think this means using simple language features, but this isn't the case -- simple code is not dumb code.

To keep it objective, I'm using cyclomatic complexity as a measure. There are other ways to measure complexity and other types of complexity; I hope to explore these topics in later articles.

Microsoft defines cyclomatic complexity as:

Cyclomatic complexity measures the number of linearly-independent paths through the method, which is determined by the number and complexity of conditional branches. A low cyclomatic complexity generally indicates a method that is easy to understand, test, and maintain.

What is a low cyclomatic complexity? Microsoft recommends keeping cyclomatic complexity below 25.

To be honest, I've found Microsoft's recommendation of a cyclomatic complexity of 25 to be too high. For maintainability and complexity, I've found the ideal method size is between 1 and 10 lines with a cyclomatic complexity between 1 and 5.
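For contrast, here's a minimal, hypothetical method that sits comfortably in that range -- the method and its names are illustrative only. It has a cyclomatic complexity of 3 (one for the method, plus one for each if):

public string DescribeStatus(int status)
{
    if (status == 0)
    {
        return "None";
    }

    if (status > 100)
    {
        return "High";
    }

    return "Low";
}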

Bill Wagner in Effective C#, Second Edition wrote on method size:

Remember that translating your C# code into machine-executable code is a two-step process. The C# compiler generates IL that gets delivered in assemblies. The JIT compiler generates machine code for each method (or group of methods, when inlining is involved), as needed. Small functions make it much easier for the JIT compiler to amortize that cost. Small functions are also more likely to be candidates for inlining. It’s not just smallness: Simpler control flow matters just as much. Fewer control branches inside functions make it easier for the JIT compiler to enregister variables. It’s not just good practice to write clearer code; it’s how you create more efficient code at runtime.

To put cyclomatic complexity in perspective, the following method has a cyclomatic complexity of 12.

public string ComplexityOf12(int status)
{
    var isTrue = true;
    var myString = "Chuck";

    if (isTrue)
    {
        if (isTrue)
        {
            myString = string.Empty;
            isTrue = false;

            for (var index = 0; index < 10; index++)
            {
                isTrue |= Convert.ToBoolean(new Random().Next());
            }

            if (status == 1 || status == 3)
            {
                switch (status)
                {
                    case 3:
                        return "Bye";
                    case 1:
                        if (status % 1 == 0)
                        {
                            myString = "Super";
                        }
                        break;
                }

                return myString;
            }
        }
    }

    if (!isTrue)
    {
        myString = "New";
    }

    switch (status)
    {
        case 300:
            myString = "3001";
            break;
        case 400:
            myString = "4003";
            break;
    }

    return myString;
}

A generally accepted complexity hypothesis postulates a positive correlation exists between complexity and defects.

The previous line is a bit convoluted. In the simplest terms -- keeping code simple reduces your defect rate.

2. Write Testable Code

Studies have shown that writing testable code, without writing the actual tests, lowers the incidence of defects. This is so important and profound it needs repeating: writing testable code, without writing the actual tests, lowers the incidence of defects.

This begs the question, what is testable code?

I define testable code as code that can be tested in isolation. This means all the dependencies can be mocked from a test. An example of a dependency is a database query. In a test, the data is mocked (faked) and an assertion of the expected behavior is made. If the assertion is true, the test passes; if not, it fails.

Writing testable code might sound hard, but it is, in fact, easy when following Inversion of Control (Dependency Injection) and the S.O.L.I.D. principles. You'll be surprised at how easy it is and will wonder why it took you so long to start writing code this way.

3. Code Reviews

One of the most impactful practices a development team can adopt is code reviews.

Code reviews facilitate knowledge sharing between developers. Speaking from experience, openly discussing code with other developers has had the greatest impact on my code-writing skills.

In the book Code Complete, Steve McConnell provides numerous case studies on the benefits of code reviews:

If those numbers don't sway you to adopt code reviews, then you are destined to drift into a black hole while singing Johnny Paycheck's Take This Job and Shove It.

4. Unit Testing

I'll admit, when I am up against a deadline, testing is the first thing to go. But the benefits of testing can't be denied, as the following studies illustrate.

Microsoft performed a study on the effectiveness of unit testing. They found that writing version 2 with automated testing (version 1 had no testing) immediately reduced defects by 20%, but at a cost of an additional 30% in development time.

Another study looked at Test-Driven Development (TDD). It observed an increase in code quality of more than two times compared to similar projects not using TDD. TDD projects took, on average, 15% longer to develop. A side effect of TDD was that the tests served as documentation for the libraries and APIs.

Lastly, in a study on Test Coverage and Post-Verification Defects:

... We find that in both projects the increase in test coverage is associated with decrease in field reported problems when adjusted for the number of prerelease changes...

An Example

The following code has a cyclomatic complexity of 4.

    public void SendUserHadJoinedEmailToAdministrator(DataAccess.Database.Schema.dbo.Agency agency, User savedUser)
    {
        AgencySettingsRepository agencySettingsRepository = new AgencySettingsRepository();
        var agencySettings = agencySettingsRepository.GetById(agency.Id);

        if (agencySettings != null)
        {
            var newAuthAdmin = agencySettings.NewUserAuthorizationContact;

            if (newAuthAdmin.IsNotNull())
            {
                EmailService emailService = new EmailService();

                emailService.SendTemplate(new[] { newAuthAdmin.Email }, GroverConstants.EmailTemplate.NewUserAdminNotification, s =>
                {
                    s.Add(new EmailToken { Token = "Domain", Value = _settings.Domain });
                    s.Add(new EmailToken
                    {
                        Token = "Subject",
                        Value = string.Format("New User {0} has joined {1} on myGrover.", savedUser.FullName(), agency.Name)
                    });
                    s.Add(new EmailToken { Token = "Name", Value = savedUser.FullName() });

                    return s;
                });
            }
        }
    }
Let's examine the testability of the above code.

Is this simple code?

Yes, it is, the cyclomatic complexity is below 5.

Are there any dependencies?

Yes. There are two services: AgencySettingsRepository and EmailService.

Are the services mockable?

No, their creation is hidden within the method.

Is the code testable?

No, this code isn't testable because we can't mock AgencySettingsRepository and EmailService.

Example of Refactored Code

How can we make this code testable?

We inject (using constructor injection) AgencySettingsRepository and EmailService as dependencies. This allows us to mock them from a test and test in isolation.

Below is the refactored version.

Notice how the services are injected into the constructor. This allows us to control which implementation is passed into the SendEmail constructor. It's then easy to pass dummy data and intercept the service method calls.

public class SendEmail
{
    private IAgencySettingsRepository _agencySettingsRepository;
    private IEmailService _emailService;

    public SendEmail(IAgencySettingsRepository agencySettingsRepository, IEmailService emailService)
    {
        _agencySettingsRepository = agencySettingsRepository;
        _emailService = emailService;
    }

    public void SendUserHadJoinedEmailToAdministrator(DataAccess.Database.Schema.dbo.Agency agency, User savedUser)
    {
        var agencySettings = _agencySettingsRepository.GetById(agency.Id);

        if (agencySettings != null)
        {
            var newAuthAdmin = agencySettings.NewUserAuthorizationContact;

            if (newAuthAdmin.IsNotNull())
            {
                _emailService.SendTemplate(new[] { newAuthAdmin.Email },
                    GroverConstants.EmailTemplate.NewUserAdminNotification, s =>
                    {
                        s.Add(new EmailToken { Token = "Domain", Value = _settings.Domain });
                        s.Add(new EmailToken
                        {
                            Token = "Subject",
                            Value = string.Format("New User {0} has joined {1} on myGrover.", savedUser.FullName(), agency.Name)
                        });
                        s.Add(new EmailToken { Token = "Name", Value = savedUser.FullName() });

                        return s;
                    });
            }
        }
    }
}

Testing Example

Below is an example of testing in isolation. We are using the mocking framework FakeItEasy.

    public void TestEmailService()
    {
        //Using the FakeItEasy mocking framework
        var repository = A.Fake<IAgencySettingsRepository>();
        var service = A.Fake<IEmailService>();

        var agency = new Agency { Name = "Acme Inc." };
        var user = new User { FirstName = "Chuck", LastName = "Conway", Email = "" };

        var sendEmail = new SendEmail(repository, service);
        sendEmail.SendUserHadJoinedEmailToAdministrator(agency, user);

        //An exception is thrown when SendTemplate is not called.
        //The argument types below are inferred from the SendTemplate call in the example above.
        A.CallTo(() => service.SendTemplate(
            A<string[]>.Ignored,
            A<GroverConstants.EmailTemplate>.Ignored,
            A<Func<List<EmailToken>, List<EmailToken>>>.Ignored)).MustHaveHappened();
    }



Writing defect-resistant code is surprisingly easy. Don't get me wrong, you'll never write defect-free code (if you figure out how, let me know!), but by following the 4 practices outlined in this article, you'll see a decrease in the defects found in your code.

To recap: Writing Simple Code means keeping the cyclomatic complexity around 5 and the method size small. Writing Testable Code is easily achieved by following Inversion of Control and the S.O.L.I.D. principles. Code Reviews help you and the team understand the domain and the code you've written -- just having to explain your code will reveal issues. And lastly, Unit Testing can drastically improve your code quality and provide documentation for future developers.

What is Docker?


What is Docker? To be honest, I'm still working that out. But I'll tell you what I know.

Docker is a way to containerize your application. Why containerize my application? Because then it simply plugs into any Docker instance on any server.

Consider if each electrical device had a unique power interface. Supplying power to each device would be a major event. Much like deploying applications in today's software environment. Docker aims to change that.

By putting your application in a Docker container, your application runs anywhere a container runs. And a container runs on any server with Docker installed. Simple as that. No configurations, no installations -- it just works.

This is achieved by configuring your application to run in a Docker container. A Docker container is a mini, stripped-down Operating System (OS) environment -- OS features are added as needed. As in a full OS, your application is configured to run on the OS. It might be as simple as copying files, or it could involve sophisticated configuration. Here's the best part: once your application is configured to run in a Docker container, that's it.
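To make that concrete, here's a hypothetical, minimal Dockerfile sketch for a Nodejs app -- the node:0.12 base image, the app.js entry point, and the port are placeholder assumptions, not from a real project:

# Start from a Nodejs base image, copy the app in, and declare how to run it.
FROM node:0.12
COPY . /app
WORKDIR /app
RUN npm install
EXPOSE 9001
CMD ["node", "app.js"]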

This sounds great -- when can I start using it? The good news is, if you're on Linux, it's ready to use now. If you are on Windows, it's not quite ready. Docker does not run natively on Windows. There is a package that allows Docker to run on Windows, but it involves running Docker in a Virtual Machine (VM)... it's not recommended. Anyway, Windows applications won't run in Docker because Docker containers share a Linux kernel. But don't fret, Microsoft is hard at work on a container technology that is slated for release with Windows Server 2016.

To learn more about Windows Server 2016 and Windows Containers read this article.

I'm excited about containers. I see containers revolutionizing software and we've only scratched the surface with what's possible.

Removing Nuget Elements from Solutions and Projects

David Ebbo wrote up a great piece on removing NuGet elements from solutions and projects. I won't repeat his words, but I encourage you to read his post.

I do want to capture Owen Johnson's PowerShell script, which removes unused NuGet elements from project and solution files.


# Regex Patterns for Really Bad Things!
$listOfBadStuff = @(
    #sln regex
    "\s*(\.nuget\\NuGet\.(exe|targets)) = \1",
    #*proj regexes
    "\s*<Import Project=""\$\(SolutionDir\)\\\.nuget\\NuGet\.targets"".*?/>",
    "\s*<Target Name=""EnsureNuGetPackageBuildImports"" BeforeTargets=""PrepareForBuild"">(.|\n)*?</Target>"
)

# Delete NuGet.exe and NuGet.targets
ls -Recurse -include 'NuGet.exe','NuGet.targets' |
foreach {
    remove-item $_ -recurse -force
    write-host deleted $_
}

# Fix Project and Solution Files to reverse damage done by "Enable NuGet Package Restore"
ls -Recurse -include *.csproj, *.sln, *.fsproj, *.vbproj, *.wixproj |
foreach {
    $content = cat $_.FullName | Out-String
    $origContent = $content
    foreach ($badStuff in $listOfBadStuff) {
        $content = $content -replace $badStuff, ""
    }
    if ($origContent -ne $content) {
        $content | out-file -encoding "UTF8" $_.FullName
        write-host messed with $_.Name
    }
}

Setting Proxies for Git, NPM, Bower and Webdriver

Companies often set up proxy servers to filter their traffic. The intention is to keep wayward employees from 'accidentally' visiting sites that do not align with the professional environment.

To access the internet, everything must pass through the proxy. Applications like Git, NPM, Bower, and Webdriver aren't configured out of the box to use the company's proxy. When installed, they simply don't work.

Below are instructions for setting the proxy for Git, NPM, Bower, and Webdriver.


Create a .bowerrc file in your home (%USERPROFILE%) folder. Add these contents:
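A minimal sketch of what that file might contain -- the proxy address below is a placeholder, so substitute your company's proxy host and port:

{
  "proxy": "http://proxy.example.com:8080",
  "https-proxy": "http://proxy.example.com:8080"
}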



git config --global http.proxy
git config --global https.proxy

set http_proxy=
set https_proxy=


npm config set proxy
npm config set https-proxy

npm config set registry


//doing an update
webdriver-manager update --proxy=""
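For reference, here is how the commands above look with a placeholder proxy address filled in (http://proxy.example.com:8080 is illustrative only -- use your company's proxy):

Git:

git config --global http.proxy http://proxy.example.com:8080
git config --global https.proxy http://proxy.example.com:8080

Windows environment variables (picked up by many command-line tools):

set http_proxy=http://proxy.example.com:8080
set https_proxy=http://proxy.example.com:8080

NPM:

npm config set proxy http://proxy.example.com:8080
npm config set https-proxy http://proxy.example.com:8080

Webdriver:

webdriver-manager update --proxy="http://proxy.example.com:8080"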

When Using Frameworks -- Sometimes Ignorance is Bliss

In software engineering, there is a prevailing idea that an engineer should only use a framework when he or she understands the internal workings. This is a fallacy.

Why is it that we must know the internal workings -- do the details matter that much? Some might say ignorance is bliss.

Car Engine

Let's examine the engine of a car:


How many really know how the engine works?

Can you tell me why it’s called a 4 stroke engine?

What does each stroke do?

What’s the difference between a 4 stroke engine and a 2 stroke engine?


And yet we still drive our cars without any thought on “how” the car is getting us to our destination.

We interface with the car using the steering wheel, the gear shifter, the gas pedal, and the brakes.

Who cares how it works, as long as it gets us to our destination. When the car breaks down we take it to an expert.

The Core Competency of a Business

In business, a company has specialized knowledge that allows it to be competitive. This is referred to as a company's core competency.

A core competency can be a process or a product.

To stay competitive, a company must tirelessly improve its core competency. Using resources for activities other than supporting the core competency weakens the company's competitive advantage, which opens a window of opportunity for competitors to overtake it.

This idea is best illustrated with an example.


Apple Logo

Apple is known for its simplicity and its beautiful products. You'd think this would be easy to replicate, but it's not -- just ask Samsung, HTC, and Microsoft.

Why have these companies failed? Because simple is hard and Apple is expert in simple.

The Core Competency of a Person

Core competency can apply to people too.

What sets you apart from others?

To develop your core competency, you've had to focus rigorously on one area, sometimes for years, gaining insights and knowledge that set you apart from others.

As in a business, to maintain your competitive advantage you must continually hone your core competency.

Using Small Pieces


A software engineer is no different from a company or any other professional. We must pick and choose what we learn to stay aligned with our core competency.

Understanding the internals of every framework we use is not practical and is time-consuming. I expect the framework's author to be an expert in the framework's domain; therefore, I don't need to know its internal workings.

Isn't this the point of software -- to use black-boxed bits of functionality to produce a larger, more complex work? I believe it is.

In the end, it comes down to focus and time, both of which are limited.

8 Must-Have Extensions for Brackets

Everyone has a favorite editor. We each have reasons for choosing our editor. I've tried them all, and I've found that Brackets best suits me. Unfortunately, there are gaps in Brackets' functionality. With a robust ecosystem of extensions, I've found 8 extensions that complete the editor.

Here is a list of my 8 must-have extensions.


Emmet

For anyone working with CSS and HTML, Emmet is a must-have. I wrote about it earlier this year. It removes all the unnecessary typing from creating HTML and CSS. - Emmet


Autosave Files on Window Blur

It's officially called "Autosave Files on Window Blur". This extension saves all the changed files once you've navigated away from Brackets. It works similarly to how WebStorm saves its files.


Beautify

You'd think this wasn't a big deal -- at least that's what I thought. But it does a great job! Give it a try. You'll be surprised how useful this plugin is. - Beautify

Brackets Git

This is the best git integration I have ever used. And I've used Git in WebStorm, Sublime Text, and Visual Studio. So that's saying a lot. It's functional and aesthetically pleasing; there isn't much else to ask for. - Brackets Git

Brackets Icons

You’d be surprised how much a few good icons can spruce up an ole editor. - Brackets Icons

Documents Toolbar

In my opinion, this is a missing feature of Brackets. This extension completes the editor. - Documents Toolbar


Todo

This extension summarizes all the TODO comments in the file. It also supports NOTE, FIXME, CHANGES, and FUTURE. More can be added if this list is too limiting. - Todo

Quick Search

This extension automatically highlights occurrences of the selected word. Much like Notepad++ and Sublime Text. - Quick Search

Right Click Extended

I found the lack of right-click cut and paste annoying. In Windows, right-click cut and paste is the bread and butter of my workflow. Out of the box, Brackets is missing right-click cut and paste functionality. This extension saves the day by adding it. - Right-Click Extended

Setting up Continuous Integration on Ubuntu with Nodejs

I went through blood, sweat and tears to bring this to you. I suffered the scorching heat of Death Valley and summited the peaks of Mount McKinley. I’ve sacrificed much.

Much of the content shared in this post is not my original work. Where I can, I link back to the original work.

This article assumes you can get around Linux.

I could not find a comprehensive guide on hosting and managing Nodejs applications on Ubuntu in a production capacity, so I've pulled together multiple articles on the subject. By the end of this article, I hope you'll be able to set up your own Ubuntu server and have Nodejs deploying via a continuous integration server.


I am using TeamCity on Windows, which deploys code from GitHub to Ubuntu hosted on AWS.


For this article I used the following technologies:

Ubuntu (hosted on AWS)
Nodejs
Nginx
PM2
Plink
TeamCity (running on Windows)
GitHub

Setting up Ubuntu

I’m not going into detail here. Amazon Web Services (AWS) makes this pretty easy to do. It doesn’t matter where it’s at or if it’s on your own server.

I encountered a few gotchas. First, make sure port 80 is open. I made the foolish mistake of trying to connect with port 80 closed. Once I discovered my mistake, I felt like a rhinoceros's ass.

Installing Nodejs From Source

Nodejs is a server technology built on Google's V8 JavaScript engine. Since its release in 2009, it has become widely popular.

The following instructions originally came from a Digital Ocean post.

You always have the option to install Nodejs from apt-get, but it will be a few versions behind. To get the latest bits, install Nodejs from source.

By the end of this section, we will have downloaded the latest stable version of Nodejs (as of this article), built the source, and installed Nodejs.

Log into your server. We’ll start by updating the package lists.

sudo apt-get update

I'm also suggesting that you upgrade all the packages. This is not necessary for Nodejs, but it is good practice to keep your server updated.

sudo apt-get upgrade

Your server is all up to date. It's time to download the source.

cd ~

As of this writing, 0.12.7 is the latest stable release of Nodejs. Check the Nodejs site for the latest version.
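Download the source archive -- a sketch assuming version 0.12.7; adjust the version number to match the latest release:

wget http://nodejs.org/dist/v0.12.7/node-v0.12.7.tar.gz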


Extract the archive you’ve downloaded.

tar xvf node-v*

Move into the newly created directory

cd node-v*

Configure and build Nodejs.
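The standard steps to configure and compile the source are:

./configure

make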



Install Nodejs

sudo make install

To remove the downloaded archive and the extracted files (this is optional):

cd ~

rm -rf node-v*

Congrats! Nodejs is now installed! And it wasn’t very hard.

Setting up Nginx


Nodejs can act as a web server, but it's not what I would want to expose to the world. An industrial-strength, hardened, feature-rich web server is better suited for this. I've turned to Nginx for this task.

It's a mature web server with the features we need. To run more than one instance of Nodejs, we'll need port forwarding.

You might be thinking, why do we need more than one instance of Nodejs running at the same time? That's a fair question... In my scenario, I have one server and I need to run DEV, QA, and PROD on the same machine. Yeah, I know, not ideal, but I don't want to stand up 3 servers, one for each environment.

To start let’s install Nginx

sudo -s

add-apt-repository ppa:nginx/stable

apt-get update 

apt-get install nginx

Once Nginx has successfully installed, we need to set up the domains. I'm going to assume you'll want each of your sites on its own domain/subdomain. If you don't, and want to use different sub-folders instead, that's doable and very easy to do; I am not going to cover that scenario here, as there is a ton of documentation on how to do that. There is very little documentation on setting up different domains and port forwarding to the corresponding Nodejs instances, which is what I'll be covering.

Now that Nginx is installed, create a configuration file for your domain at /etc/nginx/sites-available/

sudo nano /etc/nginx/sites-available/

Add the following configuration to your newly created file

# the IP(s) on which your node server is running. I chose port 9001.
upstream app_myapp1 {
    server 127.0.0.1:9001;
    keepalive 8;
}

# the nginx server instance
server {
    listen 80;
    server_name yourdomain.com;
    access_log /var/log/nginx/yourdomain.log;

    # pass the request to the node.js server with the correct headers
    # and much more can be added, see nginx config options
    location / {
        proxy_http_version 1.1;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;

        proxy_pass http://app_myapp1;
    }
}


Make sure you replace "yourdomain.com" with your actual domain. Save and exit your editor.

Create a symbolic link to this file in the sites-enabled directory.

cd /etc/nginx/sites-enabled/ 

ln -s /etc/nginx/sites-available/

To test everything is working correctly, create a simple node app and save it to /var/www/ and run it.

Here is a simple nodejs app if you don’t have one handy.

var http = require('http');

http.createServer(function (req, res) {
    res.writeHead(200, {'Content-Type': 'text/plain'});
    res.end('Hello World\n');
}).listen(9001, "");

console.log('Server running at');

Let’s restart Nginx.

sudo /etc/init.d/nginx restart

Don’t forget to start your Nodejs instance, if you haven’t already.

cd /var/www/yourdomain/ && node app.js

If all is working correctly, when you navigate to your domain, you'll see "Hello World."

To add another domain for a different Nodejs instance, you need to repeat the steps above. Specifically, you'll need to change the upstream name, the port, and the domain in your new Nginx config file. The proxy_pass address must match the upstream name in the Nginx config file. Look at the upstream name and the proxy_pass value and you'll see what I mean -- see the sketch below.
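For illustration, a second site's config might look like this -- the app_myapp2 name, port 9002, and anotherdomain.com are placeholders:

upstream app_myapp2 {
    server 127.0.0.1:9002;
    keepalive 8;
}

server {
    listen 80;
    server_name anotherdomain.com;
    access_log /var/log/nginx/anotherdomain.log;

    location / {
        proxy_http_version 1.1;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;

        proxy_pass http://app_myapp2;
    }
}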

To recap, we've installed Nodejs from source and we just finished installing Nginx. We've configured and tested port forwarding with Nginx and Nodejs.

Installing PM2

You might be asking "What is PM2?" as I did when I first heard about it. PM2 is a process manager for Nodejs applications. Nodejs doesn't come with much; this is part of its appeal. The downside is, well, you have to provide the layers in front of it. PM2 is one of those layers.

PM2 manages the life of the Nodejs process. When it's terminated, PM2 restarts it. When the server reboots, PM2 restarts all the Nodejs processes for you. It also has an extensive development lifecycle process, which we won't be covering here; I encourage you to read its well-written documentation.

Assuming you are logged into the terminal, we'll start by installing PM2 via npm, the Nodejs package manager. It was installed when you installed Nodejs.

sudo npm install pm2 -g

That’s it. PM2 is now installed.

Using PM2

PM2 is easy to use.

The hello world for PM2 is simple.

pm2 start hello.js

This adds your application to PM2’s process list. This list is output each time an application is started.

In this example there are two Nodejs applications running. One called and api.pre.

PM2 automatically assigns the name of the app to the “App name” in the list.

Out of the box, PM2 does not configure itself to start up when the server restarts. The command to enable this is different for the different flavors of Linux. I'm running Ubuntu, so I'll execute the Ubuntu command.

pm2 startup ubuntu

We are not quite done yet. We have to add a path to the PM2 binary. Fortunately, the output of the previous command tells us how to do that.


[PM2] You have to run this command as root
[PM2] Execute the following command :
[PM2] sudo env PATH=$PATH:/usr/local/bin pm2 startup ubuntu -u sammy
Run the command that was generated (similar to the highlighted output above) to set PM2 up to start on boot (use the command from your own output):

 sudo env PATH=$PATH:/usr/local/bin pm2 startup ubuntu -u sammy

Examples of other PM2 usages (optional)

Stopping an application by the app name

pm2 stop example

Restarting by the app name

pm2 restart example

List of current applications managed by PM2

pm2 list

Specifying a name when starting a process: if you don't, PM2 uses the javascript file name as the app name. This might not work for you. Here's how to specify the name.

pm2 start www.js --name api.pre

That should be enough to get you going with PM2. To learn more about PM2’s capabilities, visit the GitHub Repo.

Setting up and Using Plink

You are probably thinking, "What in the name of Betsey's cow is Plink?" At least that's what I thought. I'm still not sure what to think of it. I've never seen anything like it.

Have you ever watched the movie Wall-E? Wall-E pulls out a spork. First he tries to put it with the forks, but it doesn't fit, and then he tries to put it with the spoons, but it doesn't fit either. Well, that's Plink. It's a cross between Putty (SSH) and the Windows command line.

Plink basically allows you to run bash commands via the Windows command line while logged into a Linux (and probably Unix) shell.

Start by downloading Plink. It’s just an executable. I recommend putting it in C:/Program Files (x86)/Plink. We’ll need to reference it later.

If you are running an Ubuntu instance in AWS, you'll already have a cert set up for Putty (I'm assuming you are using Putty).

If you are not, you’ll need to ensure you have a compatible ssh cert for Ubuntu in AWS.

If you are not using AWS, you can specify the username and password on the command line and won't have to worry about the ssh certs.

Here is an example command line that connects to Ubuntu with Plink.

"C:\Program Files (x86)\Plink\plink.exe" -ssh ubuntu@xx.xx.xx.xx -i "C:\Program Files (x86)\Plink\ssh certs\aws-ubuntu.ppk" 

This might be getting ahead of ourselves, but to run an ssh script on the Ubuntu server we add the complete path to the end of the Plink command.

"C:\Program Files (x86)\Plink\plink.exe" -ssh ubuntu@xx.xx.xx.xx -i "C:\Program Files (x86)\Plink\ssh certs\aws-ubuntu.ppk" /var/www/

And that, dear reader, is Plink.

Understanding NODE_ENV

NODE_ENV is an environment variable made popular by expressjs. Before you start the node instance, set NODE_ENV to the environment name. In the code, you can load specific files based on the environment.

Setting NODE_ENV
Linux & Mac: export NODE_ENV=PROD
Windows: set NODE_ENV=PROD

The environment variable is retrieved inside a Nodejs instance by using process.env.NODE_ENV.


var environment = process.env.NODE_ENV

or with expressjs
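A minimal sketch, assuming an Express app instance named app:

var environment = app.get('env');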


*Note: app.get(‘env’) defaults to “development”.

Bringing it all together

Nodejs, PM2, Nginx and Plink are installed and hopefully working. We now need to bring all these pieces together into a continuous integration solution.

Clone your GitHub repository into /var/www/. Although SSH is more secure than HTTPS, I recommend using HTTPS. I know this isn't ideal, but I couldn't get Plink working with GitHub over SSH on Ubuntu. Without going into too much detail, Plink and GitHub SSH cert formats are different, and calling GitHub via Plink through SSH didn't work. If you can figure out the issue, let me know!

To make the GitHub pull handsfree, the username and password will need to be a part of the origin url.

Here’s how you set your origin url. Of course you’ll need to substitute your information where appropriate.

git remote set-url origin
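For illustration, the general form looks like this -- the username, password, and repository below are placeholders:

git remote set-url origin https://username:password@github.com/youruser/yourrepo.git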

Clone your repository.

cd /var/www/
git clone .

Note that if this directory is not completely empty (including hidden files), Git will not clone the repo into it.

To find hidden files in the directory run this command

ls -a

For the glue, we are using a shell script. Here is a copy of my script.


echo "> Current PM2 Apps"
pm2 list

echo "> Stopping running API"
pm2 stop

echo "> Set Environment variable."

echo "> Changing directory to"
cd /var/www/

echo "> Listing the contents of the directory."
ls -a

echo "> Remove untracked directories in addition to untracked files."
git clean -f -d

echo "> Pull updates from Github."
git pull

echo "> Install npm updates."
sudo npm install

echo "> Transpile the ECMAScript 2015 code"
gulp babel

echo "> Restart the API"
pm2 start transpiled/www.js --name

echo "> List folder directories"
ls -a

echo "> All done."

I launch this shell script with TeamCity, but you can launch with anything.

Here is the raw command.

"C:\Program Files (x86)\Plink\plink.exe" -ssh ubuntu@xx.xx.xx.xx -i "C:\Program Files (x86)\Plink\ssh certs\aws-ubuntu.ppk" /var/www/

That’s it.

In Closing

This process has some rough edges... I hope to polish those edges in time. If you have suggestions please leave them in the comments.

This document is in my GitHub Repository. Technologies change, so if you find an error please update it. I will then update this post.

Removing Large Files From Your Git Repository

I've resisted moving my projects onto GitHub. When GitHub first opened its doors, it surprised me. Why would anyone build a UI on top of version control? It just seemed like such a simple idea, one that had already been done many times over. So what made GitHub different?

GitHub Logo

As it turns out, GitHub is different. They have a wonderful product.

I expected the switch to be uneventful, but things don't always go as we expect. My previous git provider didn’t have filesize restrictions. During the push into GitHub, I received a warning at 50 megs. At 100 megs, it turned into a roadblock.

Luckily, GitHub has detailed instructions on how to remove the large files.

First, if it’s a pending check-in, you can simply remove the file from the cache.

git rm --cached giant_file
# Stage our giant file for removal, but leave it on disk

Commit the change.

git commit --amend -CHEAD
# Amend the previous commit with your change
# Simply making a new commit won't work, as you need
# to remove the file from the unpushed history as well

Push your changes to GitHub.

git push
# Push our rewritten, smaller commit

If it's not in a pending check-in but is part of your repo's history, things get interesting. There is a utility, BFG Repo-Cleaner, that makes this process a breeze.

The command (from GitHub documentation).

bfg --strip-blobs-bigger-than 50M
# Git history will be cleaned - files in your latest commit will *not* be touched

The GitHub documentation must assume you have BFG installed, because the command didn't work for me.

I downloaded the jar file from and ran it. Don’t forget to be in the root of your git repository.

java -jar bfg.jar --strip-blobs-bigger-than 50M

Here is my output

Scanning packfile for large blobs: 35170
Scanning packfile for large blobs completed in 251 ms.
Found 10 blob ids for large blobs - biggest=125276291 smallest=53640151
Total size (unpacked)=958626718
Found 1691 objects to protect
Found 1 tag-pointing refs : refs/tags/v0.1
Found 7 commit-pointing refs : HEAD, refs/heads/dev, refs/heads/master, ...

Protected commits

These are your protected commits, and so their contents will NOT be altered:

 * commit a99dbf81 (protected by 'HEAD')


Found 1093 commits
Cleaning commits:   100% (1093/1093)
Cleaning commits completed in 8,427 ms.

Updating 6 Refs

Ref  Before After
refs/heads/dev | 02eeab40 | 8ad272d3
refs/heads/master  | a99dbf81 | 8008478b
refs/heads/prod| 15f1558b | dc52efeb
refs/heads/qa  | 15f1558b | dc52efeb
refs/remotes/origin/master | 0c71d31f | d992278d
refs/tags/v0.1 | fc78e278 | ba078ff6

Updating references:100% (6/6)
...Ref update completed in 45 ms.

Commit Tree-Dirt History

Earliest  Latest
|  |

D = dirty commits (file tree fixed)
m = modified commits (commit message or parents changed)
. = clean commits (no changes to file tree)

Before After
First modified commit | 71ab4035 | 5963444b
Last dirty commit | 48c18598 | d7000b5a

Deleted files

Filename  Git id
---------------------------------------------------------------- | 2ace978f (117.2 MB) | 3fb67bc6 (117.8 MB) | edc34fe0 (118.3 MB) | cf8b9f19 (118.5 MB) | a41ce08a (119.5 MB)
Grover_be2.mdb  | 129a7cc8 (61.4 MB)   | d730b329 (62.6 MB)
Unify | 5fca437c (53.2 MB), 728b06a4 (51.2 MB)  | ebe5f6cf (94.6 MB)

In total, 3191 object ids were changed. Full details are logged here:


BFG run is complete! When ready, run: git reflog expire --expire=now --all && git gc --prune=now --aggressive

Has the BFG saved you time?  Support the BFG on BountySource:


*image reference

This is a response of sorts to this thoughtful post:

Here is a short summary of the post:

The author spent many years as a programmer. It was difficult to balance life and work. He was always running to the next engagement. Stuck in the learning rat race of software engineering. His job consumed him. He didn't have time for anything except the job. To find balance, he pivoted his career into a less demanding field and achieved balance between his job and his life.

I understand his pain. Much of my energy is devoted to learning. New technologies are getting more diverse and breeding innovation, which means even more learning.

One can easily get consumed by programming. In many ways it's crack for the brain.

You are in a perpetual state of sharpening your sword. The programmer who stops is relegated to obsolescence in a few short years. In extreme situations, they will find themselves unemployable.

So why do I program? Because I love to do it, and I get paid to do what I love. Like the author of the post, I've had to learn to balance my career and my personal time.

Some will disagree, but for me programming is an art. There is no limit to how skilled I can become. Applications are my canvas, programming is the medium I use to express myself. It's how I create.

The Mind State of a Software Engineer

Have patience.

I'll wait

Coding is discovery. Coding is failing. Be ok with this.


*image reference

Don't blame the framework. It's more probable that it's your code. Accept this fallibility.

*image reference

Know when to walk away. Your mind is a wonderful tool; even at rest it's working on unsolved problems. Rest, and let your mind do its work.


*image reference

Be comfortable not knowing. Software engineering is a vast ocean of knowledge. Someone will always know more than you. The sooner you are OK with this, the sooner you will recognize the opportunity to learn something new.

Ocean Sailing

*image reference

Anger and frustration don't fix code. Take a break; nothing can be accomplished in this state.


*image reference