Chuck Conway

Setting Proxies for Git, NPM, Bower and Webdriver

Companies often set up proxy servers to filter their traffic. The intention is to keep wayward employees from 'accidentally' visiting sites that don't align with a professional environment.

To access the internet, everything must pass through the proxy. Applications like Git, NPM, Bower, and Webdriver aren't configured out of the box to use the company's proxy. When installed, they simply don't work.

Below are instructions for setting the proxy for Git, NPM, Bower, and Webdriver.


Create a .bowerrc file in your home (%USERPROFILE%) folder. Add these contents:
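A minimal .bowerrc with proxy settings looks something like this (the proxy address is a placeholder -- substitute your company's proxy server and port):

{
  "proxy": "http://proxyserver:port",
  "https-proxy": "http://proxyserver:port"
}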



git config --global http.proxy http://proxyserver:port
git config --global https.proxy http://proxyserver:port

set http_proxy=http://proxyserver:port
set https_proxy=http://proxyserver:port


npm config set proxy http://proxyserver:port
npm config set https-proxy http://proxyserver:port

npm config set registry https://registry.npmjs.org/


// doing an update
webdriver-manager update --proxy="http://proxyserver:port"

When Using Frameworks -- Sometimes Ignorance is Bliss

In software engineering, there is a prevailing idea that an engineer should only use a framework when he or she understands the internal workings. This is a fallacy.

Why is it that we must know the internal workings -- do the details matter that much? Some might say ignorance is bliss.

Car Engine

Let's examine the engine of a car:


How many really know how the engine works?

Can you tell me why it’s called a 4 stroke engine?

What does each stroke do?

What’s the difference between a 4 stroke engine and a 2 stroke engine?


And yet we still drive our cars without any thought about "how" the car is getting us to our destination.

We interface with the car using the steering wheel, the gear shifter, the gas pedal, and the brakes.

Who cares how it works, as long as it gets us to our destination? When the car breaks down, we take it to an expert.

The Core Competency of a Business

In business, a company has specialized knowledge that allows it to be competitive. This is referred to as a company's core competency.

A core competency can be a process or a product.

To stay competitive, a company must tirelessly improve its core competency. Using resources for activities other than supporting the core competency weakens the company's competitive advantage, which opens a window of opportunity for competitors to overtake it.

This idea is best illustrated with an example.


Apple Logo

Apple is known for its simplicity and its beautiful products. You'd think this would be easy to replicate, but it's not -- just ask Samsung, HTC, and Microsoft.

Why have these companies failed? Because simple is hard, and Apple is an expert in simple.

The Core Competency of a Person

Core competency can apply to people too.

What sets you apart from others?

To have developed your core competency, you've had to rigorously focus on one area, sometimes for years, gaining insights and knowledge that set you apart from others.

As in a business, to maintain your competitive advantage you must continually hone your core competency.

Using Small Pieces


A software engineer is no different from a company or any other professional. We must pick and choose what we learn to stay aligned with our core competency.

Understanding the internals of every framework we use is neither practical nor a good use of time. I expect the framework's author to be an expert in the framework's domain; therefore, I don't need to know its internal workings.

Isn't this the point of software -- to use black-boxed bits of functionality to produce a larger, more complex work? I believe it is.

In the end, it comes down to focus and time, both of which are limited.

8 Must Have Extensions for Brackets

Everyone has a favorite editor, and we each have our reasons for choosing it. I've tried them all, and I've found that Brackets best suits me. Unfortunately, there are gaps in Brackets' functionality. With a robust ecosystem of extensions available, I've found 8 that complete the editor.

Here is a list of my 8 must have extensions.


Emmet

For anyone working with CSS and HTML, Emmet is a must-have. I wrote about it earlier this year. It removes all the unnecessary typing involved in creating HTML and CSS.


It's officially called "Autosave Files on Window Blur". This extension saves all the changed files once you've navigated away from Brackets. It works similarly to how WebStorm saves its files.


Beautify

You'd think this wasn't a big deal -- at least that's what I thought -- but it does a great job! Give it a try. You'll be surprised how useful this plugin is. - Beautify

Brackets Git

This is the best Git integration I have ever used. And I've used Git in WebStorm, Sublime Text, and Visual Studio, so that's saying a lot. It's functional and aesthetically pleasing; there isn't much else to ask for. - Brackets Git

Brackets Icons

You’d be surprised how much a few good icons can spruce up an ole editor. - Brackets Icons

Documents Toolbar

In my opinion, this is a missing feature of Brackets. This extension completes the editor. - Documents Toolbar


Todo

This summarizes all the TODO comments in the file. It also supports NOTE, FIXME, CHANGES, and FUTURE. More can be added if this list is too limiting. - Todo

Markdown Preview

Ok, ok, this last one isn't a must-have, but I find it useful when editing readmes or writing a post for my blog. - Markdown Preview

Setting up Continuous Integration on Ubuntu with Nodejs

I went through blood, sweat and tears to bring this to you. I suffered the scorching heat of Death Valley and summited the peaks of Mount McKinley. I’ve sacrificed much.

Much of the content shared in this post is not my original work. Where I can, I link back to the original work.

This article assumes you can get around Linux.

I could not find a comprehensive guide on hosting and managing Nodejs applications on Ubuntu in a production capacity, so I've pulled together multiple articles on the subject. By the end of this article, I hope you'll be able to set up your own Ubuntu server and have Nodejs deploying via a continuous integration server.


I am using TeamCity on Windows, which deploys code from GitHub to Ubuntu hosted on AWS.


For this article I used the following technologies: Ubuntu on AWS, Nodejs, Nginx, PM2, Plink, GitHub, and TeamCity.

Setting up Ubuntu

I'm not going into detail here. Amazon Web Services (AWS) makes this pretty easy to do, but it doesn't matter where the server is hosted or whether it's your own hardware.

I encountered a few gotchas. First, make sure port 80 is open. I made the foolish mistake of trying to connect with port 80 closed. Once I discovered my mistake, I felt like a rhinoceros's ass.

Installing Nodejs From Source

Nodejs is a server technology using Google's V8 javascript engine. Since its release in 2009, it has become widely popular.

The following instructions originally came from a Digital Ocean post.

You always have the option to install Nodejs from apt-get, but it will be a few versions behind. To get the latest bits, install Nodejs from source.

At the end of this section, we will have downloaded the latest stable version of Nodejs (as of this writing), built the source, and installed it.

Log into your server. We’ll start by updating the package lists.

sudo apt-get update

I also suggest upgrading all the packages. This is not necessary for Nodejs, but it is good practice to keep your server updated.

sudo apt-get upgrade

Your server is now up to date. It's time to download the source.

cd ~

As of this writing, 0.12.7 is the latest stable release of Nodejs. Check the Nodejs website for the latest version.
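To download the source archive, a wget command along these lines works (v0.12.7 is assumed here to match the version above -- substitute the current version number):

wget https://nodejs.org/dist/v0.12.7/node-v0.12.7.tar.gz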


Extract the archive you’ve downloaded.

tar xvf node-v*

Move into the newly created directory

cd node-v*

Configure and build Nodejs.
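A sketch of the usual source-build steps (the make step can take several minutes):

./configure
make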



Install Nodejs

sudo make install

To clean up, remove the downloaded and extracted files. Of course, this is optional.

cd ~

rm -rf node-v*

Congrats! Nodejs is now installed! And it wasn’t very hard.

Setting up Nginx


Nodejs can act as a web server, but it's not what I would want to expose to the world. An industrial, hardened, feature-rich web server is better suited for this task. I've turned to Nginx.

It's a mature web server with the features we need. To run more than one instance of Nodejs, we'll need to use port forwarding.

You might be thinking, why do we need more than one instance of Nodejs running at the same time? That's a fair question... In my scenario, I have one server and I need to run DEV, QA, and PROD on the same machine. Yeah, I know it's not ideal, but I don't want to stand up three servers, one for each environment.

To start, let's install Nginx.

sudo -s

add-apt-repository ppa:nginx/stable

apt-get update 

apt-get install nginx

Once Nginx has successfully installed, we need to set up the domains. I'm going to assume you'll want each of your sites on its own domain/subdomain. If you don't, and want to use different sub-folders instead, that's doable and very easy to do, but I am not going to cover that scenario here. There is a ton of documentation on how to do that. There is very little documentation on setting up different domains and port forwarding to the corresponding Nodejs instances, so that is what I'll be covering.

Now that Nginx is installed, create a file for your domain at /etc/nginx/sites-available/

sudo nano /etc/nginx/sites-available/

Add the following configuration to your newly created file

# the IP(s) on which your node server is running. I chose port 9001.
upstream app_myapp1 {
    server 127.0.0.1:9001;
    keepalive 8;
}

# the nginx server instance
server {
    listen 80;
    server_name yourdomain.com;
    access_log /var/log/nginx/yourdomain.log;

    # pass the request to the node.js server with the correct headers
    # and much more can be added, see nginx config options
    location / {
        proxy_http_version 1.1;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;

        proxy_pass http://app_myapp1;
    }
}


Make sure you replace "yourdomain.com" with your actual domain. Save and exit your editor.

Create a symbolic link to this file in the sites-enabled directory.

cd /etc/nginx/sites-enabled/ 

ln -s /etc/nginx/sites-available/
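Assuming the configuration file created above is named yourdomain.com, and given the cd into sites-enabled, the full command looks something like this:

ln -s /etc/nginx/sites-available/yourdomain.com yourdomain.com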

To test that everything is working correctly, create a simple node app, save it to /var/www/, and run it.

Here is a simple nodejs app if you don’t have one handy.

var http = require('http');

http.createServer(function (req, res) {
    res.writeHead(200, {'Content-Type': 'text/plain'});
    res.end('Hello World\n');
}).listen(9001, '127.0.0.1');

console.log('Server running at http://127.0.0.1:9001/');

Let’s restart Nginx.

sudo /etc/init.d/nginx restart

Don’t forget to start your Nodejs instance, if you haven’t already.

cd /var/www/yourdomain/ && node app.js

If all is working correctly, when you navigate to your domain you'll see "Hello World."

To add another domain for a different Nodejs instance, you need to repeat the steps above. Specifically, you'll need to change the upstream name, the port, and the domain in your new Nginx config file. The proxy_pass address must match the upstream name in the same config file. Look at the upstream name and the proxy_pass value and you'll see what I mean.

To recap, we've installed Nodejs from source and we just finished installing Nginx. We've configured and tested port forwarding with Nginx and Nodejs.

Installing PM2

You might be asking "What is PM2?" as I did when I first heard about it. PM2 is a process manager for Nodejs applications. Nodejs doesn't come with much; this is part of its appeal. The downside is that you have to provide the layers in front of it yourself. PM2 is one of those layers.

PM2 manages the life of the Nodejs process. When the process is terminated, PM2 restarts it. When the server reboots, PM2 restarts all the Nodejs processes for you. It also has an extensive development lifecycle process, which we won't be covering here. I encourage you to read PM2's well-written documentation.

Assuming you are logged into the terminal, we'll start by installing PM2 via NPM, the Nodejs package manager. It was installed when you installed Nodejs.

sudo npm install pm2 -g

That’s it. PM2 is now installed.

Using PM2

PM2 is easy to use.

The hello world for PM2 is simple.

pm2 start hello.js

This adds your application to PM2’s process list. This list is output each time an application is started.

In this example, there are two Nodejs applications running; one of them is named api.pre.

PM2 automatically assigns the name of the app to the “App name” in the list.

Out of the box, PM2 does not configure itself to start up when the server restarts. The command to enable this differs across flavors of Linux. I'm running Ubuntu, so I'll execute the Ubuntu command.

pm2 startup ubuntu

We are not quite done yet. We have to add a path to the PM2 binary. Fortunately, the output of the previous command tells us how to do that.


[PM2] You have to run this command as root
[PM2] Execute the following command :
[PM2] sudo env PATH=$PATH:/usr/local/bin pm2 startup ubuntu -u sammy
Run the command that was generated (similar to the highlighted output above) to set PM2 up to start on boot (use the command from your own output):

 sudo env PATH=$PATH:/usr/local/bin pm2 startup ubuntu -u sammy
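Depending on your PM2 version, you may also need to save the running process list so PM2 knows which apps to resurrect at boot; the pm2 save command does this:

pm2 save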

Examples of other PM2 usages (optional)

Stopping an application by the app name

pm2 stop example

Restarting by the app name

pm2 restart example

List of current applications managed by PM2

pm2 list

You can also specify a name when starting a process. By default, PM2 uses the javascript file name as the app name, which might not work for you. Here's how to specify the name.

pm2 start www.js --name api.pre

That should be enough to get you going with PM2. To learn more about PM2’s capabilities, visit the GitHub Repo.

Setting up and Using Plink

You are probably thinking, "What in the name of Betsey's cow is Plink?" At least that's what I thought. I'm still not sure what to think of it. I've never seen anything like it.

Have you ever watched the movie Wall-e? Wall-e pulls out a spork. First he tries to put it with the forks, but it doesn't fit, and then he tries to put it with the spoons, but it doesn't fit either. Well, that's Plink. It's a cross between Putty (SSH) and the Windows command line.

Plink basically allows you to run bash commands via the Windows command line while logged into a Linux (and probably Unix) shell.

Start by downloading Plink. It’s just an executable. I recommend putting it in C:/Program Files (x86)/Plink. We’ll need to reference it later.

If you are running an Ubuntu instance in AWS, you'll already have a cert set up for Putty (I'm assuming you are using Putty).

If you are not, you’ll need to ensure you have a compatible ssh cert for Ubuntu in AWS.

If you are not using AWS, you can specify the username and password on the command line and won't need to worry about the ssh certs.

Here is an example command line that connects to Ubuntu with Plink.

"C:\Program Files (x86)\Plink\plink.exe" -ssh ubuntu@xx.xx.xx.xx -i "C:\Program Files (x86)\Plink\ssh certs\aws-ubuntu.ppk" 

This might be getting ahead of ourselves, but to run a shell script on the Ubuntu server, we add the script's complete path to the end of the Plink command.

"C:\Program Files (x86)\Plink\plink.exe" -ssh ubuntu@xx.xx.xx.xx -i "C:\Program Files (x86)\Plink\ssh certs\aws-ubuntu.ppk" /var/www/

And that, dear reader, is Plink.

Understanding NODE_ENV

NODE_ENV is an environment variable made popular by expressjs. Before you start the node instance, set NODE_ENV to the environment name. In your code, you can then load specific files based on the environment.

Setting NODE_ENV
Linux & Mac: export NODE_ENV=PROD
Windows: set NODE_ENV=PROD

The environment variable is retrieved inside a Nodejs instance by using process.env.NODE_ENV.


var environment = process.env.NODE_ENV

or with expressjs
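A minimal sketch, assuming an existing express app instance:

var environment = app.get('env');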


*Note: app.get(‘env’) defaults to “development”.

Bringing it all together

Nodejs, PM2, Nginx and Plink are installed and hopefully working. We now need to bring all these pieces together into a continuous integration solution.

Clone your GitHub repository into /var/www/. Although SSH is more secure than HTTPS, I recommend using HTTPS. I know this isn't ideal, but I couldn't get Plink working with GitHub over SSH on Ubuntu. Without going into too much detail, the Plink and GitHub SSH cert formats are different, and calling GitHub via Plink through SSH didn't work. If you can figure out the issue, let me know!

To make the GitHub pull hands-free, the username and password need to be part of the origin url.

Here’s how you set your origin url. Of course you’ll need to substitute your information where appropriate.

git remote set-url origin
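The general shape of an HTTPS origin url with embedded credentials (username, password, and repository below are placeholders) is:

git remote set-url origin https://username:password@github.com/username/repository.git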

Clone your repository.

cd /var/www/
git clone .

Note that if this directory is not completely empty (including hidden files), Git will not clone the repo into it.

To find hidden files in the directory, run this command:

ls -a

For the glue, we are using a shell script. Here is a copy of my script.


echo "> Current PM2 Apps"
pm2 list

echo "> Stopping running API"
pm2 stop

echo "> Set Environment variable."

echo "> Changing directory to"
cd /var/www/

echo "> Listing the contents of the directory."
ls -a

echo "> Remove untracked directories in addition to untracked files."
git clean -f -d

echo "> Pull updates from Github."
git pull

echo "> Install npm updates."
sudo npm install

echo "> Transpile the ECMAScript 2015 code"
gulp babel

echo "> Restart the API"
pm2 start transpiled/www.js --name

echo "> List folder directories"
ls -a

echo "> All done."

I launch this shell script with TeamCity, but you can launch it with anything.

Here is the raw command.

"C:\Program Files (x86)\Plink\plink.exe" -ssh ubuntu@xx.xx.xx.xx -i "C:\Program Files (x86)\Plink\ssh certs\aws-ubuntu.ppk" /var/www/

That’s it.

In Closing

This process has some rough edges... I hope to polish those edges in time. If you have suggestions please leave them in the comments.

This document is in my GitHub Repository. Technologies change, so if you find an error please update it. I will then update this post.

Removing Large Files From Your Git Repository

I've resisted moving my projects onto GitHub. When GitHub first opened its doors, it surprised me. Why would anyone build a UI on top of version control? It seemed like such a simple idea, one which had already been done many times over. So what made GitHub different?

GitHub Logo

As it turns out, GitHub is different. They have a wonderful product.

I expected the switch to be uneventful, but things don't always go as we expect. My previous git provider didn’t have filesize restrictions. During the push into GitHub, I received a warning at 50 megs. At 100 megs, it turned into a roadblock.

Luckily, GitHub has detailed instructions on how to remove large files.

First, if it’s a pending check-in, you can simply remove the file from the cache.

git rm --cached giant_file
# Stage our giant file for removal, but leave it on disk

Commit the change.

git commit --amend -CHEAD
# Amend the previous commit with your change
# Simply making a new commit won't work, as you need
# to remove the file from the unpushed history as well

Push your changes to GitHub.

git push
# Push our rewritten, smaller commit

If it's not in a pending check-in, but is part of your repo's history, things get interesting. There is a utility, BFG Repo-Cleaner, that makes this process a breeze.

The command (from GitHub documentation).

bfg --strip-blobs-bigger-than 50M
# Git history will be cleaned - files in your latest commit will *not* be touched

The GitHub documentation must assume you have BFG installed, because the command didn't work for me.

I downloaded the BFG jar file and ran it. Don't forget to be in the root of your git repository.

java -jar bfg.jar --strip-blobs-bigger-than 50M

Here is my output

Scanning packfile for large blobs: 35170
Scanning packfile for large blobs completed in 251 ms.
Found 10 blob ids for large blobs - biggest=125276291 smallest=53640151
Total size (unpacked)=958626718
Found 1691 objects to protect
Found 1 tag-pointing refs : refs/tags/v0.1
Found 7 commit-pointing refs : HEAD, refs/heads/dev, refs/heads/master, ...

Protected commits

These are your protected commits, and so their contents will NOT be altered:

 * commit a99dbf81 (protected by 'HEAD')


Found 1093 commits
Cleaning commits:   100% (1093/1093)
Cleaning commits completed in 8,427 ms.

Updating 6 Refs

Ref  Before After
refs/heads/dev | 02eeab40 | 8ad272d3
refs/heads/master  | a99dbf81 | 8008478b
refs/heads/prod| 15f1558b | dc52efeb
refs/heads/qa  | 15f1558b | dc52efeb
refs/remotes/origin/master | 0c71d31f | d992278d
refs/tags/v0.1 | fc78e278 | ba078ff6

Updating references:100% (6/6)
...Ref update completed in 45 ms.

Commit Tree-Dirt History

Earliest  Latest
|  |

D = dirty commits (file tree fixed)
m = modified commits (commit message or parents changed)
. = clean commits (no changes to file tree)

Before After
First modified commit | 71ab4035 | 5963444b
Last dirty commit | 48c18598 | d7000b5a

Deleted files

Filename  Git id
---------------------------------------------------------------- | 2ace978f (117.2 MB) | 3fb67bc6 (117.8 MB) | edc34fe0 (118.3 MB) | cf8b9f19 (118.5 MB) | a41ce08a (119.5 MB)
Grover_be2.mdb  | 129a7cc8 (61.4 MB)   | d730b329 (62.6 MB)
Unify | 5fca437c (53.2 MB), 728b06a4 (51.2 MB)  | ebe5f6cf (94.6 MB)

In total, 3191 object ids were changed. Full details are logged here:


BFG run is complete! When ready, run: git reflog expire --expire=now --all && git gc --prune=now --aggressive

Has the BFG saved you time?  Support the BFG on BountySource:



This is a response of sorts to this thoughtful post:

Here is a short summary of the post:

The author spent many years as a programmer. It was difficult to balance life and work. He was always running to the next engagement, stuck in the learning rat race of software engineering. His job consumed him; he didn't have time for anything except the job. To find balance, he pivoted his career into a less demanding field and achieved balance between his job and his life.

I understand his pain. Much of my energy is devoted to learning. New technologies are getting more diverse and breeding innovation, which means even more learning.

One can easily get consumed by programming. In many ways it's crack for the brain.

You are in a perpetual state of sharpening your sword. The programmer who stops is relegated to obsolescence in a few short years. In extreme situations, they will find themselves unemployable.

So why do I program? Because I love to do it, and I get paid to do what I love. Like the author of the post, I've had to learn to balance my career and my personal time.

Some will disagree, but for me programming is an art. There is no limit to how skilled I can become. Applications are my canvas; programming is the medium I use to express myself. It's how I create.

The Mind State of a Software Engineer

Have patience.

I'll wait

Coding is discovery. Coding is failing. Be ok with this.



Don't blame the framework. It's more probable that it's your code. Accept this fallibility.

Lady Bug


Know when to walk away. Your mind is a wonderful tool; even at rest it's working on unsolved problems. Rest, and let your mind do its work.



Be comfortable not knowing. Software engineering is a vast ocean of knowledge. Someone will always know more than you. The sooner you are OK with this, the sooner you will recognize the opportunity to learn something new.

Ocean Sailing


Anger and frustration don't fix code. Take a break; nothing can be accomplished in this state.



Simplifying Null Checks

I have checks for nulls littered throughout my codebase. We see this in all C# codebases; it just comes with the territory. So much so that we don't see it as an issue. We are numb to the pain.

An example of a null check.

if (source != null)
    return source.Select(s => base.Convert(s));

return null;

It's trivial code, no doubt about that, but my issue is that it doesn't convey much intent. Can we make it better? I think so. Let's make it more compact with a ternary operator. Here is the result.

return (source != null ? source.Select(s => base.Convert(s)) : null);

This is more succinct, but it lacks readability. We've squeezed the if-statement into one line. It feels harder to read than the previous form.

How can we make this more succinct and maintain readability? What if we used an extension method for the null check, instead of comparing the object to null?

if (source.IsNotNull())
    return source.Select(s => base.Convert(s));

return null;

Ok, I like this. There is meaning to the if-statement evaluation. If it's not null then we enter the if-statement.

Here is the code for the IsNotNull() extension method.

public static bool IsNotNull(this object val)
{
    return val != null;
}
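For completeness, extension methods must be declared inside a static class; a minimal sketch of a containing class (the class name here is just an assumption):

public static class ObjectExtensions
{
    public static bool IsNotNull(this object val)
    {
        return val != null;
    }
}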

This is good, but I am bothered by the if-statement. I wonder if we can get rid of it altogether. Maybe with a little of C#'s functional magic we can eliminate the if-block.

return source.IsNotNull(() => source.Select(s => base.Convert(s)));

Ahh, that's better.

When the object is not null, it executes the passed-in lambda expression. If you aren't familiar with lambda expressions, this might have you scratching your head. The code is below; take a look at it, and hopefully it will clear things up.

public static T IsNotNull<T>(this object val, Func<T> result) where T : class
{
    if (val != null)
        return result();

    return null;
}

It's a constant challenge keeping code readable. Most of us write code like this all day without a thought to making it better. We've been walking on glass for so long that we don't feel the pain. Each little nugget helps.

On a side note, the next version of C# will have null-conditional operators making the extension method I created irrelevant. Here is an example.

return source?.Select(s => base.Convert(s));

Index Fragmentation in SQL Azure, Who Knew!

I've been on my project for over a year, and during that year it has grown significantly, both as an application and in data. It's been nonstop new features; I've rarely gone back and refactored code. Last week I noticed some of the data-heavy pages were loading slowly. In the worst case, one view could take up to 30 seconds to load -- 10 times over my maximum load time...

Call me naive, but I didn't consider index fragmentation in SQL Azure. It's the cloud! It's supposed to be immune to on-premises issues... Apparently index fragmentation is also an issue in the cloud.

I found a couple of queries on an MSDN blog that identify the fragmented indexes and then rebuild them.

After running the first query to show index fragmentation, I found some indexes with over 50 percent fragmentation. According to the article, anything over 10% needs attention.

First Query: Display Index Fragmentation

--Get the fragmentation percentage
SELECT ips.avg_fragmentation_in_percent
,OBJECT_NAME(ps.object_id) AS TableName
,i.name AS IndexName
FROM sys.dm_db_partition_stats ps
INNER JOIN sys.indexes i
ON ps.object_id = i.object_id
AND ps.index_id = i.index_id
CROSS APPLY sys.dm_db_index_physical_stats(DB_ID(), ps.object_id, ps.index_id, null, 'LIMITED') ips
ORDER BY ps.object_id, ps.index_id

Second Query: Rebuild the Indexes

--Rebuild the indexes
DECLARE @TableName varchar(255)

DECLARE TableCursor CURSOR FOR
 SELECT '[' + IST.TABLE_SCHEMA + '].[' + IST.TABLE_NAME + ']' AS [TableName]
 FROM INFORMATION_SCHEMA.TABLES IST
 WHERE IST.TABLE_TYPE = 'BASE TABLE'

OPEN TableCursor
FETCH NEXT FROM TableCursor INTO @TableName
WHILE @@FETCH_STATUS = 0
BEGIN
 PRINT('Rebuilding Indexes on ' + @TableName)
 Begin Try
  EXEC('ALTER INDEX ALL ON ' + @TableName + ' REBUILD with (ONLINE=ON)')
 End Try
 Begin Catch
  PRINT('Cannot do rebuild with Online=On option, taking table ' + @TableName + ' down for doing rebuild')
  EXEC('ALTER INDEX ALL ON ' + @TableName + ' REBUILD')
 End Catch
 FETCH NEXT FROM TableCursor INTO @TableName
END

CLOSE TableCursor
DEALLOCATE TableCursor


Moving from Wordpress to Hexo

I love Wordpress - it just works. Its community is huge and it's drop-dead simple to get running.

I started blogging in 2002 when the blogging landscape was barren. Blogging platforms were few and far between. Heck “blogging” wasn’t even a term.

My first blogging engine was b2, the precursor to Wordpress. In 2003, Wordpress forked b2 and started on the journey to the Wordpress we now all love. At the time I felt conflicted. Why create a second blogging platform? Why not lend support to b2? Wasn't b2 good enough? Ultimately it was a good decision. Not too long after the fork, development on b2 stalled.

Wordpress has enjoyed a huge amount of popularity. It’s, by far, the most popular CMS (content management system).

So, it's with sadness that after writing over 500 posts on b2 and Wordpress, I am moving away from Wordpress. I simply don't need its functionality and versatility. I am moving to Hexo, a node-based blog/site generator.

Assets and posts are stored on the file system. The posts are written in Markdown. Hexo takes the Markdown and generates HTML pages, linking the pages as it moves through the content. Depending on which theme you choose and how you customize it, you can generate just about anything.

I hope you enjoy the change. The site is much faster. The comments are now powered by Disqus. These changes will allow me to deliver a better and a faster experience for you.