Chuck Conway

8 Must Have Extensions for Brackets.io

Everyone has a favorite editor, and we each have our reasons for choosing it. I’ve tried them all, and I’ve found that Brackets.io suits me best. Unfortunately, there are gaps in its functionality. Thankfully, Brackets.io has a robust ecosystem of extensions, and I’ve found 8 that fill those gaps.

Here is a list of my 8 must-have extensions.

Emmet

For anyone working with CSS and HTML, Emmet is a must-have. I wrote about it earlier this year. It removes all the unnecessary typing from creating HTML and CSS.

Autosave

It’s officially called “Autosave Files on Window Blur”. This extension saves all changed files once you navigate away from Brackets. It works much like WebStorm’s autosave.

Beautify

You’d think this wasn’t a big deal; at least that’s what I thought. But it does a great job! Give it a try. You’ll be surprised how useful this plugin is. - Beautify

Brackets Git

This is the best Git integration I have ever used. And I’ve used Git in WebStorm, Sublime Text and Visual Studio, so that’s saying a lot. It’s functional and aesthetically pleasing; there isn’t much else to ask for. - Brackets Git

Brackets Icons

You’d be surprised how much a few good icons can spruce up an ole editor. - Brackets Icons

Documents Toolbar

In my opinion, this is a missing feature of Brackets.io. This extension completes the editor. - Documents Toolbar

Todo

This summarizes all the TODO comments in the file. It also supports NOTE, FIXME, CHANGES and FUTURE. More can be added if this list is too limiting. - Todo

Markdown Preview

Ok, ok, this last one isn’t a must-have, but I find it useful when editing READMEs or writing a post for my blog. - Markdown Preview


Setting up Continuous Integration on Ubuntu with Nodejs

I went through blood, sweat and tears to bring this to you. I suffered the scorching heat of Death Valley and summited the peaks of Mount McKinley. I’ve sacrificed much.

Much of the content shared in this post is not my original work. Where I can, I link back to the original work.

This article assumes you can get around Linux.

I could not find a comprehensive guide on hosting and managing Nodejs applications on Ubuntu in a production capacity, so I’ve pulled together multiple articles on the subject. By the end of this article, I hope you’ll be able to set up your own Ubuntu server and have Nodejs deploying via a continuous integration server.

Environment

I am using TeamCity on Windows, which deploys code from GitHub to Ubuntu hosted on AWS.

Technologies

For this article I used the following technologies:

Ubuntu (hosted on AWS)
Nodejs
Nginx
PM2
Plink
TeamCity
GitHub

Setting up Ubuntu

I’m not going into detail here; Amazon Web Services (AWS) makes this pretty easy to do. It doesn’t matter where your server is hosted, or whether it’s your own hardware.

I encountered a few gotchas. First, make sure port 80 is open. I made the foolish mistake of trying to connect with port 80 closed. Once I discovered my mistake, I felt like a rhinoceros's ass.

Installing Nodejs From Source

Nodejs is a server technology that uses Google’s V8 JavaScript engine. Since its release in 2009, it has become widely popular.

The following instructions originally came from a Digital Ocean post.

You always have the option to install Nodejs from apt-get, but it will be a few versions behind. To get the latest bits, install Nodejs from source.

By the end of this section, we will have downloaded the latest stable version of Nodejs (as of this article), built the source, and installed it.

Log into your server. We’ll start by updating the package lists.

sudo apt-get update

I’m also suggesting that you upgrade all the packages. This is not necessary for Nodejs, but it is good practice to keep your server updated.

sudo apt-get upgrade

Your server is all up to date. It’s time to download the source.

cd ~

As of this writing, 0.12.7 is the latest stable release of Nodejs. Check nodejs.org for the latest version.

wget https://nodejs.org/dist/v0.12.7/node-v0.12.7.tar.gz

Extract the archive you’ve downloaded.

tar xvf node-v*

Move into the newly created directory.

cd node-v*

Configure and build Nodejs.

./configure

make

Install Nodejs.

sudo make install

Optionally, remove the downloaded archive and the extracted files.

cd ~

rm -rf node-v*

Congrats! Nodejs is now installed! And it wasn’t very hard.
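If you want a quick sanity check that everything landed, both binaries should now be on your path:

node -v

npm -v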

Setting up Nginx

Source

Nodejs can act as a web server, but it’s not what I would want to expose to the world. An industrial, hardened, feature-rich web server is better suited for this. I’ve turned to Nginx for this task.

It’s a mature web server with the features we need. To run more than one instance of Nodejs, we’ll need port forwarding.

You might be thinking: why do we need more than one instance of Nodejs running at the same time? That’s a fair question… In my scenario, I have one server and I need to run DEV, QA and PROD on the same machine. Yeah, I know, not ideal, but I don’t want to stand up 3 servers, one for each environment.

To start, let’s install Nginx.

sudo -s

add-apt-repository ppa:nginx/stable

apt-get update 

apt-get install nginx

Once Nginx has successfully installed, we need to set up the domains. I’m going to assume you’ll want each of your sites on its own domain/subdomain. If you don’t, and want to use different sub-folders instead, that’s doable and very easy to do, but I’m not going to cover that scenario here; there is a ton of documentation on it. There is very little documentation on setting up different domains and port forwarding to the corresponding Nodejs instances, so that’s what I’ll be covering.

Now that Nginx is installed, create a file for yourdomain.com at /etc/nginx/sites-available/

sudo nano /etc/nginx/sites-available/yourdomain.com

Add the following configuration to your newly created file:

# the IP(s) on which your node server is running. I chose port 9001.
upstream app_myapp1 {
    server 127.0.0.1:9001;
    keepalive 8;
}

# the nginx server instance
server {
    listen 80;
    server_name yourdomain.com;
    access_log /var/log/nginx/yourdomain.log;

    # pass the request to the node.js server with the correct headers
    # and much more can be added, see nginx config options
    location / {
        proxy_http_version 1.1;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;

        proxy_pass http://app_myapp1;

    }
}

Make sure you replace "yourdomain.com" with your actual domain. Save and exit your editor.

Create a symbolic link to this file in the sites-enabled directory.

cd /etc/nginx/sites-enabled/ 

ln -s /etc/nginx/sites-available/yourdomain.com yourdomain.com
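Optionally, you can have Nginx validate the new configuration before you restart it:

sudo nginx -t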

To test everything is working correctly, create a simple node app, save it to /var/www/yourdomain.com/app.js, and run it.

Here is a simple nodejs app if you don’t have one handy.

var http = require('http');

http.createServer(function (req, res) {
    res.writeHead(200, {'Content-Type': 'text/plain'});
    res.end('Hello World\n');
}).listen(9001, "127.0.0.1");

console.log('Server running at http://127.0.0.1:9001/');

Let’s restart Nginx.

sudo /etc/init.d/nginx restart

Don’t forget to start your Nodejs instance, if you haven’t already.

cd /var/www/yourdomain/ && node app.js

If all is working correctly, when you navigate to yourdomain.com you’ll see “Hello World.”

To add another domain for a different Nodejs instance, you need to repeat the steps above. Specifically, you’ll need to change the upstream name, the port, and the domain in your new Nginx config file. The proxy_pass address must match the upstream name in the Nginx config file. Look at the upstream name and the proxy_pass value and you’ll see what I mean.
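For example, a second site’s config might look like the sketch below. The domain anotherdomain.com, the upstream name app_myapp2, and port 9002 are placeholders; substitute your own values.

upstream app_myapp2 {
    server 127.0.0.1:9002;
    keepalive 8;
}

server {
    listen 80;
    server_name anotherdomain.com;
    access_log /var/log/nginx/anotherdomain.log;

    location / {
        proxy_http_version 1.1;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;

        proxy_pass http://app_myapp2;
    }
}

Pair this with a second Nodejs instance listening on 127.0.0.1:9002 and a symlink in sites-enabled, and both sites run side by side.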

To recap, we’ve installed Nodejs from source and we’ve just finished installing Nginx. We’ve configured and tested port forwarding with Nginx and Nodejs.

Installing PM2

You might be asking “What is PM2?” as I did when I first heard about it. PM2 is a process manager for Nodejs applications. Nodejs doesn’t come with much; this is part of its appeal. The downside is, well, you have to provide the layers in front of it yourself. PM2 is one of those layers.

PM2 manages the life of the Nodejs process. When it’s terminated, PM2 restarts it. When the server reboots, PM2 restarts all the Nodejs processes for you. It also supports an extensive development lifecycle, which we won’t be covering here. I encourage you to read the well-written documentation.

Assuming you are logged into the terminal, we’ll start by installing PM2 via npm, the Nodejs package manager. It was installed when you installed Nodejs.

sudo npm install pm2 -g

That’s it. PM2 is now installed.

Using PM2

PM2 is easy to use.

The hello world for PM2 is simple.

pm2 start hello.js

This adds your application to PM2’s process list. This list is output each time an application is started.

In this example there are two Nodejs applications running: one called api.dev and one called api.pre.

PM2 automatically assigns the name of the app to the “App name” in the list.

Out of the box, PM2 does not configure itself to start up when the server restarts. The command differs for the different flavors of Linux. I’m running Ubuntu, so I’ll execute the Ubuntu command.

pm2 startup ubuntu

We are not quite done yet. We have to add a path to the PM2 binary. Fortunately, the output of the previous command tells us how to do that.

Output:

[PM2] You have to run this command as root
[PM2] Execute the following command :
[PM2] sudo env PATH=$PATH:/usr/local/bin pm2 startup ubuntu -u sammy
Run the command that was generated (similar to the highlighted output above) to set PM2 up to start on boot (use the command from your own output):

 sudo env PATH=$PATH:/usr/local/bin pm2 startup ubuntu -u sammy

Examples of other PM2 usages (optional)

Stopping an application by the app name

pm2 stop example

Restarting by the app name

pm2 restart example

List of current applications managed by PM2

pm2 list

Specifying a name when starting a process. By default, PM2 uses the name of the JavaScript file as the app name, which might not work for you. Here’s how to specify the name.

pm2 start www.js --name api.pre

That should be enough to get you going with PM2. To learn more about PM2’s capabilities, visit the GitHub Repo.

Setting up and Using Plink

You are probably thinking, “What in the name of Betsey's cow is Plink?” At least that’s what I thought. I’m still not sure what to make of it. I’ve never seen anything like it.

Have you ever watched the movie Wall-E? Wall-E finds a spork. First he tries to put it with the forks, but it doesn’t fit; then he tries to put it with the spoons, but it doesn’t fit there either. Well, that’s Plink. It’s a cross between Putty (SSH) and the Windows command line.

Plink basically allows you to run bash commands via the Windows command line while logged into a Linux (and probably Unix) shell.

Start by downloading Plink. It’s just an executable. I recommend putting it in C:/Program Files (x86)/Plink. We’ll need to reference it later.

If you are running an Ubuntu instance in AWS, you’ll already have a cert set up for Putty (I’m assuming you are using Putty).

If you are not, you’ll need to ensure you have a compatible ssh cert for Ubuntu in AWS.

If you are not using AWS, you can specify the username and password on the command line and won’t have to worry about the ssh certs.

Here is an example command line that connects to Ubuntu with Plink.

"C:\Program Files (x86)\Plink\plink.exe" -ssh ubuntu@xx.xx.xx.xx -i "C:\Program Files (x86)\Plink\ssh certs\aws-ubuntu.ppk" 

This might be getting ahead of ourselves, but to run a shell script on the Ubuntu server, we add its complete path to the end of the Plink command.

"C:\Program Files (x86)\Plink\plink.exe" -ssh ubuntu@xx.xx.xx.xx -i "C:\Program Files (x86)\Plink\ssh certs\aws-ubuntu.ppk" /var/www/deploy-dev-ui.sh

And that, dear reader, is Plink.

Understanding NODE_ENV

NODE_ENV is an environment variable made popular by expressjs. Before you start the node instance, set NODE_ENV to the name of the environment. In your code you can then load specific files based on the environment.

Setting NODE_ENV
Linux & Mac: export NODE_ENV=PROD
Windows: set NODE_ENV=PROD

The environment variable is retrieved inside a Nodejs instance by using process.env.NODE_ENV.

example

var environment = process.env.NODE_ENV

or with expressjs

app.get('env')

*Note: app.get(‘env’) defaults to “development”.
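Putting NODE_ENV to work, here’s one common pattern for loading environment-specific settings (a sketch; the config folder and file names are hypothetical):

var environment = process.env.NODE_ENV || 'development';

// Loads ./config/PROD.json, ./config/DEV.json, etc.,
// depending on what NODE_ENV was set to before startup.
var config = require('./config/' + environment + '.json');

console.log('Running with ' + environment + ' settings');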

Bringing it all together

Nodejs, PM2, Nginx and Plink are installed and hopefully working. We now need to bring all these pieces together into a continuous integration solution.

Clone your GitHub repository into /var/www/yourdomain.com. Although SSH is more secure than HTTPS, I recommend using HTTPS. I know this isn’t ideal, but I couldn’t get Plink working with GitHub over SSH on Ubuntu. Without going into too much detail, Plink and GitHub SSH cert formats are different, and calling GitHub via Plink through SSH didn’t work. If you can figure out the issue, let me know!

To make the GitHub pull hands-free, the username and password will need to be part of the origin url.

Here’s how you set your origin url. Of course, you’ll need to substitute your information where appropriate.

git remote set-url origin  https://username:password@github.com/username/yourdomain.git

Clone your repository.

cd /var/www/yourdomain.com
git clone https://username:password@github.com/username/yourdomain.git .

Note that if this directory is not completely empty (including hidden files), Git will not clone the repo into it.

To find hidden files in the directory, run this command:

ls -a

For the glue, we are using a shell script. Here is a copy of my script.

#!/bin/bash

echo "> Current PM2 Apps"
pm2 list

echo "> Stopping running API"
pm2 stop api.dev

echo "> Set Environment variable."
export NODE_ENV=DEV

echo "> Changing directory to dev.momentz.com."
cd /var/www/yourdomain.com

echo "> Listing the contents of the directory."
ls -a

echo "> Remove untracked directories in addition to untracked files."
git clean -f -d

echo "> Pull updates from Github."
git pull

echo "> Install npm updates."
sudo npm install

echo "> Transpile the ECMAScript 2015 code"
gulp babel

echo "> Restart the API"
pm2 start transpiled/www.js --name api.dev

echo "> List folder directories"
ls -a

echo "> All done."

I launch this shell script with TeamCity, but you can launch it with anything.

Here is the raw command.

"C:\Program Files (x86)\Plink\plink.exe" -ssh ubuntu@xx.xx.xx.xx -i "C:\Program Files (x86)\Plink\ssh certs\aws-ubuntu.ppk" /var/www/deploy-yourdomain.sh
exit
>&2

That’s it.

In Closing

This process has some rough edges... I hope to polish those edges in time. If you have suggestions please leave them in the comments.

This document is in my GitHub Repository. Technologies change, so if you find an error please update it. I will then update this post.


Removing Large Files From Your Git Repository

I've resisted moving my projects onto GitHub. When GitHub first opened its doors, it surprised me. Why would anyone build a UI on top of version control? It just seemed like such a simple idea, one that had already been done many times over. So what made GitHub different?

(Image: GitHub logo)

As it turns out, GitHub is different. They have a wonderful product.

I expected the switch to be uneventful, but things don't always go as we expect. My previous Git provider didn’t have file-size restrictions. During the push into GitHub, I received a warning at 50 megs. At 100 megs, it turned into a roadblock.

Luckily, GitHub has detailed instructions on how to remove the large files.

First, if it’s a pending check-in, you can simply remove the file from the cache.

git rm --cached giant_file
# Stage our giant file for removal, but leave it on disk

Commit the change.

git commit --amend -CHEAD
# Amend the previous commit with your change
# Simply making a new commit won't work, as you need
# to remove the file from the unpushed history as well

Push your changes to GitHub.

git push
# Push our rewritten, smaller commit

If it’s not in a pending check-in, but is part of your repo’s history, things get interesting. There is a utility, BFG Repo-Cleaner, that makes this process a breeze.

The command (from GitHub documentation).

bfg --strip-blobs-bigger-than 50M
# Git history will be cleaned - files in your latest commit will *not* be touched

The GitHub documentation must assume you have BFG installed, because the command didn’t work for me.

I downloaded the jar file and ran it. Don’t forget to be in the root of your git repository.

java -jar bfg.jar --strip-blobs-bigger-than 50M

Here is my output:

Scanning packfile for large blobs: 35170
Scanning packfile for large blobs completed in 251 ms.
Found 10 blob ids for large blobs - biggest=125276291 smallest=53640151
Total size (unpacked)=958626718
Found 1691 objects to protect
Found 1 tag-pointing refs : refs/tags/v0.1
Found 7 commit-pointing refs : HEAD, refs/heads/dev, refs/heads/master, ...

Protected commits
-----------------

These are your protected commits, and so their contents will NOT be altered:

 * commit a99dbf81 (protected by 'HEAD')

Cleaning
--------

Found 1093 commits
Cleaning commits:   100% (1093/1093)
Cleaning commits completed in 8,427 ms.

Updating 6 Refs
---------------

Ref                        | Before   | After
---------------------------|----------|---------
refs/heads/dev             | 02eeab40 | 8ad272d3
refs/heads/master          | a99dbf81 | 8008478b
refs/heads/prod            | 15f1558b | dc52efeb
refs/heads/qa              | 15f1558b | dc52efeb
refs/remotes/origin/master | 0c71d31f | d992278d
refs/tags/v0.1             | fc78e278 | ba078ff6

Updating references: 100% (6/6)
...Ref update completed in 45 ms.

Commit Tree-Dirt History
------------------------

Earliest  Latest
|  |
DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD

D = dirty commits (file tree fixed)
m = modified commits (commit message or parents changed)
. = clean commits (no changes to file tree)

                      | Before   | After
----------------------|----------|---------
First modified commit | 71ab4035 | 5963444b
Last dirty commit     | 48c18598 | d7000b5a

Deleted files
-------------

Filename                | Git id
------------------------|----------------------------------
GroverFull-06292014.zip | 2ace978f (117.2 MB)
GroverFull-07142014.zip | 3fb67bc6 (117.8 MB)
GroverFull-07312014.zip | edc34fe0 (118.3 MB)
GroverFull-08012014.zip | cf8b9f19 (118.5 MB)
GroverFull-08292014.zip | a41ce08a (119.5 MB)
Grover_be2.mdb          | 129a7cc8 (61.4 MB)
HELOS.zip               | d730b329 (62.6 MB)
Unify Theme.zip         | 5fca437c (53.2 MB), 728b06a4 (51.2 MB)
products-WB0412697.zip  | ebe5f6cf (94.6 MB)


In total, 3191 object ids were changed. Full details are logged here:

C:\Projects\grover.bfg-report\2015-07-21\13-31-45

BFG run is complete! When ready, run: git reflog expire --expire=now --all && git gc --prune=now --aggressive


Has the BFG saved you time? Support the BFG on BountySource: https://j.mp/fund-bfg

Balanced


This is a response of sorts to this thoughtful post: https://medium.com/@sabmalik/turning-the-page-32913f07e818

Here is a short summary of the post:

The author spent many years as a programmer. It was difficult to balance life and work. He was always running to the next engagement, stuck in the learning rat race of software engineering. His job consumed him; he didn't have time for anything except the job. To find balance, he pivoted his career into a less demanding field and achieved balance between his job and his life.

I understand his pain. Much of my energy is devoted to learning. New technologies are getting more diverse and breeding innovation, which means even more learning.

One can easily get consumed by programming. In many ways it's crack for the brain.

You are in a perpetual state of sharpening your sword. The programmer who stops is relegated to obsolescence in a few short years. In extreme situations, they will find themselves unemployable.

So why do I program? Because I love to do it, and I get paid to do what I love. Like the author of the post, I’ve had to learn to balance my career and my personal time.

Some will disagree, but for me programming is an art. There is no limit to how skilled I can become. Applications are my canvas, programming is the medium I use to express myself. It's how I create.


The Mind State of a Software Engineer

Have patience.

I'll wait

Coding is discovery. Coding is failing. Be ok with this.

(Image: discovery)

Don't blame the framework. It’s more probable that it’s your code. Accept this fallibility.

(Image: ladybug)

Know when to walk away. Your mind is a wonderful tool; even at rest it’s working on unsolved problems. Rest, and let your mind do its work.

(Image: hammock)

Be comfortable not knowing. Software engineering is a vast ocean of knowledge, and someone will always know more than you. The sooner you are OK with this, the sooner you will recognize the opportunity to learn something new.

(Image: ocean sailing)

Anger and frustration don't fix code. Take a break; nothing can be accomplished in this state.

(Image: anger)


Simplifying Null Checks

I have checks for nulls littered throughout my codebase. We see this in all C# codebases; it just comes with the territory. So much so that we don't see it as an issue. We are numb to the pain.

An example of a null check.

if (source != null)
{
    return source.Select(s => base.Convert(s));
}

return null;

It's trivial code, no doubt about that, but my issue is that it doesn't provide much in the way of intent. Can we make it better? I think so. Let's make it more compact with a ternary operator. Here is the result.

return (source != null ? source.Select(s => base.Convert(s)) : null);

This is more succinct, but it lacks readability. We've squeezed the if-statement into one line. It feels harder to read than the previous form.

How can we make this more succinct and maintain readability? What if we used an extension method for the null check, instead of comparing the object to null?

if (source.IsNotNull())
{
    return source.Select(s => base.Convert(s));
}

return null;

Ok, I like this. There is meaning to the if-statement evaluation. If it's not null then we enter the if-statement.

Here is the code for the IsNotNull() extension method.

public static bool IsNotNull(this object val)
{
    return val != null;
}

This is good, but I am bothered by the if-statement. I wonder if we can get rid of the if-statement altogether. Maybe with a little of C#'s functional magic we can eliminate the if-block.

return source.IsNotNull(() => source.Select(s => base.Convert(s)));

Ahh, that's better.

When the object is not null, it executes the passed-in lambda expression. If you aren't familiar with lambda expressions, this might have you scratching your head. The code is below; take a look at it, and hopefully it will clear things up.

public static T IsNotNull<T>(this object val, Func<T> result) where T : class
{
    if (val != null)
    {
        return result();
    }

    return null;
}

It's a constant challenge keeping code readable. Most of us write code like this all day without a thought to making it better. We've been walking on glass for so long that we don't feel the pain. Each little nugget helps.

On a side note, the next version of C# will have null-conditional operators, making the extension method I created irrelevant. Here is an example.

return source?.Select(s => base.Convert(s));

Index Fragmentation in SQL Azure, Who Knew!

I’ve been on my project for over a year, and it has grown significantly during that time, both as an application and in data. It’s been nonstop new features; I’ve rarely gone back and refactored code. Last week I noticed some of the data-heavy pages were loading slowly. In the worst case, one view could take up to 30 seconds to load. 10 times over my maximum load time...

Call me naive, but I didn’t consider index fragmentation in SQL Azure. It’s the cloud! It’s supposed to be immune to on-premises issues… Apparently index fragmentation is also an issue in the cloud.

I found a couple of queries on an MSDN blog that identify the fragmented indexes and then rebuild them.

After running the first query to show index fragmentation, I found some indexes with over 50 percent fragmentation. According to the article, anything over 10% needs attention.

First Query: Display Index Fragmentation

--Get the fragmentation percentage

SELECT
 DB_NAME() AS DBName
,OBJECT_NAME(ps.object_id) AS TableName
,i.name AS IndexName
,ips.index_type_desc
,ips.avg_fragmentation_in_percent
FROM sys.dm_db_partition_stats ps
INNER JOIN sys.indexes i
ON ps.object_id = i.object_id
AND ps.index_id = i.index_id
CROSS APPLY sys.dm_db_index_physical_stats(DB_ID(), ps.object_id, ps.index_id, null, 'LIMITED') ips
ORDER BY ps.object_id, ps.index_id

Second Query: Rebuild the Indexes

--Rebuild the indexes
DECLARE @TableName varchar(255)

DECLARE TableCursor CURSOR FOR
(
 SELECT '[' + IST.TABLE_SCHEMA + '].[' + IST.TABLE_NAME + ']' AS [TableName]
 FROM INFORMATION_SCHEMA.TABLES IST
 WHERE IST.TABLE_TYPE = 'BASE TABLE'
 )

 OPEN TableCursor
 FETCH NEXT FROM TableCursor INTO @TableName
WHILE @@FETCH_STATUS = 0

 BEGIN
 PRINT('Rebuilding Indexes on ' + @TableName)
Begin Try
 EXEC('ALTER INDEX ALL ON ' + @TableName + ' REBUILD with (ONLINE=ON)')
End Try
Begin Catch
 PRINT('Cannot do rebuild with Online=On option, taking table ' + @TableName + ' down for doing rebuild')
 EXEC('ALTER INDEX ALL ON ' + @TableName + ' REBUILD')
 End Catch
FETCH NEXT FROM TableCursor INTO @TableName
END

CLOSE TableCursor
DEALLOCATE TableCursor

Source


Moving from Wordpress to Hexo

I love Wordpress - it just works. Its community is huge and it’s drop-dead simple to get running.

I started blogging in 2002 when the blogging landscape was barren. Blogging platforms were few and far between. Heck “blogging” wasn’t even a term.

My first blogging engine was b2, the precursor to Wordpress. In 2003, Wordpress forked b2 and started on the journey to the Wordpress we all now love. At the time I felt conflicted. Why create a second blogging platform? Why not lend support to b2? Wasn’t b2 good enough? Ultimately it was a good decision. Not too long after the fork, development on b2 stalled.

Wordpress has enjoyed a huge amount of popularity. It’s, by far, the most popular CMS (content management system).

So, it’s with sadness that, after writing over 500 posts on b2 and Wordpress, I am moving away from Wordpress. I simply don’t need its functionality and versatility. I am moving to Hexo, a node-based blog/site generator.

Assets and posts are stored on the file system. The posts are written in Markdown. Hexo takes the Markdown and generates HTML pages, linking the pages as it moves through the content. Depending on which theme you choose and how you customize it, you can generate just about anything.

I hope you enjoy the change. The site is much faster. The comments are now powered by Disqus. These changes will allow me to deliver a better and a faster experience for you.


A General Ledger: A Simple C# Implementation

If you don’t have a basic understanding of general ledgers and double-entry bookkeeping, read my post explaining the basics of these concepts.

Over the years I've worked on systems with financial transactions. To have integrity with financial transactions, a general ledger is a must. Without one, you can’t account for revenue and accounts payable. Believe you me, when your client wants detailed reports on their cash flow, you’d better be able to generate them. Not to mention any legal issues you might encounter.

Early in my career, I had a discussion with a C-level executive in which I explained the importance of a general ledger. I was getting pushback because implementing it pushed out the timeline a bit. Eventually we won out and implemented the ledger, and thankfully so: just as we predicted, the requests for reports started rolling in.

A basic schema for a general ledger.

CREATE TABLE [Accounting].[GeneralLedger] (
    [Id]             INT             IDENTITY (1, 1) NOT NULL,
    [Account_Id]     INT             NOT NULL,
    [Debit]          DECIMAL (19, 4) NULL,
    [Credit]         DECIMAL (19, 4) NULL,
    [Transaction_Id] INT             NOT NULL,
    [EntryDateTime]  DATETIME        NOT NULL,
);

The C# class.

public class GeneralLedger
{
    public int Id { get; set; }

    public Account Account { get; set; }

    public decimal Debit { get; set; }

    public decimal Credit { get; set; }

    public Transaction Transaction { get; set; }

    public DateTime EntryDateTime { get; set; }
}

In my system I track all the transactions in and out of the system. For example, if a customer pays an invoice, I track the total payment in the general ledger. The credit account is called “Revenue” and the debit account is my company’s. Remember, for each financial transaction two records are entered into the general ledger: a credit and a debit.
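As a rough sketch of what recording a $1000 invoice payment could look like with the class above (the revenueAccount, companyAccount, and transaction objects are assumed to exist elsewhere in the system):

// One financial transaction produces two ledger rows: a credit and a debit.
var entryTime = DateTime.UtcNow;

var credit = new GeneralLedger
{
    Account = revenueAccount,   // the external "Revenue" account
    Credit = 1000m,
    Transaction = transaction,
    EntryDateTime = entryTime
};

var debit = new GeneralLedger
{
    Account = companyAccount,   // my company's internal account
    Debit = 1000m,
    Transaction = transaction,
    EntryDateTime = entryTime
};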

In my system I wanted higher fidelity, so I added Transaction to the ledger. The transaction tracks the details of the entry. Only the transaction total is recorded in the general ledger; the transaction details (taxes, per-item costs, etc.) tell the story of how we arrived at the total.

Let’s look at some data. Find an account with some credits and debits. Sum all the debit rows and sum all the credit rows, then subtract the debits from the credits. If the number is positive, the account finished in the black (has a profit); if it’s negative, the account finished in the red (has a loss).
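In LINQ, that calculation might look something like this (a sketch; LedgerMath is a hypothetical helper, and entries is assumed to hold one account’s ledger rows):

using System.Collections.Generic;
using System.Linq;

public static class LedgerMath
{
    // Sum the credit and debit columns for a single account's entries.
    // Positive means the account is in the black; negative means the red.
    public static decimal Net(IEnumerable<GeneralLedger> entries)
    {
        decimal credits = entries.Sum(e => e.Credit);
        decimal debits = entries.Sum(e => e.Debit);
        return credits - debits;
    }
}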

Your CEO wants to know how much money a client spent with your company. No problem. Again, just sum the debits and credits for the client’s account and subtract them from each other.

I hope this has helped you understand the power of the ledger and why it’s important when dealing with financial transactions.


A General Ledger: Understanding the Ledger

(Image: cropped ledger)

What is a general ledger and why is it important? To find out, read on!

What is a general ledger? A general ledger is a log of all the transactions relating to assets, liabilities, owners’ equity, revenue and expenses. It’s how a company can tell if it’s profitable or taking a loss. In the US, this is the most common way to track the financials.

To understand how a general ledger works, you must understand double-entry bookkeeping. So, what is double-entry bookkeeping? I’m glad you asked. Imagine you have a company and your first customer paid you $1000. To record this, you add the transaction to the general ledger. Two entries are made: a debit, increasing the value of the assets in your cash account, and a credit to the revenue account (the money given to you by your customer’s payment).

Think of the cash account as an internal account, meaning an account where you track both the debits (increases in value) and the credits (decreases in value). The revenue account is an external account, meaning you only track the credit entries. External accounts don’t impact your business; they merely tell you where the money is coming from and where it’s going.

Here is a visual of our first customer’s payment.

(Image: a simple ledger entry)

If the sum of the debit column and the sum of the credit column don’t equal each other, then there is an error in the general ledger. When both sides equal each other the books are said to be balanced. You want balanced books.

Let’s look at a slightly more complex example.

You receive two bills: water and electric, both for $50. You pay them using part of the cash in your cash account. The current balance is $1000. What entries are needed? Take your time. I’ll wait.

(Image: a slightly harder example)

Four entries are added to the general ledger: two credit entries for cash and one entry each for the water and electric accounts. Notice the cash entries are credits.

For a bonus, how would we calculate the remaining balance of the cash account? Take your time. Again, I’ll wait for you.

To get the remaining balance we need to identify each cash entry.

(Image: the cash account entries)

To get the balance of the cash account, we do the same thing we did to balance the books, but this time we only look at the cash account. We take the sum of the debit column for the cash account and the sum of the credit column for the cash account and subtract one from the other. The remaining value is the balance of the cash account.

(Image: the cash account balance)
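Spelled out with the numbers from our example:

Cash debits:  $1000 (the customer payment)
Cash credits: $50 (water) + $50 (electric) = $100
Cash balance: $1000 - $100 = $900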

And that, folks, is the basics of a general ledger and double-entry bookkeeping. I hope you see the importance of this approach, as it gives you the ability to quickly see if there are errors in your books, and it gives you high fidelity in tracking payments and revenue.

This is just the tip of the iceberg in accounting. If you’d like to dive deeper into accounting, have a look at the accounting equation: Assets = Liabilities + Owner’s Equity.

Hopefully this post has given you a basic understanding of what a general ledger is and how double-entry bookkeeping works. In the next post I’ll go into how to implement a general ledger in C#.
