3 Reasons Why Code Reviews are Important

A great code review will challenge your assumptions and give you constructive feedback. For me, code reviews are an essential part of growing as a software engineer.

Writing code is an intimate process. Software engineers spend years learning the craft, and when something critical is said of our creation it’s hard not to take it personally. I find myself, at times, getting defensive when I hear criticism. I know the reviewer means well, but this isn’t always comforting. If it weren’t for honest feedback from some exceptional software engineers, I wouldn’t be half the software engineer I am today.

Benefits of Code Reviews

1. Finding Bugs

Sometimes simply reading the code is enough to spot an error. Sometimes it’s the other developer who spots it. Regardless, walking through the code exposes potential issues.

I think of my mistakes as the grindstone to my sword. To quote Michael Jordan:

I’ve missed more than 9000 shots in my career. I’ve lost almost 300 games. 26 times, I’ve been trusted to take the game winning shot and missed. I’ve failed over and over and over again in my life. And that is why I succeed.

2. Knowledge Transfer

Sharing your work with others is humbling. In many ways, you are your code. I know I feel vulnerable when I share mine.

This is a great opportunity to learn from and to teach other engineers. In sharing your code you are taking the reviewers on a journey, a journey into the code and, in some ways, into you. A lot can be learned about you by how you write code.

At the end of the code review, the reviewers should have a good understanding of how the code works and the rationale behind it, and they will have learned a little bit about you.

3. Improving the Health of the Code

As I mentioned, the more times you read the code, the better the code becomes. And the more reviewers, the better the chance one of them will suggest an improvement. Some might think skill level matters; it doesn’t. Less experienced software engineers don’t have the deep technical knowledge of experienced software engineers, but they also don’t have to wade through all that mental technical baggage to see opportunities for improvement.

Code reviews give us the opportunity to evaluate our code. There will always be something to change to make it just a little bit better.

Coding, in this way, is much like writing. For a good piece to come into focus the code must rest and be re-read. The more times you repeat this process the better the code will become.

In Closing

Some companies don’t officially do code reviews; that’s OK. Seek out other engineers. Most software engineers will be happy to take 10 to 15 minutes to look over your code.

Creating Reports Using Encrypted Data

I wrote “Implementing Transparent Encryption with NHibernate Listeners” last year. If you haven’t read it, I recommend you do; even if nHibernate is not your cup of tea, you are guaranteed to learn something.

Michael Johnson from Sharpened Developer commented on the post asking “How do I report on encrypted data?”

Most reports are aggregations, and encrypted data is typically personally identifiable information (PII), which in many cases is not included in reports. When we do need encrypted data in a report, we have options.

Reports within the Application

Generating reports within the application is the simplest way to access the data. Most likely the encryption and decryption mechanisms are already in place. There are a number of third-party libraries (Telerik, DevExpress, etc.) available for building reports. Building reports within the application works well on small sets of data, but as the data grows this approach simply doesn’t scale.

Using a Reporting Database

The long-term solution is to move the data into a reporting database. Once the data is in its own database, an enterprise reporting solution can be put in place.

This leads me to the more interesting aspect: keeping the reporting data current.

Firstly, an external-facing application should not have direct access to the decrypted reporting database. If the external application is compromised, it’s a good bet an attacker will soon have access to the decrypted reporting database as well.

Ideally decrypted data is only available on the internal network.

With all the hacking (Sony) lately, I wonder if it’s a good idea to have any sensitive data decrypted at rest… but I digress. That’s a discussion for another time.

There are two types of reporting data: real time and everything else.

Real-time reporting requires the data to be nearly in sync with production data. The only way to achieve this is to capture changes as they happen. This is fraught with issues, but a cleverly crafted asynchronous process using a message queue is a good solution. A service watching the message queue can process messages in near real time. The service is decoupled, providing a layer of security, and it’s asynchronous, minimizing the performance impact.
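As a rough sketch of that near-real-time pipeline (the message shape, in-memory queue, and "applied changes" list below are hypothetical stand-ins; a real system would use a broker such as RabbitMQ or MSMQ and write to an actual reporting database):

```csharp
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;

// Hypothetical change notification published by the production application.
public class EntityChanged
{
    public string Entity { get; set; }
    public int Id { get; set; }
}

// Stand-in for a message broker: the production side enqueues changes,
// and a watcher service on the internal network drains the queue.
public class ReportingSync
{
    private readonly ConcurrentQueue<EntityChanged> _queue = new ConcurrentQueue<EntityChanged>();

    // Stand-in for the reporting database: records which changes were applied.
    public List<string> Applied { get; } = new List<string>();

    public void Publish(EntityChanged message) => _queue.Enqueue(message);

    // The watcher processes messages asynchronously from production,
    // so production performance is unaffected.
    public void Drain()
    {
        while (_queue.TryDequeue(out var message))
            Applied.Add($"{message.Entity}:{message.Id}");
    }
}
```

Because the watcher is the only component touching the reporting store, the externally facing application never needs direct access to decrypted reporting data.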

When data freshness isn’t a concern, a daily or even weekly job is a good solution. It might be as simple as restoring a production database backup on the reporting server.

Lastly, avoid syncing directly from production databases; the last thing you want is to hinder production’s performance by inundating it with database requests.

In Closing

Reporting is rarely thought of at the onset of an application. As applications mature, stakeholders want to mine data for insights. This inevitably turns into a growing collection of reporting requests that seem to never end. A solid reporting solution will help abate these requests. In the best scenario the stakeholders can create their own reports.

* image reference (http://www.ajboggs.com/consulting/software-development/reporting-solutions/)

Algorithms: Binary Search

You are presented with a set of 1,000 numbers and tasked with finding the position of 73. The most obvious approach is to start with the first number and evaluate every number until 73 is found. This is called a linear search algorithm, or sequential search algorithm. It works for a set of 1,000 numbers, but consider what happens if the set grows to 10 million numbers. A linear search cannot scale and is simply not suited for that many numbers, but a binary search algorithm can.

A binary search algorithm requires the data to be sorted. Once it is, we find the middle value and compare it to the search value. If the search value is lower than the middle value, we take the first half of the numbers, find its middle value, and compare again; if the search value is higher, we take the second half instead. This process repeats until we find the search value or run out of values.

Comparing the two algorithms for performance: a linear search of 10 million numbers, assuming 1 second per comparison, will consume roughly 116 days. A binary search of the same 10 million numbers, under the same assumption, will consume only about 23 seconds, because each comparison halves the remaining range. When searching for numbers, binary search wins hands down.
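The back-of-envelope math can be checked in a few lines (the one-second-per-comparison figure is the article’s simplifying assumption, not a real benchmark):

```csharp
using System;

public static class SearchCost
{
    public static void Main()
    {
        const int n = 10_000_000;

        // Worst-case comparisons for each algorithm.
        double linear = n;                            // examine every element
        double binary = Math.Ceiling(Math.Log(n, 2)); // halve the range each step

        Console.WriteLine(linear / 86_400); // seconds -> days: roughly 116 days
        Console.WriteLine(binary);          // about two dozen comparisons
    }
}
```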

Binary Search implemented in C#:

        public int BinarySearch(int number, int[] collection)
        {
            int low = 0;
            int high = collection.Length - 1; // assumes the collection is sorted

            while (low <= high)
            {
                int mid = (low + high) / 2;

                if (collection[mid] < number)
                    low = mid + 1;
                else if (collection[mid] > number)
                    high = mid - 1;
                else
                    return mid; // found: return the position
            }

            return -1; // not found
        }
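Here is a self-contained sketch of the method in use (the `Searcher` class name and the sample data are mine, not from the original):

```csharp
using System;

public class Searcher
{
    public int BinarySearch(int number, int[] collection)
    {
        int low = 0;
        int high = collection.Length - 1; // assumes the collection is sorted

        while (low <= high)
        {
            int mid = (low + high) / 2;

            if (collection[mid] < number)
                low = mid + 1;
            else if (collection[mid] > number)
                high = mid - 1;
            else
                return mid; // the position of the search value
        }

        return -1; // not found
    }
}

public static class Program
{
    public static void Main()
    {
        var sorted = new[] { 2, 5, 8, 12, 16, 23, 38, 56, 72, 91 };
        var searcher = new Searcher();

        Console.WriteLine(searcher.BinarySearch(23, sorted)); // prints 5
        Console.WriteLine(searcher.BinarySearch(7, sorted));  // prints -1
    }
}
```

One caveat worth knowing: for very large arrays, `(low + high) / 2` can overflow; `low + (high - low) / 2` avoids that.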

5 Steps for Coding for the Next Developer

Most of us probably don’t think about the developer who will maintain our code. Until recently, I didn’t consider them either. I never intentionally wrote obtuse code, but I also never left any breadcrumbs.

Kent Beck on good programmers:

Any fool can write code that a computer can understand. Good programmers write code that humans can understand.

Douglas Crockford on good computer programs:

It all comes down to communication and the structures that you use in order to facilitate that communication. Human language and computer languages work very differently in many ways, but ultimately I judge a good computer program by its ability to communicate with a human who reads that program. So at that level, they’re not that different.

Discovering purpose and intent is difficult in even the most well-written code. Any breadcrumbs left by the author (comments, verbose naming, consistency) are immensely helpful to the next developer.

I start by looking for patterns. Patterns can be found in many places, including variable names, class layout and project structure. Once identified, patterns are insights into the previous developer’s intent and help in comprehending the code.

What is a pattern? A pattern is a repeatable solution to a recurring problem. Consider a door. When a space must allow people to enter and to leave and yet maintain isolation, the door pattern is implemented. Now this seems obvious, but at one point it wasn’t. Someone created the door pattern, which included the door handle, the hinges and the placement of these components. Walk into any home and you can identify any door and its components. The styles and colors might be different, but the components are the same. Software is the same.

There are known software patterns for common software problems. In 1995, Design Patterns: Elements of Reusable Object-Oriented Software was published describing common software patterns. This book describes common problems encountered in most software applications and offers elegant ways to solve them. Developers also create their own patterns while solving problems they routinely encounter. While they don’t publish a book, if you look closely enough you can identify them.

Sometimes it’s difficult to identify the patterns, which makes grokking the code difficult. When you find yourself in this situation, inspect the code and see how it is used. Sketch out a re-write: ask yourself how you would accomplish the same outcome. Often, as you travel the thought process of the algorithm, you gain insight into the other developer’s implementation. Many of us have the inclination to re-write what we don’t understand. Resist this urge! The existing implementation is battle-tested and yours is not.

Some code is just vexing; reach out to a peer, because a second set of eyes always helps. Walk the code together. You’ll be surprised what the two of you will find.

Here are 5 tips for leaving breadcrumbs for the next developer:

1. Patterns
Use known patterns and create your own. Stick with a consistent paradigm throughout the code. For example, don’t use 3 different approaches to data access.

2. Consistency
This is by far the most important aspect of coding. Nothing is more frustrating than inconsistent code. Consistency allows for assumptions: each time a specific software pattern is encountered, it can be assumed to behave like the other instances of that pattern.

Inconsistent code is a nightmare. Imagine reading a book where the same word means something different each time it appears: you’d have to look up each word and expend large amounts of mental energy discovering the intent. It’s frustrating, tedious and painful. You’ll go crazy! Don’t do this to the next developer.

3. Verbose Naming
This is your language. These are the words to your story. Weave them well.

This includes class names, method names, variable names, project names and property names.


if(monkey.HoursSinceLastMeal > 3)


int feedInterval = 3;
if(monkey.HoursSinceLastMeal > feedInterval)

The first example has 3 hard-coded in the if statement. This code is syntactically correct, but the number 3 tells you nothing about its intent. From the property it’s evaluated against, you can surmise that it means 3 hours, but in reality we don’t know; we are making an assumption.

In the second example, we assign 3 to a variable named feedInterval. The intent is clearly stated in the name: if it’s been 3 hours since the last meal, it’s time to feed the monkey. A side effect of introducing the variable is that we can now change the feed interval without changing the logic.

This is a contrived example, but in a large piece of software this type of code is self-documenting and helps the next developer understand the code.

4. Comments
Comments are a double-edged sword. Too much commenting increases maintenance costs; not enough leaves developers unsure of how the code works. A general rule of thumb is to comment when the average developer will not understand the code, which happens when the assumptions are not obvious or the code is out of the ordinary.

5. Code Simple
In my professional opinion, writing complex code is the biggest folly among developers.

Steve Jobs on simplicity:

Simple can be harder than complex: You have to work hard to get your thinking clean to make it simple. But it’s worth it in the end because once you get there, you can move mountains.

Complexity comes in many forms, some of which include: future proofing, overly complex implementations, too much abstraction, large classes and large methods.

For more on writing clean, simple code, see Uncle Bob’s book Clean Code and Max Kanat-Alexander’s Code Simplicity.


Reading code is hard. With a few simple steps you can ensure the next developer will grok your code.

HTML Extended

There is an emerging trend to extend HTML. To the untrained eye the changes are not obvious.

For example, AngularJS extends the HTML with directives:

<my-directive my-data="user"></my-directive>

To the browser this snippet of HTML is meaningless. Fortunately AngularJS has an internal compiler that converts it to meaningful HTML.

EmberJs is following suit with their upcoming 1.11 release. Web components will now appear as inline HTML. Like AngularJS, EmberJs will convert it to meaningful HTML.

After the EmberJs 1.11 release, the markup looks like this:

<my-video src={{movie.url}}></my-video>

Starting with ASP.NET 5, ASP.NET will have “TagHelpers.” In the past to render HTML with Razor you’d do something like this:

<li>@Html.ActionLink("Default", "Index", "Home")</li>

Now with TagHelpers:

<li><a controller="Home" action="Index">Default</a></li>

This is an interesting trend. These technologies are extending HTML. At the end of the day it all must be rendered into meaningful HTML for the browser. I wonder if this is the end of custom view engines, such as Haml, Razor, Jade and EJS. It’ll be interesting to see how this plays out.

Iterate Small

In the 1980s, manufacturing in the United States was in decline. After World War II the United States was the undisputed leader in manufacturing. During the 1960s this changed: Japanese companies made the same products as the United States, but their products were of higher quality and cheaper. How did they do it? Many factors played a role, but one factor was batch size. By lowering batch sizes, Japanese companies increased quality and were able to deliver more product.

Lowering batch sizes allowed for a more nimble process. Instead of having 20 tons of raw materials in process, they only needed 2 tons. When a batch was defective (it had bugs), only 2 tons of raw material were lost instead of 20. The entire process was more efficient.

Applying this to software engineering, we want to develop small and deploy small.

Develop Small
Manufacturing is not an exact parallel to software development, but many of the principles are applicable. For instance, the more code changed, the more opportunity for bugs to manifest. Minimizing the number of changes lessens the likelihood of bugs.

Break tasks into small chunks. Even large features can be split into small tasks. It’s fine to ship benign code.

Deploy Small
Deploying small requires a build and deployment process that can be confidently run multiple times a day.

Continual deployment allows for an evolving product: small changes deployed in small increments, with less impact on the user and less opportunity for something to go wrong. Nothing like the jolt of upgrading from Windows 7 to Windows 8 (for those who did not experience it, that was a drastic change).

Case Sensitivity with Windows and Git

When working on Windows, it’s easy to forget that Git is case-sensitive. Normally it’s not an issue; however, sometimes it can bite you.

The scenario goes like this: there are two Git repositories, one local and one in the cloud (to the cloud!). The local Git repo is on Windows, a case-insensitive system. The other is in the cloud on Linux, a case-sensitive system. Can you feel the suspense building?

A new directory called “Password” is created, committed and pushed to the cloud Git repo. A short time later, it’s realized the directory name is wrong. It’s really supposed to be named “password”. This looks like a simple problem: just rename the directory to “password”.

This is where things go awry. Windows does not consider renaming a folder from “Password” to “password” a significant event; rightly so, since Windows is case-insensitive. But in the case-sensitive world it matters a whole heck of a lot. The Git shell for Windows does not register the rename as a change, so nothing is queued to be committed. This leaves the local and remote repositories out of sync. I can only surmise that Windows does not raise an event when a folder’s name changes only by case.

Re-syncing the names is a simple two-step process:

  1. git mv casesensitive Temp
  2. git mv Temp CaseSensitive

Implementing Transparent Encryption with NHibernate Listeners (Interceptors)

Have you ever had to encrypt data in the database? In this post, I’ll explore how to use nHibernate Listeners to encrypt and decrypt data coming from and going into your database. The cryptography will be transparent to your application.

Why would you want to do this? SQL Server has encryption baked into the product. That is true, but if you are moving to the cloud and want to use SQL Azure, you’ll need some sort of cryptography strategy: SQL Azure does not support database encryption.

What is an nHibernate Listener? I think of a Listener as a piece of code that I can inject into specific extensibility points in the nHibernate persistence and data hydration lifecycle.

As of this writing, the following extensibility points are available in nHibernate:

  • IAutoFlushEventListener
  • IDeleteEventListener
  • IDirtyCheckEventListener
  • IEvictEventListener
  • IFlushEntityEventListener
  • IFlushEventListener
  • IInitializeCollectionEventListener
  • ILoadEventListener
  • ILockEventListener
  • IMergeEventListener
  • IPersistEventListener
  • IPostCollectionRecreateEventListener
  • IPostCollectionRemoveEventListener
  • IPostCollectionUpdateEventListener
  • IPostDeleteEventListener
  • IPostInsertEventListener
  • IPostLoadEventListener
  • IPostUpdateEventListener
  • IPreCollectionRecreateEventListener
  • IPreCollectionRemoveEventListener
  • IPreCollectionUpdateEventListener
  • IPreDeleteEventListener
  • IPreInsertEventListener
  • IPreLoadEventListener
  • IPreUpdateEventListener
  • IRefreshEventListener
  • IReplicateEventListener
  • ISaveOrUpdateEventListener

The list is extensive.

To implement transparent cryptography, we need to find the right places to encrypt and decrypt the data. For encrypting, we’ll use IPreInsertEventListener and IPreUpdateEventListener; with these events we’ll catch new and updated data on its way into the database. For decrypting, we’ll use IPreLoadEventListener.

For this demonstration we’ll use a DatabaseCryptography class for encrypting and decrypting. The cryptography implementation itself is not important for this article.


public class PreLoadEventListener : IPreLoadEventListener
{
    readonly DatabaseCryptography _crypto = new DatabaseCryptography();

    /// <summary>
    /// Decrypts marked properties before the entity is hydrated.
    /// </summary>
    /// <param name="event">The event.</param>
    public void OnPreLoad(PreLoadEvent @event)
    {
        _crypto.DecryptProperties(@event.Entity, @event.Persister.PropertyNames, @event.State);
    }
}


public class PreInsertEventListener : IPreInsertEventListener
{
    readonly DatabaseCryptography _crypto = new DatabaseCryptography();

    /// <summary>
    /// Encrypts marked properties before the insert is written.
    /// </summary>
    /// <param name="event">The event.</param>
    /// <returns>true if the insert should be vetoed; otherwise false.</returns>
    public bool OnPreInsert(PreInsertEvent @event)
    {
        _crypto.EncryptProperties(@event.Entity, @event.State, @event.Persister.PropertyNames);

        return false;
    }
}


public class PreUpdateEventListener : IPreUpdateEventListener
{
    readonly DatabaseCryptography _crypto = new DatabaseCryptography();

    /// <summary>
    /// Encrypts marked properties before the update is written.
    /// </summary>
    /// <param name="event">The event.</param>
    /// <returns>true if the update should be vetoed; otherwise false.</returns>
    public bool OnPreUpdate(PreUpdateEvent @event)
    {
        _crypto.EncryptProperties(@event.Entity, @event.State, @event.Persister.PropertyNames);

        return false;
    }
}

It’s important to note that both OnPreInsert and OnPreUpdate must return false; otherwise the insert/update event will be vetoed.

Now that we have the listeners implemented, we need to register them with nHibernate. I am using FluentNHibernate, so this will look different if you are using raw nHibernate.


public class SessionFactory
{
    /// <summary>
    /// Creates the session factory.
    /// </summary>
    /// <returns>ISessionFactory.</returns>
    public static ISessionFactory CreateSessionFactory()
    {
        return Fluently.Configure()
            // The database/connection details were elided in the original post;
            // MsSql2008 and the connection-string key are placeholders.
            .Database(MsSqlConfiguration.MsSql2008
                .ConnectionString(c => c.FromConnectionStringWithKey("Default")))
            .Mappings(m => m.FluentMappings.AddFromAssemblyOf<SessionFactory>())
            .ExposeConfiguration(s =>
            {
                s.SetListener(ListenerType.PreUpdate, new PreUpdateEventListener());
                s.SetListener(ListenerType.PreInsert, new PreInsertEventListener());
                s.SetListener(ListenerType.PreLoad, new PreLoadEventListener());
            })
            .BuildSessionFactory();
    }
}

Encrypting and decrypting data at the application level makes the data useless in the database; you’ll need to bring it back into the application to read the values of the encrypted fields. We want to limit the fields that are encrypted, and we only want to encrypt string values. Encrypting anything other than string values complicates things. There is nothing saying we can’t encrypt dates, but doing so requires the date field in the database to become a string (nvarchar or varchar) field to hold the encrypted data, and once we do that we lose the ability to operate on the date field from the database.

To identify which fields we want encrypted and decrypted I’ll use marker attributes.

Encrypt Attribute

public class EncryptAttribute : Attribute
{
}

Decrypt Attribute

public class DecryptAttribute : Attribute
{
}
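
To make the markers concrete, here is a hypothetical entity (not from the original post; the attribute definitions are repeated so the snippet compiles on its own) with one string property flagged for encryption:

```csharp
using System;

public class EncryptAttribute : Attribute { }
public class DecryptAttribute : Attribute { }

// A hypothetical mapped entity; only the string property carries the markers.
public class Patient
{
    public virtual int Id { get; set; }

    [Encrypt] // encrypted by the pre-insert/pre-update listeners
    [Decrypt] // decrypted by the pre-load listener
    public virtual string SocialSecurityNumber { get; set; }

    // Left unencrypted: encrypting a date would force the column to a string
    // type, and we'd lose the ability to query on it in the database.
    public virtual DateTime DateOfBirth { get; set; }
}
```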

To see the EncryptAttribute and the DecryptAttribute in action, we’ll take a peek into the DatabaseCryptography class.


public class DatabaseCryptography
{
    readonly Crypto _crypto = ObjectFactory.GetInstance<Crypto>();

    /// <summary>
    /// Encrypts properties marked with [Encrypt].
    /// </summary>
    /// <param name="entity">The entity.</param>
    /// <param name="state">The state.</param>
    /// <param name="propertyNames">The property names.</param>
    public void EncryptProperties(object entity, object[] state, string[] propertyNames)
    {
        Crypt<EncryptAttribute>(entity, propertyNames, s => _crypto.Encrypt(s), state);
    }

    /// <summary>
    /// Applies the crypt function to each property marked with the attribute T.
    /// </summary>
    /// <param name="entity">The entity.</param>
    /// <param name="propertyNames">The property names.</param>
    /// <param name="crypt">The crypt function.</param>
    /// <param name="state">The state.</param>
    private void Crypt<T>(object entity, string[] propertyNames, Func<string, string> crypt, object[] state) where T : Attribute
    {
        if (entity != null)
        {
            var properties = entity.GetType().GetProperties();

            foreach (var info in properties)
            {
                var attributes = info.GetCustomAttributes(typeof(T), true);

                if (attributes.Any())
                {
                    var name = info.Name;
                    var count = 0;

                    foreach (var s in propertyNames)
                    {
                        if (string.Equals(s, name, StringComparison.InvariantCultureIgnoreCase))
                        {
                            var val = Convert.ToString(state[count]);
                            if (!string.IsNullOrEmpty(val))
                            {
                                val = crypt(val);
                                state[count] = val;
                            }

                            break;
                        }

                        count++; // advance to the matching state slot
                    }
                }
            }
        }
    }

    /// <summary>
    /// Decrypts properties marked with [Decrypt].
    /// </summary>
    /// <param name="entity">The entity.</param>
    /// <param name="propertyNames">The property names.</param>
    /// <param name="state">The state.</param>
    public void DecryptProperties(object entity, string[] propertyNames, object[] state)
    {
        Crypt<DecryptAttribute>(entity, propertyNames, s => _crypto.Decrypt(s), state);
    }
}

That’s it. Now the encryption and decryption of data will be transparent to the application and you can go on your merry way building the next Facebook.

Reaction to AngularJs 2.0’s Announcement

The AngularJS team announced the successor to AngularJS 1.x: AngularJS 2.0. It’s all new and shiny. The anticipated release date is late 2015.

So, what has changed? Well, everything. $scope is gone, controllers are gone, angular.module is gone, directives are gone and jqLite is gone.

Wow, thems are some big changes! How will this impact my existing AngularJS 1.x application? Don’t worry, they have a migration path. It’s called a rewrite: from AngularJS 1.x to AngularJS 2.0. Although we are holding out hope that this will change.

According to ng-learn.org, AngularJS 2.0 is a complete rewrite. Say what? They are rewriting AngularJS? Someone must have spiked the Google Kool-Aid more than usual.

They do know that rewrites can fail, right? Just ask Netscape (Netscape 6.0), Borland (Quattro Pro), Microsoft (the failed rewrite of MS Word) and Winamp (anyone remember Winamp 3?).

You might wonder what is happening to the truckloads of community goodwill they built with AngularJS 1.x. It appears they’ve decided to walk away from it.

Kevin Dente’s pointed tweet:

Did Angular just commit suicide?

Time will tell whether they can rebuild the goodwill this rewrite has cost them.

I admit, AngularJS 2.0’s features are enticing. They are scrapping the internal module system in favor of ECMAScript 6’s built-in module system, and dependency injection has been revamped. This is nice, but they will still lack the thriving ecosystem they enjoyed with AngularJS 1.x. It’s being called AngularJS 2.0, but it will effectively be a 1.0 product. I can only wonder which brave developers will venture into the darkness with AngularJS 2.0.

It can be surmised that the Angular Team thinks this is a good idea… I hope they know what they are doing.