Iterate Small

In the 1980s, manufacturing in the United States was in decline. After World War II the United States was the undisputed leader in manufacturing. During the 1960s this changed: Japanese companies made the same products as the United States, but their products were of higher quality and cheaper. How did they do it? Many factors played a role, but one factor was batch size. By lowering batch sizes, Japanese companies increased quality and were able to deliver more product.

Lowering batch sizes allowed for a more nimble process. Instead of having 20 tons of raw materials in process, they only needed 2 tons. When a batch was defective (it had bugs), only 2 tons of raw material were lost instead of 20. The entire process was more efficient.

Applying this to software engineering, we want to develop small and deploy small.

Develop Small
Manufacturing is not an exact parallel to software development, but many of the principles are applicable. For instance, the more code changed, the more opportunity for bugs to manifest. Minimizing the number of changes lessens the likelihood of bugs.

Break tasks into small chunks. Even large features can be split into small tasks. It’s fine to ship benign code.

Deploy Small
Deploying small requires a build and deployment process that can be confidently run multiple times a day.

Continual deployment allows for an evolving product. Nothing like upgrading from Windows 7 to Windows 8 (for those who did not experience it, this was a drastic change). Small changes deployed in small increments mean less impact on the user and less opportunity for something to go wrong.

Case Sensitivity with Windows and Git

When working on Windows it’s easy to forget that Git is case-sensitive. Normally it’s not an issue; however, sometimes it can bite you.

The scenario goes like this: there are two Git repositories, one local and one in the cloud (to the cloud!). The local Git repo is on Windows, a case-insensitive system. The other is in the cloud on Linux, a case-sensitive system. Can you feel the suspense building?

A new directory called “Password” is created. It’s committed and pushed to the cloud Git repo. A short time later, it’s realized the directory name is wrong. It’s really supposed to be named “password”. This looks like a simple problem: simply rename the directory to “password”.

This is where things go awry. Windows does not consider renaming a folder from “Password” to “password” a significant event. Rightly so, since Windows is a case-insensitive system; but in the case-sensitive world it matters a whole heck of a lot. Git shell for Windows does not register the rename as a change, so nothing is queued to be committed. This leaves the local and remote repositories out of sync. I can only surmise that Windows does not trigger an event when a folder’s name changes only by case.

Re-syncing the names is a simple two-step process: rename through an intermediate name so Git sees both steps as real changes.

  1. git mv Password temp
  2. git mv temp password
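The workaround can be walked through end to end in a throwaway repository. A minimal sketch (the directory and file names are illustrative, not from the original scenario):

```shell
# Scratch repository reproducing the rename.
cd "$(mktemp -d)"
git init -q repo && cd repo
git config user.email "you@example.com"
git config user.name "You"

mkdir Password
echo "secret" > Password/secrets.txt
git add . && git commit -q -m "Add Password directory"

# Rename through an intermediate name so Git records the case change:
git mv Password temp
git mv temp password
git commit -q -m "Rename Password to password"

git ls-files    # password/secrets.txt
```

On Windows these same two `git mv` commands are what force Git to register the rename that the case-insensitive filesystem hides.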

Implementing Transparent Encryption with NHibernate Listeners (Interceptors)

Have you ever had to encrypt data in the database? In this post, I’ll explore how to use NHibernate listeners to encrypt and decrypt data coming from and going into your database. The cryptography will be transparent to your application.

Why would you want to do this? SQL Server has encryption baked into the product. That is true, but if you are moving to the cloud and want to use SQL Azure you’ll need some sort of cryptography strategy; SQL Azure does not support database encryption.

What is an NHibernate listener? I think of a listener as a piece of code that I can inject into specific extensibility points in the NHibernate persistence and data-hydration lifecycle.

As of this writing, the following extensibility points are available in NHibernate:

  • IAutoFlushEventListener
  • IDeleteEventListener
  • IDirtyCheckEventListener
  • IEvictEventListener
  • IFlushEntityEventListener
  • IFlushEventListener
  • IInitializeCollectionEventListener
  • ILoadEventListener
  • ILockEventListener
  • IMergeEventListener
  • IPersistEventListener
  • IPostCollectionRecreateEventListener
  • IPostCollectionRemoveEventListener
  • IPostCollectionUpdateEventListener
  • IPostDeleteEventListener
  • IPostInsertEventListener
  • IPostLoadEventListener
  • IPostUpdateEventListener
  • IPreCollectionRecreateEventListener
  • IPreCollectionRemoveEventListener
  • IPreCollectionUpdateEventListener
  • IPreDeleteEventListener
  • IPreInsertEventListener
  • IPreLoadEventListener
  • IPreUpdateEventListener
  • IRefreshEventListener
  • IReplicateEventListener
  • ISaveOrUpdateEventListener

The list is extensive.

To implement transparent cryptography, we need to find the right places to encrypt and decrypt the data. For encrypting we’ll use IPreInsertEventListener and IPreUpdateEventListener; with these events we’ll catch the new data and the updated data going into the database. For decrypting, we’ll use IPreLoadEventListener.

For this demonstration we’ll use a DatabaseCryptography class for encrypting and decrypting. The cryptography implementation is not important for this article.


public class PreLoadEventListener : IPreLoadEventListener
{
    readonly DatabaseCryptography _crypto = new DatabaseCryptography();

    /// <summary>Called on pre-load.</summary>
    /// <param name="event">The event.</param>
    public void OnPreLoad(PreLoadEvent @event)
    {
        _crypto.DecryptProperties(@event.Entity, @event.Persister.PropertyNames, @event.State);
    }
}

public class PreInsertEventListener : IPreInsertEventListener
{
    readonly DatabaseCryptography _crypto = new DatabaseCryptography();

    /// <summary>Called on pre-insert.</summary>
    /// <param name="event">The event.</param>
    /// <returns>true if the operation should be vetoed; otherwise false.</returns>
    public bool OnPreInsert(PreInsertEvent @event)
    {
        _crypto.EncryptProperties(@event.Entity, @event.State, @event.Persister.PropertyNames);

        return false;
    }
}

public class PreUpdateEventListener : IPreUpdateEventListener
{
    readonly DatabaseCryptography _crypto = new DatabaseCryptography();

    /// <summary>Called on pre-update.</summary>
    /// <param name="event">The event.</param>
    /// <returns>true if the operation should be vetoed; otherwise false.</returns>
    public bool OnPreUpdate(PreUpdateEvent @event)
    {
        _crypto.EncryptProperties(@event.Entity, @event.State, @event.Persister.PropertyNames);

        return false;
    }
}
It’s important to note that both IPreUpdateEventListener and IPreInsertEventListener must return false; otherwise the insert/update event will be vetoed and never reach the database.

Now that we have the listeners implemented, we need to register them with NHibernate. I am using FluentNHibernate, so this will be different if you are using raw NHibernate. (The database configuration shown is representative; adjust the connection string to your environment.)


public class SessionFactory
{
    /// <summary>Creates the session factory.</summary>
    /// <returns>ISessionFactory.</returns>
    public static ISessionFactory CreateSessionFactory()
    {
        return Fluently.Configure()
            .Database(MsSqlConfiguration.MsSql2008
                .ConnectionString(c => c.FromConnectionStringWithKey("Default")))
            .Mappings(m => m.FluentMappings.AddFromAssemblyOf<SessionFactory>())
            .ExposeConfiguration(s =>
            {
                s.SetListener(ListenerType.PreUpdate, new PreUpdateEventListener());
                s.SetListener(ListenerType.PreInsert, new PreInsertEventListener());
                s.SetListener(ListenerType.PreLoad, new PreLoadEventListener());
            })
            .BuildSessionFactory();
    }
}
Encrypting and decrypting data at the application level makes the data useless in the database; you’ll need to bring the data back into the application to read the values of the encrypted fields. We want to limit the fields that are encrypted, and we only want to encrypt string values. Encrypting anything other than string values complicates things. There is nothing saying we can’t encrypt dates, but doing so requires the date field in the database to become a string (nvarchar or varchar) field to hold the encrypted data, and once we do this we lose the ability to operate on the date field in the database.

To identify which fields we want encrypted and decrypted I’ll use marker attributes.

Encrypt Attribute

public class EncryptAttribute : Attribute
{
}

Decrypt Attribute

public class DecryptAttribute : Attribute
{
}
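With the marker attributes defined, opting a field into transparent cryptography is just a matter of decorating it. A hypothetical entity might look like this (the class and property names are my own illustration, not from the original):

```csharp
public class User
{
    public virtual int Id { get; set; }

    // Encrypted by the pre-insert/pre-update listeners and decrypted by the
    // pre-load listener. Both attributes are applied because encryption and
    // decryption each scan for their own marker.
    [Encrypt]
    [Decrypt]
    public virtual string SocialSecurityNumber { get; set; }

    // No attributes, so the listeners leave this property alone.
    public virtual string UserName { get; set; }
}
```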

To see the EncryptAttribute and the DecryptAttribute in action, we’ll take a peek into the DatabaseCryptography class.


public class DatabaseCryptography
{
    readonly Crypto _crypto = ObjectFactory.GetInstance<Crypto>();

    /// <summary>Encrypts the properties.</summary>
    /// <param name="entity">The entity.</param>
    /// <param name="state">The state.</param>
    /// <param name="propertyNames">The property names.</param>
    public void EncryptProperties(object entity, object[] state, string[] propertyNames)
    {
        Crypt<EncryptAttribute>(entity, propertyNames, s => _crypto.Encrypt(s), state);
    }

    /// <summary>Crypts the properties of the specified entity marked with T.</summary>
    private void Crypt<T>(object entity, string[] propertyNames, Func<string, string> crypt, object[] state) where T : Attribute
    {
        if (entity != null)
        {
            var properties = entity.GetType().GetProperties();

            foreach (var info in properties)
            {
                var attributes = info.GetCustomAttributes(typeof(T), true);

                if (attributes.Any())
                {
                    var name = info.Name;
                    var count = 0;

                    foreach (var s in propertyNames)
                    {
                        if (string.Equals(s, name, StringComparison.InvariantCultureIgnoreCase))
                        {
                            var val = Convert.ToString(state[count]);
                            if (!string.IsNullOrEmpty(val))
                            {
                                val = crypt(val);
                                state[count] = val;
                            }

                            break;
                        }

                        count++;
                    }
                }
            }
        }
    }

    /// <summary>Decrypts the properties.</summary>
    /// <param name="entity">The entity.</param>
    /// <param name="propertyNames">The property names.</param>
    /// <param name="state">The state.</param>
    public void DecryptProperties(object entity, string[] propertyNames, object[] state)
    {
        Crypt<DecryptAttribute>(entity, propertyNames, s => _crypto.Decrypt(s), state);
    }
}


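With the listeners registered, the persistence code itself doesn’t change at all; the cryptography happens inside NHibernate. A rough sketch of the effect (the session setup and the User entity here are assumptions for illustration):

```csharp
using (var session = SessionFactory.CreateSessionFactory().OpenSession())
using (var transaction = session.BeginTransaction())
{
    // The pre-insert listener encrypts string properties marked [Encrypt],
    // so the database only ever sees ciphertext.
    session.Save(new User { SocialSecurityNumber = "123-45-6789" });
    transaction.Commit();
}

// Loading the entity later runs the pre-load listener, which decrypts the
// marked properties before the application sees them.
```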
That’s it. Now the encryption and decryption of data will be transparent to the application and you can go on your merry way building the next Facebook.

Reaction to AngularJs 2.0’s Announcement

The AngularJS team announced the successor to AngularJS 1.x: AngularJS 2.0. It’s all new and shiny. The anticipated release date is late 2015.

So, what has changed? Well, everything. $scope is gone, controllers are gone, angular.module is gone, directives are gone and jqLite is gone.

Wow, thems are some big changes! How will this impact my existing AngularJS 1.x application? Don’t worry, they have a migration path. It’s called a rewrite, from AngularJS 1.x to AngularJS 2.0. Although, we are holding out hope of this changing.

According to the announcement, AngularJS 2.0 is a complete rewrite. Say what? They are rewriting AngularJS? Someone must have spiked the Google Kool-Aid more than usual.

They do know that rewrites can fail, right? Just ask Netscape (Netscape 6.0), Borland (Quattro Pro), Microsoft (the failed rewrite of MS Word) and Winamp (anyone remember Winamp 3?).

You might wonder what is happening to the truck loads of community goodwill they have built with AngularJS 1.x. It appears they’ve decided to walk away from it.

Kevin Dente’s pointed tweet:

Did Angular just commit suicide?

Time will tell whether they can rebuild the goodwill that’s been so damaged by this rewrite.

I admit, AngularJS 2.0’s features are enticing. They are scrapping the internal module system in favor of ECMAScript 6’s built-in module system, and dependency injection has been revamped. This is nice, but they will still lack the thriving ecosystem they enjoyed with AngularJS 1.x. It’s being called AngularJS 2.0, but it will effectively be a 1.0 product. I can only wonder which brave developers will venture into the darkness with AngularJS 2.0.

It can be surmised that the Angular Team thinks this is a good idea… I hope they know what they are doing.

The Allure of Rewriting an Application

Most software engineers have at some point in their careers advocated for a rewrite. It is a software engineer’s utopia. I admit, there was a time I was that software engineer. Thankfully those days are behind me.

Joel Spolsky is blunt about rewriting software, calling it:

…the single worst strategic mistake that any software company can make.

That seems pretty harsh. Is it really that bad to toss the old and write anew?

Joel Spolsky has a thought on why software engineers want to rewrite code:

It’s harder to read code than to write it.

How many times have you read code and thought “What the hell were they thinking?” Worse yet, you’ve said it aloud. To understand code, you must mentally compile it. This is really hard. The author might be a novice, speak a different language or be an experienced coder. Heck, the author could be you!

Have you ever read code and wondered why something was written a particular way? You rewrite it, only to discover why it was written that way. Each line of code holds a bit of knowledge, and sometimes this knowledge is hard fought. Have you ever chased a bug for weeks?

To be fair, a rewrite may sometimes be the way to go, but most of the time it’s not.

.NET Vs MEAN: A Rebuttal

In a recent post on Airpair’s blog, Michael Perrenoud discusses his reasons for leaving .NET and moving to the MEAN stack. After reading his post, I do not feel he gave .NET a fair shake. This was my response to him:


I can relate to your move to the MEAN stack. I am also a .NET software engineer, with 15-plus years of experience. Like you, I have become concerned with .NET and its ecosystem, and I started exploring other stacks. I tried the MEAN stack and found it enticing.

When I read your reasons for leaving .NET, I was disappointed. .NET is not perfect by any stretch of the imagination, but I don’t see it on its way out. In fact, C# is the 5th most popular language according to TIOBE Software. And in the next version of .NET, Microsoft is simplifying choices by revamping core web technologies, reworking its packaging framework and effectively sunsetting some older parts of the framework. If anything, .NET might be just starting an upswing.

To give .NET a fair shake, I offer counterpoints to your four points:

In your first point it is stated: “The stack is heavily server-side dependent.” This could be said of just about every technology stack: Ruby on Rails, Java, .NET and NodeJS. Yes, I said NodeJS. You can just as easily render HTML on the server and send it to the client en masse from NodeJS. This point seems to be more a comparison between single-page applications (AngularJS, EmberJs) and server-side HTML than a critique of a specific technology stack.

In your second point it is stated: “The server-side technology is very heavy and relies on an even clunkier web server for its exposure. Even the lightest server-side technology, ASP.NET Web API, is still heavier than NodeJS.” Internet Information Services (IIS), Microsoft’s web server, provides many out-of-the-box features. At first it might seem overwhelming, but like some of the best technologies, you have to get past the learning curve.

Web API is a lightweight API framework that sits on top of the .NET Framework. NodeJS is a server-side JavaScript engine that allows you to write high-performing JavaScript outside of the browser. I don’t know how you can compare the two.

In your third point it is stated: “The market has grown weary of the high cost of entry due to licensing fees. To get into Visual Studio Ultimate (necessary for any very large projects) you need to invest $13K; that’s insane!” I’ve never been on a project that NEEDED Visual Studio Ultimate. There is a free version of Visual Studio, Visual Studio Express, that is fine for most smaller projects.

Your post does not mention what the $13,000 gets you. It gets you Visual Studio Ultimate, arguably the most powerful IDE, and an MSDN subscription, which includes licenses to virtually all of Microsoft’s products, including a generous number of hours per month on Microsoft’s cloud platform, Azure.

Microsoft is a for profit company. It’s their business. Just like you are a software engineer, you charge for your time. They charge for their software.

In your fourth and final point it is stated: “The typical .NET stack is not homogeneous in nature. The data alone, and the transitions it must go through from database engine, to data model, to view model, to JSON, and back again tells the tale. It doesn’t matter how much syntactic sugar you wrap it in. It doesn’t matter how many frameworks. The stack is not homogeneous.” This seems to be more an argument between statically typed and dynamically typed languages. You could just as well be talking about Java rather than .NET. While you don’t explicitly call it out, I can’t help but feel there is a hint of comparison between relational and NoSQL databases.

All the transitions you mention (which are really separations of concerns) serve a purpose. The same transitions exist when using the MEAN stack; it’s just that they are handled differently in JavaScript and in MongoDB. Switching to the MEAN stack won’t make these transitions go away.

“They will be in demand for a long time; like COBOL developers. Believe me, you don’t “just rewrite” 4 decades of COBOL; the same holds true for the .NET stack. However, I did draw a conclusion that the .NET stack is on its way out.”

Youch! Really man? That’s just harsh!

.NET and the MEAN stack both have large followings and have delivered great software. Each has advantages and drawbacks. In my humble opinion, the technology stack is a personal choice and an academic discussion. Delivering successful, inspiring software is what’s important.

Questions to Ask During an Interview

When I walk out of an interview, I want to know the position’s responsibilities, I want to know the environment and I want to know what I am expected to accomplish during my first week. Most of all I want to know if the company is a fit for me. More often than not, companies will hire the best among the candidate pool. This does not mean they are the best for the position; simply that they are the best in the given candidate pool. Very few companies recognize this difference. It’s your job as the interviewee to vet the company.

I have developed the following questions to ask during an interview:

What will be my first task?
Is there a project plan? How much thought has gone into this position?

What will determine success or failure?
If project success can’t be articulated, how can they measure success in the position?

How do I get my tasks?
Is an issue tracking system used?

Do you use source control?
A company without source control in 2014 is almost always a deal breaker. If a company can’t provide the most basic need of software engineers there are bound to be other issues.

Do you allow remote work?
Telecommuting is a nice perk. It affords you flexibility to do errands or have appointments during lunch.

Describe the computer/environment I am provided.
What type of machine is given to software engineers? Two monitors or one? Is the work area low-traffic and quiet? Getting stuck in a loud, high-traffic area sucks.

What are the hours?
Are the hours flexible? What are the core hours?

Am I on call?
Are you expected to support production issues during off hours? Do software engineers answer customer support calls?

Automated builds and Deployments?
How evolved is the build process? Do developers manually build or is it automated?

Do you have testers?
Am I responsible for testing?

What technologies do you use?
There are some technologies that are no longer interesting.

What is your development process?
SCRUM, Lean, Agile or Waterfall? Does the team do code reviews? What about unit testing?

Most forget that an interview is a two-way street. You, as the interviewee, are interviewing the company and your future co-workers for a good fit in the company and in the position.

Missing Management Delegation Icon in IIS

It’s critical this is done first: Web Deploy may not install correctly if it’s installed while the Management Delegation icon is missing. Check IIS for the Management Delegation icon; it’ll be under the Management section.

If it’s missing, run the following commands.

Windows 2012

dism /online /enable-feature /featurename:IIS-WebServerRole
dism /online /enable-feature /featurename:IIS-WebServerManagementTools
dism /online /enable-feature /featurename:IIS-ManagementService
Reg Add HKLM\Software\Microsoft\WebManagement\Server /V EnableRemoteManagement /T REG_DWORD /D 1
net start wmsvc
sc config wmsvc start= auto

Run Web Deploy.

Check to see if the icon is there. If it’s not, run Web Deploy again. It should be there.

Calling Stored Procedures with Code First

One of the weaknesses of Entity Framework 6 Code First is the lack of support for natively calling database constructs (views, stored procedures, etc.). For those who have not heard of or used Code First in Entity Framework (EF), Code First is simply a fluent mapping API: the idea is to create all your database mappings in code (i.e. C#), and the framework then creates and tracks the changes in the database schema.

In traditional Entity Framework, to call a stored procedure you’d map it in your EDMX file. This is a multi-step process. Once the process is completed, a method that hangs off the DataContext is created.

I sought to make calling a stored procedure easier. At the heart of a stored procedure you have a procedure name, N parameters and a result set. I’ve written a small extension method that takes a procedure name, a parameters object and a return type. It just works. No mapping the procedure and its parameters.

public static List<TReturn> CallStoredProcedure<TParameters, TReturn>(this DataContext context, string storedProcedure, TParameters parameters)
    where TParameters : class
    where TReturn : class, new()
{
    IDictionary<string, object> procedureParameters = new Dictionary<string, object>();
    PropertyInfo[] properties = parameters.GetType().GetProperties();

    var ps = new List<object>();

    foreach (var property in properties)
    {
        object value = property.GetValue(parameters);
        string name = property.Name;

        procedureParameters.Add(name, value);

        ps.Add(new SqlParameter(name, value));
    }

    var keys = procedureParameters.Select(p => string.Format("@{0}", p.Key)).ToList();
    var parms = string.Join(", ", keys.ToArray());

    return context.Database.SqlQuery<TReturn>(storedProcedure + " " + parms, ps.ToArray()).ToList();
}



var context = new DataContext();

List<User> users = context.CallStoredProcedure<object, User>("User_GetUserById", new { userId = 3 });

Here the extension method composes the command text "User_GetUserById @userId" and supplies a matching SqlParameter, so the procedure executes with @userId set to 3.