Using JavaScript 6: Getting Started Today with Babel.js

There has been a lot of talk about the new version of JavaScript. Officially called ECMAScript 6, JavaScript 6 is a few months (sometime mid 2015) away from officially becoming a standard. Then starts the arduous march toward browser support. Many browsers already support a subset of JavaScript 6; how long it will take to reach 100% support is anyone’s guess. But don’t fret, we can use most of JavaScript 6 today.

I’d like to introduce you to Babel.js (formerly called 6to5), a JavaScript 6 to JavaScript 5 transpiler. Transpilers convert new JavaScript 6 features into equivalent JavaScript 5 syntax, allowing us to use those features today while we wait for the browsers to implement JavaScript 6. Below I have a simple example of a JS6 class that is transpiled to the JS5 equivalent.

Getting started with Babel.js is mind-numbingly simple, thanks to Babel.js’s extensive documentation. I encourage you to visit their site.

I have copied the install and Grunt instructions below.

Node
$ npm install --global babel

Usage
$ babel script.js

Grunt
$ npm install --save-dev grunt-babel

Gruntfile.js

module.exports = function(grunt){
    "use strict";

    grunt.loadNpmTasks('grunt-babel');

    grunt.initConfig({
        "babel": {
            options: {
                sourceMap: true
            },
            dist: {
                files: {
                    "dist/app.js": "app.js"
                }
            }
        }
    });

    grunt.registerTask("default", ["babel"]);
};
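
With the default task registered above, running Grunt with no arguments kicks off the transpile:

$ grunt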

Now that we have Babel.js installed let’s test it. I have created a JavaScript 6 class called Test.

class Test {

	getItems(){
		return [];

	}

	saveItem(item){

	}
}

Running Babel.js via Grunt outputs a new file, dist/app.js, with the equivalent JavaScript 5 syntax.

Transpiled app.js

"use strict";

var _prototypeProperties = function (child, staticProps, instanceProps) { if (staticProps) Object.defineProperties(child, staticProps); if (instanceProps) Object.defineProperties(child.prototype, instanceProps); };

var _classCallCheck = function (instance, Constructor) { if (!(instance instanceof Constructor)) { throw new TypeError("Cannot call a class as a function"); } };

var Test = (function () {
	function Test() {
		_classCallCheck(this, Test);
	}

	_prototypeProperties(Test, null, {
		getItems: {
			value: function getItems() {
				return [];
			},
			writable: true,
			configurable: true
		},
		saveItem: {
			value: function saveItem(item) {},
			writable: true,
			configurable: true
		}
	});

	return Test;
})();

Babel.js is a great way to use the new features of JavaScript 6 today.

The biggest risk is whether the features transpiled with Babel.js will behave the same as the browsers’ native implementations. There is always the chance they won’t. Babel.js claims its implementation is 100% spec compliant. This is good, but a browser maker might still interpret the spec differently (we’ve never seen that happen, right?).

Worst case scenario: you have to move forward with Babel.js longer than you intended. Yes, you might have to rework some code; it’s the risk you take when using cutting-edge tech.

There is no guarantee the browser makers won’t implement features differently. Using Babel.js might save you rework during the first rounds of browser releases supporting JavaScript 6.

I can envision a world where most websites use a transpiler to take advantage of the latest features, because browsers tend to lag a few years behind.

In the end, most advancements in programming languages make developers more productive. The sooner we can use the new features, the sooner we can realize those productivity gains.

Securing AngularJS with Claims

At some point an application needs authorization. This means different levels of access behave differently on a web site (or anything, for that matter). It can be anything from seeing data to entire areas that are not accessible by a group of users.

In non Single Page Applications (SPAs), a claim or role is associated with data or an area of the application; either the user has this role or claim or they do not. In a SPA it’s the same, but with a huge disclaimer: a SPA is downloaded to the browser, and at that point the browser has total control over the code. A nefarious person can change the code to do their bidding.

Because SPAs can’t be secured, authentication and authorization in a SPA is simply user experience. All meaningful security must be done on the web server. This article does not cover securing your API against attacks. I recommend watching a video from Pluralsight or reading a paper that addresses security for your server technology.

The intent of this article is to show you how I added an authorization user experience to my Angular 1.x SPA.

Security Scopes

I have identified 3 areas of the UI that need authorization: Elements (HTML), Routes, and Data.

Just a reminder: securing a SPA is no substitute for securing the server. Permissions on the client are simply there to keep the honest people honest and to provide the user with a good experience.

The 3 areas in detail:

Elements

You’ll need to hide specific HTML elements. It could be a label, a table with data, a button, or any element on the page.

Routes

You’ll want to hide entire routes. In certain cases you don’t want the user accessing a view. By securing the route, a user can’t navigate to the view; instead they will be shown a “You are not authorized to navigate to this view” message.

Data

Sometimes hiding the elements in the view is not enough. An astute user can simply view the source and see the hidden data in the HTML, or watch it stream to the browser. What we want is for the data not to be retrieved in the first place.

Adding security is tricky. At first I tried constraining the access at the HTTP API (on the client). I quickly realized this wouldn’t work. A user might not have direct access to the data, but this doesn’t mean they don’t have indirect access to the data. At the HTTP API layer (usually one of the lowest in the application) we can’t tell the context of the call and therefore can’t apply security concerns to it.

Below I have provided code samples:

Code

I created a service for the authorization checking code. This is the heart of the authorization. All authorization requests use this service to check if the user is authorized for the particular action.

angular.module('services')
    .service('AuthorizationContext',function(_, Session){

        this.authorizedExecution = function(key, action){

            //Looking for the claim key that was passed in. If it exists in the claim set, then execute the action.
            Session.claims(function(claims){
                var claim = findKey(key, claims);

                //If Claim was found then execute the call.
                //If it was not found, do nothing
                if(claim !== undefined){
                    action();
                }
            });
        };

        this.authorized = function(key, callback){
            //Looking for the claim key that was passed in. If it exists in the claim set, the user is authorized.
            Session.claims(function(claims){
                var claim = findKey(key, claims);

                //If the claim was found the user is authorized; pass the result to the callback.
                var valid = claim !== undefined;
                callback(valid);
            });
        };

        //this.agencyViewKey = '401D91E7-6EA0-46B4-9A10-530E3483CE15';

        function findKey(key, claims){
            var claim = _.find(claims, function(item){
                return item.value === key;
            });

            return claim;
        }
    });

Authorize Directive

The authorize directive can be applied to any HTML element that you want to hide from users without a specific level of access. If the user has the access token as part of their claims, they are allowed to see the element. If they don’t, it’s hidden from them.

angular.module('directives')
    .directive('authorize', ['$compile', 'AuthorizationContext', function($compile, AuthorizationContext) {
        return {
            restrict: 'A',
            replace: true,
            //can't use an isolated scope in a shared directive
            link:function ($scope, element, attributes) {

                var securityKey = attributes.authorize;
                AuthorizationContext.authorized(securityKey, function(authorized){
                    var el = angular.element(element);
                    el.attr('ng-show', authorized);

                    //remove the attribute, otherwise it creates an infinite loop.
                    el.removeAttr('authorize');
                    $compile(el)($scope);
                });
            }
        };
    }]);

Elements

I rely heavily on tabs in my application. I apply the authorize directive to the tab that I want to hide from users without the proper claims.

<tabset>
<tab ng-cloak heading="Users" authorize="{{allowUserManagement}}">
...html content
</tab>
</tabset>

Routes

I’m using ui-router. Unfortunately, for those who are not, I don’t have code for the out-of-the-box AngularJS router.

In the $stateChangeStart event I authenticate the route. This is the code for that event:

$rootScope.$on("$stateChangeStart", function(event, toState, toParams, fromState, fromParams){
   AuthenticationManager.authenticate(event, toState, toParams);
});

Below is the function that authorizes the route. If it’s authorized, the route is allowed to continue. If it’s not authorized, a message is displayed to the user and they are directed to the home page.

function authorizedRoute(toState, location, toaster, breadCrumbs){
   if(toState.authorization !== undefined){
       AuthorizationContext.authorized(toState.authorization, function(authorized){
           if(!authorized){
               toaster.pop('error', 'Error', 'You are not authorized to view this page.');
               location.path("/search");
           } else {
               breadCrumbs();
           }
       });
   } else{
       breadCrumbs();
   }
}

In this route definition you’ll notice a property called ‘authorization’. If the user has this claim they are allowed to proceed.

angular.module('agency',
    [
        'ui.router',
        'services'
    ])
    .config(function config($stateProvider){
        $stateProvider.state('agency', {
            url: '/agency',
            controller: 'agency.index',
            templateUrl: 'agency/agency.tpl.html',
            authenticate: true,
            authorization: '401d91e7-6ea0-46b4-9a10-530e3483ce15',
            data: { pageTitle: 'Agency' }
        });
    });

Data

In some cases, you don’t want to make a request to the server for the data. If the user has the claim they’ll be allowed to make the request.

The AuthorizationContext at the beginning of the article shows the code for authorizedExecution. Here you see its usage.

AuthorizationContext.authorizedExecution(Keys.authorization.allowUserManagement, function(){
    //execute code only if the logged-in user has rights.
});

As I mentioned above, this is no substitute for securing the server. This code works well for providing a wonderful user experience.

Adding Custom Converters in AutoMapper with Assembly Scanning

AutoMapper is a wonderful tool. Those who haven’t used it are missing out.

Simply put, AutoMapper is a convention-based object-to-object mapper. For example, your application has a boatload of view models. How are the view models mapped to domain models? Before AutoMapper, it was lines of left-to-right value-setting code. Using AutoMapper eliminates much of this code. To learn more visit http://automapper.org (I’m not affiliated with AutoMapper, I’m just a fan).

In my current project I use AutoMapper to map domain models to view models and vice versa. For each domain model this equates to roughly two ITypeConverter implementations. Predictably, the number of mappings has increased as the application has grown, so much so that Visual Studio began having trouble parsing the list of mappings.

Here is a short sample of custom converters:

c.CreateMap<NewAgency, Agency>().ConvertUsing(new NewAgencyToAgencyConverter());
c.CreateMap<DependentModel, EmployerGroupMember>().ConvertUsing(new DependentModelToMemberConverter());
c.CreateMap<EmployeeModel, EmployerGroupMember>().ConvertUsing(new EmployeeToMemberConverter());
c.CreateMap<NewEmployerGroup, EmployerGroup>().ConvertUsing(new NewEmployerGroupToEmployerGroupConverter());
c.CreateMap<UpdateEmployerGroup, EmployerGroup>().ConvertUsing(new UpdateEmployerGroupToEmployerGroupConverter());
c.CreateMap<EmployerGroupMember, object>().ConvertUsing(new EmployerGroupMemberToResultConverter());
c.CreateMap<EmployerGroup, object>().ConvertUsing(new EmployerGroupToResultConverter());
c.CreateMap<EmployerGroupAddress, object>().ConvertUsing(new EmployerGroupAddressToObjectConverter());
c.CreateMap<NewLocation, EmployerGroupAddress>().ConvertUsing(new NewLocationToEmployerGroupAddressConverter());
c.CreateMap<UpdateLocation, EmployerGroupAddress>().ConvertUsing(new UpdateLocationToEmployerGroupAddressConverter());
c.CreateMap<User, object>().ConvertUsing(new UserToObjectResult());
c.CreateMap<List<Carrier>, object>().ConvertUsing(new CarrierCollectionToResultConverter());
c.CreateMap<Benefit, object>().ConvertUsing(new BenefitToResultConverter());
c.CreateMap<List<Benefit>, object>().ConvertUsing(new BenefitCollectionToResultConverter());
c.CreateMap<NewBenefit, Benefit>().ConvertUsing(new NewBenefitToBenefitConverter());
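
Each line in that list pairs a source and destination type with a converter class implementing AutoMapper’s ITypeConverter<TSource, TDestination>. As a rough sketch of what one converter might look like (the Convert signature below matches older, pre-5.0 AutoMapper, and the stub types and Name property are assumptions for illustration):

using AutoMapper;

// Minimal stand-ins for the real domain and view model types (assumptions for the example).
public class NewAgency { public string Name { get; set; } }
public class Agency { public string Name { get; set; } }

// A custom converter; classes like this are what we want the container to discover for us.
public class NewAgencyToAgencyConverter : ITypeConverter<NewAgency, Agency>
{
    public Agency Convert(ResolutionContext context)
    {
        var source = (NewAgency)context.SourceValue;
        return new Agency { Name = source.Name };
    }
}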

I can’t tell you how many times I created a custom converter and forgot to add it to the list. With assembly scanning, all this pain goes away. The downside is that AutoMapper does not support assembly scanning, but most modern Dependency Injection (DI) containers do. So all we need to do is use a DI container that has scanning capabilities. My preferred container is StructureMap, which does support assembly scanning.

First, AutoMapper’s ITypeConverter implementations need to be added to StructureMap’s manifest:

x.Scan(scan =>
{
    scan.ConnectImplementationsToTypesClosing(typeof(ITypeConverter<,>));
});
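
For context, that scan call lives inside the container bootstrap. A minimal sketch, assuming the converters live in the calling assembly and the ObjectFactory-style StructureMap setup used later in this post:

ObjectFactory.Initialize(x =>
{
    x.Scan(scan =>
    {
        scan.TheCallingAssembly(); // assumption: the converters live in this assembly
        scan.ConnectImplementationsToTypesClosing(typeof(ITypeConverter<,>));
    });
});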


Retrieving the ITypeConverter implementations isn’t as easy as it seems. My first attempt was to use StructureMap’s GetAllInstances method:

var items = ObjectFactory.GetAllInstances(typeof(ITypeConverter<,>));

No cigar. At first I was mystified. Why didn’t this work? After all, this is how I registered the implementations. Without going into detail, StructureMap doesn’t track this type of information. It finds all the implementations of ITypeConverter<,> and adds the concrete types to the manifest. Most developers don’t want to retrieve all the implementations by the open generic interface, so this information is discarded.

It turns out that getting the implementations of ITypeConverter<,> is a bit harder than I thought. A little reflection magic is needed:

        private static IEnumerable<object> GetITypeConverters()
        {
            // Find every plugin type in the container that closes ITypeConverter<,>.
            IEnumerable<IPluginTypeConfiguration> handlers =
                ObjectFactory.Container.Model.PluginTypes
                    .Where(x => x.PluginType.IsGenericType &&
                                x.PluginType.GetGenericTypeDefinition() ==
                                typeof(ITypeConverter<,>))
                    .ToList();

            var allInstances = new List<object>();

            // Resolve the concrete converter instances registered for each closed interface.
            foreach (IPluginTypeConfiguration pluginTypeConfiguration in handlers)
            {
                var instancesForPluginType = ObjectFactory.GetAllInstances(pluginTypeConfiguration.PluginType).OfType<object>();
                allInstances.AddRange(instancesForPluginType);
            }

            return allInstances;
        }

We now have all the implementations of ITypeConverter<,>. The next step is to add them to AutoMapper.

        public static void ConfigureAutoMapper()
        {
            var items = GetITypeConverters();

            Mapper.Initialize(c =>
            {
                foreach (var item in items)
                {
                    // Pull TSource and TDestination off the closed ITypeConverter<,> interface
                    // and register the converter for that mapping.
                    string interfaceName = typeof(ITypeConverter<,>).FullName;
                    var converterInterface = item.GetType().GetInterface(interfaceName);

                    c.CreateMap(converterInterface.GenericTypeArguments[0],
                                converterInterface.GenericTypeArguments[1])
                        .ConvertUsing(item.GetType());
                }
            });
        }

That’s it. It’s not as straightforward as it could be, but it works. Luckily this code runs once at application startup; otherwise we might have performance concerns.

Here is the complete code.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using AutoMapper;
using StructureMap;
using StructureMap.Query;

namespace Grover.Api.App_Start
{
    public class AutoMapperInitialize
    {
        public static void ConfigureAutoMapper()
        {
            var items = GetITypeConverters();

            Mapper.Initialize(c =>
            {
                foreach (var item in items)
                {
                    // Pull TSource and TDestination off the closed ITypeConverter<,> interface
                    // and register the converter for that mapping.
                    string interfaceName = typeof(ITypeConverter<,>).FullName;
                    var converterInterface = item.GetType().GetInterface(interfaceName);

                    c.CreateMap(converterInterface.GenericTypeArguments[0],
                                converterInterface.GenericTypeArguments[1])
                        .ConvertUsing(item.GetType());
                }
            });
        }

        private static IEnumerable<object> GetITypeConverters()
        {
            // Find every plugin type in the container that closes ITypeConverter<,>.
            IEnumerable<IPluginTypeConfiguration> handlers =
                ObjectFactory.Container.Model.PluginTypes
                    .Where(x => x.PluginType.IsGenericType &&
                                x.PluginType.GetGenericTypeDefinition() ==
                                typeof(ITypeConverter<,>))
                    .ToList();

            var allInstances = new List<object>();

            // Resolve the concrete converter instances registered for each closed interface.
            foreach (IPluginTypeConfiguration pluginTypeConfiguration in handlers)
            {
                var instancesForPluginType = ObjectFactory.GetAllInstances(pluginTypeConfiguration.PluginType).OfType<object>();
                allInstances.AddRange(instancesForPluginType);
            }

            return allInstances;
        }
    }
}
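
The last step is to call this once at application startup, for example from Application_Start (exactly where is up to you):

AutoMapperInitialize.ConfigureAutoMapper();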

3 Reasons Why Code Reviews are Important

A great code review will challenge your assumptions and give you constructive feedback. For me, code reviews are an essential part of growing as a software engineer.

Writing code is an intimate process. Software engineers spend years learning the craft of software engineering, and when something critical is said of our creation it’s hard not to take it personally. I find myself, at times, getting defensive when hearing criticism. I know the reviewer means well, but this isn’t always comforting. If it wasn’t for honest feedback from some exceptional software engineers, I wouldn’t be half the software engineer I am today.

Benefits of Code Reviews

1. Finding Bugs

Sometimes it’s the simple act of reading the code that surfaces an error. Sometimes it’s the other developer who spots it. Regardless, simply walking through the code is enough to expose potential issues.

I think of my mistakes as the grindstone to my sword. To quote Michael Jordan:

I’ve missed more than 9000 shots in my career. I’ve lost almost 300 games. 26 times, I’ve been trusted to take the game winning shot and missed. I’ve failed over and over and over again in my life. And that is why I succeed.

2. Knowledge Transfer

Sharing your work with others is humbling. In many ways you are the code. I know that I feel vulnerable when I share my code.

This is a great opportunity to learn from and to teach other engineers. In sharing your code you are taking the reviewers on a journey, a journey into the code and into aspects of you. A lot can be learned about you by how you write code.

At the end of the code review the reviewers should have a good understanding of how the code works and the rationale behind it, and they will have learned a little bit about you.

3. Improving the Health of the Code

As I mentioned, the more times you read the code, the better the code becomes. The more reviewers, the better the chance one of them will suggest an improvement. Some might think skill level matters; it doesn’t. Less experienced software engineers don’t have the deep technical knowledge of experienced software engineers, but they also don’t have to wade through all that mental technical baggage to see opportunities for improvement.

Code reviews give us the benefit of evaluating our code. There will always be something to change to make it just a little bit better.

Coding, in this way, is much like writing. For a good piece to come into focus the code must rest and be re-read. The more times you repeat this process the better the code will become.

In Closing

Some companies don’t officially do code reviews; that’s OK. Seek out other engineers. Most software engineers will be happy to take 10 to 15 minutes to look over your code.

Creating Reports Using Encrypted Data

I wrote “Implementing Transparent Encryption with NHibernate Listeners” last year. If you haven’t read it, I recommend you do; even if NHibernate is not your cup of tea, you are guaranteed to learn something.

Michael Johnson from Sharpened Developer commented on the post asking “How do I report on encrypted data?”

Most reports are aggregations, and encrypted data is typically personally identifiable information (PII), which in many cases is not included in reports at all. When we do need encrypted data for reports, we have options.

Reports within the Application

Generating reports within the application is the simplest way to access the data. Most likely, encryption and decryption mechanisms are already in place. There are a number of third-party libraries (Telerik, DevExpress, etc.) available for building reports. Building reports within the application works well on small sets of data, but as the data grows this simply doesn’t scale.

Using a Reporting Database

The long-term solution is to move the data into a reporting database. Once the data is in its own database, an enterprise reporting solution can be put in place.

This leads me to the more interesting aspect: keeping the reporting data current.

Firstly, an external-facing application should not have direct access to the decrypted reporting database. If the external application is compromised, it’s a good bet the attackers will soon have access to the decrypted reporting database.

Ideally decrypted data is only available on the internal network.

With all the hacking (Sony) lately, I wonder if it’s a good idea to have any sensitive data decrypted at rest… but I digress. That’s a discussion for another time.

There are two types of reporting data: real time and everything else.

Real-time reporting requires the data to be nearly in sync with production data. The only way to report on real-time data is to capture changes as they happen. This is fraught with issues, but a cleverly crafted asynchronous process using a message queue is a good solution. A service watching the message queue can process the messages in near real-time. The service is decoupled, providing a layer of security, and it’s asynchronous, minimizing the performance impact.
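
To make that concrete, here is a minimal sketch of such a queue-watching service. It is illustrative only: BlockingCollection stands in for a real message queue (MSMQ, RabbitMQ, etc.), and the message type, Decrypt and SaveToReportingDatabase members are hypothetical placeholders for your own encryption and reporting-database code.

using System;
using System.Collections.Concurrent;

// Message published by the production application whenever a record changes.
public class MemberChangedMessage
{
    public int MemberId { get; set; }
    public string EncryptedSsn { get; set; }
}

// Runs on the internal network and keeps the reporting database current.
public class ReportingSyncService
{
    private readonly BlockingCollection<MemberChangedMessage> _queue;

    public ReportingSyncService(BlockingCollection<MemberChangedMessage> queue)
    {
        _queue = queue;
    }

    public void Run()
    {
        // Blocks until messages arrive, processing them in near real-time.
        foreach (var message in _queue.GetConsumingEnumerable())
        {
            var ssn = Decrypt(message.EncryptedSsn);
            SaveToReportingDatabase(message.MemberId, ssn);
        }
    }

    private static string Decrypt(string cipherText)
    {
        // Placeholder: plug in the same decryption the application uses.
        return cipherText;
    }

    private static void SaveToReportingDatabase(int memberId, string ssn)
    {
        // Placeholder: write the decrypted values to the reporting database.
        Console.WriteLine("Updated reporting row for member {0}", memberId);
    }
}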

When data freshness isn’t a concern, a daily or even weekly service is a good solution. It might be as simple as restoring a production database backup on the reporting server.

Lastly, avoid syncing directly from production databases; the last thing you’d want is to hinder production’s performance by inundating it with database requests.

In Closing

Reporting is rarely thought of at the onset of an application. As applications mature, stakeholders want to mine data for insights. This inevitably turns into a growing collection of reporting requests that seem to never end. A solid reporting solution will help abate these requests. In the best scenario the stakeholders can create their own reports.

* image reference (http://www.ajboggs.com/consulting/software-development/reporting-solutions/)

Algorithms: Binary Search

You are presented with a set of 1000 numbers. You are tasked with finding the position of 73. The most obvious approach is to start with the first number and evaluate every number until 73 is found. This approach is called a linear (or sequential) search algorithm. It works for a set of 1000 numbers, but consider what happens if the set is increased to 10 million numbers. A linear search cannot scale and is simply not suited to this many numbers, but a binary search algorithm is.

A binary search algorithm requires the data to be sorted. Once sorted, the value is found by taking the middle value and comparing it to the search value. If the search value is lower than the middle value, we take the first half of the numbers, find its middle value, and once again compare it to the search value; if the search value is higher, we do the same with the second half. This process repeats itself until we find the search value or we run out of values.

Comparing the two algorithms for performance: a linear search of 10 million numbers, assuming 1 second per comparison, will consume roughly 116 days (10 million seconds). A binary search of 10 million numbers, again assuming 1 second per comparison, will only consume about 23 seconds, because it halves the remaining numbers with every comparison and log2 of 10 million is roughly 23. When searching for numbers, the binary search wins hands down.

Binary Search implemented in C#:


        public int BinarySearch(int number, int[] collection)
        {
            // Assumes the collection is sorted in ascending order.
            int low = 0;
            int high = collection.Length - 1;

            while (low <= high)
            {
                int mid = (low + high) / 2;

                if (collection[mid] < number)
                {
                    low = mid + 1;
                }
                else if (collection[mid] > number)
                {
                    high = mid - 1;
                }
                else
                {
                    return mid; // found: return the position of the number
                }
            }

            return -1; // not found
        }
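
A quick, hypothetical usage check (it assumes the method above lives on a class called Searcher, and that the numbers are already sorted):

int[] numbers = { 2, 5, 8, 12, 16, 23, 38, 56, 72, 91 };
var searcher = new Searcher(); // Searcher is a hypothetical class holding the BinarySearch method above

Console.WriteLine(searcher.BinarySearch(23, numbers)); // prints 5, the index of 23
Console.WriteLine(searcher.BinarySearch(7, numbers));  // prints -1, not found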

5 Steps for Coding for the Next Developer

Most of us probably don’t think about the developer who will maintain our code. Until recently, I did not consider him either. I never intentionally wrote obtuse code, but I also never left any breadcrumbs.

Kent Beck on good programmers:

Any fool can write code that a computer can understand. Good programmers write code that humans can understand.

Douglas Crockford on good computer programs:

It all comes down to communication and the structures that you use in order to facilitate that communication. Human language and computer languages work very differently in many ways, but ultimately I judge a good computer program by its ability to communicate with a human who reads that program. So at that level, they’re not that different.

Discovering purpose and intent is difficult in even the most well-written code. Any breadcrumbs left by the author (comments, verbose naming, consistency) are immensely helpful to the next developers.

I start by looking for patterns. Patterns can be found in many places, including variable names, class layout and project structure. Once identified, patterns are insights into the previous developer’s intent and help in comprehending the code.

What is a pattern? A pattern is a repeatable solution to a recurring problem. Consider a door. When a space must allow people to enter and to leave and yet maintain isolation, the door pattern is implemented. Now this seems obvious, but at one point it wasn’t. Someone created the door pattern, which included the door handle, the hinges and the placement of these components. Walk into any home and you can identify any door and its components. The styles and colors might be different, but the components are the same. Software is the same.

There are known software patterns for common software problems. In 1995, Design Patterns: Elements of Reusable Object-Oriented Software was published describing common software patterns. This book describes common problems encountered in most software applications and offers elegant ways to solve them. Developers also create their own patterns while solving problems they routinely encounter. While they don’t publish a book, if you look closely enough you can identify them.

Sometimes it’s difficult to identify the patterns. This makes grokking the code difficult. When you find yourself in this situation, inspect the code and see how it is used. Mentally start a re-write. Ask yourself how you would accomplish the same outcome. Often as you travel the thought process of an algorithm, you gain insight into the other developer’s implementation. Many of us have the inclination to re-write what we don’t understand. Resist this urge! The existing implementation is battle-tested and yours is not.

Some code is just vexing. Reach out to a peer; a second set of eyes always helps. Walk the code together. You’ll be surprised what the two of you will find.

Here are 5 tips for leaving breadcrumbs for the next developer:

1. Patterns
Use known patterns, create your own patterns. Stick with a consistent paradigm throughout the code. For example, don’t have 3 approaches to data access.

2. Consistency
This is by far the most important aspect of coding. Nothing is more frustrating than finding inconsistent code. Consistency allows for assumptions: each time a specific software pattern is encountered, it should be safe to assume it behaves similarly to other instances of the pattern.

Inconsistent code is a nightmare. Imagine reading a book where every word means something different, including the same word in different places. You’d have to look up each word and expend large amounts of mental energy discovering the intent. It’s frustrating, tedious and painful. You’ll go crazy! Don’t do this to the next developer.

3. Verbose Naming
This is your language. These are the words to your story. Weave them well.

This includes class names, method names, variable names, project names and property names.

Don’t:

if(monkey.HoursSinceLastMeal > 3)
{
    FeedMonkey();
}

Do:

int feedInterval = 3;
if(monkey.HoursSinceLastMeal > feedInterval)
{
    FeedMonkey();
}

The first example has 3 hard-coded in the if statement. This code is syntactically correct, but the number 3 tells you nothing about its intent. Looking at the property it’s evaluated against, you can surmise that it’s really 3 hours. In reality we don’t know; we are making an assumption.

In the second example, we assign 3 to a variable called ‘feedInterval’. The intent is clearly stated in the variable name: if it’s been 3 hours since the last meal, it’s time to feed the monkey. A side effect of introducing the variable is that we can now change the feed interval without changing the logic.

This is a contrived example, but in a large piece of software this type of code is self-documenting and will help the next developer understand the code.

4. Comments
Comments are a double-edged sword. Too much commenting increases maintenance costs; not enough leaves developers unsure of how the code works. A general rule of thumb is to comment when the average developer will not understand the code. This happens when the assumptions are not obvious or the code is out of the ordinary.

5. Code Simple
In my professional opinion writing complex code is the biggest folly among developers.

Steve Jobs on simplicity:

Simple can be harder than complex: You have to work hard to get your thinking clean to make it simple. But it’s worth it in the end because once you get there, you can move mountains.

Complexity comes in many forms, some of which include: future proofing, overly complex implementations, too much abstraction, large classes and large methods.

For more on writing clean, simple code, see Uncle Bob’s book Clean Code and Max Kanat-Alexander’s Code Simplicity.

Closing

Reading code is hard. With a few simple steps you can ensure the next developer will grok your code.

HTML Extended

There is an emerging trend to extend HTML. To the untrained eye the changes are not obvious.

For example, AngularJS extends the HTML with directives:

<my-directive my-data="user"></my-directive>

To the browser this snippet of HTML is meaningless. Fortunately AngularJS has an internal compiler that converts it to meaningful HTML.

EmberJs is following suit with their upcoming 1.11 release. Web components will now appear as inline HTML. Like AngularJS, EmberJs will convert them to meaningful HTML.

Markup after the EmberJs 1.11 release:

<my-video src={{movie.url}}></my-video>

Starting with ASP.NET 5, ASP.NET will have "TagHelpers." In the past, to render a link with Razor you'd do something like this:


<ul>
<li>@Html.ActionLink("Default", "Index", "Home")</li>
</ul>

Now with TagHelpers:


<ul>
<li><a controller="Home" action="Index">Default</a></li>
</ul>

This is an interesting trend. These technologies are extending HTML. At the end of the day it all must be rendered into meaningful HTML for the browser. I wonder if this is the end of custom view engines, such as Haml, Razor, Jade and EJS. It’ll be interesting to see how this plays out.

Iterate Small

In the 1980s, manufacturing in the United States was in decline. After World War 2 the United States was the undisputed leader in manufacturing. During the 1960s this changed. Japanese companies made the same products as the United States, but their products were of a higher quality and cheaper. How did they do it? Many factors played a role, but one factor was batch size. By lowering batch sizes, Japanese companies increased quality and were able to deliver more product.

Lowering batch sizes allowed for a more nimble process. Instead of having 20 tons of raw materials in process, they only needed 2 tons. When a batch was defective (it had bugs), only 2 tons of raw material were lost instead of 20. The entire process was more efficient.

Applying this to software engineering, we want to develop small and deploy small.

Develop Small
Manufacturing is not an exact parallel to software development, but many of the principles are applicable. For instance, the more code changed, the more opportunity for bugs to manifest. Minimizing the number of changes lessens the likelihood of bugs.

Break tasks into small chunks. Even large features can be split into small tasks. It’s fine to ship benign code.

Deploy Small
Deploying small requires a build and deployment process that can be confidently run multiple times a day.

Continual deployments allow for an evolving product. Nothing like upgrading from Windows 7 to Windows 8 (for those who did not experience this, it was a drastic change). Small changes deployed in small increments mean less impact on the user and less opportunity for something to go wrong.