A General Ledger: Understanding the Ledger

What is a general ledger and why is it important? To find out, read on!

What is a general ledger? A general ledger is a log of all the transactions relating to assets, liabilities, owners’ equity, revenue, and expenses. It’s how a company can tell whether it’s profitable or taking a loss. In the US, this is the most common way to track a company’s financials.

To understand how a general ledger works, you must understand double-entry bookkeeping. So, what is double-entry bookkeeping? I’m glad you asked. Imagine you have a company and your first customer paid you $1,000. To record this, you add the transaction to the general ledger. Two entries are made: a debit to your cash account, increasing the value of your assets, and a credit to your revenue account, recording the source of the money (your customer’s payment). Think of the cash account as an internal account, meaning an account in which you track both the debits (increases in value) and the credits (decreases in value). The revenue account is an external account, meaning you only track the credit entries. External accounts don’t impact your business; they merely tell you where the money is coming from and where it’s going.

Here is a visual of our first customer’s payment.
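
Account     Debit      Credit
Cash        $1,000
Revenue                $1,000
Total       $1,000     $1,000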


If the sum of the debit column and the sum of the credit column don’t equal each other, then there is an error in the general ledger. When both sides equal each other the books are said to be balanced. You want balanced books.
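
In code terms, checking whether the books are balanced is just comparing two sums. Here is a minimal C# sketch; the LedgerEntry type is an assumption for illustration:

using System.Collections.Generic;
using System.Linq;

public class LedgerEntry
{
    public string Account { get; set; }
    public decimal Debit { get; set; }
    public decimal Credit { get; set; }
}

public static class Ledger
{
    public static bool IsBalanced(IEnumerable<LedgerEntry> entries)
    {
        //The books are balanced when total debits equal total credits.
        return entries.Sum(e => e.Debit) == entries.Sum(e => e.Credit);
    }
}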

Let’s look at a slightly more complex example.

You receive two bills, water and electric, both for $50. You pay them using part of the cash in your cash account, which has a current balance of $1,000. What entries are needed? Take your time. I’ll wait.
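
Account     Debit      Credit
Cash                   $50
Water       $50
Cash                   $50
Electric    $50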


Four entries are added to the general ledger: two credit entries for cash and one debit entry for each of the water and electric accounts. Notice the cash entries are credits.

For a bonus, how would we calculate the remaining balance of the cash account? Take your time. Again, I’ll wait for you.

To get the remaining balance we need to identify each cash entry.
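
Cash entries:

Debit       $1,000
Credit      $50
Credit      $50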


To get the balance of the cash account we do the same thing we did to balance the books, but this time we only look at the cash account. We take the sum of the debit column for the cash account and subtract the sum of the credit column for the cash account. The remaining value is the balance of the cash account.
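
Cash balance = $1,000 - ($50 + $50) = $900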


And that, folks, is the basics of a general ledger and double-entry bookkeeping. I hope you see the importance of this approach: it gives you the ability to quickly see if there are errors in your books, and high fidelity in tracking payments and revenue.

This is just the tip of the iceberg in accounting. If you’d like to dive deeper into accounting, have a look at the accounting equation: Assets = Liabilities + Owner’s Equity.

Hopefully this post has given you a basic understanding of what a general ledger is and how double-entry bookkeeping works. In the next post I’ll go into how to implement a general ledger in C#.

A Tour of Bidwell Mansion

This week I was out of town and didn’t find the time to write a technical post. I did, however, tour the Bidwell Mansion in Chico, California, and found it fascinating, so I wanted to share a few photos from my trip. The technical content will return next week.

John Bidwell was an influential political figure in the 1800s. He helped California present its case for statehood, and he brought agriculture into California. John ran for governor of California three times and lost all three times. In many ways, John was ahead of his time.

In the 1800s and into the 1900s, American Indians were hunted and persecuted. In a time when this was a common occurrence, John Bidwell took exception to it. John invited a local Mechoopda Maidu Indian tribe to move their village onto his land, in essence providing them protection.

The Bidwell Mansion is a three-story Italianate-style mansion with 26 rooms, including a ballroom on the third floor.

The Mansion


John Bidwell’s Desk


Sitting Room

After dinner, people would migrate into this room to talk it up. Notice the fireplace. Most rooms in the mansion have a fireplace; wood was the only source of heat.


High-chair with Wheels

The wheels fold down, turning the high-chair into a stroller.





Your own library. I’d love to have one of those.



You don’t see such grand staircases anymore.


This is a beautiful mansion. If you are ever in the Chico area, I recommend stopping to see it.

9 JavaScript Libraries that Every Serious Application Needs

Over the past year I have been developing a line-of-business application with AngularJS. AngularJS has many out-of-the-box features that just work. That’s the beauty of AngularJS. It’s also its downside: you can’t be good at everything, and some of the APIs lack performance and features.

Through trial and error I’ve discovered which JavaScript libraries fill the gaps. I’ve compiled a list of them below.



jQuery

This is the Swiss Army knife of the JavaScript world. It is immensely helpful when I want to get close to the DOM and manipulate it directly.


I have yet to find a better library for queuing and uploading documents.


Filtering, sorting, and mapping data with this library is a must-have. If you’re still using for and forEach statements, you’re doing it wrong.


Displaying messages to the client is always a challenge. This library makes it simple. The messages are configurable and look professional.


AngularJS has known issues with pub/sub. Whether you choose $emit or $broadcast, each has its failings. Radio is an alternative pub/sub library that just works and avoids the issues found in the Angular options.


Angular UI Bootstrap has a fully featured datepicker. However, it has a design flaw that, under certain conditions, slows the page load to 3 or 4 seconds. Pikaday is a lightweight, zippy datepicker alternative that just works.


Formatting money is a pain. It might seem straightforward, but it’s not. Accounting.js takes the pain out of it. It’s another library that just works.


Filesize takes a number of bytes and converts it to KB, MB, or GB notation. It’s that simple, and it works.


Anyone who deals with time needs this library. Dates and times are a headache in JavaScript. The browsers try to do too much for you. When it works, it’s great; when it doesn’t, it’s like fighting a Chinese finger trap.

Proofing a Concept and Growing the Code

In a recent conversation, a friend mentioned he creates proofs of concept and then discards them after testing their viability. I’ve done the same in the past. This time it didn’t feel right. I cringed when he said he threw away the code. Maybe my days as a business owner have turned me into a frugal goat, but it felt like he was throwing away value.

Why don’t we continue forward with a proof of concept?

Generally, when I think of a proof of concept, I think of something hastily assembled. Many of the “best practices” are shortcut, if not downright ignored. The goal is to test the feasibility of an idea. At some point you’ll know whether the solution will work, and then you’ll decide whether to walk away from the idea and ditch the proof of concept, or move forward with it. If you move forward with the idea, why not keep coding and turn the proof of concept into the real deal?

I’ll be honest here: it seems ridiculous that you’d create a solution and then throw it away just to create it again. That’s like poorly painting an entire house just to see if you like the color. “Yep, the color is good. Let’s paint the house for reals this time, and this time we’ll do a good job.”

There is another way: evolve the code. Add in the missing infrastructure. This way it has the possibility of growing into a healthy, long-term solution.

Walking away from a proof of concept costs you value (time and money) that might otherwise be captured. Even if you don’t capture 100% of it, you’ll still be better off than chucking everything and walking away. So next time, give it a try. See if you can morph a proof of concept into a sustainable project. I think you might be surprised at the end result.

Using JavaScript 6: Getting Started Today with Babel.js

There has been a lot of talk about the new version of JavaScript. Officially called ECMAScript 6, JavaScript 6 is a few months (sometime mid-2015) away from officially becoming a recommendation. Then starts the arduous march toward browser support. Many browsers already support a subset of JavaScript 6; how long until 100% support is anyone’s guess. But don’t fret: we can use most of JavaScript 6 today.

I’d like to introduce you to Babel.js (formerly called 6to5), a JavaScript 6 to JavaScript 5 transpiler. Transpilers allow unsupported features to be used today while we wait for the browsers to implement JavaScript 6: they convert new JavaScript 6 features into the equivalent JavaScript 5 syntax. Below I have a simple example of a JS6 class that is transpiled to the JS5 equivalent.

Getting started with Babel.js is mind-numbingly simple, thanks to Babel.js’s extensive documentation. I encourage you to visit their site.

I have copied the grunt instructions below.

$ npm install --global babel

$ babel script.js

$ npm install --save-dev grunt-babel


module.exports = function(grunt){
    "use strict";

    //make the grunt-babel task available
    grunt.loadNpmTasks("grunt-babel");

    grunt.initConfig({
        "babel": {
            options: {
                sourceMap: true
            },
            dist: {
                files: {
                    "dist/app.js": "app.js"
                }
            }
        }
    });

    grunt.registerTask("default", ["babel"]);
};

Now that we have Babel.js installed, let’s test it. I have created a JavaScript 6 class called Test.

class Test {
	getItems() {
		return [];
	}

	saveItem(item) {
	}
}

Running Babel.js via Grunt outputs a new app.js, but with the equivalent JavaScript 5 syntax.

Transpiled app.js

"use strict";

var _prototypeProperties = function (child, staticProps, instanceProps) { if (staticProps) Object.defineProperties(child, staticProps); if (instanceProps) Object.defineProperties(child.prototype, instanceProps); };

var _classCallCheck = function (instance, Constructor) { if (!(instance instanceof Constructor)) { throw new TypeError("Cannot call a class as a function"); } };

var Test = (function () {
	function Test() {
		_classCallCheck(this, Test);
	}

	_prototypeProperties(Test, null, {
		getItems: {
			value: function getItems() {
				return [];
			},
			writable: true,
			configurable: true
		},
		saveItem: {
			value: function saveItem(item) {},
			writable: true,
			configurable: true
		}
	});

	return Test;
})();

Babel.js is a great way to use the new features of JavaScript 6 today.

The biggest risk is whether the features transpiled with Babel.js will behave the same as the browsers’ native implementations. There is always the chance they won’t. Babel.js claims its implementation is 100% spec compliant. This is good, but a browser maker might still interpret the spec differently (we’ve never seen that happen, right?).

Worst-case scenario, you have to keep using Babel.js longer than you intended. Yes, you might have to rework some code; that’s the risk you take when using cutting-edge tech.

There is no guarantee the browser makers won’t implement features differently, and using Babel.js might save you during the first rounds of browser releases supporting JavaScript 6.

I can envision a world where most websites use a transpiler to take advantage of the latest features, because browsers tend to lag a few years behind.

In the end, most advancements in programming languages make developers more productive. The sooner we can use the new features, the sooner we can realize those productivity gains.

Securing AngularJS with Claims

At some point an application needs authorization, meaning different levels of access behave differently on a web site (or in any application, for that matter). It can be anything from seeing data to entire areas that are not accessible by a group of users.

In non-Single Page Applications (SPAs), a claim or role is associated with data or an area of the application; either the user has the role or claim, or he does not. In a SPA it’s the same, but with a huge disclaimer: a SPA is downloaded to the browser, and at that point the browser has total control over the code. A nefarious person can change the code to do his bidding.

Because SPAs can’t be secured, authentication and authorization in a SPA are simply user experience. All meaningful security must be done on the web server. This article does not cover securing your API against attacks; I recommend watching a video from Pluralsight or reading a paper that addresses security for your server technology.

The intent of this article is to show you how I added an authorization user experience to my Angular 1.x SPA.

Security Scopes

I have identified 3 areas of the UI that need authorization: Elements (HTML), Routes, and Data.

Just a reminder: securing a SPA is no substitute for securing the server. Permissions on the client are simply there to keep honest people honest and to provide the user with a good experience.

The 3 areas in detail:


Elements

You’ll need to hide specific HTML elements. It could be a label, a table with data, a button, or any other element on the page.


Routes

You’ll want to hide entire routes. In certain cases you don’t want the user accessing a view. By securing the route, a user can’t navigate to the view; instead they are shown a “You are not authorized to navigate to this view” message.


Data

Sometimes hiding the elements in the view is not enough. An astute user can simply view the source and see the hidden data in the HTML, or watch it stream to the browser. What we want is for the data not to be retrieved in the first place.

Adding security is tricky. At first I tried constraining access at the HTTP API (on the client). I quickly realized this wouldn’t work: a user might not have direct access to the data, but this doesn’t mean they don’t have indirect access to it. At the HTTP API layer (usually one of the lowest in the application) we can’t tell the context of the call, and therefore can’t apply security concerns to it.

Below I have provided code samples:


I created a service for the authorization-checking code. This is the heart of the authorization: all authorization requests use this service to check whether the user is authorized for the particular action.

    .service('AuthorizationContext', function(_, Session){

        this.authorizedExecution = function(key, action){

            //Looking for the claim key that was passed in. If it exists in the claim set, then execute the action.
            var claims = Session.claims; //assumption: the Session service exposes the logged-in user's claims
            var claim = findKey(key, claims);

            //If the claim was found then execute the call.
            //If it was not found, do nothing.
            if(claim !== undefined){
                action();
            }
        };

        this.authorized = function(key, callback){
            //Looking for the claim key that was passed in. If it exists in the claim set, the user is authorized.
            var claims = Session.claims; //assumption: the Session service exposes the logged-in user's claims
            var claim = findKey(key, claims);

            var valid = claim !== undefined;
            callback(valid);
        };

        //this.agencyViewKey = '401D91E7-6EA0-46B4-9A10-530E3483CE15';

        function findKey(key, claims){
            var claim = _.find(claims, function(item){
                return item.value === key;
            });

            return claim;
        }
    });

Authorize Directive

The authorize directive can be applied to any HTML element that you want to hide from users without a specific level of access. If the user has the access token as part of their claims, they are allowed to see the element. If they don’t, it’s hidden from them.

    .directive('authorize', ['$compile', 'AuthorizationContext', function($compile, AuthorizationContext) {
        return {
            restrict: 'A',
            replace: true,
            //can't have an isolated scope in a shared directive
            link:function ($scope, element, attributes) {

                var securityKey = attributes.authorize;
                AuthorizationContext.authorized(securityKey, function(authorized){
                    var el = angular.element(element);
                    el.attr('ng-show', authorized);

                    //remove the attribute, otherwise it creates an infinite loop.
                    el.removeAttr('authorize');
                    $compile(el)($scope);
                });
            }
        };
    }]);

I rely heavily on tabs in my application. I apply the authorize directive to the tab that I want to hide from users without the proper claims.

<tab ng-cloak heading="Users" authorize="{{allowUserManagement}}">
...html content
</tab>


I’m using the ui-router. Unfortunately for those who are not, I don’t have code for the out-of-the-box AngularJS router.

In the $stateChangeStart event I authenticate the route. This is the code in that event:

$rootScope.$on("$stateChangeStart", function(event, toState, toParams, fromState, fromParams){
   AuthenticationManager.authenticate(event, toState, toParams);
});
Here is the function that authorizes the route. If it’s authorized, the route is allowed to continue; if it’s not, a message is displayed to the user and they are directed to the home page.

function authorizedRoute(toState, location, toaster, breadCrumbs){
   if(toState.authorization !== undefined){
       AuthorizationContext.authorized(toState.authorization, function(authorized){
           if(!authorized){
               toaster.pop('error', 'Error', 'You are not authorized to view this page.');
               location.path('/'); //assumed: send the user to the home page, as described above
           }
           //else: authorized, allow the navigation to continue
       });
   } else{
       //no authorization key on the route; allow navigation
   }
}

In this route definition you’ll notice a property called ‘authorization’. If the user has this claim, they are allowed to proceed.

        .config(function config($stateProvider){
            $stateProvider.state( 'agency', {
                url: '/agency',
                controller: 'agency.index',
                templateUrl: 'agency/agency.tpl.html',
                authenticate: true,
                authorization: '401D91E7-6EA0-46B4-9A10-530E3483CE15', //assumed example: the agencyViewKey claim from the service above
                data:{ pageTitle: 'Agency' }
            });
        });


In some cases, you don’t want to make a request to the server for the data at all unless the user is authorized. If the user has the claim, they’ll be allowed to make the request.

The AuthorizationContext at the beginning of the article shows the code for authorizedExecution. Here you see its usage:

AuthorizationContext.authorizedExecution(Keys.authorization.allowUserManagement, function(){
	//execute code, if the logged-in user has rights.
});


As I mentioned above, this is no substitute for securing the server. This code works for providing a wonderful user experience.

Adding Custom Converters in AutoMapper with Assembly Scanning

AutoMapper is a wonderful tool. Those who haven’t used it are missing out.

Simply put, AutoMapper is a convention-based object-to-object mapper. For example, your application has a boatload of view models. How are the view models mapped to domain models? Before AutoMapper, it was lines of left-to-right value-setting code. Using AutoMapper eliminates much of this code. To learn more, visit http://automapper.org (I’m not affiliated with AutoMapper; I’m just a fan).

In my current project I use AutoMapper to map domain models to view models and vice versa. For each domain model this equates to roughly two ITypeConverter implementations. Predictably, the number of mappings has increased as the application has grown. So much so that Visual Studio began having trouble parsing the list of mappings.

Here is a short sample of custom converters:

                c.CreateMap<NewAgency, Agency>().ConvertUsing(new NewAgencyToAgencyConverter());

                c.CreateMap<DependentModel, EmployerGroupMember>().ConvertUsing(new DependentModelToMemberConverter());

                c.CreateMap<EmployeeModel, EmployerGroupMember>().ConvertUsing(new EmployeeToMemberConverter());
                c.CreateMap<NewEmployerGroup, EmployerGroup>().ConvertUsing(new NewEmployerGroupToEmployerGroupConverter());
                c.CreateMap<UpdateEmployerGroup, EmployerGroup>().ConvertUsing(new UpdateEmployerGroupToEmployerGroupConverter());

                c.CreateMap<EmployerGroupMember, object>().ConvertUsing(new EmployerGroupMemberToResultConverter());

                c.CreateMap<EmployerGroup, object>().ConvertUsing(new EmployerGroupToResultConverter());
                c.CreateMap<EmployerGroupAddress, object>().ConvertUsing(new EmployerGroupAddressToObjectConverter());
                c.CreateMap<NewLocation, EmployerGroupAddress>().ConvertUsing(new NewLocationToEmployerGroupAddressConverter());
                c.CreateMap<UpdateLocation, EmployerGroupAddress>().ConvertUsing(new UpdateLocationToEmployerGroupAddressConverter());
                c.CreateMap<User, object>().ConvertUsing(new UserToObjectResult());
                c.CreateMap<List<Carrier>, object>().ConvertUsing(new CarrierCollectionToResultConverter());
                c.CreateMap<Benefit, object>().ConvertUsing(new BenefitToResultConverter());
                c.CreateMap<List<Benefit>, object>().ConvertUsing(new BenefitCollectionToResultConverter());
                c.CreateMap<NewBenefit, Benefit>().ConvertUsing(new NewBenefitToBenefitConverter());
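
For reference, each of these converters is a class implementing AutoMapper’s ITypeConverter<TSource, TDestination> interface. Here is a minimal sketch of what one might look like; the NewAgency and Agency property names are assumptions for illustration:

    public class NewAgencyToAgencyConverter : ITypeConverter<NewAgency, Agency>
    {
        //AutoMapper 3.x-era signature: Convert receives the ResolutionContext.
        public Agency Convert(ResolutionContext context)
        {
            var source = (NewAgency)context.SourceValue;

            return new Agency
            {
                Name = source.Name //property mapping assumed for illustration
            };
        }
    }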

I can’t tell you how many times I created a custom converter and forgot to add it to the list. With assembly scanning, all this pain goes away. The downside is that AutoMapper does not support assembly scanning. However, most modern Dependency Injection containers do, so all we need to do is use a DI container with scanning capabilities. My preferred container is StructureMap, which does support assembly scanning.

Firstly, AutoMapper’s ITypeConverter implementations need to be added to StructureMap’s manifest:

x.Scan(scan => { scan.TheCallingAssembly(); scan.ConnectImplementationsToTypesClosing(typeof(ITypeConverter<,>)); }); //TheCallingAssembly() assumed: scanning needs an assembly source

Retrieving the ITypeConverter implementations isn’t as easy as it seems. My first attempt was to use StructureMap’s GetAllInstances method:

var items = ObjectFactory.GetAllInstances(typeof(ITypeConverter<,>));

No cigar. At first I was mystified: why didn’t this work? After all, this is how I registered the implementations. Without going into detail, StructureMap doesn’t track this type of information. It finds all the implementations of ITypeConverter<,> and adds the concrete types to the manifest. Since most developers don’t want to retrieve all the implementations by the open generic interface, this information is discarded.

It turns out that getting the implementations of ITypeConverter<,> is a bit harder than I thought. A little reflection magic is needed:

        private static IEnumerable<object> GetITypeConverters()
        {
            //Enumerate the plugin types StructureMap registered during scanning (ObjectFactory.Model is StructureMap's diagnostics API).
            IEnumerable<IPluginTypeConfiguration> handlers =
                ObjectFactory.Model.PluginTypes
                    .Where(x => x.PluginType.IsGenericType &&
                                x.PluginType.GetGenericTypeDefinition() ==
                                typeof (ITypeConverter<,>));

            var allInstances = new List<object>();

            foreach (IPluginTypeConfiguration pluginTypeConfiguration in handlers)
            {
                var instancesForPluginType = ObjectFactory.GetAllInstances(pluginTypeConfiguration.PluginType).OfType<object>();
                allInstances.AddRange(instancesForPluginType);
            }

            return allInstances;
        }

We now have all the implementations of ITypeConverter<,>. The next step is to add them to AutoMapper.

        public static void ConfigureAutoMapper()
        {
            var items = GetITypeConverters();

            Mapper.Initialize(c =>
            {
                foreach (var item in items)
                {
                    //Pull TSource and TDestination from the closed ITypeConverter<,> interface and register the mapping.
                    string interfaceName = typeof (ITypeConverter<,>).FullName;
                    c.CreateMap(item.GetType().GetInterface(interfaceName).GenericTypeArguments[0], item.GetType().GetInterface(interfaceName).GenericTypeArguments[1]).ConvertUsing(item.GetType());
                }
            });
        }

That’s it. It’s not as straightforward as it could be, but it works. Luckily this code runs once at application startup; otherwise we might have performance concerns.

Here is the complete code.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using AutoMapper;
using StructureMap;
using StructureMap.Query;

namespace Grover.Api.App_Start
{
    public class AutoMapperInitialize
    {
        public static void ConfigureAutoMapper()
        {
            var items = GetITypeConverters();

            Mapper.Initialize(c =>
            {
                foreach (var item in items)
                {
                    string interfaceName = typeof (ITypeConverter<,>).FullName;
                    c.CreateMap(item.GetType().GetInterface(interfaceName).GenericTypeArguments[0], item.GetType().GetInterface(interfaceName).GenericTypeArguments[1]).ConvertUsing(item.GetType());
                }
            });
        }

        private static IEnumerable<object> GetITypeConverters()
        {
            IEnumerable<IPluginTypeConfiguration> handlers =
                ObjectFactory.Model.PluginTypes
                    .Where(x => x.PluginType.IsGenericType &&
                                x.PluginType.GetGenericTypeDefinition() ==
                                typeof (ITypeConverter<,>));

            var allInstances = new List<object>();

            foreach (IPluginTypeConfiguration pluginTypeConfiguration in handlers)
            {
                var instancesForPluginType = ObjectFactory.GetAllInstances(pluginTypeConfiguration.PluginType).OfType<object>();
                allInstances.AddRange(instancesForPluginType);
            }

            return allInstances;
        }
    }
}

3 Reasons Why Code Reviews are Important

A great code review will challenge your assumptions and give you constructive feedback. For me, code reviews are an essential part of growing as a software engineer.

Writing code is an intimate process. Software engineers spend years learning the craft of software engineering, and when something critical is said of our creations it’s hard not to take it personally. I find myself, at times, getting defensive when hearing criticism. I know the reviewer means well, but this isn’t always comforting. If it weren’t for honest feedback from some exceptional software engineers, I wouldn’t be half the software engineer I am today.

Benefits of Code Reviews

1. Finding Bugs

Sometimes it’s the simple act of reading the code that exposes an error. Sometimes it’s the other developer who spots it. Regardless, simply walking through the code is enough to expose potential issues.

I think of my mistakes as the grindstone to my sword. To quote Michael Jordan:

I’ve missed more than 9000 shots in my career. I’ve lost almost 300 games. 26 times, I’ve been trusted to take the game winning shot and missed. I’ve failed over and over and over again in my life. And that is why I succeed.

2. Knowledge Transfer

Sharing your work with others is humbling. In many ways you are the code. I know that I feel vulnerable when I share my code.

This is a great opportunity to learn from and to teach other engineers. In sharing your code you are taking the reviewers on a journey, a journey into the code and into aspects of you. A lot can be learned about you by how you write code.

At the end of the code review the reviewers should have a good understanding of how the code works and the rationale behind it, and they will have learned a little bit about you.

3. Improving the Health of the Code

As I mentioned, the more times you read the code, the better the code becomes. The more reviewers, the better the chance one of them will suggest an improvement. Some might think skill level matters; it doesn’t. Less experienced software engineers don’t have the deep technical knowledge of experienced software engineers, but they also don’t have to wade through all that mental technical baggage to see opportunities for improvement.

Code reviews give us the benefit of evaluating our code. There will always be something to change to make it just a little bit better.

Coding, in this way, is much like writing. For a good piece to come into focus the code must rest and be re-read. The more times you repeat this process the better the code will become.

In Closing

Some companies don’t officially do code reviews; that’s OK. Seek out other engineers. Most software engineers will be happy to take 10 to 15 minutes to look over your code.

Creating Reports Using Encrypted Data

I wrote “Implementing Transparent Encryption with NHibernate Listeners” last year. If you haven’t read it, I recommend you do. Even if NHibernate is not your cup of tea, you are guaranteed to learn something.

Michael Johnson from Sharpened Developer commented on the post asking “How do I report on encrypted data?”

Most reports are aggregations, and encrypted data is typically personally identifiable information (PII); in many cases this data is not included in reports. In the cases where we do need encrypted data for reports, we have options.

Reports within the Application

Generating reports within the application is the simplest way to access the data. Most likely, encryption and decryption mechanisms are already in place. There are a number of third-party libraries (Telerik, DevExpress, etc.) available for building reports. Building reports within the application works well on small sets of data, but as the data grows this simply doesn’t scale.

Using a Reporting Database

The long-term solution is to move the data into a reporting database. Once the data is in its own database, an enterprise reporting solution can be put in place.

This leads me to the more interesting aspect: keeping the reporting data current.

Firstly, an externally facing application should not have direct access to the decrypted reporting database. If the external application is compromised, it’s a good bet the attackers will soon have access to the decrypted reporting database.

Ideally decrypted data is only available on the internal network.

With all the hacking (Sony) lately, I wonder if it’s a good idea to have any sensitive data decrypted at rest… but I digress. That’s a discussion for another time.

There are two types of reporting data: real time and everything else.

Real-time data requires the reporting data to be nearly in sync with the production data. The only way to report on real-time data is to capture changes as they happen. This is fraught with issues, but a cleverly crafted asynchronous process using a message queue is a good solution: a service watching the message queue can process the messages in near real-time. The service is decoupled, providing a layer of security, and it’s asynchronous, minimizing the performance impact.
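
To make that concrete, here is a minimal sketch of such a service. The message shape, the decryption step, and the use of an in-memory BlockingCollection as a stand-in for a real message queue (MSMQ, RabbitMQ, etc.) are all assumptions for illustration:

using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

//Hypothetical message published by the application whenever an encrypted record changes.
public class RecordChangedMessage
{
    public Guid RecordId { get; set; }
    public string EncryptedPayload { get; set; }
}

public class ReportingSyncService
{
    private readonly BlockingCollection<RecordChangedMessage> _queue;

    public ReportingSyncService(BlockingCollection<RecordChangedMessage> queue)
    {
        _queue = queue;
    }

    public Task Start()
    {
        //Consume messages as they arrive and write them to the reporting database.
        return Task.Run(() =>
        {
            foreach (var message in _queue.GetConsumingEnumerable())
            {
                var decrypted = Decrypt(message.EncryptedPayload); //runs on the internal network
                SaveToReportingDatabase(message.RecordId, decrypted);
            }
        });
    }

    private static string Decrypt(string payload)
    {
        return payload; //stand-in for the application's decryption routine
    }

    private static void SaveToReportingDatabase(Guid id, string data)
    {
        //stand-in for writing to the reporting store
    }
}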

When data freshness isn’t a concern, a daily or even weekly sync job is a good solution. It might be as simple as restoring a production database backup on the reporting server.

Lastly, avoid syncing directly from production databases; the last thing you want is to hinder production’s performance by inundating it with database requests.

In Closing

Reporting is rarely thought of at the onset of an application. As applications mature, stakeholders want to mine the data for insights. This inevitably turns into a growing collection of reporting requests that seems to never end. A solid reporting solution will help abate these requests. In the best scenario, the stakeholders can create their own reports.

* image reference (http://www.ajboggs.com/consulting/software-development/reporting-solutions/)

Algorithms: Binary Search

You are presented with a set of 1000 numbers and tasked with finding the position of 73. The most obvious approach is to start with the first number and evaluate every number until 73 is found. This approach is called a linear search algorithm, or sequential search algorithm. It works for a set of 1000 numbers, but consider what happens if the set is increased to 10 million numbers. A linear search cannot scale and is simply not suited for that many numbers, but a binary search algorithm is.

A binary search algorithm requires the data to be sorted. Once sorted, the value is found by taking the middle value and comparing it to the search value. If the search value is lower than the middle value, we take the first half of the numbers, find its middle value, and once again compare it to the search value. (If the search value is higher, we do the same with the second half.) This process repeats until we find the search value or we run out of values.

Comparing the two algorithms for performance: a linear search of 10 million numbers, assuming 1 second per comparison, will consume roughly 116 days in the worst case. A binary search of 10 million numbers, again assuming 1 second per comparison, will only consume about 23 seconds, because each comparison halves the remaining numbers (log2 of 10 million is roughly 23). When searching for numbers, the binary search wins hands down.

Binary Search implemented in C#:

        public int BinarySearch(int number, int[] collection)
        {
            //assumes the collection is sorted in ascending order
            int low = 0;
            int high = collection.Length - 1;

            while (low <= high)
            {
                int mid = (low + high) / 2;

                if (collection[mid] < number)
                    low = mid + 1;
                else if (collection[mid] > number)
                    high = mid - 1;
                else
                    return mid; //found: return the position of the number
            }

            return -1; //not found
        }
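
A quick usage sketch (assuming the method is reachable, e.g. made static):

var numbers = new[] { 2, 5, 9, 73, 101, 345 }; //must already be sorted
int position = BinarySearch(73, numbers);      //returns 3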