Chuck Conway

Index Fragmentation in SQL Azure, Who Knew!

I’ve been on my project for over a year, and during that year it has grown significantly, both as an application and in data. It’s been nonstop new features; I’ve rarely gone back and refactored code. Last week I noticed some of the data-heavy pages were loading slowly. In the worst case, one view could take up to 30 seconds to load: 10 times my maximum acceptable load time...

Call me naive, but I didn’t consider index fragmentation in SQL Azure. It’s the cloud! It’s supposed to be immune to on-premises issues… Apparently index fragmentation is an issue in the cloud too.

I found a couple of queries on an MSDN blog that identify the fragmented indexes and then rebuild them.

After running the first query to show index fragmentation, I found some indexes with over 50 percent fragmentation. According to the article, anything over 10% needs attention.

First Query: Display Index Fragmentation

--Get the fragmentation percentage
SELECT OBJECT_NAME(ps.object_id) AS TableName
    , i.name AS IndexName
    , ips.avg_fragmentation_in_percent
FROM sys.dm_db_partition_stats ps
INNER JOIN sys.indexes i
    ON ps.object_id = i.object_id
    AND ps.index_id = i.index_id
CROSS APPLY sys.dm_db_index_physical_stats(DB_ID(), ps.object_id, ps.index_id, null, 'LIMITED') ips
ORDER BY ps.object_id, ps.index_id

Second Query: Rebuild the Indexes

--Rebuild the indexes
DECLARE @TableName varchar(255)

DECLARE TableCursor CURSOR FOR
    SELECT '[' + IST.TABLE_SCHEMA + '].[' + IST.TABLE_NAME + ']' AS [TableName]
    FROM INFORMATION_SCHEMA.TABLES IST
    WHERE IST.TABLE_TYPE = 'BASE TABLE'

OPEN TableCursor
FETCH NEXT FROM TableCursor INTO @TableName
WHILE @@FETCH_STATUS = 0
BEGIN
    PRINT('Rebuilding Indexes on ' + @TableName)
    BEGIN TRY
        EXEC('ALTER INDEX ALL ON ' + @TableName + ' REBUILD WITH (ONLINE = ON)')
    END TRY
    BEGIN CATCH
        PRINT('Cannot do rebuild with Online=On option, taking table ' + @TableName + ' down for doing rebuild')
        EXEC('ALTER INDEX ALL ON ' + @TableName + ' REBUILD')
    END CATCH
    FETCH NEXT FROM TableCursor INTO @TableName
END

CLOSE TableCursor
DEALLOCATE TableCursor



Moving from Wordpress to Hexo

I love Wordpress - it just works. Its community is huge, and it’s drop-dead simple to get running.

I started blogging in 2002 when the blogging landscape was barren. Blogging platforms were few and far between. Heck “blogging” wasn’t even a term.

My first blogging engine was b2, the precursor to Wordpress. In 2003, Wordpress forked b2 and started the journey to the Wordpress we all now love. At the time I felt conflicted. Why create a second blogging platform? Why not lend support to b2? Wasn’t b2 good enough? Ultimately it was a good decision. Not too long after the fork, development on b2 stalled.

Wordpress has enjoyed a huge amount of popularity. It’s, by far, the most popular CMS (content management system).

So it’s with sadness that, after writing over 500 posts on b2 and Wordpress, I am leaving Wordpress. I simply don’t need its functionality and versatility. I am moving to Hexo, a Node-based blog/site generator.

Assets and posts are stored on the file system. Posts are written in Markdown. Hexo takes the Markdown and generates HTML pages, linking the pages as it moves through the content. Depending on which theme you choose and how you customize it, you can generate just about anything.

I hope you enjoy the change. The site is much faster, and the comments are now powered by Disqus. These changes will allow me to deliver a better and faster experience for you.


A General Ledger: A Simple C# Implementation

If you don’t have a basic understanding of general ledgers and double-entry bookkeeping, read my post explaining the basics of these concepts.

Over the years I’ve worked on systems with financial transactions. To have integrity with financial transactions, a general ledger is a must. Without one, you can’t account for revenue and accounts payable. Believe me, when your client wants detailed reports on their cash flow, you had better be able to generate them. Not to mention any legal issues you might encounter.

Early in my career, I had a discussion with a C-level executive in which I explained the importance of a general ledger. I was getting pushback because implementing the general ledger pushed out the timeline a bit. Eventually we won out and implemented a ledger, and thankfully so. Just as we predicted, the requests for reports started rolling in.

A basic schema for a general ledger:

CREATE TABLE [Accounting].[GeneralLedger] (
    [Id]             INT             IDENTITY (1, 1) NOT NULL,
    [Account_Id]     INT             NOT NULL,
    [Debit]          DECIMAL (19, 4) NULL,
    [Credit]         DECIMAL (19, 4) NULL,
    [Transaction_Id] INT             NOT NULL,
    [EntryDateTime]  DATETIME        NOT NULL
);

The C# class.

public class GeneralLedger
{
    public int Id { get; set; }

    public Account Account { get; set; }

    public decimal Debit { get; set; }

    public decimal Credit { get; set; }

    public Transaction Transaction { get; set; }

    public DateTime EntryDateTime { get; set; }
}

In my system I track all the transactions in and out of the system. For example, if a customer pays an invoice, I track the total payment in the general ledger. The credit account is called “Revenue” and the debit account is my company’s account. Remember, for each financial transaction two records are entered into the general ledger: a credit and a debit.
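To make the double entry concrete, here is a sketch of recording that invoice payment with the GeneralLedger class above. The RecordPayment helper, the account objects, and the amounts are illustrative, not from a real system:

```csharp
// Hypothetical helper: records both sides of a single payment.
public static List<GeneralLedger> RecordPayment(
    Account companyAccount, Account revenueAccount,
    decimal amount, Transaction transaction)
{
    var now = DateTime.UtcNow;
    return new List<GeneralLedger>
    {
        // Debit entry: increases the company's (internal) account.
        new GeneralLedger { Account = companyAccount, Debit = amount, Credit = 0m,
                            Transaction = transaction, EntryDateTime = now },
        // Credit entry: records the revenue side of the same transaction.
        new GeneralLedger { Account = revenueAccount, Debit = 0m, Credit = amount,
                            Transaction = transaction, EntryDateTime = now }
    };
}
```

Both rows share the same Transaction, which is what lets you trace the two sides of the entry back to a single business event.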

In my system I wanted higher fidelity, so I added a Transaction to the ledger. The transaction tracks the details of the entry. Only the transaction total is recorded in the general ledger; the transaction details (taxes, per-item costs, etc.) tell the story of how we arrived at the total.

Let’s look at some data. Find an account with some credits and debits. Sum all the debit rows and sum all the credit rows, then subtract the credits from the debits. If the number is positive, the account finished in the black (has a profit); if it’s negative, the account finished in the red (has a loss).

Your CEO wants to know how much money a client spent with your company. No problem. Again, just sum the debits and the credits for the client’s account and subtract one from the other.
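Both questions reduce to the same aggregate. Here is a sketch in T-SQL against the schema above (the account id is illustrative):

```sql
-- Balance (or total spend) for one account; 42 is a sample account id.
SELECT SUM(ISNULL(gl.Debit, 0)) - SUM(ISNULL(gl.Credit, 0)) AS Balance
FROM [Accounting].[GeneralLedger] gl
WHERE gl.[Account_Id] = 42;
```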

I hope this has helped you understand the power of the ledger and why it’s important when dealing with financial transactions.


A General Ledger : Understanding the Ledger

What is a general ledger and why is it important? To find out, read on!

What is a general ledger? A general ledger is a log of all the transactions relating to assets, liabilities, owners’ equity, revenue, and expenses. It’s how a company can tell whether it’s profitable or taking a loss. In the US, this is the most common way to track the financials.

To understand how a general ledger works, you must understand double-entry bookkeeping. So, what is double-entry bookkeeping? I’m glad you asked. Imagine you have a company and your first customer pays you $1000. To record this, you add the transaction to the general ledger. Two entries are made: a debit, increasing the value of the assets in your cash account, and a credit to the revenue account (the money given to you by your customer’s payment). Think of the cash account as an internal account, meaning an account where you track both the debits (increases in value) and the credits (decreases in value). The revenue account is an external account, meaning you only track the credit entries. External accounts don’t impact your business; they merely tell you where the money is coming from and where it’s going.

Here is a visual of our first customer’s payment (reconstructed from the description above):

Account  | Debit | Credit
---------|-------|-------
Cash     | $1000 |
Revenue  |       | $1000


If the sum of the debit column and the sum of the credit column don’t equal each other, there is an error in the general ledger. When both sides are equal, the books are said to be balanced. You want balanced books.

Let’s look at a slightly more complex example.

You receive two bills, water and electric, both for $50. You pay them using part of the cash in your cash account. The current balance is $1000. What entries are needed? Take your time. I’ll wait.


Four entries are added to the general ledger: two credit entries for cash and one debit entry for each of the water and electric accounts. Notice the cash entries are credits (table reconstructed from the description):

Account  | Debit | Credit
---------|-------|-------
Cash     |       | $50
Water    | $50   |
Cash     |       | $50
Electric | $50   |

For bonus points, how would we calculate the remaining balance of the cash account? Take your time. Again, I’ll wait for you.

To get the remaining balance we need to identify each cash entry.


To get the balance of the cash account, we do the same thing we did to balance the books, but this time we only look at the cash account. We take the sum of the debit column for the cash account and the sum of the credit column for the cash account and subtract one from the other. The remaining value is the balance of the cash account: $1000 − ($50 + $50) = $900.


And that, folks, is the basics of a general ledger and double-entry bookkeeping. I hope you see the importance of this approach, as it gives you the ability to quickly see if there are errors in your books and high fidelity in tracking payments and revenues.

This is just the tip of the iceberg in accounting. If you’d like to dive deeper into accounting, have a look at the accounting equation: Assets = Liabilities + Owner’s Equity.

Hopefully this post has given you a basic understanding of what a general ledger is and how double-entry bookkeeping works. In the next post I’ll go into how to implement a general ledger in C#.


A Tour of Bidwell Mansion

This week I was out of town and didn't find the time to write a technical post. I did, however, tour the Bidwell Mansion in Chico, California, found it fascinating, and wanted to share a few photos from my trip. The technical content will return next week.

John Bidwell was an influential political figure in the 1800s. He helped California present its case for statehood, and he brought agriculture to California. John ran for governor of California three times and lost all three times. In many ways, John was ahead of his time.

In the 1800s and into the 1900s, American Indians were hunted and persecuted. In a time when this was a common occurrence, John Bidwell took exception to it. John invited the local Mechoopda Maidu Indian tribe to move their village onto his land. In essence, he provided protection.

The Bidwell Mansion is a three-story Italianate-style mansion. There are 26 rooms, including a ballroom on the third floor.

The Mansion


John Bidwell's Desk


Sitting Room

After dinner, people would migrate into this room to talk. Notice the fireplace. Most rooms in the mansion have a fireplace; wood was the only source of heat.


High-chair with Wheels

The wheels fold down, and the high-chair turns into a stroller.





Your own library. I'd love to have one of those.



You don't see such grand staircases anymore.


This is a beautiful mansion. If you are ever in the Chico area, I recommend stopping to see it.


9 JavaScript Libraries that Every Serious Application Needs

Over the past year I have been developing a line-of-business application with AngularJs. AngularJs has many out-of-the-box features that just work. That’s the beauty of AngularJs. It’s also its downside. You can’t be good at everything; some of the APIs lack performance and features.

Through trial and error I've discovered which JavaScript libraries fill the gaps. I've compiled a list of these libraries below.


jQuery This is the swiss army knife of the JavaScript world. It is immensely helpful when I want to get close to the DOM and manipulate it.

Dropzone I have yet to find a better library for queuing and uploading documents.

Lodash Filtering, sorting, and mapping data with this library is a must. If you're still using for and forEach statements, you're doing it wrong.

Toastr Displaying messages to the client is always a challenge. This library makes it simple. The messages are configurable and look professional.

Radio AngularJs has known issues with pub/sub. Whether you choose $emit or $broadcast, each has its failings. Radio is an alternative pub/sub library that just works and avoids the issues found in the Angular options.

Pikaday Angular UI Bootstrap has a fully featured datepicker. However, it was discovered to have a design flaw that in certain conditions slows the page to a 3 or 4 second load time. Pikaday is a lightweight, zippy datepicker alternative that just works.

Accounting.js Formatting money is a pain. It might seem straightforward, but it’s not. Accounting.js takes the pain out of it. It’s another library that just works.

filesize.js Filesize takes a number of bytes and converts it to KB, MB, or GB notation. It's that simple, and it works.
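To illustrate the kind of conversion filesize.js handles for you, here is a plain-JavaScript sketch. The formatBytes helper is hypothetical and not the library's API:

```javascript
// Hypothetical sketch of bytes-to-human-readable conversion (not filesize.js's API).
function formatBytes(bytes) {
    var units = ['B', 'KB', 'MB', 'GB', 'TB'];
    var i = 0;
    // Divide down by 1024 until the value fits the current unit.
    while (bytes >= 1024 && i < units.length - 1) {
        bytes /= 1024;
        i += 1;
    }
    return bytes.toFixed(1) + ' ' + units[i];
}

console.log(formatBytes(1536));    // "1.5 KB"
console.log(formatBytes(1048576)); // "1.0 MB"
```

The library handles edge cases and formatting options beyond this sketch, which is exactly why it's worth pulling in.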

moment.js Anyone dealing with time needs this library. Dates and times are a headache in JavaScript. The browsers try to do too much for you. When it works, it's great; when it doesn't, it's like fighting with a Chinese finger trap.


Proofing a Concept and Growing the Code

In a recent conversation, a friend mentioned he creates proofs of concept and then discards them after testing their viability. I’ve done the same in the past. This time it didn’t feel right. I cringed when he said he threw away the code. Maybe my days as a business owner have turned me into a frugal goat, but it felt like he was throwing away value.

Why don’t we continue forward with a proof of concept?

Generally, when I think of a proof of concept, I think of something hastily assembled. Many of the “best practices” are short-cut, if not downright ignored. The goal is to test the feasibility of an idea. At some point you’ll know whether the solution will work, and then you’ll decide whether to walk away from the idea and ditch the proof of concept or to move forward with it. If you move forward, why not keep coding and turn the proof of concept into the real deal?

I’ll be honest here: it seems ridiculous to create a solution, throw it away, and then create it again. That’s like poorly painting an entire house just to see if you like the color. “Yep, the color is good. Let’s paint the house for real this time, and this time we’ll do a good job.”

There is another way: evolve the code. Add in the missing infrastructure. This has the possibility of growing into a long-term, healthy solution.

Walking away from a proof of concept costs you value (time and money) that might otherwise be captured. Even if you don’t capture 100% of it, you’ll still be better off than chucking everything and walking away. So next time, give it a try. See if you can morph a proof of concept into a sustainable project. I think you might be surprised at the end result.


Using JavaScript 6: Getting started Today with Babel.js

There has been a lot of talk about the new version of JavaScript. Officially called ECMAScript 6, JavaScript 6 is a few months (sometime mid-2015) away from officially becoming a recommendation. Then starts the arduous march toward browser support. Many browsers support a subset of JavaScript 6; how long full support will take is anyone’s guess. But don’t fret, we can use most of JavaScript 6 today.

I’d like to introduce you to Babel.js (formerly called 6to5), a JavaScript 6 to JavaScript 5 transpiler. Transpilers allow unsupported features to be used today while we wait for the browsers to implement JavaScript 6. They convert new JavaScript 6 features into JavaScript 5 syntax. Below I have a simple example of a JS6 class that is transpiled to the JS5 equivalent.

Getting started with Babel.js is mind-numbingly simple, thanks to Babel.js’s extensive documentation. I encourage you to visit their site.

I have copied the grunt instructions below.


Install Babel.js globally:

$ npm install --global babel

Transpile a file from the command line:

$ babel script.js

Add the Grunt task to your project:

$ npm install --save-dev grunt-babel


module.exports = function(grunt){
    "use strict";

    grunt.initConfig({
        "babel": {
            options: {
                sourceMap: true
            },
            dist: {
                files: {
                    "dist/app.js": "app.js"
                }
            }
        }
    });

    grunt.loadNpmTasks("grunt-babel");

    grunt.registerTask("default", ["babel"]);
};

Now that we have Babel.js installed, let’s test it. I have created a JavaScript 6 class called Test.

class Test {
    getItems() {
        return [];
    }

    saveItem(item) {
    }
}

Running Babel.js via Grunt outputs a new app.js, but with the equivalent JavaScript 5 syntax.

Transpiled app.js

"use strict";

var _prototypeProperties = function (child, staticProps, instanceProps) { if (staticProps) Object.defineProperties(child, staticProps); if (instanceProps) Object.defineProperties(child.prototype, instanceProps); };

var _classCallCheck = function (instance, Constructor) { if (!(instance instanceof Constructor)) { throw new TypeError("Cannot call a class as a function"); } };

var Test = (function () {
    function Test() {
        _classCallCheck(this, Test);
    }

    _prototypeProperties(Test, null, {
        getItems: {
            value: function getItems() {
                return [];
            },
            writable: true,
            configurable: true
        },
        saveItem: {
            value: function saveItem(item) {},
            writable: true,
            configurable: true
        }
    });

    return Test;
})();

Babel.js is a great way to use the new features of JavaScript 6 today.
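Classes aren't the only JavaScript 6 feature Babel.js understands. As a quick illustrative sketch (not from the transpiled example above), arrow functions and template literals also transpile cleanly:

```javascript
// Arrow function (JS6): concise function syntax.
const square = (n) => n * n;

// Template literal (JS6): string interpolation without '+' concatenation.
const nums = [1, 2, 3].map(square);
const msg = `squares: ${nums.join(', ')}`;

console.log(msg); // "squares: 1, 4, 9"
```

Run through Babel.js, these become a plain function expression and ordinary string concatenation, just as the class above became a constructor function.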

The biggest risk is whether the features transpiled with Babel.js will behave the same as they will in the browser. There is always the chance they won’t. Babel.js claims its implementation is 100% spec compliant. This is good, but a browser maker might still interpret the spec differently (we’ve never seen that happen, right?).

Worst case scenario, you have to move forward with Babel.js longer than you intended. Yes, you might have to rework some code; that’s the risk you take when using cutting-edge tech.

There is no guarantee the browser makers won’t implement features differently. Using Babel.js might save you during the first round of browser releases supporting JavaScript 6.

I can envision a world where most websites use a transpiler to take advantage of the latest features, because browsers tend to lag a few years behind.

In the end, most advancements in programming languages make developers more productive. The sooner we can use the new features, the sooner we can realize those productivity gains.


Securing AngularJS with Claims

At some point an application needs authorization, meaning different levels of access behave differently on a web site (or anything, for that matter). It can be anything from seeing data to whole areas that are not accessible to a group of users.

In non-Single Page Applications (SPAs), a claim or role is associated with data or an area of the application; either the user has the role or claim or he does not. In a SPA it’s the same, but with a huge disclaimer: a SPA is downloaded to the browser, and at that point the browser has total control over the code. A nefarious person can change the code to do his bidding.

Because SPAs can’t be secured, authentication and authorization in a SPA are simply user experience. All meaningful security must be done on the web server. This article does not cover securing your API against attacks; I recommend watching a video from Pluralsight or reading a paper that addresses security for your server technology.

The intent of this article is to show you how I added an authorization user experience to my Angular 1.x SPA.

Security Scopes

I have identified 3 areas of the UI that need authorization: Elements (HTML), Routes, and Data.

Just a reminder: securing a SPA is no substitute for securing the server. Permissions on the client simply keep the honest people honest and provide the user with a good experience.

The 3 areas in detail:


Elements

You’ll need to hide specific HTML elements. It could be a label, a table with data, a button, or any other element on the page.


Routes

You’ll want to hide entire routes. In certain cases you don’t want the user accessing a view. By securing the route, a user can’t navigate to the view; instead, they are shown a “You are not authorized to navigate to this view” message.


Data

Sometimes hiding the elements in the view is not enough. An astute user can simply view the source and see the hidden data in the HTML, or watch it stream to the browser. What we want is for the data not to be retrieved in the first place.

Adding security is tricky. At first I tried constraining access at the HTTP API (on the client). I quickly realized this wouldn’t work. A user might not have direct access to the data, but that doesn’t mean they don’t have indirect access to it. At the HTTP API layer (usually one of the lowest in the application) we can’t tell the context of the call and therefore can’t apply security concerns to it.

Below I have provided code samples:


AuthorizationContext

I created a service for the authorization-checking code. This is the heart of the authorization: all authorization requests use this service to check whether the user is authorized for the particular action.

    .service('AuthorizationContext',function(_, Session){

        this.authorizedExecution = function(key, action){
            //Looking for the claim key that was passed in. If it exists in the claim set, then execute the action.
            //How Session exposes the user's claims is an assumption here; adapt it to your session service.
            Session.claims(function(claims){
                var claim = findKey(key, claims);

                //If the claim was found then execute the call.
                //If it was not found, do nothing.
                if(claim !== undefined){
                    action();
                }
            });
        };

        this.authorized = function(key, callback){
            //Looking for the claim key that was passed in. If it exists in the claim set, then pass true to the callback.
            Session.claims(function(claims){
                var claim = findKey(key, claims);
                var valid = claim !== undefined;
                callback(valid);
            });
        };

        //this.agencyViewKey = '401D91E7-6EA0-46B4-9A10-530E3483CE15';

        function findKey(key, claims){
            var claim = _.find(claims, function(item){
                return item.value === key;
            });

            return claim;
        }
    });

Authorize Directive

The authorize directive can be applied to any HTML element that you want to hide from users without a specific level of access. If the user has the access token as part of their claims, they are allowed to see the element. If they don’t, it’s hidden from them.

    .directive('authorize', ['$compile', 'AuthorizationContext', function($compile, AuthorizationContext) {
        return {
            restrict: 'A',
            replace: true,
            //can't have an isolated scope in a shared directive
            link:function ($scope, element, attributes) {

                var securityKey = attributes.authorize;
                AuthorizationContext.authorized(securityKey, function(authorized){
                    var el = angular.element(element);
                    el.attr('ng-show', authorized);

                    //remove the attribute, otherwise it creates an infinite loop.
                    el.removeAttr('authorize');
                    $compile(el)($scope);
                });
            }
        };
    }]);


I rely heavily on tabs in my application. I apply the authorize directive to the tab that I want to hide from users without the proper claims.

<tab ng-cloak heading="Users" authorize="">
...html content
</tab>


Routes

I’m using the ui-router. Unfortunately, for those who are not, I don’t have code for the out-of-the-box AngularJS router.

In the $stateChangeStart I authenticate the route. This is the code in that event.

$rootScope.$on("$stateChangeStart", function(event, toState, toParams, fromState, fromParams){
   AuthenticationManager.authenticate(event, toState, toParams);
});
The function that authorizes the route: if it’s authorized, the route is allowed to continue; if it’s not, a message is displayed and the user is directed to the home page.

function authorizedRoute(toState, location, toaster, breadCrumbs){
   if(toState.authorization !== undefined){
       AuthorizationContext.authorized(toState.authorization, function(authorized){
           if(!authorized){
               toaster.pop('error', 'Error', 'You are not authorized to view this page.');
               location.path('/');
           } else {
               //authorized; allow the route to continue
           }
       });
   } else {
       //no authorization key on this route; allow it to continue
   }
}

In this route definition, you’ll notice a property called ‘authorization’. If the user has this claim, they are allowed to proceed.

    .config(function config($stateProvider){
        $stateProvider.state( 'agency', {
            url: '/agency',
            controller: 'agency.index',
            templateUrl: 'agency/agency.tpl.html',
            authenticate: true,
            data:{ pageTitle: 'Agency' }
        });
    });


Data

In some cases, you don’t want to make a request to the server for the data at all. If the user has the claim, they’ll be allowed to make the request.

The AuthorizationContext at the beginning of the article shows the code for authorizedExecution. Here you see its usage.

AuthorizationContext.authorizedExecution(Keys.authorization.allowUserManagement, function(){
    //execute code, if the logged-in user has rights.
});


As I mentioned above, this is no substitute for securing the server. This code works for providing a wonderful user experience.


Adding Custom Converters in AutoMapper with Assembly Scanning

AutoMapper is a wonderful tool. Those who haven’t used it are missing out.

Simply put, AutoMapper is a convention-based object-to-object mapper. For example, your application has a boatload of view models. How are the view models mapped to domain models? Before AutoMapper, it was lines of left-to-right value-setting code. Using AutoMapper eliminates much of this code. To learn more, visit the AutoMapper site (I’m not affiliated with AutoMapper; I’m just a fan).

In my current project I use AutoMapper to map domain models to view models and vice versa. For each domain model this equates to roughly two ITypeConverter implementations. Predictably, the number of mappings has increased as the application has grown. So much so that Visual Studio began having trouble parsing the list of mappings.

Here is a short sample of custom converters:

c.CreateMap<NewAgency, Agency>().ConvertUsing(new NewAgencyToAgencyConverter());

c.CreateMap<DependentModel, EmployerGroupMember>().ConvertUsing(new DependentModelToMemberConverter());

c.CreateMap<EmployeeModel, EmployerGroupMember>().ConvertUsing(new EmployeeToMemberConverter());
c.CreateMap<NewEmployerGroup, EmployerGroup>().ConvertUsing(new NewEmployerGroupToEmployerGroupConverter());
c.CreateMap<UpdateEmployerGroup, EmployerGroup>().ConvertUsing(new UpdateEmployerGroupToEmployerGroupConverter());

c.CreateMap<EmployerGroupMember, object>().ConvertUsing(new EmployerGroupMemberToResultConverter());

c.CreateMap<EmployerGroup, object>().ConvertUsing(new EmployerGroupToResultConverter());
c.CreateMap<EmployerGroupAddress, object>().ConvertUsing(new EmployerGroupAddressToObjectConverter());
c.CreateMap<NewLocation, EmployerGroupAddress>().ConvertUsing(new NewLocationToEmployerGroupAddressConverter());
c.CreateMap<UpdateLocation, EmployerGroupAddress>().ConvertUsing(new UpdateLocationToEmployerGroupAddressConverter());
c.CreateMap<User, object>().ConvertUsing(new UserToObjectResult());
c.CreateMap<List<Carrier>, object>().ConvertUsing(new CarrierCollectionToResultConverter());
c.CreateMap<Benefit, object>().ConvertUsing(new BenefitToResultConverter());
c.CreateMap<List<Benefit>, object>().ConvertUsing(new BenefitCollectionToResultConverter());
c.CreateMap<NewBenefit, Benefit>().ConvertUsing(new NewBenefitToBenefitConverter());

I can’t tell you how many times I created a custom converter and forgot to add it to the list. With assembly scanning, all this pain goes away. The downside is AutoMapper does not support assembly scanning. However, most modern dependency injection containers do, so all we need is a DI container with scanning capabilities. My preferred container is StructureMap, which does support assembly scanning.

First, AutoMapper’s ITypeConverter implementations need to be added to StructureMap’s manifest:

x.Scan(scan =>
{
    scan.TheCallingAssembly(); //or whichever assemblies contain your converters
    scan.ConnectImplementationsToTypesClosing(typeof(ITypeConverter<,>));
});

Retrieving the ITypeConverter implementations isn’t as easy as it seems. My first attempt was to use StructureMap’s GetAllInstances method:

var items = ObjectFactory.GetAllInstances(typeof(ITypeConverter<,>));

No cigar. At first I was mystified. Why didn’t this work? After all, this is how I registered the implementations. Without going into detail, StructureMap doesn’t track this type of information. It finds all the implementations of ITypeConverter<,> and adds the concrete types to the manifest. Since most developers don’t want to retrieve all the implementations by the open generic interface, this information is discarded.

It turns out that getting the implementations of ITypeConverter<,> is a bit harder than I thought. A little reflection magic is needed:

    private static IEnumerable<object> GetITypeConverters()
    {
        IEnumerable<IPluginTypeConfiguration> handlers =
            ObjectFactory.Container.Model.PluginTypes
                .Where(x => x.PluginType.IsGenericType &&
                            x.PluginType.GetGenericTypeDefinition() ==
                            typeof (ITypeConverter<,>));

        var allInstances = new List<object>();

        foreach (IPluginTypeConfiguration pluginTypeConfiguration in handlers)
        {
            var instancesForPluginType = ObjectFactory.GetAllInstances(pluginTypeConfiguration.PluginType).OfType<object>();
            allInstances.AddRange(instancesForPluginType);
        }

        return allInstances;
    }

We now have all the implementations of ITypeConverter<,>. The next step is to add them to AutoMapper.

    public static void ConfigureAutoMapper()
    {
        var items = GetITypeConverters();

        Mapper.Initialize(c =>
        {
            foreach (var item in items)
            {
                string interfaceName = typeof (ITypeConverter<,>).FullName;
                c.CreateMap(
                    item.GetType().GetInterface(interfaceName).GenericTypeArguments[0],
                    item.GetType().GetInterface(interfaceName).GenericTypeArguments[1])
                 .ConvertUsing(item.GetType());
            }
        });
    }

That’s it. It’s not as straightforward as it could be, but it works. Luckily, this code runs once at application startup; otherwise we might have performance concerns.

Here is the complete code.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using AutoMapper;
using StructureMap;
using StructureMap.Query;

namespace Grover.Api.App_Start
{
    public class AutoMapperInitialize
    {
        public static void ConfigureAutoMapper()
        {
            var items = GetITypeConverters();

            Mapper.Initialize(c =>
            {
                foreach (var item in items)
                {
                    string interfaceName = typeof (ITypeConverter<,>).FullName;
                    c.CreateMap(
                        item.GetType().GetInterface(interfaceName).GenericTypeArguments[0],
                        item.GetType().GetInterface(interfaceName).GenericTypeArguments[1])
                     .ConvertUsing(item.GetType());
                }
            });
        }

        private static IEnumerable<object> GetITypeConverters()
        {
            IEnumerable<IPluginTypeConfiguration> handlers =
                ObjectFactory.Container.Model.PluginTypes
                    .Where(x => x.PluginType.IsGenericType &&
                                x.PluginType.GetGenericTypeDefinition() ==
                                typeof (ITypeConverter<,>));

            var allInstances = new List<object>();

            foreach (IPluginTypeConfiguration pluginTypeConfiguration in handlers)
            {
                var instancesForPluginType = ObjectFactory.GetAllInstances(pluginTypeConfiguration.PluginType).OfType<object>();
                allInstances.AddRange(instancesForPluginType);
            }

            return allInstances;
        }
    }
}