Category: Code

4 Practices for Lowering Your Defect Rate

Writing software is a battle between complexity and simplicity, and striking a balance between the two is difficult. The trade-off is between long, unmaintainable methods and too much abstraction. Tilting too far in either direction impairs code readability and increases the likelihood of defects.

Are defects avoidable? NASA tries, but they also do truckloads of testing. Their software is literally mission critical, a one-shot deal. For most organizations this isn't the case, and large amounts of testing are costly and impractical. While there is no substitute for testing, it's possible to write defect-resistant code without it.

In 20 years of coding and architecting applications, I've identified four practices that reduce defects. The first two practices limit the introduction of defects and the last two practices expose defects. Each practice is a vast topic on which many books have been written. I've distilled each practice into a couple of paragraphs and provided links to additional information where possible.

1. Write Simple Code

Simple should be easy, but it’s not. Writing simple code is hard.

Some will read this and think this means using simple language features, but this isn’t the case — simple code is not dumb code.

To keep it objective, I'm using cyclomatic complexity as a measure. There are other ways to measure complexity and other types of complexity; I hope to explore these topics in later articles.

Microsoft defines cyclomatic complexity as:

Cyclomatic complexity measures the number of linearly-independent paths through the method, which is determined by the number and complexity of conditional branches. A low cyclomatic complexity generally indicates a method that is easy to understand, test, and maintain.

What is a low cyclomatic complexity? Microsoft recommends keeping cyclomatic complexity below 25.

To be honest, I've found Microsoft's recommendation of a cyclomatic complexity of 25 to be too high. For maintainability and complexity, I've found the ideal method size is between 1 and 10 lines, with a cyclomatic complexity between 1 and 5.
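To make that target concrete, here is a minimal sketch of a method that fits within those limits; the method name and numbers are hypothetical, and a single conditional gives it a cyclomatic complexity of 2.

// Hypothetical example: one branch, a handful of lines, cyclomatic complexity of 2.
public decimal CalculateShippingCost(decimal orderTotal)
{
    // Orders at or above the free-shipping threshold ship for free.
    if (orderTotal >= 100m)
    {
        return 0m;
    }

    return 7.95m;
}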

Bill Wagner in Effective C#, Second Edition wrote on method size:

Remember that translating your C# code into machine-executable code is a two-step process. The C# compiler generates IL that gets delivered in assemblies. The JIT compiler generates machine code for each method (or group of methods, when inlining is involved), as needed. Small functions make it much easier for the JIT compiler to amortize that cost. Small functions are also more likely to be candidates for inlining. It’s not just smallness: Simpler control flow matters just as much. Fewer control branches inside functions make it easier for the JIT compiler to enregister variables. It’s not just good practice to write clearer code; it’s how you create more efficient code at runtime.

To put cyclomatic complexity in perspective, the following method has a cyclomatic complexity of 12.

public string ComplexityOf12(int status)
{
    var isTrue = true;
    var myString = "Chuck";

    if (isTrue)
    {
        if (isTrue)
        {
            myString = string.Empty;
            isTrue = false;

            for (var index = 0; index < 10; index++)
            {
                isTrue |= Convert.ToBoolean(new Random().Next());
            }

            if (status == 1 || status == 3)
            {
                switch (status)
                {
                    case 3:
                        return "Bye";
                    case 1:
                        if (status % 1 == 0)
                        {
                            myString = "Super";
                        }
                        break;
                }

                return myString;
            }
        }
    }

    if (!isTrue)
    {
        myString = "New";
    }

    switch (status)
    {
        case 300:
            myString = "3001";
            break;
        case 400:
            myString = "4003";
            break;

    }

    return myString;
}

A generally accepted complexity hypothesis postulates a positive correlation exists between complexity and defects.

The previous sentence is a bit convoluted. In the simplest terms: keeping code simple reduces your defect rate.

2. Write Testable Code

Studies have shown that writing testable code, without writing the actual tests, lowers the incidence of defects. This is so important and profound it needs repeating: writing testable code, without writing the actual tests, lowers the incidence of defects.

This raises the question: what is testable code?

I define testable code as code that can be tested in isolation. This means all the dependencies can be mocked from a test. An example of a dependency is a database query. In a test, the data is mocked (faked) and an assertion of the expected behavior is made. If the assertion is true, the test passes; if not, it fails.

Writing testable code might sound hard, but it is, in fact, easy when following Inversion of Control (Dependency Injection) and the S.O.L.I.D. principles. You'll be surprised at how easy it is and will wonder why you didn't start writing this way sooner.
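As a minimal sketch of the idea (the interface and class names here are hypothetical), the dependency is expressed as an interface and supplied through the constructor, so a test can hand in a fake instead of a real database query.

// Hypothetical example: the data access sits behind an interface and is injected,
// so OrderService can be tested in isolation with a fake repository.
public interface IOrderRepository
{
    decimal? GetTotal(int orderId);
}

public class OrderService
{
    private readonly IOrderRepository _orders;

    public OrderService(IOrderRepository orders)
    {
        _orders = orders;
    }

    public bool IsLargeOrder(int orderId)
    {
        var total = _orders.GetTotal(orderId);
        return total.HasValue && total.Value > 1000m;
    }
}

The example later in this article applies the same approach to a real method.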

3. Code Reviews

One of the most impactful practices a development team can adopt is code reviews.

Code reviews facilitate knowledge sharing between developers. Speaking from experience, openly discussing code with other developers has had the greatest impact on my code writing skills.

In the book Code Complete, Steve McConnell provides numerous case studies on the benefits of code reviews:

  • A study of an organization at AT&T with more than 200 people reported a 14 percent increase in productivity and a 90 percent decrease in defects after the organization introduced reviews.
  • The Aetna Insurance Company found 82 percent of the errors in a program by using inspections and was able to decrease its development resources by 20 percent.
  • In a group of 11 programs developed by the same group of people, the first 5 were developed without reviews. The remaining 6 were developed with reviews. After all the programs were released to production, the first 5 had an average of 4.5 errors per 100 lines of code. The 6 that had been inspected had an average of only 0.82 errors per 100. Reviews cut the errors by over 80 percent.

If those numbers don’t sway you to adopt code reviews, then you are destined to drift into a black hole while singing Johnny Paycheck’s Take This Job and Shove It.

4. Unit Testing

I'll admit, when I am up against a deadline, testing is the first thing to go. But the benefits of testing can't be denied, as the following studies illustrate.

Microsoft performed a study on the effectiveness of unit testing. They found that writing version 2 with automated testing (version 1 had no testing) immediately reduced defects by 20%, but at an additional cost of 30%.

Another study looked at Test Driven Development (TDD). It observed a more than twofold increase in code quality compared to similar projects not using TDD. TDD projects took, on average, 15% longer to develop. A side effect of TDD was that the tests served as documentation for the libraries and APIs.

Lastly, in a study on Test Coverage and Post-Verification Defects:

… We find that in both projects the increase in test coverage is associated with decrease in field reported problems when adjusted for the number of prerelease changes…

An Example

The following code has a cyclomatic complexity of 4.

    public void SendUserHadJoinedEmailToAdministrator(DataAccess.Database.Schema.dbo.Agency agency, User savedUser)
    {
        AgencySettingsRepository agencySettingsRepository = new AgencySettingsRepository();
        var agencySettings = agencySettingsRepository.GetById(agency.Id);

        if (agencySettings != null)
        {
            var newAuthAdmin = agencySettings.NewUserAuthorizationContact;

            if (newAuthAdmin.IsNotNull())
            {
                EmailService emailService = new EmailService();

                emailService.SendTemplate(new[] { newAuthAdmin.Email }, GroverConstants.EmailTemplate.NewUserAdminNotification, s =>
                {
                    s.Add(new EmailToken { Token = "Domain", Value = _settings.Domain });
                    s.Add(new EmailToken
                    {
                        Token = "Subject",
                        Value =
                    string.Format("New User {0} has joined {1} on myGrover.", savedUser.FullName(), agency.Name)
                    });
                    s.Add(new EmailToken { Token = "Name", Value = savedUser.FullName() });

                    return s;
                });
            }
        }
    }

Let’s examine the testability of the above code.

Is this simple code?

Yes, it is; the cyclomatic complexity is below 5.

Are there any dependencies?

Yes. There are two services: AgencySettingsRepository and EmailService.

Are the services mockable?

No, their creation is hidden within the method.

Is the code testable?

No, this code isn’t testable because we can’t mock AgencySettingsRepository and EmailService.

Example of Refactored Code

How can we make this code testable?

We inject (using constructor injection) AgencySettingsRepository and EmailService as dependencies. This allows us to mock them from a test and test in isolation.

Below is the refactored version.

Notice how the services are injected into the constructor. This allows us to control which implementation is passed into the SendEmail constructor. It's then easy to pass dummy data and intercept the service method calls.

public class SendEmail
{
    private IAgencySettingsRepository _agencySettingsRepository;
    private IEmailService _emailService;


    public SendEmail(IAgencySettingsRepository agencySettingsRepository, IEmailService emailService)
    {
        _agencySettingsRepository = agencySettingsRepository;
        _emailService = emailService;
    }

    public void SendUserHadJoinedEmailToAdministrator(DataAccess.Database.Schema.dbo.Agency agency, User savedUser)
    {
        var agencySettings = _agencySettingsRepository.GetById(agency.Id);

        if (agencySettings != null)
        {
            var newAuthAdmin = agencySettings.NewUserAuthorizationContact;

            if (newAuthAdmin.IsNotNull())
            {
                _emailService.SendTemplate(new[] { newAuthAdmin.Email },
                GroverConstants.EmailTemplate.NewUserAdminNotification, s =>
                {
                    s.Add(new EmailToken { Token = "Domain", Value = _settings.Domain });
                    s.Add(new EmailToken
                    {
                        Token = "Subject",
                        Value = string.Format("New User {0} has joined {1} on myGrover.", savedUser.FullName(), agency.Name)
                    });
                    s.Add(new EmailToken { Token = "Name", Value = savedUser.FullName() });

                    return s;
                });
            }
        }
    }
}

Testing Example

Below is an example of testing in isolation. We are using the mocking framework FakeItEasy.

    [Test]
    public void TestEmailService()
    {
        //Given

        //Using the FakeItEasy mocking framework to fake the dependencies.
        var repository = A.Fake<IAgencySettingsRepository>();
        var service = A.Fake<IEmailService>();

        var agency = new Agency { Name = "Acme Inc." };
        var user = new User { FirstName = "Chuck", LastName = "Conway", Email = "chuck.conway@fakedomain.com" };

        //When

        var sendEmail = new SendEmail(repository, service);
        sendEmail.SendUserHadJoinedEmailToAdministrator(agency, user);

        //Then
        //MustHaveHappened throws when SendTemplate was never called.
        //The argument matchers assume the SendTemplate signature used in SendEmail above.
        A.CallTo(() => service.SendTemplate(A<string[]>.Ignored, A<GroverConstants.EmailTemplate>.Ignored, A<Func<IList<EmailToken>, IList<EmailToken>>>.Ignored))
            .MustHaveHappened();
    }

Closing

Writing defect-resistant code is surprisingly easy. Don't get me wrong, you'll never write defect-free code (if you figure out how, let me know!), but by following the 4 practices outlined in this article you'll see a decrease in the defects found in your code.

To recap: Writing Simple Code means keeping the cyclomatic complexity around 5 and the method size small. Writing Testable Code is easily achieved by following Inversion of Control and the S.O.L.I.D. principles. Code Reviews help you and the team understand the domain and the code you've written; just having to explain your code will reveal issues. And lastly, Unit Testing can drastically improve your code quality and provide documentation for future developers.

Index Fragmentation in SQL Azure, Who Knew!

I've been on my project for over a year, and during that time it has grown significantly, both as an application and in data. It's been nonstop new features, and I've rarely gone back and refactored code. Last week I noticed some of the data-heavy pages were loading slowly. In the worst case, one view could take up to 30 seconds to load, 10 times my maximum load time…

Call me naive, but I didn't consider index fragmentation in SQL Azure. It's the cloud! It's supposed to be immune to on-premises issues… Apparently index fragmentation is also an issue in the cloud.

I found a couple of queries on an MSDN blog that identify the fragmented indexes and then rebuild them.

After running the first query to show index fragmentation, I found some indexes with over 50 percent fragmentation. According to the article, anything over 10% needs attention.

First Query: Display Index Fragmentation

--Get the fragmentation percentage

SELECT
 DB_NAME() AS DBName
,OBJECT_NAME(ps.object_id) AS TableName
,i.name AS IndexName
,ips.index_type_desc
,ips.avg_fragmentation_in_percent
FROM sys.dm_db_partition_stats ps
INNER JOIN sys.indexes i
ON ps.object_id = i.object_id
AND ps.index_id = i.index_id
CROSS APPLY sys.dm_db_index_physical_stats(DB_ID(), ps.object_id, ps.index_id, null, 'LIMITED') ips
ORDER BY ps.object_id, ps.index_id

Second Query: Rebuild the Indexes

--Rebuild the indexes
DECLARE @TableName varchar(255)

DECLARE TableCursor CURSOR FOR
(
 SELECT '[' + IST.TABLE_SCHEMA + '].[' + IST.TABLE_NAME + ']' AS [TableName]
 FROM INFORMATION_SCHEMA.TABLES IST
 WHERE IST.TABLE_TYPE = 'BASE TABLE'
 )

 OPEN TableCursor
 FETCH NEXT FROM TableCursor INTO @TableName
WHILE @@FETCH_STATUS = 0

 BEGIN
 PRINT('Rebuilding Indexes on ' + @TableName)
Begin Try
 EXEC('ALTER INDEX ALL ON ' + @TableName + ' REBUILD with (ONLINE=ON)')
End Try
Begin Catch
 PRINT('Cannot do rebuild with Online=On option, taking table ' + @TableName+' down for doing rebuild')
 EXEC('ALTER INDEX ALL ON ' + @TableName + ' REBUILD')
 End Catch
FETCH NEXT FROM TableCursor INTO @TableName
END

CLOSE TableCursor
DEALLOCATE TableCursor

Source

Proofing a Concept and Growing the Code

In a recent conversation, a friend mentioned he creates proof of concepts and then discards them after testing their viability. I've done the same in the past. This time it didn't feel right. I cringed when he said he threw away the code. Maybe my days as a business owner have turned me into a frugal goat, but it felt like he was throwing away value.

Why don’t we continue forward with a proof of concept?

Generally, when I think of a proof of concept, it's hastily assembled. Many of the "best practices" are shortcut, if not downright ignored. The goal is to test the feasibility of an idea. At some point you'll realize whether the solution will work. Then you'll decide if it's time to walk away from the idea and ditch the proof of concept, or to move forward with it. If you move forward, why not keep coding and turn the proof of concept into the real deal?

I'll be honest here: it seems ridiculous that you'd create a solution and then throw it away just to create it again. That's like poorly painting an entire house just to see if you like the color. "Yep, the color is good. Let's paint the house for real this time, and this time we'll do a good job."

There is another way. Evolve the code. Add in the missing infrastructure. This has the possibility of growing into a long term healthy solution.

Walking away from a proof of concept costs you value (time and money) that might otherwise be captured. Even if you don’t capture 100%, you’ll still be better off than just chucking everything and walking away. So next time, give it a try. See if you can morph a proof of concept into a sustainable project. I think you might be surprised at the end result.

Securing AngularJS with Claims

At some point an application needs authorization. This means different levels of access behave differently on a web site (or anything, for that matter). It can be anything from seeing data to whole areas of the application that are not accessible to a group of users.

In non-Single Page Applications (SPAs), a claim or role is associated with data or an area of the application; either the user has this role or claim or he does not. In a SPA it's the same, but with a huge disclaimer: a SPA is downloaded to the browser, and at that point the browser has total control over the code. A nefarious person can change the code to do his bidding.

Because SPAs can’t be secured, authentication and authorization in a SPA is simply user experience. All meaningful security must be done on the web server. This article does not cover securing your API against attacks. I recommend watching a video from Pluralsight or reading a paper that addresses security for your server technology.

The intent of this article is to show you how I added an authorization user experience to my Angular 1.x SPA.

Security Scopes

I have identified 3 areas of the UI that need authorization: Elements (HTML), Routes, and Data.

Just a reminder: securing a SPA is no substitute for securing the server. Permissions on the client are simply to keep honest people honest and to provide the user with a good experience.

The 3 areas in detail:

Elements

You’ll need to hide specific HTML elements. It could be a label, a table with data, a button, or any element on the page.

Routes

You'll want to hide entire routes. In certain cases you don't want the user accessing a view. By securing the route, a user can't navigate to the view. Instead, they will be shown a "You are not authorized to navigate to this view" message.

Data

Sometimes hiding the elements in the view is not enough. An astute user can simply view the source and see the hidden data in the HTML, or watch it stream to the browser. What we want is for the data not to be retrieved in the first place.

Adding security is tricky. At first I tried constraining the access at the HTTP API (on the client). I quickly realized this wouldn't work. A user might not have direct access to the data, but this doesn't mean they don't have indirect access to the data. At the HTTP API layer (usually one of the lowest in the application) we can't tell the context of the call and therefore can't apply security concerns to it.

Below I have provided coding samples:

Code

I created a service for the authorization checking code. This is the heart of the authorization. All authorization requests use this service to check if the user is authorized for the particular action.

angular.module('services')
    .service('AuthorizationContext',function(_, Session){

        this.authorizedExecution = function(key, action){

            //Looking for the claim key that was passed in. If it exists in the claim set, then execute the action.
            Session.claims(function(claims){
                var claim = findKey(key, claims);

                //If Claim was found then execute the call.
                //If it was not found, do nothing
                if(claim !== undefined){
                    action();
                }
            });
        };

        this.authorized = function(key, callback){
            //Looking for the claim key that was passed in. If it exists in the claim set, then execute the action.
            Session.claims(function(claims){
                var claim = findKey(key, claims);

                //If the claim was found, the user is authorized; otherwise they are not.
                var valid = claim !== undefined;
                callback(valid);
            });
        };

        //this.agencyViewKey = '401D91E7-6EA0-46B4-9A10-530E3483CE15';

        function findKey(key, claims){
            var claim = _.find(claims, function(item){
                return item.value === key;
            });

            return claim;
        }
    });

Authorize Directive

The authorize directive can be applied to any HTML element that you want to hide from users without a specific level of access. If the user has the access token as part of their claims, they are allowed to see the element. If they don't, it's hidden from them.

angular.module('directives')
    .directive('authorize', ['$compile', 'AuthorizationContext', function($compile, AuthorizationContext) {
        return {
            restrict: 'A',
            replace: true,
            //can't have an isolated scope in a shared directive
            link:function ($scope, element, attributes) {

                var securityKey = attributes.authorize;
                AuthorizationContext.authorized(securityKey, function(authorized){
                    var el = angular.element(element);
                    el.attr('ng-show', authorized);

                    //remove the attribute, otherwise it creates an infinite loop.
                    el.removeAttr('authorize');
                    $compile(el)($scope);
                });
            }
        };
    }]);

Elements

I rely heavily on tabs in my application. I apply the authorize directive to the tab that I want to hide from users without the proper claims.

<tabset>
<tab ng-cloak heading="Users" authorize="{{allowUserManagement}}">
...html content
</tab>
</tabset>

Routes

I’m using the ui-router. Unfortunately for those who are not, I don’t have code for the out of the box AngularJS router.

In the $stateChangeStart I authenticate the route. This is the code in that event.

$rootScope.$on("$stateChangeStart", function(event, toState, toParams, fromState, fromParams){
   AuthenticationManager.authenticate(event, toState, toParams);
});

The function that authorizes the route. If it’s authorized, the route is allowed to continue. If it’s not authorized, a message is displayed to the user and they are directed to the home page.

function authorizedRoute(toState, location, toaster, breadCrumbs){
   if(toState.authorization !== undefined){
       AuthorizationContext.authorized(toState.authorization, function(authorized){
           if(!authorized){
               toaster.pop('error', 'Error', 'You are not authorized to view this page.');
               location.path("/search");
           } else {
               breadCrumbs();
           }
       });
   } else{
       breadCrumbs();
   }
}

In this router definition you’ll notice a property called ‘authorization’. If the user has this claim they are allowed to proceed.

angular.module('agency',
    [
        'ui.router',
        'services'
    ])
    .config(function config($stateProvider){
    $stateProvider.state( 'agency', {
        url: '/agency',
        controller: 'agency.index',
        templateUrl: 'agency/agency.tpl.html',
        authenticate: true,
        authorization:'401d91e7-6ea0-46b4-9a10-530e3483ce15',
        data:{ pageTitle: 'Agency' }
    });
});

Data

In some cases, you don’t want to make a request to the server for the data. If the user has the claim they’ll be allowed to make the request.

The AuthorizationContext at the beginning of the article shows the code for authorizedExecution. Here you see its usage.

AuthorizationContext.authorizedExecution(Keys.authorization.allowUserManagement, function(){
    //execute code, if the logged-in user has rights.
});

As I mentioned above, this is no substitute for securing the server. This code works for providing a wonderful user experience.

5 Steps for Coding for the Next Developer

Most of us probably don’t think about the developer who will maintain our code. Until recently, I did not consider him either. I never intentionally wrote obtuse code, but I also never left any breadcrumbs.

Kent Beck on good programmers:

Any fool can write code that a computer can understand. Good programmers write code that humans can understand.

Douglas Crockford on good computer programs:

It all comes down to communication and the structures that you use in order to facilitate that communication. Human language and computer languages work very differently in many ways, but ultimately I judge a good computer program by its ability to communicate with a human who reads that program. So at that level, they're not that different.

Discovering purpose and intent is difficult even in the most well-written code. Any breadcrumbs left by the author (comments, verbose naming, and consistency) are immensely helpful to the next developers.

I start by looking for patterns. Patterns can be found in many places, including variable names, class layout, and project structure. Once identified, patterns are insights into the previous developer's intent and help in comprehending the code.

What is a pattern? A pattern is a repeatable solution to a recurring problem. Consider a door. When a space must allow people to enter and to leave and yet maintain isolation, the door pattern is implemented. Now this seems obvious, but at one point it wasn't. Someone created the door pattern, which included the door handle, the hinges, and the placement of these components. Walk into any home and you can identify any door and its components. The styles and colors might be different, but the components are the same. Software is the same.

There are known software patterns for common software problems. In 1995, Design Patterns: Elements of Reusable Object-Oriented Software was published, describing common software patterns. This book describes common problems encountered in most software applications and offers elegant ways to solve them. Developers also create their own patterns while solving problems they routinely encounter. While they don't publish a book, if you look closely enough you can identify them.

Sometimes it's difficult to identify the patterns. This makes grokking the code difficult. When you find yourself in this situation, inspect the code and see how it is used. Start a rewrite. Ask yourself how you would accomplish the same outcome. Often, as you work through the thought process of an algorithm, you gain insight into the other developer's implementation. Many of us have the inclination to rewrite what we don't understand. Resist this urge! The existing implementation is battle-tested and yours is not.

Some code is just vexing; reach out to a peer, because a second set of eyes always helps. Walk the code together. You'll be surprised what the two of you will find.

Here are 5 tips for leaving breadcrumbs for the next developers:

1. Patterns
Use known patterns, create your own patterns. Stick with a consistent paradigm throughout the code. For example, don’t have 3 approaches to data access.

2. Consistency
This is by far the most important aspect of coding. Nothing is more frustrating than finding inconsistent code. Consistency allows for assumptions. Each time a specific software pattern is encountered, it should be assumed it behaves similarly to other instances of the pattern.

Inconsistent code is a nightmare; imagine reading a book where every word means something different, including the same word in different places. You'd have to look up each word and expend large amounts of mental energy discovering the intent. It's frustrating, tedious, and painful. You'll go crazy! Don't do this to the next developer.

3. Verbose Naming
This is your language. These are the words to your story. Weave them well.

This includes class names, method names, variable names, project names and property names.

Don’t:

if(monkey.HoursSinceLastMeal > 3)
{
    FeedMonkey();
}

Do:

int feedInterval = 3;

if(monkey.HoursSinceLastMeal > feedInterval)
{
    FeedMonkey();
}

The first example has 3 hard-coded in the if statement. This code is syntactically correct, but the intent of the number 3 tells you nothing. Looking at the property it's evaluated against, you can surmise that it's probably 3 hours, but in reality we don't know. We are making an assumption.

In the second example, we assign 3 to a variable called 'feedInterval'. The intent is clearly stated in the variable name: if it's been 3 hours since the last meal, it's time to feed the monkey. A side effect of setting the variable is that we can now change the feed interval without changing the logic.

This is a contrived example, but in a large piece of software this type of code is self-documenting and will help the next developer understand it.

4. Comments
Comments are a double-edged sword. Too much commenting increases maintenance costs; not enough leaves developers unsure of how the code works. A general rule of thumb is to comment when the average developer would not understand the code. This happens when the assumptions are not obvious or the code is out of the ordinary. A made-up example follows.
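Here the code itself is easy to read, but the assumption behind the number is not; the names and the business rule are hypothetical.

// Hypothetical example: the comment captures the business assumption, not the mechanics.
// Invoices are flagged as delinquent after 45 days because our (hypothetical) billing
// terms are net-30 plus a 15-day grace period.
private const int DelinquencyThresholdInDays = 45;

public bool IsDelinquent(DateTime invoiceDate)
{
    return (DateTime.UtcNow - invoiceDate).TotalDays > DelinquencyThresholdInDays;
}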

5. Code Simple
In my professional opinion writing complex code is the biggest folly among developers.

Steve Jobs on simplicity:

Simple can be harder than complex: You have to work hard to get your thinking clean to make it simple. But it’s worth it in the end because once you get there, you can move mountains.

Complexity comes in many forms, some of which include: future proofing, overly complex implementations, too much abstraction, large classes and large methods.

For more on writing clean, simple code, see Uncle Bob's book Clean Code and Max Kanat-Alexander's Code Simplicity.

Closing

Reading code is hard. With a few simple steps you can ensure the next developer will grok your code.

Implementing Transparent Encryption with NHibernate Listeners (Interceptors)

Have you ever had to encrypt data in the database? In this post, I'll explore how to use NHibernate listeners to encrypt and decrypt data coming from and going into your database. The cryptography will be transparent to your application.

Why would you want to do this? SQL Server has encryption baked into the product. That is true, but if you are moving to the cloud and want to use SQL Azure you’ll need some sort of cryptography strategy. SQL Azure does not support database encryption.

What is an nHibernate Listener? I think of a Listener as a piece of code that I can inject into specific extensibility points in the nHibernate persistence and data hydration lifecycle.

As of this writing the following extensibility points are available in nHibernate.

  • IAutoFlushEventListener
  • IDeleteEventListener
  • IDirtyCheckEventListener
  • IEvictEventListener
  • IFlushEntityEventListener
  • IFlushEventListener
  • IInitializeCollectionEventListener
  • ILoadEventListener
  • ILockEventListener
  • IMergeEventListener
  • IPersistEventListener
  • IPostCollectionRecreateEventListener
  • IPostCollectionRemoveEventListener
  • IPostCollectionUpdateEventListener
  • IPostDeleteEventListener
  • IPostInsertEventListener
  • IPostLoadEventListener
  • IPostUpdateEventListener
  • IPreCollectionRecreateEventListener
  • IPreCollectionRemoveEventListener
  • IPreCollectionUpdateEventListener
  • IPreDeleteEventListener
  • IPreInsertEventListener
  • IPreLoadEventListener
  • IPreUpdateEventListener
  • IRefreshEventListener
  • IReplicateEventListener
  • ISaveOrUpdateEventListener

The list is extensive.

To implement transparent cryptography, we need to find the right place to encrypt and decrypt the data. For encrypting the data we'll use IPreInsertEventListener and IPreUpdateEventListener. With these events we'll catch the new data and the updated data going into the database. For decrypting, we'll use the IPreLoadEventListener.

For this demonstration we'll be using a DatabaseCryptography class for encrypting and decrypting. The cryptography implementation is not important for this article.

IPreLoadEventListener

public class PreLoadEventListener : IPreLoadEventListener
{
    readonly DatabaseCryptography _crypto = new DatabaseCryptography();

    /// <summary>
    /// Called on pre-load, before the entity state is hydrated.
    /// </summary>
    /// <param name="event">The event.</param>
    public void OnPreLoad(PreLoadEvent @event)
    {
        _crypto.DecryptProperties(@event.Entity, @event.Persister.PropertyNames, @event.State);
    }
}

IPreInsertEventListener

public class PreInsertEventListener : IPreInsertEventListener
{
    readonly DatabaseCryptography _crypto = new DatabaseCryptography();

    /// <summary>
    /// Return true if the operation should be vetoed.
    /// </summary>
    /// <param name="event">The event.</param>
    /// <returns>true if the insert should be vetoed; otherwise false.</returns>
    public bool OnPreInsert(PreInsertEvent @event)
    {
        _crypto.EncryptProperties(@event.Entity, @event.State, @event.Persister.PropertyNames);

        return false;
    }
}

IPreUpdateEventListener

public class PreUpdateEventListener : IPreUpdateEventListener
{
    readonly DatabaseCryptography _crypto = new DatabaseCryptography();

    /// <summary>
    /// Return true if the operation should be vetoed.
    /// </summary>
    /// <param name="event">The event.</param>
    /// <returns>true if the update should be vetoed; otherwise false.</returns>
    public bool OnPreUpdate(PreUpdateEvent @event)
    {
        _crypto.EncryptProperties(@event.Entity, @event.State, @event.Persister.PropertyNames);

        return false;
    }
}

It's important to note that both OnPreInsert and OnPreUpdate must return false; otherwise the insert/update event will be aborted.

Now that we have the Listeners implemented we need to register them with nHibernate. I am using FluentNHibernate so this will be different if you are using raw nHibernate.

SessionFactory

public class SessionFactory
{
    /// <summary>
    /// Creates the session factory.
    /// </summary>
    /// <returns>ISessionFactory.</returns>
    public static ISessionFactory CreateSessionFactory()
    {
        return Fluently.Configure()

            .Database(MsSqlConfiguration.MsSql2012
                .ConnectionString(c => c
                    .FromConnectionStringWithKey("DefaultConnection")))

            // AddFromAssemblyOf expects a mapping class from your project as its generic argument.
            .Mappings(m => m.FluentMappings.AddFromAssemblyOf())
            .ExposeConfiguration(s =>
            {
                s.SetListener(ListenerType.PreUpdate, new PreUpdateEventListener());
                s.SetListener(ListenerType.PreInsert, new PreInsertEventListener());
                s.SetListener(ListenerType.PreLoad, new PreLoadEventListener());
            })
            .BuildConfiguration()
            .BuildSessionFactory();
    }
}

When encrypting and decrypting data at the application level, the data becomes useless in the database; you'll need to bring it back into the application to read the values of the encrypted fields. We want to limit the fields that are encrypted, and we only want to encrypt string values. Encrypting anything other than string values complicates things. There is nothing saying we can't encrypt dates, but doing so requires the date field in the database to become a string (nvarchar or varchar) field to hold the encrypted data, and once we do this we lose the ability to operate on the date field from the database.

To identify which fields we want encrypted and decrypted I’ll use marker attributes.

Encrypt Attribute

public class EncryptAttribute : Attribute
{
}

Decrypt Attribute

public class DecryptAttribute : Attribute
{
}
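As a hedged sketch of how the markers would be applied (the entity and property names are hypothetical), both attributes go on the same string property so the value is encrypted by the pre-insert/pre-update listeners and decrypted by the pre-load listener:

// Hypothetical entity: only the marked string property is stored encrypted.
public class Patient
{
    public virtual int Id { get; set; }

    [Encrypt]
    [Decrypt]
    public virtual string SocialSecurityNumber { get; set; }

    // Non-string and unmarked properties are left alone.
    public virtual DateTime DateOfBirth { get; set; }
}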

To see the EncryptAttribute and the DecryptAttribute in action we'll take a peek into the DatabaseCryptography class.

DatabaseCryptography

public class DatabaseCryptography
{
    private readonly Crypto _crypto = ObjectFactory.GetInstance<Crypto>();

    /// <summary>
    /// Encrypts the properties marked with the EncryptAttribute.
    /// </summary>
    /// <param name="entity">The entity.</param>
    /// <param name="state">The state.</param>
    /// <param name="propertyNames">The property names.</param>
    public void EncryptProperties(object entity, object[] state, string[] propertyNames)
    {
        Crypt<EncryptAttribute>(entity, propertyNames, s => _crypto.Encrypt(s), state);
    }

    /// <summary>
    /// Crypts the specified entity.
    /// </summary>
    /// <typeparam name="T">The marker attribute to look for.</typeparam>
    /// <param name="entity">The entity.</param>
    /// <param name="propertyNames">The property names.</param>
    /// <param name="crypt">The encrypt or decrypt function.</param>
    /// <param name="state">The state.</param>
    private void Crypt<T>(object entity, string[] propertyNames, Func<string, string> crypt, object[] state) where T : Attribute
    {
        if (entity != null)
        {
            var properties = entity.GetType().GetProperties();

            foreach (var info in properties)
            {
                var attributes = info.GetCustomAttributes(typeof(T), true);

                if (attributes.Any())
                {
                    var name = info.Name;
                    var count = 0;

                    foreach (var s in propertyNames)
                    {
                        if (string.Equals(s, name, StringComparison.InvariantCultureIgnoreCase))
                        {
                            var val = Convert.ToString(state[count]);
                            if (!string.IsNullOrEmpty(val))
                            {
                                val = crypt(val);
                                state[count] = val;
                            }

                            break;
                        }

                        count++;
                    }
                }
            }
        }
    }

    /// <summary>
    /// Decrypts the properties marked with the DecryptAttribute.
    /// </summary>
    /// <param name="entity">The entity.</param>
    /// <param name="propertyNames">The property names.</param>
    /// <param name="state">The state.</param>
    public void DecryptProperties(object entity, string[] propertyNames, object[] state)
    {
        Crypt<DecryptAttribute>(entity, propertyNames, s => _crypto.Decrypt(s), state);
    }
}

That’s it. Now the encryption and decryption of data will be transparent to the application and you can go on your merry way building the next Facebook.

Missing Management Delegation Icon in IIS

It's critical this is done first. Web Deploy may not install correctly if it's installed while the Management Service icon is missing. Check IIS for the Management Delegation icon; it'll be under the Management section.

If it’s missing run the following commands.

Windows 2012

dism /online /enable-feature /featurename:IIS-WebServerRole
dism /online /enable-feature /featurename:IIS-WebServerManagementTools
dism /online /enable-feature /featurename:IIS-ManagementService
Reg Add HKLM\Software\Microsoft\WebManagement\Server /V EnableRemoteManagement /T REG_DWORD /D 1
net start wmsvc
sc config wmsvc start= auto

Run Web Deploy.

Check to see if the icon is there. If it’s not, run web deploy again. It should be there.

Calling Stored Procedures with Code First

One of the weaknesses of Entity Framework 6 Code First is the lack of support for natively calling database constructs (views, stored procedures, etc.). For those who have not heard of or used Code First in Entity Framework (EF), Code First is simply a fluent mapping API. The idea is to create all your database mappings in code (i.e., C#), and the framework then creates and tracks the changes in the database schema.

In traditional Entity Framework, to call a stored procedure you'd map it in your EDMX file. This is a multi-step process; once the process is completed, a method is created that hangs off the DataContext.

I sought to make calling a stored procedure easier. At the heart of a stored procedure you have a procedure name, N parameters, and a result set. I've written a small extension method that takes a procedure name, parameters, and a return type. It just works. No mapping the procedure and its parameters.

public static List<TReturn> CallStoredProcedure<TParameters, TReturn>(this DataContext context, string storedProcedure, TParameters parameters) where TParameters : class where TReturn : class, new()
{
    IDictionary<string, object> procedureParameters = new Dictionary<string, object>();
    PropertyInfo[] properties = parameters.GetType().GetProperties();

    var ps = new List<object>();

    foreach (var property in properties)
    {
        object value = property.GetValue(parameters);
        string name = property.Name;

        procedureParameters.Add(name, value);

        ps.Add(new SqlParameter(name, value));
    }

    var keys = procedureParameters.Select(p => string.Format("@{0}", p.Key)).ToList();
    var parms = string.Join(", ", keys.ToArray());

    return context.Database.SqlQuery<TReturn>(storedProcedure + " " + parms, ps.ToArray()).ToList();
}

Usage

var context = new DataContext();

List<User> users = context.CallStoredProcedure<object,User>("User_GetUserById", new{userId = 3});

Conditional Sql parameters with nHibernate

The problem is that NHibernate's CreateSQLQuery needs a complete SQL string up front, but you can't create that string until you've evaluated the parameters. The only workaround is to evaluate the conditional parameters to build the SQL string and create the NHibernate query, and then re-evaluate the parameters to add them to the query object. The problem with this is that the same evaluation logic is written twice. What is needed is a simple fluent API that will do everything for you and spit out the ISQLQuery when it's done.

Before

public IList<AppointmentScheduleSummary> FillQuantityOld(DateTime? fromDate, DateTime? toDate, string CompanyID)
{
    string sql = "select VA.Userid as ID, E.FirstName + ' ' + E.LastName as Name,VA.Userid,count(*) as Total, getdate() as Date  from V_AppointmentScheduleStat VA, Appointment A, Employee E, Office O where";

    if (fromDate.HasValue)
    {
        sql += "  VA.Date >= '" + fromDate.Value.ToShortDateString() + "' and";

    }

    if (toDate.HasValue)
    {
        sql += "  VA.Date <= '" + toDate.Value.AddDays(1).ToShortDateString() + "' and";
    }

    sql += "  VA.date = A.date  and VA.UserId = E.UserId and O.OfficeNum = A.OfficeNum ";
    sql += " and A.appttypeid is not null";
    sql += " and O.CompanyID='" + CompanyID + "'";
    sql += " group by E.FirstName + ' ' + E.LastName ,VA.Userid  ";

    ISQLQuery query = _NHibernateSessionManager.GetSession().CreateSQLQuery(sql)
     .AddEntity("K", typeof(AppointmentScheduleSummary));
    return query.List<AppointmentScheduleSummary>();
}

After

public IList<AppointmentScheduleSummary> FillQuantity(DateTime? fromDate, DateTime? toDate,string CompanyID)
{
   var query = _NHibernateSessionManager.GetSession()
        .SQLQuery("select VA.Userid as ID, E.FirstName + ' ' + E.LastName as Name,VA.Userid,count(*) as Total, getdate() as Date  from V_AppointmentScheduleStat VA, Appointment A, Employee E, Office O where")

        .If(fromDate.HasValue, "VA.Date >= :fromDate and", parameters =>
        {
            parameters.SetParameter("fromDate", fromDate.Value.ToShortDateString());
        })

        .If(toDate.HasValue, "VA.Date <=:toDate and ", parameters =>
        {
            parameters.SetParameter("toDate", toDate.Value.AddDays(1).ToShortDateString());
            parameters.SetParameterList("", new[] {2, 3, 4,});

        })

        .Sql(" VA.date = A.date and VA.UserId = E.UserId and O.OfficeNum = A.OfficeNum and A.appttypeid is not null and O.CompanyID = :companyId" +
             " group by E.FirstName + ' ' + E.LastName ,VA.Userid")
            .SetParameter("companyId", CompanyID)
        .ToSqlQuery();

     query.AddEntity("K", typeof(AppointmentScheduleSummary));
     return query.List<AppointmentScheduleSummary>();
}

SqlStringBuilder

using System;
using System.Collections;
using System.Collections.Generic;
using System.Text;
using NHibernate;

namespace IT2.DataAccess.Extensions
{
    public class SqlStringBuilder
    {
        private readonly ISession _session;
        private readonly Action<string> _logging;
        readonly StringBuilder _builder = new StringBuilder();
        readonly ParameterBuilder _parameterBuilder;

        /// <summary>
        /// Initializes a new instance of the <see cref="SqlStringBuilder" /> class.
        /// </summary>
        /// <param name="session">The session.</param>
        /// <param name="sql">The SQL.</param>
        /// <param name="logging"></param>
        public SqlStringBuilder(ISession session, string sql, Action<string> logging)
        {
            _session = session;
            _logging = logging;
            _builder.Append(sql);
            Parameters = new Dictionary<string, object>();
            ParameterList = new Dictionary<string, IEnumerable>();
            _parameterBuilder = new ParameterBuilder(this);
        }

        /// <summary>
        /// Gets or sets the parameters.
        /// </summary>
        /// <value>The parameters.</value>
        public IDictionary<string, object> Parameters { get; set; }

        /// <summary>
        /// Gets or sets the parameters.
        /// </summary>
        /// <value>The parameters.</value>
        public IDictionary<string, IEnumerable> ParameterList { get; set; }

        /// <summary>
        /// To the query.
        /// </summary>
        /// <returns>IQuery.</returns>
        public ISQLQuery ToSqlQuery() 
        {
            string sql = _builder.ToString();

            if (_logging != null)
            {
                _logging(sql);
            }

            var query = _session.CreateSQLQuery(sql);

            foreach (var parameter in Parameters)
            {
                query.SetParameter(parameter.Key, parameter.Value);
            }

            foreach (var parameter in ParameterList)
            {
                query.SetParameterList(parameter.Key, parameter.Value);
            }

            return query;
        }

        /// <summary>
        /// To the query.
        /// </summary>
        /// <returns>IQuery.</returns>
        public IQuery ToQuery()
        {
            string sql = _builder.ToString();

            if (_logging != null)
            {
                _logging(sql);
            }

            var query = _session.CreateQuery(sql);

            foreach (var parameter in Parameters)
            {
                query.SetParameter(parameter.Key, parameter.Value);
            }

            foreach (var parameter in ParameterList)
            {
                query.SetParameterList(parameter.Key, parameter.Value);
            }

            return query;
        }

        /// <summary>
        /// Ifs the specified evaluation.
        /// </summary>
        /// <param name="evaluation">if set to <c>true</c> [evaluation].</param>
        /// <param name="sql">The SQL.</param>
        /// <returns>ParameterBuilder.</returns>
        public ParameterBuilder If(bool evaluation, string sql)
        {
            return If(evaluation, sql, null);
        }

        /// <summary>
        /// Conditionals the specified evaluation.
        /// </summary>
        /// <param name="evaluation">if set to <c>true</c> [evaluation].</param>
        /// <param name="sql">The SQL.</param>
        /// <param name="parameters">The parameters.</param>
        /// <returns>SqlStringBuilder.</returns>
        public ParameterBuilder If(bool evaluation, string sql, Action<ParameterBuilder> parameters)
        {
            if (evaluation)
            {
                _builder.Append(string.Format(" {0} ", sql));

                if (parameters != null)
                {
                    parameters(_parameterBuilder);
                }
            }

            return _parameterBuilder;
        }

        /// <summary>
        /// Sets the parameters.
        /// </summary>
        /// <typeparam name="T"></typeparam>
        /// <param name="key">The key.</param>
        /// <param name="value">The value.</param>
        /// <returns>ParameterBuilder.</returns>
        public ParameterBuilder SetParameter<T>(string key, T value)
        {
            _parameterBuilder.SetParameter(key, value);
            return _parameterBuilder;
        }

        /// <summary>
        /// Sets the parameter list.
        /// </summary>
        /// <typeparam name="T"></typeparam>
        /// <param name="key">The key.</param>
        /// <param name="value">The value.</param>
        /// <returns>ParameterBuilder.</returns>
        public ParameterBuilder SetParameterList<T>(string key, T value) where T : IEnumerable
        {
            _parameterBuilder.SetParameterList(key, value);
            return _parameterBuilder;
        }

        /// <summary>
        /// SQLs the specified SQL.
        /// </summary>
        /// <param name="sql">The SQL.</param>
        /// <returns>IT2.DataAccess.SqlStringBuilder.</returns>
        public SqlStringBuilder Sql(string sql)
        {
            _builder.Append(string.Format(" {0} ", sql));
            return this;
        }
    }

    public class ParameterBuilder
    {
        private readonly SqlStringBuilder _builder;

        /// <summary>
        /// Initializes a new instance of the <see cref="ParameterBuilder" /> class.
        /// </summary>
        /// <param name="builder">The builder.</param>
        public ParameterBuilder(SqlStringBuilder builder)
        {
            _builder = builder;
        }

        /// <summary>
        /// Parameters the specified key.
        /// </summary>
        /// <typeparam name="T"></typeparam>
        /// <param name="key">The key.</param>
        /// <param name="value">The value.</param>
        /// <returns>ParameterBuilder.</returns>
        public ParameterBuilder SetParameter<T>(string key, T value)
        {
            _builder.Parameters.Add(key, value);
            return this;
        }

        /// <summary>
        /// Parameters the specified key.
        /// </summary>
        /// <typeparam name="T"></typeparam>
        /// <param name="key">The key.</param>
        /// <param name="value">The value.</param>
        /// <returns>ParameterBuilder.</returns>
        public ParameterBuilder SetParameterList<T>(string key, T value) where T : IEnumerable
        {
            _builder.ParameterList.Add(key, value);
            return this;
        }

        /// <summary>
        /// Ifs the specified evaluation.
        /// </summary>
        /// <param name="evaluation">if set to <c>true</c> [evaluation].</param>
        /// <param name="sql">The SQL.</param>
        /// <returns>ParameterBuilder.</returns>
        public ParameterBuilder If(bool evaluation, string sql)
        {
            return _builder.If(evaluation, sql);
        }

        /// <summary>
        /// Conditions the specified evaluation.
        /// </summary>
        /// <param name="evaluation">if set to <c>true</c> [evaluation].</param>
        /// <param name="sql">The SQL.</param>
        /// <param name="parameters">The parameters.</param>
        /// <returns>ParameterBuilder.</returns>
        public ParameterBuilder If(bool evaluation, string sql, Action<ParameterBuilder> parameters)
        {
            return _builder.If(evaluation, sql, parameters);
        }

        /// <summary>
        /// SQLs the specified SQL.
        /// </summary>
        /// <param name="sql">The SQL.</param>
        /// <returns>SqlStringBuilder.</returns>
        public SqlStringBuilder Sql(string sql)
        {
            _builder.Sql(sql);
            return _builder;
        }

        /// <summary>
        /// To the query.
        /// </summary>
        /// <returns>ISQLQuery.</returns>
        public IQuery ToQuery()
        {
            return _builder.ToQuery();
        }

        /// <summary>
        /// To the query.
        /// </summary>
        /// <returns>ISQLQuery.</returns>
        public ISQLQuery ToSqlQuery()
        {
            return _builder.ToSqlQuery();
        }
    }
}

Crystal Reports 13 Maximum Report Processing Limit Reached Workaround

In the Visual Studio 2012 version of Crystal Reports 13 there is a threshold that throttles concurrent reports (this also includes subreports) to 75 reports across a machine. This means that if there are 5 web applications on a given server, all opened reports across all 5 web applications count toward the 75-report limit.

The error manifests itself in different ways and may result in one of the following messages: "Not enough memory for operation." or "The maximum report processing jobs limit configured by your system administrator has been reached".

The problem is the reports are not disposed and they continue to accumulate until the 75-report limit is hit. To fix this issue, the reports have to be disposed of at the earliest possible time. This sounds simple, but it is not as straightforward as it seems. Depending on how the reports are generated, there are two scenarios: the first is generating PDFs or Excel spreadsheets, and the second is using the Crystal Report Viewer. Each scenario has a different lifetime, which we need to take into account when crafting our solution.

Solution

There are two report lifetimes we have to manage: generated reports (PDF, Excel spreadsheet) and the Crystal Report Viewer.

PDFs and Excel spreadsheets are generated during the request. They can be disposed in the Page Unload event. The Crystal Report Viewer is a bit different: it needs to span requests and is stored in session. This makes disposing of the viewer reports a bit challenging, but not impossible.

Disposing of the Viewer's report in the Page Unload event won't work: the Viewer has paging functionality which requests each new page from the server. To get around this issue we implemented a report reference counter. Each time a report is created, it is stored in a concurrent dictionary; when a report is disposed, it is removed from the dictionary. Upon opening a type of report, we check whether the user already has this report open; if they do, we simply dispose of the existing report and open a new one in its place. Other opportunities to dispose of reports are on Session End (the user signs out), on Application End, and when navigating away from the report page.

Our internal QA team tested a non-fixed version of Crystal Reports; it fell over at around 100 concurrent connections. After applying the fix, our QA team ran a load of 750 concurrent connections against the servers without any issues.

On a side note, we have encountered latency when disposing of reports with multiple subreports.

public static class ReportFactory
{
static readonly ConcurrentDictionary<string, ConcurrentDictionary<string, UserReport>> _sessions = new ConcurrentDictionary<string, ConcurrentDictionary<string, UserReport>>();

/// <summary>
/// Creates the report dispose on unload.
/// </summary>
/// <typeparam name="T"></typeparam>
/// <param name="page">The page.</param>
/// <returns>``0.</returns>
public static T CreateReportDisposeOnUnload<T>(this Page page) where T : IDisposable, new()
{
    var report = new T();
    DisposeOnUnload(page, report);
    return report;
}

/// <summary>
/// Disposes on page unload.
/// </summary>
/// <typeparam name="T"></typeparam>
/// <param name="page">The page.</param>
/// <param name="instance">The instance.</param>
private static void DisposeOnUnload<T>(this Page page, T instance) where T : IDisposable
{
    page.Unload += (s, o) =>
    {
        if (instance != null)
        {
            CloseAndDispose(instance);
        }
    };
}

/// <summary>
/// Unloads the report when user navigates away from report.
/// </summary>
/// <param name="page">The page.</param>
public static void UnloadReportWhenUserNavigatesAwayFromPage(this Page page)
{
    var sessionId = page.Session.SessionID;
    var pageName = Path.GetFileName(page.Request.Url.AbsolutePath);

    var contains = _sessions.ContainsKey(sessionId);

    if (contains)
    {
        var reports = _sessions[sessionId];
        var report = reports.Where(r => r.Value.PageName != pageName).ToList();

        foreach (var userReport in report)
        {
            UserReport instance;

            bool removed = reports.TryRemove(userReport.Key, out instance);

            if (removed)
            {
                CloseAndDispose(instance.Report);
            }
        }
    }
}

/// <summary>
/// Gets the report.
/// </summary>
/// <typeparam name="T"></typeparam>
/// <returns>ReportClass.</returns>
public static T CreateReportForCrystalReportViewer<T>(this Page page) where T : IDisposable, new()
{
    var report = CreateReport<T>(page);
    return report;
}

/// <summary>
/// Creates the report.
/// </summary>
/// <typeparam name="T"></typeparam>
/// <param name="page">The page.</param>
/// <returns>``0.</returns>
private static T CreateReport<T>(Page page) where T : IDisposable, new()
{
    MoreThan70ReportsFoundRemoveOldestReport();

    var sessionId = page.Session.SessionID;

    bool containsKey = _sessions.ContainsKey(sessionId);
    var reportKey = typeof(T).FullName;
    var newReport = GetUserReport<T>(page);

    if (containsKey)
    {
        //Get user by session id
        var reports = _sessions[sessionId];

        //check for the report, remove it and dispose it if it exists in the collection
        RemoveReportWhenMatchingTypeFound<T>(reports);

        //add the report to the collection

        reports.TryAdd(reportKey, newReport);

        //add the reports to the user key in the concurrent dictionary
        _sessions[sessionId] = reports;
    }
    else //key does not exist in the collection
    {
        var newDictionary = new ConcurrentDictionary<string, UserReport>();
        newDictionary.TryAdd(reportKey, newReport);

        _sessions[sessionId] = newDictionary;

    }

    return (T)newReport.Report;
}

/// <summary>
/// If there are more than 70 reports, removes the oldest report.
/// </summary>
private static void MoreThan70ReportsFoundRemoveOldestReport()
{
    var reports = _sessions.SelectMany(r => r.Value).ToList();

    if (reports.Count() > 70)
    {
        //order the reports with the oldest on top.
        var sorted = reports.OrderByDescending(r => r.Value.TimeAdded);

        //remove the oldest
        var first = sorted.FirstOrDefault();
        var key = first.Key;
        var sessionKey = first.Value.SessionId;

        if (first.Value != null)
        {
            //close and dispose of the first report
            CloseAndDispose(first.Value.Report);

            var dictionary = _sessions[sessionKey];
            var containsKey = dictionary.ContainsKey(key);

            if (containsKey)
            {
                //remove the disposed report from the collection
                UserReport report;
                dictionary.TryRemove(key, out report);
            }
        }

    }
}

/// <summary>
/// Removes the report if there is a report with a match type.
/// </summary>
/// <typeparam name="T"></typeparam>
/// <param name="reports">The reports.</param>
private static void RemoveReportWhenMatchingTypeFound<T>(ConcurrentDictionary<string, UserReport> reports) where T : IDisposable, new()
{
    var key = typeof(T).FullName;
    var containsKey = reports.ContainsKey(key);

    if (containsKey)
    {
        UserReport instance;

        bool removed = reports.TryRemove(key, out instance);

        if (removed)
        {
            CloseAndDispose(instance.Report);
        }

    }
}

/// <summary>
/// Removes the reports for session.
/// </summary>
/// <param name="sessionId">The session identifier.</param>
public static void RemoveReportsForSession(string sessionId)
{
    var containsKey = _sessions.ContainsKey(sessionId);

    if (containsKey)
    {
        ConcurrentDictionary<string, UserReport> session;

        var removed = _sessions.TryRemove(sessionId, out session);

        if (removed)
        {
            foreach (var report in session.Where(r => r.Value.Report != null))
            {
                CloseAndDispose(report.Value.Report);
            }
        }
    }
}

/// <summary>
/// Closes the and dispose.
/// </summary>
/// <param name="report">The report.</param>
private static void CloseAndDispose(IDisposable report)
{
    report.Dispose();
}

/// <summary>
/// Gets the user report.
/// </summary>
/// <typeparam name="T"></typeparam>
/// <returns>UserReport.</returns>
private static UserReport GetUserReport<T>(Page page) where T : IDisposable, new()
{
    string onlyPageName = Path.GetFileName(page.Request.Url.AbsolutePath);

    var report = new T();
    var userReport = new UserReport { PageName = onlyPageName, TimeAdded = DateTime.UtcNow, Report = report, SessionId = page.Session.SessionID };

    return userReport;
}

/// <summary>
/// Removes all reports.
/// </summary>
public static void RemoveAllReports()
{
    foreach (var session in _sessions)
    {
        foreach (var report in session.Value)
        {
            if (report.Value.Report != null)
            {
                CloseAndDispose(report.Value.Report);
            }
        }

        //remove all the disposed reports
        session.Value.Clear();
    }

    //empty the collection
    _sessions.Clear();
}

private class UserReport
{
    /// <summary>
    /// Gets or sets the time added.
    /// </summary>
    /// <value>The time added.</value>
    public DateTime TimeAdded { get; set; }

    /// <summary>
    /// Gets or sets the report.
    /// </summary>
    /// <value>The report.</value>
    public IDisposable Report { get; set; }

    /// <summary>
    /// Gets or sets the session identifier.
    /// </summary>
    /// <value>The session identifier.</value>
    public string SessionId { get; set; }

    /// <summary>
    /// Gets or sets the name of the page.
    /// </summary>
    /// <value>The name of the page.</value>
    public string PageName { get; set; }
}
}
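To round this out, here is a sketch of how the factory might be used from a report page's code-behind. MySalesReport and CrystalReportViewer1 are hypothetical names, and the export call uses the standard Crystal Reports ExportToHttpResponse method.

// Hypothetical WebForms code-behind showing both report lifetimes.
protected void Page_Load(object sender, EventArgs e)
{
    // Dispose of viewer reports the user left open on other report pages.
    this.UnloadReportWhenUserNavigatesAwayFromPage();

    // A viewer report must survive the viewer's paging postbacks,
    // so the factory tracks it per session and recycles it later.
    var viewerReport = this.CreateReportForCrystalReportViewer<MySalesReport>();
    CrystalReportViewer1.ReportSource = viewerReport;
}

protected void ExportToPdfButton_Click(object sender, EventArgs e)
{
    // A PDF export only lives for this request, so it is disposed of on Page.Unload.
    var report = this.CreateReportDisposeOnUnload<MySalesReport>();
    report.ExportToHttpResponse(ExportFormatType.PortableDocFormat, Response, false, "sales");
}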