Strong Name your Assemblies!

Recently, coworkers and I discussed the pros and cons of strong naming your assemblies. I'm currently employed at an organization that does not strong name its assemblies.

The organization has many .NET projects that depend on other internal projects. When the core team releases a new build, the product teams upgrade to the latest build.

Version Integrity
Strong naming provides version checking when an assembly is loaded. If an assembly depends on version 1.5.0 of another assembly, it will not run unless version 1.5.0 is present; dropping in a different version of the assembly causes a run-time error. The same applies when the same code-base is compiled with a different strong name (I don't know who would do this…).
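
To illustrate, a strong-named reference is to the assembly's full identity, not just its file name. Here is a minimal sketch with a hypothetical assembly name and public key token:

using System.Reflection;

// A strong-named reference carries the name, version, culture and public key token.
// If this exact identity is not present, the load fails at run time instead of
// silently binding to whatever MyAssembly.dll happens to be on disk.
Assembly assembly = Assembly.Load(
    "MyAssembly, Version=1.5.0.0, Culture=neutral, PublicKeyToken=0123456789abcdef");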

Code Integrity
Strong naming provides a level of confidence that your assemblies have not fallen victim to tampering. This is achieved by creating a hash of all the code and files in the assembly. If any of the files change, the hash created at compile time and the hash computed at load time no longer match, and an exception is thrown. Game over for those IL hackers!

It should be noted that for a higher level of trust, a digital signature should be used.
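
For example, Authenticode signing with a certificate (MyCert.pfx is a hypothetical code-signing certificate):

signtool sign /f MyCert.pfx MyAssembly.dll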

Benefits
The biggest benefit I've experienced is enforced version integrity. Without version integrity, you are left not knowing whether the assembly is the correct assembly.

I can't tell you how many times I've run into type mismatch issues because two versions of the same DLL were loaded, or the assembly throws a MissingMethodException. These issues magically disappear with a strong name.

Drawbacks
With all the benefits of strong naming, there are downsides.

Strong-named assemblies can only reference other strong-named assemblies. This causes issues because many third-party and open source projects don't strong name their assemblies. It can be resolved by strong naming the assembly yourself: either build the source and apply a strong name at compile time, or use the command-line tools to add a strong name to the existing binary.

Chasing down non-strong-named assemblies can turn into a yak-shaving event.

It’s yet another piece of the application that must be managed.

How to Strong Name a Non-Strong-Named Assembly

1. Generate a Key File

sn -k sn.snk

2. Generate the IL for an Assembly

ildasm MyAssembly.dll /out:MyAssembly.il

3. Rename the original assembly, just in case you need to revert.

ren MyAssembly.dll MyAssembly.dll.old

4. Create a new assembly with the strong name.

ilasm MyAssembly.il /dll /key=sn.snk
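
5. Optionally, verify the strong name on the new assembly.

sn -v MyAssembly.dll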

Crystal Reports 13 Maximum Report Processing Limit Reached Workaround

In the Visual Studio 2012 version of Crystal Reports 13 there is a threshold that throttles concurrent report jobs, including subreports, to 75 across a machine. This means if there are 5 web applications on a given server, all open reports across all 5 web applications count toward the 75-report limit.

The error manifests itself in different ways and may result in one of the following messages: “Not enough memory for operation.” or “The maximum report processing jobs limit configured by your system administrator has been reached.”

The problem is that the reports are not disposed, so they continue to accumulate until the limit of 75 is hit. To fix this issue, the reports have to be disposed of at the earliest possible time. This sounds simple, but it is not as straightforward as it seems. Depending on how the reports are generated, there are two scenarios: the first is generating PDFs or Excel spreadsheets, and the second is using the Crystal Report Viewer. Each scenario has a different lifetime, which we need to take into account when crafting our solution.

Solution

There are two report lifetimes we have to manage: generated reports (PDFs and Excel spreadsheets) and the Crystal Report Viewer.

PDFs and Excel spreadsheets are generated during the request, so they can be disposed in the Page Unload event. The Crystal Report Viewer is a bit different. It needs to span requests and is stored in session. This makes disposing the viewer's reports a bit challenging, but not impossible.

Disposing the viewer's report in the Page Unload event won't work: the viewer has paging functionality which requests each new page from the server. To get around this issue we implemented a report reference counter. Each time a report is created, it is stored in a concurrent dictionary; when a report is disposed, it is removed from the dictionary. Upon opening a report, we check whether the user already has a report of that type open; if they do, we simply dispose the existing report and open a new one in its place. Other opportunities to dispose reports are on Session End (the user signs out), on Application End, and when the user navigates away from the report page.

Our internal QA team load tested a version without the fix: Crystal Reports fell over at around 100 concurrent connections. After applying the fix, our QA team ran a load of 750 concurrent connections against the servers without any issues.

On a side note, we have encountered latency when disposing reports with multiple subreports. The report factory we use to track and dispose reports is below:

using System;
using System.Collections.Concurrent;
using System.IO;
using System.Linq;
using System.Web.UI;

public static class ReportFactory
{
    static readonly ConcurrentDictionary<string, ConcurrentDictionary<string, UserReport>> _sessions =
        new ConcurrentDictionary<string, ConcurrentDictionary<string, UserReport>>();

/// <summary>
/// Creates a report and registers it for disposal on the page's Unload event.
/// </summary>
/// <typeparam name="T">The report type.</typeparam>
/// <param name="page">The page.</param>
/// <returns>A new report instance.</returns>
public static T CreateReportDisposeOnUnload<T>(this Page page) where T : IDisposable, new()
{
    var report = new T();
    DisposeOnUnload(page, report);
    return report;
}

/// <summary>
/// Disposes on page unload.
/// </summary>
/// <typeparam name="T"></typeparam>
/// <param name="page">The page.</param>
/// <param name="instance">The instance.</param>
private static void DisposeOnUnload<T>(this Page page, T instance) where T : IDisposable
{
    page.Unload += (s, o) =>
    {
        if (instance != null)
        {
            CloseAndDispose(instance);
        }
    };
}

/// <summary>
/// Unloads reports when the user navigates away from the report page.
/// </summary>
/// <param name="page">The page.</param>
public static void UnloadReportWhenUserNavigatesAwayFromPage(this Page page)
{
    var sessionId = page.Session.SessionID;
    var pageName = Path.GetFileName(page.Request.Url.AbsolutePath);

    ConcurrentDictionary<string, UserReport> reports;

    //TryGetValue avoids a race between the existence check and the read
    if (_sessions.TryGetValue(sessionId, out reports))
    {
        var reportsToRemove = reports.Where(r => r.Value.PageName != pageName).ToList();

        foreach (var userReport in reportsToRemove)
        {
            UserReport instance;

            bool removed = reports.TryRemove(userReport.Key, out instance);

            if (removed)
            {
                CloseAndDispose(instance.Report);
            }
        }
    }
}

/// <summary>
/// Creates a report whose lifetime is managed for the Crystal Report Viewer.
/// </summary>
/// <typeparam name="T">The report type.</typeparam>
/// <param name="page">The page.</param>
/// <returns>A new report instance, tracked by session.</returns>
public static T CreateReportForCrystalReportViewer<T>(this Page page) where T : IDisposable, new()
{
    var report = CreateReport<T>(page);
    return report;
}

/// <summary>
/// Creates the report and tracks it in the session's report collection.
/// </summary>
/// <typeparam name="T">The report type.</typeparam>
/// <param name="page">The page.</param>
/// <returns>A new report instance.</returns>
private static T CreateReport<T>(Page page) where T : IDisposable, new()
{
    RemoveOldestReportWhenMoreThan70ReportsFound();

    var sessionId = page.Session.SessionID;

    var reportKey = typeof(T).FullName;
    var newReport = GetUserReport<T>(page);

    //get or create the user's report collection; GetOrAdd is atomic
    var reports = _sessions.GetOrAdd(sessionId, _ => new ConcurrentDictionary<string, UserReport>());

    //check for a report of the same type; remove and dispose it if it exists
    RemoveReportWhenMatchingTypeFound<T>(reports);

    //add the new report to the collection
    reports.TryAdd(reportKey, newReport);

    return (T)newReport.Report;
}

/// <summary>
/// If more than 70 reports are open across all sessions, removes and disposes the oldest one.
/// </summary>
private static void RemoveOldestReportWhenMoreThan70ReportsFound()
{
    var reports = _sessions.SelectMany(r => r.Value).ToList();

    if (reports.Count > 70)
    {
        //order the reports with the oldest on top
        var sorted = reports.OrderBy(r => r.Value.TimeAdded);

        //take the oldest report
        var first = sorted.FirstOrDefault();

        if (first.Value != null)
        {
            var key = first.Key;
            var sessionKey = first.Value.SessionId;

            //close and dispose of the oldest report
            CloseAndDispose(first.Value.Report);

            ConcurrentDictionary<string, UserReport> dictionary;

            if (_sessions.TryGetValue(sessionKey, out dictionary))
            {
                //remove the disposed report from the collection
                UserReport report;
                dictionary.TryRemove(key, out report);
            }
        }
    }
}

/// <summary>
/// Removes and disposes the report if a report with a matching type exists.
/// </summary>
/// <typeparam name="T"></typeparam>
/// <param name="reports">The reports.</param>
private static void RemoveReportWhenMatchingTypeFound<T>(ConcurrentDictionary<string, UserReport> reports) where T : IDisposable, new()
{
    var key = typeof(T).FullName;

    UserReport instance;

    //TryRemove handles the missing-key case; no separate ContainsKey check is needed
    if (reports.TryRemove(key, out instance))
    {
        CloseAndDispose(instance.Report);
    }
}

/// <summary>
/// Removes the reports for session.
/// </summary>
/// <param name="sessionId">The session identifier.</param>
public static void RemoveReportsForSession(string sessionId)
{
    ConcurrentDictionary<string, UserReport> session;

    //TryRemove handles the missing-key case; no separate ContainsKey check is needed
    if (_sessions.TryRemove(sessionId, out session))
    {
        foreach (var report in session.Where(r => r.Value.Report != null))
        {
            CloseAndDispose(report.Value.Report);
        }
    }
}

/// <summary>
/// Closes and disposes the report.
/// </summary>
/// <param name="report">The report.</param>
private static void CloseAndDispose(IDisposable report)
{
    report.Dispose();
}

/// <summary>
/// Creates the UserReport wrapper that tracks a report's owner, page and age.
/// </summary>
/// <typeparam name="T">The report type.</typeparam>
/// <param name="page">The page.</param>
/// <returns>UserReport.</returns>
private static UserReport GetUserReport<T>(Page page) where T : IDisposable, new()
{
    string onlyPageName = Path.GetFileName(page.Request.Url.AbsolutePath);

    var report = new T();
    var userReport = new UserReport
    {
        PageName = onlyPageName,
        TimeAdded = DateTime.UtcNow,
        Report = report,
        SessionId = page.Session.SessionID
    };

    return userReport;
}

/// <summary>
/// Removes all reports.
/// </summary>
public static void RemoveAllReports()
{
    foreach (var session in _sessions)
    {
        foreach (var report in session.Value)
        {
            if (report.Value.Report != null)
            {
                CloseAndDispose(report.Value.Report);
            }
        }

        //remove all the disposed reports
        session.Value.Clear();
    }

    //empty the collection
    _sessions.Clear();
}

private class UserReport
{
    /// <summary>
    /// Gets or sets the time added.
    /// </summary>
    /// <value>The time added.</value>
    public DateTime TimeAdded { get; set; }

    /// <summary>
    /// Gets or sets the report.
    /// </summary>
    /// <value>The report.</value>
    public IDisposable Report { get; set; }

    /// <summary>
    /// Gets or sets the session identifier.
    /// </summary>
    /// <value>The session identifier.</value>
    public string SessionId { get; set; }

    /// <summary>
    /// Gets or sets the name of the page.
    /// </summary>
    /// <value>The name of the page.</value>
    public string PageName { get; set; }
}
}
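
Wiring it up looks something like the sketch below. MyCustomerReport and CrystalReportViewer1 are hypothetical names from a report page's code-behind, and Session_End lives in Global.asax:

//in the code-behind of a report page
protected void Page_Load(object sender, EventArgs e)
{
    //a report exported during the request (PDF/Excel) is disposed on Page Unload
    var exportReport = this.CreateReportDisposeOnUnload<MyCustomerReport>();

    //a report bound to the viewer spans requests, so the factory tracks it
    var viewerReport = this.CreateReportForCrystalReportViewer<MyCustomerReport>();
    CrystalReportViewer1.ReportSource = viewerReport;

    //dispose any tracked reports that belong to a different report page
    this.UnloadReportWhenUserNavigatesAwayFromPage();
}

//in Global.asax, release the user's reports when the session ends
protected void Session_End(object sender, EventArgs e)
{
    ReportFactory.RemoveReportsForSession(Session.SessionID);
}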

Using AutoMapper with Entity Framework between Two EDMX Containers

If you try to map two EF models with AutoMapper, you'll run into an issue when persisting. At run time, EF adds a base EF object to the model, and AutoMapper maps its properties too. If the mapping is between two models from the same context, it's not an issue: the values are the same. If the models live in different EDMX containers, an issue arises, because the EntityState and EntityKey values are also mapped.

AutoMapper does not have a means to ignore inherited properties. This leaves a problem: how does one map EF models between EDMX containers?

To solve this issue, I created a small extension method:

public static class AutoMapperExtension
{
    /// <summary>
    /// Ignores the EF infrastructure properties (EntityKey and EntityState) when mapping EF models.
    /// </summary>
    /// <typeparam name="S">The source type.</typeparam>
    /// <typeparam name="D">The destination type.</typeparam>
    /// <param name="expression">The mapping expression.</param>
    /// <returns>The mapping expression, for chaining.</returns>
    public static IMappingExpression<S, D> IgnoreEFProperties<S, D>(this IMappingExpression<S, D> expression)
    {
        expression.ForMember("EntityKey", o => o.Ignore())
            .ForMember("EntityState", p => p.Ignore());
        return expression;
    }
}
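
Usage is then a single call on the mapping configuration. CustomerA and CustomerB are hypothetical entity types from two different containers, and this assumes the static Mapper API of that era:

Mapper.CreateMap<CustomerA, CustomerB>()
    .IgnoreEFProperties();

var destination = Mapper.Map<CustomerA, CustomerB>(source);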

Var or Not to Var in JavaScript

Using var in JavaScript creates a locally scoped variable. If the var keyword is omitted, the variable is created in the global scope.
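
A quick illustration (non-strict mode, hypothetical names):

function tally() {
    var local = 1; //scoped to tally only
    leaked = 2;    //no var: created on the global object
}

tally();
console.log(leaked);       //2 - leaked into the global scope
console.log(typeof local); //"undefined" - not visible out here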

Don’t forget your var!

Fail Fast

For goodness' sake, don't swallow exceptions! It'll kill you (not really)!!!

Don't try to hobble along. If an exception is thrown, log it, present the user with a "Sorry, but we had a problem…" message assuring them you are on top of the issue, and gracefully close the application or restart it.

In business programming there is no reason to try to "save" the context. It's a disaster waiting to happen. By swallowing an exception, the application is put into an unknown state. Badness can happen when your application is in an unexpected state: anything from data corruption to your application being compromised. One strategy hackers use is to test edge cases, looking for a way in or an error that yields more information for compromising the application.

When your application throws an unhandled exception, log it and close the application. The context may have been lost, but the application lives on.
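
A minimal sketch of this policy in a console application; Log and RunApplication are hypothetical stand-ins for your logging framework and entry point:

using System;

static class Program
{
    static void Main()
    {
        //log anything that escapes, then fail fast instead of limping
        //along in an unknown state
        AppDomain.CurrentDomain.UnhandledException += (sender, e) =>
        {
            Log(e.ExceptionObject as Exception); //hypothetical logger
            Environment.FailFast("Unhandled exception - shutting down.");
        };

        RunApplication(); //hypothetical application entry point
    }

    static void Log(Exception ex) { /* write to your log sink */ }
    static void RunApplication() { }
}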

Comparing Enums With the Same Values

Recently I came across some code where a switch statement was used to convert one enum to another enum. There were over 20 cases in the switch statement. The kicker was that the values were the same. Why didn't they use a single enum? I'm still trying to figure that out.

I wrote some code that eliminated the massive switch statement:

public static TTo MapEnum<TFrom, TTo>(TFrom from)
    where TFrom : struct, IConvertible
    where TTo : struct, IConvertible
{
    System.Diagnostics.Contracts.Contract.Requires(typeof(TTo).IsEnum, "TTo must be an enumerated type");
    System.Diagnostics.Contracts.Contract.Requires(typeof(TFrom).IsEnum, "TFrom must be an enumerated type");

    string fromValue = Convert.ToString(from);

    TTo toEnum;
    bool found = Enum.TryParse(fromValue, true, out toEnum);

    if (!found)
    {
        throw new ArgumentException(
            string.Format("Could not find value {0} from enum {1} in enum {2}.",
                fromValue, typeof(TFrom).Name, typeof(TTo).Name));
    }

    return toEnum;
}
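
Given two hypothetical enums that share member names, usage looks like this:

public enum OrderStatus { Pending, Shipped, Delivered }
public enum LegacyOrderStatus { Pending, Shipped, Delivered }

var status = MapEnum<LegacyOrderStatus, OrderStatus>(LegacyOrderStatus.Shipped);
//status == OrderStatus.Shipped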

Considerations When Throwing Exceptions

A co-worker sent an email with some code he’s struggling with. He’s trying to avoid using try/catches to drive business logic.

The problem is not the try/catches; they're simply a symptom of the problem. Can you spot it? You'll have to make some assumptions, but I have faith you'll come to the same conclusion I came to.

The code is below; I changed it to protect the innocent:

private Customer GetOrCreateCustomer(long customerTelephoneNumberOrCustomerId)
{
    Customer customer;
    try
    {
        customer = this.DoMagic(customerTelephoneNumberOrCustomerId);
    }
    catch (DataException)
    {
        try
        {
            //TODO: I know this isn't ideal. Still thinking of a better way to do this.
            customer = this.GetCustomer(customerTelephoneNumberOrCustomerId);
        }
        catch (DataException)
        {
            customer = this.GetCustomerFromExternal(customerTelephoneNumberOrCustomerId);
            customer.CustomerId = this.CreateCustomer(customer);
        }
    }

    return customer;
}

There is an underlying philosophy in this system that nulls are bad. In most cases where a null could be produced, an exception is thrown instead. At first I did not see a problem with this; I saw it as an architectural decision, an aesthetic. But as I interface with the code, it's apparent to me that it's an architectural mistake.

You might ask, why is throwing an exception in the case of nulls bad?

Below are some guidelines when considering throwing an exception:

  1. The fact that you have to check for the null in order to throw the exception should be a hint that the exception is not needed. It's an expected outcome, thus not exceptional.
  2. Throwing an exception is a resource-intensive operation, one of the more resource-intensive operations in .NET.
  3. An exception is just that, an exception. It's an exception to the assumptions made in the code. When those assumptions are broken, the system must terminate; it cannot move on, because it is in an unknown state (e.g. the database is no longer available). This could also be an attack vector.
  4. Throwing an exception means the upstream call has to be wrapped in a try/catch block to enforce business rules. A null value is a business opportunity to control the flow of the application. The action upon the null value should be taken at the point where a business decision must be made. For example, when a customer variable is null, the UI layer shows the user a message stating that the customer with id ‘1234’ cannot be found. A sketch of this approach follows the list.
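
Here is a minimal sketch of treating the null as a business outcome rather than an exception, using the Try pattern. TryGetCustomer, FindCustomer and ShowMessage are hypothetical names, not my co-worker's API:

//the lookup reports the expected "not found" outcome through its return value;
//exceptions stay reserved for broken assumptions (e.g. the database is unreachable)
public bool TryGetCustomer(long telephoneNumberOrCustomerId, out Customer customer)
{
    customer = this.FindCustomer(telephoneNumberOrCustomerId); //may return null
    return customer != null;
}

//upstream, the business decision is made without a try/catch
Customer customer;
if (!this.TryGetCustomer(customerId, out customer))
{
    ShowMessage(string.Format("The customer with id '{0}' cannot be found.", customerId));
    return;
}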

Hypersonic is Open

Hypersonic started as an attempt to ease the pain of writing CRUD with ADO.NET. My goal was to simplify data persistence and retrieval.

In the early days of .NET, ORMs did not exist; they were a foreign idea, and most companies had their own data access strategy. As a contractor, I'd seen many of these strategies. .Text, released by Scott Watermasysk in 2003, was the first popular open source .NET blogging engine, if not THE first. I studied the code and liked Scott's approach to data access: he had elegantly decomposed the operations needed to persist and retrieve data. I leveraged his approach in Hypersonic.

In early 2006 I finished the first build of Hypersonic. It was one of many tools in my toolbox, and it was not called Hypersonic then. I soon realized the value of what I had created and made it a stand-alone library. Friends began to use it, and before I knew it, it had developed a small following.

Hypersonic is now at version 3. I have successfully used it on many projects over the years, and it is still my tool of choice when developing against stored procedures.

I have decided to open source it. The data access problem has since been solved by frameworks like NHibernate and Entity Framework.

My hope is others will find success using it.

The source is at GitHub.

The binaries are available.

ADO.NET or ORMs?

There is nothing wrong with using ADO.NET. It's what drives all the ORM solutions in .NET. The current attitude is that using raw ADO.NET is a bad thing, but if it gets the job done, what's the issue?

Yet it seems .NET developers are jumping on the ORM bandwagon in droves. ORMs are just one of many tools in the data access toolbox.

In the early 2000s, ORMs took the Java world by storm. Stored procedures were shunned; it was ORM or nothing. Half a decade later, Java developers realized that the best solution uses both an ORM and stored procedures, since each has its strengths.

Use the best tool for the job. ORMs can automate much of the CRUD in an application. Stored procedures are good for adding abstraction, adding a layer of security, and optimizing areas that need to be highly performant.

Choose the best tool for the job.

Isolating the Database When Testing

Jimmy Bogard wrote a post about Isolating the database when testing. I highly recommend reading it.

I’d like to add another option.

When writing tests, each test is responsible for its own data. This means if a test needs a user, it creates its own user and any supporting data. At times this can be challenging, especially as the schema changes. To help mitigate this, we use classes to encapsulate the data creation, as sketched below. When the schema does change, it's simple to update the tests in one centralized location, and in most cases there is zero impact on the logic of the tests.
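
A minimal sketch of such a data creation class; UserBuilder, User and UserRepository are hypothetical names:

//encapsulates test data creation so schema changes are absorbed here,
//not in every test
public class UserBuilder
{
    private string _name = "test-user";
    private string _email = "test@example.com";

    public UserBuilder WithName(string name) { _name = name; return this; }
    public UserBuilder WithEmail(string email) { _email = email; return this; }

    public User Create()
    {
        var user = new User { Name = _name, Email = _email };
        //persist the user and any supporting rows the schema requires
        UserRepository.Save(user); //hypothetical repository
        return user;
    }
}

//in a test:
var user = new UserBuilder().WithName("billing-admin").Create();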

The second part of this is that data is not deleted. This is very important. Data grows and exposes potential performance issues. As time passes, you'll start seeing areas of the application slowing down. These are performance smells: perhaps an index is needed, or a synchronous process should be asynchronous. Many of these performance issues do not expose themselves until you've been in production for a number of weeks. Even load testing may not catch them, and at that point it's too late.