A recent discussion at work got me thinking…
A bug was reported in one of our products, and a co-worker was tasked with finding and fixing it. His approach was to deploy new code from development into QA, reasoning that many bugs had been fixed since the issue was first reported. We might have already fixed it.
At first blush this makes perfect sense. Why worry about something when it could already be fixed? If we can't reproduce it, then there is no issue.
But had we really fixed the bug? It reminded me of a concept from The Pragmatic Programmer: "programming by coincidence."
Had we really fixed the bug? Until we witness the bug and track down its cause, we don't know. Maybe we had, maybe we hadn't.
If we never witness the bug in action, make a change, and then confirm the bug no longer occurs, we can't call it fixed. Even if the newly deployed code doesn't exhibit the symptoms, we don't know whether the bug was actually fixed or the symptoms were simply suppressed.
Recently I've been working with ASP.NET Routing. It has some cool features. I particularly like how the RouteValueDictionary works: give it any object and it reduces its properties to key/value pairs. Awesome! The tokenized routes are another lovely feature, as are the default values.
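A quick sketch of what I mean (the controller/action/id values are just examples I made up):

```csharp
using System.Web.Routing;

// Hand RouteValueDictionary any object and it reflects over the
// public properties, flattening them into key/value pairs.
var values = new RouteValueDictionary(new { controller = "Home", action = "Index", id = 42 });
// values["controller"] holds "Home"; values["id"] holds 42.
```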
A pain point I’ve encountered is route evaluation.
For example, say you have 10 routes registered.
The routes are evaluated in the order they appear, which is logical. Evaluating a route consists of parsing the incoming URL and attempting to extract data from it. If no data matches, null is returned to the routing engine and the next route is tried. The issue I have is that there are no smarts built into route evaluation: it simply iterates over the route collection. If my route is the last one, "kl/x", the first 9 have to be checked before it is reached (I'm assuming all the routes are relative to the root of the site). Why couldn't there be a pre-evaluation pass to eliminate routes? I've requested kl/x; why can't the routing framework determine that routes 8, 9, and 10 are the only close possible matches? Instead of processing 9 routes, we'd process 3.
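To illustrate the linear walk, here is a minimal sketch (the route patterns are invented, and the handlers don't matter for the point):

```csharp
using System.Web.Routing;

var routes = new RouteCollection();
routes.Add(new Route("aa/{id}", new StopRoutingHandler()));
routes.Add(new Route("bb/{id}", new StopRoutingHandler()));
// ... seven more routes ...
routes.Add(new Route("kl/x", new StopRoutingHandler()));

// GetRouteData walks the collection top-down: a request for ~/kl/x
// is matched against every earlier pattern before the last one wins.
```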
This gets worse when custom constraints are added. Say each route has a constraint that looks up a user in the database; the database will be hit 10 times for the same data! Sure, you can throw in caching, but that's not the point.
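A sketch of such a constraint (IUserRepository is a hypothetical data-access abstraction, not part of Routing):

```csharp
using System.Web;
using System.Web.Routing;

public class UserExistsConstraint : IRouteConstraint
{
    private readonly IUserRepository users; // hypothetical repository abstraction

    public UserExistsConstraint(IUserRepository users)
    {
        this.users = users;
    }

    public bool Match(HttpContextBase httpContext, Route route, string parameterName,
                      RouteValueDictionary values, RouteDirection routeDirection)
    {
        // One database round trip every time a route carrying this
        // constraint is evaluated -- even within the same request.
        return users.Exists(values[parameterName] as string);
    }
}
```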
Wrapping this up: Routing works great for 80 percent of cases. It's a good little framework, and with a few tweaks it could go from good to great.
What happens when there is a single, standalone class?
A need arises for a second, slightly different class. A base class is created, and both classes now inherit from it. All is in harmony.
In testing, it is discovered that some of the internal functionality of the base class is needed to test the two implementations. How do you handle this? Expose the internal method as public? Move the internal method off to another class and make it public static? How about re-implementing it?
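One more option worth mentioning: keep the method internal and grant the test assembly access with InternalsVisibleTo (the assembly name below is made up):

```csharp
// In the production assembly, e.g. in AssemblyInfo.cs.
// "MyProduct.Tests" is a hypothetical test assembly name.
using System.Runtime.CompilerServices;

[assembly: InternalsVisibleTo("MyProduct.Tests")]
```

The internals stay internal, and the tests can call them directly without widening the public surface of the classes.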
I’m a bit of a pack rat — I don’t throw anything away, including emails.
In 1998 I started using Microsoft Outlook. For the time it was a great program. The best feature was the integrated MS Word support as the editor. Now that is commonplace, but back then it was the cat's meow.
Over the years I've saved my PST files into an archive that has grown to over 10 gigs. It was time to extract the messages and store them in a more searchable medium. I'll be posting more on what I'm doing later. To my point: I am blown away by the size difference between the PST file format and SQL Server.
Attachments: 8,637 (yes, I have saved every one of the attachments to the database)
PST: 8.9 gigs
SQL Server: 368 MB
I don't know the inner workings of either product. SQL Server is optimized for data storage; Outlook apparently is not. I'm not surprised that SQL Server has a much smaller footprint. What surprises me is the degree to which it is smaller.
We are implementing routes, and there are specific requirements around the tokens. We have 10 to 12 scenarios to test. Getting them wrong is not an option.
Using FakeItEasy (check it out if you haven't; it's a kick-ass little mocking framework), here is the code I used to mock the routing.
Phil Haack has a great article on the subject.
public abstract class RouteTestBase
{
    /// <summary>
    /// Gets the route data. Only pass in relative urls: ~/t/r/HeroRocksf9173ed1c882481fbe9681e8236fc98cu15zc4e3mhm/50006/42006_42006
    /// </summary>
    /// <param name="url">The URL.</param>
    public RouteData GetRouteData(string url)
    {
        // FileExists must return false so routing doesn't short-circuit on physical files.
        var virtualPathProvider = A.Fake<VirtualPathProvider>();
        A.CallTo(() => virtualPathProvider.FileExists(A<string>.Ignored)).Returns(false);

        var routes = new RouteCollection(virtualPathProvider);

        // FakeItEasy auto-fakes the Request property, so we only stub the path.
        var contextBase = A.Fake<HttpContextBase>();
        var request = contextBase.Request;
        A.CallTo(() => request.AppRelativeCurrentExecutionFilePath).Returns(url);