Tuesday, July 27, 2010

Upgrading to ASP.NET MVC 3 Preview 1

July 27, 2010 Posted by Jason, No comments

Earlier today Scott Gu announced the availability of ASP.NET MVC 3 Preview 1. This release includes the beginnings of the Razor view engine as well as a bevy of additional features. The following are some notable links with more details:

Scott Gu

Scott Hanselman

Phil Haack

Maarten Balliauw

Anyhow, I played with the demo and got to wondering how to update an existing ASP.NET MVC 2 project to use the new preview version (obviously I wouldn’t switch production code any time soon…).

It was simply a matter of editing my existing ASP.NET MVC 2 (C#) project file and making the following updates:









<Reference Include="System.Web.Mvc, Version=, Culture=neutral, PublicKeyToken=31bf3856ad364e35, processorArchitecture=MSIL" />


<Reference Include="System.Web.Mvc, Version=, Culture=neutral, PublicKeyToken=31bf3856ad364e35, processorArchitecture=MSIL" />
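The version numbers have been stripped from the two lines above; the actual edit amounts to bumping the version on the System.Web.Mvc reference. Assuming the usual 2.0.0.0/3.0.0.0 assembly versions, the before/after would be roughly:

```xml
<!-- Before (ASP.NET MVC 2) - version number assumed -->
<Reference Include="System.Web.Mvc, Version=2.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35, processorArchitecture=MSIL" />

<!-- After (ASP.NET MVC 3 Preview 1) - version number assumed -->
<Reference Include="System.Web.Mvc, Version=3.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35, processorArchitecture=MSIL" />
```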

Reloading my project, I can confirm that everything builds nicely and I now have the additional option of creating a Razor view as in the below screenshot:


Simple stuff!


Looks like I missed a step. I also had to make a minor modification to my Web.Config file replacing:

<assemblyIdentity name="System.Web.Mvc" PublicKeyToken="31bf3856ad364e35" />
<bindingRedirect oldVersion="" newVersion="" />

with:

<assemblyIdentity name="System.Web.Mvc" publicKeyToken="31bf3856ad364e35" />
<bindingRedirect oldVersion="" newVersion="" />
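In full, the binding redirect section would look something like the following (old/new version numbers assumed to be 2.0.0.0 and 3.0.0.0, since they were stripped from the snippets above):

```xml
<runtime>
  <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
    <dependentAssembly>
      <assemblyIdentity name="System.Web.Mvc" publicKeyToken="31bf3856ad364e35" />
      <bindingRedirect oldVersion="2.0.0.0" newVersion="3.0.0.0" />
    </dependentAssembly>
  </assemblyBinding>
</runtime>
```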

Monday, July 26, 2010

"$.validator.methods[method] is undefined"

July 26, 2010 Posted by Jason, 2 comments

I’ve started developing a new project and have recently been getting into ASP.NET MVC and jQuery a lot more. To make my life easier I decided to use the jQuery Validation library to perform my client-side validation. I created a simple registration form with three fields – username, password, confirm password. The following is the code I am using:




The code uses a remote call to the IsLoginAvailable action of the user controller which in turn checks if the entered username has already been taken and wraps the response in a JsonResult. This appeared to work fine (validating the field when focus was lost) but upon submitting the form I received the following error:


"$.validator.methods[method] is undefined"


After debugging with Firebug I found that the validation was failing on a "data" method. Looking at the code I realized I was specifying the username to pass via the data section in the remote call – this was something I picked up in the examples online and assumed I too would need to use – shame on me!


I removed the data section from my remote call and, lo and behold, everything worked as it should. I was surprised to see that my #username value was still passed correctly to the controller action and the check was performed successfully. Better still, upon submitting the form I no longer receive the "Error: ‘$.validator.methods[…]’ is null or not an object" error… Now to figure out why I would or wouldn’t need the data section!
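For reference, the working setup looks roughly like the sketch below. The form and field names are assumptions; the IsLoginAvailable URL comes from the post itself. The key point is that the remote rule has no data section:

```javascript
// Sketch of the jQuery Validation rules described above. Field names and
// the form id are assumptions; the remote URL comes from the post itself.
// Note: no "data" section on the remote rule -- jQuery Validation sends the
// field's own value (e.g. username=<current value>) automatically.
var registrationRules = {
  rules: {
    username: {
      required: true,
      remote: "/User/IsLoginAvailable" // controller action returning a JsonResult
    },
    password: { required: true },
    confirmPassword: { required: true, equalTo: "#password" }
  },
  messages: {
    username: { remote: "This username is already taken." }
  }
};

// In the page this object would be passed to the plugin:
// $("#registrationForm").validate(registrationRules);
```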

Friday, July 23, 2010

ASP.NET MVC – Object reference not set to an instance of an object

July 23, 2010 Posted by Jason, 2 comments

In the end I have to chalk this down to incompetence and one of those frustrating error messages that tell you absolutely nothing.

I’ve been working on an ASP.NET MVC application and last night decided to use JQTouch to provide an iPhone/Android mobile interface. Doing so should have been extremely straightforward. The planned steps were as follows:

  • Add code in my base ViewEngine to detect if a mobile browser is being used and, if so, provide a mobile view rather than a standard one.

  • Create a number of views that use the JQTouch library to format my pages for mobile devices.
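The first step might look something like the following inside a custom view engine – a hypothetical sketch (the "Mobile/" folder convention and the class name are assumptions, not the actual code):

```csharp
using System.Web.Mvc;

// Hypothetical sketch: serve a mobile view when the browser reports itself
// as a mobile device. The "Mobile/" folder convention is an assumption.
public class MobileAwareViewEngine : WebFormViewEngine
{
    public override ViewEngineResult FindView(ControllerContext controllerContext,
        string viewName, string masterName, bool useCache)
    {
        if (controllerContext.HttpContext.Request.Browser.IsMobileDevice)
        {
            // Try Views/{Controller}/Mobile/{viewName} first
            var mobile = base.FindView(controllerContext, "Mobile/" + viewName, masterName, useCache);
            if (mobile.View != null)
                return mobile;
        }
        return base.FindView(controllerContext, viewName, masterName, useCache);
    }
}
```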


I recently came across the Mobile Browser Definition File on CodePlex, which can be dropped into the App_Browsers folder of an ASP.NET solution in order to provide a bunch of information through Request.Browser. I dumped this into my solution, updated my view engine and implemented a few views with JQTouch. It didn’t take too long and I was feeling pretty good. However, attempting to open my application I received a very generic error:


Object reference not set to an instance of an object

As shown in the below screenshot, not only is the message completely useless, but no source file or code line is specified.




I debugged for ages, but couldn’t even break in the global.asax file – so I couldn’t step through the code to see what was going on. I subsequently tore my code apart, and after a full rollback realized that the mobile.browser file that I had dumped in my App_Browsers folder was causing the error.


Removing this file allowed my application to run; re-adding it caused the message to re-appear. After a little Googling I found the root cause – browser files can no longer go straight into the App_Browsers folder – they must go in a sub-folder…any sub-folder. Had I looked a little closer I would have seen the following on the MDBF FAQ page and saved myself a ton of time and frustration:

Why am I getting "Object reference not set to an instance of an object."?
If after copying the .browser file into your AppBrowsers folder you get a compile error “Object reference not set to an instance of an object.” and you have .NET 3.5 SP1, then the issue is likely that you must create a sub-directory (of any name) in your AppBrowsers folder and place the .browser file in there. E.G. \App_Browsers\MobileBrowserData\mobile.browser

And now all is back to normal

Tuesday, July 20, 2010

VS2010 Power Tools Extensions are AWESOME

July 20, 2010 Posted by Jason, No comments

This morning on my daily commute to work I read ScottGu’s latest post detailing the latest round of updates to the VS2010 Productivity Power Tools extensions. I’m always excited to look at new (or improved) tools that can have a positive impact on my development work and have been very impressed with the power tools thus far. I’m not sure if it is a function of Visual Studio’s extension engine overhaul or simply coincidental, but a lot of really neat new extensions have already been developed for 2010 and to date have been much more useful than their 2008 counterparts.

The powertools are a free extension and worth every penny! They contain such gems as Ctrl+Click to go to definition, Triple Click to select a full line of code, Colorized and orderable tabs, and a whole lot more. Today’s update blows all of that out of the water.

Mr. Gu went into detail so I’ll keep this short. While there are a bunch of new features in the latest release, I am super-excited about the additions to the Solution Explorer (in the form of a new Solution Navigator) and will focus on these. I’d expect to see these in CodeRush or ReSharper – not in a free extension – and I seriously believe that my productivity will increase as a direct result of these updates.

1. View Classes and Members in Solution Explorer

I didn’t know I needed this until today but files in the solution explorer are now further expandable, allowing types, methods and even members to be viewed in the tree. This is pretty sweet and one less context switch to get where I’m going.


2. Searchable Solution Explorer

You heard me right – you can now enter search terms in Solution Explorer! And it works ever so well! Searches will find terms in project names, file names, type names…down to member names. It is fast (even for a couple of large projects I played with) and makes finding what you are looking for super easy.


3. Selectable Root

Solution Explorer can often be overwhelming – especially for big solutions with many projects and deep source trees. This update adds the ability to choose what project/folder/file you want as the root. In the below example I’ve chosen the jni4net.n.10 project as my root. Everything else is hidden and I can focus on only the code I want to see.


Similarly, if I wanted to see only that beneath adaptors I would click the little icon (highlighted) on the right and end up with the view below. Seriously, do you ever remember your solution being so uncluttered? The two icons (highlighted) on the top left are used to go up a level or back to a previous configuration.


While I love the new search capabilities, this is by far my favorite of the new additions. This is the developer’s equivalent to a pair of blinkers!

4. Additional Filters

Last but not least (for this post) is the addition of a number of filters (All/Open/Unsaved/Edited) which allow the Solution Explorer to be filtered to show only files that are currently of interest. Below I’ve chosen Unsaved and see that I have been working on (but am yet to save) one file – Out.cs. Not a big deal, but one that could be useful nonetheless.



While there is no doubt that Microsoft develops amazing software, it often feels like we get a big release once every two or three years and have to be happy with our lot until the next one comes along. This feels different – more akin to the community-driven approach of many OSS projects, where updates are added and codebases refined continually. This is an awesome piece of software and a must-have addition to Visual Studio. It really increases the value of the tool and, no doubt, the efficiency of the developer wielding it! I love it!


July 20, 2010 Posted by Jason, 3 comments

Recently I blogged about my experience with a neat wire-framing tool, Balsamiq Mockups. To recap, I loved (and continue to love) the tool with the exception of one design decision – intentionally keeping the tool rough and low-fidelity. Shortly after publishing the post I received a comment asking if I had tried Napkee for HTML wireframe conversion… Following the anonymous poster’s lead, I managed to procure a review copy of Napkee and am happy that I did. To answer the obvious question: no, Napkee isn’t exactly what I was looking for. But it’s not far off!

Balsamiq/Napkee Affiliation

There does not appear to be any official affiliation between the Napkee and Balsamiq teams; however, the following 2009 Balsamiq blog post does suggest that Balsamiq is open to building a community around their open bmml standard – starting with Napkee:



Napkee, as its website says, lets you export Balsamiq Mockups to HTML/CSS/JS and Adobe Flex 3. In essence it is a post-processor that takes bmml files and turns them into web or desktop apps. In my post on Balsamiq I “implore[d] the Balsamiq team to introduce a post-processing feature that turns a wireframe mockup into something that actually looks like a real screen”. This is precisely what Napkee has done, though I would like to see a third mode (adding to the existing HTML and Flex support) that generates static image files (PSDs would be nice) rather than interactive content.

As is, Napkee has fulfilled my wish to turn Balsamiq mockup files into something more presentable – specifically something that could be presented to a project sponsor or client.

Converting a Mockup

The process of converting an existing mockup to HTML/Flex couldn’t be more straightforward. The UI is extremely simple with very little required interaction. Importing a bmml file is a two-step process (File->Import Balsamiq Mockup Files), at which point the file is processed and converted to the project nature of choice (see the big WEB and FLEX 3 buttons on the top right-hand side?).

I created a new Napkee project named Mockups for Blogpost and added the same datagrid used in my Balsamiq post. The below screenshots show how Napkee converts these controls to HTML and Flex respectively.



The Preview pane is fully interactive, so if there is a control on the page (for instance, the Flex datagrid is sortable and has a scrollbar) one can immediately test how these controls react to user input. You’ll notice that there are multiple tabs beside the preview tab – HTML/CSS/JS/BMML for web mode and MXML/AS/BMML for flex mode. These are (sadly) read-only views where you can see the markup (and existing bmml) for your mockup. While it is probably outside of the scope of the application, it would be great if these tabs acted as editors for the relevant source files, allowing HTML/CSS/etc. to be tweaked within the application and saved as part of the project.

While it seems like a trivial point, I love that markup is broken out into separate files. It may stand to reason, but I can’t count the number of code generation tools I’ve seen that package up all source code into a single unholy mess of a markup file. Kudos to Napkee for being developer friendly here.

Exporting a project is as simple as clicking the novelty-sized Export Project button in the mockup files menu on the bottom-left of the screen, resulting in a well structured folder containing the newly generated source files.

One feature I’m a little disappointed by is the support for providing additional CSS files to a project. Ideally this would give the user the ability to customize the output HTML with their own CSS styles. It works…to a point. The problem, as far as I can see, is that there is no obvious way to remove Napkee’s existing CSS styles – i.e. you cannot tell Napkee to emit only raw HTML. When I added my own CSS styles, some were overridden by Napkee’s, leaving me with an ugly matrimony of styles. This isn’t a dealbreaker by any means, but I’d love to see it addressed and am guessing it would be a relatively simple change.


More Advanced Mockups

Mockups To Go is still a great source of Balsamiq mockup templates and, to put Napkee through its paces, I downloaded two of the larger templates and converted both into Web and Flex pages.


Facebook Fan Page

Original Balsamiq Mockup


Napkee Web


Napkee Flex


Wordpress 3.0 Beta Admin Comment List

Original Balsamiq Mockup


Napkee Web


Napkee Flex


As you can see in the above images, I have highlighted in yellow some major discrepancies between the originals and their web/flex counterparts. Most of these are in the form of inconsistent layout and, in the case of the Flex conversion, fields whose markup was not converted at all. While this sample size is way too small to definitively point to a problem with Napkee itself, it definitely looks like the bmml processor needs a little work.

That said, there may be cases where the Balsamiq mockup was created outside of recommended practice. Take, for instance, the yellow row in the Wordpress mockup. Rather than a colored row, it looks like a yellow rectangle was placed right on top of the grid to give the illusion that the row is colored. In this and other cases Napkee can only convert what it sees, and the tool cannot be blamed for a bad implementation. Garbage in, garbage out, so to speak…

In Summary

Napkee has no learning curve whatsoever and converts Balsamiq mockups to their prettier and more functional form without requiring any thought or real effort. As complexity of mockups increases Napkee’s results become somewhat of a mixed bag and it appears that the bmml processor could do with some updates to its layout algorithms. However, when it works it works really well and all-in-all I think the tool is a great addition to Balsamiq Mockups. At $49 it is relatively inexpensive and if you charge by the hour will probably pay for itself pretty quickly. I, for one, look forward to seeing how Napkee improves and evolves over time.

Monday, July 19, 2010

SQL Server – Excel Import and Mixed Data Types

July 19, 2010 Posted by Jason, No comments


I am currently managing a project that, on top of its significant development investment, requires a large amount of data gathering up-front. One component of this effort is importing information pertaining to each of the client’s physical clinics, parsing the data and pre-populating our database with the data necessary to go live on day one. Collecting this data will take a relatively substantial time investment which, fortunately, will be undertaken by the client and not myself! I am, however, responsible for executing the import scripts to perform the data insertions and, as the first completed Excel spreadsheets have begun to trickle through, I’ve discovered some issues with the process.

Problem #1 – Mixed Data Types

Loading the first spreadsheet, everything appeared to execute fine, but I noticed that for one of our columns – which can be textual/numeric/alphanumeric – data is not always imported (when the column contains mixed data types). Specifically, some rows have a value while others are NULL. Here is a simple sample query:

Select Name, Nickname

from OPENROWSET('Microsoft.Jet.OLEDB.4.0',

'Excel 8.0;Database=C:\015;',


My investigation revealed that the Excel connection manager scans the first 8 rows to determine the type of a specific column. In my case the first 8 rows are numeric, therefore alphanumeric rows are ignored. This is a problem because I will have hundreds of spreadsheets to import and cannot guarantee the ordering. Frankly, I would prefer if the import broke rather than providing me with seemingly good data that is actually bad… Had I had an off-day, this issue may have gone unnoticed until release day!

The fix I found requires a couple of registry settings to be changed as well as an update to the OPENROWSET connection string.

The first registry setting is named TypeGuessRows and accepts values from 0-16. This setting is essentially an override for the 8 rows used to determine the datatype of the column. 1-16 specify the number of rows to use to determine the column type, 0 is used to scan all rows - in my case I cannot be sure that 16 rows would be a large enough sample size to see both numeric and textual content so I set the registry setting to 0.

The second setting is named ImportMixedTypes and essentially tells the driver what to do when mixed types are found – in my case I want them converted to text and set the registry value to Text. In order to read this registry setting I also set the IMEX variable in my connection string to 1 as in the example below.


HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Jet\4.0\Engines\Excel\TypeGuessRows

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Jet\4.0\Engines\Excel\ImportMixedTypes
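Put together as a .reg file, the two changes look something like this (assuming the 32-bit Jet key path shown above; on 64-bit Windows the key may live under SOFTWARE\Wow6432Node instead):

```
Windows Registry Editor Version 5.00

; 0 = scan every row when guessing a column's type (1-16 = scan that many rows)
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Jet\4.0\Engines\Excel]
"TypeGuessRows"=dword:00000000
; Treat columns containing mixed types as text (honored when IMEX=1)
"ImportMixedTypes"="Text"
```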




Select Name, Nickname

from OPENROWSET('Microsoft.Jet.OLEDB.4.0',

'Excel 8.0;Database=C:\015;IMEX=1;',
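Note that the snippets above are truncated – OPENROWSET takes a third argument, a pass-through query naming the worksheet. A complete statement would look roughly like this (the workbook path and sheet name here are assumptions for illustration):

```sql
-- Sheet name ([Sheet1$]) and workbook path are assumed for illustration.
SELECT Name, Nickname
FROM OPENROWSET('Microsoft.Jet.OLEDB.4.0',
                'Excel 8.0;Database=C:\Clinics\015.xls;IMEX=1;',
                'SELECT Name, Nickname FROM [Sheet1$]');
```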


Problem #2 – 64 bit OPENROWSET

After researching the issue and figuring out what needed to be done I felt pretty good – however, executing the statement against a SQL Server 2008 database on my local (x64) machine yielded the following error message:

Msg 7308, Level 16, State 1, Line 17

OLE DB provider 'Microsoft.Jet.OLEDB.4.0' cannot be used for distributed queries because the provider is configured to run in single-threaded apartment mode.

Any hopes of a quick (registry) fix on this one quickly disappeared. The issue is caused by the lack of a 64 bit JET driver and I was led to the following solution(s) on the Microsoft SQL Server forum:


Option 1:
Use 32 bit SQL Server on the 64 bit machine….

Option 2:
Build a bridge out of SQL Express.  Keep your main SQL Server instance 64 bit, but also install SQL Express 32 bit Side by Side…

Option 3:
Reverse the flow of data from pull to push.  Instead of having SQL Server pull data from the source that only supports 32 bit clients, push data from that source to SQL Server (as it supports 32/64 bit clients).


The second option suggests creating a linked-server bridge between the 64-bit SQL instance and a 32-bit SQL Express instance. While the bulk of our development work is performed against large centralized databases, the moment I heard the phrase “registry setting change” I knew I’d be developing on my local box and then porting the data to testing (and then production). The next logical step was to install SQL Server Express (x86) on my local box and use this database to gather the initial temporary data. I will then export the data to our testing servers in order for it to be parsed and our database populated.

ASP.NET MVC Membership Provider Issues

July 19, 2010 Posted by Jason, 1 comment

This afternoon I struggled a little with an ASP.NET MVC 2.0 issue. I’m using NHibernate’s export schema capabilities to create my database schema. I have an NUnit project with a bunch of different tests with the sole purpose of re-creating the database and populating some dummy data so that I do not have to do so by hand. I wanted to take this a step further and create some users in my system using Microsoft’s in-built MembershipProvider. Having set up my tests, I received the following message:

System.TypeLoadException : Could not load type 'OnlineAuctions.Core.Classes.BuyerMembershipProvider' from assembly 'System.Web, Version=, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a'.

Obviously in this case the system is looking for my provider in the System.Web assembly, not my OnlineAuctions.Core assembly. My config file entry was as follows:


<add name="BuyerMembershipProvider" applicationName="OnlineAuctions" connectionStringName="mydevpc" minRequiredNonAlphanumericCharacters="0" passwordFormat="Encrypted" enablePasswordRetrieval="false" enablePasswordReset="false" requiresQuestionAndAnswer="false" requiresUniqueEmail="true" encryptionKey="aaavvvvaaabbbb" type="OnlineAuctions.Core.Classes.BuyerMembershipProvider" />

In retrospect this was a simple issue, but it took a little bit of playing before I spotted it. Can you see the problem? Yes, I am specifying my type in the type attribute – but I am not specifying the assembly in which the type resides. Adding the assembly, as in the below example, fixed the issue. Huzzah!


<add name="BuyerMembershipProvider" applicationName="OnlineAuctions" connectionStringName="mydevpc" minRequiredNonAlphanumericCharacters="0" passwordFormat="Encrypted" enablePasswordRetrieval="false" enablePasswordReset="false" requiresQuestionAndAnswer="false" requiresUniqueEmail="true" encryptionKey="aaavvvvaaabbbb" type="OnlineAuctions.Core.Classes.BuyerMembershipProvider, OnlineAuctions.Core" />

Saturday, July 17, 2010

Generic ThrowIfNull (C#) Helper Method

July 17, 2010 Posted by Jason, No comments

For a while now I’ve wanted to write a helper to beautify code like the following:

if(account == null)
throw new InvalidAccountException();

My code seems to be littered with such calls to the point that it feels like unnecessary bloat. While I’m sure there are many ways to get around this (I’d love to hear what they are), in this post I’ll describe a generic ThrowIfNull helper method I’ve created. I’ve seen similar attempts where, by default, a helper method throws an ArgumentNullException, but I wanted to be more specific with the exception being thrown. Essentially I want to pass an exception type and a value and, if the value is null, have the exception type in question thrown. Specifically, this is the desired signature:

ThrowIfNull<T>(Type exception, T value)

Obviously this isn’t a terribly difficult goal to achieve, but previous efforts have led me to worry about the performance impact of using reflection to instantiate the desired exception at runtime. As it happens, tonight I came across (related) posts by Roger Alsing and Vagif Abilov citing performance differences when using compiled lambda expressions instead of reflection (specifically Activator.CreateInstance and ConstructorInfo.Invoke) to instantiate objects at runtime.

This reading rekindled my desire to create a ThrowIfNull helper and, using Roger’s ObjectActivator function, I was able to create a generic ThrowIfNull function that takes an exception type and a value. If the value is null, we instantiate an instance of the passed exception type. Essentially I get the ConstructorInfo for the standard (String, Exception) constructor and pass it to the ObjectActivator function. This function creates an expression that will call the constructor using the parameters we defined. A lambda is created using this expression and then the lambda is compiled.

Once control returns to the ThrowIfNull method, the lambda is executed and passed the name of the type of our value (resulting in an “Account is null” exception message). The helper can be invoked as follows:

 ExceptionHelpers.ThrowIfNull(typeof(InvalidAccountException), account);

Below is the entire class:

public static class ExceptionHelpers
{
    public static void ThrowIfNull<T>(Type exception, T value) where T : class
    {
        if (value == null)
        {
            var types = new Type[2];
            types[0] = typeof(string);
            types[1] = typeof(Exception);

            var constructorInfo = exception.GetConstructor(types);
            var createdActivator = GetActivator<Exception>(constructorInfo);

            var instance = createdActivator(typeof(T).Name + " is null", null);
            throw instance;
        }
    }

    private delegate T ObjectActivator<T>(params object[] args);

    private static ObjectActivator<T> GetActivator<T>(ConstructorInfo ctor)
    {
        ParameterInfo[] paramsInfo = ctor.GetParameters();

        //create a single param of type object[]
        ParameterExpression param = Expression.Parameter(typeof(object[]), "args");

        var argsExp = new Expression[paramsInfo.Length];

        //pick each arg from the params array
        //and create a typed expression of them
        for (int i = 0; i < paramsInfo.Length; i++)
        {
            Expression index = Expression.Constant(i);
            Type paramType = paramsInfo[i].ParameterType;
            Expression paramAccessorExp = Expression.ArrayIndex(param, index);
            Expression paramCastExp = Expression.Convert(paramAccessorExp, paramType);
            argsExp[i] = paramCastExp;
        }

        //make a NewExpression that calls the
        //ctor with the args we just created
        var newExp = Expression.New(ctor, argsExp);

        //create a lambda with the NewExpression
        //as body and our param object[] as arg
        var lambda = Expression.Lambda(typeof(ObjectActivator<T>), newExp, param);

        //compile it
        var compiled = (ObjectActivator<T>)lambda.Compile();
        return compiled;
    }
}

Right now all (anecdotal) evidence indicates that this method runs quite quickly but I plan to run some performance tests over the weekend to determine if this is truly the case.
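In lieu of those tests, a rough micro-benchmark might look like the sketch below – a hypothetical harness, not actual results. It assumes GetActivator from the class above has been made accessible, and uses InvalidOperationException purely as a stand-in exception type:

```csharp
using System;
using System.Diagnostics;

// Hypothetical timing sketch: compiled-lambda activator vs Activator.CreateInstance.
var ctor = typeof(InvalidOperationException)
    .GetConstructor(new[] { typeof(string), typeof(Exception) });
var activator = ExceptionHelpers.GetActivator<Exception>(ctor);

var sw = Stopwatch.StartNew();
for (int i = 0; i < 1000000; i++)
    activator("Account is null", null);
Console.WriteLine("Compiled lambda:          " + sw.ElapsedMilliseconds + "ms");

sw.Restart();
for (int i = 0; i < 1000000; i++)
    Activator.CreateInstance(typeof(InvalidOperationException), "Account is null", null);
Console.WriteLine("Activator.CreateInstance: " + sw.ElapsedMilliseconds + "ms");
```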

I’d love to hear feedback as to whether or not this is an appropriate approach and what better approaches exist in the wild. Until next time…


The obvious logical progression is to turn this helper into an extension method. The signature becomes:

ThrowIfNull<T>(this T value, Type exception)

Calling syntax becomes:

account.ThrowIfNull(typeof(InvalidAccountException));

Friday, July 16, 2010

No pg_hba.conf entry for host

July 16, 2010 Posted by Jason, 3 comments

This morning a colleague and I finally had the chance to update our existing Mirth configuration to use a database other than the bundled Derby DB. While Derby worked fine for a while, we quickly exceeded the size (around 16GB) at which it was able to perform adequately. Mirth themselves recommend using a different database; I assume Derby is bundled as a way to get people off the ground, without it ever being meant for long-term use.

My organization's use of Mirth is still in its infancy and, while we're a Microsoft shop and use SQL Server for all of our database needs, we really didn't want to purchase a SQL Server license for this purpose. We opted instead to use PostgreSQL and installed a copy on a virtual server. We ran through the default steps for setting up a Mirth database, including database user creation and execution of the PostgreSQL table creation script bundled in the Mirth installation folder. It looked like smooth sailing until we restarted the existing Mirth service and received the following message:

The Mirth Service could not be started

Contrary to the message the Mirth service did in fact restart. However, we were unable to log into Mirth despite the myriad username/password combinations we tried.

Looking at Mirth’s latest log file we found the following error:

org.postgresql.util.PSQLException: FATAL: no pg_hba.conf entry for host "myhost", user "myuser", database "mymirthdatabase", SSL off

Doing a little research, we discovered that this error meant we were hitting the correct server, but the PostgreSQL installation was not configured to accept remote connections. I assume, but cannot say for certain, that if our Mirth database and Mirth application server were on the same box then we wouldn’t have had this issue.

We found that there were two files of interest in our postgresql setup.

· pg_hba.conf

· postgresql.conf

These files are packaged in the sample folder of the PostgreSQL installation. However, it is key to note that (on Windows at least) the files that must be edited are contained in the data folder (i.e. wherever you specified your data would be stored during the initial installation) – in our case D:\mirth_data.


In the sample file the listen_addresses, port and max_connections settings were commented out; in the data folder, however, they appeared correct by default, essentially listening on all available addresses on port 5432. If you run into this issue and listen_addresses does not contain * – or, worse still, is commented out entirely – then this is definitely something you’ll need to change.

listen_addresses = '*'

port = 5432

max_connections = 100


The crux of the issue, as specified in the error message above, was that no entry was defined for the calling server (i.e. the Mirth application server) in our list of PostgreSQL hosts. Our Mirth database server does not face any public networks (just our own intranet), so we were able to add the following line to allow communication from any host:

host all all md5
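For reference, a full pg_hba.conf entry has five columns – connection type, database, user, client address and auth method – so an "md5 from any IPv4 host" rule would look something like the sketch below (tighten the address range to your own intranet where possible):

```
# TYPE  DATABASE  USER  ADDRESS      METHOD
host    all       all   0.0.0.0/0    md5
```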

A simple PostgreSQL service restart, then a Mirth service restart, and we were back in business with our shiny new database!

Getting started with NDepend (part 2 of n) – Digging up Roots

July 16, 2010 Posted by Jason, No comments

A while back I posted the first in a series of posts on NDepend – a static code analysis tool for .NET code. Today it is time to add a second post to the series. If you’re new to NDepend, take a quick look at my previous post, which should provide a decent understanding of what it can do…


I have an ASP.NET 3.5 ecommerce website running in production on the internet for a third party who will remain nameless. The site itself is based on the open source nopCommerce ecommerce application, which originally allowed me to avoid the plumbing work and focus my energy on implementing features valuable to the customer. Since its launch it has done quite well and has required little-to-no maintenance on my part – a happy fact!

What I have noticed, unfortunately, is that I’ve (subconsciously, I think) avoided adding new features or performing any major refactoring since the site’s inception – based, I think, on the fact that the vast majority of the source code is not mine and there is a relatively large cognitive effort required when making changes. A weak excuse, I know, but an excuse nonetheless.

My goal is to refactor the code in increments, starting with a very high level refactoring – this post – in which unused assemblies/types will be removed permanently in order to make my codebase leaner. To help me on my way I will use the newly released NDepend 3.x to identify and remove dependencies.

Please note that I really like nopCommerce. However, the trade-off when using an open source ecommerce platform is that what you gain in speed-to-market you lose in simplicity. Simply put, such solutions are one-size-fits-all and by definition do more than is needed. I guess this is an example of KISS and YAGNI after the fact, as there is a TON of code that I am not using and will never use – and I’m going to remove it all. I’m not looking for performance gains (though I’ve heard anecdotal evidence that build times are impacted by the number of projects in one’s solution, and not just the total amount of code…so maybe future builds will be a little quicker. YAY!) and I know that the disk space savings are negligible. My goal is to reduce the size of the codebase to facilitate easier refactoring in the future.


NDepend 3.x

While writing my previous post on the subject I was using version 2.x of NDepend – the most up-to-date version available at the time. Since then version 3.x has hit the shelves and it is a massive improvement to an already great tool. Right at the top of my wish list when writing that post was tighter integration of NDepend within Visual Studio. See the below excerpt:

While I definitely see NDepend as a standalone product rather than a VS add-in, I would love to have access to a number of its features within my regular workflow without having to switch between tools.

I’m happy to say that the NDepend team was way ahead of me. In version 3.x one can now access all of the NDepend functionality from Visual Studio – including writing and executing CQL queries and viewing dependency matrices and graphs. While I’m using Visual Studio 2010 the same functionality is available in VS2005 and VS2008 and is awesome!


Getting Started

For this post I have identified two projects within my solution that I do not need. The fact of the matter is that, due to the nature of the ecommerce store, shipping via UPS and/or USPS is impossible. I will never use this code. Ever!

On a side note, even if there was a chance that I would use this code in the future I would still remove it – it is redundant, plain and simple. I can always go back to source control later in the unlikely event that I do need it…


Right-clicking on either project in Visual Studio presents me with a plethora of options; I’m most interested in seeing which types are directly using these assemblies.


Choosing Select Types->…that are using me (directly or indirectly), the CQL Query Editor pops up, pre-populated with the relevant query. The query is executed and, interestingly, it tells me that Nop.Shipping.USPS is not currently used by any assemblies. See below…
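For reference, the pre-populated query NDepend generates looks something like the following (a sketch from memory – the exact CQL syntax may differ slightly between NDepend versions):

```
// Types anywhere in the application that use the
// Nop.Shipping.USPS assembly, directly or indirectly
SELECT TYPES WHERE IsUsing "ASSEMBLY:Nop.Shipping.USPS"
```

Because the query editor is live inside Visual Studio, you can tweak and re-run a query like this as fast as you can type.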


There are no dependencies on Nop.Shipping.USPS, so the project can be permanently removed from my solution. The project itself is tiny, with only 138 lines of code and 3 types, but it is still a little less code that I need to maintain.

It seems a little strange to have an assembly that isn’t used at all, but looking at the metrics for this assembly in the NDepend Info view we can see that its afferent coupling is zero, lending credence to our original finding that the assembly is unused. (Afferent coupling expresses the number of namespaces that depend on the namespace in question, whereas efferent coupling expresses the number of namespaces the namespace in question depends on.)
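Since afferent coupling is itself a CQL metric, you could also hunt for candidate dead assemblies in one pass with a query along these lines (again a sketch – check the exact metric name against your NDepend version):

```
// Application assemblies that no other assembly depends on
SELECT ASSEMBLIES WHERE AfferentCoupling == 0
```

Bear in mind that top-level assemblies, such as the web application itself, legitimately have an afferent coupling of zero, so the results still need a human sanity check.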


Taking a quick look at the dependency matrix (which I’ll come back to in a bit) I can see that the Nop.Shipping.USPS assembly depends on 6 namespaces within the Nop.DataAccess assembly – but there are no dependencies on Nop.Shipping.USPS itself.


At this point I’m comfortable that I can safely remove Nop.Shipping.USPS from my solution and I’m 138 lines and one whole project lighter.

[Aside: The version of NopCommerce I originally used did not bundle any unit tests, therefore I’m flying a little blind. I can confirm that the solution builds but runtime errors are still possible. Right about now I would LOVE some unit tests for my solution]


I’m done with USPS, but now it is time to move on to its sibling, UPS. We’ll take the same approach as before, but this time there should be a bit more meat (I got a sneak peek and know that this assembly is used!).

As usual I use the Select Types->…that are using me (directly or indirectly) shortcut to get a high level overview of what is going on. This is about the quickest way to get up and running, and it makes it easy to see what I’m getting myself into (i.e. should I wait until after lunch?).


This time my results are a little different: I can see that one assembly depends on Nop.Shipping.UPS (i.e. its afferent coupling is 1). Changing the CQL query a little, I can run the same dependency check on a type and method basis and see that a single type and 4 methods depend on Nop.Shipping.UPS.


Looking at the methods that use this assembly, we see that the depth of using is 1 for BindData and Save, but 2 for Page_Load. This means that BindData and Save have a direct dependency on the assembly, while Page_Load’s dependency is indirect. Based on the method naming (specifically Page_Load) we know that this type is in fact the code-behind for an ASPX page.
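For the curious, the method-level variation of the query looks roughly like this (a sketch – in CQL the DepthOfIsUsing metric is what separates the direct dependencies, depth 1, from the indirect ones):

```
// Methods depending on Nop.Shipping.UPS, deepest dependencies first
SELECT METHODS WHERE IsUsing "ASSEMBLY:Nop.Shipping.UPS"
  ORDER BY DepthOfIsUsing "ASSEMBLY:Nop.Shipping.UPS" DESC
```

Swapping METHODS for TYPES gives the type-level result mentioned above.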

The next couple of steps are technically unnecessary, but I like to be well informed – especially before removing code – so I look at a couple more views. Copying the assembly to the horizontal header of the dependency matrix, I am able to drill down very deeply and see that it is indeed being used by two methods in the ConfigureShipping type in the NopCommerceStore assembly.


Looking at the above screenshot it is clear that the ConfigureShipping type is specific to UPS shipping (look at the namespace) and specifically is a function visible only from the administration section of the site (again, look at the namespace).

At this point I jump to the source code by right-clicking on the ConfigureShipping type and choosing “Open one of my 2 declarations in source code (double click)”.


Here I see exactly what I suspected – this is indeed the code-behind for an ASPX page, and the page itself is specific to UPS shipping configuration (rather than being a more generic and encompassing shipping page).

At this point I know the following:

  1. Only one assembly uses the Nop.Shipping.UPS assembly
  2. Within that assembly, only one type uses the Nop.Shipping.UPS assembly
  3. The type is the code-behind of a very specific ASPX page

Before removing this assembly there is one thing I need to know:

  1. Is this ASPX page ever used?

The answer to this question is no, the ASPX page is never used. I know this because I originally removed all ties to shipping configuration from the administration section of the site. I confirmed this in two ways: 1) using NDepend’s matrix view to look at dependencies on ConfigureShipping (the exact same approach as above, so I won’t repeat myself), and 2) using another tool (Resharper’s Find Usages Advanced, in my case) to find usages of the page itself.

At this point I am ready to remove the ConfigureShipping page itself as well as the Nop.Shipping.UPS assembly.


This was very much a 101-level introduction to removing unneeded code with NDepend. While writing this post took some time, removing the code was extremely fast – between 5 and 10 minutes. Everything I did was inside Visual Studio and I never had to switch context – for me this is a massive improvement over the previous version of NDepend, which I already really liked. For the cost of 10 minutes I removed about 500–600 lines of code – including an ASPX file (basically an entire namespace) and two assemblies.

My plan for the next few posts in the series is to ratchet up the difficulty level incrementally and start refactoring code with more widespread dependencies. Stay tuned!

Monday, July 5, 2010

More MSTest Woes – Location of the file or directory is not trusted

July 05, 2010 Posted by Jason , , No comments

This one was easier to investigate than the issue in my previous post. To be fair, this one is not the fault of Visual Studio or MSTest. Running my tests, I began to see the following error message:

Failed to queue test run 'jirwin@mypc 2010-07-02 14:22:48': Test Run deployment issue: The location of the file or directory 'c:\mypath\bin\debug\Microsoft.Practices.Unity.dll' is not trusted.

A number of posts online suggested changing the trust levels for specific directories but, having used Vista and now Windows 7 for some time, I correctly figured that the libraries in question (in this case Unity) were downloaded from the internet and therefore were not trusted.

Looking at the properties of the files (the above error occurred for a number of libraries) I was relieved to see that my hunch was correct and I would not need to touch my security settings. Unblocking the files (see the screenshot below) fixed the issue immediately. Happy days!
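As an aside, if a whole bin folder of downloaded libraries is blocked, unblocking the files one by one through the properties dialog gets tedious. The “blocked” flag is really just a Zone.Identifier alternate data stream on each file, and the Sysinternals streams.exe utility can strip such streams in bulk, something along these lines (a sketch – the path is just my example from the error above, and you should double-check the flags against the streams documentation before running it):

```
REM Delete NTFS alternate data streams (including Zone.Identifier,
REM which marks files as downloaded from the internet) from the DLLs
streams.exe -d c:\mypath\bin\debug\*.dll
```

Deleting the stream has the same effect as clicking Unblock for each file.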


MSTest - Exception has been thrown by the target of an invocation

This week, when attempting to run MSTest unit tests in Visual Studio, I ran into an interesting issue. Whether running the tests from Resharper’s test runner or Visual Studio’s built-in test views, all tests failed with the following error message:

Exception has been thrown by the target of an invocation

This is clearly a pretty unhelpful error message – the equivalent of “something is wrong somewhere – please fix it”. Other members of my team were able to run the tests without issue, and I was able to run the full suite only last week – leading to the obvious conclusion that my PC’s configuration was causing the issue.

My only recent installation of note has been Team Explorer (2010) for Visual Studio 2008 and, while I’m not sure why this would break my tests (I’m guessing some team build hooks were causing problems…), it was a good starting point.

Opening the Team Explorer window I noticed that I was no longer connected to a TFS server. I manually connected to our central TFS server and voila, problem solved. All my tests now run successfully without issue. I’d love to know what the root cause of this issue was, but for now I’m happy to have my tests up and running again! Anyone have any ideas why TFS would cripple my MSTest unit tests?