In the first part of the checklist, we looked at creating high quality websites from a client perspective and the tools that help us do that. In this part we look at the (free) tools that help us build high quality into the server side of the website.

Code quality

Treat compiler warnings as errors

When you compile your solution in Visual Studio, it will by default allow compiler warnings. A compiler warning occurs when there is a problem with the code, but nothing that will result in severe errors. Such a warning could be a variable that is declared but never used. These warnings should always be treated as errors, because letting them slide allows you to produce bad code. Keyvan has written a post about how to treat compiler warnings as errors.
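As a quick illustration (class and variable names are made up), the snippet below compiles with only a warning. With warnings treated as errors – the Build tab of the project properties, or the TreatWarningsAsErrors element in the project file – the same code fails the build.

// Compiles with warning CS0219: the variable is assigned but its value is never used.
public class OrderProcessor
{
    public decimal GetTotal(decimal price, int quantity)
    {
        decimal discount = 0m;
        return price * quantity;
    }
}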

StyleCop

The StyleCop Visual Studio add-in analyzes your C# code and validates it against a large set of rules. The purpose of the tool is to force you to build maintainable, well-documented code using consistent syntax and naming conventions. I’ve found that most of the rules are about maintainability and consistency. After using StyleCop on my latest project, I will never build a C# project without it again.
 
Some of the rules might seem strange at first glance, but when you give them a closer look you’ll find that they actually make a lot of sense.
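To give an idea of what the tool asks for, here is a small sketch that follows a few of the typical conventions (the class is made up, and the rule IDs are the usual defaults as far as I recall):

namespace Example.Orders
{
    // SA1200: using directives go inside the namespace.
    using System;

    /// <summary>
    /// Calculates order totals. SA1600 requires XML documentation on public elements.
    /// </summary>
    public class OrderCalculator
    {
        private readonly decimal taxRate;

        /// <summary>
        /// Initializes a new instance of the <see cref="OrderCalculator"/> class.
        /// </summary>
        /// <param name="taxRate">The tax rate to apply.</param>
        public OrderCalculator(decimal taxRate)
        {
            // SA1101: instance members are prefixed with "this."
            this.taxRate = taxRate;
        }

        /// <summary>
        /// Returns the total including tax.
        /// </summary>
        /// <param name="subtotal">The order subtotal.</param>
        /// <returns>The total including tax.</returns>
        public decimal GetTotal(decimal subtotal)
        {
            return subtotal * (1 + this.taxRate);
        }
    }
}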

FxCop

This tool should be familiar to most .NET developers by now. It has existed for a long time and is now at version 1.36. FxCop doesn’t analyze your C# code but the compiled MSIL code, so it can be used with any .NET language. Some of the rules overlap with StyleCop’s, but FxCop also helps you write more robust methods that result in fewer errors.

If you use StyleCop and do proper unit testing, you might not need FxCop, but it’s always a good idea to run it on your assemblies, just in case. Here's a guide to using FxCop in website projects. If you own Visual Studio Team Edition, you already have FxCop built in.
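An example of the kind of robustness rule FxCop enforces (the rule ID is quoted from memory, the helper class is made up): CA1062 tells you to validate the arguments of public methods before using them.

using System;

public static class SlugHelper
{
    // CA1062 (Validate arguments of public methods) flags a public method
    // that dereferences a parameter without checking it for null first.
    public static string ToSlug(string title)
    {
        if (title == null)
        {
            throw new ArgumentNullException("title");
        }

        return title.Trim().ToLowerInvariant().Replace(' ', '-');
    }
}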

Security

Anti-Cross site Scripting (XSS) Library

The Anti-XSS library by Microsoft is not just a fancy way to HTML encode text strings entered by users. It uses white-listing, which is much more secure than simply trusting any input and HTML encoding it in the response. It works with JavaScript, HTML elements and even HTML attributes.
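A small sketch of how it is used – the wrapper class is made up, and the method names are from the Microsoft.Security.Application namespace as I remember them, so check the version you download:

using Microsoft.Security.Application;

public static class SafeOutput
{
    // White-list based encoding for different output contexts.
    public static string ForHtml(string userInput)
    {
        return AntiXss.HtmlEncode(userInput);
    }

    public static string ForHtmlAttribute(string userInput)
    {
        return AntiXss.HtmlAttributeEncode(userInput);
    }

    public static string ForJavaScript(string userInput)
    {
        return AntiXss.JavaScriptEncode(userInput);
    }
}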

Code Analysis Tool .NET (CAT.NET)

When your website relies on cookies, URL parameters or forms, it is open to attack. That’s because all three are very easy to forge and manipulate, by hackers and even by robots. With the CAT.NET add-in for Visual Studio you can easily analyze the places in your mark-up and code-behind that are vulnerable to those kinds of attacks. CAT.NET analyzes your code and tells you exactly what the problem is. It’s easy to use and understand, and it lets you build more secure websites.
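The snippet below is a made-up handler showing the kind of tainted data flow such a tool reports – a URL parameter flowing straight into the response – along with the encoded alternative:

using System.Web;
using Microsoft.Security.Application;

public class GreetingHandler : IHttpHandler
{
    public bool IsReusable
    {
        get { return true; }
    }

    public void ProcessRequest(HttpContext context)
    {
        string name = context.Request.QueryString["name"];

        // Vulnerable: the untrusted URL parameter is written straight to the response.
        // context.Response.Write("Hello " + name);

        // Safer: encode the value before it reaches the response.
        context.Response.Write("Hello " + AntiXss.HtmlEncode(name));
    }
}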

Just the other day I was digging into various Web 2.0 APIs to see what the possibilities were. You know, just kicking back and having fun geek style. I quickly gave up.

For some reason, both Facebook and LinkedIn protect certain information about your friends and contacts in the name of privacy. If you log into your Facebook account, you can see the e-mail addresses of your friends if they have provided one on their profile. You cannot retrieve that e-mail address through the API, and the same goes for the phone number. You can get pictures, gender, age etc., but not the e-mail address.

LinkedIn does expose the e-mail address, but not addresses, phone numbers or any other information except the name, title and organization. The reason I wanted the e-mail address is that it is a great key for pairing your Facebook friends with your LinkedIn contacts. Then I would be able to combine the information from the two networks into a more complete profile of each person.
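Had both APIs exposed the address, the pairing itself would be trivial. Here is a sketch with made-up Contact records, just to show why the e-mail address is such a convenient key:

using System.Collections.Generic;
using System.Linq;

public class Contact
{
    public string Name { get; set; }
    public string Email { get; set; }
    public string Organization { get; set; }
    public string PictureUrl { get; set; }
}

public static class ProfileMerger
{
    // Joins the two contact lists on the e-mail address and merges the
    // fields each network exposes into one record per person.
    public static IEnumerable<Contact> Merge(
        IEnumerable<Contact> facebookFriends,
        IEnumerable<Contact> linkedInContacts)
    {
        return from fb in facebookFriends
               join li in linkedInContacts
                   on fb.Email.ToLowerInvariant() equals li.Email.ToLowerInvariant()
               select new Contact
               {
                   Name = fb.Name,
                   Email = fb.Email,
                   Organization = li.Organization,
                   PictureUrl = fb.PictureUrl
               };
    }
}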

I know that people might be reluctant to share their e-mail address on Facebook, but apparently a lot of the same people have no issue sharing it on LinkedIn. It doesn’t make sense. And why do the LinkedIn vCards of your own contacts not contain information like country and zip code even though people have entered it? Why couldn’t they just leave it up to the individual user to decide whether this information is public? Privacy restrictions, that’s why, and probably a lawsuit waiting to happen.

Now, there is some sense in keeping sensitive information private, but why is it only off limits to the APIs and not when you’re logged in on the websites? In other words, people can get access but not machines. It might be that people build the mash-ups, but machines have to execute them, and that’s the problem.

It seems that the bigger the programmable web becomes, the bigger the issue of keeping information private becomes, thus limiting us from easily doing some really cool stuff. I guess we could always go back to screen scraping as long as it’s still possible, which by the way it is on both Facebook and LinkedIn – for now anyway – even though it is a clear violation of their terms of service.

So, I gave up my little venture, looked longingly at the moon from my window and dreamed of a world where privacy restrictions and lawsuits don’t conflict with my geeky nature.