I admit this is not a very efficient class for my purposes, but it was a quick and dirty fix for a personal project, so it didn't matter. The class presented here stores a string in a compressed byte array if the length of the string exceeds a threshold. I used it to solve an annoying XmlSerializer OutOfMemoryException when deserializing a very large XML file (400MB) into a list of objects. My objects had a Content property that stored the content of HTML pages, and it went completely overboard when loaded into memory. The class uses the System.IO.Compression.GZipStream class, introduced in .Net 2.0 (in newer framework versions you may have to add a reference to System.IO.Compression.dll). Enjoy!

// requires: using System.IO; using System.IO.Compression; using System.Text;
public class CompressedString
{
    private byte[] _content;
    private int _length;
    private bool _compressed;
    private int _maximumStringLength;

    public CompressedString() : this(0)
    {
    }

    public CompressedString(int maximumStringLengthBeforeCompress)
    {
        _length = 0;
        _maximumStringLength = maximumStringLengthBeforeCompress;
    }

    public string Value
    {
        get
        {
            if (_content == null) return null;
            if (!_compressed) return Encoding.UTF8.GetString(_content);
            using (var ms = new MemoryStream(_content))
            {
                using (var gz = new GZipStream(ms, CompressionMode.Decompress))
                {
                    using (var ms2 = new MemoryStream())
                    {
                        gz.CopyTo(ms2);
                        return Encoding.UTF8.GetString(ms2.ToArray());
                    }
                }
            }
        }
        set
        {
            if (value == null)
            {
                _content = null;
                _compressed = false;
                _length = 0;
                return;
            }
            _length = value.Length;
            var arr = Encoding.UTF8.GetBytes(value);
            if (_length <= _maximumStringLength)
            {
                _compressed = false;
                _content = arr;
                return;
            }
            using (var ms = new MemoryStream())
            {
                using (var gz = new GZipStream(ms, CompressionMode.Compress))
                {
                    gz.Write(arr, 0, arr.Length);
                    gz.Close(); // close to flush the compressed data before reading the buffer
                    _compressed = true;
                    _content = ms.ToArray();
                }
            }
        }
    }

    public int Length
    {
        get { return _length; }
    }
}
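To make the intended use concrete, here is a minimal usage sketch (my own, not from the original project); it assumes the CompressedString class above is compiled in the same project, and the threshold of 100 characters is chosen arbitrarily:

```csharp
using System;

class CompressedStringDemo
{
    static void Main()
    {
        // At or below the threshold: stored as plain UTF-8 bytes.
        var small = new CompressedString(100) { Value = "hello" };

        // Above the threshold: stored gzipped, decompressed on every read.
        var large = new CompressedString(100) { Value = new string('x', 100000) };

        Console.WriteLine(small.Value);        // hello
        Console.WriteLine(large.Value.Length); // 100000
        Console.WriteLine(large.Length);       // 100000, without decompressing
    }
}
```

Note that reading Value on a compressed instance decompresses every time, which is the inefficiency admitted above; the Length property is cheap because the character count is cached separately.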

An unlikely blog post from me, about graphics; and not just any kind of graphics, but GDI graphics. It involves something that may seem simple at first: rotating text inside a rectangular container so that it is readable when you turn the page to the left. It is useful for writing text in containers whose height is bigger than their width. This is not about writing vertically; that's another issue entirely.
So, the bird's eye view of the problem: I had to create a PDF that contains some generated images, a sort of chart with many colored rectangles that contain text. The issue is that some of them are a lot taller than they are wide, which means it is better to write the text rotated, in this case to -90 degrees, or 270 degrees if you prefer. To the left, as Beyoncé would put it.

I created the image using the Bitmap class, then got a Graphics instance from it, then started drawing things. It's trivial to draw a line, fill a rectangle or draw an arc. Just as easy is writing some text, using the DrawString method of the Graphics object. I half expected there to be a parameter that would allow me to write rotated text, but there wasn't one. How hard could it be?

Let's start with the basics. You want to draw a colored rectangle and write some text into it. This is achieved by:
var rectangle = new Rectangle(x, y, width, height); // reuse the dimensions
g.FillRectangle(new SolidBrush(Color.Blue), rectangle); // fill the rectangle with blue
g.DrawRectangle(new Pen(Color.Black), rectangle); // draw a black border
g.DrawString("This is my text", new Font("Verdana", 12, GraphicsUnit.Pixel), new SolidBrush(Color.Black), rectangle, new StringFormat
{
    LineAlignment = StringAlignment.Center,
    Alignment = StringAlignment.Center,
    Trimming = StringTrimming.None
}); // this draws a string in the middle of the rectangle, wrapping it as needed

All very neat. However, you might already notice some problems. One of them is that there is no way to "overflow" the container; if you have worked with HTML you know what I mean. If you use the overload that takes a rectangle as a parameter, the resulting text will NOT go over the edges of that rectangle. This is usually a good thing, but not always. Another issue that might have caught your eye is that there is no way to control how the text is wrapped. In fact, it will wrap the text in the middle of words, or clip it, in order to keep it inside the container. If you don't use the container overload, there is no wrapping at all. In other words, if you want custom wrapping you're going to have to go another way.
Enter TextRenderer, a class that is part of the Windows.Forms library. If you decide that linking to that library is acceptable, even in a web or console application, you will see that the parameters of the TextRenderer.DrawText method contain information about wrapping. I did that in my web application and indeed it worked. However, besides drawing the text really thick and ugly, it completely ignores text rotation, even though it has a specific option to not ignore translation transforms (PreserveGraphicsTranslateTransform).

But let's not get into that at this moment. Let's assume we like the DrawString wrapping of text, or we don't need it. What do we need to do in order to write at a 270 degree angle? Basically, you need two transformations: one translates and one rotates. I know it sounds like a bad cop joke, but it's not that complicated. The difficulty lies in understanding what to rotate and how.
Let's try the naive implementation, what everyone probably tried before going to almighty Google to find how it's really done:
// assume we already defined the rectangle and drew it
g.RotateTransform(-90);
g.DrawString("This is my text", new Font("Verdana", 12, GraphicsUnit.Pixel), new SolidBrush(Color.Black), rectangle, new StringFormat
{
    LineAlignment = StringAlignment.Center,
    Alignment = StringAlignment.Center,
    Trimming = StringTrimming.None
}); // and cross fingers
g.ResetTransform();
Of course it doesn't work. For one thing, the rotation transformation applies to the Graphics object and, in theory, the primitive drawing the text doesn't know what to rotate. Besides, how do you rotate it? Around a corner, around the center? The center of the text or of the container?
The trick with the rotation transformation is that it always rotates around the origin. Therefore we need the translate transformation to help us out. Here is where it gets tricky.

g.TranslateTransform(rectangle.X + rectangle.Width / 2, rectangle.Y + rectangle.Height / 2); // we define the center of rotation in the center of the container rectangle
g.RotateTransform(-90); // this doesn't change
var newRectangle = new Rectangle(-rectangle.Height / 2, -rectangle.Width / 2, rectangle.Height, rectangle.Width); // notice that width is switched with height
g.DrawString("This is my text", new Font("Verdana", 12, GraphicsUnit.Pixel), new SolidBrush(Color.Black), newRectangle, new StringFormat
{
    LineAlignment = StringAlignment.Center,
    Alignment = StringAlignment.Center,
    Trimming = StringTrimming.None
});
g.ResetTransform();
So what's the deal? First of all, we changed the origin of the entire Graphics object, which means we have to draw everything relative to it. Had we not rotated the text, the new rectangle would have had the same width and height, but its origin at 0,0.
But we want to rotate it, so we need to think of the original bounding rectangle relative to the new origin and rotated 270 degrees. That's what newRectangle is: the rotated original rectangle in which to confine the drawn string.

So this works, but how do you determine whether the text needs to be rotated, and at what size?
Here we have to use MeasureString, but it's not easy. It basically does the same thing as DrawString, only it returns a size rather than drawing anything. This means you cannot measure the actual text size: you will always get either the size of the text or, if the text is bigger, the size of the container rectangle. I created a method that attempts to find the maximum font size for normal text and for rotated text and returns the best result. I do that by using a slightly larger bounding rectangle and then going a size down when I find the result. But it wasn't pretty.

We have a real problem with the way Graphics wraps the text. A simple but incomplete solution is to use TextRenderer to measure and Graphics.DrawString to draw, but it's not exactly what we need. A complete solution would determine its own wrapping, work with multiple strings and draw (and rotate) them individually. One interesting question is what happens if we try to draw a string containing newlines. The answer: it does render the text line by line. We can use this to create our own wrapping instead of handling individual strings.

So here is the final solution, a helper class that adds a new DrawString method to Graphics that takes the string, the font name, the text color and the bounding rectangle and writes the text as large as possible, with the orientation most befitting.

using System;
using System.Collections.Generic;
using System.Drawing;
using System.Linq;
using System.Text.RegularExpressions;

namespace GraphicsTextRotation
{
    public static class GraphicsExtensions
    {
        public static void DrawString(this Graphics g, string text, string fontName, Rectangle rect, Color textColor, int minTextSize = 1)
        {
            var textInfo = getTextInfo(g, text, fontName, rect.Width, rect.Height); // get the largest possible font size and the necessary rotation and text wrapping
            if (textInfo.Size < minTextSize) return;
            g.TranslateTransform(rect.X + rect.Width / 2, rect.Y + rect.Height / 2); // translate for any rotation
            Rectangle newRect;
            if (textInfo.Rotation != 0) // a bit hackish, because we know the rotation is either 0 or -90
            {
                g.RotateTransform(textInfo.Rotation);
                newRect = new Rectangle(-rect.Height / 2, -rect.Width / 2, rect.Height, rect.Width); // switch height with width
            }
            else
            {
                newRect = new Rectangle(-rect.Width / 2, -rect.Height / 2, rect.Width, rect.Height);
            }
            g.DrawString(textInfo.Text, new Font(fontName, textInfo.Size, GraphicsUnit.Pixel), new SolidBrush(textColor), newRect, new StringFormat
            {
                Alignment = StringAlignment.Center,
                LineAlignment = StringAlignment.Center,
                Trimming = StringTrimming.None
            });
            g.ResetTransform();
        }

        private static TextInfo getTextInfo(Graphics g, string text, string fontName, int width, int height)
        {
            var arr = getStringWraps(text); // get all the symmetrical ways to split this string
            var result = new TextInfo();
            foreach (string s in arr) // for each of them find the largest size that fits in the provided dimensions
            {
                var nsize = 0;
                Font font;
                SizeF size;
                do
                {
                    nsize++;
                    font = new Font(fontName, nsize, GraphicsUnit.Pixel);
                    size = g.MeasureString(s, font);
                } while (size.Width <= width && size.Height <= height);
                nsize--;
                var rsize = 0;
                do
                {
                    rsize++;
                    font = new Font(fontName, rsize, GraphicsUnit.Pixel);
                    size = g.MeasureString(s, font); // measure the same wrapped candidate for the rotated case
                } while (size.Width <= height && size.Height <= width);
                rsize--;
                if (nsize > result.Size)
                {
                    result.Size = nsize;
                    result.Rotation = 0;
                    result.Text = s;
                }
                if (rsize > result.Size)
                {
                    result.Size = rsize;
                    result.Rotation = -90;
                    result.Text = s;
                }
            }
            return result;
        }

        private static List<string> getStringWraps(string text)
        {
            var result = new List<string>();
            result.Add(text); // add the original text
            var indexes = new List<int>();
            var match = Regex.Match(text, @"\b"); // find all word breaks
            while (match.Success)
            {
                indexes.Add(match.Index);
                match = match.NextMatch();
            }
            for (var i = 1; i < indexes.Count; i++)
            {
                var pos = 0;
                string segment;
                var list = new List<string>();
                for (var n = 1; n <= i; n++) // for all possible splits 1 to indexes.Count+1
                {
                    var limit = text.Length / (i + 1) * n;
                    var index = closest(indexes, limit); // find the most symmetrical split
                    segment = index <= pos
                        ? ""
                        : text.Substring(pos, index - pos);
                    if (!string.IsNullOrWhiteSpace(segment))
                    {
                        list.Add(segment);
                    }
                    pos = index;
                }
                segment = text.Substring(pos);
                if (!string.IsNullOrWhiteSpace(segment))
                {
                    list.Add(segment);
                }
                result.Add(string.Join("\r\n", list)); // add the split by new lines to the list of possibilities
            }
            return result;
        }

        private static int closest(List<int> indexes, int limit)
        {
            return indexes.OrderBy(i => Math.Abs(limit - i)).First();
        }

        private class TextInfo
        {
            public int Rotation { get; set; }
            public float Size { get; set; }
            public string Text { get; set; }
        }
    }
}

I hope you like it.

I was trying to figure out an issue with our product and, in order to understand what was going on, I copied a class from the project into a small sandbox project to see how it would work. Lo and behold, a problem occurred in one of the utility functions that would have made the entire feature unusable. Yet the feature worked fine, except the little detail I was working on. What was going on?

Let me show you the code first (simplified for your pleasure):
Dim al As New ArrayList
al.Add("A")
al.Add("B")
Dim result As String = String.Join(":", al.ToArray(GetType(String)))

What do you think the result will hold?

In our production site the result was "A:B". In my sandbox project the result was "System.String[]". It took me a little while to understand what was going on. You see, the sandbox project targeted .Net 4.0 while the production site still ran on 3.5. New in .Net 4.0 are overloads for the String.Join method, including one that receives a params array of objects. Since ArrayList.ToArray(Type type) returns Array no matter the type boxed inside, this is the overload that gets chosen. The entire array of strings is taken as a single parameter, stringified, and the result is what you saw.

Conclusion: be very careful with the types you send to methods. Even if Visual Basic automatically casts method parameters, you never know for sure which overload it will pick. And if you want to upgrade a VB project from .Net 2.0-3.5 to 4.0, be careful with the new overloads that have appeared.

We had a legacy import page in our application that took a very long time to perform its operation. The user was thus faced with a long-loading empty page and no feedback. We wanted to show the user the progress of the import without fundamentally changing the page. Of course, the best solution would have been to make the import an asynchronous background operation and then periodically get the status from the server via Ajax calls but, limited by the requirement to not change the page, we came up with another solution: we would send bits of JavaScript while the import went on.

An attempt was made, but it didn't work. All the scripts were loaded and executed at once: the user would still see an empty page, then a progress bar that immediately jumps to 100%. Strange, since we knew that in certain circumstances scripts are executed as they are loaded. The answer was that browsers buffer a minimal chunk of the page before interpreting it, about 1024 characters. The solution, then, was to send 1024 spaces before starting to send the progress. This value of 1024 is not really documented or standard; it is a browser implementation detail.

Our design had the page loaded in an iframe, which allowed for scripts and html to not be loaded in the import page (thus making us stumble upon this behavior), and allowed for them to be loaded in the parent page. The scripts that we sent through the ASP.Net pipeline (using Response.Write and Response.Flush) accessed the resources from the parent page and showed a nice progress bar.

Had the page been a simple ASP.Net page, the HTML and the CSS would have had to be sent first, perhaps instead of the 1024 spaces. There would have been problems when the import finished and the page's own output followed the content sent via the pipeline, but in our specific scenario mere spaces and script blocks did not change the way browsers interpreted the rest of the page output.

A secondary side effect of this change was that we prevented the closing of the connection by some types of routers that need HTTP connections to have some traffic sent through them in an interval of time, providing a sort of "keep-alive". Before we made this change, these routers would simply cut the connection, leaving the user hanging.
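The mechanics described above can be sketched with a TextWriter standing in for the ASP.Net Response object (Response.Write and Response.Flush in the real page). The helper name and the parent-page updateProgress function are my own inventions for illustration, and the 1024 figure is the empirical value from the post, not a standard:

```csharp
using System.IO;

// Simulates the padding-then-stream trick; in the real page you would
// call Response.Write/Response.Flush on the HttpResponse instead.
static class ProgressStreamer
{
    public static void StreamProgress(TextWriter output, int steps)
    {
        // Push past the browser's initial buffer (observed to be about 1KB)
        // so that subsequent script blocks execute as they arrive.
        output.Write(new string(' ', 1024));
        output.Flush();

        for (var i = 1; i <= steps; i++)
        {
            // a slice of the long-running import would run here
            output.Write("<script>parent.updateProgress(" + (100 * i / steps) + ");</script>");
            output.Flush();
        }
    }
}
```

Each flushed script block calls into the parent page (the iframe design from the post), which updates the progress bar as the import advances.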

Yesterday I wanted to upgrade the NUnit testing framework we use in our project to the latest stable version. We used 2.5.10 and it had reached 2.6.0. I simply removed the old version and replaced it with the new. Some of the tests failed.

Investigating revealed that all the failing tests had something in common: they checked that two collections are not equal (meaning not the same instance) and that the collections are not equivalent (meaning no item in one collection is found in the other), yet that the values in the items are the same. Practically, it was a test that checked whether a cloning operation was successful. And it failed because, from this version on, the two collections were considered both Equal and Equivalent.

That is at least strange and so I searched the release notes for some information about this and found this passage: EqualConstraint now recognizes and uses IEquatable<T> if it is implemented on either the actual or the expected value. The interface is used in preference to any override of Object.Equals(), so long as the other argument is of Type T. Note that this applies to all equality tests performed by NUnit.

Indeed, checking the failing tests I realized that the collections contained IEquatable types.
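A minimal illustration of the mechanism, with a made-up type standing in for my collection items: once the element type implements IEquatable&lt;T&gt;, two distinct instances with the same values compare as equal, which is what the release note says EqualConstraint now prefers:

```csharp
using System;

class Item : IEquatable<Item>
{
    public int Id { get; set; }

    // Value-based equality: per the NUnit 2.6 release notes, this is used
    // in preference to any Object.Equals override when comparing items.
    public bool Equals(Item other)
    {
        return other != null && other.Id == Id;
    }

    public override bool Equals(object obj) { return Equals(obj as Item); }
    public override int GetHashCode() { return Id; }
}

class Program
{
    static void Main()
    {
        var original = new Item { Id = 1 };
        var clone = new Item { Id = 1 }; // a clone: different instance, same values
        Console.WriteLine(ReferenceEquals(original, clone)); // False
        Console.WriteLine(original.Equals(clone));           // True
    }
}
```

So a clone test asserting the collections are "not equal" passes under reference semantics but fails once value semantics kick in.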

I found a bit of code today that tested whether a bunch of strings were found in another string. It used IndexOf for each of them and kept searching when one was not found. The code, a long list of ifs and elses, looked terrible. So I thought I would refactor it to use regular expressions. I created a big Regex object, using the "|" regular expression operator, and tested for speed.

(Actually, I took the code, encapsulated it into a method that then went into a new object, then created automated unit tests for that object, and only then did I proceed to write new code. I am very smug, because usually I don't do that. :) )

After the tests said the new code was good, I created a new test to compare the performance. It is always good to have a metric to justify the work you have been doing. The old code ran in about 3 seconds. The new code took 10! I was flabbergasted. Not only could I not understand how several scans of the same string could be faster than a single one, but I am the one who wrote the article saying IndexOf is slower than Regex search (at least it was so in the .Net 2.0 days; I could not replicate the results in .Net 4.0). It was like a slap in the face, really.

I proceeded to change the method, now having a way to measure performance, until I finally figured out what was going on. The original code was first transforming the text to lowercase, then doing IndexOf. It was not even using IndexOf with StringComparison.OrdinalIgnoreCase, which was, of course, a "pfff" moment for me. My new method was, naturally, using RegexOptions.IgnoreCase. No way would this option slow things down. But it did!

You see, when you search for two strings separated by the "|" regular expression operator, internally a tree of states is created. Say you are searching for "abc|abd": it will search once for "a", then once for "b", then check the next character for "c" or "d". If any of these conditions fail, the match fails. However, if you do a case-insensitive match, there are at least two comparisons per character. Even so, I expected only a doubling of the processing time, not the whopping fivefold decrease in speed!

So I did the humble thing: I transformed the string into lowercase, then did a normal regex match. And the whole thing went from 10 seconds to under 3. I am yet to understand why this happens, but be careful when using the case ignorant option in regular expressions in .Net.
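For reference, the two variants side by side (the pattern and inputs are made up; the relative timings are from my measurements above, so benchmark on your own data before committing either way):

```csharp
using System.Text.RegularExpressions;

// Functionally equivalent case-insensitive searches; in my tests
// (on the .NET version of the time) the pre-lowercased variant was
// several times faster than RegexOptions.IgnoreCase.
static class CaseInsensitiveSearch
{
    static readonly Regex IgnoreCasePattern =
        new Regex("abc|abd", RegexOptions.IgnoreCase | RegexOptions.Compiled);

    static readonly Regex LowercasePattern =
        new Regex("abc|abd", RegexOptions.Compiled); // pattern is already lowercase

    public static bool MatchWithIgnoreCase(string text)
    {
        return IgnoreCasePattern.IsMatch(text);
    }

    public static bool MatchWithLowering(string text)
    {
        // lowercase once, then do a case-sensitive match
        return LowercasePattern.IsMatch(text.ToLowerInvariant());
    }
}
```

Note the lowercase trick only works when the pattern itself contains no uppercase characters, and culture-specific casing (the Turkish "i" being the classic trap) can make the two variants disagree; ToLowerInvariant avoids at least the culture problem.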

A short post about an exception I've met today: System.InvalidOperationException: There was an error reflecting 'SomeClassName'. ---> System.InvalidOperationException: SomeStaticClassName cannot be serialized. Static types cannot be used as parameters or return types.

Obviously one cannot serialize a static class, but I wasn't trying to. There was an asmx service method returning an Enum, but the enum was nested in the static class. Something like this:
public static class Common {

    public enum MyEnumeration {
        Item1,
        Item2
    }

}

Therefore, take this as a warning. Even though the code compiles with the class set to static, serialization may fail at runtime because of the nested types.
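The fix I used can be sketched like this: move the enumeration out to namespace scope, so the serializer never has to reflect over a static type (the names below mirror the example above):

```csharp
// The enumeration now lives at namespace scope, so XmlSerializer can
// reflect over it without ever touching the static class.
public enum MyEnumeration
{
    Item1,
    Item2
}

public static class Common
{
    // static helpers can still reference MyEnumeration normally
    public static readonly MyEnumeration Default = MyEnumeration.Item1;
}
```

The asmx method signature stays the same; only the fully qualified name of the enum changes, which may matter if clients were generated against the old WSDL.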

It's a horribly old bug, reported on their page since 2007 and sitting in the HtmlAgilityPack issue list since 2011. You want to parse a string as an HTML document and then get it back as a string from the DOM that the pack generates, and it closes the form tag as if it had no children.
Example: <form></form> gets transformed into <form/></form>

The problem lies in the HtmlNode class of the HtmlAgilityPack project. It defines the form tag as empty in this line:
ElementsFlags.Add("form", HtmlElementFlag.CanOverlap | HtmlElementFlag.Empty);
One can download the sources and remove the Empty value in order to fix the problem or, without changing the sources of the pack, use a workaround:
HtmlNode.ElementsFlags["form"]=HtmlElementFlag.CanOverlap;
Be careful, though: the ElementsFlags dictionary is a static property, so this change applies to the entire application.

I had a pretty strange bug to fix. It involved a class used in a web page that provided localized strings, only it didn't seem to work for Japanese, while French, English and German worked OK. The resource class was used like this: AdminUIString.SomeStringKey, where AdminUIString was a resx file in the App_GlobalResources folder. Other similar global resource resx classes were used in the same code and they worked! The only difference between them was the custom tool configured for them. My problem class was using PublicResXFileCodeGenerator with the Resources namespace, while the other classes used GlobalResourceProxyGenerator, without any namespace.

Now, changing the custom tool did solve the issue there, but it didn't solve it in some integration tests, where it still failed. The workaround was to use HttpContext.GetGlobalResourceObject("AdminUIString", "SomeStringKey").ToString(), which is pretty ugly. Since our project was pretty complex, using bits of ASP.Net MVC and (very) old school ASP.Net, no one actually understood where the difference lay. Here is an article that partially explains it: Resource Files and ASP.NET MVC Projects. I say partially, because it doesn't really solve my problem in a satisfactory way. All it says is that I should not use global resources in ASP.Net MVC; it doesn't explain why it fails so miserably for Japanese, nor does it offer a magical fix without refactoring the convoluted resource mess in this legacy project. It will have to do, though, as no one is budgeting refactoring time right now.

Update:
I've pinpointed the issue after a few more investigations. The https:// site was returning a security certificate issued for another domain. Why it worked in Firefox anyway, and why it didn't work in Chrome but then worked after an unauthorized call first, I still don't know, but that is in the domain of browser internals.

I was trying to access an API on https:// from a page that was hosted on http://. Since the scheme of the call was different from the scheme of the hosted URL, it is interpreted as a cross domain call. You might want to research this concept, called CORS, in order to understand the rest of the post.

The thing is that it didn't work. The server was correctly configured to allow cross domain access, but my jQuery calls did not succeed. In order to access the API I needed to send an Authorization header, as well as request the information as JSON. Investigations on the actual browser calls showed the correct OPTIONS request method, as well as the expected headers, only they appeared as 'Aborted'. It took me a few hours of taking things apart, debugging jQuery, adding and removing options to suddenly see it work! The problem was that after resetting IIS, the problem appeared again! What was going on?

In the end I identified a way to consistently reproduce the problem, even if at the moment I have no explanation for it. The calls succeed after making a call with no headers (including the Content-Type one). So make a bogus, unauthorized call first and the subsequent correct calls will work. Somehow this depends on IIS as well as on the Chrome browser. In Firefox it works directly; in Chrome the behavior is consistently reproducible.

I had to investigate a situation where a message saying "Object moved to here", where "here" was a link, appeared in our ASP.Net application. First of all, we don't have that message in the app; it turns out it is an internal message in ASP.Net, more exactly in HttpResponse.Redirect. It is hardcoded HTML that is output while the response status code is set to 302 and the redirect location is set to the given URL. The browser is expected to move to the redirect location anyway, so the displayed message should only be a temporary thing. However, if the URL is empty, the browser does not go anywhere.

In conclusion, if you get to a webpage that has the following content:
<html><head><title>Object moved</title></head><body>
<h2>Object moved to <a href="[url]">here</a>.</h2>
</body></html>
then you are probably trying to Response.Redirect to an empty URL.
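A small guard can fail fast instead of serving the dead page. This is my own sketch, meant to be called as Response.Redirect(RedirectGuard.RequireUrl(targetUrl)); the class and method names are invented:

```csharp
using System;

// Validates a redirect target before it reaches Response.Redirect, so a
// misread configuration value produces a clear exception instead of the
// "Object moved" page with a dead link.
static class RedirectGuard
{
    public static string RequireUrl(string url)
    {
        if (string.IsNullOrWhiteSpace(url))
            throw new ArgumentException(
                "Redirect URL is empty; the user would be stuck on the 'Object moved' page.");
        return url;
    }
}
```

The exception points you at the real bug, which is wherever the empty URL was read from.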

A Microsoft patch for ASP.Net released on the 29th of December 2011 adds a new functionality that rejects POST http requests with more than 1000 keys and any JSON http request with more than 1000 members. That is pretty huge, and if you have encountered this exception:
Operation is not valid due to the current state of the object.

Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.

Exception Details: System.InvalidOperationException: Operation is not valid due to the current state of the object.

Source Error:
An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below.

Stack Trace:
[InvalidOperationException: Operation is not valid due to the current state of the object.]
System.Web.HttpValueCollection.ThrowIfMaxHttpCollectionKeysExceeded() +2692302
System.Web.HttpValueCollection.FillFromEncodedBytes(Byte[] bytes, Encoding encoding) +61
System.Web.HttpRequest.FillInFormCollection() +148

[HttpException (0x80004005): The URL-encoded form data is not valid.]
System.Web.HttpRequest.FillInFormCollection() +206
System.Web.HttpRequest.get_Form() +68
System.Web.HttpRequest.get_HasForm() +8735447
System.Web.UI.Page.GetCollectionBasedOnMethod(Boolean dontReturnNull) +97
System.Web.UI.Page.DeterminePostBackMode() +63
System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) +133


then your site has been affected by this patch.

Well, you probably know that something is wrong with the design of a page that sends 1000 POST values, but still, let's assume you are in a situation where you cannot change the design of the application and you just want the site to work. Never fear, use this:

<configuration xmlns="http://schemas.microsoft.com/.NetConfiguration/v2.0">
  <appSettings>
    <add key="aspnet:MaxHttpCollectionKeys" value="5000" />
    <add key="aspnet:MaxJsonDeserializerMembers" value="5000" />
  </appSettings>
</configuration>


More details:
Knowledge base article about it
The security advisor for the vulnerability fixed
The entire MS11-100 security update bulletin

Update February 2016: Tested it on Visual Studio 2015, with the Roslyn compiler, and the problem seems to have vanished.

Here is the now obsolete post:

A class in .Net can have optional parameters, with default values specified in the constructor signature, like this:
public MyClass(int p1, int p2 = 0) {}

If the class is inheriting from the Attribute class, then one can also specify property values when using it to decorate something, like this:
public class MyTestAttribute : Attribute {
    public int P3 { get; set; }
}

[MyTest(P3 = 2)]
public class MyClass {}

What do you think this code would do?
public class MyTestAttribute : Attribute {
    public MyTestAttribute(int p1, int p2 = 0) {}
    public int P3 { get; set; }
}

[MyTest(1, P3 = 2)]
public class MyClass {}


Well, I'll tell you what happens. Visual Studio and ReSharper both see no problem with the syntax, but the compiler issues "error CS0182: An attribute argument must be a constant expression, typeof expression or array creation expression of an attribute parameter type", without specifying any file or line.

My guess is that it tries to interpret P3=2 as an expression to be evaluated and passed as the second argument of the constructor. What I expected was for the second constructor parameter to take its default value and then for the property P3 to be set. The vagueness of the error points to a possible bug.
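A workaround I would expect to compile, on the older (pre-Roslyn) compilers where the error occurs, is simply to pass the optional parameter explicitly, so nothing is left for the compiler to fill in (a sketch, repeating the example types from above):

```csharp
using System;

public class MyTestAttribute : Attribute
{
    public MyTestAttribute(int p1, int p2 = 0) { }
    public int P3 { get; set; }
}

// Passing the optional parameter explicitly sidesteps the CS0182 error,
// at the cost of repeating the default value at every usage site.
[MyTest(1, 0, P3 = 2)]
public class MyClass { }
```

The trade-off is obvious: if the default ever changes in the constructor, every attribute usage has to be updated by hand.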

I've had a horrible week. It all started with a good Scrum sprint (or so I thought) followed by a period of quiet in which I could concentrate on my own ideas. And one of my ideas was to optimize the structure of the solution we work on, containing 48 projects, in order to save space and compilation time. In my eyes, I was a hero, considering that for a company with tens to hundreds of devs, even a one second increase in speed would be important. So, I set up doing that.

Of course, the sprint was not as good as I had imagined. A single stored procedure led to not less than four bugs in production, with me being to blame for them all. People lost more time working on reproducing the bugs, deploying the fix, code reviewing, etc. At long last I thought I was done with it and I could show everyone how great the solution looked now (on my computer) and atone for my sins.

So from a solution that spanned 700MB clean and 4GB after compilation, I managed to get it down to a maximum of 1.4GB. In fact, it was so small I could put it all in a RAM disk, leading to enormous speeds. In comparison, a normal drive reads at about 30MB per second and an SSD drive (without encryption) at about 250MB/s, while my RAM disk was running at a whopping 3.6GB/s. That sped up the compilation and the parsing of files. Moreover, I had discovered that MsBuild has this /m parameter that makes it use more processors. A compilation would take about 40 seconds, down from two and a half minutes. Great! Alas, it was not to be so easy.

First of all, the steps I was considering were simple:
  • Take all projects and make them use a single output folder. That would decrease the size of the solution, since there would be no copies of the .dll files, and the sheer speed of the compilation would increase, since there would be less copying and less compilation.
  • More importantly, I was considering making a symlink to a RAM drive and using it instead of the destination folder.
  • Another step I was considering was making all references to the dll files in the output folder, not to the projects, allowing for projects to be opened independently.


At first I was amazed that the solution decreased in size so much, and I just placed the entirety of it on a RAM drive. This fixed some of the issues with Visual Studio, because when I was selecting a file through a symlink to add as a reference, it would resolve to the target folder instead of the name of the symlink. And it wasn't easy either. Imagine removing all project references and replacing them with dll references for 48 projects. It took forever.

Finally I had the glorious compilation. Speed, power, size, no warnings either (since I also worked on that) and a few bug fixes thrown in there for good measure. I was a god! Then the problems appeared.

Problem 1: I had finished the previous sprint with a buggy stored procedure committed to production. Clients were losing money and complaining. That put a serious dent in my pride, especially since the problems ranged from carelessness in how I wrote the code to a downright lack of knowledge of the application's flow. For the latter I am not really the only one to blame, but it was my responsibility.

Problem 2: The application was throwing errors about the target framework of a dll. It was enough to make me understand a major flaw in my design: there were .NET 3.5 and .NET 4.0 assemblies in the solution, and placing them all in the same output folder would break some build scripts. Even worse, the 8 web projects in the solution needed to have their output in their bin folders, so that IIS would find them. I fixed it, only to see the size of the solution rise back to 3GB.
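One way to keep the 3.5 and 4.0 assemblies from landing in the same folder would have been to split the shared output path by target framework inside each .csproj. This is a sketch, not what the project actually did; the folder layout is an assumption:

```xml
<!-- Illustrative .csproj fragment: route build output into a subfolder
     named after the target framework (e.g. Output\v3.5\, Output\v4.0\),
     so assemblies built against different runtimes never mix. Web
     projects would be excluded and keep their default bin\ folder,
     since IIS looks for assemblies there. -->
<PropertyGroup>
  <OutputPath>..\Output\$(TargetFrameworkVersion)\</OutputPath>
</PropertyGroup>
```

The price, of course, is that any shared output folder scheme like this reintroduces copies of common dlls, one per framework subfolder.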

Problem 3: Visual Studio was not smart enough to understand that, if a project is loaded, going to the declaration of a member in the compiled assembly means I want to see the actual source, not the IL code. Well, sometimes it worked, but sometimes it didn't. As a result I restored the project references instead of the assembly references.

Problem 4: the MSBuild /m flag did wonders on my machine, but it did not do much on the build server. Nor did it work its magic on slower computers with fewer processors than my own.

Problem 5: Facing a flood of problems coming from me, my colleagues lost faith and decided to not even try the modifications that removed the compilation warnings from the solution.

Conclusion: The build went marginally faster, but not enough to justify a whole week of work on it. The size decreased by 25%, making it feasible to put it all on a RAM drive; that was great, but to the detriment of working memory. I still have to see whether that is a good or a bad thing. The multiprocessor hacks didn't do much, the warnings are still there, and even some of my bug fixes were problematic because someone else had also worked on them and didn't tell anyone. All in a week's work.

Things I have learned from all this: Baby steps. When I feel enthusiasm, I must take it as a sign of trouble. I must be dispassionate as an ice cube and think things through. If I am working on a branch, integrate the trunk into it every day, so as not to make it harder to do at the end. When doing something, do it from start to finish, no matter what horrors I see while doing it. Move away from Sodom and do not look back at it. Someone else will fix that, maybe; you just do your task well. When finishing something, commit it into source control so it can easily be reverted in a single atomic operation.

It is difficult for me to adjust to something that involves this amount of planning and focus. I feel as if the chaotic development years of my youth were somewhat better, even if at the time I felt that it was stupid and that focus and planning were needed. As a good Romanian, I am neurotic enough to see the worst side of everything, a master at complaining about it, but incapable of actually doing something. Yeah... this was a bad week.

and has 0 comments
A question arose at the office today: which is faster? Using a Dictionary&lt;string,object&gt; with StringComparer.OrdinalIgnoreCase as a constructor parameter, or using the default constructor and calling ToLower on the key before using it? The quick answer: calling ToLower on the key.

The longer answer is that StringComparer.OrdinalIgnoreCase implements IEqualityComparer&lt;string&gt; using a native code function for GetHashCode(), which is very efficient. Unfortunately, it must perform a case insensitive comparison between the input key and the stored keys on every lookup, while calling ToLower on the keys before using them pays the normalization cost only once per key and lets the dictionary use plain ordinal comparisons afterwards.
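Both approaches are functionally equivalent for case-insensitive lookups; here is a minimal sketch of the two, with the key names being just examples:

```csharp
using System;
using System.Collections.Generic;

class Program
{
    static void Main()
    {
        // Approach 1: let the dictionary itself compare keys
        // case-insensitively via the comparer passed to the constructor.
        var insensitive = new Dictionary<string, object>(StringComparer.OrdinalIgnoreCase);
        insensitive["Content"] = 1;
        Console.WriteLine(insensitive.ContainsKey("CONTENT")); // True

        // Approach 2: normalize the key once with ToLower, then use the
        // default (ordinal, case-sensitive) comparer for every lookup.
        var lowered = new Dictionary<string, object>();
        lowered["Content".ToLower()] = 1;
        Console.WriteLine(lowered.ContainsKey("CONTENT".ToLower())); // True
    }
}
```

Whether the ToLower version actually wins depends on how often each key is looked up versus inserted, so it is worth measuring on your own workload rather than taking either answer on faith.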