I named this post so because I started researching something that a chat user asked me: how do you add UpdatePanels programmatically to a page? You see, the actual problem was that he couldn't add controls to the UpdatePanel after adding it to the page, and that was because the UpdatePanel is a templated control, in other words it contains one or more objects that implement ITemplate, and all of the control's children are part of these templates.

So, the required application is this: a page with a button that does nothing but a regular postback and another button that adds an UpdatePanel. Each UpdatePanel must contain a textbox and a button. When the button is pressed, the textbox must fill with the current time, but only in that particular UpdatePanel. If the regular postback button is pressed, the UpdatePanels must remain on the page.

What are the possible issues?
First of all, the UpdatePanels must survive postbacks. That means you have to actually create them every time the page loads, therefore inside Page_Load. Note: we could add them in Page_Init, and in fact that's where they are added when the controls come from the aspx file of a page, but during Init the ViewState is not yet accessible!
Then there is the adding of the UpdatePanels. It is done in the Click event of a button, which fires AFTER Page_Load, therefore the adding of a new UpdatePanel must also be done there. Note: we could put the CreatePanels method in Page_LoadComplete, but then the controls in the update panels would not respond to any events, since the Load phase would already be complete.
There is the matter of how we add the TextBox and the Button in each UpdatePanel. The most elegant solution is to use a Web User Control. This way one can visually control the content and layout of each UpdatePanel and also (most important for our application) add code to it!
Now there is the matter of the ITemplate object that each UpdatePanel must have as a ContentTemplate. This is done via the Page.LoadTemplate method! You give it the virtual path to the ascx file and it returns an ITemplate. It's that easy!

Update: if you by any chance want to add controls programmatically to the UpdatePanel, use the ContentTemplateContainer property of the UpdatePanel like this:
updatePanel.ContentTemplateContainer.Controls.Add(new TextBox());


Enough chit-chat. Here is the complete code for the application, the DynamicUpdatePanels page and the ucUpdatePanelTemplate web user control:
DynamicUpdatePanels.aspx.cs
using System;
using System.Web.UI;

public partial class DynamicUpdatePanels : Page
{
    private int? _nrPanels;

    // The number of dynamically added panels, persisted in ViewState
    // so that it survives postbacks.
    public int NrPanels
    {
        get
        {
            if (_nrPanels == null)
            {
                if (ViewState["NrPanels"] == null)
                    NrPanels = 0;
                else
                    NrPanels = (int)ViewState["NrPanels"];
            }
            return _nrPanels.Value;
        }
        set
        {
            _nrPanels = value;
            ViewState["NrPanels"] = value;
        }
    }

    protected void Page_Load(object sender, EventArgs e)
    {
        // Recreate the previously added panels on every load.
        CreatePanels();
    }

    private void CreatePanels()
    {
        for (int i = 0; i < NrPanels; i++)
        {
            AddPanel();
        }
    }

    private void AddPanel()
    {
        UpdatePanel up = new UpdatePanel();
        up.UpdateMode = UpdatePanelUpdateMode.Conditional;
        // LoadTemplate turns the user control into the ITemplate we need.
        up.ContentTemplate = Page.LoadTemplate("~/ucUpdatePanelTemplate.ascx");
        pnlTest.Controls.Add(up);
    }

    protected void btnAdd_Click(object sender, EventArgs e)
    {
        // Click events fire after Page_Load, so the new panel is added here.
        NrPanels++;
        AddPanel();
    }
}


DynamicUpdatePanels.aspx
<%@ Page Language="C#" AutoEventWireup="true" CodeFile="DynamicUpdatePanels.aspx.cs"
    Inherits="DynamicUpdatePanels" %>

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head runat="server">
    <title>Untitled Page</title>
</head>
<body>
    <form id="form1" runat="server">
        <asp:ScriptManager ID="ScriptManager1" runat="server" />
        <asp:Panel ID="pnlTest" runat="server">
        </asp:Panel>
        <asp:Button ID="btnAdd" runat="server" Text="Add Panel" OnClick="btnAdd_Click" />
        <asp:Button ID="btnPostBack" runat="server" Text="Postback" />
    </form>
</body>
</html>


ucUpdatePanelTemplate.ascx.cs
using System;
using System.Web.UI;

public partial class ucUpdatePanelTemplate : UserControl
{
    protected void btnAjax_Click(object sender, EventArgs e)
    {
        tbSomething.Text = DateTime.Now.ToString();
    }
}


ucUpdatePanelTemplate.ascx
<%@ Control Language="C#" AutoEventWireup="true" CodeFile="ucUpdatePanelTemplate.ascx.cs"
Inherits="ucUpdatePanelTemplate" %>

<asp:TextBox ID="tbSomething" runat="server"></asp:TextBox>
<asp:Button ID="btnAjax" runat="server" OnClick="btnAjax_Click" />


That's it, folks!

Sometimes you need a piece of information like the time it takes for a web page to actually reach the client. It is different from the time it takes to create the rendered content, as it includes some web server overhead and the actual network transfer. IIS doesn't know anything about your code, so you can't tell it to log everything you need. So how do you synchronize the information in the IIS log with the one in your own logging system?

Use the Response.AppendToLog method, which adds a custom string to the end of the IIS logged cs-uri-query field. That doesn't help much by itself, but since you can add any string you want, you can add a key that lets you correlate the two logs.

Quick example:
string key = Guid.NewGuid().ToString();
Response.AppendToLog(" key=[" + key + "]");
MyLogger.Write(myInformation, key);


Now you only have to Regex the cs-uri-query field to find the key, then search for the corresponding line in your own log, as in the sketch below. Simple! Sort of...
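For instance, pulling the key back out of a logged cs-uri-query value might look like this (the sample value and the helper class are assumptions for illustration):

using System;
using System.Text.RegularExpressions;

class LogKeyDemo
{
    static void Main()
    {
        string csUriQuery = "id=5 key=[9f8b0f7e-3c63-4f06-9a4e-8e1d54c2b111]";
        Match m = Regex.Match(csUriQuery, @"key=\[(?<key>[^\]]+)\]");
        if (m.Success)
        {
            // Search for this key in your own log to find the matching line.
            Console.WriteLine(m.Groups["key"].Value);
        }
    }
}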

We were working on a project for a company that suddenly started complaining about slow ASP.Net pages. I optimised what I could, but it seemed to me that it ran pretty fast. Then I found out that some of the customers used a slow Internet connection. The only way to test this was to simulate a slow connection.

But how can one do that on IIS 5.1, the Windows XP web server? After a while of searching I realised it was the wrong question. I didn't need this for other projects, and if I did, I certainly wouldn't want to slow the entire web server to check it out. Because yes, changing the metabase of the server can, supposedly, cap the speed at which pages are delivered. But it was simply too much hassle and it wasn't a reusable solution.

My way was to create a Filter for the Response of all pages. Response.Filter is a Stream that receives the previous Response.Filter as a constructor parameter (at the very start that is Response.OutputStream) and does something to the output of the page. So I created a BandwidthThrottleFilter object and added it in the MasterPage Page_Load:
Response.Filter = new BandwidthThrottleFilter(Response.Filter, 10000);
It worked.

Now for the code. Follow these steps:
  1. Create a BandwidthThrottleFilter class that inherits from the abstract class Stream
  2. Add a constructor that receives as parameters a Stream and an integer
  3. Add fields that get instantiated from these two parameters
  4. Implement all the abstract methods of Stream by delegating to the Stream field
  5. Change the Write method to also call a Delay method that receives the count parameter of the Write method

That's it. You only need to write the Delay method, which does a Thread.Sleep for the time it should normally take to transfer that number of bytes at the simulated speed; a sketch of such a class follows. Of course, this assumes the actual transfer time is negligible by comparison.
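A minimal sketch, assuming the integer parameter means bytes per second (the field names and the delay math are my assumptions, not the original class):

using System;
using System.IO;
using System.Threading;

public class BandwidthThrottleFilter : Stream
{
    private readonly Stream _innerStream;  // the previous Response.Filter
    private readonly int _bytesPerSecond;  // the simulated connection speed

    public BandwidthThrottleFilter(Stream innerStream, int bytesPerSecond)
    {
        _innerStream = innerStream;
        _bytesPerSecond = bytesPerSecond;
    }

    public override void Write(byte[] buffer, int offset, int count)
    {
        // Sleep for as long as count bytes should take, then pass them on.
        Delay(count);
        _innerStream.Write(buffer, offset, count);
    }

    private void Delay(int count)
    {
        Thread.Sleep((int)(1000L * count / _bytesPerSecond));
    }

    // All the other abstract members simply delegate to the inner stream.
    public override bool CanRead { get { return _innerStream.CanRead; } }
    public override bool CanSeek { get { return _innerStream.CanSeek; } }
    public override bool CanWrite { get { return _innerStream.CanWrite; } }
    public override long Length { get { return _innerStream.Length; } }
    public override long Position
    {
        get { return _innerStream.Position; }
        set { _innerStream.Position = value; }
    }
    public override void Flush() { _innerStream.Flush(); }
    public override int Read(byte[] buffer, int offset, int count)
    {
        return _innerStream.Read(buffer, offset, count);
    }
    public override long Seek(long offset, SeekOrigin origin)
    {
        return _innerStream.Seek(offset, origin);
    }
    public override void SetLength(long value) { _innerStream.SetLength(value); }
}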


When one wants to clearly indicate that a control is to perform an asynchronous or a synchronous postback, one should use the Triggers collection of the UpdatePanel. Of course, I am assuming you have an ASP.Net Ajax application and you are stuck on how to indicate the same thing for controls that are inside templated controls like DataGrid, DataList, GridView, etc.

The solution is to get a reference to the page ScriptManager, then use its RegisterPostBackControl method on your postback control. You get the reference with the static ScriptManager.GetCurrent(Page) method, and you get the control you need inside the templated control's Item/RowCreated event with e.Item/Row.FindControl("postbackControlID").

So, the end result is:

ScriptManager sm = ScriptManager.GetCurrent(Page);
Control ctl = e.Item.FindControl("MyControl"); // or e.Row, for a GridView
sm.RegisterPostBackControl(ctl);


Of course, if you want it the other way around (set the controls as Ajax async postback triggers) use the RegisterAsyncPostBackControl method instead.
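Wired into a GridView, for instance, it might look like this (a sketch; the grid and control IDs are assumptions):

protected void GridView1_RowCreated(object sender, GridViewRowEventArgs e)
{
    ScriptManager sm = ScriptManager.GetCurrent(Page);
    Control ctl = e.Row.FindControl("MyControl");
    if (sm != null && ctl != null)
    {
        // MyControl now performs a regular postback instead of an async one.
        sm.RegisterPostBackControl(ctl);
    }
}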

Special thanks to Sim Singh from India for asking me to research this.

I was reading this post where Jeff Atwood complained about too many shiny tools that only waste our time, and of which there are so many that the whole shiny thing gets old.

Of course, I went to all the links for tools in the post that I could find, and then some. I will probably finish reading it after I try them all :)

Here are my refinements of the lists I've accessed, with .NET programming and free tools in mind:
  • Nregex.com - a nice site that tests your regular expressions online and lets you explore the results. Unfortunately it has no profiling, or at least a display of how long it took to match your text
  • PowerShell - Great tool once you get to know it. It comes complete with blog, SDK and Community Extensions
  • PowerTab - adds Tab expansion in PowerShell
  • Lutz Roeder's Reflector - the .NET decompiler and its many add-ons
  • Highlight - a tool to format and colorize source code for any flavour of operating system and output file format.


There are a lot more, but I am lazy and don't find a use for many of them; you might, though. Here is Scott Hanselman's list of developer tools, from which, I am quite amazed, he excluded ReSharper, my favourite Visual Studio add-on.

Warning: this is going to be one long and messy article. I will also update it from time to time, since it contains work in progress.

Update: I've managed to uncover something new, called lookbehinds! They try to match the text behind the current position of the regex engine. Using lookbehinds, one might construct a regular expression that only matches up to a certain maximum length, fixing the problem of huge mismatch times in some situations, like CSV parsing a big file that has no commas inside.

Update 2: It wouldn't really work, since look-behinds check a match AFTER it was matched, so it doesn't optimize anything. It would have been great to have support for more regular expressions run in parallel on the same string.

What started me up was a colleague of mine complaining about the ever changing format of import files. She isn't the only one complaining, mind you, since it has happened to me on at least one project before. Basically, what you have is a simple text file, either comma separated, semicolon separated, fixed width, etc., and you want to map it to a table. But after you make this beautiful little method to take care of that, the client sends a slightly modified file in an email attachment, with an accompanying angry message like: "The import is not working anymore!".

Well, I have been fumbling with the finer aspects of regular expressions for about two weeks now. This seemed like the perfect application for Regex: just save the regular expression in a configuration string, then change it as the mood and IQ of the client wildly fluctuate. What I needed was:
  • a general format for parsing the data
  • a way to mark the different matched groups with meaningful identifiers
  • performance and resource economy


The format is clear: regular expression language. The .NET flavour allows me to mark any matched group with a string. The performance should be as good as the time spent on the theory and practice of regular expressions (about 50 years).

There you have it. But I noticed a few problems. First of all, if the file is big (as client data usually is), translating the entire content into a string and parsing it afterwards would take gigantic amounts of memory and processing power. Regular expressions don't work with streams, at least not in .Net. What I needed was a Regex.Match(Stream stream, string pattern) method.

Without too much explanation (except the in-code comments), I made a class that does just that, in a few hours; I tested it and it works. I'll detail my findings after the code.

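The original class lived in a collapsed code box; what follows is a minimal sketch of the same approach, with the buffer size, the trimming rules and the callback shape as my assumptions:

using System;
using System.IO;
using System.Text;
using System.Text.RegularExpressions;

public class StreamRegex
{
    private readonly Regex _regex;

    public StreamRegex(string pattern)
    {
        _regex = new Regex(pattern);
    }

    // Reads the stream chunk by chunk, reports every completed match and
    // keeps only the unmatched tail in memory.
    public void Match(Stream stream, Action<Match> onMatch)
    {
        StringBuilder buffer = new StringBuilder();
        using (StreamReader reader = new StreamReader(stream))
        {
            char[] chunk = new char[4096];
            int read;
            while ((read = reader.Read(chunk, 0, chunk.Length)) > 0)
            {
                buffer.Append(chunk, 0, read);
                string text = buffer.ToString(); // the costly translation
                int consumed = 0;
                for (Match m = _regex.Match(text); m.Success; m = m.NextMatch())
                {
                    // A match touching the end of the buffer might still
                    // grow with more input, so hold it back for now.
                    if (m.Index + m.Length >= text.Length) break;
                    onMatch(m);
                    consumed = m.Index + m.Length;
                }
                buffer.Remove(0, consumed);
            }
            // Whatever is left gets one final pass.
            foreach (Match m in _regex.Matches(buffer.ToString()))
                onMatch(m);
        }
    }
}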


One issue I had with it was that I kept translating a StringBuilder into a string. I know that is somewhat optimized, but the content of the StringBuilder was constantly changing. A Regex class that worked at least on a StringBuilder would have been a boost. A second problem was that, if the input file was not even close to my Regex pattern, the matching would take forever, as the algorithm would add more and more bytes to the string and try to match it.

And of course, there was my blunt and inelegant approach to regular expression writing. What does one do when in Regex hell? Read Steve Levithan's blog, of course! That is when I decided to write this post and also document my regular expression findings.

So, let's summarize a bit, then add a bunch of links.
  • the .NET regular expression flavour supports marking a group with a name like this
    (?<nameOfGroup>someRegexPattern)
  • it also supports non-capturing grouping:
    (?:pattern)
    This will not appear as a Group in any match, although you can apply quantifiers to it
  • also supported is atomic grouping:
    (?>a+)ab
    The pattern above will never match aaab, although a+ab would: a+ greedily consumes all the a's, and once an atomic group has matched, the engine does not backtrack into it to give characters back. This saves time, but can skip matches that a normal group would find
  • one can also use lazy quantifiers: ab+? will match ab in the string abbbbbb
  • possessive quantifiers are not supported, but they can be substituted with atomic groups:
    ab*+ in some regex flavours is (?>ab*) in .NET
  • let's not forget the
    (?#this is a comment)
    notation to add comments to a regular expression
  • Look-behinds! - a great new discovery of mine: they can check text that has already been matched. I am not sure how much they hinder speed, though. Quick example: I want to match "This is a string", but not "This is a longer string, that I don't want to match, since it is ridiculously long and it would make my regex run really slow when I really need only a short string" :), both as separate lines in a text file.
    ([^\r\n]+)(?:$|[\r\n])(?<=(?:^|[\r\n]).{1,21})
    This expression matches any string that does not contain line breaks, then looks behind to check that there is a string start or a line break character at most 21 characters back, effectively limiting the maximum length of the matched string to 20. Unfortunately, this slows the search down even more, since the look-behind only checks a match AFTER it has completed.


What does that mean? Well, first of all, an increase in performance: using non-capturing grouping will save memory and using atomic grouping will speed up processing. Then there is the "Unrolling the loop" trick, using atomic grouping to optimize repeated alternations like (that|this)*. Group names and comments ease the reading and reuse of regular expressions.

Now for the conclusion: using the optimizations described above (and in the following links), one can write a regular expression that can be changed, understood and used in order to break the input file into matches, each one having named groups. A csv file and a fixed-length record file would be treated exactly the same, say with something like (?<ZipCode>\w*),(?<City>\w*)\r\n or (?<ZipCode>\w{5})(?<City>\w{45})\r\n, possibly using look-behinds to limit the maximum line size. All the program has to do is parse the file and create objects with the ZipCode and City properties (if present), maybe using the new C# 3.0 anonymous types.

Also, I have read about the DFA versus NFA types of regular expression implementations. DFAs are a lot faster, but cannot support many features that NFA implementations can. The .Net regex flavour is NFA, but using atomic grouping and other such optimizations bridges the gap between the two.
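As a quick illustration of the named group idea (the sample line and the csv pattern are assumptions):

using System;
using System.Text.RegularExpressions;

class ImportDemo
{
    static void Main()
    {
        string line = "10115,Berlin";
        Regex csv = new Regex(@"^(?<ZipCode>\w*),(?<City>\w*)$");

        Match m = csv.Match(line);
        if (m.Success)
        {
            // The same code works for the fixed-width pattern too,
            // since only the group names matter from here on.
            var record = new
            {
                ZipCode = m.Groups["ZipCode"].Value,
                City = m.Groups["City"].Value
            };
            Console.WriteLine("{0} - {1}", record.ZipCode, record.City);
        }
    }
}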

There is more to come, as I come to understand these things. I will probably keep reading my own post in order to keep my thoughts together, so you should also stay tuned, if interested. Now the links:

.NET Framework General Reference Grouping Constructs
.NET Framework General Reference Quantifiers
Steve Levithan's blog
Regular Expression Optimization Case Study
Optimizing regular expressions in Java
Atomic Grouping
Look behinds
Want faster regular expressions? Maybe you should think about that IgnoreCase option
Scott Hanselman's .NET Regular Expression Tool list
Compiling regular expressions (also worth noting is that the static method Regex.Match will cache about 15 used regular expressions so that they can be reused. There is also the Regex.CacheSize property that can be used to change that number)
Regular expressions at Wikipedia
Converting a Regular Expression into a Deterministic Finite Automaton
From Regular Expressions to DFA's Using Compressed NFA's


There is still work to be done. The optimal StreamRegex would not need StringBuilders and strings, but would work directly on the stream. There are a lot of properties that I didn't expose from the standard Regex and Match objects. The GroupCollection and Group objects that my class exposes are the normal Regex ones, so some of their properties do not make sense (like Index). Normally, I would have inherited from Regex and Match, but Match doesn't have a public constructor, even though it is not sealed. Then again, I've read somewhere that one should use composition over inheritance whenever possible. Also, there are some rules still to be implemented in my grand importing scheme, like certain fields not being null, or being in a range of values, or in some relation to other values in the same record, and so on. But that is beyond the scope of this article.

Any opinions or suggestions would be really appreciated, even if they are not positive. As a friend of mine said, every kick in the butt is a step forward or a new and interesting anal experience.

Update:

I've taken the Reflector-decompiled sources of System.Text.RegularExpressions in the System.dll file and made my own library to play with. I might still get somewhere, but the concepts in that code are way beyond my ability to comprehend in the two hours I allowed myself for this project.

What I've gathered so far:
  • the Regex class is not sealed
  • Regex calls on a RegexRunner class, which is also public and abstract
  • RegexRunner asks you to implement the FindFirstChar, Go and InitTrackCount methods, while all its other methods are protected but not virtual. In the MSDN documentation, this text seals the fate of the class: "This API supports the .NET Framework infrastructure and is not intended to be used directly from your code."
  • The RegexRunner class that the Regex class actually uses is the RegexInterpreter class, which is a lot of extra code and, of course, internal sealed.

The conclusion I draw from these points, and from the random experiments I did on the code itself, is that there is no convenient way of inheriting from Regex or any other class in the System.Text.RegularExpressions namespace. It would be easy, if the code were freely distributed with comments and everything, to change it to allow for custom Go or ForwardCharNext methods that would read from a stream when reaching the end of the buffered string, or cause a mismatch once the running match exceeds a certain maximum length. Actually, this last point is the reason why regular expressions cannot be used as freely as my original post idea suggested, since trying to parse a completely different file than the one intended would result in huge time consumption.

Strike that! I've compiled a regular expression into an assembly (in case you don't know what that is, check out this link) and then used Reflector on it! Here is how to make your own regular expression object:
  • Step 1: inherit from Regex and set some protected base values. The essential one is base.factory = new YourOwnFactory();
  • Step 2: create said YourOwnFactory by inheriting from RegexRunnerFactory, override the CreateInstance() method and return a YourOwnRunner object. Like this:
    class YourOwnFactory : RegexRunnerFactory
    {
        protected override RegexRunner CreateInstance()
        {
            return new YourOwnRunner();
        }
    }

  • Step 3: create said YourOwnRunner by inheriting from the abstract class RegexRunner. You must now implement FindFirstChar, Go and InitTrackCount.

You may recognize the Factory design pattern here! However, consider that the normal Microsoft implementation (the internal sealed RegexInterpreter) is about 36KB / 1100 lines of highly optimised code. This abstract class is available to us poor mortals for the single reason that regular expressions compiled into separate assemblies needed it. A skeleton of steps 1 and 3 follows.
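This mirrors what Reflector shows in compiled regex assemblies; the field assignments and the empty overrides are sketched for shape only, not a working engine:

using System.Text.RegularExpressions;

public class YourOwnRegex : Regex
{
    public YourOwnRegex()
    {
        base.pattern = "<the original pattern text>";
        base.factory = new YourOwnFactory(); // the essential part
        base.capsize = 1;                    // capture slots: groups + 1
    }
}

public class YourOwnRunner : RegexRunner
{
    protected override bool FindFirstChar()
    {
        // Advance runtextpos to the next plausible starting position.
        return true;
    }

    protected override void Go()
    {
        // The actual matching work goes here.
    }

    protected override void InitTrackCount()
    {
        // Sizes the backtracking stack.
        runtrackcount = 0;
    }
}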

I will end this article with my X-mas wish list for regular expressions:
  • An option to match two or more regular expressions in parallel on the same string. This would allow me to check for a really complicated expression and at the same time validate it (for length, format, or whatever)
  • Stream support. The hack in the code above works, but does not really tap into the power of regular expressions. The support should be included in the engine itself
  • Extensibility support. Maybe this would all have been a lot easier if there were some support for adding custom expressions, perhaps hidden in the .NET (?#comment) syntax.

I am going to quickly describe what happened at the briefing, then link to the site where all the presentation materials can be found (if I ever find it :))

The whole thing was supposed to happen at the Grand RIN hotel, but apparently the people there suddenly changed their minds, leaving the briefing without a set location. In the end it took place at the Marriott Hotel, and the MSDN people were nice enough to phone me and let me know of the change.

The conference lasted nine hours, with coffee and lunch breaks, plus half an hour for signing in and another 30 minutes of introductory bullshit. You know the drill if you've ever gone to one of these events: you sit in a chair waiting for the event to start while you are SPAMMED with video presentations of Microsoft products, then some guy comes in saying hello, presenting the people that will do the talking, then each of the people that do the talking present themselves, maybe even thank the presenter at the beginning... like a circular reference! Luckily I brought my trusted ear plugs and PDA, loaded with sci-fi and tech files.

The actual talk began at 10:00, with Petru Jucovschi presenting as well as holding the first session, about Linq and C# 3.0. He has recently taken over from Zoltan Herczeg and has not yet gained the confidence to keep crowds interested. Luckily, the information and code were reasonably well structured and, even though I had heard them before, they kept me watching the whole thing.

Linq highlights:
  • it is new in .NET 3.5 and takes advantage of a lot of the other newly introduced features, like anonymous types and methods, lambda expressions, expression trees, extension methods, object initializers and many others.
  • it works over any object defined as IQueryable<T> or IEnumerable (although this last one is a bit of a compromise).
  • it simplifies our way of working with queries, bringing them closer to the .NET programming languages and moving errors from runtime into the domain of compiler errors.
  • "out of the box" it comes with support for T-Sql, Xml, Objects and Datasets, but providers can be built (easily) for anything imaginable.
  • Linq queries are actually expression trees that are only run when GetEnumerator is called. This is called "deferred execution" and it means more queries can be chained and optimised before the data is actually required.
  • in case you want the data immediately, for caching purposes, there are the ToList and ToArray methods (see the snippet after this list)
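A tiny illustration of the last two points, using Linq to Objects (the array is an assumption for the example):

using System;
using System.Collections.Generic;
using System.Linq;

class DeferredDemo
{
    static void Main()
    {
        int[] numbers = { 1, 5, 8, 13 };

        // Nothing is executed here: the query is only a description.
        IEnumerable<int> query = from n in numbers
                                 where n > 4
                                 select n * 2;

        // Execution happens now, when the query is enumerated...
        foreach (int n in query)
            Console.WriteLine(n);

        // ...or here, when the results are materialized for caching.
        List<int> cached = query.ToList();
    }
}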


Then there were two back-to-back sessions from my favourite speaker, Ciprian Jichici, about Linq over SQL and Linq over Entities. He was slightly tired and in a hurry to catch the plane to his native lands of Timisoara, VB, but he held it through, even if he had to talk for 2.5 hours straight. He went through the manual motions of creating mappings between Linq to SQL objects and actual database data; it wouldn't compile, but the principles were thoroughly explained, and I have all the respect for the fact that he didn't just drag and drop everything without explaining what happened in the background.

Linq to SQL highlights:
  • Linq to SQL does not replace SQL and SQL programming
  • Linq to SQL supports only SQL Server 2005 and 2008 for now, but Linq providers from other DB manufacturers are sure to come.
  • Linq queries are translated, wherever possible, to SQL and executed on the server.
  • queries support filtering, grouping, ordering, and C# functions. One of the queries was done with StartsWith. I don't know if that translated into SQL CLR code or into a LIKE, and I don't know exactly what happens with custom methods
  • using simple decoration, mapping between SQL tables and C# objects can be done very easily
  • Visual Studio has GUI tools to accomplish the mapping for you
  • Linq to SQL can make good use of automatic properties, object initialisers and collection initialisers
  • an interesting feature is the ability to tell Linq which of the "child" objects to load with a parent object. You can read a Person object and load all its phone numbers and email addresses, but not the purchases made in that name (see the sketch after this list)
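For instance, the decoration-based mapping and a translated query might look like this (a sketch; the table, columns and connection string are my assumptions):

using System;
using System.Data.Linq;
using System.Data.Linq.Mapping;
using System.Linq;

[Table(Name = "Person")]
public class Person
{
    [Column(IsPrimaryKey = true)]
    public int Id { get; set; }

    [Column]
    public string Name { get; set; }
}

class LinqToSqlDemo
{
    static void Main()
    {
        using (DataContext db = new DataContext("connection string here"))
        {
            // Runs on the SQL server; StartsWith typically becomes LIKE 'A%'.
            var query = from p in db.GetTable<Person>()
                        where p.Name.StartsWith("A")
                        orderby p.Name
                        select p;

            foreach (Person p in query)
                Console.WriteLine(p.Name);
        }
    }
}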


Linq to Entities highlights:
  • it does not ship with the .NET framework, but separately; a release version will probably be unveiled in the second half of this year
  • it uses three XML files to map source to destination: conceptual, mapping and database. The conceptual file holds a schema of the local objects, the database file holds a schema of the source objects, and the mapping file describes their relationship.
  • One of my questions was whether I could use Linq to Entities as a data adapter from an already existing data layer to another, using it to redesign the data layer architecture. The answer was yes. I find this very interesting indeed.
  • of course, GUI tools will help you do that with drag and drop operations and so on and so on
  • the three-level mapping allows you to create objects from more than one linked table, making the internal workings of the database engine and even some of its structure irrelevant
  • I do not know if you can create an object from two different sources, like SQL and an XML file
  • for the moment Linq to SQL and Linq to Entities are built by different teams, and they may have different approaches to similar problems


Then it was lunch time. For a classy (read: expensive as crap) hotel, the service was really badly organised. The food was there, but you had to stand in long queues with a plate in your hand to get some, then quickly hunt for empty tables, the kind you stand at to eat. The food was good, though not exceptional.

Aurelian Popa was the third speaker, talking about Silverlight. Now, it may be something personal, but he brought to my mind the image of Tom Cruise: arrogant, hyperactive, a bit petty. I was half expecting him to say "show me the money!" all the time. He insisted on telling us about the great mathematician Comway who, by a silly mistake, created Conway's Game of Life. If he could only spell his name right, tsk, tsk, tsk.

Anyway, technically this presentation was the most interesting to me, since it showed concepts I was not familiar with. Apparently Silverlight 1.0 is Javascript based, but Silverlight 2.0, which will be released by the middle of this year, I guess, uses .NET! You can finally program the web with C#. The speed and code protection advantages are great. Silverlight 2.0 maintains the ability to manipulate Html DOM objects and lets Javascript manipulate its own elements.

Silverlight 2.0 highlights:
  • Silverlight 2.0 comes with its own compact .NET version, independent of the .NET versions on the system or even of the operating system
  • it is designed with compatibility in mind, cross-browser and cross-platform. One will be able to use it in Safari on Linux
  • the programming can be both declarative (using XAML) and object oriented (programmatic access with C# or VB)
  • I asked if it was possible to manipulate the html DOM of the page and, being written in .NET, work significantly faster than the same operations in pure Javascript. The answer was yes, but since Silverlight is designed to be cross-browser, I doubt it is the whole answer. I wouldn't put it past Microsoft to make some performance optimizations for IE, though.
  • Silverlight 2.0 has extra abilities: CLR, DLR (for Ruby and other dynamic languages), support for RSS, SOAP, WCF, WPF, Generics, Ajax, all the buzzwords are there, including DRM (ugh!)


The fourth presentation was just a bore, not worth mentioning. What I thought would enlighten me with new and exciting WCF features was long and featureless (the technical details as well as the presenter), and lingering on the description would only make me look vengeful and cruel. One must maintain appearances, after all.

WCF highlights: google for them. WCF replaces Web Services, Remoting, Microsoft Message Queue, DCOM and can communicate with any one of them.

If you don't want to read the whole thing and just go to the solution, click here.

I reached a stage in an ASP.Net project where I needed to make some pages work faster. I used dotTrace to profile the speed of each form and I optimized the C# and SQL code as much as I could, yet some pages were still very slow.

That gave me the idea to look for Javascript profilers. Good idea, bad offer. You either end up with a makeshift implementation that hurts more than it helps, or with something commercial that you don't even like. FireFox has a few free options like FireBug or Venkman, but I didn't like them either, and besides, the pages in question were performing badly in Internet Explorer, not FireFox.

That got me thinking of the time when Firefox managed to quickly select all the items in a <select> element, while Internet Explorer scrolled to each item as it was selected, slowing the process tremendously. Back then I solved the issue by setting the select's style.display to none, selecting all the items, then restoring the display. It worked instantly.

Can you guess where I am going with this?

Most ASP.Net applications have a MasterPage now. Even most other types of sites employ a template for all the pages in a web application, with the changing page content set in a div or some other container. My solution is simple and easy to apply to the entire project:

Step 1. Set the style.display for the page content container to "none".
Step 2. Add a function to the window.onload event to restore the style.display.

Now what happens is this: the content is rendered inside the hidden div, all the javascript functions that create, move or change elements in the content run really fast, since Internet Explorer does not refresh the visual content in the middle of the execution, and only at the end is the hidden div shown.

A more elegant solution would have been to disable the visual refresh of the element while the changes are taking place, then enable it again, but I don't think one can do that in Javascript.

This fix can be applied to pages in FireFox as well, although I don't know if it speeds anything up significantly. The overall effect will be like the way Internet Explorer displays tables: you see the page appear suddenly, rather than seeing each row appear while the table loads. This might be nice or not, depending on personal taste.

Another cool idea would be to hide the div and replace it with a "Page loading" div or image. That would look even cooler.

Here is the code for the restoration of the display. In my own project I just set the div to style="display:none", although it might be more elegant to also hide it using Javascript, for the off chance that someone views the site in lynx or has Javascript disabled.
function addEvent(elm, evType, fn, useCapture)
// addEvent and removeEvent
// cross-browser event handling for IE5+, NS6 and Mozilla
// By Scott Andrew
{
    if (elm.addEventListener) {
        elm.addEventListener(evType, fn, useCapture);
        return true;
    } else if (elm.attachEvent) {
        var r = elm.attachEvent("on" + evType, fn);
        return r;
    } else {
        alert("Handler could not be attached");
    }
}

function initMasterPage() {
    // show the content container once the page has fully loaded
    document.getElementById('contenuti').style.display = '';
}

addEvent(window, 'load', initMasterPage);

Update: this problem appeared in older versions of AjaxControlToolKit. Here is a link that says the issue was fixed on the 21st of September 2007.

You are building this cool page using a TabContainer or some other AjaxControlToolKit control, everything looks smashing, and then you decide to add UpdatePanels so that everything runs super-duper-fast. And suddenly the beautiful page looks like crap! Everything works, but your controls don't seem to load their cascading style sheet.
What is happening is that you make a control visible through an update panel, and so its CSS doesn't get loaded. I don't know exactly why; you would have to look into the AjaxControlToolKit source code and find out for yourself.

I found two fixes for this. The first is the no-brainer: add another TabContainer or AjaxControlToolKit control to the page, outside any update panels, make it visible, but set its style.display to 'none' or put it in a div or span with style="display:none". The second is the AjaxControlToolKit way: in the Page_Load event of the page or user control that contains the TabContainer or AjaxControlToolKit control, add this line:
ScriptObjectBuilder.RegisterCssReferences(theToolkitControl);

This is part of the ExtenderControlBase class in AjaxControlToolKit, which is inherited by most if not all of their controls.
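In practice it might look like this, in the code of the page or user control hosting the toolkit control (the TabContainer ID is an assumption):

using System;
using System.Web.UI;
using AjaxControlToolkit;

public partial class MyTabsControl : UserControl
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // Registers the control's CSS even when it only becomes
        // visible during a partial (UpdatePanel) postback.
        ScriptObjectBuilder.RegisterCssReferences(TabContainer1);
    }
}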

Now it should all work wonderfully.

This book started great. It laid out a structured view of how a software company should function, from the way one designs projects to code and documentation standards. I really hoped this was the mother lode: a book that would show me how a "standard" IT company functions on every level. It wasn't. Mark Horner started it well and ended it badly. A shorter book, more to the point, would have been enough.

Bottom line: the book starts in an interesting way, describing what I would call "IT gap analysis", in other words the application of a simple idea: begin with a detailed (and documented) picture of the current (start) situation, then describe in just as much detail the situation you want to reach (the end). From then on, the job of describing the transition becomes orders of magnitude easier. That applies to software projects (start with what the client has and needs, then create the plan to bridge the gap), documentation (start with the functional and end with the structural) and ultimately code (start with abstract classes and interfaces, then fill in the missing code).

Other than that, there are some (hopefully) nice references, then a lot of empty space filled with irrelevant things: descriptions of design patterns (which are nice, but there are books for that), a glossary of terms (some never even used in the book!), and then the general pattern of describing something, then adding a "Standard acknowledges" part that basically says the same thing as his own description. It generally felt like a student paper from someone who needed only a passing grade.

Sorry Mark, better luck next time. I will add here a short summary of what yours truly thought was noteworthy in the book:
  • use gap analysis for all the levels of your software work. When you define what you have and what you need, filling the blanks becomes easier.
  • use functional documentation, design documentation and structural documentation to detail what you wanted the software to do, how you designed to solve the problems and what are the basic building blocks of the project (classes, patterns, etc).
  • use code standards and peer reviews and even external code auditing to improve the quality of code. Refactoring is a must. Popular code development methodologies include Extreme Programming and Rational Unified Process.
  • use a design standard like The Open Group Architecture Framework (TOGAF)
  • the enterprise vs. domain dichotomy: should software be built from scratch for the current set of requests only, or should it be designed as a general component, ready for reuse? I would really lean towards enterprise, even when the profit from the extra work is not immediately obvious. Sometimes things that you have prepared in advance and nobody acknowledged become a real time (and life) saver when unreasonable requests tumble down upon you.
  • also linked to the enterprise/domain issue: an application framework solution. Create a basic Visual Studio solution that contains common components used in many projects and use it as a startup solution.
  • use the Visual Studio formatting options to keep your code well formatted. Use a standard of naming variables, methods, properties. My own choice is using lowerCamelCase for inner variables, prefixing the name with an underscore for fields. UpperCamelCase (or Pascal) for methods, properties and class names. Hungarian notation for controls (lbName for a label with a name). I don't really care if one names the control txtName, tbName or tboxName, as long as the prefix is revealing.
  • use the Obsolete attribute for methods and properties that are intended to be removed in the near future. In my own library I have used methods that became obsolete with the coming of .NET 2.0 and used this attribute to point not only to the obsolescence, but also to the blog entry detailing the reasoning behind it.
  • this is basically derived from other sources, but I do think it is relevant: best practice recommends using composition over inheritance wherever possible. I admit that coding composition is much more complex, but with the refactoring tools found in Visual Studio and its add-ons (like my beloved Resharper), it becomes similar in complexity.
  • references:
    1. book: Programming C# by Jesse Liberty, published by O'Reilly
    2. dude: Martin Fowler is a leading authority on refactoring
    3. books to understand object-oriented development: Object-Oriented Analysis and Design with Applications by Grady Booch, published by Addison-Wesley in 1994
    4. Expert C# Business Objects by Rockford Lhotka, published by Apress in 2003
    5. book: Code Complete, by Steve McConnell, published by Microsoft Press 2004
    6. authorities on design patterns: Martin Fowler, Gregor Hohpe, Bobby Woolf
    7. dude: professor Trygve Reenskaug and his discussion on the role of object collaboration: Role Modeling and UML-VM discussions


My conclusion: read my summary and you won't waste two days of reading time.

Most programmers use the ViewState to preserve data across postbacks and the Session to preserve data for a user. There are cases, though, when you want to preserve data across users. Maybe you are using an IHttpHandler and you want to send information through a key between your application and the handler. Or maybe you want to keep data that is resource consuming to acquire but is used by many users of the same application. This is where the Cache comes in.

There are two things you need to pay attention to when you are using the Cache:
  • While the syntax Cache[key] is very simple and consistent with the ViewState, Session and the other dictionaries you are used to, you need to be aware that, when the server needs to free memory, cache items will be removed based on priority. Use Cache.Add or Cache.Insert with CacheItemPriority.NotRemovable when you are sure you don't want that to happen; the Absolute and Sliding expirations will still work (see the sketch after this list).
  • I've read somewhere that HttpContext.Current.Cache does not work across users, as if it were just another Session object, and that you should use HttpRuntime.Cache instead. I also looked in the ASP.Net source code and found that HttpContext.Cache simply returns HttpRuntime.Cache, so I don't see how the two properties could behave differently. HttpRuntime is more easily usable, though, since it works in situations where HttpContext.Current is null
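A quick sketch of the non-removable insert (the key name and the data are assumptions):

using System;
using System.Web;
using System.Web.Caching;

public static class SharedData
{
    public static void Store(object expensiveData)
    {
        // Survives memory scavenging, but still expires after 10 minutes.
        HttpRuntime.Cache.Insert(
            "SharedKey",                     // the same key for all users
            expensiveData,                   // whatever was costly to acquire
            null,                            // no cache dependency
            DateTime.UtcNow.AddMinutes(10),  // absolute expiration
            Cache.NoSlidingExpiration,       // no sliding expiration
            CacheItemPriority.NotRemovable,  // not scavenged under memory pressure
            null);                           // no removal callback
    }
}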

I was building my nice little web app, you know: grids, buttons, ajax, stuff like that. And of course I had to create CSS classes for my controls; for example, a button has the class butt and the Send button has the class buttsend. I agree they were not the most inspired CSS class names, but look what ajax made out of it:

Nullable types are new to .NET 2.0 and at first glance they seem great. The idea that you can now wrap any type into a generic one that also allows for null values seems valuable enough, especially for database interactions and the dreaded SQL null.

However, having an object that can be both instantiated and null can cause a lot of issues. First of all, boxing! You can define a Nullable<int>, set it to null, then access one of its properties (HasValue). So suddenly a piece of code like obj = null; obj.Property... makes sense. But if you want to send it as a parameter to a method, one that receives an object and then does stuff with it, the null value gets boxed into a plain null reference, which is no longer an instance of anything. Therefore you can't get the type of the variable that was passed to the method!

Quick code snippet:
int? i = null;
DbWrapper.Save("id", i);

With Save defined as:
public void Save(string name, object value)

Now, I want to know what kind of nullable type was sent to the method. I can't see that if the parameter signature is object, so I am creating another overload of the method:
public void Save(string name, Nullable value)

At this point I get an error: static types cannot be used as parameters. And of course they can't, because the non-generic Nullable is a static helper class that needs the type of the wrapped value; the actual type is Nullable<int>. My only solution now is to create an overload for every value type: integers, floating point values, booleans, chars and IntPtrs. String and Object are reference types that already accept null, so Nullable<string> does not exist for them, but if you count all the value types, there are 14 of them!

There is another option that I just thought of: a custom object that implicitly converts from a nullable. The method would then take this object type as a parameter.
I tested it and it works. Here is the idea of the object:
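This is a sketch of such a wrapper rather than the original code; the class names follow the post, everything else is an assumption:

using System;

public abstract class NullableWrapper
{
    public abstract Type UnderlyingType { get; }
    public abstract object Value { get; }
}

public class NullableWrapper<T> : NullableWrapper where T : struct
{
    private readonly T? _value;

    public NullableWrapper(T? value)
    {
        _value = value;
    }

    public override Type UnderlyingType
    {
        get { return typeof(T); }
    }

    public override object Value
    {
        get { return _value.HasValue ? (object)_value.Value : null; }
    }

    // This implicit conversion is what lets a method with a
    // NullableWrapper parameter accept an int? directly.
    public static implicit operator NullableWrapper<T>(T? value)
    {
        return new NullableWrapper<T>(value);
    }
}

With something like this in place, Save("id", i) resolves to the NullableWrapper overload, and UnderlyingType reveals the wrapped type even when the value is null.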


Update:
There are issues with this object, mainly concerning constants. If you pass a constant value like the number 6 to a method that has overloads for both Object and NullableWrapper, the compiler will choose NullableWrapper. More than that, it will choose a NullableWrapper<byte>, since the value fits in a byte. Adding overloads for int, double, etc. causes Ambiguous reference errors. So my only solution so far is to consider the UnderlyingType of even a NullableWrapper<byte> to be Int32. It is obviously a hack, but I haven't found a good programming solution to it yet. If you have an idea, please let me know! Thanks.

Links:
Nullable Types (C# Programming Guide)
Check If Type is a Nullable Type - Convert Nullable Type to Underlying Type - C# 2.0 Generics

FxCop is a free Microsoft utility that analyses compiled .Net code and makes suggestions based on rules, be they design, security, performance or company policy rules.

The first problem is that you can only use it on compiled code, meaning executables and DLLs. But what about ASP.Net 2.0? It doesn't build a DLL anymore like 1.1 did, so how can one use it? I have built an application (one that you will have to message me to send to you) that takes the project name as a parameter, then creates an FxCop project file with the DLLs from the site's bin folder as reference DLLs and the DLLs from the ASP.Net temporary folder of that project as analysis targets. That means you can now use FxCop for ASP.Net as well.

The second problem is that FxCop is now part of Team System, the overpriced and overhyped version of Visual Studio, and any attempt to use it with the Standard or Professional versions is cumbersome and undocumented. Siderite to the rescue! Here is a quick way to integrate FxCop as an external tool in Visual Studio and use it for either Console and Windows Forms applications or ASP.Net sites.

Step 1. Download FxCop. The latest version is 1.35, but there is also a 1.36 beta available that knows about lambda expressions and stuff like that.
Step 2. Get from me the FxCopAspNet application (completely free and with source), or build your own. Here is a possibly working link for it: at MediaFire.
Step 3. Open Visual Studio, go to Tools/External Tools and add two FxCop entries:
- FxCop [use C:\Program Files\Microsoft FxCop 1.36\fxcop.exe as Command, "$(TargetPath)" as Arguments and C:\Program Files\Microsoft FxCop 1.36 as Initial Directory]
- FxCopAspNet [use C:\Program Files\FxCopAspNet\FxCopAspNet.exe as Command, "$(ProjectFileName)" as arguments and C:\Program Files\FxCopAspNet as Initial Directory]
Step 4. Just open your web site or application in Visual Studio, compile it, then click on the FxCop item that applies.

Now, this is not meant to be a tutorial on FxCop, so here are some links that might enlighten people:
Download FxCop 1.36 Beta
Download FxCop 1.35
Open Source FxCop Integration Add-in for Visual Studio 2005
How to copy the necessary files from Team System to Visual Studio 2005 Professional to make integration work
Use FxCop from your own code
A small tutorial
FxCopUnit, FxCop testing unit project
Video on how to create your own FxCop rules

There are a myriad of rules included in the default installation of FxCop. Some of them are just annoying, like certain naming rules, or the one telling you not to throw Exception itself, only objects inherited from Exception; but some are pretty good. A lot more can be found on the Internet and now, with the integration in VS Team System, I expect many more to pop up.

Maybe it works for other IIS versions as well, but I was specifically looking for a way to turn compression on on our Windows 2000 development/test computer. So this is the long story:
HOW TO: Enable ASPX Compression in IIS

and this is the short one:
Step 1: backup your site metabase
Go to the Internet Information Services (IIS) tab and right click on it, go to All Tasks, choose Backup/Restore Configuration and save it.

Step 2: make the change to the metabase
Create a .bat file that has the following content:
net stop iisadmin
cd C:\InetPub\adminscripts
CSCRIPT.EXE ADSUTIL.VBS SET W3Svc/Filters/Compression/GZIP/HcScriptFileExtensions "asp" "dll" "exe" "aspx"
CSCRIPT.EXE ADSUTIL.VBS SET W3Svc/Filters/Compression/DEFLATE/HcScriptFileExtensions "asp" "dll" "exe" "aspx"
net start w3svc


Make sure to restart the SMTP service or any others that were stopped by the bat file. I don't know how to start it from the command line and I pretty much don't care. The batch file will list the services it is about to shut down, but in the end it restarts only the Web service.

The performance is immediately visible and it also works with Ajax.

Update:
This article originally talked about Windows XP. Thanks to McHilarant (see comment below) I realized that, even though the metabase changes are possible on any IIS 5 (Windows XP and Windows 2000), the actual compression will not work on XP. I remembered then that the modification I did that time was not on my dev machine, but on our office server, so I updated the post accordingly.

Another Update:
Here is a link about a script to enable IIS 6 gzip compression: Script to Enable HTTP Compression (Gzip/Deflate) in IIS 6.