I was working on a small ASP.Net control that inherited from Button, so nothing special. I went into Visual Studio in design mode and I noticed that the control did not display any resize handles so I could only move the control, not resize it using the mouse.

After intensive Googling, trying custom ControlDesigner classes with AllowResize set to true and all kinds of other strange things, I realised that in the Render method I was writing a style tag while in DesignMode, before rendering the button. That technique was described in a previous blog entry as a fix for CSS files that contain WebResource entries and need PerformSubstitution.

It appears that Visual Studio, by default, only resizes the controls that render an HTML element that has size. A style tag does not! The solution was to add a div around the control (and the style block) while in DesignMode. The only problem was that the div is a block element, while the input of type button is an inline element. That means that two div elements would place themselves one under the other, not next to the other like two buttons would. So I fixed the problem by also adding a float style attribute and setting it to left.

Here is a small code sample:

protected override void Render(HtmlTextWriter writer)
{
    if (DesignMode)
    {
        writer.AddStyleAttribute("float", "left");
        writer.RenderBeginTag("div");
        // this renders an embedded CSS file as a style block
        writer.Write(this.RenderDesignTimeCss("MyButton.css"));
    }
    base.Render(writer);
    if (DesignMode)
    {
        writer.RenderEndTag();
    }
}

Another great Malazan book, the fourth in the series, House of Chains starts with the personal history of Karsa, the Teblor, previously known to us as Sha'ik's Toblakai guardian. His tale is both brutal and inspiring, as he evolves from a mindless brute to a ... well... slightly minded brute. At the end of the book there is another battle, as I have come to expect from the previous books, only this must be the weirdest one yet. You have to read it to believe it.

Since it started with the singular story of Karsa, and because of the many characters that were introduced, developed from the previous stories or simply clashed at the end, the book felt fragmented (like Raraku's warren :) ). That wasn't so bad, however it opened too many avenues that need to be closed in the following books.

At this point it is obvious to me that the Malazan Book of the Fallen is not really a series, but a humongous single story with many interlocking threads and characters. Like chains dragging ghosts of books read, I feel the pressure to end the series so I will probably start hacking away at the fifth book this week.

I was trying to make a web control that would be completely styled by CSS rules. That was hard enough with the browsers all thinking differently about what I was saying, but then I had to also make it look decent in the Visual Studio 2008 designer.

You see, in order to use, inside CSS rules (or Javascript resources, for that matter), files that have been stored as embedded resources in the control assembly, a special construct that looks like <%=WebResource("complete.resource.name")%> is required.

The complete resource name is obtained by appending the path of the embedded resource inside the project to the namespace of the assembly. So if you have a namespace like MyNamespace.Controls and you have an image called image.gif in the Resources/Images folder of your project, the rule for a background image using it is background-image:url(<%=WebResource("MyNamespace.Controls.Resources.Images.image.gif")%>);.

Also, in order to access the CSS file you need something like

[assembly: WebResource("MyNamespace.Controls.Resources.MyControl.css",
    "text/css", PerformSubstitution = true)]

somewhere outside a class (I usually put it in AssemblyInfo.cs). Take notice of the PerformSubstitution = true part, which is what makes ASP.Net look for the WebResource constructs and interpret them.

Great! Only that PerformSubstitution is ignored in design mode in Visual Studio! The only way to render the control correctly is to render the CSS file yourself and handle the WebResource tokens manually. Here is a method to do just that:

private static readonly Regex sRegexWebResource =
    new Regex(@"\<%\s*=\s*WebResource\(""(?<resourceName>.+?)""\)\s*%\>",
        RegexOptions.ExplicitCapture | RegexOptions.Compiled);

public static string RenderDesignTimeCss2(Control control, string cssResource)
{
    // the control type, or any type in the assembly containing the CSS file
    var type = control.GetType();
    Assembly executingAssembly = type.Assembly;

    // read the embedded CSS file from the assembly manifest
    string content;
    using (var stream = executingAssembly.GetManifestResourceStream(cssResource))
    using (var reader = new StreamReader(stream))
    {
        content = reader.ReadToEnd();
    }

    // replace every <%=WebResource("...")%> token with the actual resource URL;
    // GetWebResourceUrl here is a helper extension method (Control has no such
    // method out of the box); at runtime ClientScriptManager.GetWebResourceUrl
    // does the same job
    content = sRegexWebResource.Replace(content,
        m => control.GetWebResourceUrl(m.Groups["resourceName"].Value));

    return string.Format("<style>{0}</style>", content);
}

All you have to do is then override the Render method of the control and add:

if (DesignMode)
{
    writer.Write(
        RenderDesignTimeCss2(this,
            "MyNamespace.Controls.Resources.MyControl.css"));
}


Update:
It seems that rendering a style block before the control disables the resizing of the control in the Visual Studio designer. The problem and the solution are described here.

My friend Meaflux mentioned a strange concept called polyphasic sleep that would supposedly allow me to spend less time sleeping, thus maximizing my waking time. I usually love sleep, I can sleep half a day if you let me and I am very cranky when forcefully woken up... as in every day when going to work, doh! Also, I enjoy dreaming and even nightmares. Sure, I get scared and lose rest and there are probably underlying reasons for the horrors I experience at night sometimes, but they are cool! Better than any Hollywood horror, that's for sure. My brain's got budget :)

Anyway, as I get older I understand more and more the value of time, so a method that would give me an extra of 2 to 6 hours a day sounds magical and makes me reminisce of the good times of my childhood when I had time for anything! Just that instead of skipping school I would skip sleep. But does it work?

A quick Google shows some very favourable articles, including one called How to Hack your Brain and the one on Wikipedia, which is ridiculously short and undocumented. A further search reveals some strong criticism as well, such as this very long and seemingly documented article called Polyphasic Sleep: Facts and Myths. Then again, there are people that criticise the critic like in An attack on polyphasic sleep. Perhaps the most interesting information comes from blog comments from people who have tried it and either failed miserably or are extremely happy with it. Some warn about the importance of the sleep cycles that the polyphasic sleep skips over, like this Stage 4 Sleep Deprivation article.

Given the strongly conflicting evidence, my only option is to try it out, see what I get. At least if I suddenly stop writing the blog you know you should not try it and lives will be saved :) Ok, so let's summarise what this all is about, just in case you ignored all the links above.

Most people are monophasic sleepers, a fancy name for people who sleep once a day for about 8 hours (more or less, depending on how draconian your work schedule and responsibilities are). Many are biphasic, which means they also sleep a little during the afternoon. This apparently is highly appreciated by "creative people", which I think means people that are self employed and doing well, so they can afford the nap. I know many retired people, and probably children, have at least a biphasic sleep cycle. Research shows that people normally feel they need to sleep most at around 2:00 and 14:00, which accounts for the sleepiness we feel after lunch. The midday sleep is also called Siesta.

Now, polyphasic sleep means you reduce your sleep (which in the fancy terminology is called core sleep) and then compensate by having short bursts of around 20 minutes of sleep, at intervals as fixed as possible, called naps. This supposedly "fixes" your brain with REM sleep, which the naps are said to push you into almost immediately, although it is a contested theory. The only sure thing seems to come from an Italian researcher called Claudio Stampi, who did a lot of actual research and who clearly stated that sleeping many short naps is better than sleeping only once for the same total number of hours. So in other words six 20 minute naps are better than one 3 hour sleep.

Personally, I believe there is some truth to the method, as many people are actually using it, but with some caveats. Extreme versions like the Uberman (six naps a day, resulting in 2 hours of actual sleep) probably take their toll physiologically, even if they might work for the mental fitness. Also, probably some people are better suited than others for this type of customised sleep cycles. And, of course, it is difficult for a working man to actually find the place and time to nap during the afternoon, although I hear that it has become a fashion of sorts in some major world cities to go to Power nap places and sleep for 20 minutes in special chairs. No wonder New Yorkers are neurotic :) On a more serious yet paranoid note: what if this works and then employers make it mandatory? :-SS

So, in the interest of science, I will attempt this for a while, see if it works. My plan is to sleep 5 hours for the core, preferably from 1:00 to 6:00, then have two naps, one when I get back from work (haven't decided if before or after dinner, as there are people recommending not napping an hour after eating) and another close to 8:30 when I go to work. So far I have been doing it for three days, but it seems all this needs at least a few weeks of adjustment.

Now, with 5 hours and 40 minutes of sleep instead of 7 I only gain 1.33 hours a day, but that means an extra TV show, programming a small utility, reading a lot and maybe even writing... so wish me luck!

Update: I did try it, but I didn't get the support I needed from the wife, so I had to give it up. My experience was that, if you find the way to fall asleep in about 5 minutes, the method works. I didn't feel sleepy, quite the contrary, I felt energized, although that may be from the feeling of accomplishment that the thing actually works :) Besides, I only employed the method during the work week and slept as much as I needed in the weekend. I actually saved about 40 hours a month, which I could use for anything I wanted. If one works during that time, it means an increase in revenue to up to 25%. That's pretty neat.

I have this ASP.Net web control that has a property that is persisted as an inner property and sets some custom style attributes. Then there is another control that inherits from the first one and hides this custom style property with another one having the same name but a different type. All this is done via the new operator.

Now, while this is perfectly acceptable from the standpoint of Object Oriented Programming and it compiles without any issues, I get a compile error when I actually use the property and save it in the aspx code. That is a limitation of ASP.Net and I could find no way of circumventing it other than changing the name of the property. This would also be the case if you had two public properties with the same name, only differently cased.

While I understand this second situation, the first is really stupid. I am specifying the type of the control, so there should be no issue with a hidden property. I guess it has something to do with the persistence mechanism, but still... Annoying.
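
Here is a minimal sketch of the kind of hierarchy I am talking about (all the names are invented for illustration):

using System.Web.UI;
using System.Web.UI.WebControls;

// illustrative style types; the real ones have different names and members
public class BaseStyle
{
    public string BackgroundColor { get; set; }
}

public class ExtendedStyle : BaseStyle
{
    public string BorderColor { get; set; }
}

public class BaseButton : Button
{
    [PersistenceMode(PersistenceMode.InnerProperty)]
    public BaseStyle CustomStyle { get; set; }
}

public class FancyButton : BaseButton
{
    // hides the inherited property with one of the same name but a different type
    [PersistenceMode(PersistenceMode.InnerProperty)]
    public new ExtendedStyle CustomStyle { get; set; }
}

The C# compiles without a complaint, but as soon as the inner CustomStyle property is persisted inside a FancyButton tag in the aspx markup, the page compiler throws the error described above.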

I have been working on a class that has some properties of type Color?, a nullable Color. I noticed that, in Visual Studio 2008, when I go to edit the properties in question in the property grid I get a dropdown of possible colors whereas if I go to edit a color on a WebControl, something like BackColor, I get a nice dialog window with the title 'More Colors' with colored hexagons from which to choose a color. Researching on the net I found out that if you want to use the ColorEditor in the property grid control you should decorate the property with [Editor(typeof(ColorEditor), typeof(UITypeEditor))]. I did that and, indeed, the editor was changed. However, it was not the one for BackColor, with its hexagonal design.

Reflecting the WebControl class I noticed that it didn't use any Editor decoration, but rather [TypeConverter(typeof(WebColorConverter))], so I added that and removed the editor on my property. Now the only editing option for my property was a simple textbox. Using the TypeConverter together with the Editor option didn't have any visible effect either; it just used the normal editor.

As a last test I changed the type of the property to Color and decorated it only with the TypeConverter attribute and there it was, the 'More colors' editor. My only conclusion is that the editor is chosen by the property grid itself on Color properties decorated with the WebColorConverter. It must be hardcoded and thus it will never work on other types. Just stick with the ColorEditor option for nullable values.
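
To summarise the combinations, here is a minimal sketch (the control and the property names are invented):

using System.ComponentModel;
using System.Drawing;
using System.Drawing.Design;
using System.Web.UI.WebControls;

public class ColoredControl : WebControl
{
    // nullable Color: the ColorEditor decoration gives a usable color editor,
    // but not the 'More Colors' hexagon dialog that BackColor gets
    [Editor(typeof(ColorEditor), typeof(UITypeEditor))]
    public Color? HighlightColor { get; set; }

    // plain Color with the WebColorConverter: the property grid itself
    // chooses the 'More Colors' dialog for it
    [TypeConverter(typeof(WebColorConverter))]
    public Color GlowColor { get; set; }
}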

Needless to say, I spent quite a lot of time compiling, closing and opening the designer on a test page and so on, so I hope this helps other people with a similar problem.

Today I found out about a very cool attribute called DebuggerDisplayAttribute. What it does is tell the Visual Studio debugger how to display a certain class when making a break in the code. And it seems it is very powerful, using expressions in the string that is given as its parameter (ex: [DebuggerDisplay("x = {x} y = {y}")]). Check out the link on MSDN here.

The way the class is displayed can be further customized by using the DebuggerTypeProxyAttribute class, which actually uses a completely different class to be presented in the debugger windows. Read Using DebuggerTypeProxy Attribute for more details.
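
Here is a quick illustration of both attributes (the types are made up, of course):

using System.Diagnostics;

[DebuggerDisplay("x = {x} y = {y}")]
public class Point
{
    private readonly int x;
    private readonly int y;

    public Point(int x, int y)
    {
        this.x = x;
        this.y = y;
    }
}

// the debugger shows the proxy's public members instead of the raw fields
[DebuggerTypeProxy(typeof(Point3DDebugView))]
public class Point3D
{
    internal int x, y, z;
}

internal class Point3DDebugView
{
    public string Coordinates;

    public Point3DDebugView(Point3D point)
    {
        Coordinates = string.Format("({0}, {1}, {2})", point.x, point.y, point.z);
    }
}

When breaking in code that holds a Point, the debugger displays something like "x = 1 y = 2" instead of the type name, and expanding a Point3D in the watch window shows the proxy's Coordinates member instead of its raw fields.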

I spent several hours debugging an error that gave this exact error message: 'null' is null or not an object. Well, duh! The problem here is that the project was an ASP.Net project, with a large number of files that interacted in innovative and weird ways with the ASP.Net Ajax framework. So going line by line was a problem. Also, the Javascript debuggers stopped in a finally block that had nothing to do with the error.

As a side note, the only debugger that showed where the error originated was the Chrome debugger (and probably the Firefox one, but I didn't get to try it). The problem with Chrome is that it only gave me the file and line number in a minified js file, so I had no clue on what was happening.

Ok, to cut it short, the error appeared in a line like $get('someId').value=... An element with the id someId did not exist, therefore the result of the function was null. Setting the value resulted in the error.

Conclusion: 'null' is null or not an object is an error that occurs when member access is attempted on the result of a function without storing it in a variable first. Look for code of the form someFunction(params).member. It is bad form anyway; store the result of someFunction in a variable and then access its members: var x = someFunction(params); if (x) { /* do something with x.member */ }

Whenever you build a more complex ASP.Net control you get to build templates. Stuff like collapsing panels, headered controls or custom repeaters are all usually working with templates, by having properties of the ITemplate type which then are instantiated into a simple control like a Placeholder or Panel. There are many tutorials on how to do this, so I won't linger on the subject.

What I found very difficult to achieve was to have a templated control that would just accept dropped controls and place them into a template, without having to enter "template mode". First I will copy the entire code of a very simple control, then I will explain.

using System.ComponentModel;
using System.ComponentModel.Design;
using System.Web.UI;
using System.Web.UI.Design;
using System.Web.UI.WebControls;

namespace Web.Controls
{
    [Designer(typeof (TestControlDesigner), typeof (IDesigner))]
    [ParseChildren(true)]
    [PersistChildren(false)]
    public class TestControl : WebControl
    {
        #region Properties

        [EditorBrowsable(EditorBrowsableState.Never)]
        [Browsable(false)]
        [PersistenceMode(PersistenceMode.InnerProperty)]
        [DefaultValue(typeof (ITemplate), "")]
        [TemplateInstance(TemplateInstance.Single)]
        public ITemplate MainTemplate
        {
            get;
            set;
        }

        #endregion

        #region Private Methods

        protected override void CreateChildControls()
        {
            base.CreateChildControls();
            if (MainTemplate != null)
            {
                MainTemplate.InstantiateIn(this);
            }
        }

        #endregion
    }

    public class TestControlDesigner : ControlDesigner
    {
        #region Member data

        private TestControl mTestControl;

        #endregion

        #region Public Methods

        public override void Initialize(IComponent component)
        {
            base.Initialize(component);
            SetViewFlags(ViewFlags.TemplateEditing, true);
            mTestControl = (TestControl) component;
        }

        public override string GetDesignTimeHtml()
        {
            mTestControl.Attributes[DesignerRegion.DesignerRegionAttributeName] = "0";
            return base.GetDesignTimeHtml();
        }

        public override string GetDesignTimeHtml(DesignerRegionCollection regions)
        {
            var region = new EditableDesignerRegion(this, "MainTemplate");
            region.Properties[typeof (Control)] = mTestControl;
            region.EnsureSize = true;
            regions.Add(region);
            string html = GetDesignTimeHtml();
            return html;
        }

        public override string GetEditableDesignerRegionContent(
            EditableDesignerRegion region)
        {
            if (region.Name == "MainTemplate")
            {
                var service = (IDesignerHost) Component.Site.GetService(typeof (IDesignerHost));
                if (service != null)
                {
                    ITemplate template = mTestControl.MainTemplate;
                    if (template != null)
                    {
                        return ControlPersister.PersistTemplate(template, service);
                    }
                }
            }
            return string.Empty;
        }

        public override void SetEditableDesignerRegionContent(
            EditableDesignerRegion region, string content)
        {
            if (region.Name == "MainTemplate")
            {
                if (string.IsNullOrEmpty(content))
                {
                    mTestControl.MainTemplate = null;
                }
                else
                {
                    var service = (IDesignerHost)
                        Component.Site.GetService(typeof (IDesignerHost));
                    if (service != null)
                    {
                        ITemplate template = ControlParser.ParseTemplate(service, content);
                        if (template != null)
                        {
                            mTestControl.MainTemplate = template;
                        }
                    }
                }
            }
        }

        #endregion

        #region Properties

        public override TemplateGroupCollection TemplateGroups
        {
            get
            {
                var collection = new TemplateGroupCollection();

                var group = new TemplateGroup("MainTemplate");
                var template = new TemplateDefinition(this,
                    "MainTemplate", mTestControl, "MainTemplate", true);
                group.AddTemplateDefinition(template);
                collection.Add(group);
                return collection;
            }
        }

        #endregion
    }
}


The above code is a pretty common way of making a templated control. We have a Control (TestControl) with one or more ITemplate properties (MainTemplate) and a ControlDesigner (TestControlDesigner) that decorates the TestControl class via the Designer attribute. The trick for the drag and drop to work with this scenario is in the last two methods.

First look at the GetDesignTimeHtml() override. There I did a little hack: I set on the target control an attribute with the name DesignerRegion.DesignerRegionAttributeName and the value "0". This actually declares the entire control in the Visual Studio designer as the first (and only) "editable region". The region itself is registered in the GetDesignTimeHtml(DesignerRegionCollection regions) override and named MainTemplate.

Secondly, look at the TemplateGroups property override. This is needed for the "Edit Templates" action to appear in design mode. Since drag and drop already works very nicely, technically we don't need it. However, I noticed that if one tries to drag a control out of the templated one, it doesn't really work :), so we still need template editing mode, but only for such unlikely scenarios. Even if the template group is called "MainTemplate", it has no connection with the editing region.

Thirdly, look at the many attributes decorating the TestControl class and the MainTemplate property. An important one is [TemplateInstance(TemplateInstance.Single)], which says that there will be only one instance of the template; in other words, the controls inside can be treated as if they were in a Panel and are accessible from code as protected members.

Finally, let's look at the methods that actually permit the drag and drop operation: GetEditableDesignerRegionContent, which has a region as a parameter, and SetEditableDesignerRegionContent, which also has a region, but also a string content parameter.

The trick lies in the conversion from this unwieldy string variable to a control tree and vice versa. The string is necessary because it is the XML/HTML as interpreted by the Visual Studio designer.

As you can see, in the get method the static class ControlPersister is used to transform the control tree into a string, while in the set method the static ControlParser class is used to do the reverse.

Hope this helps someone.

Of all the anime that I've watched, Cowboy Bebop must be in the top three or five. It was an imaginative mix of space sci-fi and film noir, jazz and fight movies. The director of said anime, Shinichirō Watanabe, also did another anime called Samurai Champloo that I just finished watching. It is an imaginative mix of samurai era and Tarantino movies, hip hop and gangster movies. Even if it is sort of formulaic, this recipe produced some pretty cool results. I thoroughly enjoyed the series and, at the end of the 26th and last episode, I was crying for more.

The story revolves around three characters: a young girl looking for a mysterious samurai that smells of sunflowers, a samurai looking for purpose in life and a low life thug with an unconventional but deadly style of sword fighting. In the end, they meet some very dangerous people, thus ending the whole story arc. I have to say that, even if the fights in the main story arc were better and the emotions stronger, I enjoyed the side episodes a lot more, even if most of the time they did not take themselves seriously... or perhaps because of that very reason.

I completely recommend this to any anime lover; it is nicely animated, has a cool soundtrack and an ingenious story.

I did a service on Linux for a friend of mine, mainly a script that he was supposed to execute. He tried using it, but failed every time. I logged in, tried it and it ran perfectly well. We scratched our heads a little until he noticed some error messages from when he executed the script, saying that a specific command could not be found.

So, this is what happens: he logs in using SSH with credentials that are not root (as it should be) then he executes su (super user) to gain root privileges. He then executes the script and the commands inside the script are not found by the system. I do the same thing, and it works.

It took a while until I realized that he gained super user privileges using just su while I was using su -. Leave it to Linux guys to have a single minus sign as an important command line parameter :) su - starts the complete login shell environment of the root user, which changes the PATH variable and the home directory (to root's), while plain su only switches the user. In short: su gives you root privileges, su - makes you root.


Third book in the Malazan Book of the Fallen series, Memories of Ice is another masterpiece of epic fantasy from Steven Erikson. What Deadhouse Gates had in sheer scope of military strategy, this book has in number of people and characters as well as levels of magic. If in Gardens of the Moon the characters would avoid gathering too much power in a single place, Memories of Ice practically heaps mages and powerful creatures on top of one another.

I enjoyed the book a lot, even though I think the multitude of important characters and the magnitude of their powers were a bit overwhelming. Also, the battles seemed more chaotic, less strategic, considering they came after the impressive story of Coltaine's campaign in Deadhouse Gates. All this was compensated by troop numbers in the hundreds of thousands, major magic users in the tens, an alliance of the T'Lan Imass with Caladan Brood, the Tiste Andii, the Barghast, the Malazans, the Rhivi and so many others against the cannibal army of a Jaghut manipulated by a god, and so many other gems.

I started reading the fourth book in the series and I will be reviewing it as fast as I can.

The thing to do is use .Net 2.0 web parts and override CreateEditorParts or implement IWebEditable on generic controls that are not web parts.

A good link on how to do that (once you know what to look for, duh!) is How to get EditorParts working in your WebParts.
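
As a reminder to myself, here is a bare-bones sketch of the ASP.Net 2.0 route (the web part and its Caption property are made up for the example):

using System.Web.UI.WebControls;
using System.Web.UI.WebControls.WebParts;

public class MyWebPart : WebPart
{
    public string Caption { get; set; }

    // contribute a custom EditorPart to the tool pane
    public override EditorPartCollection CreateEditorParts()
    {
        var parts = new EditorPart[] { new MyEditorPart { ID = "MyEditorPart1" } };
        return new EditorPartCollection(base.CreateEditorParts(), parts);
    }
}

public class MyEditorPart : EditorPart
{
    private readonly TextBox mCaptionBox = new TextBox();

    protected override void CreateChildControls()
    {
        Controls.Add(mCaptionBox);
    }

    // push the value from the editor UI into the web part
    public override bool ApplyChanges()
    {
        EnsureChildControls();
        var part = WebPartToEdit as MyWebPart;
        if (part != null)
        {
            part.Caption = mCaptionBox.Text;
        }
        return true;
    }

    // pull the current value from the web part into the editor UI
    public override void SyncChanges()
    {
        EnsureChildControls();
        var part = WebPartToEdit as MyWebPart;
        if (part != null)
        {
            mCaptionBox.Text = part.Caption;
        }
    }
}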

Another link that summarises different solutions for several issues with web parts is WSS 3.0 webparts development. A quote that is relevant to this blog entry:

Toolpanes can be customized in a number of ways. A web part by itself can declare custom toolparts which show up in the toolpane by overriding the GetToolParts method (WSS WebPart) or CreateEditorParts (ASP.NET WebPart).

A developer can also customize the toolpane generically by supporting the ICustomizeToolpane interface (which allows you to redefine the structure of the toolpane UI) or supporting IAddToGallery (which allows you to add new gallery "tabs").


I am still working on clarifying this, as the documentation on MSDN is almost non existent on the subject. It appears that using ICustomizeToolPane is only supported for Sharepoint web parts and not for ASP.Net web parts. I know of no other way yet of changing the toolpane except by using some javascript from the web part editor (which would suck) and it might not even be possible.

Update: The rest of this post is no longer relevant. I believe it is the way it was done in the old Sharepoint days, when Sharepoint web parts were not based on ASP.Net 2.0.


Ever wanted to leave your office and go see the world? Well, here's your chance!

In previous entries I have described how I got hooked on the The Teaching Company courses and especially on the ones lectured by the mathematician Edward B. Burger. After Introduction to Number Theory and The Joy of Thinking I only had one such math course to watch and that was From Zero to Infinity: A History of Numbers.

At first I thought it was a tame version of the course about number theory, only a bit more historical. It started with how people moved from counting things to an abstract understanding of numbers, then went through the evolution of the concept of number with the advent of zero, negative numbers, rationals, irrational numbers, complex numbers, π, etc. However, at the end, the story split quite a bit and became a course in its own right, so now I am pretty glad I watched it as well.

It did start to bother me that the level of understanding required for these classes is pretty low and as such the lecturer is forced to repeat and over-exemplify things and avoid as much as possible math notation and equations. The model makes no sense to me. If the people watching were uneducated, would they really want to watch the courses? If they did, would they have the money to spare for them if they were stupid? And if they were not stupid, or they were young people interested in the basics of science, wouldn't they be smart enough to handle a slightly raised bar? I mean, it's not TV. People actually have to make an effort to purchase and then watch these courses.

Anyway, Mr. Burger was cool as always, but I had issues with some of the concepts presented in the course and how they were presented. After a plethora of information about the Pythagoreans, natural numbers and π, the lecture about the number e was really basic. No real proofs, no examples of use, it was like it didn't belong in the course at all.

Then there was the thing about 0.(9) being equal to 1. I understood the theory behind it, but it just got me wondering: what about the integer part of 0.(9)? And, if one could use the reasoning behind the idea, then how come S=sum(x^n) with n=0..infinity is not always 1/(1-x), regardless of x? And how come it is considered possible for a real number to have different decimal expansions? Shouldn't there be a theorem about the uniqueness of said decimal expansion for a specific number, just as there is about prime factorization, in order for some of the proofs in the course to make sense? I intend to write an email about it to Burger himself and if (with a godly voice from the sky :)) he answers me, I will be able to complete this entry.

That being said, I thoroughly enjoyed the course, although the one about number theory remains my favourite from this lecturer.

Update: Mr. Burger was kind enough to answer my questions. Here is his reply:
You are correct, there are examples for which the decimal expansion is not unique (and it only happens when we have an infinite tail of 9s). Here are two quick ways of convincing yourself about 0.(9):

1) I bet you feel very comfortable with the identity: 1/3 = 0.(3). Now multiply by 3: 1 = 0.(9)! Fun.

2) Suppose that 0.(9) does NOT equal 1. Then I'm sure you would guess it would be SMALLER than 1. Now recall that if we have two DIFFERENT numbers and we AVERAGE them, then the average will be larger than the smaller number and also smaller than the larger number (the average is in between them). So let's find the average: add: 1 + 0.(9) = 1.(9). Now divide by 2 and we see the average is 0.(9)... but that's one of the numbers we were averaging! Whoops.. therefore the numbers must be equal.