To simply quote and link: "Unfortunately, this is where the SharedSizeGroup method breaks down. If you want to have a shared Grid that uses the whole available width and automatically adjusts when that space changes, you're going to need a different method. A column set to * in a shared group acts just like an Auto column and won't fill or stay within the given space." Taken from John Bowen's blog.

I am not much of an art guy, but this thing just blew me away. Not so much the animation itself (it is very original, but... not an art guy) as the sheer volume of effort and work it must have required. Just watch it, it is worth it.


BIG BANG BIG BOOM - the new wall-painted animation by BLU from blu on Vimeo.

Forced to wait for the tenth and final novel of the Malazan Book of the Fallen series, due to be published this year, I've started reading the books set in the same universe, written by Steven Erikson's friend Ian Cameron Esslemont. The first of these is Night of Knives, which is rather short compared with Erikson's novels or, indeed, with the second Esslemont book, Return of the Crimson Guard, which I am reading now.

The book is fast-paced, as it spans a single night on the island of Malaz, during a rare event which weakens the borders between realms. Anything can happen during this night and, indeed, does. The island is assaulted by sea dwellers wielding alien ice magic, the Deadhouse is under siege, and Kellanved and Dancer are making their move towards the throne of the Shadow realm. Meanwhile, Surly is Clawing her way towards the throne, a naturally talented girl with too much attitude is trying to get a job and start an adventure, and an old retired soldier gives his all once again.

All in all, it was a nice book. The writing style is clearly different from Erikson's, with fewer descriptive passages, a little more action and a more positive bias, tending to lend characters more good qualities and to give them slightly better endings. However, it only takes a few pages to get into the Malazan feel of things and enjoy the book.

Ok, so I am doing a lot of Access for a friend, and I ran into a problem that I was sure had a simple solution somewhere. Apparently, it does not. The documentation for the issue is either nonexistent or buggy, and the "helpful" comments out there usually tell you that you are wrong, without offering a workable solution, or direct you to some commercial product. So this is for the people trying to solve the following problem: you have images embedded in an OLE Object field in a table, the images are jpg or whatever format, they appear in the table editor as Package, and you want to display them in an Image control on an Access form via VB, without binding anything. Also, I am using Access 2007.

The first thing you are going to find when googling is that putting images in the database is a bad idea. Whenever you see this, close the page. People will give you their solution, which is to store the URL of the image in the database. We don't want that, for various reasons.

After googling some more, you will find there is no solution involving the Image control, only Bound or Unbound OLE Object Frames. We don't want that either.

The only solution left, since the Image control does not support direct content, only a path to an image, is to read the binary data from the field, store it in a temporary file, then display that. Looking for this, you will get to a Microsoft knowledge base article which does most of the work, but is buggy! You see, the FileData variable they use in the WriteBLOB function is declared as a String, when it should be a byte array.

Also, you want to retrieve the data from the record as binary data, so you use CurrentDb.OpenRecordset("MyQuery") and you get a stupid error like "Run-time error '3061': Too few parameters. Expected 1.". This happens because the query has a form parameter that OpenRecordset cannot resolve. There are some solutions for this, but what I basically did was read the ID of the record into a variable using a normal DLookup, then write a new SQL query inline: CurrentDb.OpenRecordset("SELECT Picture FROM MyTable WHERE ID=" & id).

When you finally make it to saving the binary data in a file, you notice that the file is not what you wanted: it is a little bigger and starts with some mumbo jumbo containing the word Package again. That means that, in order to get the file we want, we need to decode the OLE package format.

And here is where I come in, with the following code:

' Declarations that should go at the beginning of your code file
' ==========================
Const BlockSize = 32768
Const UNIQUE_NAME = &H0

Private Declare Function GetTempPathA Lib "kernel32" _
    (ByVal nBufferLength As Long, _
     ByVal lpBuffer As String) As Long

Private Declare Function GetTempFileNameA Lib "kernel32" _
    (ByVal lpszPath As String, ByVal lpPrefixString As String, _
     ByVal wUnique As Long, ByVal lpTempFileName As String) _
    As Long
' ==========================

' Get a temporary file name
Public Function GetTempFileName() As String

    Dim sTmp As String
    Dim sTmp2 As String

    sTmp2 = GetTempPath
    sTmp = Space(Len(sTmp2) + 256)
    Call GetTempFileNameA(sTmp2, "", UNIQUE_NAME, sTmp)
    GetTempFileName = Left$(sTmp, InStr(sTmp, Chr$(0)) - 1)

End Function

' Get the path of the temporary files folder
Private Function GetTempPath() As String

    Dim sTmp As String
    Dim i As Integer

    i = GetTempPathA(0, "")
    sTmp = Space(i)

    Call GetTempPathA(i, sTmp)
    GetTempPath = AddBackslash(Left$(sTmp, i - 1))

End Function

' Add a trailing backslash if not already there
Private Function AddBackslash(s As String) As String

    If Len(s) > 0 Then
        If Right$(s, 1) <> "\" Then
            AddBackslash = s + "\"
        Else
            AddBackslash = s
        End If
    Else
        AddBackslash = "\"
    End If

End Function

' Write the binary data from a recordset field into a temporary file and return the file name
Function WriteBLOBToFile(T As DAO.Recordset, sField As String)
    Dim NumBlocks As Integer, DestFile As Integer, i As Integer
    Dim FileLength As Long, LeftOver As Long
    Dim FileData() As Byte
    Dim RetVal As Variant

    On Error GoTo Err_WriteBLOB

    ' Get the size of the field.
    FileLength = T(sField).FieldSize()
    If FileLength = 0 Then
        WriteBLOBToFile = Null
        Exit Function
    End If

    ' Read the Package format.
    Dim pos As Integer
    pos = 70 ' go past the 64 byte header, the 4 byte length and the 2 byte version
    Do ' read a string that ends in a 0 byte
        FileData = T(sField).GetChunk(pos, 1)
        pos = pos + 1
    Loop Until FileData(0) = 0
    Do ' read a second string that ends in a 0 byte
        FileData = T(sField).GetChunk(pos, 1)
        pos = pos + 1
    Loop Until FileData(0) = 0
    pos = pos + 8 ' ignore 8 bytes
    Do ' read a third string that ends in a 0 byte
        FileData = T(sField).GetChunk(pos, 1)
        pos = pos + 1
    Loop Until FileData(0) = 0
    ' Get the original file size (a 4 byte little-endian integer).
    FileData = T(sField).GetChunk(pos, 4)
    FileLength = CLng(FileData(3)) * 256 * 256 * 256 + _
                 CLng(FileData(2)) * 256 * 256 + _
                 CLng(FileData(1)) * 256 + CLng(FileData(0))
    ' The original file data starts at the current position.
    pos = pos + 4

    ' Calculate the number of blocks to write and the leftover bytes.
    NumBlocks = FileLength \ BlockSize
    LeftOver = FileLength Mod BlockSize

    ' Get a temporary file name.
    Dim Destination As String
    Destination = GetTempFileName()

    ' Remove any existing destination file.
    DestFile = FreeFile
    Open Destination For Output As DestFile
    Close DestFile

    ' Open the destination file.
    Open Destination For Binary As DestFile

    ' SysCmd is used to manipulate the status bar meter.
    RetVal = SysCmd(acSysCmdInitMeter, "Writing BLOB", FileLength / 1000)

    ' Write the leftover data to the output file.
    FileData = T(sField).GetChunk(pos, LeftOver)
    Put DestFile, , FileData

    ' Update the status bar meter.
    RetVal = SysCmd(acSysCmdUpdateMeter, LeftOver / 1000)

    ' Write the remaining blocks of data to the output file.
    For i = 1 To NumBlocks
        ' Read a chunk and write it to the output file.
        FileData = T(sField).GetChunk(pos + (i - 1) * BlockSize _
            + LeftOver, BlockSize)
        Put DestFile, , FileData

        RetVal = SysCmd(acSysCmdUpdateMeter, _
            ((i - 1) * BlockSize + LeftOver) / 1000)
    Next i

    ' Terminate the function.
    RetVal = SysCmd(acSysCmdRemoveMeter)
    Close DestFile
    WriteBLOBToFile = Destination
    Exit Function

Err_WriteBLOB:
    WriteBLOBToFile = Null
    Exit Function

End Function


The function is used like this:

Dim id As String
id = DLookup("ID", "MyTableQueryWithFormCriteria", "")
Dim rs As DAO.Recordset
Set rs = CurrentDb.OpenRecordset("SELECT Picture FROM MyTable WHERE ID=" & id)
Dim filename As String
filename = Nz(WriteBLOBToFile(rs, "Picture"), "")
imgMyImage.Picture = filename


So, MyTable is a fictional table that contains an ID field and a Picture field of type OLE Object. MyTableQueryWithFormCriteria is a query used inside the form to get the data for the current form. It contains the MyTable table and selects at least the ID field. The WriteBLOBToFile function creates a temporary file, writes the binary data from the OLE Object field into it and returns the file name, so that we can feed it to the Image control.

The trick in the WriteBLOBToFile function is that, at least in my case with Access 2007, the binary data in the field is stored in a "Package". After looking at it I have determined that its format is like this:
  1. A 0x40 (64) byte header
  2. A 4 byte length
  3. A 2 byte (version?) field
  4. A string (characters ended with a 0 byte)
  5. Another string
  6. 8 bytes that I cared not to decode
  7. Another string
  8. The size of the packaged file (the original) as a 4 byte little-endian UInt32
  9. The data in the original file
  10. Some other rubbish that I ignored

The function thus goes to 64+6=70, reads 2 strings, moves 8 bytes, reads another string, then reads the length of the data and saves that much from the current position.
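
For reference, here is the same decoding logic translated to C#, as a sketch under the assumption that blob holds the raw bytes of the OLE Object field and that the format is exactly as reverse-engineered above (BitConverter assumes a little-endian platform, matching the byte order computed in the VBA code):

using System;

// Extract the original file bytes from an Access OLE "Package" blob.
static byte[] ExtractPackagedFile(byte[] blob)
{
    int pos = 64 + 4 + 2;                 // skip the header, length and version fields
    for (int s = 0; s < 2; s++)           // skip the first two zero-terminated strings
        while (blob[pos++] != 0) { }
    pos += 8;                             // skip the 8 undecoded bytes
    while (blob[pos++] != 0) { }          // skip the third zero-terminated string
    int length = BitConverter.ToInt32(blob, pos); // original file size
    pos += 4;
    byte[] file = new byte[length];
    Array.Copy(blob, pos, file, 0, length);
    return file;
}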

The examples in the pages I found said nothing about this, except that you need an OLE server for a specific format in order to read the field, etc., yet all of them suggested saving the binary data as if it were the original file. Maybe in some cases that happens, or maybe it is related to the version of MS Access.

I have been trying to build the setup for a project of mine using WiX, the new Microsoft paradigm for setup packages. So I did what any programmer would do: copy-paste from a previously working setup! :) However, there was a small change I needed to make, as it was a .NET 4.0 project. I built the setup, compiled it, ran the MSI and kaboom!

Here is a piece of the log file:
Action 15:34:48: FetchDatabasesAction. 
Action start 15:34:48: FetchDatabasesAction.
MSI (c) (A0:14) [15:34:48:172]: Invoking remote custom action. DLL: C:\DOCUME~1\siderite\LOCALS~1\Temp\MSI21CF.tmp, Entrypoint: FetchDatabases
MSI (c) (A0:68) [15:34:48:204]: Cloaking enabled.
MSI (c) (A0:68) [15:34:48:219]: Attempting to enable all disabled privileges before calling Install on Server
MSI (c) (A0:68) [15:34:48:251]: Connected to service for CA interface.
Action ended 15:34:48: FetchDatabasesAction. Return value 3.
DEBUG: Error 2896: Executing action FetchDatabasesAction failed.
The installer has encountered an unexpected error installing this package. This may indicate a problem with this package. The error code is 2896. The arguments are: FetchDatabasesAction, ,
Action ended 15:34:48: WelcomeDlg. Return value 3.
MSI (c) (A0:3C) [15:34:48:516]: Doing action: FatalError

In order to get the log of an MSI installation use this syntax:
msiexec /i YourSetup.msi /l*vvv! msiexec.log
vvv is used to specify verbosity, the ! sign is used to specify that the log should be flushed after each line.

As you can notice, the error is a numeric code (2896) and nothing else. Googling it, you get a lot of people having security issues with it on Vista and Windows 7, but I have Windows XP on my computer. The error message description pretty much says what the log does: custom action failed. Adding message boxes and System.Diagnostics.Debugger.Launch(); didn't have any effect at all. It seemed the custom action was not even executed!

After hours of despair, I found what the problem was: a custom action is specified in a DLL which has a config file containing this:

<startup>
    <supportedRuntime version="v2.0.50727"/>
</startup>
This tells the MSI installer which version of the .NET framework to use for the custom action. Not specifying it leads to a kind of version autodetection which takes into account the version of the msiexec tool rather than the custom action DLL, so it is highly recommended not to omit it. The problem I had was that I had changed the target of the custom action to .NET 4.0 and had also changed the config file to:

<startup>
    <!--<supportedRuntime version="v2.0.50727"/>-->
    <supportedRuntime version="v4.0.30319.1"/>
</startup>


Changing the target back to .NET 3.5 and restoring the original config string fixed it. However, I am still unsure what steps to take to make the 4.0 custom action work. I have tried both the 4.0.30319 and 4.0.30319.1 versions (the framework version folder name and the version of the mscorlib.dll file in the .NET 4.0 framework). I have tried v4.0 only and even removed the version altogether, to no avail.

In the end, I opened the WiX 3.5 sources and looked for a config file. I found one that had this:

<startup useLegacyV2RuntimeActivationPolicy="true">
    <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.0"/>
    <supportedRuntime version="v2.0.50727" />
</startup>
As you can see, there is an extended supportedRuntime syntax in the 4.0 case, but that is not really relevant. The thing that makes it all work is useLegacyV2RuntimeActivationPolicy="true"!

So shame on whoever wrote msiexec for not specifying the actual problems that make a setup fail, and a curse on whoever decided to display a numeric code for an error rather than as verbose a description as possible. I hope people will find this post when facing the same problem and not waste three hours or more on a simple and idiotic problem.

If you have a T4 template .tt file that throws a weird "Compiling transformation: Invalid token 'this' in class, struct, or interface member" error that seems to come out of nowhere, try deleting extraneous spaces.

In my case, I had copied/pasted the .tt content from a web page and was trying to understand why it wouldn't work. I right-clicked on the source, went to Advanced, chose Convert all spaces to tabs, then back to Convert all tabs to spaces. Then it worked. I guess some of the white space was not really made of spaces, or there was some other formatting issue.

If you don't have the options when you right click, it might be that they are features of the Tangible T4 Editor.

The word that I think best describes the book is "naive". That's not necessarily a bad thing; people have been hooked by naive stories since forever. Isaac Asimov had some very simplistic plots where everything was going well for the main character. The Harry Potter series was also what I could call naive; it didn't hurt it much. From the same perspective I can say that The Vorkosigan Saga, which now spans about twenty novels and short stories, had its share of success (and three Hugo awards) no matter the writing style, which should anyway have evolved as each book was written.

Back to The Warrior's Apprentice, though. It's about a kid, son of royalty on his backward home planet, who singlehandedly buys a spaceship, runs a blockade, creates a mercenary force, fools everybody into thinking he is older, is smarter than anyone and also foils a plot to destroy his father. And the drama is, as teenagers go, that he doesn't get the girl. Now see why I call it naive?

However, I am sure I would have gobbled the whole series up when I was fifteen, so, even though I have decided not to read the other books in the series, your mileage may vary depending on your tastes. The Warrior's Apprentice is an easy-to-read, easy-to-follow, shortish book. As a travel read, I guess it would be decent.

Update: Thanks to Tim Fischer from Tangible, I got to solve all the problems described in the post below using VolatileAssembly and macros like $(SolutionDir) or $(ProjectDir).

When T4 (Text Template Transformation Toolkit) appeared as a third party toolkit that you could install on Visual Studio 2008, I thought to myself that it is a cool concept, but I didn't get to actually use it. Now it is included in Visual Studio 2010 and I had the opportunity to use it in a project.

The idea is to automatically create code classes and other files directly in Visual Studio, integrated so that the files are generated when saving the template. All in all a wonderful idea... but it doesn't work. Well, I may be exaggerating a bit, but my first experience has been off-putting. I did manage to solve all the problems, though, and that is what this blog post is about.

First of all, there is the issue of intellisense. I am using ReSharper with my Visual Studio, so the expectations for the computer knowing what I am doing are pretty high. In the .tt files (the default extension for T4) you don't get any. The solution is to use the Tangible T4 editor (I think they were going for a fifth T there), which comes as a Visual Studio addon for VS2008 and VS2010. Fortunately, there is a free version. Unfortunately, it doesn't do intellisense on your own libraries unless you buy the paid one. Also, the intellisense is years behind the one provided by ReSharper or even the default Visual Studio one, and the actions one can perform automatically on code in a T4 template are pretty limited.

The second problem was when trying to link to an assembly using a relative path to the .tt file. The Assembly directive supports either the name of an assembly loaded in the GAC or a rooted path. Fortunately, the VS2010 version of the T4 engine supports macros like $(SolutionDir). I don't know if it supports all Visual Studio build macros in the link, but the path ones are certainly there.

Here is how you use it. Instead of


<#@ Assembly Name="Siderite.Contract.dll" #>

use


<#@ Assembly Name="$(SolutionDir)/Assemblies/Siderite.Contract.dll" #>



The third problem was that using an assembly directive locked the assembly used until you either reopened the solution or renamed the assembly file. That proved very limiting when using assemblies that needed compiling in the same solution.

This can be solved by installing the T4 Toolbox and using the VolatileAssembly directive. Actually, at the link above from Oleg Sych you can also find a bit advising the use of the T4 Toolbox VolatileAssembly directive, in the Assembly Locking section.

Here is how you use it. Instead of


<#@ Assembly Name="$(SolutionDir)/Assemblies/Siderite.Contract.dll" #>

use


<#@ VolatileAssembly
    processor="T4Toolbox.VolatileAssemblyProcessor"
    name="$(SolutionDir)/Assemblies/Siderite.Contract.dll" #>

As you can see, you need to specify the processor (the VolatileAssemblyProcessor is installed by the T4 Toolbox) and you can use the macros to turn a relative location into the rooted path the directive requires.

So thanks to the efforts of Oleg and Tim here, we can actually use T4. It would have been terribly awkward to work with the solution in the obsolete section below. The Tangible guys have a learning T4 section on their site as well. I guess that using the resources there would have spared me the day and a half I wasted on this.

The following is obsolete due to the solutions described above, but it is still an informative read and may provide solutions for similar problems.


Tips And Tricks:
Problem: the T4 generated file has some unexplained empty lines before the start of the text.
Solution: Remove any spaces from the end of lines. Amazingly, white space at the end of some lines was translated into empty lines in the generated output file.

Problem: The code is not aligned properly
Solution: Well, it should be obvious, but empty spaces before the T4 tags are translated as empty spaces in the generated output file. In other words, stuff like <# ... should not be preceded by any indenting. It will make the template look a bit funny, but the resulting output will look OK. If you dislike the way the indenting looks in the template, move the indent space inside the tag, where it will be treated as empty space in the T4 code.

I've finally finished the book WPF in Action with Visual Studio 2008 by Arlen Feldman and Maxx Daymon. Simply put, it was a great book. Most programming books focus too much on structure, resulting in very comprehensive information but giving one little in the way of actual work. WPF in Action describes features while using them in a few applications that are built almost entirely out of code printed in the book. I think this is the second book any beginner in WPF should read, the first being one of those boring comprehensive ones :)

The book goes from a brief history of Windows Forms and WPF to Hello World in part one, then describes layouts, styles, triggers, events and animations in the second part. The third part goes on to create a wiki application using commands and binding, data templates, converters, triggers and validation, then custom controls and drawing (including 3D!). I am a big fan of the MVVM pattern, but I liked that in this book, while it got described, it didn't suffocate the other topics, getting only a small subchapter in the binding section. The fourth part explains navigation, XBAP, goes briefly through ClickOnce and Silverlight, then has a large chapter about printing (too large, I believe). The book finishes with transition effects, interoperability with Windows Forms and threading.

All in all I think it was a very nice read. The authors clearly have a lot of experience and are quite qualified to talk not only about the features in WPF, but also the gotchas and some of the problematic implementations or even bugs. The fourth part of the book was a bit of a bore, though. After a pretty heavy 3D drawing ending of part three, I get to read a whole lot about really boring stuff like printing. I am sure that when need arises, though, this is the first book I will open to see what they did.

Bottom line: the first three parts are a must-read. Maybe skip the 3D drawing part at the end of part three. The fourth part is optional. The authors themselves said that they intended to write something that could be used as a reference, and I think they succeeded, so read the table of contents and see which of those optional parts cover the areas of WPF you are really interested in.

The WPF in Action with Visual Studio 2008 link goes to the publisher's site, where you can download the source code and even read some sample chapters.

The final chapter of the Fullmetal Alchemist story has been released today. Have the Elric brothers regained their bodies? Have they sacrificed everything in that Japanese way we so love? Did they get to yet another place filled with Nazis, turn vampire and end up as characters in an Uwe Boll movie? You will have to read the manga to find out! :) The good news is that, following the link above, you can do just that!

As you may know, the anime finished abruptly a while ago, with the two brothers teleporting to our world in the middle of World War II and ending up in a ridiculous story. Luckily, the manga had none of that bullshit and continued on its merry way. Picking up from that, another anime was started, Fullmetal Alchemist: Brotherhood, which was supposed to erase from our memory the shame of the first anime's ending. It is now pretty close to its own ending.

My opinion of the whole story is that it was a pretty imaginative concept, a kind of alchemic steampunk universe filled with wonder, horror and fun stories. I hope you read/watch it with just as much fun as I did.

Update 4th of July: The anime (Fullmetal Alchemist: Brotherhood) has also ended. It covered the exact same things as the manga this time.

I also forgot to mention that the story ends with a few loose ends: Al goes to explore the East and learn Alkahestry, accompanied by the two chimera men who want their original bodies back as well; Ed is going West, trying to learn as much as possible so that he can return, complete his brother's research and help people together with him; Mrs. Bradley is raising the last homunculus, Selim, as her son, trying to infuse him with love and make him a good person. These three threads could lead to a possible continuation of the Alchemist story. At least, I hope they do.

I won't get into explaining in detail what the dynamic keyword does in .NET 4.0; suffice it to say that it is enabled by the DLR (Dynamic Language Runtime) and it allows an object to have its type determined at runtime, not at compile time. I will use this feature to select the appropriate method for a specific type.

I've long had a problem with the object-oriented principle stating that the method overload executed is chosen based on the declared type of its parameters: if the actual type of a parameter is unknown at compile time (say it was boxed into an object variable), the method executed is always the one with the object parameter:

void DoSomething(object o) {
    Console.WriteLine("I'm an object");
}

void DoSomething(int o) {
    Console.WriteLine("I'm an int");
}

void Test() {
    int i = 1;
    object o = i;
    DoSomething(i);
    DoSomething(o);
}

The result of this would be "I'm an int" followed by "I'm an object".

Solutions for this range from a type-casting cascade (the switch statement doesn't work on Type), to a Dictionary<Type, Action>, to using reflection to find the method with the specific signature and execute it.
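
For illustration, here is a minimal sketch of the Dictionary<Type, Action> approach, with hypothetical handler registrations mirroring the DoSomething methods above:

using System;
using System.Collections.Generic;

class DictionaryDispatch
{
    // Map each runtime type to a handler; anything unregistered falls
    // through to the generic object behavior.
    private static readonly Dictionary<Type, Action<object>> Handlers =
        new Dictionary<Type, Action<object>>
        {
            { typeof(int), o => Console.WriteLine("I'm an int") }
        };

    public static void DoSomething(object o)
    {
        Action<object> handler;
        if (Handlers.TryGetValue(o.GetType(), out handler))
            handler(o);
        else
            Console.WriteLine("I'm an object");
    }
}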

A typical OOP solution that is correct, but really cumbersome to use is the double dispatch pattern. You can read an interesting article about it here.
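
For completeness, here is a minimal double dispatch sketch with hypothetical Shape types: each subclass calls the handler back with this, which is statically typed, so the compiler picks the right overload, at the price of touching every class involved:

interface IShapeHandler
{
    void Handle(Circle circle);
    void Handle(Square square);
}

abstract class Shape
{
    public abstract void Accept(IShapeHandler handler);
}

class Circle : Shape
{
    // "this" is statically a Circle here, so Handle(Circle) is chosen at compile time.
    public override void Accept(IShapeHandler handler) { handler.Handle(this); }
}

class Square : Shape
{
    public override void Accept(IShapeHandler handler) { handler.Handle(this); }
}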

The worst thing about this is that o.GetType() is System.Int32. It's like it's mocking us (and not in a unit testing way either).

Here comes dynamic:

void DoSomething(object o) {
    Console.WriteLine("I'm an object");
}

void DoSomething(int o) {
    Console.WriteLine("I'm an int");
}

void Test() {
    int i = 1;
    object o = i;
    dynamic d = o;
    DoSomething(i);
    DoSomething(o);
    DoSomething(d);
}

This will have the same result as before, followed by "I'm an int"! Pay attention to the fact that I did not set the value of d to the int, but to the object! What is even cooler is that one can write only the methods that make sense, without needing a general method that receives a parameter of type object (unless, of course, you want to catch it and throw some meaningful error). No more casting insanity!

I use this technique when I get an object that inherits from a base class, without knowing exactly what it is, and I need to pass it to private methods that have the same name but different parameter types. All I have to do is cast my object to dynamic and pass it to the private method, and the DLR does the rest for me.

I have not yet tested the performance aspect of this but, considering that the DLR is the base for all the dynamic languages in .NET, like IronRuby and IronPython, I bet it is faster than dictionaries with Actions in them.
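
Out of curiosity, a quick and admittedly unscientific way to measure it, reusing the DoSomething methods above:

void TimeDynamicDispatch()
{
    object boxed = 1;
    var watch = System.Diagnostics.Stopwatch.StartNew();
    for (var i = 0; i < 1000000; i++)
    {
        // Resolved by the DLR at runtime; the call site is cached after the first hit.
        DoSomething((dynamic)boxed);
    }
    watch.Stop();
    Console.WriteLine("One million dynamic dispatches: " + watch.ElapsedMilliseconds + " ms");
}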

So, to recap: if you ever want a boxed object to behave differently based on its true type, use myMethod((dynamic)obj) rather than myMethod(obj) and you are set.

Update: I have implemented this pattern in an application I am working on and I am very satisfied with it. I've created a separate assembly that adds the DynamicPatternAttribute and DynamicPatternIgnoreAttribute classes, which help decorate the methods I am using and also a PatternChecker class that can be used to check that there is a method implementation for each type inheriting from a specific base type. Here are some details on how it works.

First of all, we must define the purpose of such a pattern. As I said above, one can use it to specify behavior based on specific types that inherit from a base type. This can be desirable when the types in question cannot be changed to add new behavior. Even if they can be changed, it may not be desired to add a reference from the project containing our classes to the project containing the behavior. It is almost like defining static extension methods.

Then there are the elements:
  • The base class from which all classes that determine behavior inherit (object if nothing specific)
  • The routing method that receives at least a base class parameter
  • The behavior methods that have as a parameter the subclasses of the base type
  • The pattern implementation checker class


So far I've used it in the following way:
  • The new behavior is encapsulated in methods in a static class
  • The routing method receives as the first parameter the base type for the classes that should determine behavior
  • The only code in the routing method is calling the private behavior method(s) using the same parameters as itself, except the first which is cast to dynamic
  • The behavior methods usually are private methods having the same name as the routing methods, but ending in "handle"
  • The routing method is decorated with [DynamicPattern("nameOfTheBehaviorMethods")]
  • The routing method is decorated with [DynamicPatternIgnore(typeof(subClassToIgnore))] which tells the checker which classes do not need behavior implementations
  • The static class containing the pattern has a static constructor that calls a method to check the implementation of the pattern
  • The checker method is decorated with [Conditional("DEBUG")] so that it doesn't hinder the functionality of the program with slow reflection checks
  • The checker method calls PatternChecker.CheckImplementation(typeof(staticClass)) or PatternChecker.CheckImplementation(typeof(class).Assembly)


The PatternChecker class only checks that there is a method with the name specified in the DynamicPattern constructor for each of the subclasses of the base type of the first parameter in the routing method.
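
Since the PatternChecker class itself is not shown here, this is a minimal sketch of what such a check might look like (an assumed implementation, not the actual class; the attributes are reduced to their essentials and only the Type overload is sketched):

using System;
using System.Linq;
using System.Reflection;

[AttributeUsage(AttributeTargets.Method)]
public class DynamicPatternAttribute : Attribute
{
    public string MethodName { get; private set; }
    public DynamicPatternAttribute(string methodName) { MethodName = methodName; }
}

[AttributeUsage(AttributeTargets.Method, AllowMultiple = true)]
public class DynamicPatternIgnoreAttribute : Attribute
{
    public Type IgnoredType { get; private set; }
    public DynamicPatternIgnoreAttribute(Type ignoredType) { IgnoredType = ignoredType; }
}

public static class PatternChecker
{
    // For every routing method marked with [DynamicPattern], verify that each
    // concrete subclass of the first parameter's type (except the ignored ones)
    // has a behavior method with the configured name and that exact parameter type.
    public static void CheckImplementation(Type containerType)
    {
        const BindingFlags flags = BindingFlags.Public | BindingFlags.NonPublic | BindingFlags.Static;
        foreach (MethodInfo routing in containerType.GetMethods(flags))
        {
            var pattern = routing.GetCustomAttributes(typeof(DynamicPatternAttribute), false)
                                 .Cast<DynamicPatternAttribute>().FirstOrDefault();
            if (pattern == null || routing.GetParameters().Length == 0) continue;

            Type baseType = routing.GetParameters()[0].ParameterType;
            var ignored = routing.GetCustomAttributes(typeof(DynamicPatternIgnoreAttribute), false)
                                 .Cast<DynamicPatternIgnoreAttribute>()
                                 .Select(a => a.IgnoredType).ToList();

            var subTypes = baseType.Assembly.GetTypes()
                .Where(t => baseType.IsAssignableFrom(t) && !t.IsAbstract && !ignored.Contains(t));

            foreach (Type subType in subTypes)
            {
                bool hasHandler = containerType.GetMethods(flags).Any(m =>
                    m.Name == pattern.MethodName &&
                    m.GetParameters().Length > 0 &&
                    m.GetParameters()[0].ParameterType == subType);
                if (!hasHandler)
                    throw new InvalidOperationException(
                        "No " + pattern.MethodName + " implementation for " + subType);
            }
        }
    }
}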

I hope you like this pattern. I certainly do. I leave you with an actual implementation example:

public static class RequestHandler
{
    static RequestHandler()
    {
        checkDynamicPattern();
    }

    [Conditional("DEBUG")]
    private static void checkDynamicPattern()
    {
        PatternChecker.CheckImplementation(typeof(RequestHandler));
    }

    [DynamicPattern("getObjectsHandle")]
    [DynamicPatternIgnore(typeof(BaseDeleteRequest))]
    [DynamicPatternIgnore(typeof(DeleteUsersRequest))]
    [DynamicPatternIgnore(typeof(DeleteCategoriesRequest))]
    [DynamicPatternIgnore(typeof(DeleteDataRequest))]
    [DynamicPatternIgnore(typeof(DeleteApplicationsRequest))]
    [DynamicPatternIgnore(typeof(GetEntityRequest))]
    [DynamicPatternIgnore(typeof(BaseObjectActionRequest))]
    [DynamicPatternIgnore(typeof(GetEntitiesRequest))]
    [DynamicPatternIgnore(typeof(GetEntityByIdRequest))]
    [DynamicPatternIgnore(typeof(BatchOperationRequest))]
    public static BaseEntitiesResponse GetObjects(BaseRequest request, Coordinator coordinator)
    {
        // The routing method: the cast to dynamic picks the right overload at runtime.
        return getObjectsHandle((dynamic)request, coordinator);
    }

    private static BaseEntitiesResponse getObjectsHandle(BaseRequest request, Coordinator coordinator)
    {
        // Fallback for subtypes without a dedicated handler.
        throw new ArgumentException("Cannot find a suitable getObjects method for type of request " +
            request.GetType());
    }

    private static BaseEntitiesResponse getObjectsHandle(GetApplicationRequest request,
        Coordinator coordinator)
    {
        DataObject entity = coordinator.ApplicationManager.GetObject(request.Id,
            request.IncludeOptions);
        return getEntitiesResponse(entity);
    }
}


Update: There is a wonderful unintended side effect of the dynamic pattern when casting to generic types. Imagine you have a generic interface like ICustom<T> and you want to use the standard model of checking the type and selecting behaviour. You can't do it with as! There is no valid way of writing
var custom = obj as ICustom<T>;
so you are forced to use GetType() and then some weird methods that interrogate the Type object. You can do it with the dynamic pattern, though.



checkIfCustom(obj);

private void checkIfCustom(object obj) {
    dynamicCheckIfCustom((dynamic)obj);
}

private void dynamicCheckIfCustom(object obj) {
    // do nothing: fallback for anything that is not an ICustom<T>
}

private void dynamicCheckIfCustom<T>(ICustom<T> iCustom) {
    doSomethingWith(iCustom);
}


This thing works! If anything other than an ICustom<T> is given, nothing happens. If it is the correct type, then doSomethingWith is executed with it. Pretty neat, huh?

This will be a short post to describe my own stupidity. I was testing the new Entity Framework Plain Old CLR Objects (POCO) support and so I made a small test to:
  • Clear the database
  • Insert new items in the database
  • Select the items from the database, with and without related items

Every time, I got the entire object tree, with child collections and parent objects, making me think that in this implementation of EF the need to use Include was gone and that, instead, an Exclude method was needed in order to tell the framework NOT to load related objects! Insane!
After looking everywhere for the answer, I finally turned to profiling the SQL, only to see that the database was never queried for the related items, only for what I had asked. Then I had my "I'm an idiot!" moment. I was using the same context, and EF knew the entire hierarchy of objects because (duh!) I had just inserted it a few code lines above. Using different contexts solved the "problem" and only returned the requested objects, making Include a necessity again.
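
To illustrate, here is a minimal sketch of the mistake and the fix, assuming hypothetical Blog/Post POCO entities in a BlogContext, with lazy loading turned off:

using System;
using System.Linq;

// Same context that performed the insert: the related posts are already
// tracked in memory, so they LOOK loaded even though no JOIN was sent.
using (var context = new BlogContext())
{
    // Posts is assumed to be initialized in the Blog constructor.
    var blog = new Blog { Name = "Test" };
    blog.Posts.Add(new Post { Title = "First post" });
    context.Blogs.AddObject(blog);
    context.SaveChanges();

    var same = context.Blogs.First(b => b.Name == "Test");
    Console.WriteLine(same.Posts.Count); // 1, but no JOIN was issued
}

// Fresh context: related items are only loaded when explicitly requested.
using (var context = new BlogContext())
{
    var plain = context.Blogs.First(b => b.Name == "Test");
    Console.WriteLine(plain.Posts.Count); // 0

    var withPosts = context.Blogs.Include("Posts").First(b => b.Name == "Test");
    Console.WriteLine(withPosts.Posts.Count); // 1
}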

Well, there are a lot of good reasons why that could happen because of your own bad code, but this time it is a plain ugly Microsoft bug. You see, the code in the RaisePostBackEvent method of the TreeView control first checks whether the control has an Adapter; if not, it just does its thing. If there is an Adapter, it tries to cast it to an IPostBackEventHandler and then fires RaisePostBackEvent on it. However, if the TreeView control has an adapter that does not implement IPostBackEventHandler, nothing happens!

Here is the offending code:

protected virtual void RaisePostBackEvent(string eventArgument)
{
    base.ValidateEvent(this.UniqueID, eventArgument);
    if (base.IsEnabled)
    {
        if (base._adapter != null)
        {
            IPostBackEventHandler handler =
                base._adapter as IPostBackEventHandler;
            if (handler != null)
            {
                handler.RaisePostBackEvent(eventArgument);
            }
        }
        else ...


Bottom line, you need to either not use an adapter for the TreeView, or use one that knows how to handle the postback. And given the complexity of the code in the method, it is better to just not use an adapter.

The solution I have adopted is to recreate the functionality in an override of the RaisePostBackEvent method and add some more events (like TreeNodeClicked and SelectedNodeClicked). Hint: you also need to hook into LoadPostData and remember which nodes are selected, in order to check whether the selected node has changed.
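
As a starting point for the "no adapter" route, one option (my sketch, not a full reimplementation) is a derived TreeView that refuses to resolve its adapter, so the default RaisePostBackEvent logic always runs:

using System.Web.UI.Adapters;
using System.Web.UI.WebControls;

// Never use a control adapter, even if one is registered for TreeView
// in a .browser file, so the default postback handling is preserved.
public class AdapterlessTreeView : TreeView
{
    protected override ControlAdapter ResolveAdapter()
    {
        return null;
    }
}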

Wow, long title. The problem, however, is simple: when adding an inline JavaScript block that changes the window.location or window.location.href properties, Firefox and Chrome do not retain the original URL of the page in the browser history, so the Back button doesn't work correctly.

Going to the Mozilla developer page, I find that both redirect methods are equivalent to location.assign(url), which implicitly adds the URL to the browser history chain, as opposed to location.replace(url), which doesn't affect the history and just replaces the current URL. So I get to use one method and get the behaviour of the other!

Long story short, the behaviour was not reproduced if the same script was run from a button click event. That means it is another of those annoying Gecko page-load-completed issues.

The solution? Instead of location=url; use setTimeout(function() { location=url; },1);. I know, really ugly and stupid. If you find a better solution to cause a redirect from javascript, please let me know.

I am only linking to this blog post that shows how to instantiate the converters directly in the binding, without having to define a resource just for that.

WPF Quick Tip: Converters as MarkupExtensions

Update: After careful deliberation I've reached the conclusion that, instead of custom converters that would have to be instantiated in the binding XAML, I can just create new binding types. Here is how you can do it, with a sketch after the list:
  1. Inherit from Binding
  2. Implement IValueConverter
  3. Set Converter=this in the constructor
  4. Use the new binding where you see fit
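
As an illustration, here is a minimal sketch of such a self-converting binding; UpperCaseBinding is a hypothetical example that upper-cases any bound string:

using System;
using System.Globalization;
using System.Windows.Data;

// A Binding that is its own IValueConverter: no separate resource needed.
public class UpperCaseBinding : Binding, IValueConverter
{
    public UpperCaseBinding() { Converter = this; }
    public UpperCaseBinding(string path) : base(path) { Converter = this; }

    public object Convert(object value, Type targetType, object parameter, CultureInfo culture)
    {
        var text = value as string;
        return text == null ? value : text.ToUpper(culture);
    }

    public object ConvertBack(object value, Type targetType, object parameter, CultureInfo culture)
    {
        return value; // one-way in this sketch
    }
}

It can then be used directly in XAML as {local:UpperCaseBinding Name}, assuming local is mapped to the namespace containing the class.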


Actually, I have created a more complex Binding object that chooses the type of conversion based on an enumeration or, alternatively, takes converters as content and pipes them one after another for a more dynamic reuse of converter power. I am still not sure which of these two solutions I will use more often, though.