Well, I have been kind of absent from the blog lately and that is for several reasons. One is that I have been waiting for some news that would determine my direction as a professional developer. The other is that I have re-acquired a passion for chess. So, between work at the office, watching chess videos, playing chess with my PDA and watching all seven seasons of Star Trek Deep Space 9, I haven't had much time for blogging.

Also, when you think about it, the last period of my programming life has been spent in some sort of limbo: I switched from ASP.Net to WPF, then to ASP.Net again (while being promised it would be temporary), then back to WPF (but in a mere executive position). Meanwhile, Microsoft didn't do much to help me, and thus saw their profits plummet. Well, maybe it was a coincidence, but what if it wasn't?

I am complaining about Microsoft because I was so sold on the whole WPF/Silverlight concept, while I was getting totally fed up with web work. Yet WPF is slow, with no clear development pathway when using it, while Silverlight is essentially something else, supported by only a few platforms, and I haven't even gotten around to using it yet. And now the Internet Explorer 9/Windows 8 duo comes in force, placing Javascript and HTML5 at the forefront again. Check out this cool ArsTechnica blog post about Microsoft's (re)new(ed) direction.

All of this, plus the mysterious news I have been waiting for and won't detail (don't want to jinx it :-S), which could throw me back into the web world, plus the insanity of mobile everything, which has only one common point: the web. Add to it the not too enthusiastic reaction of my blog readers whenever I started talking about WPF. So either the world wants web, or I have just been spouting one stupid thing after another and blew my readers away.

All these shining signs pointing me towards web development also say that I should be relearning web dev with ASP.Net MVC, getting serious about Javascript, relearning HTML in its 5th incarnation and finally making some sense of CSS. Exciting and crazy at the same time. Am I getting too old for this shit or am I ready for the challenge? We'll just have to see, won't we?

and has 0 comments
This is a case of a bug fix that I made work, but I can't understand why the solution works. Basically, the story was that some internal component of a third party control forced WPF to throw an exception on the UI thread. As it was impossible to plug the hole in the third party library, and since its next version is supposed to solve the issue, I've opted for a somewhat ugly hack: I've handled the DispatcherUnhandledException event of the Application class and basically told it to ignore that specific UI error.

I will get into the details of what the error was, where it came from and how to handle it, but I want to focus on the fact that, since this was a fix for a specific class, I've inherited from that class and used a static method in it to do the above handling of the event. Well, it worked in most cases, but not all. Some code that involved moving the focus of WPF elements programmatically would cause the bug to reappear.

At first I thought it was a matter of a change in the policy of exception handling from .Net 1.0 to 2.0 and above. So I've set the <legacyUnhandledExceptionPolicy enabled="1"/> option in the app.config runtime section, but it didn't help.

I've tried everything, from using the control instance Dispatcher in the constructor or in the Loaded event, to moving the code directly to the point after the application was instantiated and before the application was run. Bingo, it worked! I thought that was it. I've again encapsulated the entire behavior in the inherited control and ... watched it fail.

Let me simplify the situation: static code that doesn't work when encapsulated in a static class works perfectly when the same code is inlined in the calling code. Can you explain that? I cannot!

The code is simple:

application.DispatcherUnhandledException +=
    application_DispatcherUnhandledException;

static void application_DispatcherUnhandledException(
    object sender, DispatcherUnhandledExceptionEventArgs e)
{
    if (e.Exception.Message.Contains("Hover")
        && e.Exception.Message.Contains("System.Windows.Controls.ControlTemplate"))
    {
        e.Handled = true;
    }
}


Move that into a static class and execute it as MyClass.RegisterFix(application); and it doesn't catch all the exceptions thrown, although it works in most cases.
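For reference, the encapsulated version looked roughly like this (the class and method names here are my own, reconstructed for illustration, not the exact originals):

```csharp
using System.Windows;
using System.Windows.Threading;

// Hypothetical static helper encapsulating the fix described above.
public static class HoverExceptionFix
{
    public static void RegisterFix(Application application)
    {
        application.DispatcherUnhandledException += OnDispatcherUnhandledException;
    }

    private static void OnDispatcherUnhandledException(
        object sender, DispatcherUnhandledExceptionEventArgs e)
    {
        // Swallow only the specific third party template exception.
        if (e.Exception.Message.Contains("Hover")
            && e.Exception.Message.Contains("System.Windows.Controls.ControlTemplate"))
        {
            e.Handled = true;
        }
    }
}
```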

Can anyone explain this to me? Why does it matter where the code is?

and has 0 comments
Today is the Rapture, at least according to a doomsayer Christian evangelist in the US (where else?). Anyway, today is my name saint day, so it makes a weird kind of sense, although I would have preferred it to be on my birthday, so I would feel even more special. Something is certain though, if it happens during my lifetime, the Rapture will probably happen on my death day.

But what is this Rapture? According to Wikipedia, God bless her, it seems there is a moment in time when the big guy gets fed up with all the bullshit and just packs up all his believers and goes home. Some fine print at the bottom also mentions throwing away everything else. But theeen, he gets his people back on Earth. Ha! I know that concept! It's a Genetic Algorithm Reaper! It takes the fit to the next generation and gets rid of the unfit, in order to evolve the perfect believer in Christ. So appropriate when you think about it: the Rapture happens when Christianity and Evolution finally converge.

As I was blogging before, RedGate are assholes. They bought Reflector, promised to keep it free, then asked money for it. But every crisis can be turned into an opportunity. JetBrains promised a free decompiler tool and they have kept their word, as they have released an early build. Total news to me, but not really a surprise: other software companies have decided to build their own versions in order to boost their visibility in the developer world. Telerik, for example, has just released JustDecompile, in beta.

It is no secret that JetBrains is a company that I respect a lot, as they made ReSharper, the coolest tool I've ever had the pleasure to work with, but I will try to be as unbiased as possible in comparing the options. I have tried dotPeek on WPF's PresentationFramework.dll from the .NET framework 4.0, as I often need to check the sources in order to understand functionality or bugs.

As a footnote, Reflector, just before it went commercial, could not decompile some of the code there. Not only did it not decompile it, but it presented empty methods as if that was all there was in the code, with no warnings, errors or explanatory comments. So, even if free, I bet Reflector would have sucked in the end, after getting into the money grabbing hands of RedGate.

dotPeek has seen and decompiled the code that Reflector did not. Also, ReSharper-like functionality (finding usages, going to declaration and so on) makes dotPeek a very nice tool to work with. What I did not quite like is that it doesn't yet have an option to save the sources to text files. But I am sure this is just a detail that hasn't been implemented yet. Hopefully, they will provide a rich plugin model like old Reflector did.

Unfortunately, Telerik requires a login in order to download JustDecompile, which, as everyone knows, simply sucks. No one likes a registration form, folks! Especially one that presents you with wonderful prechecked checkboxes giving Telerik permission to send you all kinds of stupid promotions and newsletters. Also, the download is a .msi file. Most developers like to see what they are installing and preferably just copy it from a file archive. Running the .msi took forever, including the mandatory 100% CPU utilization bit that I will never understand in installation products (coming from the .NET runtime optimization service, mscorsvw, called by ngen). But that's just the delivery system. Let's check out the actual thing.

JustDecompile starts reasonably fast and it also has a nice look, being built with Telerik controls and whatnot. The decompilation is a bit weird at first, since it shows only the method names, and for a second there I thought it was as bad as Reflector was, but then I noticed the Expand All Members button. The context menu is not nearly as useful as dotPeek's, but there are a lot of options in the top toolbar and the navigation via links is fast and intuitive. It also has no text saving options yet.

As for the decompiled sources, I noticed these differences:
  • JustDecompile places inline member initializers in constructors, while dotPeek shows them inline. It might not seem important, but an internal class gains a weird public constructor in order to place the initializations there, instead of using the only internal constructor the class had. It also looks strange because its last line is base();, which is not even legal.
  • dotPeek seems to want to cast everything in the source code. For example, List list = (List) null;
  • JustDecompile shows the Dictionary TryGetValue method with a ref parameter, dotPeek shows the correct out.
  • dotPeek creates really simple names for local scope variables, like list and list1; JustDecompile seems to create more meaningful names, like attachedAnnotations.
  • JustDecompile shows a class as internal where dotPeek shows it as internal abstract.
  • JustDecompile seems to fail to decompile indexer access correctly.
  • JustDecompile doesn't seem to handle explicit interface implementations.
  • JustDecompile doesn't seem to decompile readonly fields.
  • JustDecompile transforms a piece of code into an if with a return in it followed by some other code, while dotPeek decompiles it into an if/else.
  • JustDecompile doesn't seem to handle Unicode characters; dotPeek correctly encodes them in source, like "\x001B".
  • dotPeek seems to join nested ifs into a single one, as opposed to JustDecompile.
  • JustDecompile uses base. in order to access members coming from base classes, while dotPeek uses this.


I will stop here. I am sure there are many other differences. My conclusion is that dotPeek could do with the naming algorithm JustDecompile seems to use for local scope variables, but in most other ways it is superior to JustDecompile for now. As both programs are in beta, this could quickly change. I do hope that healthy competition between these two products (and, why not, shady developer meetings in bars over tons of beer and pizza, in order to compare ideas between companies) will result in great products. My only wish is that one of these products would become open source, but as both use proprietary bits from commercial products, I doubt it will happen.

Have fun, devs!

Update 23 Feb 2012:
Spurred by a comment from Telerik, I again tried a (quick and dirty, mind you) comparison of the two .Net decompilation tools: JetBrains dotPeek and Telerik JustDecompile. Here are my impressions:

First of all, the Telerik tool has a really cute installer. I am certainly annoyed with the default Windows one and its weird error codes and inexplicable crashes. Now, that doesn't mean the Telerik installer does better in the error section, since I had no errors, but how could it not? The problem with the installation of JustDecompile is that it also tries to install (option checkboxes checked by default) JustCode and JustTrace. The checkboxes themselves were something really custom, graphically, so I almost left them checked, since they looked like part of the background picture. If it weren't for my brain's spam detector, which went all red lights and alarm bells when seeing a really beautiful installer for a free tool, I might have installed the two applications.

Now for the decompilation itself. I was trying to see what the VisualBasic Strings.FormatNumber method contained. The results:
  • dotPeek showed xml documentation comments, JustDecompile did not
  • dotPeek showed default values for method parameters, JustDecompile did not
  • JustDecompile could decompile the source in C#, VB and IL, dotPeek did only C#
  • JustDecompile showed the source closer to the original source (I can say this because it also shows VB, which is probably the language in which Microsoft.VisualBasic was written), dotPeek shows an equivalent source, but heavily optimized, with things like ternary operators, inversions of if blocks and even removals of else sections if the if block can directly return from the method
  • There are some decorative attributes that dotPeek shows, while JustDecompile does not (like MethodImplAttribute)
  • dotPeek has a tabbed interface that allows the opening of more than a single file, JustDecompile has only a code view window
  • dotPeek shows the code of a class in a window; in order to see a method, it scrolls to where the method is. JustDecompile shows a class as a stub; one needs to click on a method to see the implementation of only that method in the code window


My conclusion remains that dotPeek is a lot more usable than JustDecompile. As a ReSharper user, I can accept that I am biased, but one of the major functions of a .Net decompiler is to show you usable code. While I can take individual methods or properties with JustDecompile and paste them in my code, I can take entire classes with dotPeek, which makes me choose dotPeek for the moment, despite all the other points above. Of course, if either of the two tools gave me a button that would allow me to take a dll and see it as a Visual Studio project, it would quickly rise to the top of my choices.

Update 26 Apr 2013:
I've again compared the two .Net decompilers. JustDecompile 1.404.2 versus dotPeek EAP 1.1.1.511. You might ask why I am comparing with the Early Access Program version. It is because JustDecompile now has the option to export the assembly to a Visual Studio project (yay!), but dotPeek only has this in the EAP version so far.
I have this to report:
Telerik's JustDecompile:
  • the installer is just as cute as before, only it is for a suite called DevCraft, of which one of the products is JustDecompile
  • something that seemed a bit careless is the "trial" keyword appearing in both the download page and the installer; if you install just JustDecompile, it is not a trial
  • again the checkboxes for JustCode and JustTrace are checked by default, but at least they are more visible in the list of products in the suite
  • a "Help Improve the Telerik Installer" privacy policy checkbox, checked by default, has appeared, and it is not that visible
  • the same need to have a Telerik account in order to download JustDecompile
  • when installing JustDecompile, it also installs the Telerik Control Panel, a single place to download and manage Telerik products, which is not obvious from the installer
  • the install takes about two minutes on my computer to a total size of 31MB, including the control panel
  • if a class is in a multipart namespace like Net.Dns, it uses folders named Net.Dns if there is no class in the Net namespace
  • not everything goes smoothly, sometimes the decompiler throws exceptions that are then logged in the code as comments, with the request to mail to JustDecompilePublicFeedback@telerik.com
  • it creates the AssemblyInfo.cs file in a Properties folder, just like when creating a project
  • resolves string concatenation with string.Concat, rather than using the '+' operator as in the original code
  • resolves foreach loops into while(true) loops with breaks when a condition is met
  • uses private static methods in a class with the qualified class name
  • resolves inline variables, leaving the code readable
  • overall it has a nicer decompiled code structure than dotPeek
  • adds explicit default constructors to classes
  • places generic class constraint at the end of constraints list, generating an exception
  • it doesn't catch all reference assemblies, sometimes you have to manually add them to the list
  • decompiles enum values to integer in method optional parameters default values, generating compilation errors
  • decompiles default(T) to null in method optional parameters default values, generating compilation errors
  • decompiles class destructors to Finalize methods which are not valid, generating compilation errors
  • types of parameters in calls to base constructors are sometimes wrong
  • places calls to base/this constructors at the end of constructor code blocks, which of course does not work, when you place more complex code in the calls
  • doesn't understand cast to ValueType (which is somewhat obscure, I agree)
  • really fucks up expression trees, like FluentNHibernate mapping classes, but I hate NHibernate anyway
  • resolves if blocks with return in them to goto/label sometimes
  • resolves readonly fields instantiated from a constructor to a mess that uses a local variable to set the field (which is not valid)
  • doesn't resolve a class name correctly if it conflicts with the name of a local method or field
  • inlines constants (although I don't think they can solve this)
  • switch/case statements on Enum values sometimes gain weird extra case blocks
  • sometimes it uses safe casting with value types (x as bool)

JetBrain's dotPeek:
  • the EAP version has a standalone executable version which doesn't need installation
  • the whole install is really fast and installs around 46MB
  • as I said above, it does not have the Export to Project option until version 1.1
  • the decompilation process is slower than JustDecompile's
  • if a class is in a multipart namespace like Net.Dns, it uses a folder structure like Net/Dns
  • sometimes things don't go well and it marks this with // ISSUE: comments, describing the problem. Note: these are not code exceptions, but issues with the decompiled code
  • it inlines a lot of local variables, making the code more compact and less readable
  • weird casting of items in string concatenations
  • a tendency to strong typed casting, making the code less readable and generating compilation errors at times
  • the AssemblyInfo.cs file is not created in a Properties folder
  • when there are more classes in a single file, it creates a file for each, named as the original file, but prefixed with a number, instead of using the name of the class
  • it has an option to create the solution for the project as well
  • it creates types for anonymous types, and creates files with weird names for them, which are not really valid, screwing the project.
  • it has problems with base constructor calls and constructor inheritance
  • it has problems with out parameters, it makes a complete mess of them
  • tries to create a type for Linq IQueryable results, badly
  • it has problems with class names that are the same as names of namespaces (this is an issue of ReSharper as well, when it doesn't present the option to choose between a class name and a namespace name)
  • resolves while(method) to invalid for loops
  • it doesn't resolve a class name correctly if it conflicts with the name of a local method or field
  • problems with explicit interface implementations: ISomething a=new Something(); a.Method(); (it declares a as Something, not ISomething)
  • problems with decompiling linq method chains
  • I found a situation where it resolved Decimal.op_Increment(d) for 1+d
  • indirectly used assemblies are not added to the reference list
  • it sometimes creates weird local variables like local_0, which are not declared, so not valid
  • adds a weird [assembly: Extension] in the AssemblyInfo file, which is not valid
  • a lot of messed up bool values resolved as (object) (bool) (value ? 1 : 0), which doesn't even work
  • inlines constants (although I don't think they can solve this)
  • __Null local1 = null; - really?

After decompiling, solving the issues and compiling again an assembly in the project I am working on I got these sizes:
JustDecompile: 409088
dotPeek: 395776
The original: 396288

Of course, this is not really a scientific comparison between the two. I was excited by the implementation of Export to Project in both products and I focused mainly on that. The navigation between types and methods is vastly improved in JustDecompile and, to my chagrin, I have to admit that it may be easier and safer to use than dotPeek at this time. Good job, Telerik! Oh, and no, they have NOT paid me to do this research :-)

and has 0 comments
A user has noticed that in Google Chrome changing the hash of the url adds the address in the browser history. So no more cool ASCII eyes watching you from the address bar.

Also, I was greeted today by a warning (also from Google Chrome) that my blog contains content from www.hillarymason.com and that it is unsafe to open. I've removed that blog from the blog roll list, even if, for what it's worth, I don't think it was an "evil blog".

People who know me also know about the Law of Siderite: never use any Microsoft product until it has reached a second service pack or release. That has been true for me, like a skewed Moore's law, since Windows 95 OSR2 (also known as Windows 96). But I had to break my own law and try to install Visual Studio 2010 Service Pack 1.

There are several reasons for this, one of them being that VS2010 had a few relatively important bugs and I am using the tool at work every day and at home when I get the time. So not only did I not wait for a Service Pack 2, I've gone and installed the beta version! And guess what? It installed without any problems and fixed most of the bugs that annoyed me, if not all. Yay, for Microsoft! But then I had to install ASP.Net MVC, which needed as a prerequisite VS2010 SP1 and so I went and installed the Release Candidate. That is, the version that should be like the beta, only better. This is the story of that fateful decision:
  • Step 1: Start ASP.Net MVC3 installer. Result: Fail!
  • Step 2: Start Visual Studio 2010 Service Pack 1 installer, the small 500kb version. Result: fail.
  • Step 3: Download the .iso version and run. Result: fail.
  • Step 4: Reinstall Visual Studio 2010 Ultimate
    a) Uninstall Visual Studio 2010. Result: fail.
    b) Use Microsoft Installer Cleanup Utility on Visual Studio. Install VS2010 (without VB or VC++). Result: Success!
  • Step 5: Install VS2010 SP1
    a) .iso version. Result: fail!
    b) Web Platform Installer version (new fancy Microsoft tech). Result: fail!
  • Step 6: Install Web Developer Express (free) + VS2010 SP1 from Web Platform Installer. Result: SP1 core fail! SP1 Asp.net success!?!


Bottom line: Now ASP.Net MVC seems to be working, with Razor syntax highlighting and intellisense, but the SP1 core is not installed. Visual Studio help window shows me that I have the Rel(ease) version, not SP1. I don't know what will blow up when I try it. Luckily, all this happened at home, not at work, therefore I haven't broken my main working tool.

My conclusion is that somehow you need to install the (useless, if you have Visual Studio) Visual Web Developer Express in order for the ASP.Net SP1 to work. This allows for different web engines, like Razor. SP1 doesn't work though. If you have the beta installed, try to use the Web Platform Installer to install Web Developer Express as well as the Visual Studio SP1. If it fails, look at the last section and see what got installed. Maybe you get the best of both worlds. I will be trying another Visual Studio version, but as the whole process takes ages, be warned.

Let me say that again, so it's perfectly clear: the install process of Visual Studio 2010 Service Pack 1 plus Visual Web Developer Express via the Web Platform Installer tool took me 12 hours! I did not have to press anything, it wasn't a case of trial and error, I just ran it, it said "installing 2 out of 12" for 12 hours in a row, then it partially failed. And the Web Platform Installer seems the best solution so far!

and has 0 comments
You may have noticed problems with my blog (and others hosted by Blogger) during the last week. This was an isolated incident that Google has apologized for and that I feel it necessary to mention and apologize for myself as well. Well, there wasn't much I could do except compete with Blogger, but you know me, I'm a nice guy, I wouldn't want to put them out of business or something. So I will continue to use their services.

and has 2 comments
Today I've read this article in the Romanian press about a dog "shelter" where animal protection organisations found hundreds of dead dogs. They have tried to enter to investigate as did the news and later the police. What I found interesting is the reaction of the people there: they barricaded themselves inside and refused access even to the police. They panicked, yes, but it was more than that.

I believe it was the shock felt by people who do some atrocious thing because they were ordered to or because they didn't know what they were in for. They start doing it and realize almost immediately that they can't possibly want this. But they continue to do it because a) they started already and stopping would be an admission of guilt and fault b) people around them do the same thing c) someone said it was the right thing to do. It all falls apart when other people try to examine what they did, though, as they realize, imagining what others would think, that they cannot possibly get over what they did. As they had projected responsibility on their superiors, now they project anger and rejection on the witnesses of their actions. But they actually hate themselves in those moments.

In the end, no one cares what the reason was. Maybe the official explanation that they were all terminally ill dogs is correct after all. But the emotional trauma felt by the people that did this, looking those animals in the eyes and then killing them while the barks of dogs turn to frightened squeals as they run from corner to corner in sheer terror, from this inability to care for random animals and people that work for you, from this I would make a story worth publishing.

and has 0 comments
It started brilliantly: a Stargate series off-shoot that takes place in a very distant galaxy on a ship that runs itself with a bunch of stranded humans on it. And Robert Carlyle plays the role of the grumpy scientist! The feel of the show moves away from green pastures and ridiculous Goa'uld with their Jaffa and their sticks and goes to a much darker place where human nature and politics define the game, not only blazing heroism and implausible luck.

And then... people wanted their fucking green pastures! The ratings were in the millions, but still not enough for the greedy networks, who decided to cancel the show. To be honest, it is not only the fault of idiotic executives and imbecile TV viewers, but also of the show's writers. But the solid episodes so outnumber the faulty ones that the blame cannot in good faith be laid on the people working on the show. The actors played well, the scripts were mostly interesting and consistent. There were no self-referential or parody episodes at all, and the humour was left to the situation, not to the mandatory smart quip before springing into action.

Of course, the potential of the show was mostly wasted in the many average episodes in which they found a stargate, dialed in, then proceeded with almost the same ideas as in the other Stargate shows, but there was a major difference even then: people had their own agendas, they pondered their role in all of this, not just acted like automatons playing the same part over and over.

So yes, I think this show could have benefited from the slowly rising tide of people who don't watch shows the first time they air, but much later, when they ask their friends "do you know any good sci-fi series?" and someone answers "Have you seen Stargate Universe? It's awesome!". But no. If random morons who wouldn't understand a stick if it didn't hit them with the end they expect don't like the show, it must be cancelled. I've heard people react to Universe with repulsion and even hate. "It is not in the spirit of the Stargate shows that I liked!", they said. Well, I am sorry to tell you, but that spirit is the spirit of Harry Potter, Tom and Jerry and Prince Charming: impossible situations with incredible solutions from people that cannot exist. The ever successful recipe of "heroic people with whom you would identify [for no real reason] battle the odds and succeed every time. And they do it smiling!" is nothing but a fairy tale. You are watching bed time stories. And yes, I want my bed time stories as well, but not the three year old ones!

I dedicate this post to the many people who believed they understood Stargate Universe and the similar shows that got cancelled for no good reason: you are idiots!

and has 0 comments

A bit late to the party, I finally found out that there was a mass escape from a Kandahar prison. Apparently, Taliban forces dug a tunnel from outside the Afghan prison and liberated about 500 of their peers. This is a blow to the local government and their western allies, the news says. I, however, cannot help but root a little for the underdog and think of the classic The Great Escape. In that film, allied prisoners of war were digging a tunnel to escape the Nazis. Will they make a similar film about the Taliban now? I would. It would probably be both funny and tragic, navigating through all the incompetence, corruption, shrewdness and tension. As for the "western allies"... I would be a little proud. "Look, ma! They have grown so much, our children! They are not blowing themselves up, they are organising, planning for months in advance and finally building something. It's just a tunnel now, but I am so proud!".

Bruce Schneier says in his TED talk about security: I tell people "If it's in the news, don't worry about it", 'cause by definition, news is something that almost never happens. It is a great concept, although not completely correct. The switch from one state to another may not happen very often, but you are often worried about whether you are in that state or not. But overall I agree, and I have to say that it is a great news filter idea: just ignore news that is not about a change in state.

and has 0 comments
I will be going in vacation this Easter, so I won't be around until the 2nd of May. I apologize beforehand for the spam comments that I usually delete as they appear. Have a nice relaxing Easter holiday! Cause when you return to work, all hell has broken loose :)

As usual when I stumble into a problem that I can't solve only to see that it has a simple explanation and that I am not sufficiently informed on the matter, I had some misgivings on blogging about it. But these are the best possible blog entries, the "Oh, I am so stupid!" moments, because other people are sure to make the same mistake and you wouldn't wish for them the same humiliation, would you? Well, I wouldn't :-P

So here it is: I had a ToggleButton and a ContextMenu in a custom control. I wanted the IsChecked property bound in two-way mode to the IsOpen property of the ContextMenu. So I did what most people would do: I created a Setter on the IsChecked property with a Binding to DropDown.IsOpen as its value (DropDown is the property of the control holding the ContextMenu). And it worked, of course. My custom control would inherit from ToggleButton and use the style with the Setter in it.

But now comes the tricky part: I wanted that when a certain condition was met, the button be checked and the menu open. So I added a Trigger to the Style on the condition that I wanted, with a Setter on the IsChecked property to True. And from then on, nothing made any sense. I would click the button and it would not open the menu anymore.

Well, if you think about it, you have a Setter in the Style and then another Setter on the same property in the Trigger. It makes a sort of sense for them to interfere with each other, but I also know that setting a Binding as the value of a DependencyProperty is like using SetBinding on the owner of the property. And I thought it was normal for the IsChecked property to be set to true from the trigger and, in turn, change the value of IsOpen. But it didn't happen. Let it be a lesson to other bozos like myself that this thing does make sense logically (or as wishful thinking), but not in WPF. And here is why:

Let's try to evaluate the values of IsChecked and IsOpen. First scenario: clicking on the button. That toggles IsChecked from true to false and vice versa. Internally, WPF finds any bindings associated with the property and refreshes them. In this case, it finds the binding to IsOpen, so the ContextMenu also opens up. Second scenario: the condition in the ViewModel is met, the trigger fires and the value of IsChecked is set. It should do the same thing, right? Find the bindings and refresh them. And it does! But in that fateful moment, it sees that the IsChecked property has a setter associated with it, from the trigger in the style, that sets it to True. It goes no further, because in the dependency property value precedence list a trigger setter overrides a style setter. I personally think this is closer to a bug than to reasonable behavior, but I am biased here :) I mean, set the value of the property directly in code, IsChecked = true;, and it works, but set it in the trigger and it overrides the binding?

There are several solutions to this problem. One would be to replace the setter value in the trigger with another binding, also to the IsOpen property of the ContextMenu, but with a nifty converter. I haven't tried it, because I think that, even if it worked, it would add too much weird complexity, and even if I love weird, this is a little too much for me. Of course, a programmatic solution is also possible: adding handlers for the Opened and Closed events of the ContextMenu to set IsChecked, and setting IsOpen based on the value received in the IsChecked property change callback. But I wanted something as WPFish as possible. Another solution is to set a binding on the IsOpen property, since it is two-way, and this is what I did. Unfortunately, the ContextMenu is a variable of the control, so I had to set the binding manually in that property's PropertyChangedCallback. A more elegant solution, I believe, is to have another element present that has two-way bindings from two of its own properties to both IsChecked and IsOpen. Internally, when one changes, the other is synchronized. This leaves both ToggleButton and ContextMenu free to have any setters on IsChecked and IsOpen without interference.
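The manual binding set up in the PropertyChangedCallback might look like this sketch (the control class name and callback wiring are hypothetical; only the general approach is what I actually did):

```csharp
// DropDown is a DependencyProperty holding the ContextMenu.
// When it changes, bind the button's IsChecked two-way
// to the new menu's IsOpen property.
private static void OnDropDownChanged(DependencyObject d,
                                      DependencyPropertyChangedEventArgs e)
{
    var button = (DropDownButton)d; // hypothetical control class
    var menu = e.NewValue as ContextMenu;
    if (menu == null) return;

    var binding = new Binding("IsOpen")
    {
        Source = menu,
        Mode = BindingMode.TwoWay
    };
    button.SetBinding(ToggleButton.IsCheckedProperty, binding);
}
```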

The pattern of using a separate control to link properties from other controls together is called a Binding Hub. Here is an article detailing it. I disagree with the naming of the properties, as I believe a separate hub should be created for each connection, with properties that actually make sense :), but the concept is powerful and I like it.
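For this particular case, a minimal hub might be sketched as below: an invisible FrameworkElement with two dependency properties that mirror each other. The class and property names are my own invention, and two-way-by-default metadata is an assumption I find convenient here:

```csharp
// A tiny "binding hub": bind the button's IsChecked to Checked
// and the menu's IsOpen to Open, both two-way; the hub keeps the
// two values in sync, leaving both controls free to have their
// own style/trigger setters without interference.
public class CheckedOpenHub : FrameworkElement
{
    public static readonly DependencyProperty CheckedProperty =
        DependencyProperty.Register("Checked", typeof(bool),
            typeof(CheckedOpenHub),
            new FrameworkPropertyMetadata(false,
                FrameworkPropertyMetadataOptions.BindsTwoWayByDefault,
                OnEitherChanged));

    public static readonly DependencyProperty OpenProperty =
        DependencyProperty.Register("Open", typeof(bool),
            typeof(CheckedOpenHub),
            new FrameworkPropertyMetadata(false,
                FrameworkPropertyMetadataOptions.BindsTwoWayByDefault,
                OnEitherChanged));

    public bool Checked
    {
        get { return (bool)GetValue(CheckedProperty); }
        set { SetValue(CheckedProperty, value); }
    }

    public bool Open
    {
        get { return (bool)GetValue(OpenProperty); }
        set { SetValue(OpenProperty, value); }
    }

    // When one property changes, copy its value to the other one.
    private static void OnEitherChanged(DependencyObject d,
        DependencyPropertyChangedEventArgs e)
    {
        var hub = (CheckedOpenHub)d;
        var other = e.Property == CheckedProperty
            ? OpenProperty : CheckedProperty;
        hub.SetValue(other, e.NewValue);
    }
}
```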

Short version: open the registry, look for the file association entry, locate the command subkey and check whether, besides the (Default) value, there is a multi-string value named command that looks garbled. Rename or remove it.

Now for the long version. I've had this problem for a long time: trying to open an Office .doc file by double-clicking it, selecting "Open" from the context menu, or even trying "Open with" and selecting WinWord.exe threw an error that read: This action is only valid for installed products. This was strange, as I had Office 2007 installed and I could open Word just fine and open a document from within it; only the open command had problems.

As I rarely use Office at home, I didn't deem it necessary to solve the problem, but this morning I decided it was a matter of pride to make it work. After all, I have an IT blog and readers look up to me for technical advice. Both of them. So away I went to try to solve the problem.

The above error message is searched for so often that it comes up in Google autocomplete, but the circumstances and possible solutions are so varied that this didn't help much. I did find an article that explained how Office actually opens documents. It said to go into the registry and look in the HKEY_CLASSES_ROOT\Word.Document.12\shell\Open\command subkey. There should be a command line that looks like "C:\Program Files\Microsoft Office\Office12\WINWORD.EXE" /n /dde. The /dde flag is an internal, undocumented flag that tells Windows to use the Dynamic Data Exchange server to communicate the command line arguments to Word, via the next key in the registry: HKEY_CLASSES_ROOT\Word.Document.12\shell\Open\ddeexec, which looks like [REM _DDE_Direct][FileOpen("%1")]. So in other words (pun not intended), WinWord should open with the /n flag, which instructs it to start with no document open, then execute the FileOpen command with the file provided. If I had this as the value of the command registry key, it should work.
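For reference, the healthy state of those two keys, as described above, would look something like this (Office12 paths, as on my machine; adjust for your Office version):

```
[HKEY_CLASSES_ROOT\Word.Document.12\shell\Open\command]
(Default) = "C:\Program Files\Microsoft Office\Office12\WINWORD.EXE" /n /dde

[HKEY_CLASSES_ROOT\Word.Document.12\shell\Open\ddeexec]
(Default) = [REM _DDE_Direct][FileOpen("%1")]
```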

OK, I opened the registry editor (if you don't know what that is or how to use it, my recommendation is to NOT use it; instead, ask a friend who knows what to do. You've been warned!), went to where the going is good and found the command subkey. It held a (Default) value that looked like it should, and then it held another value, also named command, only this one was not a string (REG_SZ) but a multi-string (REG_MULTI_SZ), and its value was something like C84DVn-}f(YR]eAR6.jiWORDFiles>L&rfUmW.cG.e%fI4G}jd /n /dde. Do not worry, there is nothing wrong with your monitor, I control the horizontal, the vertical and the diagonal; it looked just as weird as you see it. At first I thought it was some weird check mechanism, some partial hash or weird encoding method used in that weird REG_MULTI_SZ type, whose meaning I didn't know at the time. Did I mention it was weird? Well, it turns out a multi-string value is a list of strings, not a single-line string, so there was no reason for the weirdness at all. You can tell it expects a list of strings because, when you modify the value, the editor presents you with a multiline textbox, not a single-line one.

So, thank you for reading this far; the solution was: remove all the rogue command values (NOT the command subkeys), leaving (Default) in its normal state. I do not know what garbled the registry, but what happened is that Windows was trying to execute the strange string and, obviously, failed. The obscure error message was basically saying that it couldn't find the file or command you were trying to execute, and it has nothing to do with Office per se.
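I did the cleanup by hand in regedit, but if you prefer the command line, the equivalent would be something along these lines (a sketch, run at your own risk, and note the /v flag targets the value named command, not the command subkey itself):

```
rem Inspect the key first, to see the (Default) value and the rogue one:
reg query "HKCR\Word.Document.12\shell\Open\command"

rem Delete only the value named "command" inside the command subkey:
reg delete "HKCR\Word.Document.12\shell\Open\command" /v command
```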

Of course, you have to repeat the procedure for all the file types that are affected, like RTF, for example.

Who needs time-consuming trips to other countries when you can have it all here, on Siderite's blog, embedded in a blog post? Of course, if you would like the real deal (Hmpf!), don't hesitate to contact me. I can guarantee very good prices and the total trustworthiness of the people there. (Both Silverlight and Photosynth have been somewhat discontinued since the time I wrote this.)

You can('t) access the same Photosynth by clicking on this link: Villa in Kyparissi, Greece on Photosynth.

Now, before you start thinking I've gone into tourism marketing, let me explain the technology: what Photosynth is and how to use it.

Photosynth is a Microsoft Research baby and one of the things they should be terribly proud of, even if not many people have heard of it. I blame this on bad marketing and the stubborn insistence on using Silverlight only. If you read the Wikipedia article, the technology works in two steps. The first step is photo analysis with an algorithm similar to the Scale-invariant feature transform for feature extraction. By analyzing subtle differences in the relationships between the features (angle, distance, etc.), the program identifies the 3D position of each feature, as well as the position and angle at which each photograph was taken. This process is known scientifically as bundle adjustment. You can see it in action if you go to the villa synth and choose to see the point cloud. The second step is, obviously, navigating the data through the Photosynth viewer.

Now, how does one use it? It's surprisingly simple. First, take a bunch of photos that overlap each other. You can use multiple cameras, multiple view angles and different times of day, which of course complicates matters, but the algorithm should still run smoothly. Then download the Photosynth software from their site (make sure you have an account there as well) and feed the photos to it. Wait a while (depending on how many photos there are and on their quality) and you are done. I especially liked the option to find the place of the synth on Bing maps and select the angle of one picture in order for it to determine the real location of the objects in the photos. It will also use geographic information embedded in the pictures, if available.

There are, of course, problems. One of the major ones is that everything is done through the Photosynth site. You cannot save a synth on your HDD and explore it offline. Also, it is not possible to refine the synthing process manually; if your pictures are not good enough, that's it. You will notice, for example, that none of the images rotated 90 degrees were joined to any others, or that there is no correlation between the images of the outside of the house and those of the inside. One cannot remove or block pictures in the synth, either. Being so closely tied to the Silverlight viewer also reduces the visibility of the product to the outside world, even if, let's face it, I have edited the Photosynth by adding highlights and a geographic position, and I have navigated it all in the Chrome browser, not Internet Explorer, so if you refuse to install Silverlight to see it, that's a personal problem.

I hope I have opened your eyes to this very nice and free technology, and if you are interested in a vacation at the place, just leave me a message in the chat or in a comment. If you have read to this point, you also get a 10% discount, courtesy of yours truly :)