However, the footer of the calendar kept showing "Today" instead of "Oggi". I checked the source and noticed that it used an AjaxControlToolkit.Resources.Calendar_Today JavaScript variable to create the footer. Step by step I narrowed it down to the AjaxControlToolkit resources! The funny thing is that they didn't work. I tried just about everything, including Googling, only to find people with the opposite problem: they had all these resource dlls created in their Bin directory and couldn't get rid of them. I had none! I copied the directories from the ACTK and nothing happened.
After some research I realized that the Ajax Control Toolkit does NOT include the localized resources UNLESS it is built in Release mode. I was building everything in the solution, including the ACTK, in Debug mode. Changing the AjaxControlToolkit build configuration to Release in the solution made it work.
A few days ago a coworker asked me about implementing an autocomplete textbox with not only text items, but also images. I thought, how hard can it be? I am sure the guys that made the AutoCompleteExtender in the AjaxControlToolkit thought about it. Yeah, right!
So, I needed to tap into the list-showing mechanism of the AutoCompleteExtender, then (maybe) into the item-selected mechanism. The AutoCompleteExtender exposes the OnClientShowing and OnClientItemSelected properties. Each expects the name of a function that accepts a behaviour parameter and an args parameter.
Ok, the extender creates an HTML element to contain the completion list items, or gets one from the CompletionListElementID property (which is obsolete anyway). It creates an LI element for each item (or a DIV, if CompletionListElementID is set). So all I had to do was iterate through the childNodes of the container element and change their content.
Then, on item selection, the AutoCompleteExtender unfortunately tries to take the text value from firstChild.nodeValue, which pretty much fails if the first child of the item element is not a text node. So we tap into OnClientItemSelected, whose args object contains the item, the text extracted as described above (useless to us), and the object that was passed from the web service that provided the completion list. The last one is what we need, but keep reading.
So the display is easy (after you get the hang of the Microsoft patterns). But now you have to return a list of objects, not mere strings, in order to get all the information we need, like the text and the image URL. Here is the piece of code that interprets the values received from the web service:
// Get the text/value for the item
try {
    var pair = Sys.Serialization.JavaScriptSerializer.deserialize('(' + completionItems[i] + ')');
    if (pair && pair.First) {
        // Use the text and value pair returned from the web service
        text = pair.First;
        value = pair.Second;
    } else {
        // If the web service only returned a regular string, use it for
        // both the text and the value
        text = pair;
        value = pair;
    }
} catch (ex) {
    text = completionItems[i];
    value = completionItems[i];
}
In other words, it first tries to deserialize the string it received, then checks whether the result is a Pair object (whether it has a First property); otherwise it passes the deserialized object as both value and text! If deserialization fails, the original string is used as is. Bingo! So on the server side we need to serialize the array of strings we want to send to the client, and we do that using System.Web.Script.Serialization.JavaScriptSerializer. You will see how it fits into the code.
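To make that concrete, here is a minimal sketch of what the server side could look like. The service name, the Person type and FindPeople are my own illustrative stand-ins, not the original code; the only essential part is that each returned item is itself a JavaScriptSerializer-serialized array of strings.

using System;
using System.Collections.Generic;
using System.Web.Script.Serialization;
using System.Web.Script.Services;
using System.Web.Services;

[WebService(Namespace = "http://tempuri.org/")]
[ScriptService]
public class ImageAutoCompleteService : WebService
{
    [WebMethod]
    public string[] GetCompletionList(string prefixText, int count)
    {
        var serializer = new JavaScriptSerializer();
        var items = new List<string>();
        foreach (Person p in FindPeople(prefixText, count))
        {
            // Each completion item is itself a serialized array of strings:
            // index 0 is the text that ends up in the textbox (${0}),
            // index 1 could be the image URL (${1}), and so on.
            items.Add(serializer.Serialize(new[] { p.Name, p.ImageUrl }));
        }
        return items.ToArray();
    }

    private IEnumerable<Person> FindPeople(string prefixText, int count)
    {
        // Illustrative stand-in for the real data access.
        yield break;
    }
}

public class Person
{
    public string Name { get; set; }
    public string ImageUrl { get; set; }
}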
So far we have displayed what we wanted and sent what we wanted; all that is left is to define how the completion items should appear. For that I could have used a simple string property, but I wanted all the goodness of IntelliSense in Visual Studio and all the objects I want, without having to render them manually into strings.
So, the final version of the AutoCompleteExtender with images is this: a class that inherits AutoCompleteExtender and also implements INamingContainer. It has an ItemTemplate property of type ITemplate which holds the template we want for each item. You also need a web service that uses the JavaScriptSerializer to construct the strings it returns.
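A bare-bones sketch of such a class (the class name and the attribute on the property are my assumptions, and the client-side wiring that injects the template into the completion list is left out):

using System.Web.UI;
using AjaxControlToolkit;

// Sketch only: a template-aware AutoCompleteExtender as described above.
public class ImageAutoCompleteExtender : AutoCompleteExtender, INamingContainer
{
    // The per-item template; its controls should render ${0}, ${1}, ... placeholders
    // that are replaced client-side with the values sent by the web service.
    [PersistenceMode(PersistenceMode.InnerProperty)]
    public ITemplate ItemTemplate { get; set; }
}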
Here is the complete code:
That's it! All you have to do is make sure the controls in the template render ${N} placeholders that get replaced with the first, second, ..., Nth item in the list sent by the web service. The text that ends up in the textbox is always the first item in the list (${0}).
Restrictions: if you want to use this as a control in a library and THEN add some more functionality on the Showing and ItemSelected events, you need to take into account that those are not real events, but JavaScript functions, and the design of the AutoCompleteExtender only accepts one function name. You could create your own function that also calls the one described here, but that's beside the point of this blog entry.
Usually when I blog about something, I write up the problem and the solution I have found. In this case, given also the lack of pages describing the same problem, I have decided to blog about the problem only. If you guys find the solution, please let me know; I will post it here as soon as I find it myself. So here it is:
We started creating some tests for one of our web applications. My colleague created the tests, amongst them one that does a simple file upload. She used the following code:
var fu = ie.FileUpload(Find.ByName("ctl00$ContentPlaceHolder1$tcContent$tpAddItem$uplGalleryItem$fuGalleryItem"));
fu.Set(UploadImageFile);
and it worked perfectly. She was using WatiN 1.2.4 and MBUnit 2.4.
I had WatiN 2.0 and MBUnit 3.0 installed. I downloaded the tests, removed the ApartmentState thing that no longer seems necessary in MBUnit 3.0, and ran them. On my computer the FileUpload Set method opens a file upload dialog and stops. I've tried a lot of code variants, to no avail; I've uninstalled both MBUnit and WatiN and installed the 1.2.4 and 2.4 versions. I actually tried all possible combinations, using .NET 1.1 and 2.0 libraries and changing the code. Nothing helped. On my computer, setting the file name simply doesn't work.
I've examined the WatiN source and noticed that it uses a FileUploadDialogHandler that determines whether a window is a file upload dialog by checking a Style property. I have no idea if that is the correct approach, but just to be sure I inherited my own class from FileUploadDialogHandler and instructed it to throw an exception with a message containing the style of the first window it handles. The exception never fired, so I am inclined to believe that the dialog handler mechanism somehow fails on my computer!
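For the record, such a diagnostic handler would look roughly like the sketch below. I am reconstructing it from memory, so treat the exact WatiN members (HandleDialog, Window.StyleInHex, AddDialogHandler) as assumptions based on the 1.2.x source rather than gospel.

using System;
using WatiN.Core;
using WatiN.Core.DialogHandlers;

// Diagnostic sketch: shout out the style of the first window the handler is asked about.
public class StyleReportingFileUploadDialogHandler : FileUploadDialogHandler
{
    public StyleReportingFileUploadDialogHandler(string fileName) : base(fileName) { }

    public override bool HandleDialog(Window window)
    {
        // If the dialog watcher ever reaches us, this exception tells us the window style.
        throw new ApplicationException("Window style: " + window.StyleInHex);
    }
}

// Hooking it up (hypothetical usage, reusing the names from the test above):
// ie.AddDialogHandler(new StyleReportingFileUploadDialogHandler(UploadImageFile));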
I have no idea what to do. I have Windows XP SP3 with the latest updates and I am running these tests in Visual Studio 2008 Professional.
Update: The only possible explanation left to me is that Internet Explorer 8 is the culprit, since my colleagues all have IE7. The maker of WatiN himself declared that identifying the windows by style is not the most elegant method possible, but he had no other way of doing it. My suspicion is that the window handling doesn't work at all in IE8, but I have no proof for it and so far I have found no solution for this problem.
I've stumbled upon a little VS2008 addon that I think could prove very useful. It's called Clone Detective. Here is how you use it:
Make sure VS2008 is closed
Download and install the setup file (additionally, the source is freely available!)
Open VS2008 and load a solution up
Go to View -> Other Windows -> Clone Explorer
Click the Run Clone Detective button
Now you should be able to see the percentage of cloned code in each file, as well as the cloned sections marked as vertical lines on the right-hand border, next to the code.
Also, you might get a Sys.WebForms.PageRequestManagerServerErrorException with code 500 when using Ajax. It usually happens when you click on a button in a GridView or another bound control. You expect it to work, but it doesn't, even if the code is relatively clear.
The answer is (probably) that you are binding the container bound control every time you load the page, instead of only on !IsPostBack. The thing is, this used to work in ASP.Net 2.0.
Bottom line: check your binding and make sure you are not doing any DataBind between the button click and the eventual handling of its event.
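For reference, a minimal Page_Load following the pattern described above could look like this (grdItems and LoadItems are illustrative names):

protected void Page_Load(object sender, EventArgs e)
{
    // Bind only on the initial GET; on postbacks the grid is rebuilt from ViewState,
    // so the clicked row and its event are still there to be processed.
    if (!IsPostBack)
    {
        grdItems.DataSource = LoadItems();
        grdItems.DataBind();
    }
}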
We started our first ASP.Net 3.5 project and today I had to work with the Linq database access. Heralded by many as a complete revolution in the way of doing ORM, LInQ is not that simple to move to. A lot of stuff that has become second nature for me as a programmer now must be thrown in the garbage bin.
I'll skip the "how to do LInQ" (there are far too many tutorials on the net already) and get down to the problem. Let me give you a simple example. You want to select a User from the Users table. Let's see how I would have done it until now:
User u=User.SelectById(idUser);
or maybe
User u=new User(); u.IdUser=idUser; u.SelectOne();
In LInQ you do it like this:
var c = new MyDataContext(); User user = (from u in c.Users where u.IdUser == idUser select u).FirstOrDefault();
or maybe
var c = new MyDataContext(); User user = c.Users.Where(u => u.IdUser == idUser).FirstOrDefault();
I am typing this by hand, forgive me the occasional typos or syntax errors.
Well, my eyes scream for an encapsulation of this into the User class. We'll do that by creating a new User.cs file that contains a User partial class that simply molds onto the LInQ-generated one, then adding methods like SelectById.
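A minimal sketch of that partial class, assuming the generated context is called MyDataContext:

using System.Linq;

public partial class User
{
    public static User SelectById(int idUser)
    {
        // The context is created here and goes out of scope when the method returns,
        // which is exactly the problem discussed below when it comes to saving changes.
        var context = new MyDataContext();
        return context.Users.FirstOrDefault(u => u.IdUser == idUser);
    }
}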
Ok, I have the bloody user. I want to change something, let's say rename him or change the password, then save the changes to the database. Here it gets tricky.
User u=User.SelectById(idUser); u.Password="New Password"; ....?!
In LInQ you need to call DataContext.SubmitChanges(), but since I encapsulated the select functionality in the User class, I have no reference to the DataContext. Now here are a few solutions for doing this without putting the LInQ queries in the interface layer:
Add the methods to the context class itself, stuff like SelectUserById(). You would still need to instantiate the DataContext for every query, though.
Add a second out or ref parameter to the methods so that you can get a reference to the DataContext used (see the sketch after this list).
Make a method for each operation, like User.SetPasswordById(), but that would quickly become quite cumbersome.
Add a reference to the DataContext in the User object, but that would become troublesome for operations like SelectAll.
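Here is a rough sketch of the second option (the out parameter), with illustrative names; the caller gets the DataContext back so it can call SubmitChanges later:

using System.Linq;

public partial class User
{
    public static User SelectById(int idUser, out MyDataContext context)
    {
        context = new MyDataContext();
        return context.Users.FirstOrDefault(u => u.IdUser == idUser);
    }
}

// Usage:
// MyDataContext context;
// User u = User.SelectById(idUser, out context);
// u.Password = "New Password";
// context.SubmitChanges();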
I am still not satisfied with any of these solutions. I am open to suggestions. I will link to this little article I found that suggests encapsulating the LInQ queries in separate extension methods: Implementing ORM-independent Linq queries
Anyway, the problem is with controls in SilverLight that expect a URI as a parameter. It doesn't work. After trying all kinds of stuff and googling a little, I found out that:
the path is relative to the web application ClientBin directory
the path cannot contain .. or other directory/URI navigation markers like ~
the SilverLight control does not have access to a Request object like a web page does
Provide an absolute Uri either statically or by generating it with this piece of code:
new Uri(Application.Current.Host.Source.AbsoluteUri + "../../../video.wmv", UriKind.Absolute)
Copy the images, videos, files into the ClientBin directory then only provide their name
Create virtual directories inside ClientBin that point to your resource directories. For example create a virtual directory Videos that points to the ~/Resources/Videos folder in the site, then simply use Videos/video.wmv as the URI
Of these three, I find the last one the most elegant, even if it might make the setup of the website itself quite a bit more difficult.
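With the virtual directory in place the URI becomes a plain relative path. For example, assuming a MediaElement named myVideo:

// Resolved against ClientBin, so it ends up pointing at the Videos virtual directory.
myVideo.Source = new Uri("Videos/video.wmv", UriKind.Relative);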
ASP.Net 2.0 added a very useful thing, the '~' sign, which indicates that a path is relative to the application directory. Since the application itself should be indifferent to its name, this little thing comes in handy when setting the location of user controls, style sheets and so on. The problem is that this feature only applies to server-side controls. That is not a problem for most tags, since even a link tag can be turned into a server control by adding a runat=server attribute.
The problem comes when trying to set the location of JavaScript script blocks. A script block with runat=server must be a server code block by definition, i.e. C# or VB, whatever the site language is set to. One solution often used is to put a code block inside the src attribute of the script tag, like this:
But this is still a nasty issue, because of the dreaded "The Controls collection cannot be modified because the control contains code blocks (i.e. <% ... %>)" error. As in the post I linked to, the solution is to place all external script blocks within a server control like a PlaceHolder or even a div with the runat=server attribute set. The drawback to this solution is that you can't do that in the <head> section of the HTML output.
The other solution is to use the ScriptManager.RegisterClientScriptInclude method from within the server code to add the script programmatically to the page. While this is elegant, it also defeats the purpose of the aspx page, which is to separate content from code.
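For completeness, the programmatic variant is a one-liner in the code-behind; the script path is illustrative and a ScriptManager is assumed to be on the page:

protected void Page_Load(object sender, EventArgs e)
{
    // The '~' is resolved server side, so the include works regardless of the application name.
    ScriptManager.RegisterClientScriptInclude(
        this, typeof(Page), "siteScript", ResolveUrl("~/scripts/site.js"));
}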
Somebody asked me how to add things at the end of an AutoCompleteExtender div, something like a link. I thought it would be pretty easy: all I would have to do is catch the display function and add a little something to the container. Well, it was that simple, but also complicated, because the onmousedown handler for the dropdown is on the entire div, not on the specific items. The link was easy to add, but there was no way to actually click on it!
Here is my solution:
function addLink(sender, args) {
    var div = sender.get_completionList();
    if (div) {
        var newDiv = document.createElement('div');
        newDiv.appendChild(getLink());
        div.appendChild(newDiv);
    }
}

function getLink() {
    var link = document.createElement('a');
    link.href = '#';
    link.innerHTML = 'Click me for popup!';
    link.commandName = 'AutoCompleteLinkClick';
    return link;
}

function linkClicked() {
    alert('Popsicle!');
}

function ACitemSelected(sender, args) {
    var commandName = args.get_item().commandName;
    if (commandName == 'AutoCompleteLinkClick')
        linkClicked();
}
The AutoCompleteExtender must have the two main functions as handlers for the item selected and popup shown set like this:
First we hook OnClientShown and add our link. The link is added inside a div, because otherwise it would have been shown as a list item (with highlight on mouse over). I also added a custom attribute to the link: commandName.
Second we hook on OnClientItemSelected and check if the selected item has a commandName attribute, and then we execute stuff depending on it.
We have this web application that needs to call a third party site that then redirects back to us. The other app is using a configured URL to redirect back. In order to develop and debug the application, we used a router redirect with a different port like this: the external site calls http://myExternalIp:81 and it gets redirected to my own computer on port 80.
I was amazed to notice that when entering my local page, Request.Url would be in the format http://myExternalIp, without the 81 port. As the page was executed in order to debug it, I was baffled by this behaviour. I tried a few things, then I decided to replicate it on a simple empty site and there it was. The only thing I could find that had any information about the original port number was Request.Headers["Host"] which looked something like myExternalIp:81.
I guess this is a bug in the Request object: it uses the port the server is actually listening on instead of the one in the request, since my server was responding on port 80 on localhost and not 81.
Here is a small method that gets the real Request URL:
public static Uri GetRealRequestUri()
{
    if ((HttpContext.Current == null) || (HttpContext.Current.Request == null))
        throw new ApplicationException("Cannot get current request.");
    return GetRealRequestUri(HttpContext.Current.Request);
}

public static Uri GetRealRequestUri(HttpRequest request)
{
    if (String.IsNullOrEmpty(request.Headers["Host"]))
        return request.Url;
    UriBuilder ub = new UriBuilder(request.Url);
    string[] realHost = request.Headers["Host"].Split(':');
    string host = realHost[0];
    ub.Host = host;
    string portString = realHost.Length > 1 ? realHost[1] : "";
    int port;
    if (int.TryParse(portString, out port))
        ub.Port = port;
    return ub.Uri;
}
Just a short infomercial. Response.Redirect(url) is the same as Response.Redirect(url, true), which means that after the redirect, Response.End will be executed. In case you get a weird 'Thread was being aborted' exception, you probably have the Redirect/End methods inside a try/catch block. Remove them from the block and it will work. Response.End terminates the request by throwing a ThreadAbortException, and that is the particularly obtuse exception your catch block picks up.
If you absolutely must use a try/catch block, put everything in it EXCEPT the Redirect/End. Another option (only for Response.Redirect) is to pass false as the second parameter so that Response.End is not executed.
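Putting the two options together, a quick sketch (url, DoWork and LogError are placeholders):

try
{
    DoWork();
}
catch (Exception ex)
{
    LogError(ex);
}
// Option 1: redirect outside the try/catch, so the ThreadAbortException thrown by
// Response.End is not caught as an error.
Response.Redirect(url);

// Option 2: pass false so Response.End is never called; the rest of the page keeps
// executing, so return explicitly if that is not what you want.
// Response.Redirect(url, false);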
Well, to summarize the article and bring it up to date a little, here are some of the most important points (in my view):
Decide upon a database naming convention, standardize it across your organization, and be consistent in following it. It helps make your code more readable and understandable.
Write comments in your stored procedures, triggers and SQL batches generously, whenever something is not very obvious.
Try to avoid server side cursors as much as possible. As Vyas Kondreddi himself says: "I have personally tested and concluded that a WHILE loop is always faster than a cursor"
Avoid the creation of temporary tables while processing data as much as possible, as creating a temporary table means more disk I/O. Consider using advanced SQL, views, SQL Server 2000 table variable, or derived tables, instead of temporary tables. This is interesting, because I usually use a lot of temporary tables in my stored procedures to make the code more orderly. I guess that in the case of SQL Server 2005 and later one can always use Common Table Expressions to make the code more readable. For SQL 2000 and such I found two interesting articles about not using temporary tables and replacing them with either derived tables (selects in selects) or with table variables, although they do have some limitations, thoroughly explained in the latter post. Here are the links: Eliminate the Use of Temporary Tables For HUGE Performance Gains and Should I use a #temp table or a @table variable?
Try to avoid wildcard characters at the beginning of a word while searching using the LIKE keyword, as that results in an index scan, which defeats the purpose of an index. For a short analysis of index scans go to SQL SERVER - Index Seek Vs. Index Scan (Table Scan).
Use the graphical execution plan in Query Analyzer or SHOWPLAN_TEXT or SHOWPLAN_ALL commands to analyze your queries.
Use SET NOCOUNT ON at the beginning of your SQL batches, stored procedures and triggers in production environments, as this suppresses messages like '(1 row(s) affected)' after executing INSERT, UPDATE, DELETE and SELECT statements. This improves the performance of stored procedures by reducing network traffic.
Use the more readable ANSI-Standard Join clauses instead of the old style joins.
Incorporate your frequently required, complicated joins and calculations into a view so that you don't have to repeat those joins/calculations in all your queries.
Use User Defined Datatypes if a particular column repeats in a lot of your tables, so that the datatype of that column is consistent across all your tables. Here is a great article about Sql UDTs (not the new .NET CLR types): What's the Point of [SQL Server] User-Defined Types?. Never used them myself, but then again I am not an SQL guy. To me it seems easier to control data from .Net code.
Do not let your front-end applications query/manipulate the data directly using SELECT or INSERT/UPDATE/DELETE statements. Instead, create stored procedures, and let your applications access these stored procedures. I am afraid I also fail at this point. I don't use stored procedures for simple actions like selecting a specific item or deleting a row. Many times I have to build search pages with lots of parameters and I find it really difficult to pass a variable number of parameters to a stored procedure, for example a string that I have to split by spaces and then search for all the words found. Would it be worth using a stored procedure in such a situation?
Avoid dynamic SQL statements as much as possible. Dynamic SQL tends to be slower than static SQL, as SQL Server must generate an execution plan every time at runtime. Personally, I never use dynamic SQL. If I need to create an SQL string I do it from .Net code, not from SQL.
Consider the following drawbacks before using the IDENTITY property for generating primary keys. IDENTITY is very much SQL Server specific, and you will have problems porting your database application to some other RDBMS. IDENTITY columns have other inherent problems. For example, IDENTITY columns can run out of numbers at some point, depending on the data type selected; numbers can't be reused automatically, after deleting rows; and replication and IDENTITY columns don't always get along well. So, come up with an algorithm to generate a primary key in the front-end or from within the inserting stored procedure. There still could be issues with generating your own primary keys too, like concurrency while generating the key, or running out of values. So, consider both options and go with the one that suits you best. This is interesting because I always use identity columns for primary keys. I don't think a data export or a database engine change justifies creating a custom identity system. However, I do have to agree that if the data somehow gets corrupted, a GUID or some other identifier would be more useful. I am sticking with my IDENTITY columns for now.
Use Unicode datatypes, like NCHAR, NVARCHAR, or NTEXT.
Perform all your referential integrity checks and data validations using constraints (foreign key and check constraints) instead of triggers, as they are faster.
Always access tables in the same order in all your stored procedures and triggers consistently. This helps in avoiding deadlocks. Other things to keep in mind to avoid deadlocks are: Keep your transactions as short as possible. Touch as little data as possible during a transaction. Never, ever wait for user input in the middle of a transaction. Do not use higher level locking hints or restrictive isolation levels unless they are absolutely needed. Make your front-end applications deadlock-intelligent, that is, these applications should be able to resubmit the transaction in case the previous transaction fails with error 1205. In your applications, process all the results returned by SQL Server immediately so that the locks on the processed rows are released, hence no blocking. I don't have much experience with transactions. Even if I needed transactions in some complex scenarios, I would probably use the .Net transaction system.
Always add a @Debug parameter to your stored procedures. This can be of BIT data type. When a 1 is passed for this parameter, print all the intermediate results, variable contents using SELECT or PRINT statements and when 0 is passed do not print anything. This helps in quick debugging stored procedures, as you don't have to add and remove these PRINT/SELECT statements before and after troubleshooting problems. Interesting, I may investigate this further, although the SQL debugging methods have improved significantly since the article was written.
Make sure your stored procedures always return a value indicating their status. Standardize on the return values of stored procedures for success and failures. The RETURN statement is meant for returning the execution status only, but not data. If you need to return data, use OUTPUT parameters
If your stored procedure always returns a single row resultset, consider returning the resultset using OUTPUT parameters instead of a SELECT statement, as ADO handles output parameters faster than resultsets returned by SELECT statements.
Though T-SQL has no concept of constants (like the ones in the C language), variables can serve the same purpose. Using variables instead of constant values within your queries improves readability and maintainability of your code.
However, I didn't realise at the time that the same thing applies to normal GridView BoundFields as well. The thing is, in order to display a value in a bound cell, it FIRST applies the HtmlEncoding to the value CAST TO STRING, THEN it applies the FORMATTING. Here is the reflected source:
/// <summary>Formats the specified field value for a cell in the <see cref="T:System.Web.UI.WebControls.BoundField"></see> object.</summary>
/// <returns>The field value converted to the format specified by <see cref="P:System.Web.UI.WebControls.BoundField.DataFormatString"></see>.</returns>
/// <param name="dataValue">The field value to format.</param>
/// <param name="encode">true to encode the value; otherwise, false.</param>
protected virtual string FormatDataValue(object dataValue, bool encode)
{
    string text1 = string.Empty;
    if (!DataBinder.IsNull(dataValue))
    {
        string text2 = dataValue.ToString();
        string text3 = this.DataFormatString;
        int num1 = text2.Length;
        if ((num1 > 0) && encode)
        {
            text2 = HttpUtility.HtmlEncode(text2);
        }
        if ((num1 == 0) && this.ConvertEmptyStringToNull)
        {
            return this.NullDisplayText;
        }
        if (text3.Length == 0)
        {
            return text2;
        }
        if (encode)
        {
            return string.Format(CultureInfo.CurrentCulture, text3, new object[] { text2 });
        }
        return string.Format(CultureInfo.CurrentCulture, text3, new object[] { dataValue });
    }
    return this.NullDisplayText;
}
At least the method is virtual. As you can see, there is no way to format a DateTime, let's say, once it is in string format.
Therefore, if you ever want to format your data in a GridView by using DataFormatString, you should make sure HtmlEncode is set to false! Or at least create your own BoundField object that implements a better FormatDataValue method.
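A possible custom BoundField along those lines, formatting first and encoding afterwards (this is my sketch, not reflected framework code):

using System.Globalization;
using System.Web;
using System.Web.UI;
using System.Web.UI.WebControls;

public class FormatThenEncodeBoundField : BoundField
{
    protected override string FormatDataValue(object dataValue, bool encode)
    {
        if (DataBinder.IsNull(dataValue))
            return NullDisplayText;

        string text = dataValue.ToString();
        if (text.Length == 0 && ConvertEmptyStringToNull)
            return NullDisplayText;

        // Apply the DataFormatString to the original value first...
        if (DataFormatString.Length > 0)
            text = string.Format(CultureInfo.CurrentCulture, DataFormatString, dataValue);

        // ...and only then HTML-encode the formatted result.
        return encode ? HttpUtility.HtmlEncode(text) : text;
    }
}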
It was great! Not only was the setting nice (the four-star Smart hotel is exactly what I had expected a hotel should be, except maybe the restaurant), but the weather was cool, the presentation helpful, the tutor (Aurelian Popa) above expectations and the people pleasant. Not to mention a week away from boring stuff. ;) I feel it would be pointless to detail what we did there, since it was either my own personal life or the actual workshop (which involves work), so I will give you some impressions of the technology and point you towards the resources that would allow you to go through the same learning process.
The whole thing was about WPF and SilverLight and I can tell you two conclusions right now: WPF/XAML/SilverLight are a great technology and I expect a lot of .Net applications to migrate towards them in the next 6 to 12 months. The complexity of this technology is likely to put a lot of people off, therefore tools like Expression Blend and the Visual Studio interface become completely indispensable and must evolve towards greater ease of use and become more intuitive.
The entire presentation model allows one to use any graphical transformation available, including 3D, on any part of the interface. The controls are now lookless: they come with a default appearance that can be totally replaced with your own. A weird example is using a 3D cube with video running on each side as a button. Of course, the whole thing is still a work in progress and some things are still difficult to do. Besides, you know Microsoft: a lot of complicated things are easy to do, while some of the simplest are next to impossible.
You can taste the Microsoft confidence on this by watching them release an entire design-oriented suite (Expression) and work on making Silverlight available on all platforms and browsers. Just the fact that Silverlight can directly access the browser DOM is enough to make me remove all those patchy javascript scripts and replace them with nice Silverlight C# code.
Enough of this. Go learn for yourself!
Tools: Silverlight is at version 2 beta 2. That is painfully obvious when new bugs are introduced and beta 1 applications break. The Expression Blend tool is at version 2.5 June 2008 CTP and it also has a long way to go towards becoming truly useful. Visual Studio 2008 performs rather well when faced with XAML and WPF stuff, but the Resharper 4.0 addon helps it out a lot. You need the Visual Studio 2008 Silverlight Tools, too. After this compulsory tool kit you could also look at Snoop, Blender and the Expression Deep Zoom Composer.
Learning material: the simplest thing to do is to go to the Silverlight Hands-on Labs or download the WPF Hands-on Labs and run through the documentation script that is included with each one. There are video tutorials about how to use the tools, too; here is one for Blend. Of course, all the blogs and materials available online with a quick Google search are helpful as well.
Community: As any community, it depends on your desired locality and interests. You can look for local .Net / WPF groups or browse for blogs half way around the globe from you. From my limited googling during the workshop I can see that there are people talking about their issues with WPF and SL, but not nearly enough: the technology still needs to mature. I haven't really searched for it, but I've stumbled upon this site: WindowsClient.NET that seems to centralize WPF, Windows Forms and a bit of Silverlight information.
I wanted to write this great post about how to make Web User Controls with templates, just like Repeaters or GridViews or whatever, face all the problems, then solve them. Unfortunately, MSDN already has a post like this: How to: Create Templated ASP.NET User Controls. So all I can do is tell you where to use this and what problems you might encounter.
I think the usage is pretty clear and useful: whenever you have as?x code that repeats itself but has different content, you can use a templated Web User Control. The best example I can think of is a collapsible panel. You have a Panel with some javascript attached to it, a hidden field to hold the collapse state, some buttons, images and text to act as a header, but every time the content is different.
Now for the issues one might encounter. In Visual Studio 2005 you get an error, while in VS 2008 you get a warning telling you that the inner template, whatever name you gave it, is not supported. This is addressed by decorating the ITemplate property of the control (with attributes such as PersistenceMode(PersistenceMode.InnerProperty)). Then there is the issue of design mode, where you get an ugly error in all Visual Studio versions: Type 'System.Web.UI.UserControl' does not have a public property called '[yourTemplatePropertyName]'. As far as I know there is no way of getting rid of it; it is an issue within Visual Studio itself. However, the thing compiles and the source as?x code is warning free. I think one could easily sacrifice some design time comfort for reusability.
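To make the setup concrete, here is a stripped-down code-behind following the MSDN pattern linked above; the control name, the placeholder and the attribute values are illustrative, not a definitive implementation.

using System;
using System.Web.UI;

public partial class CollapsiblePanel : UserControl
{
    // Naming container that hosts the instantiated template.
    public class ContentContainer : Control, INamingContainer { }

    // Marking the property as an inner property lets the <ContentTemplate> element
    // be declared inside the control tag in the page markup.
    [PersistenceMode(PersistenceMode.InnerProperty)]
    [TemplateContainer(typeof(ContentContainer))]
    public ITemplate ContentTemplate { get; set; }

    protected void Page_Init(object sender, EventArgs e)
    {
        if (ContentTemplate != null)
        {
            var container = new ContentContainer();
            ContentTemplate.InstantiateIn(container);
            // phContent is a PlaceHolder declared in the .ascx markup.
            phContent.Controls.Add(container);
        }
    }
}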