In a previous post I wrote about my thoughts on how to learn chess, with the main emphasis being on game analysis. But how that is done is a matter of time, taste and budget. This post is about how I do it. I don't know how other people analyse chess games, except from hearing them talk about it, so I am not an authority on the matter, but this should save you some time figuring it out for yourself (which you still have to do even if you read the post :) ).

The easiest way of all is to get a game in PGN form, load it in a chess program, then watch it unfold while you think about what you would have done differently and highlight what you thought were the good and bad moves. Of course, that relies on your own thinking, which may be flawed, so you can get help from a chess engine that will analyse the variations and tell you what it thinks of what you thought. It sounds complicated, but it is not. If you ask yourself why the player didn't fork the queen and the rook with the knight, you can make that move for them (creating what is called a variation). While you are moving the pieces, a chess engine can suggest, based on the time you let it analyse, the best move in that situation.

For this I use Arena Chess GUI, a free chess program, and load up the free chess engine versions of Houdini and Rybka. Arena comes with more engines, so you can try them all, but really it's a matter of taste. I once ran a championship between the available chess engines in order to see which is best, but there is no real way of giving two chess engines the same computer resources, so the programming style of each engine makes this only a fun comparison, not a scientific one. There are videos on YouTube with competitions of chess engines; those are fun in their own right. Unfortunately, Arena is not really bug free. There are a few gotchas that you learn the hard way, like don't use the "Minimize to tray" option if you have the default "load PGN at startup" on, or don't use "save back into PGN", etc.

Of course, most chess engines have a free version and a commercial one. Depending on your seriousness, you can choose to pay money for them and get the advantages of new developments in chess computing, but don't expect too much. After all, a chess engine only takes a formula that gives a value for a position and then builds a tree of possible move orders in order to minimize its opponent's advantage and maximize its own. The algorithm is pretty standard and the formula, which really makes the difference between engines, is the outcome of centuries of chess analysis. The game hasn't changed that much in the last few years. And exactly this difference between how humans process a game and how a computer does it makes this kind of analysis a little flawed. A computer will tell you where you went wrong, but won't be able to devise a winning strategy for you, as it examines every move as if it were the first. So we move to plan B.

Another option for analysing a game, thanks to the vast database of chess games ever played, is to see what other players, human grandmasters and below, have done in the same situation. A piece of software built to do this is ChessBase. It is true that there are a gazillion possible chess games, but they all begin kind of the same. The opening principles restrict the way a game can start and, for the first 20 or 30 moves, there are a lot of players that did the same thing (and played a decent game). ChessBase is a great program, just like Arena, because it allows flexibility in the way you use it without bundling everything together. The program is small to download, then you get to download whatever chess engines you want, game databases, video tutorials, etc. To give you an example of size differences: ChessBase 11 is 150MB, the chess player base is 640MB, a video tutorial of chess openings in ChessBase format is 1500MB, while the MegaDatabase of chess games is 2700MB. You can imagine that there are a lot of games in that database, over five million of them. Unfortunately, ChessBase, the databases and the tutorials are not legally free. They cost around 200 euros, plus a bit more for the tutorials. Not much for the effort that was put into them.

The chess blog posts that I am using to publicize games are done using these kinds of scenarios. I take a game that I played with someone or with my phone, analyse it to see where changes in the score have occurred (those are the interesting bits in a game) and then try out variations, using engines or databases to see what else could have been done. It is good practice to analyse the game as soon as possible, while the ideas that led to the moves are still fresh in your mind. It may seem like a drag, but commenting on why you made the moves allows you to understand the game later on, when you are revisiting it. It is also a good idea to have your chess partner do the same thing and then merge the two PGNs into one, which makes the overall play clear. A chess analysis engine will comment on every move with what it thought would have been the best continuation and the value of the board at that time. That makes a PGN horrible to read, because even if you put it into a visual display, you still want to have a clean, readable PGN file. What you want to do is analyse a single move with the engine, see where it goes, then write a humanly understandable statement like "which would have been disastrous because of the sacrifice of the rook on f8, followed by Qxf7, mate".

As examples, try to compare the following blog post PGNs. My first chess game post was annotated automatically by ChessMaster XI, which has a human readable annotation engine that I first thought was great. But look at the texts: they are either obvious or resort to stuff like "Leads to 15...Kf7 16.Bh4 h6 17.b4 Rab8 18.a4 Bxf3 19.Qxf3 Be7 20.Ng4 Bxh4 21.Nxe5+ Kg7 22.bxc5, which wins a bishop and two pawns for a bishop and a knight.". Unless you are used to reading PGN like English (which most professional players can, btw, complete with a chess board in their heads), you see a lot of mumbo jumbo. A later game post contained moves annotated by me, as translated from chess engine analysis. Or there is this one, which contains no annotations at all. Which one do you like best?

A word of warning: analysing chess games is not a short process. The advantage of the ChessMaster XI auto analysis was that you could leave it on at night, then come back in the morning and see it unfurl before you (with the annotations read aloud by the chess software). To do it manually, or even to let the auto analysis run at night and then decode the best moves suggested by the computer and translate them, takes a lot of time. I've spent an hour per game annotating a match (two games) that two coworkers played and that I dutifully stored on my cell phone while they were playing. It was satisfying, but time consuming. A lot like blog posting... I'll leave it at that. Have fun dissecting chess games.

There is, of course, a long tried method of learning chess: find a bunch of people that want to play chess and go fight with them. Like that old Army line: join the Army, meet interesting people, kill them. But it is also obvious that this is not the best method there is. You meet old men that play chess every day with other old men that have a lot of time on their hands and, even if they kick your ass rather quickly, you often find ways of defeating them because all their play is organic, lacking a principled structure. And that is what this post is about. I am the least principled player out there, as I have a ridiculously leaky memory and am rather greedy (I want fun games, now, not dirty principles), and therefore I often get beaten by people that should not have been able to defeat me. And, like in the old adventure games: So, you want to be a (principled) chess player - this is the quest.

First of all, don't google "how to learn chess", as you will get swamped by all the chess players that think they can teach you for money, all the software that you need to buy and all the DVDs that you absolutely need to buy. That doesn't mean that, if you are patient, you wouldn't eventually find what you need and, being a wannabe chess player, you should have a little patience. But if you are like me (why do I want to play chess again?) you don't have patience. Second of all, don't start with chess books. You can find zillions of them on the web, but in order to read them you will have to become very familiar with a chess board. Unlike a video, they will require you to read with a chess board at hand and make the moves as you read. That pretty much means you can read them only in specific settings and you will feel like an idiot for reading a page a day. Books are great, but not for beginners.

Then there are some free online resources that one can use, lovely chess sites like JRobiChess.com, theChessWebsite or ChessVideos.tv, where one can get multiple opportunities to learn: grandmaster games, videos, tutorials, references. These are great, and I thoroughly recommend them. Also, look for "chess" on YouTube and you will find a plethora of people discussing and teaching chess, some of them for passion alone. On JRobi's site you can even find a chess study time recommendation: Opening Study – 10%, Tactical Puzzles – 20%, Endgame Study – 10%, Analyzing Your Games – 30%, Analyzing Master Games – 30% of the time. Let's analyse this plan a little bit and some very interesting ideas will emerge.

As you can see, there is time allotted for openings, only a tenth of the time, and the same amount of time for endgames. This leads to the first very important idea: endgames are (at least) just as important as openings. If you think about it, most amateur games, played for fun and not for some chess rating, are spectacularly inaccurate and end in quick mates. That means that most of the "go get'em" practice will teach you about openings and some of the middle game. Once in endgame territory, we are suddenly beginners again, trying to get by and failing miserably. That's why it is important to spend time with endgames. Josh Waitzkin, a US chess champion, recommended studying endgames even more than openings. He described how, by careful study of just a few pieces on the board (two kings and a pawn, or some other piece, the usual endgame scenarios), he would emerge slightly disadvantaged into the endgame, then crush his opponent, who was less instructed in this most important part of a chess game. The opening and endgame learning is only a fifth of all learning time, though. Other things are even more important.

20% tactical puzzles. This is the equivalent of learning karate moves out of context. You are not fighting anybody, but you learn to hit them in a particular way. You don't have the pressure of a game, you have all the time in the world, find the best move! Both JRobi's site and Kevin's (theChessWebsite) have daily tactical puzzles. This is actually one of the few exercises that I do almost every work day: I open their sites and do those puzzles. But let me tell you something: if you open a chess book with tactical puzzles, you get some really nasty, mind boggling stuff. These online ones, at least the two I mentioned, are made for people who, if they don't immediately see the answer, start trying moves until they get them right. I should know; that hint button also gets heavy use. So this is one of the moments when I can recommend books, but start with the online ones first and try to see them through before moving on.

Now comes the heavy part. We've covered a little more than a third of the time one should spend on chess learning, according to JRobi. The other two thirds are analysing games. That is it. Take the game out of the competition and clinically dissect it until you learn everything there is to learn. A major idea comes out of this, though: if you need to analyse your own games, that means someone must record them as you play. Professional and club games have score cards: the players write every move they and their opponent make, the score, even small comments (you will see what I mean when we get to PGNs) and they sign each other's cards at the end of the game. I've never done it, but I imagine it is satisfying to get your adversary to sign their own defeat while you desecrate their own card with your victory scribble (heh!). I also imagine it adds to one's motivation, getting this kind of direct recognition of your effort in the game. Another option, much more easily available, is to load a chess game on your mobile phone and set it to a two player game. Once you make a move, you make it on the phone as well. At the end, you download the PGN file to your computer for later analysis. With a chess engine at hand and all the time in the world you can see where mistakes were made, where good moves changed the score balance and what was missed. A spectator of the game can do that for you, as well. I originally planned to write in this post about analysing games, but it has become too long already. I will, therefore, detail that particular part in another blog post.

Of course, analysing the games played at grandmaster level shows how other people are thinking when playing the game. It's not unlike reading material relevant to your line of work. You may be smart, but you cannot expect to think of everything that may be of use to you in a specific context. You read what others wrote on the subject and gain inspiration. And when you see a giant chess player sacrifice a queen for two knights and then mate the other guy in another ten moves, you also gain humility (or you close the bloody game and go watch a movie or something). Indeed, try not to let it get to you. Grandmasters are not geniuses that can outthink you at every step, monsters that can intellectually squash you like a bug; they are people just like you who also dedicated their lives to the game of chess. Professional chess players do it for life. Expecting to understand what they did without a lot of effort is unrealistic. It is important to analyse their games and learn from both their mistakes and their brilliant moves.

That leaves us with the endgame of this post: computer chess tutors. That is different from playing chess with your computer or pad or cell phone; that falls into the first category of just playing. Computers also play differently than people: they are great at not making mistakes and punishing yours, but their design also allows for moves that would mate you in 235 moves or something like that, which is insane for any human being. Don't get me wrong, computers are great practice, but consider that if you beat them at a certain difficulty level, it is because they were programmed to let you. With the computing power available today, a cell phone would probably be able to beat Kasparov. So, back to chess tutors. I've only found one that I liked, and that is the ChessMaster XI game. Incidentally, that is where I've learned of Josh Waitzkin, as he is the voice and mind behind the game tutorials. I've also heard a lot about Fritz, but I haven't found a context where it really tutors you. Fritz is bundled with ChessBase and there are some tutorials with board, PGN games and video that use ChessBase to teach you stuff. There are probably more ChessBase based tutorials out there, but I haven't searched for them yet.

So, I leave you with this little research I've done, I hope it helps you get better and encourages me to heed my own advice.

I found a bit of code today that tested if a bunch of strings could be found in another string. It used IndexOf for each of the strings and continued searching if one was not found. The code, a long list of ifs and elses, looked terrible. So I thought I would refactor it to use regular expressions. I created a big Regex object, using the "|" regular expression operator, and I tested for speed.

( Actually, I took the code, encapsulated it into a method that then went into a new object, then created the automated unit tests for that object and only then did I proceed to write new code. I am very smug because usually I don't do that :) )

After the tests said the new code was good, I created a new test to compare the increase in performance. It is always good to have a metric to justify the work you have been doing. So, the old code ran in about 3 seconds. The new code took 10! I was flabbergasted. Not only could I not understand how that was possible, how several scans of the same string could be slower than a single one, but I am the one who wrote the article saying that IndexOf is slower than Regex search (at least it was so in the .Net 2.0 times; I could not replicate the results in .Net 4.0). It was like a slap in the face, really.

I proceeded to change the method, having now a way to measure increases in performance, until I finally figured out what was going on. The original code was first transforming the text into lowercase, then doing IndexOf. It was not even using IndexOf with StringComparison.OrdinalIgnoreCase, which was, of course, a "pfff" moment for me. My new method was, of course, using RegexOptions.IgnoreCase. No way this option would slow things down. But it did!

You see, when you search for two strings separated by the "|" regular expression operator, a tree of states is created inside. Say you are searching for "abc|abd": it will search once for "a", then once for "b", then check the next character for "c" or "d". If any of these conditions fail, the match fails. However, if you do a case insensitive match, there will be at least two comparisons per letter. Even so, I expected only a doubling of the processing time, not the whopping five times decrease in speed!

So I did the humble thing: I transformed the string into lowercase, then did a normal regex match. And the whole thing went from 10 seconds to under 3. I have yet to understand why this happens, but be careful when using the case insensitive option in regular expressions in .Net.
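
For illustration, here is a minimal sketch of the two approaches, with made up keywords and text; none of this is the actual code from the project, just a way to measure the same difference:

using System;
using System.Diagnostics;
using System.Linq;
using System.Text.RegularExpressions;

class RegexCaseComparison
{
    static void Main()
    {
        // hypothetical keywords and a large sample text, for illustration only
        string[] keywords = { "alpha", "beta", "gamma" };
        string text = string.Concat(Enumerable.Repeat("some long text with Gamma in it ", 100000));
        string pattern = string.Join("|", keywords.Select(Regex.Escape));

        // variant 1: let the regex engine handle the casing (the slow one in my case)
        var ignoreCaseRegex = new Regex(pattern, RegexOptions.IgnoreCase | RegexOptions.Compiled);
        var watch = Stopwatch.StartNew();
        bool foundIgnoreCase = ignoreCaseRegex.IsMatch(text);
        Console.WriteLine("IgnoreCase: " + foundIgnoreCase + " in " + watch.ElapsedMilliseconds + " ms");

        // variant 2: lowercase the text once, then match case sensitively
        var lowerCaseRegex = new Regex(pattern.ToLowerInvariant(), RegexOptions.Compiled);
        watch.Restart();
        bool foundLowerCase = lowerCaseRegex.IsMatch(text.ToLowerInvariant());
        Console.WriteLine("Lowercase first: " + foundLowerCase + " in " + watch.ElapsedMilliseconds + " ms");
    }
}

Measure on your own data, of course; the difference depends on the pattern, the text and the .Net version.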

A short post about an exception I encountered today: System.InvalidOperationException: There was an error reflecting 'SomeClassName'. ---> System.InvalidOperationException: SomeStaticClassName cannot be serialized. Static types cannot be used as parameters or return types.

Obviously one cannot serialize a static class, but I wasn't trying to. There was an asmx service method returning an Enum, but the enum was nested in the static class. Something like this:
public static class Common
{
    public enum MyEnumeration
    {
        Item1,
        Item2
    }
}

Therefore, take this as a warning. Even if the compilation does not fail when a class is made static, serialization may fail at runtime because of types nested inside it.
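
A minimal sketch of one possible fix, assuming you are free to change the declaration: move the enum out of the static class (the names mirror the example above).

public enum MyEnumeration
{
    Item1,
    Item2
}

public static class Common
{
    // other static members stay here; since the enum no longer lives
    // inside a static type, the XML serializer can reflect over it
}

The asmx method then returns MyEnumeration directly and the reflection error goes away.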

I found this game on The Kenilworthian blog, a game between Nona Gaprindashvili and Alexander Blagidze. What fascinated me about this particular game is the position after 12... Nc6. Only twelve moves into the game, a position that seems normal, like something one would see when studying a particular opening, and then Bam! Can you see the bam? That is the challenge. Try to find it yourself.

I've analysed the game (very superficially) using Houdini and I've added two variations, just to see what would have happened in the more obvious situations. One question remains: what was really Black's downfall? What was Black's bad move and what could have been done instead of it? You can see it in the analysed PGN, but try to find it yourself first; it's a deceptively silent move.

[Event "USSR"]
[Site "USSR"]
[Date "1963.??.??"]
[Round "?"]
[White "Nona Gaprindashvili"]
[Black "Alexander Blagidze"]
[Result "1-0"]
[BlackElo "2400"]
[ECO "B23"]
[Opening "Sicilian"]
[Variation "Closed, 2...Nc6 3.Bb5 Nd4"]
[WhiteElo "2400"]
[TimeControl "600"]
[Termination "normal"]
[PlyCount "47"]
[WhiteType "human"]
[BlackType "human"]

1. e4 {(e7-e5) +0.10/16 20}
...c5 {(Ng1-f3 Nb8-c6 Nb1-c3 Ng8-f6 d2-d4 c5xd4 Nf3xd4 e7-e6 Bf1-e2 Bf8-b4 Nd4xc6 b7xc6 0-0 Qd8-a5 Qd1-d3 0-0 Bc1-e3 Bc8-b7 Be3-d4 e6-e5) -0.20/16 20}
2. Nc3 {(Nb8-c6 Ng1-f3 Ng8-f6 d2-d4 c5xd4 Nf3xd4 e7-e6 Bf1-e2 Bf8-b4 Nd4xc6 b7xc6 0-0 Qd8-a5 Qd1-d3 0-0 Bc1-e3 Bc8-b7 Ra1-d1 Bb4xc3 Qd3xc3 Qa5xc3 b2xc3 Nf6xe4 Rd1xd7 Ne4xc3) +0.22/17 20}
...Nc6 {(Ng1-f3 Ng8-f6 d2-d4 c5xd4 Nf3xd4 e7-e6 Bf1-e2 Bf8-b4 Nd4xc6 b7xc6 0-0 Qd8-a5 Qd1-d3 0-0 Bc1-e3 d7-d6 Qd3-c4 d6-d5 Qc4xc6 Bc8-d7 Qc6-b7 Bb4xc3 b2xc3 d5xe4) -0.19/16 20}
3. f4 {(d7-d6 Ng1-f3 g7-g6 Bf1-c4 Ng8-f6 d2-d3 Bf8-g7 h2-h3 0-0 0-0 a7-a6 a2-a4 Nf6-h5 Nc3-e2 e7-e6 c2-c3 Bc8-d7 Bc1-e3) 0.00/15 20}
...g6 {(g2-g4 Bf8-g7 g4-g5 a7-a6 Ng1-f3 b7-b5 d2-d3 h7-h6 a2-a4 b5-b4 Nc3-d5 h6xg5 f4xg5 e7-e6 Nd5-e3 Ng8-e7 Ne3-c4) 0.00/15 20}
4. Bb5 {(Nc6-d4 Ng1-f3 Nd4xb5 Nc3xb5 d7-d6 0-0 Bc8-d7 c2-c4 Bd7xb5 c4xb5 Qd8-b6 d2-d4 c5xd4 Qd1xd4 Qb6xd4+ Nf3xd4 Ng8-f6 e4-e5 Nf6-e4 Bc1-e3) -0.02/14 20}
...Nd4 {(Ng1-f3 Nd4xb5) +0.08/14 20}
5. Bc4 {(a7-a6 a2-a4 e7-e6 Ng1-f3 Ng8-e7 e4-e5 d7-d5 e5xd6/ep Ne7-f5 0-0 Nf5xd6 d2-d3 Bf8-g7 Nc3-e4 0-0 Bc1-e3 Nd6xc4 d3xc4 Nd4xf3+ Qd1xf3 Bg7xb2 Be3xc5 Bb2xa1 Bc5xf8) 0.00/14 20}
...Bg7 {(Ng1-f3 d7-d6 0-0 Ng8-f6 e4-e5 d6xe5 f4xe5 Nd4xf3+ Qd1xf3 Qd8-d4+ Qf3-e3 Nf6-g4 Qe3xd4 c5xd4 Bc4xf7+ Ke8-d8 Nc3-e4 Bg7xe5 h2-h3 Ng4-h2 Rf1-e1 Rh8-f8 Ne4-g5 Be5-g3) -0.05/15 20}
6. Nge2 {(d7-d6 0-0) -0.05/13 20}
...e6 {(b2-b3 a7-a6 e4-e5 d7-d5 Bc4-d3 Nd4xe2 Bd3xe2 Ng8-e7 Bc1-b2 Bc8-d7 h2-h4 Ne7-f5 h4-h5 Nf5-g3 Rh1-h3 Ng3xe2 Qd1xe2) +0.01/14 20}
7. Nxd4 {(c5xd4 Nc3-e2 d7-d6 b2-b3 Ng8-f6 Bc4-d3 Bc8-d7 Bc1-a3 Qd8-b6 h2-h3 Bd7-c6 Ne2-g3 0-0 Ba3xd6 Rf8-d8 Bd6-e7 Rd8-d7) -0.09/15 20}
...cxd4 {(Nc3-e2 d7-d6 b2-b3 Ng8-f6 Bc4-d3 Bc8-d7 Bc1-a3 Qd8-b6 h2-h3 Bd7-c6 Ne2-g3 0-0 Ba3xd6 Rf8-d8 Bd6-e7 Rd8-d7) +0.09/16 20}
8. Ne2 {(d7-d6 b2-b3 Ng8-f6 Ne2-g3 h7-h5 Bc4-d3 h5-h4 Ng3-e2 Bc8-d7 Bc1-b2 e6-e5 0-0 Bd7-c6 c2-c3 Bc6xe4 Bd3-b5+ Ke8-f8 f4xe5 d6xe5 c3xd4) -0.15/16 20}
...Qh4+ {(g2-g3 Qh4-e7 a2-a4 d7-d6 d2-d3 Ng8-h6 c2-c3 d4xc3 b2xc3 0-0 0-0 Nh6-g4 h2-h3 Ng4-f6 e4-e5 Nf6-d7 d3-d4 Nd7-b6) 0.00/16 20}
9. Ng3 {(Ng8-f6 Qd1-f3) -0.29/15 20}
...Qxf4 {(d2-d3 Qf4-f6 Rh1-f1 Qf6-e7 Qd1-f3 d7-d5 Bc4-b3 Ng8-h6 e4xd5 e6xd5+ Qf3-e2 Qe7xe2+ Ng3xe2 Nh6-g4 Bb3xd5 0-0 Bc1-f4 Ng4-e3 Bf4xe3 d4xe3 d3-d4 a7-a5 a2-a4) +0.12/16 20}
10. d3 {(Qf4-d6 0-0 Ng8-e7 Bc1-f4 Qd6-b6 a2-a4 0-0 a4-a5 Qb6-c5 Qd1-e1 d7-d6 Bf4-d2 Bc8-d7 Ng3-e2 b7-b5 a5xb6/ep a7xb6 Qe1-h4 Ra8xa1 Rf1xa1) -0.22/16 20}
...Qc7 {(0-0) +0.11/15 20}
11. O-O {(Ng8-e7 Bc1-g5 Qc7-c5 Qd1-d2 b7-b5 b2-b4 Qc5-b6 Bg5xe7 Ke8xe7 Bc4-b3 f7-f6 a2-a4 b5xa4 Bb3xa4 Bc8-b7 Qd2-f4 Ke7-e8 e4-e5 f6-f5 Ng3-e2 g6-g5) -0.21/16 20}
...Ne7 {(Bc1-g5 Qc7-c5 Qd1-d2 b7-b5 b2-b4 Qc5-b6 Bg5xe7 Ke8xe7 Bc4-b3 Bc8-b7 Qd2-f4 f7-f6 a2-a4 b5xa4 Bb3xa4 Ke7-e8 e4-e5 f6-f5 Ng3-e2 g6-g5) +0.21/16 20}
12. Bg5 {(Qc7-c5 Qd1-f3 0-0 Bg5-f6 Bg7xf6 Qf3xf6 b7-b6 a2-a3 Ne7-c6 Rf1-f5 Qc5-e7 Ng3-h5 Qe7xf6 Nh5xf6+ Kg8-g7 Rf5-f4 Nc6-e5 Ra1-f1 Ne5xc4 d3xc4 Bc8-a6 b2-b3 g6-g5 Rf4-f3) -0.10/16 20}
...Nc6 {(Ng3-h5 g6xh5 Rf1xf7 h7-h6 Bg5-f4 Bg7-e5 Qd1xh5 Ke8-d8 Qh5-h4+ Kd8-e8 Qh4-h5 Ke8-d8 Qh5-h4+) 0.00/16 20}
13. Nh5 {(g6xh5 Rf1xf7 h7-h6 Bg5-f4 Bg7-e5 Qd1xh5 Ke8-d8 Qh5-h4+ Kd8-e8 Qh4-h5 Ke8-d8 Qh5-h4+) 0.00/17 20}
...gxh5 {(Rf1xf7 h7-h6 Bg5-f4 Bg7-e5 Qd1xh5 Ke8-d8 Qh5-h4+ Kd8-e8 Qh4-h5 Ke8-d8 Qh5-h4+) 0.00/18 20}
14. Rxf7 {(h7-h6 Bg5-f4 Bg7-e5 Qd1xh5 Ke8-d8 Qh5-h4+ Kd8-e8 Qh4-h5 Ke8-d8 Qh5-h4+) 0.00/18 20}
...Qe5 {(Rf7-f5 Qe5xf5 e4xf5 0-0 f5xe6 d7xe6 Qd1xh5 Bc8-d7 Qh5-g4 Ra8-e8 Bg5-h6 Re8-e7 Bh6xg7 Re7xg7 Bc4xe6+ Kg8-h8 Qg4-h3 Bd7xe6 Qh3xe6 Rg7-f7 Qe6-b3 Rf7-e7 Ra1-f1 Rf8xf1+ Kg1xf1 Kh8-g7 Qb3-d5 h7-h6 Kf1-f2 Re7-f7+ Kf2-g3) -4.44/16 20}
(14. .. Kxf7 15. Qxh5+ Kf8 16. Rf1+ Qf4 17. Rxf4+ Bf6 18. Rxf6+ Ke7 19. Rxe6+ Kf8 20. Re8+ Kg7 21. Qh6#)
15. Rf5
(15. Rf5 Qxf5 16. exf5 O-O 17. fxe6 dxe6 18. Qxh5 Bd7 19. Qg4 Rae8 20. Bh6 Re7 21. Bxg7 Rxg7 22. Bxe6+ Kh8 23. Qh3 Bxe6 24. Qxe6 )
1-0


World of Ptavvs is a book first published in 1966, therefore it feels very dated indeed. Still, while reading the book, I've realised how I missed the style of the sci-fi back then, when the world was grand, the future was brilliant, people would all be intelligent and rational, doing what is smart and what is right, with a power of will that defined their very being. Remember Asimov? It's like that! And it's no wonder, both Larry Niven and Isaac Asimov were technical people at their core, even if they have expanded their interests in many other spheres as well. I have to reiterate this: in this time of Generation Me, when everything seems to be focused on the intensity of one's emotions, rather than on what the situation at hand is and what to do about it, when legions of film makers and young adult writers pound this insane idea that what we feel is more important than what we think and our own belief is more important than the welfare of the people around us, in these horrible times books like World of Ptavvs feel like good medicine.

Not that the book was not ridiculous in many aspects. The fusion drives, the trips towards Pluto that took a few days, a "belter" civilisation populating the main asteroid belt and moving around in ships that were essentially huge fusion bombs, the way people with terrible burns and psychic trauma would calmly talk about their ordeal to the scientific investigator come to solve the problem, the idealistic discussions that spawned out of nowhere in moments of maximum tension, the intelligent civilisation of dolphins that dream of going to space, the psionic powers... all of these were at the same time wonderful to behold and quite silly. However, I liked the book, I gobbled it up.

The plot is about this alien that can control the minds of others. Their entire civilisation is based on enslaving other populations via their Power. He is the victim of a malfunction in space and activates a retarder field that will protect him from time and space interactions until someone removes him from this stasis. Thus, he reaches Earth and remains on the bottom of the ocean for billions of years. When people get him out of his shell, all hell breaks loose, the book transforming into a space race and a philosophical introspection at the same time.

I can't do justice to the subject in a few words any more than I can in more words without spoiling the pleasure of the read - I've already said too much. If you feel you are in the mood for old school sci-fi, World of Ptavvs is a good book, reminiscent of the works of Asimov or van Vogt. Silly, yet grandiose at the same time.

It's a horribly old bug, something that has been reported on their page since 2007 and has been in the issue list for HtmlAgilityPack since 2011. You want to parse a string as an HTML document and then get it back as a string from the DOM that the pack is generating. And it closes the form tag as if it had no children.
Example: <form></form> gets transformed into <form/></form>

The problem lies in the HtmlNode class of the HtmlAgilityPack project. It defines the form tag as empty in this line:
ElementsFlags.Add("form", HtmlElementFlag.CanOverlap | HtmlElementFlag.Empty);
One can download the sources and remove the Empty value in order to fix the problem or, if they do not want to change the sources of the pack, they have the option of using a workaround:
HtmlNode.ElementsFlags["form"]=HtmlElementFlag.CanOverlap;
Be careful, though: the ElementsFlags dictionary is a static property, so this change will be applied to the entire application.
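
Here is a minimal sketch of the workaround in context; the markup and variable names are my own illustration, not taken from actual code:

// apply once, before any parsing, for example at application startup;
// since ElementsFlags is static, every HtmlDocument created afterwards is affected
HtmlNode.ElementsFlags["form"] = HtmlElementFlag.CanOverlap;

var doc = new HtmlAgilityPack.HtmlDocument();
doc.LoadHtml("<form><input name='q' /></form>");

// with the flag changed, the form keeps its children when written back out
string roundTripped = doc.DocumentNode.OuterHtml;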


I have been called a hipster many a time and that is because I don't really like mainstream things and, instead, choose to see beauty and purpose somewhere else. I'm not really a hipster, though, since the trends I am following are not the latest and I have no sense of fashion. But enough about me. Let's talk about software and its latest incarnation: mobile and HTML5/Javascript, and how I despise the hype around something that is, to put it simply, a combination of lazy programming and market forces. No real innovation, no quality, no soul. A Hollywood of software, if you will.

People, I've seen this before and so have you. There was a time when computing was done on devices that had single digit megahertz chips at their core. Applications and games thrived. Programmers would always complain about the lack of resources and there was a time when 64KB of memory was thought enough for most computing purposes. It was that time that spawned the algorithmic generation, the guys that usually ask you what a graph is or how to manually do a bubble sort on paper when they hire you to work on web sites. You needed to make your software slim and efficient to work on those devices. Then the processor power, memory and drive capacity just exploded. Each step of the way, applications and games thrived. The problem was... they were the same as before, only larger. Wolfenstein became Doom became Quake became Counter Strike became Call of Duty and beyond, the resolution, the realism, the environment ever evolving, but the game staying the same: get some guns and kill something. But still, it was OK, we like things bigger, we want more pixels. It doesn't matter that the Windows operating system grows exponentially to use the increasing space and processing power, yet we use it in about the same way as before. It doesn't matter that the single player games just look better and have the same or even less complexity than the games ten years before. It's a status quo we can live with.

But then browsers came along. Suddenly, there is a whole new market: online apps. One server to bind them all. And we again lament the lack of resources, as we use slow Javascript and try the ever annoying Java applets, and somehow we settle on Flash. It can be used on any operating system, almost any browser, so let's make games with it and place them online. The only reason we do not make entire web sites in Flash remains SEO. And we lament the lack of programming tools for something that was originally created only for online animation in commercial ads, but we can live with it. All the games we played so joyfully on 80386 machines we can now play in a browser, in Flash, on machines that are 100 times faster. No problem there.

Then the smartphones and tablets arrived, with their new operating systems, their weird resolutions and their direct dislike of Flash. Suddenly Flash is no good anymore: it is not open enough, not fast enough, not compatible enough. Instead, let's switch to HTML5 and Javascript, the backbone of the Web. Let's use those. Sure, now we need a faster Javascript, so we think on it a little and Kaboom! Javascript is suddenly 8 times faster. We need new HTML concepts and ideas and Kapow! the HTML standard suddenly changes after years of wallowing between panels and committees. Nothing can stand in the way of change now, we even have portable devices that are faster than the computers we used 10 years ago.

And so we get to today, when my Athlon II 2500+ computer is too slow to play flash games without overheating, because they are made with frameworks designed to output HTML and Javascript. And that is why I can't even move the mouse in browser based HTML5 and Javascript games, even when all the page wants to do is let me play Angry Birds, a game that would have worked fine, with almost the same level of graphics and certainly the same level of intelligence and entertainment, on a 33Mhz computer from 20 years ago.

I could live with all that, though. I could buy another computer, after all it is a wonder this one even works anymore, but it bothers me so much that I have games and films and software that have been working on this machine for so long and they are mostly better than what I can find today. It bothers me to buy a smartphone or a tablet only to see my rights to use it restricted and conditioned from the people that make them. It bothers me to have lived 20 years with computers, only to have more pixels at the end. I can even imagine my LCD coffin, being put into the ground, with people crying over the touching (really, you can touch them!) floating images from it, while some people would discuss the number of pixels the coffin has. He lived a good life, he got pixels.



I think that The Checklist Manifesto is a book that every technical professional should read. It is simple to read, to the point and extremely useful. I first heard about it in a Scrum training and now, after reading it, I think it was the best thing that came out of it (and it was a pretty awesome training session). What is this book about, then? It is about a surgeon who researches the way a simple checklist can improve the daily routine in a multitude of domains, but mainly, of course, in surgery. And the results are astounding: a twofold reduction in operating room accidents and/or postoperative infections and complications. Atul Gawande does not stop there, though; he uses examples from other fields to bring his point around, focusing a lot on the one that introduced the widespread use of checklists: aviation.

There is a lot to learn from this book. I couldn't help always comparing what the author had to say about surgery with the job I am doing, software development, and with the Scrum system we are currently employing. I think that, had he heard of Scrum and the industrial management processes it evolved from, Gawande would surely have talked about it in the book. There is no technical field that could not benefit from this, including things like playing chess or one's daily routine. The main idea of the book is that checklists take care of the simple, dumb things that we have to do, in order to unclutter our brain for the complex and intuitive work. It enables self discipline and allows for unexpected increases in efficiency. I am certainly considering using some of the knowledge I gained in my own life, and not only at the workplace.

What I could skim from the book, things that I marked as worthy to remember:
  • Do not punish mistakes, instead give more chances to experience and learning - this is paramount to any analytical process. The purpose is not to kill the host, but to help it adapt to the disease. Own your mistakes, analyse them, learn from them.
  • Decentralize control - let professionals assume responsibility and handle their own jobs as they know best. Dictating every action from the top puts enormous pressure on few people that cannot possibly know everything and react with enough speed to the unpredictable
  • Communication is paramount in managing complex and unexpected situations, while things like checklists can take care of simple and necessary things - this is the main idea of the book, enabling creativity and intuition by checking off the routine stuff
  • A process can help only by changing behaviour - Gawande gives an example where soap was freely given to people, together with instructions on how and when to use it. It had significant beneficial effects on people, not because of the soap per se, but because it changed behaviour. They were already buying and using soap, but the routine and discipline of soap use was the most important result
  • Team huddles - like in some American sports, when a team is trying to achieve a result, they need to communicate well. One of the important checks for all the lists in the book was a discussion between all team members describing what they are about to do. Equally important is communicating during the task, but also at the end, where conclusions can be drawn and outcomes discussed
  • Checklists can be bad - a good checklist is precise, to the point, easy to use. A long and verbose list can impede people from their task, rather than help them, while vague items in the lists cause more harm than good
  • A very important part of using a checklist system is to clearly define pause points - they are the moments at which people take the list and check things from it. An undefined or vaguely defined pause point is just as bad as useless checklist items
  • Checklists are of two flavours - READ-DO, like a food recipe, with clear actions that must be performed in order, and DO-CONFIRM, where people stop to see what was accomplished and what is left to do, like a shopping list
  • A good checklist should optimally have between five and nine items - the number of items the human brain can easily remember. This is not a strong rule, but it does help
  • Investigate failures - there is no other way to adapt
  • A checklist gotcha is the translation - people might make an effort to make a checklist do wonders in a certain context, only to find that translating it to other cultures is very difficult and prone to errors. A checklist is itself subject to failure investigation and adaptation
  • Lobbying and greed are hurting us - a particularly emotional bit of the book is a small rant in which the author describes how people would have jumped on a pill or an expensive surgical device that would have brought the same great results as checklists, only to observe that people are less interested in something easy to copy, distribute and that doesn't bring benefits to anyone except the patients. That was a painful lesson
  • The star test pilot is dead - there was a time when crazy brave test pilots would risk their lives to test airplanes. The checklist method has removed the need for unnecessary risks and slowly removed the danger and complexity in the test pilot work, thus destroying the mythos. That also reduced the number of useless deaths significantly.
  • The financial investors that behave most like airline captains are the most successful - they balance their own greed or need for excitement with carefully crafted checklists, enabling their "guts" with the certainty that small details were not missed or ignored for reasons of wishful thinking
  • The Hudson river hero(es) - an interesting point was made when describing the Hudson river airplane crash. Even if the crew worked perfectly with each other, keeping their calm in the face of both engines suddenly stopping, calming and preparing the passengers, carefully checking things off their lists and completing each other's tasks, the media pulled hard to make only the pilot a hero. Surely he denied it every time and said that it was a crew effort because he was modest. Clearly he had everything under control. That did not happen and it also explains why the checklist is so effective and yet so few people actually employ it. We dream of something else
  • We are not built for discipline - that is why discipline is something that enables itself. It takes a little discipline to become more disciplined. A checklist ensures a kind of formal discipline in cases previously analysed by yourself. It assumes control over the emotional need for risk and excitement.
  • Optimize the system, not the parts - it is always the best choice to look at something as a whole and improve it as a whole. The author mentions an experiment of building a car from the best parts, taken from different companies. The result was a junk car that was not very good. The way the parts interact with one another is often more important than individual performance

I am ending this review with the two YouTube videos on how to use and how not to use the WHO Surgical Safety Checklist that Atul Gawande created for surgical teams all around the globe.



I had a pretty strange bug to fix. It involved a class used in a web page that provided localized strings, only it didn't seem to work for Japanese, while French, English or German worked OK. The resource class was used like this: AdminUIString.SomeStringKey, where AdminUIString was a resx file in the App_GlobalResources folder. Other similar global resource resx classes were used in the page and they worked! The only difference between them was the custom tool that was configured for them. My problem class was using PublicResXFileCodeGenerator with the Resources namespace, while the other classes used GlobalResourceProxyGenerator, without any namespace.

Now, changing the custom tool did solve the issue there, but it didn't solve it in some integration tests, where it still failed. The workaround for this was to use HttpContext.GetGlobalResourceObject("AdminUIString", "SomeStringKey").ToString(), which is pretty ugly. Since our project was pretty complex, using bits of ASP.Net MVC and (very) old school ASP.Net, no one actually understood where the difference came from. Here is an article that partially explains it: Resource Files and ASP.NET MVC Projects. I say partially, because it doesn't really solve my problem in a satisfactory way. All it says is that I should not use global resources in ASP.Net MVC; it doesn't explain why it fails so miserably for Japanese, nor does it offer a magical fix for the problem without refactoring the convoluted resource mess we have in this legacy project. It will have to do, though, as no one is budgeting refactoring time right now.
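
To at least keep the ugliness in one place, the workaround can be wrapped in a small helper. This is just a sketch with a name I made up, not code from the actual project:

// hypothetical helper that hides the GetGlobalResourceObject workaround
public static class AdminUIStrings
{
    public static string Get(string key)
    {
        // GetGlobalResourceObject returns null for a missing key,
        // so fall back to the key itself instead of throwing
        object value = System.Web.HttpContext.GetGlobalResourceObject("AdminUIString", key);
        return value == null ? key : value.ToString();
    }
}

// usage: label.Text = AdminUIStrings.Get("SomeStringKey");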


I've stumbled upon the Latvian Gambit and wanted to test it immediately against my colleagues. As you will see in the video, it seems a wonderfully aggressive opening, something akin to Shock and Awe, riddled with traps against your opponent. The truth is that it is an unprincipled opening, abandoning material advantage in the hope that your adversary will slip up and expecting you to have the skills to attack and defend accurately as the game progresses. I did manage to capture the queen once, but only after pointing out to my opponent that he could have forked my king and rook, so that the trap would work - he hadn't noticed. In the rest of the games, none of them running the course of the video you will see, the lack of skill on both sides of the table forced me to abandon this gambit for now, instead looking for something more principled and more appropriate for my playing level.

So here is the game from TheChessWebsite:

[youtube:2flPdsk9uz4]

I've experimented with chess engines and watched other videos about the gambit and constructed a rather complex PGN file. You can play with it here. Don't forget to click on the variations to see how the game could progress. There is even a full game there, from a video that has the link in the comment.
[Event "The Latvian Gambit"]
[Site "Siderite's Blog"]
[Date "2012.04.18"]
[Round "?"]
[White "Siderite"]
[Black "Siderite"]
[Result "0-1"]
[BlackElo "2400"]
[ECO "C63"]
[Opening "Spanish"]
[Time "13:45:38"]
[Variation "Schliemann, 4.Nc3 fxe4 5.Nxe4 Nf6"]
[WhiteElo "2400"]
[TimeControl "0+60"]
[Termination "normal"]
[PlyCount "11"]
[WhiteType "human"]
[BlackType "human"]

1. e4 e5 2. Nf3 f5 3. Nxe5 (3. Bc4 3. .. fxe4 4. Nxe5 Nf6 (4. .. Qg5 5. d4
Qxg2 6. Qh5+ g6 7. Bf7+ Kd8 8. Bxg6 Qxh1+ 9. Ke2 c6) (4. .. d5 5. Qh5+ g6
6. Nxg6 hxg6 7. Qxg6+ (7. Qxh8 Kf7 8. Qd4 Be6 9. Bb3 Nc6 10. Qe3 Bh6 11. f4
Nge7) 7. .. Kd7 8. Bxd5 Nf6 9. Nc3 Qe7) 5. Nf7 Qe7 6. Nxh8 d5 7. Be2 (7.
Bb3 Bg4 {White loses the queen, one way or another} 8. f3 exf3+ 9. Kf2 Ne4+
10. Kf1 (10. Kg1 f2+ 11. Kf1 Bxd1) 10. .. fxg2+ 11. Kxg2 Bxd1)) (3. Nc3 3.
.. Nf6 {Continue as for the king gambit (reversed)}) (3. exf5 3. .. e4 4.
Ne5 Nf6 5. Be2 d6 6. Bh5+ Ke7 7. Nf7 Qe8 8. Nxh8 Qxh5 9. Qxh5 Nxh5) (3. d4
fxe4 4. Nxe5 Nf6 5. Bg5 d6 6. Nc3 dxe5 7. dxe5 Qxd1+ 8. Rxd1 h6 9. Bxf6
gxf6) 3. .. Qf6 (3. .. Nc6 4. Qh5+
{https://www.youtube.com/watch?v=lZHGgEGM6SQ} (4. Nxc6 bxc6 5. exf5 Nf6 6.
d4 d5 7. Bd3 Bd6 8. Be3 O-O 9. Nd2 Rb8 10. Rb1 Qe7 11. O-O h5 12. Nf3 Ne4
13. Bxe4 dxe4 14. Ng5 Bxf5 15. Qxh5 Rf6 16. Nh3 Qd7 17. Bg5 g6 18. Qh4 Rf7
19. Nf4 Rh7 20. Qg3 Kg7 21. Qc3 Rbh8 22. d5+ Kg8 23. h3 {correct move dxc6}
Bxh3 24. Qxc6 (24. Nxh3 Rxh3 25. gxh3 Rxh3) 24. .. Bf5 25. Nh3 Bxh3 26. Bf6
Bxg2 {Bh2+ would have been mate in 4} (26. .. Bh2+ 27. Kxh2 Bxg2+ 28. Kg3
Rh3+ 29. Kxg2 Qg4#) 27. f4 Qxc6 28. dxc6 Bf3 29. b4 Bxf4 30. Rbe1 Rh1+ 31.
Kf2 R8h2#) 4. .. g6 5. Nxg6 (5. Nxc6 dxc6) 5. .. Nf6 6. Qh4 Rg8 7. Nxf8 Rg4
8. Qh6 Rxe4+ 9. Kd1 (9. Be2 Nd4 10. Nc3 Nxe2 11. Nxe2 Qe7) 9. .. Ng4) (3.
.. Bc5 4. exf5 Bxf2+ 5. Kxf2 Qh4+ 6. Kf3 (6. Kg1 6. .. Qd4#) (6. Ke2 6. ..
Qe4+) (6. g3 Qd4+ 7. Kg2 Qxe5 8. Nc3 Qxf5 9. Bd3 Qf7 10. b3 Nf6 11. Re1+
Kd8 12. Qf3 Nc6 13. Ne4 Qe7 14. Nxf6 Qxf6 15. Qxf6+ gxf6 16. Bb2) 6. .. Nf6
(6. .. Ne7 7. Nc3 d6 8. g3 Qh5+ 9. g4 Qh4 10. Qe1 Qxe1 11. Bb5+ Nbc6 (11.
.. c6 12. Rxe1 cxb5 13. Nd3 Nc6 14. Nxb5) 12. Rxe1 dxe5 13. Rxe5 O-O 14.
Re4 h5 15. h3 Nxf5 16. Bc4+ Kh7 17. gxf5 Bxf5 18. Kg2 Bxe4+ 19. Nxe4 Rae8)
(6. .. b5)) (3. .. fxe4 4. Qh5+ g6 5. Nxg6 Nf6 6. Qe5+) (3. .. d6 4. Qh5+
g6 5. Nxg6 Nf6 6. Qh4) 4. Nc4 (4. d4 d6 5. Nc4 fxe4 6. Nc3 Qg6 7. f3 exf3
8. Qxf3 Nc6 9. Bd3 Qg4) 4. .. fxe4 5. Nc3 Qf7 (5. .. Qg6 6. d3 exd3 7. Bxd3
Qxg2 8. Be4 Qh3 {At this point black has not developed and is lost}) 6. d4
(6. Nxe4 d5) 0-1



Update: Here is another analysis of the Latvian Gambit, by Abby Marshall.
Roman Dzindzichashvili considers the Latvian gambit a sign of mental illness.
Chessexplained also has a video about it.

It was long overdue for me to read a technical book, and I've decided to go for a classic from 1999 about refactoring, written by software development icons such as Martin Fowler and Kent Beck. As such, it is not a surprise that Refactoring: Improving the Design of Existing Code feels a little dated. However, not as much as I had expected. You see, the book is trying to familiarize the reader with the idea of refactoring, something programmers of these days don't need. In 1999, though, that was a breakthrough concept and it needed not only to be explained, but lobbied for. At the same time, the issues they describe regarding the process of refactoring, from the mechanics to the obstacles, feel as recent as today. Who didn't try to convince their managers to allow them a bit of refactoring time in order to improve the quality and readability of code, only to be met with the always pleasant "And what improvement would the client see?" or "Are there ANY risks involved?"?

The refactoring book starts by explaining what refactoring means, from the noun, which means an individual move, like Extract Method, to the verb, which represents the process of improving the readability and quality of the code base without changing functionality. In defense of the managerial point of view, somewhere at the end of the book, the authors submit that big refactoring cycles are usually a recipe for disaster, instead preaching for small, testable refactorings on the areas you are working on: clean the code before you add functionality. Refactoring also promotes software testing. One cannot be confident they did not introduce bugs when they refactor if the functionality is not covered by automated or at least manual tests. One of the most important tenets of the book is that you write code for other programmers (or for yourself), not for the computer. Development speed comes from quickly grasping the intention and implementation when reading, maintaining and changing a bit of code. Refactoring is the process that improves the readability of code. Machines go just as fast no matter how you write the code, as long as it works.

The book first describes and advocates refactoring, then presents the various refactoring moves in a structured way, akin to the software patterns that Martin Fowler also attempted to catalog, then has a few chapters written by the other authors, with their own view of things. It can be used as a reference, I guess, even if Fowler's site does a better job at that. It is also an interesting read, even if, overall, it felt to me like a rehearsal of my own ideas on the subject. Many of the refactorings in the catalog are now automated in IDEs, but the more complex ones have not only the mechanics explained, but also the reasons why they should be used and where. That structured way of describing them might feel like repeating the obvious, but I bet that, if asked, you couldn't come up with a conscious description of the place a specific refactoring should be used. Also, while reading those specific bits, I kept fantasizing about an automated tool that could suggest refactorings, maybe using FxCop or something like that.

Things I've marked down from the book, in the order I wrote them down in:
  • Refactoring versus Optimization - Optimizing the performance or improving some functionality should not be mixed up with the refactoring of code, which aims to improve readability of code while preserving the initial functionality. Mixing them up is pitting the two essential stages of development one against the other.
  • Methods should use the data of their own object - one of the telltale signs of a need to refactor is when a method from one object uses data from another object. It smells like the method should be moved into the responsibility of that other object.
  • When it is easy to refactor, choose a simple design - Of course the opposite is true, as well: when you know it will be hard to refactor a piece of code, try to design it first. If not, it is better to not add unnecessary complexity. This is in line with the KISS concept.
  • Split your application into self encapsulated parts - One of the ways to simplify refactoring is to separate your application into bits that you can manage separately. If you didn't design your application like that, try to first split it, then refactor.
  • Whenever you need to write a comment, consider extracting a method with a meaningful name - or renaming methods to be more expressive.
  • Consider polymorphism when seeing a switch statement - Now that is an interesting topic in itself. Why would polymorphism help here? How could it be simpler to understand than a switch/case statement? The idea behind this is that if you have a switch somewhere, you might have it somewhere else as well. Instead of taking decisions inside each method, it is better to split that behaviour in separate classes, each describing the particular value that the switch would have operated on.
  • Test before refactoring - this would have been drilled in your head already, but if not, the book will do that to you. In order to not add faults to the program with the refactoring, make sure you have tests for the existing functionality, tests that should pass after the refactoring process, as well.
  • The Quantity pattern - Review the Quantity pattern in order to improve readability and encapsulate simple common actions performed on specific types of units.
  • Split conditionals into methods - in other words try to simplify your conditional blocks to if conditionMethod() then ifMethod() else elseMethod(). It might seem a sure way to get to a fragmented code base, with small methods everywhere, but the idea is sound. A condition, after all, is an intention. Encapsulate it into a well named method and it will be very clear what the programmer intended. Maybe the same method will be used in other places as well, and then, using polymorphism, one can get rid of the conditional altogether.
  • Use Null objects - an interesting concept that I haven't even considered before. It is easy to recognize the need for a Null object when there are a lot of checks for null. if x==null then something() else x.somethingElse() would be turned into a simple x.something() if, instead of null, x were an object that represents empty, but still has attached behaviour. An interesting side effect of this is that often the Null object can be made an immutable singleton (see the sketch after this list)
  • Code inside Assertions always executes - This is a gotcha I found interesting. Imagine the following code: Assert.IsTrue(SomeCondition()). Even if the Assert object is designed to not execute anything in Release mode, only compiled in Debug, the method SomeCondition() will execute all the time. One option is to use an extra condition: Assert.IsTrue(Assert.On&&SomeCondition()) or, in C#, try to send an expression: Assert.IsTrue(()=>SomeCondition())
  • Careful when replacing method parameters with parameter object in parallel processing scenarios - Which nowadays means always. Anyways, the idea is that old libraries designed for parallel processing used large value parameter lists. One might be inclined to Introduce Parameter Object, but that introduces a reference object that might lead to locking issues. Just another gotcha.
  • Separate Modifier from Query - This is a useful convention to remember. A method should either get some information (query) or change some data (modifier), not both. It makes the intention clear.
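
As promised in the Null objects item, here is a minimal sketch of the idea in C#; the Customer example is my own, not taken from the book:

public abstract class Customer
{
    public abstract string Name { get; }
    public abstract decimal GetDiscount();
}

public class RealCustomer : Customer
{
    private readonly string name;
    public RealCustomer(string name) { this.name = name; }
    public override string Name { get { return name; } }
    public override decimal GetDiscount() { return 0.1m; }
}

public class NullCustomer : Customer
{
    // immutable singleton, as the note above suggests
    public static readonly NullCustomer Instance = new NullCustomer();
    private NullCustomer() { }
    public override string Name { get { return "occupant"; } }
    public override decimal GetDiscount() { return 0m; }
}

// instead of "if (customer == null) ... else customer.GetDiscount()",
// the lookup returns NullCustomer.Instance and callers simply call the methods:
// decimal discount = FindCustomer(id).GetDiscount();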

That's about it. I have wet dreams of cleaning up the code base I am working on right now, maybe in a pair programming way (also a suggestion in the book, and a situation where pair programming really seems a great opportunity), but I don't have the time. Maybe this summary of the book will inspire others who do have the time.

This is the second book by William Golding that I am reading. Unbeknownst to me, it is part of a trilogy! Even Nobel prize winners for Literature seemingly can't help but write the damn things. How is one to finish reading all they want to read?! Anyway, the guy wrote Lord of the Flies, the reason for the Nobel and a story I enjoyed both as a book and as a movie. Rites of Passage has many similarities with that book. It happens at sea, on a ship, not on an island, but just as removed from the normal rules of civilised society. It reveals dark depths of human morality. It exposes hypocrisy and narrow mindedness. Golding also won an award for it, the Booker prize this time.

So, what is it about? There is this nobleman, godson of an unidentified but very important man, travelling on a ship from England to "the Antipodes", where he is to take an important state office. The guy is the most socially elevated person on the ship, giving him some sort of equal footing with the captain himself, who is normally king and church on his ship. Now, the book appears as the journal of said guy, by the name of Talbot, written to his godfather as a means to thank and humour the man, restricted in his own life by gout. That is where the book is at its most difficult: the language is that of an English noble from the 19th century, with antiquated words and funny ways of turning them into sentences. I pride myself on having understood and finished it, but now that I know it is part of a trilogy, I am a bit disconcerted.

Anyway, the journal of Talbot presents us with the marvellous world of a ship at sea, forcing the reader to both empathise with the man and share some of his opinions of the lower castes and of the social system. We get to share his view of the current events as well. He thinks of himself as noble, intellectually and morally superior to other people on the ship, while exercising a benevolent and understanding indulgence at the actions of others. He truly seems to be the most intellectual person on board, as educated in comparison with others as a college graduate is compared to a fourth grade dropout. However, he is a bit of a dick, full of condescension and of the weaknesses of all men.

The main moral of the story is that his arrogant views on the world of the ship are reasonable, if one takes as true the presuppositions he makes as a member of his social class and given his position on the ship. However, they are completely wrong as related to what really happened. We understand this in the second half of the book, where he relates the contents of another man's journal, whose perspective on the situations turns our perceptions on their head. Also, the finale is quite grotesque, revealing that his blindness to the world around him is, more or less, voluntary and part of the social system he represents.

This is a difficult book to read, but one that opens eyes, so to speak. Not only do we get to see how English was meant to be used (bah, Americans! :) ), but the author also takes us through the recesses of the human mind and society, and the experience of the read is a visceral and personal one. If you can handle the language and the slow pace, I recommend reading this book.

There is also a quicker option for the ones who want to see what happens without having to read the books. The To the Ends of the Earth trilogy has been adapted to a miniseries, by the BBC, of course, starring Benedict Cumberbatch.

It's been a long time since I've posted a music entry. Here is one from Hanzel und Gretyl, an Industrial Rock band with Nazi overtones singing mostly in German. Actually, I couldn't say what ideology they have, since the band contains two people from New York who are not even German! But I like their music, this one in particular (I've listened to it on repeat for a day or so). Enjoy!

Towers of Midnight, the 13th book in the Wheel of Time series and the book before last in the saga, was a great read. As the second volume written by Brandon Sanderson after Robert Jordan's death, it benefited a lot from Sanderson's fluid writing style and the fact that all the stories are coming to their end.

The plot was interesting, too, with all three ta'veren characters doing their part, all the girls (except Nynaeve!) being involved and quite a few Forsaken as well, with interesting gimmicks from the Darkfriend previously known as Luc and even a gholam! The entire Trakand clan is represented and there are blessedly few Aiel. In this book Moiraine is found, there is some news of lovely and terrible Lanfear and someone dies (although, of course, nobody important, and they die quite uselessly) while battling Aelfinn and Eelfinn, while The Final Battle approaches, with the revelation of red veiled dark Aiel?

My feeling of the book was one of discovery. Brandon Sanderson is writing this book with the enthusiasm of someone who just started work on a story (because he did!), yet with all the legacy material from Robert Jordan to build upon. No wonder the last part of the series was supposed to be just one book and ended up as three. That means the story feels fresh, even if I have been reading it for the last four months. Also, it is no surprise that Sanderson could not maintain the fearful and deferential view of women that Jordan cultivated in the previous books. Oh, don't worry, they continue to meddle in all the affairs of others and their behaviour is just as erratic and irrational as before; it's just that men don't act like cowed idiots anymore and actually have a backbone. I don't know if this was supposed to happen in the original author's view, but I am willing to bet that it was not. Actually, this is one of the worst points of the story, when Elayne (the queen of Andor), Egwene (the Amyrlin Seat) and Faile (Perrin's wife and princess of Saldaea) are plotting and acting like they own the world, while at the same time putting themselves in all sorts of dangers from which they have to be saved by others. Of course, the others more often end up dead, but there is always some handy rationalization for behaving like an idiot. Amazingly enough, though, Cadsuane and even Moiraine are behaving quite well. The End must really be coming!

The bottom line is that I have finished reading the series. The last book won't be published until 2013, so now I am free to read other books. Yatta! I've gathered a technical book about refactoring, The Rites of Passage, by William Golding, and The Checklist Manifesto, which I hear is both instructive and interesting to read (not to mention short). So, I guess the wheel won't be turning as the wheel wants until next year. Meanwhile, you're stuck with me!