I was attempting to add very detailed logging (tracing) to my application. In order to do that, I had to change hundreds of objects. That wouldn't do. Fortunately, .NET has a nice feature called RealProxy. Let's see a quick and dirty example:
class Program
{
    static void Main(string[] args)
    {
        JsonConvert.DefaultSettings = () => new JsonSerializerSettings
        {
            ReferenceLoopHandling = ReferenceLoopHandling.Ignore,
            NullValueHandling = NullValueHandling.Ignore,
            Formatting = Formatting.Indented
        };

        Test test = new Test();
        test.field = "test";
        test.Property = "Test";
        string outObject;
        string refObject = "ref";
        var result = test.Method("test1", "test2", ref refObject, out outObject);
        Console.WriteLine(JsonConvert.SerializeObject(new object[] {
            test,
            result,
            refObject,
            outObject
        }));
        Console.ReadKey();
    }
}

class Test
{
    public string field;
    public string Property { get; set; }

    public string Method(string parameter1, string parameter2,
        ref string refObject, out string outObject)
    {
        if (parameter1 == null)
            throw new ArgumentNullException(nameof(parameter1), "Parameter one cannot be null");
        refObject += " reffed";
        outObject = "outed";
        return $"{parameter1} : {parameter2}";
    }
}

So I defined an object with a field, a property and a method, standing in for what my application does. The program then displays the object and the result of the method call. Here is the result:
[
  {
    "field": "test",
    "Property": "Test"
  },
  "test1 : test2",
  "ref reffed",
  "outed"
]

I would like to log everything that happens with my Test object. Well, as such I can't do anything, I need to change the code a bit, like this:
class Program
{
    static void Main(string[] args)
    {
        JsonConvert.DefaultSettings = () => new JsonSerializerSettings
        {
            ReferenceLoopHandling = ReferenceLoopHandling.Ignore,
            NullValueHandling = NullValueHandling.Ignore,
            Formatting = Formatting.Indented
        };

        Test test = (Test)new LoggingProxy(new Test()).GetTransparentProxy();
        test.field = "test";
        test.Property = "Test";
        string outObject;
        string refObject = "ref";
        var result = test.Method("test1", "test2", ref refObject, out outObject);
        Console.WriteLine(JsonConvert.SerializeObject(new object[] {
            test,
            result,
            refObject,
            outObject
        }));
        Console.ReadKey();
    }
}

class Test : MarshalByRefObject
{
    public string field;
    public string Property { get; set; }

    public string Method(string parameter1, string parameter2,
        ref string refObject, out string outObject)
    {
        if (parameter1 == null)
            throw new ArgumentNullException(nameof(parameter1), "Parameter one cannot be null");
        refObject += " reffed";
        outObject = "outed";
        return $"{parameter1} : {parameter2}";
    }
}

class LoggingProxy : RealProxy
{
    private readonly object _target;

    public LoggingProxy(object obj) : base(obj?.GetType())
    {
        _target = obj;
    }

    public override IMessage Invoke(IMessage msg)
    {
        if (msg is IMethodCallMessage methodCall)
        {
            var arguments = methodCall.Args.ToArray();
            var result = methodCall.MethodBase.Invoke(_target, arguments);
            return new ReturnMessage(
                result,
                arguments,
                arguments.Length,
                methodCall.LogicalCallContext,
                methodCall);
        }
        return null;
    }
}

So this is what I did above:
  • I've inherited my Test class from MarshalByRefObject
  • I've created a LoggingProxy class that inherits from RealProxy and implements the Invoke method
  • I've replaced new Test(); with (Test)new LoggingProxy(new Test()).GetTransparentProxy();

Running it we get the same result.

Time to add some logging. I will write stuff on the Console, too, for this demo. Here are the changes to the LoggingProxy class:
class LoggingProxy : RealProxy
{
    private readonly object _target;

    public LoggingProxy(object obj) : base(obj?.GetType())
    {
        _target = obj;
    }

    public override IMessage Invoke(IMessage msg)
    {
        if (msg is IMethodCallMessage methodCall)
        {
            var arguments = methodCall.Args.ToArray();
            string typeName;
            try
            {
                typeName = Type.GetType(methodCall.TypeName).Name;
            }
            catch
            {
                typeName = methodCall.TypeName;
            }
            try
            {
                Console.WriteLine($"Called {typeName}.{methodCall.MethodName}" +
                    $"({JsonConvert.SerializeObject(arguments)})");
                var result = methodCall.MethodBase.Invoke(_target, arguments);
                Console.WriteLine($"Success for {typeName}.{methodCall.MethodName}" +
                    $"({JsonConvert.SerializeObject(arguments)}): " +
                    $"{JsonConvert.SerializeObject(result)}");
                return new ReturnMessage(
                    result,
                    arguments,
                    arguments.Length,
                    methodCall.LogicalCallContext,
                    methodCall);
            }
            catch (Exception exception)
            {
                Console.WriteLine($"Error for {typeName}.{methodCall.MethodName}" +
                    $"({JsonConvert.SerializeObject(arguments)}): " +
                    $"{exception}");
                return new ReturnMessage(exception, methodCall);
            }
        }
        return null;
    }
}

It's the same as before, but with a try/catch block and some extra Console.WriteLines. Here is the output:
Called Object.FieldSetter([
  "RealProxyTests2.Test",
  "field",
  "test"
])
Success for Object.FieldSetter([
  "RealProxyTests2.Test",
  "field",
  "test"
]): null
Called Test.set_Property([
  "Test"
])
Success for Test.set_Property([
  "Test"
]): null
Called Test.Method([
  "test1",
  "test2",
  "ref",
  null
])
Success for Test.Method([
  "test1",
  "test2",
  "ref reffed",
  "outed"
]): "test1 : test2"
Called Object.GetType([])
Success for Object.GetType([]): "RealProxyTests2.Test, RealProxyTests2, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null"
Called Object.FieldGetter([
  "RealProxyTests2.Test",
  "field",
  null
])
Success for Object.FieldGetter([
  "RealProxyTests2.Test",
  "field",
  "test"
]): null
Called Test.get_Property([])
Success for Test.get_Property([]): "Test"
[
  {
    "field": "test",
    "Property": "Test"
  },
  "test1 : test2",
  "ref reffed",
  "outed"
]

A bounty of information. The first lines show the setting of the field, the Property and the execution of Method. But then there are a GetType call, a field getter and a Property getter. What's that about? That's JsonConvert, serializing the Test proxy.
Warning: I've copied methodCall.Args into a local variable called arguments. At first glance it might appear superfluous, but methodCall.Args is not really an array of object, even if it appears that way in the debugger. It is read-only: changing any of its items has no effect.

Remember how we defined the proxy in Program.Main? We can add a helper method to our proxy class, let's call it Wrap:
public static T Wrap<T>(T obj)
{
    if (obj is MarshalByRefObject marshalByRefObject)
    {
        return (T)new LoggingProxy(marshalByRefObject).GetTransparentProxy();
    }
    return obj;
}

Conclusion:

While this works, there are a series of issues related to it:
  1. RealProxy is only available for .NET Framework, not .NET Core (see DispatchProxy for that)
  2. Serialization is not as straightforward as in this demo (see my blog post about Newtonsoft serialization)
  3. You need to inherit from MarshalByRefObject, so this solution doesn't work for classes that already have a base class
  4. Performance wise, you should make sure you are not serializing or logging anything unless the correct log level is set (something like logger.LogTrace(JsonConvert.SerializeObject(...)) would not work, because the JSON serialization occurs regardless, before LogTrace even executes; see the sketch after this list)
  5. Also, this wrapping is not free. With a simple proxy that did nothing but execute the code, it took 43 times longer to run. Of course, that's because the actual work of setting properties or returning a string is basically zero. When I added a Thread.Sleep(1) in the method, it took almost the same amount of time. Just don't use it in performance sensitive applications
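
To illustrate point 4, here is a minimal sketch of guarding the expensive serialization behind a level check (assuming a Microsoft.Extensions.Logging ILogger; the helper name is made up):

using Microsoft.Extensions.Logging;
using Newtonsoft.Json;

static class TraceHelper
{
    public static void LogArguments(ILogger logger, object[] arguments)
    {
        // serialize only when tracing is enabled; otherwise the JSON
        // serialization would run even though LogTrace discards the message
        if (logger.IsEnabled(LogLevel.Trace))
        {
            logger.LogTrace(JsonConvert.SerializeObject(arguments));
        }
    }
}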

Final thoughts: in the same namespace as RealProxy there is a ProxyAttribute class that at first glance seems even better: you just decorate a class with the attribute and BANG! instant AOP. But it's not that simple. First of all, it works only on objects that inherit from ContextBoundObject, which itself inherits from MarshalByRefObject. And while it seems like a good idea to just replace MarshalByRefObject with ContextBoundObject in the code above, know that no generic class can inherit from it. There might be other restrictions, too. If you make Test inherit from ContextBoundObject, the debugger will already show new Test() as being a transparent proxy, without wrapping it in any code. It might still be usable in certain conditions, though.

I was trying to log some stuff (a lot of stuff) and I noticed that my memory went through the roof (16GB in a few seconds), then an OutOfMemoryException was thrown. I've finally narrowed it down to the JSON serializer from Newtonsoft.

First of all, some introduction on how to serialize any object into JSON: whether you use JsonConvert.SerializeObject(yourObject, new JsonSerializerSettings { <settings> }) or new JsonSerializer { <settings> }.Serialize(writer, object) (where <settings> are some properties set via the object initializer syntax), you will need to consider these properties: Formatting, ReferenceLoopHandling, NullValueHandling and PreserveReferencesHandling, each covered below.
We will use these classes to test the results:
class Parent
{
    public string Name { get; set; }
    public Child Child1 { get; set; }
    public Child Child2 { get; set; }
    public Child[] Children { get; set; }
    public Parent Self { get; internal set; }
}

class Child
{
    public string Name { get; set; }
    public Parent Parent { get; internal set; }
}

For this piece of code:
JsonConvert.SerializeObject(new Child { Name = "other child" }, settings)
you will get either
{"Name":"other child","Parent":null}
or
{
  "Name": "other child",
  "Parent": null
}
based on whether we use Formatting.None or Formatting.Indented. The other properties do not affect the serialization (yet).

Let's set up the following objects:
var child = new Child
{
    Name = "Child name"
};
var parent = new Parent
{
    Name = "Parent name",
    Child1 = child,
    Child2 = child,
    Children = new[]
    {
        child, new Child { Name = "other child" }
    }
};
parent.Self = parent;
child.Parent = parent;

As you can see, not only does parent have multiple references to child and one to itself, but the child also references the parent. If we try the code
JsonConvert.SerializeObject(parent, settings)
we will get an exception Newtonsoft.Json.JsonSerializationException: 'Self referencing loop detected for property 'Parent' with type 'Parent'. Path 'Child1'.'. In order to avoid this, we can use ReferenceLoopHandling.Ignore, which tells the serializer to ignore circular references. Here is the output when using
var settings = new JsonSerializerSettings
{
    Formatting = Formatting.Indented,
    ReferenceLoopHandling = ReferenceLoopHandling.Ignore
};

{
  "Name": "Parent name",
  "Child1": {
    "Name": "Child name"
  },
  "Child2": {
    "Name": "Child name"
  },
  "Children": [
    {
      "Name": "Child name"
    },
    {
      "Name": "other child",
      "Parent": null
    }
  ]
}

If we add NullValueHandling.Ignore we get
{
  "Name": "Parent name",
  "Child1": {
    "Name": "Child name"
  },
  "Child2": {
    "Name": "Child name"
  },
  "Children": [
    {
      "Name": "Child name"
    },
    {
      "Name": "other child"
    }
  ]
}
(the "Parent": null bit is now gone)

The default for the ReferenceLoopHandling property is ReferenceLoopHandling.Error, which throws the serialization exception above, but besides Error and Ignore we can also use ReferenceLoopHandling.Serialize. In that case we get a System.StackOverflowException: 'Exception of type 'System.StackOverflowException' was thrown.' as it tries to serialize ad infinitum.

PreserveReferencesHandling is rather interesting. It creates extra properties for objects like $id, $ref or $values and then uses those to define objects that are circularly referenced. Let's use this configuration:
var settings = new JsonSerializerSettings
{
    Formatting = Formatting.Indented,
    NullValueHandling = NullValueHandling.Ignore,
    PreserveReferencesHandling = PreserveReferencesHandling.Objects
};

Then the result will be
{
  "$id": "1",
  "Name": "Parent name",
  "Child1": {
    "$id": "2",
    "Name": "Child name",
    "Parent": {
      "$ref": "1"
    }
  },
  "Child2": {
    "$ref": "2"
  },
  "Children": [
    {
      "$ref": "2"
    },
    {
      "$id": "3",
      "Name": "other child"
    }
  ],
  "Self": {
    "$ref": "1"
  }
}

Let's try PreserveReferencesHandling.Arrays:
var settings = new JsonSerializerSettings
{
    Formatting = Formatting.Indented,
    NullValueHandling = NullValueHandling.Ignore,
    ReferenceLoopHandling = ReferenceLoopHandling.Ignore,
    PreserveReferencesHandling = PreserveReferencesHandling.Arrays
};

The result will then be
{
  "Name": "Parent name",
  "Child1": {
    "Name": "Child name"
  },
  "Child2": {
    "Name": "Child name"
  },
  "Children": {
    "$id": "1",
    "$values": [
      {
        "Name": "Child name"
      },
      {
        "Name": "other child"
      }
    ]
  }
}
which annoyingly adds an $id to the Children array. There is one more possible value, PreserveReferencesHandling.All, which causes this output:
{
  "$id": "1",
  "Name": "Parent name",
  "Child1": {
    "$id": "2",
    "Name": "Child name",
    "Parent": {
      "$ref": "1"
    }
  },
  "Child2": {
    "$ref": "2"
  },
  "Children": {
    "$id": "3",
    "$values": [
      {
        "$ref": "2"
      },
      {
        "$id": "4",
        "Name": "other child"
      }
    ]
  },
  "Self": {
    "$ref": "1"
  }
}

I personally recommend using PreserveReferencesHandling.Objects, which doesn't need setting the ReferenceLoopHandling property at all. Unfortunately, it adds an $id to every object, even if it is not circularly defined. However, it creates an object that can be safely deserialized back into the original, but if you just want a quick and dirty output of the data in an object, use ReferenceLoopHandling.Ignore with NullValueHandling.Ignore. Note that object references cannot be preserved when a value is set via a non-default constructor such as types that implement ISerializable.
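
If you want to verify the round-trip claim with the classes above, something like this should work (Json.NET honors the $id/$ref metadata when deserializing by default):

var json = JsonConvert.SerializeObject(parent, settings);
var copy = JsonConvert.DeserializeObject<Parent>(json, settings);
Console.WriteLine(ReferenceEquals(copy.Child1, copy.Child2)); // True: shared reference restored
Console.WriteLine(ReferenceEquals(copy.Self, copy));          // True: the cycle is restored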

Warning, though: this is still not enough! In my logging code I had used ReferenceLoopHandling.Ignore and the exception was quite different, an OutOfMemoryException. It seems that even with circular references handled, JsonSerializer will still mess up sometimes.

The culprits? Task<T> (or async lambdas sent as parameters) and an Entity Framework context object. The solution I employed was to check the type of the objects I send to the serializer and, if they were any of the offending types, replace them with the full names of their types, as in the sketch below.
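
Here is a minimal sketch of that idea; the checked types are illustrative (Task, delegates, an EF DbContext) and the helper name is made up:

// replace values the serializer chokes on with their type names
private static object MakeSerializable(object value)
{
    return (value is Task || value is Delegate || value is DbContext)
        ? value.GetType().FullName
        : value;
}

// usage:
// JsonConvert.SerializeObject(args.Select(MakeSerializable).ToArray(), settings);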

Hope it helps!


Intro


I had this situation where an application running in production was getting bigger and bigger until it devoured all available memory. I had to fix the problem, but how could I identify the cause? Running it on my computer wouldn't lead to the same increase in memory; what I wanted to see was why a simple web API would take 6GB of memory and keep growing.

The process is simple enough:
  1. Get a memory dump from the running process
  2. Analyze the memory dump

While this post relates specifically to process memory dumps, one can get whole memory dumps and analyze those. There are two types of process memory dump formats: mini and full. I will be talking about the full format.

Fortunately, for our process, there are only two steps. Unfortunately, both are fraught with problems. Let's go through each.

Getting a memory dump from a running process


Apparently that is as simple as opening Task Manager, right clicking on the running process in the Details tab and selecting Create dump file. However, what this does is suspend the process and then copy all of its memory into a file in %AppData%/Local/Temp/[name of process].dmp. If the process is being watched or some other issue occurs, the dumping will fail. Personally, I couldn't dump anything over 2GB when I was trying to get the info from w3wp.exe (the IIS process), as the process would restart.

There are a lot of other ways of getting a process memory dump, including crash dumps, periodic dumps and event driven dumps. ProcDump, one of the utilities in the free SysInternals suite, can do all of those. The Windows Error Reporting (WER) system also generates dumps for faulty or unresponsive applications or kernels. Other options for collecting a dump are userdump.exe and Windows Debugger (ntsd or windbg). Process Explorer, of course, can create two types of memory dumps for a process: mini and full.
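
For example, ProcDump can write a full process memory dump in one line, or a series of dumps at intervals (flags from the ProcDump documentation; replace the process name with your own):

:: one full (-ma) memory dump of w3wp, written to the given file
procdump -ma w3wp w3wp_full.dmp

:: three full dumps, 30 seconds apart
procdump -ma -s 30 -n 3 w3wp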

Analyzing a process memory dump


This is where it gets tricky. What you would like is something that opens the file and shows a nice interactive interface explaining exactly what hogs your memory. Visual Studio does that, but only the Ultimate version gives you the memory analysis option. dotMemory from JetBrains is great, but it also costs a lot of money.

My personal experience was using dotMemory during its five-day free trial, and it was as seamless as I would have wanted: I opened the file, it showed me a pie chart of what objects were hogging my memory, I clicked on them, looked at where and how they were stored and even what paths could be taken to create those instances. It was beautiful.

Another option is SciTech's .NET Memory Profiler, but it is also rather expensive. I am sure RedGate has something, too, but their trials are way too intrusive for me to download.

I have been looking for free alternatives and frankly all I could find is WinDBG, which is included in Windows Debugging Tools.

Windows Debugger is the name of the software. It is a very basic tool, but there are visualizers which use it to show information. One of them is MemoScope, which appears unmaintained at the moment and is kind of weird looking, but it works. Unfortunately, it doesn't even come close to what dotMemory was showing me. Where analyzing a random .NET app dump with dotMemory showed that the biggest memory usage was coming from MainWindow (makes sense), which in turn had a lvItems field that held most of the memory (it was a ListView with a lot of data), WinDbg and therefore MemoScope show the biggest usage coming from arrays of bytes. That also makes sense, physically, but there is no logical context. A step by step memory profiling guide using WinDbg can be found here.

Conclusion


I am biased towards JetBrains software, which is usually amazing, although I haven't used it in quite a while because of their obnoxious licensing system (you basically rent the software). I loved how quickly I got to the problem I had with dotMemory. .NET Memory Profiler from SciTech is way cheaper and seems to point in the right direction, but doesn't really compare in terms of quality. Anything else I've tried was quite subpar.


Doing it yourself


As they say, if you want to do something right, you gotta do it yourself.

There is a Windows API called MiniDumpWriteDump that you can use programmatically to create the dumps. However, as you've seen, that's not the problem; the dump analysis is.
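
For reference, a rough sketch of calling it from C# via P/Invoke (the dbghelp.dll signature is standard; 2 stands for MiniDumpWithFullMemory and all error handling is omitted):

using System;
using System.Diagnostics;
using System.IO;
using System.Runtime.InteropServices;

static class Dumper
{
    [DllImport("dbghelp.dll", SetLastError = true)]
    static extern bool MiniDumpWriteDump(IntPtr hProcess, uint processId,
        SafeHandle hFile, int dumpType, IntPtr exceptionParam,
        IntPtr userStreamParam, IntPtr callbackParam);

    public static void FullDump(Process process, string path)
    {
        using (var stream = new FileStream(path, FileMode.Create))
        {
            // 2 == MiniDumpWithFullMemory
            MiniDumpWriteDump(process.Handle, (uint)process.Id,
                stream.SafeFileHandle, 2, IntPtr.Zero, IntPtr.Zero, IntPtr.Zero);
        }
    }
}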

And it's hard to find anything that is not very deep and complicated. Memory dumps are usually analysed for forensic reasons. There are specialists who have written scores of books on memory dump analysis. What's worse, most people focus on crash dumps, with memory leak analysis usually done while debugging an application.

I certainly don't have the time to work on a tool to help me with this particular issue. I barely had the time to look for existing tools. But if you guys can add information to this blog post, I am sure it will help a lot of people going through the same thing.

Pretty Pictures


I didn't find a nice way of putting the images in the post's text, I am adding them here.

MemoryCache has a really strange behavior where, all of a sudden, without any warning or exception, it stops caching. You use Set or Add, it doesn't matter: Add returns true, Set throws no error, but the cache stubbornly remains empty. This behavior can be replicated easily by running this line:
MemoryCache.Default.Dispose();
From that moment on, MemoryCache.Default will not cache anything anymore.
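
A minimal demonstration of how quietly it fails (using System.Runtime.Caching):

MemoryCache.Default.Dispose();
var added = MemoryCache.Default.Add("key", "value", new CacheItemPolicy());
Console.WriteLine(added);                          // True, no exception
Console.WriteLine(MemoryCache.Default.Get("key")); // null: nothing was cached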

Of course, you will say, why the hell would you dispose the default object? First of all, why give me the option? It's not like I can set the Default instance after I dispose it, so this breaks a lot of things. Second of all, why pretend to continue working when in fact it is not? Most disposable classes throw exceptions if they are used after being disposed. And third of all, it might not be you who does the disposing.

In my case, I was trying to be a good boy and declare all my dependencies upfront. I was using StructureMap as an IoC framework and, for my class that was using a MemoryCache, I declared a constructor parameter of the basest type I could: ObjectCache. Then I registered MemoryCache.Default as the instance used when needing an ObjectCache. The problem is that each unit test would initialize and tear down the StructureMap container, which would take all the used instances that implement IDisposable and dispose them, including our friend MemoryCache.Default. As I couldn't find a simple way to tell StructureMap not to dispose the object, I had to create a MemoryCacheWrapper that implemented ObjectCache, was NOT IDisposable and delegated all methods to MemoryCache.Default, and use that as the singleton instance for ObjectCache in StructureMap.

If you are working with a .NET version prior to .NET 4.5, you may encounter the same issue with a bug in .NET: MemoryCache Empty : Returns null after being set

Sometimes we need to execute an async method synchronously. And before you bristle up and try to deny it, think of the following scenarios:
  • event handlers are by default synchronous
  • third party software might require you to implement a synchronous method (like a lot of Microsoft code, like Windows services)

All I want is to take public async Task ExecuteStuff(param1) and run it with ExecuteSynchronously(ExecuteStuff, param1).

"What's the problem?", you might ask, "Just use SomeMethod().Result instead of await SomeMethod()". Well, it doesn't work because of the dreaded deadlock (a dreadlock). Here is an excerpt of the .Net C# code for Task<T>.InternalWait (used by Result and Wait):
// Alert a listening debugger that we can't make forward progress unless it slips threads.
// We call NOCTD for two reasons:
// 1. If the task runs on another thread, then we'll be blocked here indefinitely.
// 2. If the task runs inline but takes some time to complete, it will suffer ThreadAbort with possible state corruption,
// and it is best to prevent this unless the user explicitly asks to view the value with thread-slipping enabled.
Debugger.NotifyOfCrossThreadDependency();
As you can see, it only warns the debugger; it doesn't crash, it just freezes. And this is just in case it catches the problem at all.

What about other options? Task has a method called RunSynchronously, it should work, right? What about Task<T>.GetAwaiter().GetResult() ? What about .ConfigureAwait(false)? In my experience all of these caused deadlocks.

Now, I am giving you two options, one that I tried and works and one that I found on a Microsoft forum. I don't have the time and brain power to understand how this all works behind the scenes, so I am asking for assistance in telling me why this works and other things do not.

First of all, two sets of extensions methods I use to execute synchronously an async method or func:
/// <summary>
/// Execute an async function synchronously
/// </summary>
public static TResult ExecuteSynchronously<TResult>(this Func<Task<TResult>> asyncMethod)
{
TResult result = default(TResult);
Task.Run(async () => result = await asyncMethod()).Wait();
return result;
}
 
/// <summary>
/// Execute an async function synchronously
/// </summary>
public static TResult ExecuteSynchronously<TResult, TParam1>(this Func<TParam1, Task<TResult>> asyncMethod, TParam1 t1)
{
TResult result = default(TResult);
Task.Run(async () => result = await asyncMethod(t1)).Wait();
return result;
}
 
/// <summary>
/// Execute an async function synchronously
/// </summary>
public static void ExecuteSynchronously(this Func<Task> asyncMethod)
{
Task.Run(async () => await asyncMethod()).Wait();
}
 
/// <summary>
/// Execute an async function synchronously
/// </summary>
public static void ExecuteSynchronously<TParam1>(this Func<TParam1,Task> asyncMethod, TParam1 t1)
{
Task.Run(async () => await asyncMethod(t1)).Wait();
}

I know, the ones returning a result would probably be better as return Task.Run(...).Result or even Task.Run(...).GetAwaiter().GetResult(), but I didn't want to take any chances. Because of how the compiler works, you can't really use them as extension methods like SomeMethod.ExecuteSynchronously(param1) unless SomeMethod is a variable of the appropriate Func type (e.g. Func<TParam1, Task<TResult>>). Instead you must use them like a normal static method: MethodExtensions.ExecuteSynchronously(SomeMethod, param1).
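
To make the difference concrete (ExecuteStuff and its string parameter are placeholders):

// a method group does not work with the extension syntax:
// ExecuteStuff.ExecuteSynchronously("param1"); // does not compile

// a variable of the right Func type does:
Func<string, Task> executeStuff = ExecuteStuff;
executeStuff.ExecuteSynchronously("param1");

// otherwise, call the helper as a plain static method:
MethodExtensions.ExecuteSynchronously<string>(ExecuteStuff, "param1");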

Second is the solution for this proposed by slang25:

public static T ExecuteSynchronously<T>(Func<Task<T>> taskFunc)
{
    var capturedContext = SynchronizationContext.Current;
    try
    {
        SynchronizationContext.SetSynchronizationContext(null);
        return taskFunc.Invoke().GetAwaiter().GetResult();
    }
    finally
    {
        SynchronizationContext.SetSynchronizationContext(capturedContext);
    }
}

I haven't tried this, so I can only say that it looks OK.

A third option, I am told, works by configuring every await to run with ConfigureAwait(false). There is a NuGet package that does this for you. Again, untried.
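
For clarity, the manual version of that third option looks like this generic sketch: every await in the library code gets ConfigureAwait(false), so continuations don't capture the synchronization context and a blocking caller cannot deadlock on them:

public async Task<string> ReadFileAsync(string path)
{
    using (var reader = new StreamReader(path))
    {
        return await reader.ReadToEndAsync().ConfigureAwait(false);
    }
}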


I wrote some unit tests today (to see if I still feel) and I found a pattern that I believe I will use in the future as well. The problem was that I was testing a class that had multiple services injected in the constructor. Imagine a BigService that needs to receive instances of SmallService1, SmallService2 and SmallService3 in the constructor. In reality there will be more, but I am trying to keep this short. While a dependency injection framework handles this for the code, for unit tests these dependencies must be mocked.
C# Example (using the Moq framework for .NET):

// setup a mock for SmallService1, which implements the ISmallService1 interface
var smallService1Mock = new Mock<ISmallService1>();
smallService1Mock
  .Setup(srv => srv.SomeMethod())
  .Returns("some result");
var smallService1 = smallService1Mock.Object;
// repeat for 2 and 3
var bigService = new BigService(smallService1, smallService2, smallService3);


My desire was to keep tests separate. I didn't want methods that populated fields of the test class, so that tests running in parallel wouldn't be a problem. But still, I didn't want to copy paste the same test a few times only to change some parameter or another. And the many local variables for mocks and objects really bothered me. Moreover, there were some operations that I really wanted in all my tests, like a mock of a service that executed an action given as a parameter, with some extra work before and after. I wanted that, in all test cases, the mock would just execute the action.

Therefore I got to writing something like this:

public class BigServiceTester {
  public Mock<ISmallService1> SmallService1Mock {get;private set;}
  public Mock<ISmallService2> SmallService2Mock {get;private set;}
  public Mock<ISmallService3> SmallService3Mock {get;private set;}
 
  public ISmallService1 SmallService1 => SmallService1Mock.Object;
  public ISmallService2 SmallService2 => SmallService2Mock.Object;
  public ISmallService3 SmallService3 => SmallService3Mock.Object;
 
  public BigServiceTester() {
    SmallService1Mock = new Mock<ISmallService1>();
    SmallService2Mock = new Mock<ISmallService2>();
    SmallService3Mock = new Mock<ISmallService3>();
  }
 
  public BigServiceTester SetupDefaultSmallService1() {
    SmallService1Mock
      .Setup(ss1=>ss1.Execute(It.IsAny<Action>()))
      .Callback<Action>(a=>a());
    return this;
  }
 
  public IBigService GetService() =>
    new BigService(SmallService1, SmallService2, SmallService3);
}
 
// and here is how to use it in an XUnit test
[Fact]
public void BigServiceShouldDoStuff() {
  var pars = new BigServiceTester()
    .SetupDefaultSmallService1();
 
  pars.SmallService2Mock
    .Setup(ss2=>ss2.SomeMethod())
    .Returns("some value");
 
  var bigService = pars.GetService();
  var stuff = bigService.DoStuff();
  Assert.Equal("expected value", stuff);
}

The idea is that everything related to BigService is encapsulated in BigServiceTester. Tests will be small: one creates a new instance of the tester, sets up the code specific to the test, lets the tester instantiate the service and then follows with the asserts. Since the setup code and the asserts depend on the specific test, everything else is reusable and encapsulated. There is no setup code in the test constructor; instead, a fluent interface is used to easily apply common setups.

As a further step, tester classes can implement interfaces, so for example if some other class needs an instance of ISmallService1, all it has to do is implement INeedsSmallService1 and the SetupDefaultSmallService1 method can be written as an extension method. I kind of like this way of doing things. I hope you do, too.
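
Here is a sketch of that further step, reusing the names from above (the interface is hypothetical, as described):

public interface INeedsSmallService1
{
  Mock<ISmallService1> SmallService1Mock { get; }
}

public static class TesterExtensions
{
  // the same default setup as before, now reusable by any tester
  // that exposes a SmallService1 mock
  public static T SetupDefaultSmallService1<T>(this T tester)
    where T : INeedsSmallService1
  {
    tester.SmallService1Mock
      .Setup(ss1 => ss1.Execute(It.IsAny<Action>()))
      .Callback<Action>(a => a());
    return tester;
  }
}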

I was trying to create an entity with several child entities and persist it to the database using Entity Framework. I was generating the entity, setting its entry state to Added and saving the changes in the DbContext, everything by the book. However, in the database I ended up with one parent entity and a single child entity. I suspected it had something to do with the way I created the tables, the foreign keys, something related to the way I was declaring the connection between the entities in the model. It was none of that.

If you look at the way EF generates entities from database tables, it creates a two directional relationship from any foreign key: the parent entity gets an ICollection<Child> property to hold the children and the child entity a virtual property of type Parent to hold the parent. Moreover, the parent entity instantiates the collection in the constructor in the form of a HashSet<Child>. It doesn't have to be a HashSet, though. It works just as well if you overwrite it when you create the entity with something like a List. However, the HashSet approach tells something important of the way EF behaves when considering collections of child objects.
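
Roughly, the generated entities look like this (property names are illustrative):

public partial class Parent
{
    public Parent()
    {
        // the generated constructor initializes the collection as a HashSet
        Children = new HashSet<Child>();
    }

    public int Id { get; set; }
    public virtual ICollection<Child> Children { get; set; }
}

public partial class Child
{
    public int Id { get; set; }
    public int ParentId { get; set; }
    public virtual Parent Parent { get; set; }
}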

In my case, I was doing something like
var parent = new Parent
{
    Children = Enumerable
        .Repeat(new Child { SomeProperty = SomeValue }, 3)
        .ToList()
};
Can you spot the problem? When I changed the code to
var parent = new Parent();
Enumerable
    .Repeat(new Child { SomeProperty = SomeValue }, 3)
    .ToList()
    .ForEach(child => parent.Children.Add(child));
I was shocked to see that my Parent had only a list of Children with count 1. Why? Because Enumerable.Repeat takes the instance of the object you give it and repeats it:

Enumerable.Repeat(something,3).Distinct().Count() == 1 !

There was no problem with setting the Children collection to a List instead of a HashSet, but when saving the children, Entity Framework was only considering the distinct instances of Child.

The solution here was to generate different instances of the same object, something like
var parent = new Parent
{
    Children = Enumerable.Range(0, 3)
        .Select(i => new Child { SomeProperty = SomeValue })
        .ToList()
};
or, to make it more clear for this case:
var parent = new Parent
{
    Children = Enumerable
        .Repeat<Func<Child>>(() => new Child { SomeProperty = SomeValue }, 3)
        .Select(generator => generator())
        .ToList()
};

So I was writing this class and I was frustrated with the time it took to complete an operation compared to another class that I wanted to replace. Anyway, I am using Visual Studio 2017 and I get this very nice grayed out font on one of my fields and a nice suggestion complete with an automatic fix: "Use auto property". Why should I? Because I have a Count property that wraps a _count field, and simply using the property is clearer and takes fewer lines of code. It makes sense, right?

However, let's remember the context here: I was going for performance. So when I did ten million operations with Count++ it took 11 seconds, but when I did ten million operations with _count++ it took only 9 seconds. That's a staggering 20% increase in used resources for wrapping one field into a property.
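
For the curious, the measurement was along these lines (a simplified sketch using System.Diagnostics.Stopwatch; the real operation did more work than a bare increment):

class Counter
{
    private int _count;
    public int Count { get => _count; set => _count = value; }

    public void Measure()
    {
        var sw = Stopwatch.StartNew();
        for (var i = 0; i < 10000000; i++) Count++;
        Console.WriteLine($"Property: {sw.Elapsed}");

        sw.Restart();
        for (var i = 0; i < 10000000; i++) _count++;
        Console.WriteLine($"Field: {sw.Elapsed}");
    }
}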

My colleague showed me today something that I found interesting. It involves (sometimes unwittingly) casting an awaitable method to Action. In my opinion, the cast itself should not work. After all, an awaitable method is a Func<Task>, which should not be castable to Action. Or is it? Let's look at some code:
var method = async () => { await Task.Delay(1000); };
This does not work, as compilation fails with Error CS0815 Cannot assign lambda expression to an implicitly-typed variable, which means we need to set the type explicitly. But what is it? It receives no parameter and returns nothing. So it must be an Action, right? But it is also an async/await method, which means it's a Func<Task>. Let's try something else:
Task.Run(async () => { await Task.Delay(1000); });
This compiles. If we hover or go to implementation for the Task.Run method, we reach the public static Task Run(Func<Task> function); signature. So that does it, right? It IS a Func<Task>! Let's try something else, though.
Action action = async() => { await Task.Delay(1000); };
Task.Run(action);
This compiles again! So it IS an Action, too!

What is my point, though? Consider you would want to create a method that receives an Action as a parameter. You want something done, then to execute the action, something like this:
public void ExecuteWithLog(Action action)
{
    Console.WriteLine("Start");
    action();
    Console.WriteLine("End");
}

And then you want to use it like this:
ExecuteWithLog(async () => {
    Console.WriteLine("Start delay");
    await Task.Delay(1000);
    Console.WriteLine("End delay");
});

The output will be:
Start
Start delay
End
End delay
There is NO WAY of awaiting the original method in the ExecuteWithLog method, as it is received as an Action, and while it waits for a second, execution returns to ExecuteWithLog immediately. Write the method like this:
public async void ExecuteWithLog(Func<Task> action)
{
    Console.WriteLine("Start");
    await action();
    Console.WriteLine("End");
}
and now the output is as expected:
Start
Start delay
End delay
End

Why does this happen? Well, as mentioned above, you start with an Action, then you need to await some method (because now everybody NEEDS to use await/async), and then you get an error that your method is not marked with async. Now it's suddenly something else, not an Action anymore. Perhaps that would be annoying, but this ambiguity in defining what an anonymous async parameterless void method actually is, is worse.

Visual Studio has a nice little feature called Code Coverage Results, which you can find in the Test menu. However, it does absolutely nothing unless you have the Enterprise edition. If you have it, then probably this post is not for you.

How can we test unit test coverage on .NET projects without spending a ton of money?

I have first tried AxoCover, which is a quick install from NuGet. However, while it looks like it does the job, it requires xUnit 2.2.0 and the developer doesn't have the resources to upgrade the extension to the latest xUnit version (which in my case is 2.3.1). In case you don't use xUnit, AxoCover looks like a good solution.

Then I tried OpenCover, which is what AxoCover uses in the background anyway. It is a library that suspiciously shows in the Visual Studio Manage NuGet Packages window as published on Monday, February 8, 2016 (2/8/2016). However, the latest commits on the GitHub site are from April, only three months ago, which means the library is still being maintained. Unfortunately, OpenCover doesn't have any graphical interface. This is where ReportGenerator comes in, plus some batch file coding :)

Bottom line, these are the steps that you need to go through to enable code coverage reporting on your projects:
  1. install the latest version of OpenCover in your unit test project
  2. install the latest version of ReportGenerator in your unit test project
  3. create a folder in your project root called _coverage (or whatever)
  4. add a new batch file in this folder containing the code to run OpenCover, then ReportGenerator. Below I will publish the code for xUnit
  5. run the batch file

Batch file to run coverage and generate a report for xUnit tests:
@echo off
setlocal EnableDelayedExpansion
 
for /d %%f in (..\..\packages\*.*) do (
set name=%%f
set name="!name:~15!"
if "!name:~1,9!" equ "OpenCover" (
set opencover=%%f
)
if "!name:~1,15!" equ "ReportGenerator" (
set reportgenerator=%%f
)
if "!name:~1,20!" equ "xunit.runner.console" (
set xrunner=%%f
)
)
SET errorlevel=0
if "!opencover!" equ "" (
echo OpenCover not found in ..\..\packages !
SET errorlevel=1
) else (
echo Found OpenCover at !opencover!
)
if "!reportgenerator!" equ "" (
echo ReportGenerator not found in ..\..\packages !
SET errorlevel=1
) else (
echo Found ReportGenerator at !reportgenerator!
)
if "!xrunner!" equ "" (
SET errorlevel=1
echo xunit.runner.console not found in ..\..\packages !
) else (
echo Found xunit.runner.console at !xrunner!
)
if %errorlevel% neq 0 exit /b %errorlevel%
 
set cmdCoverage="!opencover:\=\\!\\tools\\opencover.console -register:user -output:coverage.xml -target:\"!xrunner:\=\\!\\tools\\net452\\xunit.console.exe\" \"-targetargs: ..\\bin\\x64\\Debug\\MYPROJECT.Tests.dll -noshadow -parallel none\" \"-filter:+[MYPROJECT.*]* -[MYPROJECT.Tests]* -[MYPROJECT.IntegrationTests]*\" | tee coverage_output.txt"
set cmdReport="!reportgenerator:\=\\!\\tools\\reportgenerator -reports:coverage.xml -targetdir:.\\html"
 
powershell "!cmdCoverage!"
powershell "!cmdReport!"
start "MYPROJECT test coverage" .\html\index.htm

Notes:
  • replace MYPROJECT with your project name
  • the filter says that I want to calculate the coverage for all the assemblies starting with MYPROJECT, but not Tests and IntegrationTests
  • most of the code above is used to find the OpenCover, ReportGenerator and xunit.runner.console folders in the NuGet packages folder
  • powershell is used to execute the command strings and for the tee command, which displays on the console while also writing to a file
  • the little square characters are ESC (escape) characters, defining ANSI sequences that show red text in case of error.

Please feel free to comment with the lines you would use for other unit testing frameworks, and I will update the post.

I was under the impression that .NET Framework can only reference .NET Framework assemblies and .NET Core can only reference .NET Core assemblies. After all, that's why .NET Standard appeared, so you can create assemblies that can be referenced from everywhere. However, one can actually reference .NET Framework assemblies from .NET Core (and not the other way around). Moreover, they work! How does that happen?

I chose a particular functionality that works only in Framework: AppDomain.CreateDomain. I've added it to a .NET 4.6 assembly, then referenced the assembly in a .NET Core program. And it compiled!! Does that mean that I can run whatever I want in .NET Core now?

The answer is no. When running the program, I get a PlatformNotSupportedException, meaning that the IL code is executed by .NET Core, but in its own context. It is basically a .NET Standard cheat. Personally, I don't like this, but I guess it's a measure to support adoption of the Core concept.

What goes on behind the scenes is that .NET Core implements .NET Standard, which can reference .NET Framework assemblies. For this to work you need .NET Core 2.0 and Visual Studio 2017 15.3 or higher.

In .NET APIs we usually adorn the action methods with [Http<HTTP method>] attributes, like HttpGetAttribute or AcceptVerbsAttribute, to set the HTTP methods that are accepted. However, there are conventions on the names of the methods that make them work when such attributes are not used. How does ASP.NET determine which methods on a controller are considered "actions"? The documentation explains this, but the information is hidden in one of the paragraphs (an example follows the list):
  1. Attributes as described above: AcceptVerbs, HttpDelete, HttpGet, HttpHead, HttpOptions, HttpPatch, HttpPost, or HttpPut.
  2. Through the beginning of the method name: "Get", "Post", "Put", "Delete", "Head", "Options", or "Patch"
  3. If none of the rules above apply, POST is assumed
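
For example, in a Web API controller the following methods are routed by name alone (a hypothetical controller relying purely on convention):

public class ProductsController : ApiController
{
    // no attribute needed: the "Get" prefix makes this answer GET requests
    public IEnumerable<string> GetAll()
    {
        return new[] { "product1", "product2" };
    }

    // no recognized prefix and no attribute: POST is assumed
    public void Save(string product) { }
}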

I've just published the package Siderite.StreamRegex, which allows you to use regular expressions on Stream and TextReader objects in .NET. Please leave any feedback or feature requests or whatever in the comments below. I am grateful for any input.

Here is the GitHub link: Siderite/StreamRegex
Here is the NuGet package link: Siderite.StreamRegex

You created a library in .NET Core and you want it to be usable for .NET Framework or Xamarin as well, so you want to turn it into a .NET Standard library. You learn that it is simple: a matter of changing the TargetFramework in the .csproj file, so you do this for all projects in the solution.

But .NET Standard is only designed for libraries, so it makes no sense to change the TargetFramework for other types of projects, including some that are in fact libraries, but are not used as such, like unit test projects.

For example, if you attempt to run xUnit tests in Visual Studio you will see that the tests are discovered by Test Explorer, but when you try to run them it says "No test is available in [your project]. Make sure that test discoverer & executors are registered and platform & framework version settings are appropriate and try again.". While this is more likely an issue with xUnit, it is also a non-issue, since your test project should use the .NET Standard libraries, not be one itself.

Well, if Java can do it, so can I! Why not make my own C# string class that can do whatever I want it to do? Requirements? Here they are:
  • I want it to work just like regular strings
  • I want to know if the string is empty just by using it as an if clause, just like Javascript: if (text) doSomething(text);
  • I want to see if it's a number just by adding a plus sign in front of it: var number = +text;
  • I want to easily know if the string is null or whitespace!

OK, so first I want to get all the functionality of a normal string. So I will just inherit from it! Only I can't. String is a sealed class in .Net.
Yet there is an abstract class that seems destined to be used for this: ValueType! It's inherited by all value types (more or less), though not by String, so I will be the next guy to use it! Only I can't. If I try, I get the occult message: "Error CS0644 'EString' cannot derive from special class 'ValueType'". But it does help with something: if I try to inherit from it, it tells me what methods to implement: Equals, GetHashCode and ToString.
OK, then, I will just do everything from scratch!

Start with a struct (no need for a class) that wraps a string _value field, then override the equality methods:
public struct EString
{
    private string _value;

    public override bool Equals(object obj)
    {
        return String.Equals(obj, _value);
    }

    public override int GetHashCode()
    {
        return _value == null ? 0 : _value.GetHashCode();
    }

    public override string ToString()
    {
        return _value;
    }
}

I used 0 for the hash code when the value is null because that's what the .Net object GetHashCode() does. Now, in order for this to work as a string, I need some implicit conversion from string to my EString class. So add these two beauties:
public string Value { get => _value; }

private EString(string value) { _value = value; }

public static implicit operator string(EString estring)
{
    return estring.Value;
}

public static implicit operator EString(string value)
{
    return new EString(value);
}

EString now supports stuff like EString text="Something";, but I want more! Let's overload some operators. I want to be able to see if a string is null or empty just like in Javascript:
public static bool operator !(EString estring)
{
    return String.IsNullOrEmpty(estring);
}

public static bool operator true(EString estring)
{
    return !!estring;
}

public static bool operator false(EString estring)
{
    return !estring;
}
Yes, ladies and gentlemen, true and false are not only boolean constants, but also operators. That doesn't mean something like this is legal: if (text==true) ..., but you can use stuff like if (text) ... or text ? "something" : "empty". Overloading the ! operator also allows something more useful, like if (!text) [this shit is empty].

Moar! How about a simple way to know if the string is null or empty or whitespace? Let's overload the ~ operator. How about getting the number encoded into the text? Let's overload the + operator. Here is the final result:
public struct EString
{
    private string _value;

    public string Value { get => _value; }

    private EString(string value) { _value = value; }

    public override bool Equals(object obj)
    {
        return String.Equals(obj, _value);
    }

    public override int GetHashCode()
    {
        return _value == null ? 0 : _value.GetHashCode();
    }

    public override string ToString()
    {
        return _value;
    }

    public static bool operator !(EString estring)
    {
        return String.IsNullOrEmpty(estring);
    }

    public static bool operator true(EString estring)
    {
        return !!estring;
    }

    public static bool operator false(EString estring)
    {
        return !estring;
    }

    public static bool operator ~(EString estring)
    {
        return String.IsNullOrWhiteSpace(estring);
    }

    public static decimal operator +(EString estring)
    {
        return decimal.TryParse(estring.Value, out decimal d) ? d : decimal.Zero;
    }

    public static implicit operator string(EString estring)
    {
        return estring.Value;
    }

    public static implicit operator EString(string value)
    {
        return new EString(value);
    }
}

Disclaimer: I did this mostly for fun. I can't condone replacing your strings with my class, but look at these examples for usage:
EString s = inputText;
if (!~s)
{
    var d = Math.Round(+s, 2);
    if (d != 0)
    {
        Console.WriteLine("Number was introduced: " + d);
    }
}