Unit Testing with In-Memory Databases in Entity Framework Core

by Ira Endres on Tuesday, February 28, 2023

"I like unit tests" is not something you hear from a developer very often, but the utility of unit tests cannot be denied. I've used unit tests to perform quick discoveries when interacting with a new API, to run one-time scripts against the database that were easier to write in C# than in SQL (yes, that does happen), and, of course, to assert the correctness of a system's behavior. The best part is when you hook your unit tests into your continuous integration/continuous delivery (CI/CD) pipeline and watch the pull requests run those bad boys, making sure that everyone is doing their job.

Mocking databases, however, is complicated, and in the Microsoft stack, Entity Framework 6 and Entity Framework Core (EF Core) each go about it in a different way. Once you figure out how to utilize the mocking tooling, using it becomes second nature.

Entity Framework 6 Unit Testing

Before we get too deep, I want to shout out Moq, as it is a very popular mocking library. This is just a personal preference, but I dislike using Moq because the syntax for mocking up items in each unit test is different from how we would normally utilize Entity Framework. What I prefer is the MSDN suggestion for in-memory tests using their TestDbSet<TEntity> replacement from the MSDN article Testing with your own test doubles. In the article, they provide code that overrides the default DbSet<TEntity> that would normally connect to a database and replaces it with a class that tracks entities using an ObservableCollection<TEntity>. What's neat about this is that it can be expanded to really simulate a database.
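As a refresher on its shape, here is a condensed sketch of that test double from memory of the MSDN article; the full listing also overrides Attach, Create, Find, Local, and the async query surface:

```csharp
using System;
using System.Collections;
using System.Collections.Generic;
using System.Collections.ObjectModel;
using System.Data.Entity;
using System.Linq;
using System.Linq.Expressions;

// Condensed sketch of the MSDN TestDbSet<TEntity>; _data and _query
// are initialized in the default constructor shown below.
public class TestDbSet<TEntity> : DbSet<TEntity>, IQueryable, IEnumerable<TEntity>
    where TEntity : class
{
    protected ObservableCollection<TEntity> _data;
    protected IQueryable _query;

    public override TEntity Add(TEntity item)
    {
        _data.Add(item);
        return item;
    }

    public override TEntity Remove(TEntity item)
    {
        _data.Remove(item);
        return item;
    }

    // Forward the LINQ surface to the backing collection.
    Type IQueryable.ElementType => _query.ElementType;
    Expression IQueryable.Expression => _query.Expression;
    IQueryProvider IQueryable.Provider => _query.Provider;

    public IEnumerator<TEntity> GetEnumerator() => _data.GetEnumerator();
    IEnumerator IEnumerable.GetEnumerator() => _data.GetEnumerator();
}
```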

For example, one neat trick I learned is to hijack the Add(TEntity item) method to simulate auto-generated identity columns. In the MSDN article, the default constructor is:

public TestDbSet()
{
    _data = new ObservableCollection<TEntity>();
    _query = _data.AsQueryable();
}

Now let's assume, for simplicity, that our keys are always of type uniqueidentifier with default constraints of newsequentialid() in our database. To simulate this using an Entity Framework 6 code-first approach, a domain model would look like:

public class Artist
{
    [Key, DatabaseGenerated(DatabaseGeneratedOption.Identity)]
    public Guid ArtistId { get; set; }

    [Required, MaxLength(100)]
    public string ArtistName { get; set; }
}

Back in our implementation of TestDbSet<TEntity>, we will want to include a new member:

protected PropertyInfo _key;

Then add the following to the constructor to scan the model's properties for one that matches our primary key convention:

PropertyInfo[] properties = typeof(TEntity).GetProperties(BindingFlags.Public | BindingFlags.Instance);
foreach (PropertyInfo property in properties)
{
    // Skip anything not marked as the primary key.
    if (!property.CustomAttributes.Any(x => x.AttributeType == typeof(KeyAttribute)))
        continue;

    // We only simulate uniqueidentifier (Guid) keys.
    if (property.PropertyType != typeof(Guid))
        continue;

    DatabaseGeneratedAttribute attr = (DatabaseGeneratedAttribute)property.GetCustomAttribute(typeof(DatabaseGeneratedAttribute));
    if (attr == null)
        continue;

    if (attr.DatabaseGeneratedOption == DatabaseGeneratedOption.Identity)
        _key = property;
}

Finally, in the Add(TEntity item) method, you'll be able to assign a Guid to the primary key property, just like a real database would:

if (this._key != null && (Guid)this._key.GetValue(item) == Guid.Empty)
    this._key.SetValue(item, Guid.NewGuid());
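Putting the scan and the assignment together, here is a minimal, self-contained sketch of the same logic outside of any DbSet, using only the data-annotation attributes (the KeySimulator class and its method names are illustrative, not from the article):

```csharp
using System;
using System.ComponentModel.DataAnnotations;
using System.ComponentModel.DataAnnotations.Schema;
using System.Linq;
using System.Reflection;

public class Artist
{
    [Key, DatabaseGenerated(DatabaseGeneratedOption.Identity)]
    public Guid ArtistId { get; set; }

    [Required, MaxLength(100)]
    public string ArtistName { get; set; }
}

public static class KeySimulator
{
    // Mirrors the constructor logic: find the [Key] Guid property
    // marked DatabaseGeneratedOption.Identity.
    public static PropertyInfo FindIdentityKey(Type entityType)
    {
        foreach (PropertyInfo property in entityType.GetProperties(BindingFlags.Public | BindingFlags.Instance))
        {
            if (!property.CustomAttributes.Any(x => x.AttributeType == typeof(KeyAttribute)))
                continue;
            if (property.PropertyType != typeof(Guid))
                continue;

            DatabaseGeneratedAttribute attr = (DatabaseGeneratedAttribute)property.GetCustomAttribute(typeof(DatabaseGeneratedAttribute));
            if (attr != null && attr.DatabaseGeneratedOption == DatabaseGeneratedOption.Identity)
                return property;
        }
        return null;
    }

    // Mirrors the Add(TEntity item) hook: assign a Guid if still empty.
    public static void AssignKey(object item, PropertyInfo key)
    {
        if (key != null && (Guid)key.GetValue(item) == Guid.Empty)
            key.SetValue(item, Guid.NewGuid());
    }
}

public static class Program
{
    public static void Main()
    {
        PropertyInfo key = KeySimulator.FindIdentityKey(typeof(Artist));
        var artist = new Artist { ArtistName = "Miles Davis" };
        KeySimulator.AssignKey(artist, key);
        Console.WriteLine(artist.ArtistId != Guid.Empty); // prints True
    }
}
```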

Using this approach, I've done lots of crazy things like simulating database triggers, simulating persistent databases, and mocking up database views and procedures in C#, all while never touching a database. I've even done a couple of nasty things to the T4 text templates for EDMX-based models to scaffold in-memory test doubles. The TestDbSet<TEntity> approach is the best unit test strategy for writing tests the way you normally would; each test's LINQ-to-Entities interactions are isolated so that no two tests step on each other, which is what you want 99% of the time.

In-Memory Databases using EF Core

EF Core changes the game with its dependency-injection approach. The kind of database mocking described previously is technically challenging for some, so many opt for writing a Repository layer on top of the database layer, which, in my opinion, is WAY more work than writing these test doubles. Trust me, I've done it before. However, in EF Core, the maintainers provide Microsoft.EntityFrameworkCore.InMemory for easily incorporating this into our code. You can learn more about the different providers in the MSDN article EF Core In-Memory Database Provider.

Before we get into the code examples: the maintainers of EF Core have made it much more challenging to do what was described previously, like overriding TestDbSet<TEntity> to provide our own functionality. However, the new in-memory package for EF Core seems to be able to do all of the things we previously had to write our own implementations for. We want our contexts to simulate DatabaseGeneratedOption.Identity properties in our models and behave just like a real database, but most importantly, we want our tests isolated from one another so that no two tests step on each other.

We can easily create an in-memory database double by passing in the correct DbContextOptions settings. Now, the MSDN article Testing without your production database system highlights the basic constructor for the in-memory double:

_contextOptions = new DbContextOptionsBuilder<BloggingContext>()
    .UseInMemoryDatabase("BloggingControllerTest")
    .ConfigureWarnings(b => b.Ignore(InMemoryEventId.TransactionIgnoredWarning))
    .Options;

using var context = new BloggingContext(_contextOptions);
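To make a test repeatable under that setup, the same article then resets the store at the start of each test, roughly like this (a sketch assuming the article's BloggingContext and a Blog entity with a Url property):

```csharp
// Per-test reset from the MSDN pattern: the named in-memory store outlives
// any single context instance, so each test wipes and recreates it first.
using var context = new BloggingContext(_contextOptions);
context.Database.EnsureDeleted();
context.Database.EnsureCreated();

context.Blogs.Add(new Blog { Url = "http://example.com" });
context.SaveChanges();
```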


This code highlights an issue for me: the context needs to be deleted and then recreated, meaning that somewhere the in-memory provider is persisting the database between context instances, so it must be destroyed and recreated for each test. I strongly prefer that contexts are isolated by construction, without having to reset state at the start of every test.

Another item that I discovered the hard way is that entities returned from the context using LINQ come back in the EntityState.Detached state, which is different from running the code in production with AutoDetectChangesEnabled set to true (the most common setup, and enabled by default). Really, Microsoft, who uses Find(params object?[]? keyValues)?

This is my preferred strategy for creating in-memory test doubles in EF Core using the Microsoft.EntityFrameworkCore.InMemory provider:

public class TestBloggingContext : BloggingContext
{
    public TestBloggingContext() : base(new DbContextOptionsBuilder<TestBloggingContext>()
        .UseInMemoryDatabase(nameof(TestBloggingContext))
        .UseInternalServiceProvider(new ServiceCollection().AddEntityFrameworkInMemoryDatabase().BuildServiceProvider())
        .ConfigureWarnings(b => b.Ignore(InMemoryEventId.TransactionIgnoredWarning))
        .Options)
    {
        this.ChangeTracker.AutoDetectChangesEnabled = true;
        this.ChangeTracker.QueryTrackingBehavior = QueryTrackingBehavior.TrackAll;
    }
}

Let's break this down. To start, we call UseInMemoryDatabase(string) to instantiate a new test context using the in-memory provider. The call to UseInternalServiceProvider(IServiceProvider?) ensures that the database context is isolated from other contexts so that tests can run in parallel safely; without this call, tests share the same underlying in-memory store, keyed by the database name. AutoDetectChangesEnabled is enabled by default, but is set here for clarity. Finally, QueryTrackingBehavior.TrackAll ensures that entities returned through standard LINQ queries are added to the internal change-tracking collection, so that when SaveChanges() is called, the objects in the internal DbSet are updated as you would expect of a real database.
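With that in place, a test can just new up the context with no options plumbing at all. A sketch, assuming BloggingContext exposes a Blogs set of Blog entities with a Url property, and using xUnit as the test framework:

```csharp
using System.Linq;
using Xunit;

public class BloggingTests
{
    [Fact]
    public void AddBlog_PersistsEntity()
    {
        // Each TestBloggingContext builds its own internal service provider,
        // so parallel tests never see each other's data.
        using var context = new TestBloggingContext();

        context.Blogs.Add(new Blog { Url = "http://example.com" });
        context.SaveChanges();

        Assert.Equal(1, context.Blogs.Count());
    }
}
```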


A BIG thanks to the maintainers of Microsoft.EntityFrameworkCore.InMemory, as they really have created a suitable replacement for the previous TestDbSet<TEntity> implementation that behaves much like a real database. There are other strategies, such as the SQLite in-memory provider and the Repository pattern, and each has its own advantages and disadvantages, but the in-memory provider is my favorite, as it is isolated and lightweight, so tests run fast without stepping on each other. For more information on the differences between the strategies, please see Choosing a testing strategy on MSDN.