Simulating Latency in ASP.NET Core

When we're doing web development on our local machines, we usually don't experience any network latency. Web pages open nearly instantaneously, given we're not doing any long-running database or network operations. Because of that, the entire application tends to feel blazingly fast.

Of course, this responsiveness is in no way representative of the actual performance characteristics of the web application in production. Users remotely accessing the website encounter network latency with every HTTP request. Initial page loads take longer to complete, and so does every subsequent AJAX request. Generally speaking, the further away the server, the higher the latency.

[Image: Flight route from San Francisco to Munich]

This whole difference in performance characteristics got me thinking: Why not simulate network latency when doing local development? It's going to be there in production anyway, so we might as well experience it when developing to get a more realistic feeling of the actual application behavior.

Latency Middleware for ASP.NET Core

In ASP.NET Core, every request is processed by the HTTP request pipeline, which is composed of various pieces of middleware. To simulate network latency, we can register a piece of custom middleware at the very beginning of the pipeline that delays the request processing. That way, every request is artificially slowed down, whether or not it gets short-circuited by other middleware further down the pipeline.

Here's the beginning of the Configure method of the web application's Startup class:

public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
    if (env.IsDevelopment())
    {
        app.UseSimulatedLatency(
            min: TimeSpan.FromMilliseconds(100),
            max: TimeSpan.FromMilliseconds(300));
    }

    // ...
}

Our latency middleware is added by the UseSimulatedLatency method, which we'll look at in a minute. We pass it a lower and an upper bound for a random delay, which in this case is between 100ms and 300ms long.

Note that the latency is only simulated in development environments. After all, we don't want to slow down our web applications in production. Let's now look at the UseSimulatedLatency extension method:

public static IApplicationBuilder UseSimulatedLatency(
    this IApplicationBuilder app, TimeSpan min, TimeSpan max)
{
    return app.UseMiddleware(typeof(SimulatedLatencyMiddleware), min, max);
}

Nothing really interesting to see here. We specify which type of middleware to use and which arguments (in addition to the next RequestDelegate) to pass to its constructor. All the logic is implemented within the SimulatedLatencyMiddleware class:

using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;

public class SimulatedLatencyMiddleware
{
    private readonly RequestDelegate _next;
    private readonly int _minDelayInMs;
    private readonly int _maxDelayInMs;
    private readonly ThreadLocal<Random> _random;

    public SimulatedLatencyMiddleware(RequestDelegate next,
        TimeSpan min, TimeSpan max)
    {
        _next = next;
        _minDelayInMs = (int)min.TotalMilliseconds;
        _maxDelayInMs = (int)max.TotalMilliseconds;

        // Random isn't thread-safe, so every thread gets its own instance.
        _random = new ThreadLocal<Random>(() => new Random());
    }

    public async Task Invoke(HttpContext context)
    {
        // Pick a delay between min (inclusive) and max (exclusive) ...
        int delayInMs = _random.Value.Next(_minDelayInMs, _maxDelayInMs);

        // ... wait for that long, then hand off to the next middleware.
        await Task.Delay(delayInMs);
        await _next(context);
    }
}

Because the Random class is not thread-safe, we wrap it within a ThreadLocal<T> so that every thread has its own instance. Within the Invoke method, a pseudo-random duration between min (inclusive) and max (exclusive) is calculated and passed to Task.Delay, which slows down the processing of that request. Finally, the next piece of middleware in the pipeline is executed.
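
As an aside, if your project targets .NET 6 or later, the runtime ships a thread-safe shared instance, Random.Shared, which would make the ThreadLocal<T> wrapper unnecessary. Here's a minimal sketch of what the Invoke method could look like under that assumption:

public async Task Invoke(HttpContext context)
{
    // Random.Shared (available since .NET 6) is safe to call from any thread,
    // so no per-thread instance is required.
    int delayInMs = Random.Shared.Next(_minDelayInMs, _maxDelayInMs);

    await Task.Delay(delayInMs);
    await _next(context);
}

On older runtimes, the ThreadLocal<Random> approach shown above remains the way to go.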

It's a simple trick, really, but it might help you get a better sense of how your web application feels under realistic circumstances. Take a look at your monitoring tool of choice, figure out the average latency for an HTTP request in production, and adjust the min and max bounds for the random delays accordingly.
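
If you'd like to tune those bounds without recompiling, you could also read them from configuration instead of hard-coding them. The following sketch assumes a "SimulatedLatency" section with "MinMs" and "MaxMs" values in appsettings.Development.json (both the section and key names are made up for this example) and that IConfiguration is injected into Configure:

public void Configure(IApplicationBuilder app, IHostingEnvironment env, IConfiguration config)
{
    if (env.IsDevelopment())
    {
        // Hypothetical configuration section, e.g. in appsettings.Development.json:
        // "SimulatedLatency": { "MinMs": 100, "MaxMs": 300 }
        var latency = config.GetSection("SimulatedLatency");

        app.UseSimulatedLatency(
            min: TimeSpan.FromMilliseconds(latency.GetValue("MinMs", 100)),
            max: TimeSpan.FromMilliseconds(latency.GetValue("MaxMs", 300)));
    }

    // ...
}

That way, everyone on the team can dial the simulated latency up or down to match what your monitoring reports.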

9 Comments

Maulesh Mevada

What about simulating the network speed?

Someone

That code is not thread-safe: the closed-over Random can be invoked from multiple threads, and since Random isn't thread-safe, the code will generate inaccurate delayInMs values.

Marius Schulz

Thanks for the heads-up, much appreciated! I've updated the code sample. Now, a separate Random instance is created for each request, using the return value of the GetHashCode() method as the seed argument.

Someone

That's a good first try at fixing the issue; however, creating an instance of Random per request is not a good idea.

Consider using ThreadLocal; here's an example:

private ThreadLocal<Random> _random = new ThreadLocal<Random>(() => new Random());

void SomeMethod()
{
    // Each thread now has its own instance
    var random = _random.Value;
}

Constantine

Hi.

You've got the Network tab in Chrome where you can set throttling to simulate higher latency. Why should you write a line of code for that?

Someone

@Constantine, I hope you are not doing performance/load testing of your API endpoints using Chrome!

Marius Schulz

Well, I'm not sure it really was that problematic to create a new instance of Random for every request. The middleware is only used in development, really.

That said, I've chosen the "proper" way here and implemented a middleware class that can hold a field of type ThreadLocal<Random>.

Fredy Guibert

Take a look at this answer on Stack Overflow, http://stackoverflow.com/a/19734174/3613462, which uses Developer Tools -> Network to simulate speed. Regards! @fredyfx

Sarin Na Wangkanai

Nice example use case for developing middleware.