How to Track the Performance of Express.js Routes and Middleware


First of all – why choose Express.js? One of the many reasons for Express’ success is its simplicity. Its minimalistic, straightforward design is another reason we love it. It’s also why, when we first got started with Express Gateway, we knew we wanted to build the next big open source API Gateway on this framework.

Business logic in Express.js is separated out into middleware.

So, even routing is implemented as middleware under the hood. Additionally, because middleware is such a broad abstraction in Express.js, measuring Express.js performance comes down to measuring the performance of middleware. In this article, I’ll show you how to track the performance of Express.js routes and individual middleware functions.

How Long Does an Express Route Take?

Suppose you have a simple Express.js route like the one below that just sends back “Hello, world!” in response to every GET request. How do you measure how long this route takes?

const app = require('express')();

app.get('*', function(req, res) {
  res.send('Hello, world!');
});

app.listen(3000);

The key insight is that the Express response object res is an instance of the built-in Node.js http.ServerResponse class with a few convenient helpers added on. In particular, a Node.js ServerResponse has a handy ‘finish’ event that gets emitted when Node.js is done handing the response body over to the OS to send out over the network. The ‘finish’ event does not mean the client received anything. You can think of the ‘finish’ event as being emitted when Node.js is done handling the response.

You can write middleware profilerMiddleware to listen to the ‘finish’ event. The difference between when the middleware started and when the ‘finish’ event emitted is a good estimate for how long the route took.

The key caveat is that Express middleware runs sequentially, so make sure you call app.use(profilerMiddleware) before any other middleware to account for how long other middleware takes.

If you register profilerMiddleware after a slow middleware, your times will be off.

const app = require('express')();
const { EventEmitter } = require('events');

// Aggregate all profiler results into an event emitter to make
// handling the results generic
const profiles = new EventEmitter();

profiles.on('route', ({ req, elapsedMS }) => {
  console.log(req.method, req.url, `${elapsedMS}ms`);
});

// Make sure you register this **before** other middleware
app.use(function profilerMiddleware(req, res, next) {
  const start = Date.now();
  // The 'finish' event will emit once the response is done sending
  res.once('finish', () => {
    // Emit an object that contains the original request and the elapsed time in MS
    profiles.emit('route', { req, elapsedMS: Date.now() - start });
  });

  next();
});

app.get('*', function(req, res) {
  res.send('Hello, world!');
});

app.listen(3000);

For example, let’s say you added a slow middleware that blocks the response for 1 second. If this slow middleware runs before profilerMiddleware, you’ll see that profilerMiddleware prints times well below 1 second. However, if the slow middleware runs after profilerMiddleware, as shown below, you’ll see times above 1 second.

// Make sure you register this **before** other middleware
app.use(function profilerMiddleware(req, res, next) {
  const start = Date.now();
  // The 'finish' event will emit once the response is done sending
  res.once('finish', () => {
    // Emit an object that contains the original request and the elapsed time in MS
    profiles.emit('route', { req, elapsedMS: Date.now() - start });
  });

  next();
});

app.use(function block(req, res, next) {
  setTimeout(() => next(), 1000);
});

Individual Middleware Performance

Larger Express.js apps tend to have a lot of middleware. In order to figure out which individual middleware is contributing the most latency, you’ll want to drill down into the performance of each individual middleware. For example, let’s say you have the below Express.js app with 2 middleware, the block middleware from the previous example and the popular bodyParser middleware for parsing HTTP request bodies.

const app = require('express')();
const bodyParser = require('body-parser');
const { EventEmitter } = require('events');

// Aggregate all profiler results into an event emitter to make
// handling the results generic
const profiles = new EventEmitter();

profiles.on('route', ({ req, elapsedMS }) => {
  console.log(req.method, req.url, `${elapsedMS}ms`);
});

app.use(function block(req, res, next) {
  setTimeout(() => next(), 1000);
});

app.use(bodyParser.json());

app.post('*', function(req, res) {
  res.send('Hello, world!');
});

app.listen(3000);

The route profiler from the first section tells you the endpoint is slow, but not which middleware is responsible. In this small example the culprit is obvious, but in a real app with dozens of middleware it won’t be, so drilling down is a worthwhile exercise.

Important notes about Express.js business logic

So, the key idea is that Express.js business logic must conform to one of 3 different function signatures:

  • Route handlers are generally functions that take 2 arguments, req and res.
  • Normal middleware takes 3 arguments: req, res, next. The next() function is the only way to tell Express.js this middleware is done and Express.js should kick off the next one.
  • Error handling middleware takes 4 arguments: err, req, res, next. Internally, Express.js checks whether a function takes exactly 4 arguments to distinguish normal middleware from error handling middleware.
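You can check this arity yourself with Function.prototype.length, which reports the number of declared parameters. A quick sketch:

```javascript
// Express inspects fn.length to tell these signatures apart
const routeHandler = function(req, res) {};
const middleware = function(req, res, next) {};
const errorHandler = function(err, req, res, next) {};

console.log(routeHandler.length); // 2
console.log(middleware.length);   // 3
console.log(errorHandler.length); // 4
```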

Thus, you can write a function that wraps your normal middleware functions and instruments next() to check when it gets called.

function wrap(fn) {
  return function(req, res, next) {
    const start = Date.now();
    fn(req, res, function() {
      profiles.emit('middleware', {
        req,
        name: fn.name,
        elapsedMS: Date.now() - start
      });

      next.apply(this, arguments);
    });
  };
}

Here’s how you use the wrap() function:

const app = require('express')();
const bodyParser = require('body-parser');
const { EventEmitter } = require('events');

// Aggregate all profiler results into an event emitter to make
// handling the results generic
const profiles = new EventEmitter();

profiles.on('middleware', ({ req, name, elapsedMS }) => {
  console.log(req.method, req.url, ':', name, `${elapsedMS}ms`);
});

app.use(wrap(function block(req, res, next) {
  setTimeout(() => next(), 1000);
}));

app.use(wrap(bodyParser.json()));

app.post('*', function(req, res) {
  res.send('Hello, world!');
});

app.listen(3000);

function wrap(fn) {
  return function(req, res, next) {
    const start = Date.now();
    fn(req, res, function() {
      profiles.emit('middleware', {
        req,
        name: fn.name,
        elapsedMS: Date.now() - start
      });

      next.apply(this, arguments);
    });
  };
}

The wrap() function will then output how long each middleware takes to the console, so you can tell that the block middleware is the one that is causing the latency.

POST / : block 1001ms
POST / : jsonParser 1ms

Tying It All Together

Based on the difference between function signatures, you can use middleware and the wrap() function to track how long route handlers and middleware take. With the profilerMiddleware from the first section and the notion of a unique identifier for a request, you can track how long the whole Express middleware stack took and drill down to the latency of each individual middleware.

First, let’s add a middleware to set a unique id:

const { ObjectId } = require('mongodb');

// Attach a unique identifier for the request. MongoDB ObjectIds are designed
// to have great performance and a high likelihood of uniqueness, even in
// distributed environments. A unique ID lets you tie multiple profiling results
// back to a single request.
app.use(wrap(function(req, res, next) {
  req.id = new ObjectId();
  next();
}));

Then, let’s make the wrap() function handle Express.js route handlers (functions that take 2 arguments).

function wrap(fn) {
  // Function takes 2 arguments
  if (fn.length === 2) {
    return function(req, res) {
      const start = Date.now();
      res.once('finish', () => profiles.emit('middleware', {
        req,
        name: fn.name,
        elapsedMS: Date.now() - start
      }));
      return fn.apply(this, arguments);
    };
  } else if (fn.length === 3) {
    return function(req, res, next) {
      const start = Date.now();
      fn.call(this, req, res, function() {
        profiles.emit('middleware', {
          req,
          name: fn.name,
          elapsedMS: Date.now() - start
        });

        next.apply(this, arguments);
      });
    };
  } else {
    throw new Error('Function must take 2 or 3 arguments');
  }
}

Logging the latency results with the ID then makes it easy to see how slow individual endpoints are, and how each individual route handler and middleware contribute to the overall latency.

// Aggregate all profiler results into an event emitter to make
// handling the results generic
const profiles = new EventEmitter();

profiles.on('route', ({ req, elapsedMS }) => {
  console.log(req.id.toString(), req.method, req.url, `${elapsedMS}ms`);
});

profiles.on('middleware', ({ req, name, elapsedMS }) => {
  console.log(req.id.toString(), req.method, req.url, ':', name, `${elapsedMS}ms`);
});

594329b793039e5b3444720c POST / :  0ms
594329b793039e5b3444720c POST / : profilerMiddleware 0ms
594329b793039e5b3444720c POST / : block 1003ms
594329b793039e5b3444720c POST / : jsonParser 26ms
594329b793039e5b3444720c POST / 1043ms
594329b793039e5b3444720c POST / : routeHandler 11ms

Moving On

Express middleware is easy to instrument because different types of middleware (route handlers, normal middleware, and error handling middleware) take different numbers of arguments. Instrumenting your middleware functions combined with listening to the ‘finish’ event lets you have a drill-down view of your app’s latency. In other words, you can see how your endpoints are performing and how each individual middleware contributes to latency.

