Understanding AdonisJS Request Lifecycle

HTTP Kernel and Context Container

Each incoming HTTP request in AdonisJS is wrapped in a "context" (HttpContext) object containing the request, response, auth, logger, and other per-request bindings. The context is created and isolated per request by the underlying Server.handle() lifecycle and is meant to be garbage-collected once the response has been sent. Misusing this context in background jobs, event listeners, or WebSocket callbacks can lead to invalid references and race conditions.
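
For reference, a minimal AdonisJS v5 route handler receives this context as its only argument. The sketch below assumes the default HttpContextContract typings and that @adonisjs/auth is installed; without it, ctx.auth does not exist.

import Route from '@ioc:Adonis/Core/Route'
import type { HttpContextContract } from '@ioc:Adonis/Core/HttpContext'

Route.get('/whoami', async (ctx: HttpContextContract) => {
  // Every binding on ctx is created for this request only and becomes
  // eligible for garbage collection once the response has been sent
  ctx.logger.info('handling %s from %s', ctx.request.url(), ctx.request.ip())
  return { user: ctx.auth.user ?? null }
})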

Memory and Lifecycle Binding

Persistent bindings (such as database connections, queues, or sessions) are managed either globally or scoped per request. Failing to release them after use, or to reinitialize them per worker in cluster mode, can cause memory leaks or context bleed between concurrent requests.

Common Symptoms

  • Unexpected user context in WebSocket or scheduled jobs
  • Memory usage climbing steadily over time
  • Database connection pool exhaustion under traffic
  • Missing logger or auth instances inside custom service classes
  • Clustered apps showing inconsistent state between workers

Root Causes

1. Misuse of Request Context Outside HTTP Lifecycle

Using ctx in modules triggered by queues, background services, or WebSockets causes invalid or missing context errors. The context exists only during the lifetime of the request.
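
A deliberately broken sketch of the anti-pattern: the timer callback outlives the request, so it holds the entire context in memory and touches bindings that are no longer safe to use (ctx.auth here assumes @adonisjs/auth is installed).

import Route from '@ioc:Adonis/Core/Route'

Route.post('/reports', async (ctx) => {
  // BAD: ctx escapes the request lifecycle. When the timer fires, the
  // response has long been sent, and the retained reference keeps the
  // whole context (request body, auth state, etc.) from being collected.
  setTimeout(() => {
    ctx.logger.info('report queued for %s', ctx.auth.user?.email)
  }, 60_000)

  return { queued: true }
})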

2. Improper Binding of Services in Global Scope

Singletons or services registered globally with shared state (e.g., user ID, session) can cause data leaks between requests if not carefully isolated per context.
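
As an illustration, a hypothetical module-level singleton that stores the "current user" as instance state will bleed data whenever two requests overlap:

// BAD: one shared instance serves every request, so a concurrent request's
// setUser() call overwrites this one's before record() runs
class AuditService {
  private currentUserId?: number

  public setUser(id: number) {
    this.currentUserId = id
  }

  public record(action: string) {
    console.log(`user ${this.currentUserId} did ${action}`)
  }
}

export const audit = new AuditService()

// SAFER: keep the service stateless and pass the user id with every call,
// e.g. a two-argument record(userId, action) method.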

3. Memory Leaks in Custom Providers or IoC Bindings

Services registered in the container that retain references to request objects or file streams can accumulate in memory if not cleared after use.
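
A typical leak of this kind is a long-lived collection keyed by the request object itself; a hypothetical sketch:

import type { RequestContract } from '@ioc:Adonis/Core/Request'

// Risky unless finish() is guaranteed to run: entries keyed by the request
// object keep the whole request (and its buffers) alive indefinitely
class RequestTracker {
  private pending = new Map<RequestContract, number>()

  public start(request: RequestContract) {
    this.pending.set(request, Date.now())
  }

  public finish(request: RequestContract) {
    // Always remove the entry when the request completes, or key the map
    // by a primitive request id and evict stale entries periodically
    this.pending.delete(request)
  }
}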

4. WebSocket Event Handlers Holding State

Socket connection handlers may retain references to request context or services, especially if scoped variables are not explicitly released.

5. Cluster Mode Misconfiguration

Node.js cluster mode requires explicit connection handling, especially for shared services (e.g., Redis, Lucid ORM). Failing to isolate connections per worker causes contention and leakage.

Diagnostics and Tools

1. Use process.memoryUsage()

console.log(process.memoryUsage())

Monitor heapUsed and rss to track memory leaks across request bursts or background jobs.
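
A lightweight sampler, placed for example in a preload file, makes growth visible in the logs. The 30-second interval is arbitrary:

// Log heap and RSS every 30 seconds so leaks show up as a steady climb
setInterval(() => {
  const { heapUsed, rss } = process.memoryUsage()
  console.log(
    `heapUsed=${(heapUsed / 1024 / 1024).toFixed(1)}MB ` +
      `rss=${(rss / 1024 / 1024).toFixed(1)}MB`
  )
}, 30_000)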

2. Log Context Contents

Log ctx properties during requests to ensure correct bindings. Watch for null or undefined auth, logger, or request.
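
A throwaway global middleware is an easy way to do this. The sketch below assumes a hand-registered app/Middleware/ContextProbe.ts:

import type { HttpContextContract } from '@ioc:Adonis/Core/HttpContext'

export default class ContextProbe {
  public async handle(ctx: HttpContextContract, next: () => Promise<void>) {
    // Flag any binding that is unexpectedly missing on this request
    ctx.logger.info(
      { hasRequest: Boolean(ctx.request), hasAuth: 'auth' in ctx },
      'context bindings check'
    )
    await next()
  }
}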

3. Profile Clustered Processes

Use pm2 or node --inspect to attach a debugger and memory profiler to each worker process.

4. Audit Service Bindings in Your Providers

Review bindings registered in your providers (for example providers/AppProvider.ts). Ensure anything resolved with use() or container.make() is either transient or context-aware, so it does not hold onto per-request state.

5. Monitor ORM Pool Usage

Use Lucid or DB client logs to track open/active connections. Ensure connections are released after use in transactions or jobs.

Step-by-Step Fix Strategy

1. Avoid Using ctx in Asynchronous or Detached Code

Pass only required values (e.g., user ID, token) from request into jobs or event handlers. Do not persist the full ctx object.
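
For example, with a Redis-backed queue such as BullMQ (not part of AdonisJS core, used here as an assumption together with a local Redis and a hypothetical "send-report" job), enqueue plain identifiers instead of the context:

import { Queue } from 'bullmq'
import type { HttpContextContract } from '@ioc:Adonis/Core/HttpContext'

const reportQueue = new Queue('reports') // connects to localhost:6379 by default

export default class ReportsController {
  public async store({ auth, request }: HttpContextContract) {
    // Pass only serialisable primitives; the worker re-fetches anything
    // else it needs from the database using these ids
    await reportQueue.add('send-report', {
      userId: auth.user!.id, // assumes the route runs behind the auth middleware
      reportId: request.input('reportId'),
    })

    return { queued: true }
  }
}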

2. Register Services Per Context Where Needed

// Inside a provider class (e.g. providers/AppProvider.ts)
public register() {
  // bind() creates a new instance on every resolution, unlike singleton(),
  // so no request-specific state is cached between requests
  this.app.container.bind('App/Services/MyService', () => {
    return new MyService()
  })
}

Bind factories rather than singletons for anything that touches request data, and pass request-scoped values (such as request.ip()) into the service from the controller or middleware instead of capturing them inside the binding.

3. Dispose Objects in WebSocket Handlers

On disconnect events, release resources such as user maps, streams, or references held inside handlers.
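
With socket.io (a common choice alongside AdonisJS v5, which has no built-in WebSocket layer), that cleanup usually lives in the disconnect handler; a sketch:

import { Server } from 'socket.io'

const io = new Server(3333) // hypothetical standalone socket server

// Track per-connection state keyed by socket id, never in request context
const activeUsers = new Map<string, { userId: number }>()

io.on('connection', (socket) => {
  socket.on('identify', (userId: number) => {
    activeUsers.set(socket.id, { userId })
  })

  socket.on('disconnect', () => {
    // Release everything tied to this connection so it can be collected
    activeUsers.delete(socket.id)
  })
})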

4. Use Configurable Pooling for ORM

Configure Lucid with connection pooling and limits in config/database.ts to prevent pool saturation.
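
A typical excerpt, with illustrative values that should be tuned to your database's connection limits and worker count:

// config/database.ts (excerpt)
import Env from '@ioc:Adonis/Core/Env'
import { DatabaseConfig } from '@ioc:Adonis/Lucid/Database'

const databaseConfig: DatabaseConfig = {
  connection: 'pg',
  connections: {
    pg: {
      client: 'pg',
      connection: {
        host: Env.get('PG_HOST'),
        user: Env.get('PG_USER'),
        password: Env.get('PG_PASSWORD', ''),
        database: Env.get('PG_DB_NAME'),
      },
      // Cap concurrent connections per process; in cluster mode the real
      // ceiling is max multiplied by the number of workers
      pool: { min: 2, max: 10 },
      healthCheck: true,
    },
  },
}

export default databaseConfig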

5. Validate Cluster Isolation

If using cluster mode (e.g., with pm2), ensure each worker initializes services independently and no shared global state exists across workers.
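
A minimal pm2 ecosystem file for a compiled AdonisJS app might look like this (the entry point assumes the default node ace build output in build/server.js):

// ecosystem.config.js
module.exports = {
  apps: [
    {
      name: 'adonis-app',
      script: './build/server.js',
      instances: 'max',     // one worker per CPU core
      exec_mode: 'cluster', // pm2 cluster mode; workers share the port
      env: {
        NODE_ENV: 'production',
        PORT: 3333,
      },
    },
  ],
}

Each worker runs the full AdonisJS boot sequence, so providers, Redis clients, and Lucid pools are created per process; keep any state that must be shared across workers in Redis or the database.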

Best Practices

  • Keep ctx usage scoped only to controllers and middleware
  • Use the IoC container to inject dependencies, not the request context
  • Close DB transactions and streams explicitly
  • Profile memory regularly in staging under load
  • Design WebSocket handlers to be stateless per connection

Conclusion

AdonisJS enables clean and powerful back-end development with first-class tooling, but improper use of request context and shared state can introduce memory leaks and lifecycle issues in modern deployments. By isolating services, properly managing request-bound data, and avoiding global references in asynchronous environments, teams can build fast, scalable AdonisJS applications that perform reliably in clustered and real-time setups.

FAQs

1. Can I use ctx inside queues or background jobs?

No. ctx is tied to the HTTP lifecycle. Pass only required primitives (e.g., IDs, inputs) to jobs or services.

2. Why does Lucid leak connections under load?

Improper transaction handling, or failing to release connections when queries finish, can exhaust the pool. Always finish manually created transactions with await trx.commit() or await trx.rollback().
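
The managed (callback) form of Lucid transactions commits on success and rolls back automatically if the callback throws, which makes leaked connections much less likely; a sketch with hypothetical table names:

import Database from '@ioc:Adonis/Lucid/Database'

export async function createOrder(userId: number) {
  // Commits when the callback resolves, rolls back if it throws, and in
  // both cases returns the connection to the pool
  await Database.transaction(async (trx) => {
    await trx.insertQuery().table('orders').insert({ user_id: userId })
    await trx.insertQuery().table('order_events').insert({ type: 'created' })
  })
}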

3. What is the correct way to share data across requests?

Use Redis, cache services, or database-backed state. Avoid global variables or singleton patterns with mutable values.

4. How can I monitor memory usage over time?

Use process.memoryUsage() or external tools like PM2, Clinic.js, or Node.js inspector to detect heap growth and leaks.

5. Should I use PM2 with clustering for AdonisJS?

Yes, but configure it carefully. Ensure each worker handles connections independently and that no shared state exists in boot files or providers.