Understanding Hanami in Enterprise Contexts
Architecture Overview
Hanami embraces a clean architecture approach by separating concerns into slices (contexts), encouraging testable and maintainable code. Components such as entities, repositories, views, and actions are decoupled, promoting clear boundaries.
Why Enterprises Choose Hanami
- Lightweight runtime and fast boot times
- Built-in thread-safety and concurrency
- Clear separation of concerns
- Compatibility with dry-rb and rom-rb ecosystems
Common Issues in Large-Scale Hanami Systems
1. Misconfigured Autoloading in Containerized Deployments
Hanami uses Zeitwerk-style autoloading under the hood. In Dockerized or CI environments, file watches and eager loading configurations can break due to volume mounts or incorrect path resolution.
2. Inconsistent Dependency Injection (DI)
Hanami ships with its own DI container; services are registered via Hanami.app.register or providers. Mismanaging shared state, especially DB connections or cache clients, leads to thread leaks or stale data in multithreaded servers.
3. Database Transaction Isolation Issues
When using rom-rb with concurrent actions or background jobs, writes that skip a shared unit-of-work boundary end up in separate, uncoordinated transactions, producing phantom reads or partially committed state, especially with PostgreSQL or MySQL under load.
4. Routing Conflicts in Multi-Slice Applications
In multi-slice applications, duplicated route names or unmounted routes result in 404s, especially if the slice is not explicitly mounted or scoped correctly in config/routes.rb.
Diagnosing Hanami Runtime Issues
1. Enable Verbose Logging and Backtraces
Use environment variables to enable detailed logs for framework internals:
HANAMI_LOG_LEVEL=debug HANAMI_ENV=development bundle exec hanami server
2. Analyze Dependency Graph
Use Hanami's container inspector to trace service registrations and resolution failures.
Hanami.app.container.keys.each { |k| puts k }
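To narrow this down, grep the keys for a namespace, or resolve a single key directly to reproduce a failure in isolation (the "repositories." prefix and key below are hypothetical examples):
Hanami.app.container.keys.grep(/\Arepositories\./).each { |k| puts k }
Hanami.app.container["repositories.users"]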
3. Monitor Database Pooling and Connection Leaks
Enable ActiveSupport::Notifications or integrate rack-mini-profiler to detect long-held or leaked DB connections.
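As one concrete option, a small Rack middleware can report pool pressure per request. This is a minimal sketch assuming a Sequel-backed ROM setup where the Sequel Database object is registered under a hypothetical "persistence.db" key:
# Logs a warning when the pool has grown to its maximum size
# (a sign of connection pressure or leaked connections)
class DbPoolMonitor
  def initialize(app)
    @app = app
  end

  def call(env)
    response = @app.call(env)
    pool = Hanami.app["persistence.db"].pool
    if pool.size >= pool.max_size
      Hanami.app["logger"].warn("DB pool at limit after #{env["PATH_INFO"]}: #{pool.size}/#{pool.max_size} connections open")
    end
    response
  end
end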
4. Validate Slice Routing
List active routes to confirm proper mounting:
bundle exec hanami routes
Fixes and Workarounds
1. Configure Eager Load Paths for Containers
Ensure eager loading is enabled in production builds to avoid lazy-load latency and surface missing-constant errors at boot:
config.autoloader = :zeitwerk
config.eager_load_paths << "./lib"
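With eager loading on, autoload mistakes surface at boot instead of at request time; a quick CI smoke test is to boot the app outside the server process (assuming a standard Hanami 2.x layout where config.ru requires hanami/boot):
HANAMI_ENV=production bundle exec ruby -r hanami/boot -e 'puts "boot OK"'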
2. Properly Register Dependencies
Inject shared services (e.g., Redis, DB clients) via the app container, and scope them appropriately:
Hanami.app.register("redis", Redis.new(url: ENV["REDIS_URL"]))
3. Use Unit-of-Work Wrappers for DB Operations
Encapsulate write-heavy workflows inside ROM transactions:
ROM::SQL::Relation.transaction do
  repo.create(user_data)
  audit_log.write(event)
end
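When a specific isolation level matters, it can be requested explicitly at the Sequel layer; a sketch assuming db is the Sequel Database object behind the ROM gateway:
# Explicit isolation level on the underlying Sequel connection (db is an assumption here)
db.transaction(isolation: :serializable) do
  repo.create(user_data)
  audit_log.write(event)
end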
4. Namespace and Mount Routes Explicitly
Use scoped routing to prevent path collisions:
mount Admin::Slice, at: "/admin"
mount Api::Slice, at: "/api"
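In Hanami 2.x, the same result is usually achieved from config/routes.rb with the slice helper (the app and slice names below are illustrative):
# config/routes.rb — MyApp, :admin and :api are placeholder names
module MyApp
  class Routes < Hanami::Routes
    slice :admin, at: "/admin"
    slice :api, at: "/api"
  end
end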
Best Practices for Hanami in the Enterprise
- Separate service logic and delivery mechanisms via slice isolation
- Use contracts (via dry-validation) for strict input schemas; see the sketch after this list
- Integrate Hanami with Sidekiq or Faktory for background processing
- Run container health checks against mounted routes or DI keys
- Monitor GC time and heap growth in Puma-based deployments
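As an illustration of the contracts point above, a minimal dry-validation contract could look like this (class and field names are placeholders):
require "dry/validation"

# Placeholder contract; adjust fields and rules to the actual input schema
class CreateUserContract < Dry::Validation::Contract
  params do
    required(:email).filled(:string)
    required(:age).filled(:integer, gteq?: 18)
  end
end

result = CreateUserContract.new.call(email: "jane@example.com", age: 30)
result.success?    # => true
result.errors.to_h # => {} when valid, field-keyed messages otherwise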
Conclusion
Hanami offers architectural clarity and modularity rarely found in other Ruby frameworks, making it a strong choice for enterprise-grade back-end systems. However, teams must account for autoloading pitfalls, DI lifecycle management, and slice-level isolation when scaling Hanami apps. With disciplined container usage, proper observability, and robust routing strategies, Hanami can serve as the backbone of performant, maintainable systems.
FAQs
1. Is Hanami thread-safe in multi-core environments?
Yes, Hanami's design promotes thread safety. But shared resource management must still be handled carefully, especially for DB and caches.
2. Can I use ActiveRecord with Hanami?
While possible, it's not idiomatic. Hanami prefers ROM for data mapping and dry-rb for validations; mixing in ActiveRecord can complicate the architecture.
3. How do I run background jobs in Hanami?
Use Sidekiq or Faktory as separate workers. Mount job processors within the Hanami container and isolate them from HTTP server lifecycles.
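A minimal sketch of that separation, assuming Sidekiq 6.3+ and a booted Hanami app; the job class and container key are hypothetical:
require "sidekiq"

# Hypothetical job; "repositories.accounts" stands in for a real registration key
class SyncAccountJob
  include Sidekiq::Job

  def perform(account_id)
    accounts = Hanami.app["repositories.accounts"]
    accounts.sync!(account_id)
  end
end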
4. Does Hanami support API-only applications?
Yes. You can disable view and template rendering and focus solely on actions that return JSON, ideal for microservices or BFFs.
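For example, a JSON-only health check action could look like this (module nesting is illustrative, assuming Hanami 2.x actions):
require "json"

# Illustrative slice layout; returns a JSON body with no view or template involved
module API
  module Actions
    module Health
      class Show < Hanami::Action
        def handle(_request, response)
          response.format = :json
          response.body = JSON.generate(status: "ok")
        end
      end
    end
  end
end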
5. What logging strategy works best with Hanami?
Combine Hanami's logger with Lograge or a structured logger like SemanticLogger, especially when running in containerized or cloud-native environments.