Rails Database Connection Pooling Explained

· 14 min read

Every Rails application that handles multiple concurrent requests needs an efficient way to manage database connections. Without proper connection management, your app could create a new database connection for every request, quickly overwhelming your database server. This is where connection pooling comes in.

The Problem: Why Connection Pooling Matters

Creating database connections is expensive. Each new connection requires:

  • Network handshake between app and database
  • Authentication verification
  • Memory allocation on both client and server
  • Connection state initialization
# Without connection pooling (this is what would happen internally)
def handle_request
  # This takes 20-50ms each time!
  connection = PG.connect(
    host: 'localhost',
    dbname: 'myapp_production',
    user: 'rails',
    password: 'secret'
  )

  result = connection.exec("SELECT * FROM users WHERE id = 1")
  connection.close

  result
end

For a busy application handling 100 requests per second, creating fresh connections would add 2-5 seconds of overhead every second – clearly unsustainable.
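
The arithmetic behind that claim, as a quick sanity check in plain Ruby (numbers taken from the 20-50ms estimate above):

```ruby
# 100 requests/second, each paying 20-50ms of connection setup,
# accumulates 2-5 seconds of setup work every wall-clock second.
requests_per_second = 100
setup_cost_ms = [20, 50] # low and high estimates from above

overhead_per_second = setup_cost_ms.map { |ms| requests_per_second * ms / 1000.0 }
# => [2.0, 5.0] seconds of pure connection setup, per second
```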

How Rails Implements Connection Pooling

Rails solves this through ActiveRecord’s built-in connection pool. Instead of creating new connections for each request, Rails maintains a pool of reusable connections.

Important: Each Rails process maintains its own independent connection pool. If you’re running 5 Puma workers, you’ll have 5 separate pools, not one shared pool.

Critical Understanding: The pool size in your configuration is a maximum limit, not a pre-allocated number. Rails creates connections lazily - only when they’re actually needed. Setting pool: 100 doesn’t create 100 connections on startup; it just allows Rails to create up to 100 connections if demand requires it.

# config/database.yml
production:
  adapter: postgresql
  database: myapp_production
  pool: 5             # Maximum number of connections in the pool
  checkout_timeout: 5 # Seconds to wait for a free connection

When a request needs a database connection:

class UsersController < ApplicationController
  def show
    # ActiveRecord automatically checks out a connection from the pool
    @user = User.find(params[:id])
    # Connection is automatically returned to the pool after the request
  end
end

Here’s what happens behind the scenes:

# Simplified version of what ActiveRecord does
def with_connection
  connection = connection_pool.checkout  # Get connection from pool
  yield connection                       # Use it for queries
ensure
  # Return to pool (guard against checkout itself having failed)
  connection_pool.checkin(connection) if connection
end
[Sequence diagram: a thread requests a connection (checkout), the pool finds an available one and marks it "in use"; the thread runs its query against the database; the thread then returns the connection (checkin) and the pool marks it "available" for the next thread.]

Understanding Checkout and Checkin:

  • Checkout: When a thread needs to run a query, it “checks out” a connection from the pool, like borrowing a book from a library. The connection is marked as “in use” and becomes unavailable to other threads.

  • Checkin: After the thread finishes its database work, it “checks in” the connection back to the pool, making it available for other threads to use. The connection stays open to the database but is now free to be borrowed again.

This borrow-and-return cycle happens automatically for every ActiveRecord query, ensuring connections are efficiently shared between requests without the overhead of creating new ones.
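
The cycle can be sketched with a thread-safe queue in plain Ruby. This is a toy model, not ActiveRecord's actual implementation (the real pool also creates connections lazily, tracks thread ownership, and enforces a checkout timeout):

```ruby
# Toy pool: checkout pops a connection from a thread-safe queue,
# checkin pushes it back. A thread that checks out while the queue
# is empty simply blocks until another thread checks in.
class TinyPool
  def initialize(size)
    @connections = Queue.new
    size.times { |i| @connections << "conn-#{i}" } # stand-ins for real connections
  end

  def checkout
    @connections.pop # blocks if every connection is in use
  end

  def checkin(conn)
    @connections << conn
  end

  def with_connection
    conn = checkout
    yield conn
  ensure
    checkin(conn) if conn
  end
end

pool = TinyPool.new(2)
used = 10.times.map { pool.with_connection { |conn| conn } }
used.uniq.sort # => ["conn-0", "conn-1"]
```

Because checked-in connections go back into the queue, ten sequential borrows are served by the same two underlying connections.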

The Rails 7.2 Revolution

Rails 7.2 fundamentally changed how connections are managed, making precise pool calculations obsolete:

Before Rails 7.2:

def show
  # Thread checks out connection at request start
  @user = User.find(1)        # Uses connection A
  @posts = @user.posts        # Still using connection A
  @comments = Comment.recent  # Still using connection A
  # Connection returned at request end
end

Rails 7.2 and later:

def show
  @user = User.find(1)        # Uses connection A, returns it immediately
  @posts = @user.posts        # Might use connection B, returns it immediately
  @comments = Comment.recent  # Might use connection C, returns it immediately
end

This per-query connection handling means:

  • Connections are utilized far more efficiently
  • A pool of 5 can serve many more than 5 concurrent requests
  • Calculating exact pool requirements becomes nearly impossible

Understanding Pool Configuration

The pool setting in database.yml controls the maximum number of connections your app can maintain:

# config/database.yml
production:
  adapter: postgresql
  database: myapp_production
  pool: <%= ENV.fetch("RAILS_MAX_THREADS") { 5 } %>
  checkout_timeout: 5 # Seconds to wait for a free connection

Key configuration options:

  • pool: Maximum connections (default: 5)
  • checkout_timeout: Seconds to wait for a free connection before ActiveRecord::ConnectionTimeoutError is raised (default: 5)
  • idle_timeout: Seconds before an idle connection becomes eligible for reaping (default: 300)
  • reaping_frequency: Seconds between reaper runs (default: 60)

Note: the timeout key you may see in generated database.yml files is the SQLite busy timeout in milliseconds; the pool's wait time is controlled by checkout_timeout, in seconds.

Rails automatically runs a reaper thread that periodically removes connections that are dead or have been idle too long. This prevents your pool from filling up with unusable connections and helps maintain optimal resource usage.
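
The reaping rule itself is simple: discard anything idle longer than idle_timeout. Here is a sketch of that rule with made-up field names for illustration (the real reaper also removes dead connections and runs on a background thread):

```ruby
IdleConn = Struct.new(:id, :last_used_at)

# Keep only connections used within the last idle_timeout seconds.
def reap_idle(connections, idle_timeout:, now: Time.now)
  connections.reject { |c| now - c.last_used_at > idle_timeout }
end

now = Time.now
pool = [
  IdleConn.new(1, now - 10),  # used 10s ago: kept
  IdleConn.new(2, now - 400), # idle 400s, past the 300s default: reaped
  IdleConn.new(3, now - 50)   # used 50s ago: kept
]
survivors = reap_idle(pool, idle_timeout: 300, now: now)
survivors.map(&:id) # => [1, 3]
```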

Understanding Pool Size vs Actual Connections

Before diving into issues, it’s crucial to understand that your pool size setting and actual database connections are different things:

# This configuration:
production:
  pool: 100

# Does NOT mean:
# ❌ "Create 100 database connections on startup"
# ❌ "Always maintain 100 open connections"

# It actually means:
# ✅ "Allow up to 100 connections IF needed"
# ✅ "Create connections lazily as demand requires"
# ✅ "Automatically close idle connections"

This is why the emerging best practice is to set the pool size high (like 100) and let Rails manage the actual connections based on demand.

Common Connection Pool Issues

1. Pool Exhaustion

The most common issue is running out of connections:

# This error means all connections are in use
ActiveRecord::ConnectionTimeoutError: could not obtain a connection from the pool within 5.000 seconds

This happens when:

  • Your pool size is smaller than your thread count
  • Long-running queries hold connections
  • Connections leak due to improper handling

Debugging pool exhaustion:

# Check current pool status
ActiveRecord::Base.connection_pool.stat
# => { size: 5, connections: 5, busy: 5, dead: 0, idle: 0, waiting: 2, checkout_timeout: 5 }

# See which connections are checked out and by which thread
ActiveRecord::Base.connection_pool.connections.map do |conn|
  {
    in_use: conn.in_use?,
    owner: conn.owner
  }
end
[Diagram: a pool of five connections (two busy, two idle, one dead) while two more threads sit in a waiting queue for an available connection.]

2. Thread Count Mismatch

Your pool size must accommodate your web server’s thread configuration:

# config/puma.rb
threads_count = ENV.fetch("RAILS_MAX_THREADS") { 5 }
threads threads_count, threads_count

# config/database.yml
production:
  pool: <%= ENV.fetch("RAILS_MAX_THREADS") { 5 } %>  # Should match!

If threads > pool size, some threads may timeout waiting for connections.

[Diagram: five Puma threads contending for a pool of three connections. Threads 1-3 hold connections while threads 4 and 5 wait, eventually raising ConnectionTimeoutError after the 5-second checkout timeout.]

The fix: Ensure your pool size matches or exceeds your thread count.
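
The failure mode is easy to reproduce with plain Ruby threads and a one-slot toy pool; in a real app the blocked threads would surface this as ActiveRecord::ConnectionTimeoutError:

```ruby
require "timeout"

pool = Queue.new
pool << :only_connection # a "pool" of size 1

timeouts = Queue.new
threads = 3.times.map do
  Thread.new do
    conn = Timeout.timeout(0.05) { pool.pop } # wait up to 50ms for checkout
    sleep 0.2                                 # hold the connection longer than that
    pool << conn                              # checkin
  rescue Timeout::Error
    timeouts << :timed_out # analogous to ConnectionTimeoutError
  end
end
threads.each(&:join)
timeouts.size # => 2  (three threads, one connection, two losers)
```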

3. Connection Leaks

Manually checking out connections without returning them causes leaks:

# BAD: Connection leak!
def process_large_dataset
  conn = ActiveRecord::Base.connection_pool.checkout
  conn.execute("SELECT * FROM huge_table")
  # Forgot to check in the connection!
end

# GOOD: Proper connection handling
def process_large_dataset
  ActiveRecord::Base.connection_pool.with_connection do |conn|
    conn.execute("SELECT * FROM huge_table")
  end  # Connection automatically returned
end

For more details on proper connection management patterns, see our post on Rails with_connection: The better way to manage database connections.

Advanced Connection Pool Management

Multiple Database Connections

Rails 6+ supports multiple databases with separate pools:

# config/database.yml
production:
  primary:
    adapter: postgresql
    database: myapp_production
    pool: 25

  analytics:
    adapter: postgresql
    database: myapp_analytics
    pool: 10

# app/models/analytics_base.rb
class AnalyticsBase < ApplicationRecord
  self.abstract_class = true
  connects_to database: { writing: :analytics, reading: :analytics }
end

Why separate pools matter here:

With a single pool, long-running analytics queries would check out connections for extended periods, starving your web requests. By using separate pools:

  • Your main app maintains 25 connections for quick user requests
  • Analytics gets its own 10 connections that can be held longer without impacting users
  • Connection timeouts can be configured differently (5 seconds for web, 30 seconds for analytics)
  • If analytics exhausts its pool, your main application continues serving users normally

This isolation prevents one workload from monopolizing all available connections.
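
Mechanically, the isolation is nothing more than two queues that know nothing about each other, as this toy sketch shows:

```ruby
# Two independent "pools": draining one has no effect on the other.
primary_pool = Queue.new
3.times { |i| primary_pool << "primary-#{i}" }

analytics_pool = Queue.new
2.times { |i| analytics_pool << "analytics-#{i}" }

# A burst of long reports checks out every analytics connection...
held = Array.new(2) { analytics_pool.pop }

# ...yet web requests still get primary connections instantly.
web_conn = primary_pool.pop
analytics_pool.empty? # => true, but the primary pool is untouched
```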

[Diagram: User/Product models go through the primary pool (25 connections, quick transactions) to the primary database, while Analytics/Report models use a separate 10-connection pool against the analytics database.]

Connection Pool Middleware

While less critical with high pool limits, monitoring actual connection usage is still valuable:

# app/middleware/connection_pool_monitor.rb
class ConnectionPoolMonitor
  def initialize(app)
    @app = app
  end

  def call(env)
    pool = ActiveRecord::Base.connection_pool
    
    @app.call(env)
  ensure
    stats = pool.stat
    
    # Focus on actual connections, not pool limits
    if stats[:connections] > 50  # Arbitrary threshold
      Rails.logger.info "High connection count: #{stats[:connections]} actual connections"
    end
    
    # Check for connection leaks
    if stats[:dead] > 0
      Rails.logger.warn "Dead connections detected: #{stats[:dead]}"
    end
  end
end

What to monitor with modern pooling:

  • Actual connections created: stats[:connections] tells you real usage
  • Dead connections: Indicates potential connection issues
  • Database-side metrics: Monitor pg_stat_activity or equivalent
  • Query performance: Slow queries holding connections are the real problem

With pool: 100, you’ll rarely see pool exhaustion. Instead, focus on:

Actual connections: 23 (pool allows 100)
Database max_connections: 100 (67% headroom)
Average query time: 5ms

Optimizing Connection Pool Performance

1. Stop Calculating - Just Set It High

The modern approach to pool sizing is surprisingly simple:

# config/database.yml
production:
  adapter: postgresql
  database: myapp_production
  pool: 100  # Set it high and forget about it
  checkout_timeout: 5

Why this works:

  • Rails creates connections lazily (only when needed)
  • Unused connections are automatically reaped
  • No performance penalty for a high limit
  • Eliminates connection timeout errors
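
Lazy creation is what makes a high limit free. A single-threaded toy sketch of the idea (the real pool is thread-safe and blocks on checkout instead of raising when the limit is hit):

```ruby
# Toy lazy pool: a high maximum costs nothing until demand arrives.
class LazyPool
  attr_reader :created

  def initialize(max)
    @max = max
    @created = 0
    @available = []
  end

  def checkout
    return @available.pop unless @available.empty?
    raise "pool exhausted" if @created >= @max
    @created += 1
    "conn-#{@created}" # connection built only now, on first demand
  end

  def checkin(conn)
    @available << conn
  end
end

pool = LazyPool.new(100)            # "pool: 100", but zero connections yet
conns = 3.times.map { pool.checkout }
pool.created # => 3  (demand created 3 connections, not 100)
```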

What about total database connections?

The only real limit you need to monitor is your database’s max_connections:

# Check PostgreSQL max connections
ActiveRecord::Base.connection.execute("SHOW max_connections").first
# => {"max_connections"=>"100"}

# Monitor actual connections in use
ActiveRecord::Base.connection.execute("
  SELECT count(*) FROM pg_stat_activity 
  WHERE datname = 'myapp_production'
").first
# => {"count"=>"23"}  # Only 23 connections actually created!

Even with pool: 100 across multiple processes, Rails will only create the connections it actually needs.

2. Use Read Replicas

Distribute load across multiple databases:

# config/database.yml
production:
  primary:
    adapter: postgresql
    database: myapp_production
    pool: 15

  primary_replica:
    adapter: postgresql
    database: myapp_production
    host: replica.example.com
    pool: 10
    replica: true

# With the automatic database selector configured, reads from GET/HEAD
# requests use the replica while writes always go to the primary
User.where(active: true).to_a  # Replica (inside a reading session)
User.create!(name: "New")      # Primary

For a comprehensive guide on implementing read replicas, see our series starting with Scaling Rails with PostgreSQL Read Replicas: Part 1 - Understanding the Basics.

3. Monitor Actual Usage, Not Pool Limits

Shift your monitoring focus:

# config/initializers/connection_monitoring.rb
module ConnectionMonitoring
  def self.check_database_connections
    # Monitor actual connections at the database
    result = ActiveRecord::Base.connection.execute("
      SELECT count(*) as total,
             count(*) FILTER (WHERE state = 'active') as active
      FROM pg_stat_activity
      WHERE datname = current_database()
    ").first

    total  = result['total'].to_i   # pg returns counts as strings
    active = result['active'].to_i

    StatsD.gauge('db.connections.total', total)
    StatsD.gauge('db.connections.active', active)

    # Alert on database limits, not pool limits
    max_conn = ActiveRecord::Base.connection.execute(
      "SHOW max_connections"
    ).first['max_connections'].to_i

    if total > max_conn * 0.8
      Rails.logger.warn "Approaching database connection limit: #{total}/#{max_conn}"
    end
  end
end

Testing Connection Pool Behavior

Write tests to verify your pool configuration:

# spec/connection_pool_spec.rb
RSpec.describe "Connection Pool" do
  it "handles concurrent requests without exhaustion" do
    threads = []
    errors = []

    20.times do
      threads << Thread.new do
        begin
          User.connection_pool.with_connection do
            User.count
            sleep 0.1  # Simulate work
          end
        rescue ActiveRecord::ConnectionTimeoutError => e
          errors << e
        end
      end
    end

    threads.each(&:join)

    expect(errors).to be_empty
    expect(User.connection_pool.stat[:waiting]).to eq(0)
  end
end

The Modern Approach: Simplify Your Connection Strategy

Just Set It High

# Old approach - trying to calculate the "perfect" size:
pool: <%= ENV.fetch("RAILS_MAX_THREADS") { 5 } %>

# Modern approach - set it high and let Rails manage:
pool: 100

Monitor What Actually Matters

Focus on real metrics, not pool limits:

# Monitor actual database connections
ActiveRecord::Base.connection.execute("
  SELECT state, count(*) 
  FROM pg_stat_activity 
  WHERE datname = current_database()
  GROUP BY state
")
# => [{"state"=>"active", "count"=>"3"}, {"state"=>"idle", "count"=>"20"}]

# Check if approaching database limits
ActiveRecord::Base.connection.execute("
  SELECT setting::int - count(*) as connections_available
  FROM pg_settings, pg_stat_activity
  WHERE name = 'max_connections'
  GROUP BY setting::int
").first
# => {"connections_available"=>77}

When You Actually Need to Worry

Database connection limits: The only real constraint

-- PostgreSQL default: 100 connections
-- If you have 10 servers with pool: 100, that's a theoretical 1000 connections
-- But Rails will only create what it needs

Slow queries: The real culprit behind “connection exhaustion”

  • A query taking 30 seconds holds a connection for 30 seconds
  • Fix the query, not the pool size
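
A back-of-envelope formula makes the point: the most queries per second a pool can serve is pool_size divided by query time, so slow queries cap throughput no matter how large the pool is.

```ruby
# Upper bound on queries/second a pool can serve: each connection
# handles 1/query_time queries, times pool_size connections.
def max_throughput(pool_size, query_seconds)
  pool_size / query_seconds
end

max_throughput(5, 0.005) # roughly 1000 queries/second with 5ms queries
max_throughput(5, 30.0)  # under 0.2 queries/second with 30s queries
```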

Connection pooling in Rails has evolved from a complex optimization challenge to a simple configuration choice. Set your pool size high, let Rails manage the connections intelligently, and focus your efforts on query performance and database-side limits.

The Bottom Line

Stop calculating pool sizes. Set pool: 100 and move on to solving real problems. Rails’ lazy connection creation and automatic management make this approach both safe and optimal. The Rails core team is even moving towards removing pool limits entirely.

Focus your monitoring on actual database connections and query performance, not arbitrary pool limits.


Prateek Choudhary
Technology Leader