In an application using Spring Data I noticed frequent warning lines and unstable behaviour, occasionally even lock-ups, with the following message:
com.zaxxer.hikari.pool.ProxyConnection checkException
WARNING: HikariPool-1 - Connection oracle.jdbc.driver.T4CConnection@456d6c1e marked as broken because of SQLSTATE(08003), ErrorCode(17008)
java.sql.SQLRecoverableException: Closed Connection
at oracle.jdbc.driver.PhysicalConnection.getAutoCommit(PhysicalConnection.java:2089)
This is a Connection Eviction Warning.
In simple terms, your application (using HikariCP) tried to use a database connection that it thought was valid, but when it reached out to the Oracle database, it found the connection was already dead (“Closed”).
Here is the breakdown of the error and how to fix it.
1. Decoding the Error Logs
- SQLSTATE(08003): The standard SQL state for “Connection does not exist.”
- ErrorCode(17008): The specific Oracle JDBC driver code for “Closed Connection.”
- WARNING: HikariPool-1 ... marked as broken: This is HikariCP doing its job. It detected the dead connection, removed it from the pool to prevent further errors, and (likely) created a new one to replace it.
- SQLRecoverableException: This indicates that the error is not a programming bug, but a state issue. The application can “recover” by getting a fresh connection (a minimal retry sketch follows below).
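For illustration, here is a minimal sketch of what that recovery looks like at the JDBC level. It assumes direct access to the pooled DataSource, and the runWithRetry helper name is purely illustrative; with Spring Data the retry is usually handled at a higher layer (e.g. spring-retry or a transaction retry policy), but the pattern is the same:

import java.sql.Connection;
import java.sql.SQLRecoverableException;
import java.sql.Statement;
import javax.sql.DataSource;

public class RecoverableQuery {

    // Hypothetical helper: runs a statement and retries once if the pooled
    // connection turns out to be dead (SQLRecoverableException).
    static void runWithRetry(DataSource dataSource, String sql) throws Exception {
        for (int attempt = 1; attempt <= 2; attempt++) {
            // getConnection() hands out a pooled connection; if Hikari has already
            // evicted the broken one, this will be a fresh, valid connection.
            try (Connection con = dataSource.getConnection();
                 Statement st = con.createStatement()) {
                st.execute(sql);
                return;
            } catch (SQLRecoverableException e) {
                if (attempt == 2) {
                    throw e; // still failing on a fresh connection: give up
                }
                // First attempt hit a dead connection; Hikari marks it broken,
                // so simply asking the pool again is usually enough to recover.
            }
        }
    }
}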
2. Why is this happening?
The root cause is usually a timeout mismatch. The connection was closed by an external force (Firewall, Database, or Network) while it was sitting idle in your pool, but HikariCP didn’t know about it until it tried to use it.
Common Causes:
- Firewall/Load Balancer Timeout: A firewall between your app and Oracle silently drops connections that have been idle for too long (e.g., 5 or 10 minutes) without sending a reset packet (RST).
- Oracle IDLE_TIME: The Oracle user profile has a configured IDLE_TIME that disconnects sessions after a period of inactivity (a quick way to check this is sketched after this list).
- Database Restart: The Oracle database service was restarted, invalidating all existing connections in the pool.
- Network Fluctuation: A temporary packet loss caused the socket to close.
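If you suspect the Oracle IDLE_TIME limit, you can check what the user profiles allow. Here is a minimal JDBC sketch, assuming your account can read the DBA_PROFILES view (a DBA can run the same query directly in SQL*Plus); the class and method names are only illustrative:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import javax.sql.DataSource;

public class IdleTimeCheck {

    // Prints the IDLE_TIME limit (in minutes, or UNLIMITED/DEFAULT) for each profile.
    // Requires read access to the DBA_PROFILES data dictionary view.
    static void printIdleTimeLimits(DataSource dataSource) throws Exception {
        String sql = "SELECT profile, limit FROM dba_profiles WHERE resource_name = 'IDLE_TIME'";
        try (Connection con = dataSource.getConnection();
             PreparedStatement ps = con.prepareStatement(sql);
             ResultSet rs = ps.executeQuery()) {
            while (rs.next()) {
                System.out.println(rs.getString("PROFILE") + " -> " + rs.getString("LIMIT"));
            }
        }
    }
}

If the LIMIT for your user’s profile (expressed in minutes) is lower than your pool’s maxLifetime, that is a strong hint the database, not the firewall, is closing your idle connections.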
3. How to Fix It
The most effective solution is tuning your HikariCP configuration to retire connections before the external forces kill them.
A. Adjust maxLifetime (Most Likely Fix)
You must set Hikari’s maxLifetime to be shorter than any timeout configured on the Database or the Firewall.
- Recommendation: If you don’t know the firewall timeout, try setting maxLifetime to 10–15 minutes (600000–900000 ms).
- Rule: maxLifetime < DB/Firewall idle timeout.
B. Enable keepaliveTime
If a firewall is cutting connections simply because they are quiet, you can ask Hikari to “ping” the database periodically to keep the line active.
- Recommendation: Set keepaliveTime to 5 minutes (300000 ms).
C. Configuration Example
If you are using application.properties (Spring Boot), the fix usually looks like this:
# Set max life of a connection to 14 minutes (must be less than DB/Firewall timeout)
spring.datasource.hikari.max-lifetime=840000
# Attempt to keep the connection alive every 5 minutes to satisfy firewalls
spring.datasource.hikari.keepalive-time=300000
# How long to wait for a connection validation before giving up
spring.datasource.hikari.validation-timeout=5000
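If you configure HikariCP directly rather than through Spring Boot properties, the same settings can be applied programmatically. A minimal sketch; the JDBC URL, credentials, and class name are placeholders, and keepaliveTime requires HikariCP 4.0 or newer:

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;
import java.util.concurrent.TimeUnit;

public class PoolSetup {

    static HikariDataSource createDataSource() {
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl("jdbc:oracle:thin:@//db-host:1521/ORCLPDB1"); // placeholder URL
        config.setUsername("app_user");     // placeholder credentials
        config.setPassword("app_password");

        // Retire connections before the firewall/DB idle timeout can kill them.
        config.setMaxLifetime(TimeUnit.MINUTES.toMillis(14));
        // Periodically ping idle connections so a quiet line is not silently dropped.
        config.setKeepaliveTime(TimeUnit.MINUTES.toMillis(5));
        // Fail connection validation quickly instead of hanging on a dead socket.
        config.setValidationTimeout(TimeUnit.SECONDS.toMillis(5));

        return new HikariDataSource(config);
    }
}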
4. Is this Critical?
Not immediately critical, but it needs attention.
- Since the exception is SQLRecoverableException and Hikari logged it as a WARNING, your application likely retried and succeeded immediately after this log entry.
- However, if this happens frequently, it causes latency spikes (users waiting for a new connection to be established) and can eventually exhaust your pool if connections die faster than they are replaced.