A production-ready PostgreSQL storage adapter for LTI 1.3, with built-in caching and optimizations for AWS Lambda.
```shell
npm install @lti-tool/postgresql
```
```typescript
import { PostgresStorage } from '@lti-tool/postgresql';
import { LTITool } from '@lti-tool/core';

const storage = new PostgresStorage({
  connectionUrl: process.env.DATABASE_URL!,
});

const ltiTool = new LTITool({
  storage,
  // ... other config
});
```
```shell
# Set your DATABASE_URL
export DATABASE_URL="postgresql://user:password@host:port/database"

# Push schema to database
npx drizzle-kit push

# Apply migrations
npx drizzle-kit migrate
```
- `connectionUrl` (required): PostgreSQL connection URL in the format `postgresql://user:password@host:port/database`
- `poolOptions` (optional): postgres.js connection options
  - `max`: maximum connections (auto-detected: 1 for serverless, 10 for servers)
  - `idleTimeout`: idle timeout in seconds before a connection is closed (default: 20)
- `nonceExpirationSeconds` (optional): nonce TTL in seconds (default: 600)
- `logger` (optional): Pino logger for debugging
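Put together, the options above can be pictured as the following shape. This is an illustrative sketch: the interface name `PostgresStorageOptions` and the fallback URL are assumptions, not part of the package's published API; only the field names and defaults come from the list above.

```typescript
// Illustrative shape of the adapter's options; the interface name is assumed.
interface PostgresStorageOptions {
  connectionUrl: string;            // required: postgresql://user:password@host:port/database
  poolOptions?: {
    max?: number;                   // default: 1 on serverless, 10 on servers
    idleTimeout?: number;           // seconds before an idle connection closes (default: 20)
  };
  nonceExpirationSeconds?: number;  // nonce TTL in seconds (default: 600)
  logger?: unknown;                 // optionally a Pino logger instance
}

const options: PostgresStorageOptions = {
  connectionUrl: process.env.DATABASE_URL ?? 'postgresql://user:password@localhost:5432/lti',
  poolOptions: { max: 10, idleTimeout: 20 },
  nonceExpirationSeconds: 600,
};
```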
The adapter uses these tables:

- Clients: unique on `(iss, clientId)`
- Deployments: unique on `(clientId, deploymentId)`
- Nonces: keyed by the `nonce` value, with an `expiresAt` TTL
- Sessions and registration sessions: indexed on `expiresAt`

All tables use native PostgreSQL UUIDs for primary keys and include indexes for performance.
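As a rough TypeScript view of the rows these tables hold (column names match the schemas below; the interface names are mine, and `string` stands in for UUID columns):

```typescript
// Illustrative row shapes; these interfaces are not part of the package's API.
interface ClientRow {
  id: string;        // UUID primary key
  name: string;      // human-readable platform name
  iss: string;       // issuer URL
  clientId: string;  // LMS-provided client identifier
  authUrl: string;
  tokenUrl: string;
  jwksUrl: string;
}

interface NonceRow {
  nonce: string;     // primary key, one-time value
  expiresAt: Date;
}

interface SessionRow {
  id: string;        // UUID primary key
  data: unknown;     // JSONB payload
  expiresAt: Date;
}

const nonce: NonceRow = { nonce: 'abc123', expiresAt: new Date(Date.now() + 600_000) };
```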
Clients:

| Column | Type | Constraints | Description |
|---|---|---|---|
| `id` | UUID | PRIMARY KEY, NOT NULL | Internal UUID for the client |
| `name` | VARCHAR(255) | NOT NULL | Human-readable platform name |
| `iss` | VARCHAR(255) | NOT NULL | Issuer URL (LMS platform) |
| `clientId` | VARCHAR(255) | NOT NULL | LMS-provided client identifier |
| `authUrl` | TEXT | NOT NULL | OAuth2 authorization endpoint |
| `tokenUrl` | TEXT | NOT NULL | OAuth2 token endpoint |
| `jwksUrl` | TEXT | NOT NULL | JWKS endpoint for public keys |
Indexes:

- `issuer_client_idx`: `(clientId, iss)` - for fast client lookups
- `iss_client_id_unique`: `(iss, clientId)` - unique constraint preventing duplicate clients

Deployments:

| Column | Type | Constraints | Description |
|---|---|---|---|
| `id` | UUID | PRIMARY KEY, NOT NULL | Internal UUID for the deployment |
| `deploymentId` | VARCHAR(255) | NOT NULL | LMS-provided deployment identifier |
| `name` | VARCHAR(255) | NULL | Optional human-readable name |
| `description` | TEXT | NULL | Optional description |
| `clientId` | UUID | NOT NULL, FOREIGN KEY | References `clients.id` |
Indexes:

- `deployment_id_idx`: `(deploymentId)` - for fast deployment lookups
- `client_deployment_unique`: `(clientId, deploymentId)` - unique constraint per client

Sessions:

| Column | Type | Constraints | Description |
|---|---|---|---|
| `id` | UUID | PRIMARY KEY, NOT NULL | Session UUID |
| `data` | JSONB | NOT NULL | Complete LTI session data |
| `expiresAt` | TIMESTAMP WITH TIME ZONE | NOT NULL | Session expiration timestamp |
Indexes:

- `sessions_expires_at_idx`: `(expiresAt)` - for cleanup queries and expiration checks

Nonces:

| Column | Type | Constraints | Description |
|---|---|---|---|
| `nonce` | VARCHAR(255) | PRIMARY KEY, NOT NULL | One-time use nonce value |
| `expiresAt` | TIMESTAMP WITH TIME ZONE | NOT NULL | Nonce expiration timestamp |
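The nonces table above supports one-time use with a TTL. A minimal in-memory sketch of that check-and-consume flow (illustrative only; the adapter does this against PostgreSQL, and the class here is not part of its API):

```typescript
// In-memory sketch of one-time-nonce checking with a TTL; not the adapter's code.
class NonceStore {
  private nonces = new Map<string, Date>();

  constructor(private ttlSeconds = 600) {} // mirrors nonceExpirationSeconds

  // Returns true the first time a nonce is seen (and records it); false on replay.
  checkAndStore(nonce: string, now = new Date()): boolean {
    const expiresAt = this.nonces.get(nonce);
    if (expiresAt && expiresAt > now) return false; // replay within TTL: reject
    this.nonces.set(nonce, new Date(now.getTime() + this.ttlSeconds * 1000));
    return true;
  }
}

const store = new NonceStore();
store.checkAndStore('abc'); // → true (first use)
store.checkAndStore('abc'); // → false (replay rejected)
```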
Registration sessions:

| Column | Type | Constraints | Description |
|---|---|---|---|
| `id` | UUID | PRIMARY KEY, NOT NULL | Registration session UUID |
| `data` | JSONB | NOT NULL | Dynamic registration session data |
| `expiresAt` | TIMESTAMP WITH TIME ZONE | NOT NULL | Session expiration timestamp |
Indexes:

- `reg_sessions_expires_at_idx`: `(expiresAt)` - for cleanup queries and expiration checks

The adapter automatically detects your deployment environment:

- Serverless environments default to `max: 1`
- Traditional servers default to `max: 10`
- You can override `max` based on your concurrent load
- Idle connections are closed after `idleTimeout` seconds (default: 20)

```typescript
const storage = new PostgresStorage({
  connectionUrl: process.env.DATABASE_URL!,
  poolOptions: {
    max: 20, // Override auto-detection
    idleTimeout: 30, // Keep connections alive longer
  },
});
```
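The default-versus-override behavior can be pictured as a small merge step. This is a sketch of the documented behavior, not the adapter's actual internals; `resolvePoolOptions` is a made-up name:

```typescript
// Sketch: how defaults and user overrides might combine; illustrative only.
interface PoolOptions {
  max?: number;
  idleTimeout?: number;
}

function resolvePoolOptions(
  isServerless: boolean,
  overrides: PoolOptions = {},
): Required<PoolOptions> {
  return {
    max: overrides.max ?? (isServerless ? 1 : 10), // serverless defaults to a single connection
    idleTimeout: overrides.idleTimeout ?? 20,      // seconds before idle connections close
  };
}

resolvePoolOptions(true);               // → { max: 1, idleTimeout: 20 }
resolvePoolOptions(false, { max: 20 }); // → { max: 20, idleTimeout: 20 }
```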
```typescript
import { PostgresStorage } from '@lti-tool/postgresql';

export const storage = new PostgresStorage({
  connectionUrl: process.env.DATABASE_URL!,
});

// Optional: Graceful shutdown
const shutdown = async () => {
  await storage.close();
  process.exit(0);
};

process.on('SIGTERM', shutdown);
process.on('SIGINT', shutdown);
```
Connection limits: typical pool sizes range from 5-10 connections for light workloads through 10-20 up to 20-50 for heavier ones. Keep the total across all application instances below PostgreSQL's `max_connections` setting.

```typescript
import { PostgresStorage } from '@lti-tool/postgresql';

let storage: PostgresStorage | undefined;

export const handler = async (event: unknown) => {
  if (!storage) {
    storage = new PostgresStorage({
      connectionUrl: process.env.DATABASE_URL!,
      // Auto-detects Lambda, uses max: 1
    });
  }
  // Use storage...
};
```
Why `max: 1`? Lambda containers handle one request at a time, and the single connection is reused across warm invocations.

Do I need `close()`? No: Lambda freezes containers between invocations, so calling `close()` would destroy a reusable connection.
⚠️ Automatic cleanup is not supported: the adapter requires periodic cleanup of expired nonces and sessions.
```typescript
// Example - AWS Lambda scheduled via EventBridge (every 30 minutes)
import { PostgresStorage } from '@lti-tool/postgresql';

const storage = new PostgresStorage({
  connectionUrl: process.env.DATABASE_URL!,
});

export const handler = async () => {
  const result = await storage.cleanup();
  console.log('Cleanup:', result);
  // { noncesDeleted: 42, sessionsDeleted: 15, registrationSessionsDeleted: 3 }
};
```
```shell
# Using Docker
docker-compose up -d

# Using Podman
podman-compose up -d

# Or Podman directly
podman run -d \
  --name lti-postgres \
  -e POSTGRES_PASSWORD=postgres \
  -e POSTGRES_DB=lti_test \
  -e POSTGRES_USER=lti_user \
  -p 5432:5432 \
  postgres:16
```
```shell
DATABASE_URL="postgresql://lti_user:postgres@localhost:5432/lti_test" npm test
```
Important: Always close the pool after tests:
```typescript
afterAll(async () => {
  await storage.close();
});
```
Auto-detects serverless by checking:

- AWS Lambda: `AWS_LAMBDA_FUNCTION_NAME`, `AWS_EXECUTION_ENV`
- Google Cloud Functions / Cloud Run: `FUNCTION_NAME`, `K_SERVICE`
- Azure Functions: `FUNCTIONS_WORKER_RUNTIME`
- Vercel: `VERCEL`
- Netlify: `NETLIFY`
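The detection above amounts to checking for any of those environment variables. A minimal sketch (the function name `isServerlessEnv` is mine; the adapter's actual implementation may differ):

```typescript
// Sketch of env-var-based serverless detection, per the list above; illustrative only.
function isServerlessEnv(
  env: Record<string, string | undefined> = process.env,
): boolean {
  const markers = [
    'AWS_LAMBDA_FUNCTION_NAME', 'AWS_EXECUTION_ENV', // AWS Lambda
    'FUNCTION_NAME', 'K_SERVICE',                    // Google Cloud Functions / Cloud Run
    'FUNCTIONS_WORKER_RUNTIME',                      // Azure Functions
    'VERCEL',                                        // Vercel
    'NETLIFY',                                       // Netlify
  ];
  return markers.some((name) => Boolean(env[name]));
}

isServerlessEnv({ AWS_LAMBDA_FUNCTION_NAME: 'my-fn' }); // → true
isServerlessEnv({});                                    // → false
```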