fix: Fix TypeScript configuration errors and update project documentation

Details:
- Fixed TypeScript configuration errors in the @n8n/config package
- Removed a reference to the nonexistent jest-expect-message types
- Cleared all TypeScript build caches
- Updated the feasibility analysis document with a technical implementation plan
- Updated the Agent prompt document
- Added the exhibition-planning workflow document
- Included the n8n-chinese-translation subproject
- Added the exhibition-demo showcase system framework
Author: Yep_Q
Date: 2025-09-08 10:49:45 +08:00
parent 8cf9d36d81
commit 3db7af209c
426 changed files with 71699 additions and 4401 deletions


@@ -0,0 +1,205 @@
# AI Workflow Builder Evaluations
This module provides an evaluation framework for testing the AI Workflow Builder's ability to generate correct n8n workflows from natural language prompts.
## Architecture Overview
The evaluation system is split into two distinct modes:
1. **CLI Evaluation** - Runs predefined test cases locally with progress tracking
2. **Langsmith Evaluation** - Integrates with Langsmith for dataset-based evaluation and experiment tracking
### Directory Structure
```
evaluations/
├── cli/ # CLI evaluation implementation
│ ├── runner.ts # Main CLI evaluation orchestrator
│ └── display.ts # Console output and progress tracking
├── langsmith/ # Langsmith integration
│ ├── evaluator.ts # Langsmith-compatible evaluator function
│ └── runner.ts # Langsmith evaluation orchestrator
├── core/ # Shared evaluation logic
│ ├── environment.ts # Test environment setup and configuration
│ └── test-runner.ts # Core test execution logic
├── types/ # Type definitions
│ ├── evaluation.ts # Evaluation result schemas
│ ├── test-result.ts # Test result interfaces
│ └── langsmith.ts # Langsmith-specific types and guards
├── chains/ # LLM evaluation chains
│ ├── test-case-generator.ts # Dynamic test case generation
│ └── workflow-evaluator.ts # LLM-based workflow evaluation
├── utils/ # Utility functions
│ ├── evaluation-calculator.ts # Metrics calculation
│ ├── evaluation-helpers.ts # Common helper functions
│   └── evaluation-reporter.ts # Report generation
└── index.ts                 # Main entry point
```
## Implementation Details
### Core Components
#### 1. Test Runner (`core/test-runner.ts`)
The core test runner handles individual test execution:
- Generates workflows using the WorkflowBuilderAgent
- Validates generated workflows using type guards
- Evaluates workflows against test criteria
- Returns structured test results with error handling
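The structured results above can be sketched as follows; the real interfaces live in `types/test-result.ts`, and the field names here are assumptions for illustration only:

```typescript
// Illustrative sketch of the structured result the test runner returns.
// These field names are assumptions, not the actual interfaces.
type TestStatus = 'passed' | 'failed' | 'error';

interface TestResult {
  testCaseId: string;
  status: TestStatus;
  score?: number; // overall evaluation score when the run succeeded
  error?: string; // captured error message when the run failed
  durationMs: number;
}

// Aggregate results the way a report generator might.
function summarize(results: TestResult[]): { passed: number; failed: number } {
  return {
    passed: results.filter((r) => r.status === 'passed').length,
    failed: results.filter((r) => r.status !== 'passed').length,
  };
}
```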
#### 2. Environment Setup (`core/environment.ts`)
Centralizes environment configuration:
- LLM initialization with API key validation
- Langsmith client setup
- Node types loading
- Concurrency and test generation settings
#### 3. Langsmith Integration
The Langsmith integration provides two key components:
**Evaluator (`langsmith/evaluator.ts`):**
- Converts Langsmith Run objects to evaluation inputs
- Validates all data using type guards before processing
- Safely extracts usage metadata without type coercion
- Returns structured evaluation results
**Runner (`langsmith/runner.ts`):**
- Creates workflow generation functions compatible with Langsmith
- Validates message content before processing
- Extracts usage metrics safely from message metadata
- Handles dataset verification and error reporting
#### 4. CLI Evaluation
The CLI evaluation provides local testing capabilities:
**Runner (`cli/runner.ts`):**
- Orchestrates parallel test execution with concurrency control
- Manages test case generation when enabled
- Generates detailed reports and saves results
**Display (`cli/display.ts`):**
- Progress bar management for real-time feedback
- Console output formatting
- Error display and reporting
### Evaluation Metrics
The system evaluates workflows across five categories:
1. **Functionality** (30% weight)
   - Does the workflow achieve the intended goal?
   - Are the right nodes selected?
2. **Connections** (25% weight)
   - Are nodes properly connected?
   - Is the data flow logical?
3. **Expressions** (20% weight)
   - Are n8n expressions syntactically correct?
   - Do they reference valid data paths?
4. **Node Configuration** (15% weight)
   - Are node parameters properly set?
   - Are required fields populated?
5. **Structural Similarity** (10% weight, optional)
   - How closely does the structure match a reference workflow?
   - Only evaluated when a reference workflow is provided
### Violation Severity Levels
Violations are categorized by severity:
- **Critical** (-40 to -50 points): Workflow-breaking issues
- **Major** (-15 to -25 points): Significant problems affecting functionality
- **Minor** (-5 to -15 points): Non-critical issues or inefficiencies
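A minimal sketch of how the category weights above might combine penalized scores into an overall score; the actual arithmetic lives in `utils/evaluation-calculator.ts` and may differ:

```typescript
// Hedged sketch: combine per-category scores using the weights listed above.
// Names and normalization are assumptions, not the real evaluator API.
interface CategoryScore {
  score: number; // normalized 0..1 after violation penalties
  weight: number; // percentage weight from the list above
}

function overallScore(categories: CategoryScore[]): number {
  const totalWeight = categories.reduce((sum, c) => sum + c.weight, 0);
  const weighted = categories.reduce((sum, c) => sum + c.score * c.weight, 0);
  return weighted / totalWeight;
}

// With structural similarity omitted, the remaining weights renormalize:
overallScore([
  { score: 0.9, weight: 30 }, // functionality
  { score: 1.0, weight: 25 }, // connections
  { score: 0.8, weight: 20 }, // expressions
  { score: 1.0, weight: 15 }, // node configuration
]); // → 83 / 90 ≈ 0.922
```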
## Running Evaluations
### CLI Evaluation
```bash
# Run with default settings
pnpm eval

# With additional generated test cases
GENERATE_TEST_CASES=true pnpm eval

# With custom concurrency
EVALUATION_CONCURRENCY=10 pnpm eval
```
### Langsmith Evaluation
```bash
# Set required environment variables
export LANGSMITH_API_KEY=your_api_key

# Optionally specify dataset
export LANGSMITH_DATASET_NAME=your_dataset_name

# Run evaluation
pnpm eval:langsmith
```
## Configuration
### Required Files
#### nodes.json
**IMPORTANT**: The evaluation framework requires a `nodes.json` file in the evaluations root directory (`evaluations/nodes.json`).
This file contains all n8n node type definitions and is used by the AI Workflow Builder agent to:
- Know what nodes are available in n8n
- Understand node parameters and their schemas
- Generate valid workflows with proper node configurations
**Why is this required?**
The AI Workflow Builder agent needs access to node definitions to generate workflows. In a normal n8n runtime, these definitions are loaded automatically. However, since the evaluation framework instantiates the agent without a running n8n instance, we must provide the node definitions manually via `nodes.json`.
**How to generate nodes.json:**
1. Run your n8n instance
2. Download the node definitions from the locally running instance (`http://localhost:5678/types/nodes.json`)
3. Save the node definitions to `evaluations/nodes.json`
The evaluation will fail with a clear error message if `nodes.json` is missing.
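Assuming an instance running locally on the default port, steps 2 and 3 can be done in one command:

```shell
# Download the node type definitions from a locally running n8n instance
# (assumes the default port 5678) and save them where the evaluator expects them.
curl -o evaluations/nodes.json http://localhost:5678/types/nodes.json
```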
### Environment Variables
- `N8N_AI_ANTHROPIC_KEY` - Required for LLM access
- `LANGSMITH_API_KEY` - Required for Langsmith evaluation
- `USE_LANGSMITH_EVAL` - Set to "true" to use Langsmith mode
- `LANGSMITH_DATASET_NAME` - Override default dataset name
- `EVALUATION_CONCURRENCY` - Number of parallel test executions (default: 5)
- `GENERATE_TEST_CASES` - Set to "true" to generate additional test cases
- `LLM_MODEL` - Model identifier for metadata tracking
## Output
### CLI Evaluation Output
- **Console Display**: Real-time progress, test results, and summary statistics
- **Markdown Report**: `results/evaluation-report-[timestamp].md`
- **JSON Results**: `results/evaluation-results-[timestamp].json`
### Langsmith Evaluation Output
- Results are stored in Langsmith dashboard
- Experiment name format: `workflow-builder-evaluation-[date]`
- Includes detailed metrics for each evaluation category
## Adding New Test Cases
Test cases are defined in `chains/test-case-generator.ts`. Each test case requires:
- `id`: Unique identifier
- `name`: Descriptive name
- `prompt`: Natural language description of the workflow to generate
- `referenceWorkflow` (optional): Expected workflow structure for comparison
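The fields above can be sketched as a test case like the following; the concrete `TestCase` type and this example are assumptions for illustration:

```typescript
// Illustrative test case built from the fields listed above.
// The actual type lives in chains/test-case-generator.ts.
interface TestCase {
  id: string;
  name: string;
  prompt: string;
  referenceWorkflow?: object; // expected workflow structure for comparison
}

const httpToSlack: TestCase = {
  id: 'http-to-slack',
  name: 'Webhook to Slack notification',
  prompt:
    'When a webhook receives a POST request, send its JSON payload to a Slack channel.',
};
```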
## Extending the Framework
To add new evaluation metrics:
1. Update the `EvaluationResult` schema in `types/evaluation.ts`
2. Modify the evaluation logic in `chains/workflow-evaluator.ts`
3. Update the evaluator in `langsmith/evaluator.ts` to include new metrics
4. Adjust weight calculations in `utils/evaluation-calculator.ts`
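As a hypothetical sketch of step 1, a new "security" category could be added to the result type; the field names below are illustrative, not the actual schema in `types/evaluation.ts`:

```typescript
// Hypothetical extension of the evaluation result with a new category.
// All names here are assumptions for the example.
interface CategoryResult {
  score: number; // normalized 0..1
  violations: string[]; // violation descriptions
}

interface EvaluationResult {
  functionality: CategoryResult;
  connections: CategoryResult;
  expressions: CategoryResult;
  nodeConfiguration: CategoryResult;
  security: CategoryResult; // the newly added category from step 1
}

const clean: CategoryResult = { score: 1, violations: [] };

const example: EvaluationResult = {
  functionality: clean,
  connections: clean,
  expressions: clean,
  nodeConfiguration: clean,
  security: { score: 0.8, violations: ['Webhook uses no authentication'] },
};
```

Steps 2 to 4 would then teach the evaluator chain to populate the new category and fold its weight into the overall score.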


@@ -0,0 +1,27 @@
import { runCliEvaluation } from './cli/runner.js';
import { runLangsmithEvaluation } from './langsmith/runner.js';
// Re-export for external use if needed
export { runCliEvaluation } from './cli/runner.js';
export { runLangsmithEvaluation } from './langsmith/runner.js';
export { runSingleTest } from './core/test-runner.js';
export { setupTestEnvironment, createAgent } from './core/environment.js';
/**
* Main entry point for evaluation
* Determines which evaluation mode to run based on environment variables
*/
async function main(): Promise<void> {
const useLangsmith = process.env.USE_LANGSMITH_EVAL === 'true';
if (useLangsmith) {
await runLangsmithEvaluation();
} else {
await runCliEvaluation();
}
}
// Run if called directly
if (require.main === module) {
main().catch(console.error);
}


@@ -0,0 +1,106 @@
import { readFileSync, existsSync } from 'fs';
import { jsonParse, type INodeTypeDescription } from 'n8n-workflow';
import { join } from 'path';
interface NodeWithVersion extends INodeTypeDescription {
version: number | number[];
defaultVersion?: number;
}
export function loadNodesFromFile(): INodeTypeDescription[] {
console.log('Loading nodes from nodes.json...');
const nodesPath = join(__dirname, 'nodes.json');
// Check if nodes.json exists
if (!existsSync(nodesPath)) {
const errorMessage = `
ERROR: nodes.json file not found at ${nodesPath}
The nodes.json file is required for evaluations to work properly.
Please ensure nodes.json is present in the evaluations root directory.
To generate nodes.json:
1. Run the n8n instance
2. Export the node definitions to evaluations/nodes.json
3. This file contains all available n8n node type definitions needed for validation
Without nodes.json, the evaluator cannot validate node types and parameters.
`;
console.error(errorMessage);
throw new Error('nodes.json file not found. See console output for details.');
}
const nodesData = readFileSync(nodesPath, 'utf-8');
const allNodes = jsonParse<NodeWithVersion[]>(nodesData);
console.log(`Total nodes loaded: ${allNodes.length}`);
// Group nodes by name
const nodesByName = new Map<string, NodeWithVersion[]>();
for (const node of allNodes) {
const existing = nodesByName.get(node.name) ?? [];
existing.push(node);
nodesByName.set(node.name, existing);
}
console.log(`Unique node types: ${nodesByName.size}`);
// Extract latest version for each node
const latestNodes: INodeTypeDescription[] = [];
let multiVersionCount = 0;
for (const [_nodeName, versions] of nodesByName.entries()) {
if (versions.length > 1) {
multiVersionCount++;
// Find the node with the default version
let selectedNode: NodeWithVersion | undefined;
for (const node of versions) {
// Select the node that matches the default version
if (node.defaultVersion !== undefined) {
if (Array.isArray(node.version)) {
// For array versions, check if it includes the default version
if (node.version.includes(node.defaultVersion)) {
selectedNode = node;
}
} else if (node.version === node.defaultVersion) {
selectedNode = node;
}
}
}
// If we found a matching node, use it; otherwise use the first one
if (selectedNode) {
latestNodes.push(selectedNode);
} else {
latestNodes.push(versions[0]);
}
} else {
// Single version node
latestNodes.push(versions[0]);
}
}
console.log(`\nNodes with multiple versions: ${multiVersionCount}`);
console.log(`Final node count: ${latestNodes.length}`);
// Filter out hidden nodes
const visibleNodes = latestNodes.filter((node) => !node.hidden);
console.log(`Visible nodes (after filtering hidden): ${visibleNodes.length}\n`);
return visibleNodes;
}
// Helper function to get specific node version for testing
export function getNodeVersion(nodes: INodeTypeDescription[], nodeName: string): string {
const node = nodes.find((n) => n.name === nodeName);
if (!node) return 'not found';
const version = (node as NodeWithVersion).version;
if (Array.isArray(version)) {
return `[${version.join(', ')}]`;
}
return version?.toString() || 'unknown';
}


@@ -0,0 +1,184 @@
import type { BaseChatModel } from '@langchain/core/language_models/chat_models';
import { LangChainTracer } from '@langchain/core/tracers/tracer_langchain';
import { MemorySaver } from '@langchain/langgraph';
import { Logger } from '@n8n/backend-common';
import { Service } from '@n8n/di';
import { AiAssistantClient } from '@n8n_io/ai-assistant-sdk';
import { Client } from 'langsmith';
import { INodeTypes } from 'n8n-workflow';
import type { IUser, INodeTypeDescription } from 'n8n-workflow';
import { LLMServiceError } from './errors';
import { anthropicClaudeSonnet4, gpt41mini } from './llm-config';
import { WorkflowBuilderAgent, type ChatPayload } from './workflow-builder-agent';
@Service()
export class AiWorkflowBuilderService {
private parsedNodeTypes: INodeTypeDescription[] = [];
private llmSimpleTask: BaseChatModel | undefined;
private llmComplexTask: BaseChatModel | undefined;
private tracingClient: Client | undefined;
private checkpointer = new MemorySaver();
private agent: WorkflowBuilderAgent | undefined;
constructor(
private readonly nodeTypes: INodeTypes,
private readonly client?: AiAssistantClient,
private readonly logger?: Logger,
private readonly instanceUrl?: string,
) {
this.parsedNodeTypes = this.getNodeTypes();
}
private async setupModels(user?: IUser) {
try {
if (this.llmSimpleTask && this.llmComplexTask) {
return;
}
// If client is provided, use it for API proxy
if (this.client && user) {
const authHeaders = await this.client.generateApiProxyCredentials(user);
// Extract baseUrl from client configuration
const baseUrl = this.client.getApiProxyBaseUrl();
this.llmSimpleTask = await gpt41mini({
baseUrl: baseUrl + '/openai',
// When using api-proxy the key will be populated automatically, we just need to pass a placeholder
apiKey: '-',
headers: {
Authorization: authHeaders.apiKey,
},
});
this.llmComplexTask = await anthropicClaudeSonnet4({
baseUrl: baseUrl + '/anthropic',
apiKey: '-',
headers: {
Authorization: authHeaders.apiKey,
'anthropic-beta': 'prompt-caching-2024-07-31',
},
});
this.tracingClient = new Client({
apiKey: '-',
apiUrl: baseUrl + '/langsmith',
autoBatchTracing: false,
traceBatchConcurrency: 1,
fetchOptions: {
headers: {
Authorization: authHeaders.apiKey,
},
},
});
return;
}
// If base URL is not set, use environment variables
this.llmSimpleTask = await gpt41mini({
apiKey: process.env.N8N_AI_OPENAI_API_KEY ?? '',
});
this.llmComplexTask = await anthropicClaudeSonnet4({
apiKey: process.env.N8N_AI_ANTHROPIC_KEY ?? '',
headers: {
'anthropic-beta': 'prompt-caching-2024-07-31',
},
});
} catch (error) {
const llmError = new LLMServiceError('Failed to connect to LLM Provider', {
cause: error,
tags: {
hasClient: !!this.client,
hasUser: !!user,
},
});
throw llmError;
}
}
private getNodeTypes(): INodeTypeDescription[] {
// These types are ignored because they tend to cause issues when generating workflows
const ignoredTypes = [
'@n8n/n8n-nodes-langchain.toolVectorStore',
'@n8n/n8n-nodes-langchain.documentGithubLoader',
'@n8n/n8n-nodes-langchain.code',
];
const nodeTypesKeys = Object.keys(this.nodeTypes.getKnownTypes());
const nodeTypes = nodeTypesKeys
.filter((nodeType) => !ignoredTypes.includes(nodeType))
.map((nodeName) => {
try {
return { ...this.nodeTypes.getByNameAndVersion(nodeName).description, name: nodeName };
} catch (error) {
this.logger?.error('Error getting node type', {
nodeName,
error: error instanceof Error ? error.message : 'Unknown error',
});
return undefined;
}
})
.filter(
(nodeType): nodeType is INodeTypeDescription =>
nodeType !== undefined && nodeType.hidden !== true,
)
.map((nodeType, _index, nodeTypes: INodeTypeDescription[]) => {
// If the node type is a tool, we need to find the corresponding non-tool node type
// and merge the two node types to get the full node type description.
const isTool = nodeType.name.endsWith('Tool');
if (!isTool) return nodeType;
const nonToolNode = nodeTypes.find((nt) => nt.name === nodeType.name.replace('Tool', ''));
if (!nonToolNode) return nodeType;
return {
...nonToolNode,
...nodeType,
};
});
return nodeTypes;
}
private async getAgent(user?: IUser) {
if (!this.llmComplexTask || !this.llmSimpleTask) {
await this.setupModels(user);
}
if (!this.llmComplexTask || !this.llmSimpleTask) {
throw new LLMServiceError('Failed to initialize LLM models');
}
this.agent ??= new WorkflowBuilderAgent({
parsedNodeTypes: this.parsedNodeTypes,
// We use Sonnet both for simple and complex tasks
llmSimpleTask: this.llmComplexTask,
llmComplexTask: this.llmComplexTask,
logger: this.logger,
checkpointer: this.checkpointer,
tracer: this.tracingClient
? new LangChainTracer({ client: this.tracingClient, projectName: 'n8n-workflow-builder' })
: undefined,
instanceUrl: this.instanceUrl,
});
return this.agent;
}
async *chat(payload: ChatPayload, user?: IUser, abortSignal?: AbortSignal) {
const agent = await this.getAgent(user);
for await (const output of agent.chat(payload, user?.id?.toString(), abortSignal)) {
yield output;
}
}
async getSessions(workflowId: string | undefined, user?: IUser) {
const agent = await this.getAgent(user);
return await agent.getSessions(workflowId, user?.id?.toString());
}
}


@@ -0,0 +1,3 @@
export const MAX_AI_BUILDER_PROMPT_LENGTH = 1000; // characters
export const DEFAULT_AUTO_COMPACT_THRESHOLD_TOKENS = 20_000; // Tokens threshold for auto-compacting the conversation


@@ -0,0 +1,3 @@
export * from './ai-workflow-builder-agent.service';
export * from './types';
export * from './workflow-state';


@@ -0,0 +1,60 @@
// Different LLMConfig type for this file - specific to LLM providers
interface LLMProviderConfig {
apiKey: string;
baseUrl?: string;
headers?: Record<string, string>;
}
export const o4mini = async (config: LLMProviderConfig) => {
const { ChatOpenAI } = await import('@langchain/openai');
return new ChatOpenAI({
model: 'o4-mini-2025-04-16',
apiKey: config.apiKey,
configuration: {
baseURL: config.baseUrl,
defaultHeaders: config.headers,
},
});
};
export const gpt41mini = async (config: LLMProviderConfig) => {
const { ChatOpenAI } = await import('@langchain/openai');
return new ChatOpenAI({
model: 'gpt-4.1-mini-2025-04-14',
apiKey: config.apiKey,
temperature: 0,
maxTokens: -1,
configuration: {
baseURL: config.baseUrl,
defaultHeaders: config.headers,
},
});
};
export const gpt41 = async (config: LLMProviderConfig) => {
const { ChatOpenAI } = await import('@langchain/openai');
return new ChatOpenAI({
model: 'gpt-4.1-2025-04-14',
apiKey: config.apiKey,
temperature: 0.3,
maxTokens: -1,
configuration: {
baseURL: config.baseUrl,
defaultHeaders: config.headers,
},
});
};
export const anthropicClaudeSonnet4 = async (config: LLMProviderConfig) => {
const { ChatAnthropic } = await import('@langchain/anthropic');
return new ChatAnthropic({
model: 'claude-sonnet-4-20250514',
apiKey: config.apiKey,
temperature: 0,
maxTokens: 16000,
anthropicApiUrl: config.baseUrl,
clientOptions: {
defaultHeaders: config.headers,
},
});
};


@@ -0,0 +1,500 @@
import type { BaseChatModel } from '@langchain/core/language_models/chat_models';
import type { ToolMessage } from '@langchain/core/messages';
import { AIMessage, HumanMessage, RemoveMessage } from '@langchain/core/messages';
import type { RunnableConfig } from '@langchain/core/runnables';
import type { LangChainTracer } from '@langchain/core/tracers/tracer_langchain';
import { StateGraph, MemorySaver, END, GraphRecursionError } from '@langchain/langgraph';
import type { Logger } from '@n8n/backend-common';
import {
ApplicationError,
type INodeTypeDescription,
type IRunExecutionData,
type IWorkflowBase,
type NodeExecutionSchema,
} from 'n8n-workflow';
import { workflowNameChain } from '@/chains/workflow-name';
import { DEFAULT_AUTO_COMPACT_THRESHOLD_TOKENS, MAX_AI_BUILDER_PROMPT_LENGTH } from '@/constants';
import { conversationCompactChain } from './chains/conversation-compact';
import { LLMServiceError, ValidationError } from './errors';
import { createAddNodeTool } from './tools/add-node.tool';
import { createConnectNodesTool } from './tools/connect-nodes.tool';
import { createNodeDetailsTool } from './tools/node-details.tool';
import { createNodeSearchTool } from './tools/node-search.tool';
import { mainAgentPrompt } from './tools/prompts/main-agent.prompt';
import { createRemoveNodeTool } from './tools/remove-node.tool';
import { createUpdateNodeParametersTool } from './tools/update-node-parameters.tool';
import type { SimpleWorkflow } from './types/workflow';
import { processOperations } from './utils/operations-processor';
import { createStreamProcessor, formatMessages } from './utils/stream-processor';
import { extractLastTokenUsage } from './utils/token-usage';
import { executeToolsInParallel } from './utils/tool-executor';
import { WorkflowState } from './workflow-state';
export interface WorkflowBuilderAgentConfig {
parsedNodeTypes: INodeTypeDescription[];
llmSimpleTask: BaseChatModel;
llmComplexTask: BaseChatModel;
logger?: Logger;
checkpointer?: MemorySaver;
tracer?: LangChainTracer;
autoCompactThresholdTokens?: number;
instanceUrl?: string;
}
export interface ChatPayload {
message: string;
workflowContext?: {
executionSchema?: NodeExecutionSchema[];
currentWorkflow?: Partial<IWorkflowBase>;
executionData?: IRunExecutionData['resultData'];
};
}
export class WorkflowBuilderAgent {
private checkpointer: MemorySaver;
private parsedNodeTypes: INodeTypeDescription[];
private llmSimpleTask: BaseChatModel;
private llmComplexTask: BaseChatModel;
private logger?: Logger;
private tracer?: LangChainTracer;
private autoCompactThresholdTokens: number;
private instanceUrl?: string;
constructor(config: WorkflowBuilderAgentConfig) {
this.parsedNodeTypes = config.parsedNodeTypes;
this.llmSimpleTask = config.llmSimpleTask;
this.llmComplexTask = config.llmComplexTask;
this.logger = config.logger;
this.checkpointer = config.checkpointer ?? new MemorySaver();
this.tracer = config.tracer;
this.autoCompactThresholdTokens =
config.autoCompactThresholdTokens ?? DEFAULT_AUTO_COMPACT_THRESHOLD_TOKENS;
this.instanceUrl = config.instanceUrl;
}
private createWorkflow() {
const tools = [
createNodeSearchTool(this.parsedNodeTypes),
createNodeDetailsTool(this.parsedNodeTypes),
createAddNodeTool(this.parsedNodeTypes),
createConnectNodesTool(this.parsedNodeTypes, this.logger),
createRemoveNodeTool(this.logger),
createUpdateNodeParametersTool(
this.parsedNodeTypes,
this.llmComplexTask,
this.logger,
this.instanceUrl,
),
];
// Create a map for quick tool lookup
const toolMap = new Map(tools.map((tool) => [tool.name, tool]));
const callModel = async (state: typeof WorkflowState.State) => {
if (!this.llmSimpleTask) {
throw new LLMServiceError('LLM not setup');
}
if (typeof this.llmSimpleTask.bindTools !== 'function') {
throw new LLMServiceError('LLM does not support tools', {
llmModel: this.llmSimpleTask._llmType(),
});
}
const prompt = await mainAgentPrompt.invoke({
...state,
executionData: state.workflowContext?.executionData ?? {},
executionSchema: state.workflowContext?.executionSchema ?? [],
instanceUrl: this.instanceUrl,
});
const response = await this.llmSimpleTask.bindTools(tools).invoke(prompt);
return { messages: [response] };
};
const shouldAutoCompact = ({ messages }: typeof WorkflowState.State) => {
const tokenUsage = extractLastTokenUsage(messages);
if (!tokenUsage) {
this.logger?.debug('No token usage metadata found');
return false;
}
const tokensUsed = tokenUsage.input_tokens + tokenUsage.output_tokens;
this.logger?.debug('Token usage', {
inputTokens: tokenUsage.input_tokens,
outputTokens: tokenUsage.output_tokens,
totalTokens: tokensUsed,
});
return tokensUsed > this.autoCompactThresholdTokens;
};
const shouldModifyState = (state: typeof WorkflowState.State) => {
const { messages, workflowContext } = state;
const lastHumanMessage = messages.findLast((m) => m instanceof HumanMessage)!; // There always should be at least one human message in the array
if (lastHumanMessage.content === '/compact') {
return 'compact_messages';
}
if (lastHumanMessage.content === '/clear') {
return 'delete_messages';
}
// If the workflow is empty (no nodes),
// we consider it initial generation request and auto-generate a name for the workflow.
if (workflowContext?.currentWorkflow?.nodes?.length === 0 && messages.length === 1) {
return 'create_workflow_name';
}
if (shouldAutoCompact(state)) {
return 'auto_compact_messages';
}
return 'agent';
};
const shouldContinue = ({ messages }: typeof WorkflowState.State) => {
const lastMessage: AIMessage = messages[messages.length - 1];
if (lastMessage.tool_calls?.length) {
return 'tools';
}
return END;
};
const customToolExecutor = async (state: typeof WorkflowState.State) => {
return await executeToolsInParallel({ state, toolMap });
};
function deleteMessages(state: typeof WorkflowState.State) {
const messages = state.messages;
const stateUpdate: Partial<typeof WorkflowState.State> = {
workflowOperations: null,
workflowContext: {},
messages: messages.map((m) => new RemoveMessage({ id: m.id! })) ?? [],
workflowJSON: {
nodes: [],
connections: {},
name: '',
},
};
return stateUpdate;
}
/**
* Compacts the conversation history by summarizing it
* and removing original messages.
* Might be triggered manually by the user with `/compact` message, or run automatically
* when the conversation history exceeds a certain token limit.
*/
const compactSession = async (state: typeof WorkflowState.State) => {
if (!this.llmSimpleTask) {
throw new LLMServiceError('LLM not setup');
}
const { messages, previousSummary } = state;
const lastHumanMessage = messages[messages.length - 1] satisfies HumanMessage;
const isAutoCompact = lastHumanMessage.content !== '/compact';
this.logger?.debug('Compacting conversation history', {
isAutoCompact,
});
const compactedMessages = await conversationCompactChain(
this.llmSimpleTask,
messages,
previousSummary,
);
// The summarized conversation history will become a part of system prompt
// and will be used in the next LLM call.
// We will remove all messages and replace them with a mock HumanMessage and AIMessage
// to indicate that the conversation history has been compacted.
// If this is an auto-compact, we will also keep the last human message, as it will continue executing the workflow.
return {
previousSummary: compactedMessages.summaryPlain,
messages: [
...messages.map((m) => new RemoveMessage({ id: m.id! })),
new HumanMessage('Please compress the conversation history'),
new AIMessage('Successfully compacted conversation history'),
...(isAutoCompact ? [new HumanMessage({ content: lastHumanMessage.content })] : []),
],
};
};
/**
* Creates a workflow name based on the initial user message.
*/
const createWorkflowName = async (state: typeof WorkflowState.State) => {
if (!this.llmSimpleTask) {
throw new LLMServiceError('LLM not setup');
}
const { workflowJSON, messages } = state;
if (messages.length === 1 && messages[0] instanceof HumanMessage) {
const initialMessage = messages[0] satisfies HumanMessage;
if (typeof initialMessage.content !== 'string') {
this.logger?.debug(
'Initial message content is not a string, skipping workflow name generation',
);
return {};
}
this.logger?.debug('Generating workflow name');
const { name } = await workflowNameChain(this.llmSimpleTask, initialMessage.content);
return {
workflowJSON: {
...workflowJSON,
name,
},
};
}
return {};
};
const workflow = new StateGraph(WorkflowState)
.addNode('agent', callModel)
.addNode('tools', customToolExecutor)
.addNode('process_operations', processOperations)
.addNode('delete_messages', deleteMessages)
.addNode('compact_messages', compactSession)
.addNode('auto_compact_messages', compactSession)
.addNode('create_workflow_name', createWorkflowName)
.addConditionalEdges('__start__', shouldModifyState)
.addEdge('tools', 'process_operations')
.addEdge('process_operations', 'agent')
.addEdge('auto_compact_messages', 'agent')
.addEdge('create_workflow_name', 'agent')
.addEdge('delete_messages', END)
.addEdge('compact_messages', END)
.addConditionalEdges('agent', shouldContinue);
return workflow;
}
async getState(workflowId: string, userId?: string) {
const workflow = this.createWorkflow();
const agent = workflow.compile({ checkpointer: this.checkpointer });
return await agent.getState({
configurable: { thread_id: `workflow-${workflowId}-user-${userId ?? new Date().getTime()}` },
});
}
static generateThreadId(workflowId?: string, userId?: string) {
return workflowId
? `workflow-${workflowId}-user-${userId ?? new Date().getTime()}`
: crypto.randomUUID();
}
private getDefaultWorkflowJSON(payload: ChatPayload): SimpleWorkflow {
return (
(payload.workflowContext?.currentWorkflow as SimpleWorkflow) ?? {
nodes: [],
connections: {},
}
);
}
async *chat(payload: ChatPayload, userId?: string, abortSignal?: AbortSignal) {
this.validateMessageLength(payload.message);
const { agent, threadConfig, streamConfig } = this.setupAgentAndConfigs(
payload,
userId,
abortSignal,
);
try {
const stream = await this.createAgentStream(payload, streamConfig, agent);
yield* this.processAgentStream(stream, agent, threadConfig);
} catch (error: unknown) {
this.handleStreamError(error);
}
}
private validateMessageLength(message: string): void {
if (message.length > MAX_AI_BUILDER_PROMPT_LENGTH) {
this.logger?.warn('Message exceeds maximum length', {
messageLength: message.length,
maxLength: MAX_AI_BUILDER_PROMPT_LENGTH,
});
throw new ValidationError(
`Message exceeds maximum length of ${MAX_AI_BUILDER_PROMPT_LENGTH} characters`,
);
}
}
private setupAgentAndConfigs(payload: ChatPayload, userId?: string, abortSignal?: AbortSignal) {
const agent = this.createWorkflow().compile({ checkpointer: this.checkpointer });
const workflowId = payload.workflowContext?.currentWorkflow?.id;
// Generate thread ID from workflowId and userId
// This ensures one session per workflow per user
const threadId = WorkflowBuilderAgent.generateThreadId(workflowId, userId);
const threadConfig: RunnableConfig = {
configurable: {
thread_id: threadId,
},
};
const streamConfig = {
...threadConfig,
streamMode: ['updates', 'custom'],
recursionLimit: 50,
signal: abortSignal,
callbacks: this.tracer ? [this.tracer] : undefined,
};
return { agent, threadConfig, streamConfig };
}
private async createAgentStream(
payload: ChatPayload,
streamConfig: RunnableConfig,
agent: ReturnType<ReturnType<typeof this.createWorkflow>['compile']>,
) {
return await agent.stream(
{
messages: [new HumanMessage({ content: payload.message })],
workflowJSON: this.getDefaultWorkflowJSON(payload),
workflowOperations: [],
workflowContext: payload.workflowContext,
},
streamConfig,
);
}
private handleStreamError(error: unknown): never {
const invalidRequestErrorMessage = this.getInvalidRequestError(error);
if (invalidRequestErrorMessage) {
throw new ValidationError(invalidRequestErrorMessage);
}
throw error;
}
private async *processAgentStream(
stream: AsyncGenerator<[string, unknown], void, unknown>,
agent: ReturnType<ReturnType<typeof this.createWorkflow>['compile']>,
threadConfig: RunnableConfig,
) {
try {
const streamProcessor = createStreamProcessor(stream);
for await (const output of streamProcessor) {
yield output;
}
} catch (error) {
await this.handleAgentStreamError(error, agent, threadConfig);
}
}
private async handleAgentStreamError(
error: unknown,
agent: ReturnType<ReturnType<typeof this.createWorkflow>['compile']>,
threadConfig: RunnableConfig,
): Promise<void> {
if (
error &&
typeof error === 'object' &&
'message' in error &&
typeof error.message === 'string' &&
// This is naive, but it's all we get from LangGraph AbortError
['Abort', 'Aborted'].includes(error.message)
) {
// eslint-disable-next-line @typescript-eslint/no-unsafe-member-access
const messages = (await agent.getState(threadConfig)).values.messages as Array<
AIMessage | HumanMessage | ToolMessage
>;
// Handle abort errors gracefully
const abortedAiMessage = new AIMessage({
content: '[Task aborted]',
id: crypto.randomUUID(),
});
// TODO: Should we clear tool calls that are in progress?
await agent.updateState(threadConfig, { messages: [...messages, abortedAiMessage] });
return;
}
// If it's not an abort error, check for GraphRecursionError
if (error instanceof GraphRecursionError) {
throw new ApplicationError(
'Workflow generation stopped: The AI reached the maximum number of steps while building your workflow. This usually means the workflow design became too complex or got stuck in a loop while trying to create the nodes and connections.',
);
}
// Re-throw any other errors
throw error;
}
private getInvalidRequestError(error: unknown): string | undefined {
if (
error instanceof Error &&
'error' in error &&
typeof error.error === 'object' &&
error.error
) {
const innerError = error.error;
if ('error' in innerError && typeof innerError.error === 'object' && innerError.error) {
const errorDetails = innerError.error;
if (
'type' in errorDetails &&
errorDetails.type === 'invalid_request_error' &&
'message' in errorDetails &&
typeof errorDetails.message === 'string'
) {
return errorDetails.message;
}
}
}
return undefined;
}
async getSessions(workflowId: string | undefined, userId?: string) {
// For now, we'll return the current session if we have a workflowId
// MemorySaver doesn't expose a way to list all threads, so we'll need to
// track this differently if we want to list all sessions
const sessions = [];
if (workflowId) {
const threadId = WorkflowBuilderAgent.generateThreadId(workflowId, userId);
const threadConfig: RunnableConfig = {
configurable: {
thread_id: threadId,
},
};
try {
// Try to get the checkpoint for this thread
const checkpoint = await this.checkpointer.getTuple(threadConfig);
if (checkpoint?.checkpoint) {
const messages =
(checkpoint.checkpoint.channel_values?.messages as Array<
AIMessage | HumanMessage | ToolMessage
>) ?? [];
sessions.push({
sessionId: threadId,
messages: formatMessages(messages),
lastUpdated: checkpoint.checkpoint.ts,
});
}
} catch (error) {
// Thread doesn't exist yet
this.logger?.debug('No session found for workflow:', { workflowId, error });
}
}
return { sessions };
}
}



@@ -0,0 +1,89 @@
import type { BaseMessage } from '@langchain/core/messages';
import { HumanMessage } from '@langchain/core/messages';
import { Annotation, messagesStateReducer } from '@langchain/langgraph';
import type { SimpleWorkflow, WorkflowOperation } from './types/workflow';
import type { ChatPayload } from './workflow-builder-agent';
/**
* Reducer for collecting workflow operations from parallel tool executions.
* This reducer intelligently merges operations, avoiding duplicates and handling special cases.
*/
function operationsReducer(
current: WorkflowOperation[] | null,
update: WorkflowOperation[] | null | undefined,
): WorkflowOperation[] {
if (update === null) {
return [];
}
if (!update || update.length === 0) {
return current ?? [];
}
// For clear operations, we can reset everything
if (update.some((op) => op.type === 'clear')) {
return update.filter((op) => op.type === 'clear').slice(-1); // Keep only the last clear
}
// Otherwise, append new operations
return [...(current ?? []), ...update];
}
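The merge semantics can be exercised with a self-contained copy of the reducer, with operation objects simplified to `{ type }`; this is a sketch for illustration, not the exported implementation:

```typescript
type Op = { type: string };

function operationsReducer(current: Op[] | null, update: Op[] | null | undefined): Op[] {
	if (update === null) return []; // explicit null resets accumulated operations
	if (!update || update.length === 0) return current ?? [];
	if (update.some((op) => op.type === 'clear')) {
		// A clear supersedes everything; keep only the last one
		return update.filter((op) => op.type === 'clear').slice(-1);
	}
	return [...(current ?? []), ...update]; // otherwise append
}

const appended = operationsReducer([{ type: 'addNodes' }], [{ type: 'addConnections' }]);
console.log(appended.map((op) => op.type)); // → [ 'addNodes', 'addConnections' ]

const cleared = operationsReducer([{ type: 'addNodes' }], [{ type: 'clear' }]);
console.log(cleared); // → [ { type: 'clear' } ]
```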
// Creates a reducer that trims the message history to keep only the last `maxUserMessages` HumanMessage instances
export function createTrimMessagesReducer(maxUserMessages: number) {
return (current: BaseMessage[]): BaseMessage[] => {
// Count HumanMessage instances and remember their indices
const humanMessageIndices: number[] = [];
current.forEach((msg, index) => {
if (msg instanceof HumanMessage) {
humanMessageIndices.push(index);
}
});
// If we have fewer than or equal to maxUserMessages, return as is
if (humanMessageIndices.length <= maxUserMessages) {
return current;
}
// Find the index of the first HumanMessage that we want to keep
const startHumanMessageIndex =
humanMessageIndices[humanMessageIndices.length - maxUserMessages];
// Slice from that HumanMessage onwards
return current.slice(startHumanMessageIndex);
};
}
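The trimming behavior can be sketched with stand-in classes in place of LangChain's message types (assumption: only the `instanceof HumanMessage` check matters to the algorithm):

```typescript
// Stand-ins for LangChain's message classes
class Msg {
	constructor(public text: string) {}
}
class Human extends Msg {}
class Ai extends Msg {}

function createTrimReducer(maxUserMessages: number) {
	return (current: Msg[]): Msg[] => {
		const humanIndices = current
			.map((msg, i) => (msg instanceof Human ? i : -1))
			.filter((i) => i >= 0);
		if (humanIndices.length <= maxUserMessages) return current;
		// Keep everything from the Nth-to-last human message onwards
		return current.slice(humanIndices[humanIndices.length - maxUserMessages]);
	};
}

const history = [new Human('a'), new Ai('r1'), new Human('b'), new Ai('r2'), new Human('c')];
const kept = createTrimReducer(2)(history);
console.log(kept.map((m) => m.text)); // → [ 'b', 'r2', 'c' ]
```

Note that the AI response preceding the oldest kept human message is dropped along with it, so the history always starts at a user turn.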
export const WorkflowState = Annotation.Root({
messages: Annotation<BaseMessage[]>({
reducer: messagesStateReducer,
default: () => [],
}),
// The JSON representation of the workflow being built.
// A plain field without a custom reducer - all updates go through operations
workflowJSON: Annotation<SimpleWorkflow>({
reducer: (x, y) => y ?? x,
default: () => ({ nodes: [], connections: {}, name: '' }),
}),
// Operations to apply to the workflow - processed by a separate node
workflowOperations: Annotation<WorkflowOperation[] | null>({
reducer: operationsReducer,
default: () => [],
}),
// Latest workflow context provided with the chat payload
workflowContext: Annotation<ChatPayload['workflowContext'] | undefined>({
reducer: (x, y) => y ?? x,
}),
// Previous conversation summary (used for compressing long conversations)
previousSummary: Annotation<string>({
reducer: (x, y) => y ?? x, // Overwrite with the latest summary
default: () => 'EMPTY',
}),
});


@@ -0,0 +1,599 @@
import type { ToolRunnableConfig } from '@langchain/core/tools';
import type { LangGraphRunnableConfig } from '@langchain/langgraph';
import { getCurrentTaskInput } from '@langchain/langgraph';
import type { MockProxy } from 'jest-mock-extended';
import { mock } from 'jest-mock-extended';
import type {
INode,
INodeTypeDescription,
INodeParameters,
IConnection,
NodeConnectionType,
} from 'n8n-workflow';
import { jsonParse } from 'n8n-workflow';
import type { ProgressReporter, ToolProgressMessage } from '../src/types/tools';
import type { SimpleWorkflow } from '../src/types/workflow';
export const mockProgress = (): MockProxy<ProgressReporter> => mock<ProgressReporter>();
// Mock state helpers
export const mockStateHelpers = () => ({
getNodes: jest.fn(() => [] as INode[]),
getConnections: jest.fn(() => ({}) as SimpleWorkflow['connections']),
updateNode: jest.fn((_id: string, _updates: Partial<INode>) => undefined),
addNodes: jest.fn((_nodes: INode[]) => undefined),
removeNode: jest.fn((_id: string) => undefined),
addConnections: jest.fn((_connections: IConnection[]) => undefined),
removeConnection: jest.fn((_sourceId: string, _targetId: string, _type?: string) => undefined),
});
export type MockStateHelpers = ReturnType<typeof mockStateHelpers>;
// Simple node creation helper
export const createNode = (overrides: Partial<INode> = {}): INode => ({
id: 'node1',
name: 'TestNode',
type: 'n8n-nodes-base.code',
typeVersion: 1,
position: [0, 0],
...overrides,
	// Default parameters to an empty object so the field is never undefined
	parameters: overrides.parameters ?? {},
});
// Simple workflow builder
export const createWorkflow = (nodes: INode[] = []): SimpleWorkflow => {
const workflow: SimpleWorkflow = { nodes, connections: {}, name: 'Test workflow' };
return workflow;
};
// Create mock node type description
export const createNodeType = (
overrides: Partial<INodeTypeDescription> = {},
): INodeTypeDescription => ({
displayName: overrides.displayName ?? 'Test Node',
name: overrides.name ?? 'test.node',
group: overrides.group ?? ['transform'],
version: overrides.version ?? 1,
description: overrides.description ?? 'Test node description',
defaults: overrides.defaults ?? { name: 'Test Node' },
inputs: overrides.inputs ?? ['main'],
outputs: overrides.outputs ?? ['main'],
properties: overrides.properties ?? [],
...overrides,
});
// Common node types for testing
export const nodeTypes = {
code: createNodeType({
displayName: 'Code',
name: 'n8n-nodes-base.code',
group: ['transform'],
properties: [
{
displayName: 'JavaScript',
name: 'jsCode',
type: 'string',
typeOptions: {
editor: 'codeNodeEditor',
},
default: '',
},
],
}),
httpRequest: createNodeType({
displayName: 'HTTP Request',
name: 'n8n-nodes-base.httpRequest',
group: ['input'],
properties: [
{
displayName: 'URL',
name: 'url',
type: 'string',
default: '',
},
{
displayName: 'Method',
name: 'method',
type: 'options',
options: [
{ name: 'GET', value: 'GET' },
{ name: 'POST', value: 'POST' },
],
default: 'GET',
},
],
}),
webhook: createNodeType({
displayName: 'Webhook',
name: 'n8n-nodes-base.webhook',
group: ['trigger'],
inputs: [],
outputs: ['main'],
webhooks: [
{
name: 'default',
httpMethod: 'POST',
responseMode: 'onReceived',
path: 'webhook',
},
],
properties: [
{
displayName: 'Path',
name: 'path',
type: 'string',
default: 'webhook',
},
],
}),
agent: createNodeType({
displayName: 'AI Agent',
name: '@n8n/n8n-nodes-langchain.agent',
group: ['output'],
inputs: ['ai_agent'],
outputs: ['main'],
properties: [],
}),
openAiModel: createNodeType({
displayName: 'OpenAI Chat Model',
name: '@n8n/n8n-nodes-langchain.lmChatOpenAi',
group: ['output'],
inputs: [],
outputs: ['ai_languageModel'],
properties: [],
}),
setNode: createNodeType({
displayName: 'Set',
name: 'n8n-nodes-base.set',
group: ['transform'],
properties: [
{
displayName: 'Values to Set',
name: 'values',
type: 'collection',
default: {},
},
],
}),
ifNode: createNodeType({
displayName: 'If',
name: 'n8n-nodes-base.if',
group: ['transform'],
inputs: ['main'],
outputs: ['main', 'main'],
outputNames: ['true', 'false'],
properties: [
{
displayName: 'Conditions',
name: 'conditions',
type: 'collection',
default: {},
},
],
}),
mergeNode: createNodeType({
displayName: 'Merge',
name: 'n8n-nodes-base.merge',
group: ['transform'],
inputs: ['main', 'main'],
outputs: ['main'],
inputNames: ['Input 1', 'Input 2'],
properties: [
{
displayName: 'Mode',
name: 'mode',
type: 'options',
options: [
{ name: 'Append', value: 'append' },
{ name: 'Merge By Index', value: 'mergeByIndex' },
{ name: 'Merge By Key', value: 'mergeByKey' },
],
default: 'append',
},
],
}),
vectorStoreNode: createNodeType({
displayName: 'Vector Store',
name: '@n8n/n8n-nodes-langchain.vectorStore',
subtitle: '={{$parameter["mode"] === "retrieve" ? "Retrieve" : "Insert"}}',
group: ['transform'],
inputs: `={{ ((parameter) => {
function getInputs(parameters) {
const mode = parameters?.mode;
const inputs = [];
if (mode === 'retrieve-as-tool') {
inputs.push({
displayName: 'Embedding',
type: 'ai_embedding',
required: true
});
} else {
inputs.push({
displayName: '',
type: 'main'
});
inputs.push({
displayName: 'Embedding',
type: 'ai_embedding',
required: true
});
}
return inputs;
};
return getInputs(parameter)
})($parameter) }}`,
outputs: `={{ ((parameter) => {
function getOutputs(parameters) {
const mode = parameters?.mode;
if (mode === 'retrieve-as-tool') {
return ['ai_tool'];
} else if (mode === 'retrieve') {
return ['ai_document'];
} else {
return ['main'];
}
};
return getOutputs(parameter)
})($parameter) }}`,
properties: [
{
displayName: 'Mode',
name: 'mode',
type: 'options',
options: [
{ name: 'Insert', value: 'insert' },
{ name: 'Retrieve', value: 'retrieve' },
{ name: 'Retrieve (As Tool)', value: 'retrieve-as-tool' },
],
default: 'insert',
},
// Many more properties would be here in reality
],
}),
};
// Helper to create connections
export const createConnection = (
_fromId: string,
toId: string,
type: NodeConnectionType = 'main',
index: number = 0,
) => ({
node: toId,
type,
index,
});
// Generic chain interface
interface Chain<TInput = Record<string, unknown>, TOutput = Record<string, unknown>> {
invoke: (input: TInput) => Promise<TOutput>;
}
// Generic mock chain factory with proper typing
export const mockChain = <
TInput = Record<string, unknown>,
TOutput = Record<string, unknown>,
>(): MockProxy<Chain<TInput, TOutput>> => {
return mock<Chain<TInput, TOutput>>();
};
// Convenience factory for parameter updater chain
export const mockParameterUpdaterChain = () => {
return mockChain<Record<string, unknown>, { parameters: Record<string, unknown> }>();
};
// Helper to assert node parameters
export const expectNodeToHaveParameters = (
node: INode,
expectedParams: Partial<INodeParameters>,
): void => {
expect(node.parameters).toMatchObject(expectedParams);
};
// Helper to assert connections exist
export const expectConnectionToExist = (
connections: SimpleWorkflow['connections'],
fromId: string,
toId: string,
type: string = 'main',
): void => {
expect(connections[fromId]).toBeDefined();
expect(connections[fromId][type]).toBeDefined();
expect(connections[fromId][type]).toContainEqual(
expect.arrayContaining([expect.objectContaining({ node: toId })]),
);
};
// ========== LangGraph Testing Utilities ==========
// Types for mocked Command results
export type MockedCommandResult = { content: string };
// Common parsed content structure for tool results
export interface ParsedToolContent {
update: {
messages: Array<{ kwargs: { content: string } }>;
workflowOperations?: Array<{
type: string;
nodes?: INode[];
[key: string]: unknown;
}>;
};
}
// Setup LangGraph mocks
export const setupLangGraphMocks = () => {
const mockGetCurrentTaskInput = getCurrentTaskInput as jest.MockedFunction<
typeof getCurrentTaskInput
>;
	// Note: jest.mock only takes effect when hoisted at module top level; test files
	// should declare an equivalent jest.mock('@langchain/langgraph', ...) themselves.
	jest.mock('@langchain/langgraph', () => ({
getCurrentTaskInput: jest.fn(),
Command: jest.fn().mockImplementation((params: Record<string, unknown>) => ({
content: JSON.stringify(params),
})),
}));
return { mockGetCurrentTaskInput };
};
// Parse tool result with double-wrapped content handling
export const parseToolResult = <T = ParsedToolContent>(result: unknown): T => {
const parsed = jsonParse<{ content?: string }>((result as MockedCommandResult).content);
return parsed.content ? jsonParse<T>(parsed.content) : (parsed as T);
};
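The double-wrapping that `parseToolResult` handles comes from `Command` params being stringified once by the mock and again inside `content`. A standalone sketch, with `jsonParse` from `n8n-workflow` assumed to behave like plain `JSON.parse` here:

```typescript
// Stand-in for n8n-workflow's jsonParse (assumption: plain JSON.parse semantics)
const jsonParse = <T>(s: string): T => JSON.parse(s) as T;

type Parsed = { update: { messages: Array<{ kwargs: { content: string } }> } };

function parseToolResult<T = Parsed>(result: { content: string }): T {
	const parsed = jsonParse<{ content?: string }>(result.content);
	return parsed.content ? jsonParse<T>(parsed.content) : (parsed as T);
}

const inner = JSON.stringify({ update: { messages: [{ kwargs: { content: 'ok' } }] } });

// Double-wrapped: the outer payload's `content` is itself a JSON string
const doubleWrapped = parseToolResult({ content: JSON.stringify({ content: inner }) });
console.log(doubleWrapped.update.messages[0].kwargs.content); // → ok

// Single-wrapped payloads pass through the first parse unchanged
const singleWrapped = parseToolResult({ content: inner });
console.log(singleWrapped.update.messages[0].kwargs.content); // → ok
```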
// ========== Progress Message Utilities ==========
// Extract progress messages from mockWriter
export const extractProgressMessages = (
mockWriter: jest.Mock,
): Array<ToolProgressMessage<string>> => {
const progressCalls: Array<ToolProgressMessage<string>> = [];
mockWriter.mock.calls.forEach((call) => {
// eslint-disable-next-line @typescript-eslint/no-unsafe-assignment
const [arg] = call;
progressCalls.push(arg as ToolProgressMessage<string>);
});
return progressCalls;
};
// Find specific progress message by type
export const findProgressMessage = (
messages: Array<ToolProgressMessage<string>>,
status: 'running' | 'completed' | 'error',
updateType?: string,
): ToolProgressMessage<string> | undefined => {
return messages.find(
(msg) => msg.status === status && (!updateType || msg.updates[0]?.type === updateType),
);
};
// ========== Tool Config Helpers ==========
// Create basic tool config
export const createToolConfig = (
toolName: string,
callId: string = 'test-call',
): ToolRunnableConfig => ({
toolCall: { id: callId, name: toolName, args: {} },
});
// Create tool config with writer for progress tracking
export const createToolConfigWithWriter = (
toolName: string,
callId: string = 'test-call',
): ToolRunnableConfig & LangGraphRunnableConfig & { writer: jest.Mock } => {
const mockWriter = jest.fn();
return {
toolCall: { id: callId, name: toolName, args: {} },
writer: mockWriter,
};
};
// ========== Workflow State Helpers ==========
// Setup workflow state with mockGetCurrentTaskInput
export const setupWorkflowState = (
mockGetCurrentTaskInput: jest.MockedFunction<typeof getCurrentTaskInput>,
workflow: SimpleWorkflow = createWorkflow([]),
) => {
mockGetCurrentTaskInput.mockReturnValue({
workflowJSON: workflow,
});
};
// ========== Common Tool Assertions ==========
// Expect tool success message
export const expectToolSuccess = (
content: ParsedToolContent,
expectedMessage: string | RegExp,
): void => {
const message = content.update.messages[0]?.kwargs.content;
expect(message).toBeDefined();
if (typeof expectedMessage === 'string') {
expect(message).toContain(expectedMessage);
} else {
expect(message).toMatch(expectedMessage);
}
};
// Expect tool error message
export const expectToolError = (
content: ParsedToolContent,
expectedError: string | RegExp,
): void => {
const message = content.update.messages[0]?.kwargs.content;
if (typeof expectedError === 'string') {
expect(message).toBe(expectedError);
} else {
expect(message).toMatch(expectedError);
}
};
// Expect workflow operation of specific type
export const expectWorkflowOperation = (
content: ParsedToolContent,
operationType: string,
matcher?: Record<string, unknown>,
): void => {
const operation = content.update.workflowOperations?.[0];
expect(operation).toBeDefined();
expect(operation?.type).toBe(operationType);
if (matcher) {
expect(operation).toMatchObject(matcher);
}
};
// Expect node was added
export const expectNodeAdded = (content: ParsedToolContent, expectedNode: Partial<INode>): void => {
expectWorkflowOperation(content, 'addNodes');
const addedNode = content.update.workflowOperations?.[0]?.nodes?.[0];
expect(addedNode).toBeDefined();
expect(addedNode).toMatchObject(expectedNode);
};
// Expect node was removed
export const expectNodeRemoved = (content: ParsedToolContent, nodeId: string): void => {
expectWorkflowOperation(content, 'removeNode', { nodeIds: [nodeId] });
};
// Expect connections were added
export const expectConnectionsAdded = (
content: ParsedToolContent,
expectedCount?: number,
): void => {
expectWorkflowOperation(content, 'addConnections');
if (expectedCount !== undefined) {
const connections = content.update.workflowOperations?.[0]?.connections;
expect(connections).toHaveLength(expectedCount);
}
};
// Expect node was updated
export const expectNodeUpdated = (
content: ParsedToolContent,
nodeId: string,
expectedUpdates?: Record<string, unknown>,
): void => {
expectWorkflowOperation(content, 'updateNode', {
nodeId,
...(expectedUpdates ? { updates: expect.objectContaining(expectedUpdates) } : {}),
});
};
// ========== Test Data Builders ==========
// Build add node input
export const buildAddNodeInput = (overrides: {
nodeType: string;
name?: string;
connectionParametersReasoning?: string;
connectionParameters?: Record<string, unknown>;
}) => ({
nodeType: overrides.nodeType,
name: overrides.name ?? 'Test Node',
connectionParametersReasoning:
overrides.connectionParametersReasoning ??
'Standard node with static inputs/outputs, no connection parameters needed',
connectionParameters: overrides.connectionParameters ?? {},
});
// Build connect nodes input
export const buildConnectNodesInput = (overrides: {
sourceNodeId: string;
targetNodeId: string;
sourceOutputIndex?: number;
targetInputIndex?: number;
}) => ({
sourceNodeId: overrides.sourceNodeId,
targetNodeId: overrides.targetNodeId,
sourceOutputIndex: overrides.sourceOutputIndex ?? 0,
targetInputIndex: overrides.targetInputIndex ?? 0,
});
// Build node search query
export const buildNodeSearchQuery = (
queryType: 'name' | 'subNodeSearch',
query?: string,
connectionType?: NodeConnectionType,
) => ({
queryType,
...(query && { query }),
...(connectionType && { connectionType }),
});
// Build update node parameters input
export const buildUpdateNodeInput = (nodeId: string, changes: string[]) => ({
nodeId,
changes,
});
// Build node details input
export const buildNodeDetailsInput = (overrides: {
nodeName: string;
withParameters?: boolean;
withConnections?: boolean;
}) => ({
nodeName: overrides.nodeName,
withParameters: overrides.withParameters ?? false,
withConnections: overrides.withConnections ?? true,
});
// Expect node details in response
export const expectNodeDetails = (
content: ParsedToolContent,
expectedDetails: Partial<{
name: string;
displayName: string;
description: string;
subtitle?: string;
}>,
): void => {
const message = content.update.messages[0]?.kwargs.content;
expect(message).toBeDefined();
// Check for expected XML-like tags in formatted output
if (expectedDetails.name) {
expect(message).toContain(`<name>${expectedDetails.name}</name>`);
}
if (expectedDetails.displayName) {
expect(message).toContain(`<display_name>${expectedDetails.displayName}</display_name>`);
}
if (expectedDetails.description) {
expect(message).toContain(`<description>${expectedDetails.description}</description>`);
}
if (expectedDetails.subtitle) {
expect(message).toContain(`<subtitle>${expectedDetails.subtitle}</subtitle>`);
}
};
// Helper to validate XML-like structure in output
export const expectXMLTag = (
content: string,
tagName: string,
expectedValue?: string | RegExp,
): void => {
const tagRegex = new RegExp(`<${tagName}>([\\s\\S]*?)</${tagName}>`);
const match = content.match(tagRegex);
expect(match).toBeDefined();
if (expectedValue) {
if (typeof expectedValue === 'string') {
expect(match?.[1]?.trim()).toBe(expectedValue);
} else {
expect(match?.[1]).toMatch(expectedValue);
}
}
};
// Common reasoning strings
export const REASONING = {
STATIC_NODE: 'Node has static inputs/outputs, no connection parameters needed',
DYNAMIC_AI_NODE: 'AI node has dynamic inputs, setting connection parameters',
TRIGGER_NODE: 'Trigger node, no connection parameters needed',
WEBHOOK_NODE: 'Webhook is a trigger node, no connection parameters needed',
} as const;


@@ -0,0 +1,17 @@
import type { ApiKeyScope } from '@n8n/permissions';
/** Unix timestamp. Seconds since epoch */
export type UnixTimestamp = number | null;
export type ApiKey = {
id: string;
label: string;
apiKey: string;
createdAt: string;
updatedAt: string;
/** Null if API key never expires */
	expiresAt: UnixTimestamp;
scopes: ApiKeyScope[];
};
export type ApiKeyWithRawValue = ApiKey & { rawApiKey: string };


@@ -0,0 +1,21 @@
import type { INodeTypeDescription } from 'n8n-workflow';
export type CommunityNodeType = {
authorGithubUrl: string;
authorName: string;
checksum: string;
description: string;
displayName: string;
name: string;
numberOfStars: number;
numberOfDownloads: number;
packageName: string;
createdAt: string;
updatedAt: string;
npmVersion: string;
isOfficialNode: boolean;
companyName?: string;
nodeDescription: INodeTypeDescription;
isInstalled: boolean;
nodeVersions?: Array<{ npmVersion: string; checksum: string }>;
};


@@ -0,0 +1,2 @@
/** Date time in the ISO 8601 format, e.g. 2024-10-31T00:00:00.123Z */
export type Iso8601DateTimeString = string;


@@ -0,0 +1,227 @@
import type { LogLevel, WorkflowSettings } from 'n8n-workflow';
import { type InsightsDateRange } from './schemas/insights.schema';
export interface IVersionNotificationSettings {
enabled: boolean;
endpoint: string;
whatsNewEnabled: boolean;
whatsNewEndpoint: string;
infoUrl: string;
}
export interface ITelemetryClientConfig {
url: string;
key: string;
proxy: string;
sourceConfig: string;
}
export interface ITelemetrySettings {
enabled: boolean;
config?: ITelemetryClientConfig;
}
export type AuthenticationMethod = 'email' | 'ldap' | 'saml' | 'oidc';
export interface IUserManagementSettings {
quota: number;
showSetupOnFirstLoad?: boolean;
smtpSetup: boolean;
authenticationMethod: AuthenticationMethod;
}
export interface FrontendSettings {
inE2ETests: boolean;
isDocker: boolean;
databaseType: 'sqlite' | 'mariadb' | 'mysqldb' | 'postgresdb';
endpointForm: string;
endpointFormTest: string;
endpointFormWaiting: string;
endpointMcp: string;
endpointMcpTest: string;
endpointWebhook: string;
endpointWebhookTest: string;
endpointWebhookWaiting: string;
saveDataErrorExecution: WorkflowSettings.SaveDataExecution;
saveDataSuccessExecution: WorkflowSettings.SaveDataExecution;
saveManualExecutions: boolean;
saveExecutionProgress: boolean;
executionTimeout: number;
maxExecutionTimeout: number;
workflowCallerPolicyDefaultOption: WorkflowSettings.CallerPolicy;
oauthCallbackUrls: {
oauth1: string;
oauth2: string;
};
timezone: string;
urlBaseWebhook: string;
urlBaseEditor: string;
versionCli: string;
nodeJsVersion: string;
concurrency: number;
authCookie: {
secure: boolean;
};
binaryDataMode: 'default' | 'filesystem' | 's3';
releaseChannel: 'stable' | 'beta' | 'nightly' | 'dev';
n8nMetadata?: {
userId?: string;
[key: string]: string | number | undefined;
};
versionNotifications: IVersionNotificationSettings;
instanceId: string;
telemetry: ITelemetrySettings;
posthog: {
enabled: boolean;
apiHost: string;
apiKey: string;
autocapture: boolean;
disableSessionRecording: boolean;
debug: boolean;
};
personalizationSurveyEnabled: boolean;
defaultLocale: string;
userManagement: IUserManagementSettings;
sso: {
saml: {
loginLabel: string;
loginEnabled: boolean;
};
oidc: {
loginEnabled: boolean;
loginUrl: string;
callbackUrl: string;
};
ldap: {
loginLabel: string;
loginEnabled: boolean;
};
};
publicApi: {
enabled: boolean;
latestVersion: number;
path: string;
swaggerUi: {
enabled: boolean;
};
};
workflowTagsDisabled: boolean;
logLevel: LogLevel;
hiringBannerEnabled: boolean;
previewMode: boolean;
templates: {
enabled: boolean;
host: string;
};
missingPackages?: boolean;
executionMode: 'regular' | 'queue';
/** Whether multi-main mode is enabled and licensed for this main instance. */
isMultiMain: boolean;
pushBackend: 'sse' | 'websocket';
communityNodesEnabled: boolean;
unverifiedCommunityNodesEnabled: boolean;
aiAssistant: {
enabled: boolean;
};
askAi: {
enabled: boolean;
};
deployment: {
type: string;
};
allowedModules: {
builtIn?: string[];
external?: string[];
};
enterprise: {
sharing: boolean;
ldap: boolean;
saml: boolean;
oidc: boolean;
mfaEnforcement: boolean;
logStreaming: boolean;
advancedExecutionFilters: boolean;
variables: boolean;
sourceControl: boolean;
auditLogs: boolean;
externalSecrets: boolean;
showNonProdBanner: boolean;
debugInEditor: boolean;
binaryDataS3: boolean;
workflowHistory: boolean;
workerView: boolean;
advancedPermissions: boolean;
apiKeyScopes: boolean;
workflowDiffs: boolean;
projects: {
team: {
limit: number;
};
};
};
hideUsagePage: boolean;
license: {
planName?: string;
consumerId: string;
environment: 'development' | 'production' | 'staging';
};
variables: {
limit: number;
};
mfa: {
enabled: boolean;
enforced: boolean;
};
folders: {
enabled: boolean;
};
banners: {
dismissed: string[];
};
workflowHistory: {
pruneTime: number;
licensePruneTime: number;
};
aiCredits: {
enabled: boolean;
credits: number;
};
pruning?: {
isEnabled: boolean;
maxAge: number;
maxCount: number;
};
security: {
blockFileAccessToN8nFiles: boolean;
};
easyAIWorkflowOnboarded: boolean;
partialExecution: {
version: 1 | 2;
};
evaluation: {
quota: number;
};
/** Backend modules that were initialized during startup. */
activeModules: string[];
envFeatureFlags: N8nEnvFeatFlags;
}
export type FrontendModuleSettings = {
/**
* Client settings for [insights](https://docs.n8n.io/insights/) module.
*
* - `summary`: Whether the summary banner should be shown.
* - `dashboard`: Whether the full dashboard should be shown.
* - `dateRanges`: Date range filters available to select.
*/
insights?: {
summary: boolean;
dashboard: boolean;
dateRanges: InsightsDateRange[];
};
};
export type N8nEnvFeatFlagValue = boolean | string | number | undefined;
export type N8nEnvFeatFlags = Record<`N8N_ENV_FEAT_${Uppercase<string>}`, N8nEnvFeatFlagValue>;


@@ -0,0 +1,57 @@
export type * from './datetime';
export * from './dto';
export type * from './push';
export type * from './scaling';
export type * from './frontend-settings';
export type * from './user';
export type * from './api-keys';
export type * from './community-node-types';
export type { Collaborator } from './push/collaboration';
export type { HeartbeatMessage } from './push/heartbeat';
export { createHeartbeatMessage, heartbeatMessageSchema } from './push/heartbeat';
export type { SendWorkerStatusMessage } from './push/worker';
export type { BannerName } from './schemas/banner-name.schema';
export { ViewableMimeTypes } from './schemas/binary-data.schema';
export { passwordSchema } from './schemas/password.schema';
export type {
ProjectType,
ProjectIcon,
ProjectRelation,
} from './schemas/project.schema';
export {
type SourceControlledFile,
SOURCE_CONTROL_FILE_LOCATION,
SOURCE_CONTROL_FILE_STATUS,
SOURCE_CONTROL_FILE_TYPE,
} from './schemas/source-controlled-file.schema';
export {
type InsightsSummaryType,
type InsightsSummaryUnit,
type InsightsSummary,
type InsightsByWorkflow,
type InsightsByTime,
type InsightsDateRange,
} from './schemas/insights.schema';
export {
ROLE,
type Role,
type User,
type UsersList,
usersListSchema,
} from './schemas/user.schema';
export {
DATA_STORE_COLUMN_REGEX,
type DataStore,
type DataStoreColumn,
type DataStoreCreateColumnSchema,
type DataStoreListFilter,
type DataStoreListOptions,
dateTimeSchema,
} from './schemas/data-store.schema';


@@ -0,0 +1,30 @@
import type { ExecutionStatus, WorkflowExecuteMode } from 'n8n-workflow';
export type RunningJobSummary = {
executionId: string;
workflowId: string;
workflowName: string;
mode: WorkflowExecuteMode;
startedAt: Date;
retryOf?: string;
status: ExecutionStatus;
};
export type WorkerStatus = {
senderId: string;
runningJobsSummary: RunningJobSummary[];
freeMem: number;
totalMem: number;
uptime: number;
loadAvg: number[];
cpus: string;
arch: string;
platform: NodeJS.Platform;
hostname: string;
interfaces: Array<{
family: 'IPv4' | 'IPv6';
address: string;
internal: boolean;
}>;
version: string;
};


@@ -0,0 +1,6 @@
export type MinimalUser = {
id: string;
email: string;
firstName: string;
lastName: string;
};


@@ -0,0 +1,63 @@
import { Service } from '@n8n/di';
import argvParser from 'yargs-parser';
import type { z } from 'zod';
import { Logger } from './logging';
type CliInput<Flags extends z.ZodRawShape> = {
argv: string[];
flagsSchema?: z.ZodObject<Flags>;
description?: string;
examples?: string[];
};
type ParsedArgs<Flags = Record<string, unknown>> = {
flags: Flags;
args: string[];
};
@Service()
export class CliParser {
constructor(private readonly logger: Logger) {}
parse<Flags extends z.ZodRawShape>(
input: CliInput<Flags>,
): ParsedArgs<z.infer<z.ZodObject<Flags>>> {
// eslint-disable-next-line id-denylist
const { _: rest, ...rawFlags } = argvParser(input.argv, { string: ['id'] });
let flags = {} as z.infer<z.ZodObject<Flags>>;
if (input.flagsSchema) {
for (const key in input.flagsSchema.shape) {
const flagSchema = input.flagsSchema.shape[key];
let schemaDef = flagSchema._def as z.ZodTypeDef & {
typeName: string;
innerType?: z.ZodType;
_alias?: string;
};
if (schemaDef.typeName === 'ZodOptional' && schemaDef.innerType) {
schemaDef = schemaDef.innerType._def as typeof schemaDef;
}
const alias = schemaDef._alias;
if (alias?.length && !(key in rawFlags) && rawFlags[alias]) {
rawFlags[key] = rawFlags[alias] as unknown;
}
}
flags = input.flagsSchema.parse(rawFlags);
}
const args = rest.map(String).slice(2);
this.logger.debug('Received CLI command', {
execPath: rest[0],
scriptPath: rest[1],
args,
flags,
});
return { flags, args };
}
}


@@ -0,0 +1,5 @@
const { NODE_ENV } = process.env;
export const inTest = NODE_ENV === 'test';
export const inProduction = NODE_ENV === 'production';
export const inDevelopment = !NODE_ENV || NODE_ENV === 'development';


@@ -0,0 +1,10 @@
export * from './license-state';
export * from './types';
export { inDevelopment, inProduction, inTest } from './environment';
export { isObjectLiteral } from './utils/is-object-literal';
export { Logger } from './logging/logger';
export { ModuleRegistry } from './modules/module-registry';
export { ModulesConfig, ModuleName } from './modules/modules.config';
export { isContainedWithin, safeJoinPath } from './utils/path-util';
export { CliParser } from './cli-parser';


@@ -0,0 +1,209 @@
import type { BooleanLicenseFeature } from '@n8n/constants';
import { UNLIMITED_LICENSE_QUOTA } from '@n8n/constants';
import { Service } from '@n8n/di';
import { UnexpectedError } from 'n8n-workflow';
import type { FeatureReturnType, LicenseProvider } from './types';
class ProviderNotSetError extends UnexpectedError {
constructor() {
super('Cannot query license state because license provider has not been set');
}
}
@Service()
export class LicenseState {
licenseProvider: LicenseProvider | null = null;
setLicenseProvider(provider: LicenseProvider) {
this.licenseProvider = provider;
}
private assertProvider(): asserts this is { licenseProvider: LicenseProvider } {
if (!this.licenseProvider) throw new ProviderNotSetError();
}
// --------------------
// core queries
// --------------------
isLicensed(feature: BooleanLicenseFeature) {
this.assertProvider();
return this.licenseProvider.isLicensed(feature);
}
getValue<T extends keyof FeatureReturnType>(feature: T): FeatureReturnType[T] {
this.assertProvider();
return this.licenseProvider.getValue(feature);
}
// --------------------
// booleans
// --------------------
isSharingLicensed() {
return this.isLicensed('feat:sharing');
}
isLogStreamingLicensed() {
return this.isLicensed('feat:logStreaming');
}
isLdapLicensed() {
return this.isLicensed('feat:ldap');
}
isSamlLicensed() {
return this.isLicensed('feat:saml');
}
isOidcLicensed() {
return this.isLicensed('feat:oidc');
}
isMFAEnforcementLicensed() {
return this.isLicensed('feat:mfaEnforcement');
}
isApiKeyScopesLicensed() {
return this.isLicensed('feat:apiKeyScopes');
}
isAiAssistantLicensed() {
return this.isLicensed('feat:aiAssistant');
}
isAskAiLicensed() {
return this.isLicensed('feat:askAi');
}
isAiCreditsLicensed() {
return this.isLicensed('feat:aiCredits');
}
isAdvancedExecutionFiltersLicensed() {
return this.isLicensed('feat:advancedExecutionFilters');
}
isAdvancedPermissionsLicensed() {
return this.isLicensed('feat:advancedPermissions');
}
isDebugInEditorLicensed() {
return this.isLicensed('feat:debugInEditor');
}
isBinaryDataS3Licensed() {
return this.isLicensed('feat:binaryDataS3');
}
isMultiMainLicensed() {
return this.isLicensed('feat:multipleMainInstances');
}
isVariablesLicensed() {
return this.isLicensed('feat:variables');
}
isSourceControlLicensed() {
return this.isLicensed('feat:sourceControl');
}
isExternalSecretsLicensed() {
return this.isLicensed('feat:externalSecrets');
}
isWorkflowHistoryLicensed() {
return this.isLicensed('feat:workflowHistory');
}
isAPIDisabled() {
return this.isLicensed('feat:apiDisabled');
}
isWorkerViewLicensed() {
return this.isLicensed('feat:workerView');
}
isProjectRoleAdminLicensed() {
return this.isLicensed('feat:projectRole:admin');
}
isProjectRoleEditorLicensed() {
return this.isLicensed('feat:projectRole:editor');
}
isProjectRoleViewerLicensed() {
return this.isLicensed('feat:projectRole:viewer');
}
isCustomNpmRegistryLicensed() {
return this.isLicensed('feat:communityNodes:customRegistry');
}
isFoldersLicensed() {
return this.isLicensed('feat:folders');
}
isInsightsSummaryLicensed() {
return this.isLicensed('feat:insights:viewSummary');
}
isInsightsDashboardLicensed() {
return this.isLicensed('feat:insights:viewDashboard');
}
isInsightsHourlyDataLicensed() {
return this.isLicensed('feat:insights:viewHourlyData');
}
isWorkflowDiffsLicensed() {
return this.isLicensed('feat:workflowDiffs');
}
// --------------------
// integers
// --------------------
getMaxUsers() {
return this.getValue('quota:users') ?? UNLIMITED_LICENSE_QUOTA;
}
getMaxActiveWorkflows() {
return this.getValue('quota:activeWorkflows') ?? UNLIMITED_LICENSE_QUOTA;
}
getMaxVariables() {
return this.getValue('quota:maxVariables') ?? UNLIMITED_LICENSE_QUOTA;
}
getMaxAiCredits() {
return this.getValue('quota:aiCredits') ?? 0;
}
getWorkflowHistoryPruneQuota() {
return this.getValue('quota:workflowHistoryPrune') ?? UNLIMITED_LICENSE_QUOTA;
}
getInsightsMaxHistory() {
return this.getValue('quota:insights:maxHistoryDays') ?? 7;
}
getInsightsRetentionMaxAge() {
return this.getValue('quota:insights:retention:maxAgeDays') ?? 180;
}
getInsightsRetentionPruneInterval() {
return this.getValue('quota:insights:retention:pruneIntervalDays') ?? 24;
}
getMaxTeamProjects() {
return this.getValue('quota:maxTeamProjects') ?? 0;
}
getMaxWorkflowsWithEvaluations() {
return this.getValue('quota:evaluations:maxWorkflows') ?? 0;
}
}
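The class above delegates every query to a pluggable provider and throws if none has been set. Below is a self-contained sketch of that pattern: string-keyed features and an assumed `-1` "unlimited" sentinel stand in for the real `@n8n/constants` types, so this is an illustration of the shape, not the actual n8n API.

```typescript
// Reduced stand-in for the LicenseProvider contract (the real one is
// typed against BooleanLicenseFeature / FeatureReturnType).
interface LicenseProvider {
  isLicensed(feature: string): boolean;
  getValue(feature: string): number | undefined;
}

// Assumed sentinel; in n8n this comes from @n8n/constants.
const UNLIMITED_LICENSE_QUOTA = -1;

class LicenseStateSketch {
  licenseProvider: LicenseProvider | null = null;

  setLicenseProvider(provider: LicenseProvider) {
    this.licenseProvider = provider;
  }

  // Assertion signature: after this call, TypeScript narrows
  // `this.licenseProvider` to non-null.
  private assertProvider(): asserts this is { licenseProvider: LicenseProvider } {
    if (!this.licenseProvider) throw new Error('license provider has not been set');
  }

  isLicensed(feature: string) {
    this.assertProvider();
    return this.licenseProvider.isLicensed(feature);
  }

  // Quotas fall back to "unlimited" when the license does not set them.
  getMaxUsers() {
    this.assertProvider();
    return this.licenseProvider.getValue('quota:users') ?? UNLIMITED_LICENSE_QUOTA;
  }
}

const state = new LicenseStateSketch();
state.setLicenseProvider({
  isLicensed: (feature) => feature === 'feat:sharing',
  getValue: () => undefined,
});
```

The `asserts this is { … }` signature is what lets every boolean helper call the provider without repeated null checks.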


@@ -0,0 +1,15 @@
import type { BooleanLicenseFeature, NumericLicenseFeature } from '@n8n/constants';
export type FeatureReturnType = Partial<
{
planName: string;
} & { [K in NumericLicenseFeature]: number } & { [K in BooleanLicenseFeature]: boolean }
>;
export interface LicenseProvider {
/** Returns whether a feature is included in the user's license plan. */
isLicensed(feature: BooleanLicenseFeature): boolean;
/** Returns the value of a feature in the user's license plan, typically a boolean or integer. */
getValue<T extends keyof FeatureReturnType>(feature: T): FeatureReturnType[T];
}


@@ -0,0 +1,12 @@
import type { Logger } from '@n8n/backend-common';
import { mock } from 'jest-mock-extended';
export const mockLogger = (): Logger =>
mock<Logger>({ scoped: jest.fn().mockReturnValue(mock<Logger>()) });
export * from './random';
export * as testDb from './test-db';
export * as testModules from './test-modules';
export * from './db/workflows';
export * from './db/projects';
export * from './mocking';


@@ -0,0 +1,12 @@
import { Container, type Constructable } from '@n8n/di';
import { mock } from 'jest-mock-extended';
import type { DeepPartial } from 'ts-essentials';
export const mockInstance = <T>(
serviceClass: Constructable<T>,
data: DeepPartial<T> | undefined = undefined,
) => {
const instance = mock<T>(data);
Container.set(serviceClass, instance);
return instance;
};


@@ -0,0 +1,63 @@
import { MIN_PASSWORD_CHAR_LENGTH, MAX_PASSWORD_CHAR_LENGTH } from '@n8n/constants';
import { randomInt, randomString, UPPERCASE_LETTERS } from 'n8n-workflow';
import type { ICredentialDataDecryptedObject } from 'n8n-workflow';
import { v4 as uuid } from 'uuid';
export type CredentialPayload = {
name: string;
type: string;
data: ICredentialDataDecryptedObject;
isManaged?: boolean;
};
export const randomApiKey = () => `n8n_api_${randomString(40)}`;
export const chooseRandomly = <T>(array: T[]) => array[randomInt(array.length)];
const randomUppercaseLetter = () => chooseRandomly(UPPERCASE_LETTERS.split(''));
export const randomValidPassword = () =>
randomString(MIN_PASSWORD_CHAR_LENGTH, MAX_PASSWORD_CHAR_LENGTH - 2) +
randomUppercaseLetter() +
randomInt(10);
export const randomInvalidPassword = () =>
chooseRandomly([
randomString(1, MIN_PASSWORD_CHAR_LENGTH - 1),
randomString(MAX_PASSWORD_CHAR_LENGTH + 2, MAX_PASSWORD_CHAR_LENGTH + 100),
'abcdefgh', // valid length, no number, no uppercase
'abcdefg1', // valid length, has number, no uppercase
'abcdefgA', // valid length, no number, has uppercase
'abcdefA', // invalid length, no number, has uppercase
'abcdef1', // invalid length, has number, no uppercase
'abcdeA1', // invalid length, has number, has uppercase
'abcdefg', // invalid length, no number, no uppercase
]);
const POPULAR_TOP_LEVEL_DOMAINS = ['com', 'org', 'net', 'io', 'edu'];
const randomTopLevelDomain = () => chooseRandomly(POPULAR_TOP_LEVEL_DOMAINS);
export const randomName = () => randomString(4, 8).toLowerCase();
export const randomEmail = () => `${randomName()}@${randomName()}.${randomTopLevelDomain()}`;
export const randomCredentialPayload = ({
isManaged = false,
}: { isManaged?: boolean } = {}): CredentialPayload => ({
name: randomName(),
type: randomName(),
data: { accessToken: randomString(6, 16) },
isManaged,
});
export const randomCredentialPayloadWithOauthTokenData = ({
isManaged = false,
}: { isManaged?: boolean } = {}): CredentialPayload => ({
name: randomName(),
type: randomName(),
data: { accessToken: randomString(6, 16), oauthTokenData: { access_token: randomString(6, 16) } },
isManaged,
});
export const uniqueId = () => uuid();
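The valid/invalid password generators above encode a policy of bounded length plus at least one digit and one uppercase letter. A hypothetical validator for that policy makes the invalid cases in the list easy to check; the real `MIN_PASSWORD_CHAR_LENGTH`/`MAX_PASSWORD_CHAR_LENGTH` come from `@n8n/constants`, so 8 and 64 here are assumptions.

```typescript
// Assumed bounds; the real values live in @n8n/constants.
const MIN_PASSWORD_CHAR_LENGTH = 8;
const MAX_PASSWORD_CHAR_LENGTH = 64;

// Valid iff: length within bounds, at least one digit, at least one uppercase.
const isValidPassword = (password: string): boolean =>
  password.length >= MIN_PASSWORD_CHAR_LENGTH &&
  password.length <= MAX_PASSWORD_CHAR_LENGTH &&
  /[0-9]/.test(password) &&
  /[A-Z]/.test(password);
```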


@@ -0,0 +1,86 @@
import { GlobalConfig } from '@n8n/config';
import type { entities } from '@n8n/db';
import { DbConnection, DbConnectionOptions } from '@n8n/db';
import { Container } from '@n8n/di';
import type { DataSourceOptions } from '@n8n/typeorm';
import { DataSource as Connection } from '@n8n/typeorm';
import { randomString } from 'n8n-workflow';
export const testDbPrefix = 'n8n_test_';
/**
* Generate options for a bootstrap DB connection, to create and drop test databases.
*/
export const getBootstrapDBOptions = (dbType: 'postgresdb' | 'mysqldb'): DataSourceOptions => {
const globalConfig = Container.get(GlobalConfig);
const type = dbType === 'postgresdb' ? 'postgres' : 'mysql';
return {
type,
...Container.get(DbConnectionOptions).getOverrides(dbType),
database: type,
entityPrefix: globalConfig.database.tablePrefix,
schema: dbType === 'postgresdb' ? globalConfig.database.postgresdb.schema : undefined,
};
};
/**
* Initialize one test DB per suite run, with bootstrap connection if needed.
*/
export async function init() {
const globalConfig = Container.get(GlobalConfig);
const dbType = globalConfig.database.type;
const testDbName = `${testDbPrefix}${randomString(6, 10).toLowerCase()}_${Date.now()}`;
if (dbType === 'postgresdb') {
const bootstrapPostgres = await new Connection(
getBootstrapDBOptions('postgresdb'),
).initialize();
await bootstrapPostgres.query(`CREATE DATABASE ${testDbName}`);
await bootstrapPostgres.destroy();
globalConfig.database.postgresdb.database = testDbName;
} else if (dbType === 'mysqldb' || dbType === 'mariadb') {
const bootstrapMysql = await new Connection(getBootstrapDBOptions('mysqldb')).initialize();
await bootstrapMysql.query(`CREATE DATABASE ${testDbName} DEFAULT CHARACTER SET utf8mb4`);
await bootstrapMysql.destroy();
globalConfig.database.mysqldb.database = testDbName;
}
const dbConnection = Container.get(DbConnection);
await dbConnection.init();
await dbConnection.migrate();
}
export function isReady() {
const { connectionState } = Container.get(DbConnection);
return connectionState.connected && connectionState.migrated;
}
/**
 * Drop the test DB, closing the bootstrap connection if one exists.
*/
export async function terminate() {
const dbConnection = Container.get(DbConnection);
await dbConnection.close();
dbConnection.connectionState.connected = false;
}
type EntityName =
| keyof typeof entities
| 'InsightsRaw'
| 'InsightsByPeriod'
| 'InsightsMetadata'
| 'DataStore'
| 'DataStoreColumn';
/**
* Truncate specific DB tables in a test DB.
*/
export async function truncate(entities: EntityName[]) {
const connection = Container.get(Connection);
for (const name of entities) {
await connection.getRepository(name).delete({});
}
}
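`init()` above names each suite's database with a shared prefix, a random suffix, and a timestamp; the prefix is what lets cleanup tooling identify stray test databases later. A sketch of that naming, with a simple stand-in for `randomString` from n8n-workflow (the assumption being that it returns an alphanumeric string whose length falls between the two bounds):

```typescript
// Stand-in for randomString(min, max) from n8n-workflow.
const randomString = (min: number, max: number): string => {
  const length = min + Math.floor(Math.random() * (max - min + 1));
  let out = '';
  while (out.length < length) out += Math.random().toString(36).slice(2);
  return out.slice(0, length);
};

const testDbPrefix = 'n8n_test_';
// e.g. "n8n_test_k3f9ab_1725760000000"
const testDbName = `${testDbPrefix}${randomString(6, 10).toLowerCase()}_${Date.now()}`;
```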


@@ -0,0 +1,7 @@
import { ModuleRegistry } from '@n8n/backend-common';
import type { ModuleName } from '@n8n/backend-common';
import { Container } from '@n8n/di';
export async function loadModules(moduleNames: ModuleName[]) {
await Container.get(ModuleRegistry).loadModules(moduleNames);
}


@@ -0,0 +1,13 @@
#!/usr/bin/env node
// Check if version should be displayed
const versionFlags = ['-v', '-V', '--version'];
if (versionFlags.includes(process.argv.slice(-1)[0])) {
console.log(require('../package').version);
process.exit(0);
}
(async () => {
const oclif = require('@oclif/core');
await oclif.execute({ dir: __dirname });
})();


@@ -0,0 +1,60 @@
# This file is maintained automatically by "terraform init".
# Manual edits may be lost in future updates.
provider "registry.terraform.io/hashicorp/azurerm" {
version = "3.115.0"
constraints = "~> 3.115.0"
hashes = [
"h1:O7C3Xb+MSOc9C/eAJ5C/CiJ4vuvUsYxxIzr9ZurmHNI=",
"zh:0ea93abd53cb872691bad6d5625bda88b5d9619ea813c208b36e0ee236308589",
"zh:26703cb9c2c38bc43e97bc83af03559d065750856ea85834b71fbcb2ef9d935c",
"zh:316255a3391c49fe9bd7c5b6aa53b56dd490e1083d19b722e7b8f956a2dfe004",
"zh:431637ae90c592126fb1ec813fee6390604275438a0d5e15904c65b0a6a0f826",
"zh:4cee0fa2e84f89853723c0bc72b7debf8ea2ffffc7ae34ff28d8a69269d3a879",
"zh:64a3a3c78ea877515365ed336bd0f3abbe71db7c99b3d2837915fbca168d429c",
"zh:7380d7b503b5a87fd71a31360c3eeab504f78e4f314824e3ceda724d9dc74cf0",
"zh:974213e05708037a6d2d8c58cc84981819138f44fe40e344034eb80e16ca6012",
"zh:9a91614de0476074e9c62bbf08d3bb9c64adbd1d3a4a2b5a3e8e41d9d6d5672f",
"zh:a438471c85b8788ab21bdef4cd5ca391a46cbae33bd0262668a80f5e6c4610e1",
"zh:bf823f2c941b336a1208f015466212b1a8fdf6da28abacf59bea708377709d9e",
"zh:f569b65999264a9416862bca5cd2a6177d94ccb0424f3a4ef424428912b9cb3c",
]
}
provider "registry.terraform.io/hashicorp/random" {
version = "3.6.2"
hashes = [
"h1:VavG5unYCa3SYISMKF9pzc3718M0bhPlcbUZZGl7wuo=",
"zh:0ef01a4f81147b32c1bea3429974d4d104bbc4be2ba3cfa667031a8183ef88ec",
"zh:1bcd2d8161e89e39886119965ef0f37fcce2da9c1aca34263dd3002ba05fcb53",
"zh:37c75d15e9514556a5f4ed02e1548aaa95c0ecd6ff9af1119ac905144c70c114",
"zh:4210550a767226976bc7e57d988b9ce48f4411fa8a60cd74a6b246baf7589dad",
"zh:562007382520cd4baa7320f35e1370ffe84e46ed4e2071fdc7e4b1a9b1f8ae9b",
"zh:5efb9da90f665e43f22c2e13e0ce48e86cae2d960aaf1abf721b497f32025916",
"zh:6f71257a6b1218d02a573fc9bff0657410404fb2ef23bc66ae8cd968f98d5ff6",
"zh:78d5eefdd9e494defcb3c68d282b8f96630502cac21d1ea161f53cfe9bb483b3",
"zh:9647e18f221380a85f2f0ab387c68fdafd58af6193a932417299cdcae4710150",
"zh:bb6297ce412c3c2fa9fec726114e5e0508dd2638cad6a0cb433194930c97a544",
"zh:f83e925ed73ff8a5ef6e3608ad9225baa5376446349572c2449c0c0b3cf184b7",
"zh:fbef0781cb64de76b1df1ca11078aecba7800d82fd4a956302734999cfd9a4af",
]
}
provider "registry.terraform.io/hashicorp/tls" {
version = "4.0.5"
hashes = [
"h1:zeG5RmggBZW/8JWIVrdaeSJa0OG62uFX5HY1eE8SjzY=",
"zh:01cfb11cb74654c003f6d4e32bbef8f5969ee2856394a96d127da4949c65153e",
"zh:0472ea1574026aa1e8ca82bb6df2c40cd0478e9336b7a8a64e652119a2fa4f32",
"zh:1a8ddba2b1550c5d02003ea5d6cdda2eef6870ece86c5619f33edd699c9dc14b",
"zh:1e3bb505c000adb12cdf60af5b08f0ed68bc3955b0d4d4a126db5ca4d429eb4a",
"zh:6636401b2463c25e03e68a6b786acf91a311c78444b1dc4f97c539f9f78de22a",
"zh:76858f9d8b460e7b2a338c477671d07286b0d287fd2d2e3214030ae8f61dd56e",
"zh:a13b69fb43cb8746793b3069c4d897bb18f454290b496f19d03c3387d1c9a2dc",
"zh:a90ca81bb9bb509063b736842250ecff0f886a91baae8de65c8430168001dad9",
"zh:c4de401395936e41234f1956ebadbd2ed9f414e6908f27d578614aaa529870d4",
"zh:c657e121af8fde19964482997f0de2d5173217274f6997e16389e7707ed8ece8",
"zh:d68b07a67fbd604c38ec9733069fbf23441436fecf554de6c75c032f82e1ef19",
"zh:f569b65999264a9416862bca5cd2a6177d94ccb0424f3a4ef424428912b9cb3c",
]
}


@@ -0,0 +1,54 @@
data "azurerm_resource_group" "main" {
name = var.resource_group_name
}
# Random prefix for the resources
resource "random_string" "prefix" {
length = 8
special = false
}
# SSH key pair
resource "tls_private_key" "ssh_key" {
algorithm = "RSA"
rsa_bits = 4096
}
# Dedicated Host Group & Hosts
resource "azurerm_dedicated_host_group" "main" {
name = "${random_string.prefix.result}-hostgroup"
location = var.location
resource_group_name = data.azurerm_resource_group.main.name
platform_fault_domain_count = 1
automatic_placement_enabled = false
zone = 1
tags = local.common_tags
}
resource "azurerm_dedicated_host" "hosts" {
name = "${random_string.prefix.result}-host"
location = var.location
dedicated_host_group_id = azurerm_dedicated_host_group.main.id
sku_name = var.host_size_family
platform_fault_domain = 0
tags = local.common_tags
}
# VM
module "test_vm" {
source = "./modules/benchmark-vm"
location = var.location
resource_group_name = data.azurerm_resource_group.main.name
prefix = random_string.prefix.result
dedicated_host_id = azurerm_dedicated_host.hosts.id
ssh_public_key = tls_private_key.ssh_key.public_key_openssh
vm_size = var.vm_size
tags = local.common_tags
}


@@ -0,0 +1,16 @@
output "vm_name" {
value = module.test_vm.vm_name
}
output "ip" {
value = module.test_vm.ip
}
output "ssh_username" {
value = module.test_vm.ssh_username
}
output "ssh_private_key" {
value = tls_private_key.ssh_key.private_key_pem
sensitive = true
}


@@ -0,0 +1,23 @@
terraform {
required_providers {
azurerm = {
source = "hashicorp/azurerm"
version = "~> 3.115.0"
}
random = {
source = "hashicorp/random"
}
}
required_version = "~> 1.8.5"
}
provider "azurerm" {
features {}
skip_provider_registration = true
}
provider "random" {}


@@ -0,0 +1,34 @@
variable "location" {
description = "Region to deploy resources"
default = "East US"
}
variable "resource_group_name" {
description = "Name of the resource group"
default = "n8n-benchmarking"
}
variable "host_size_family" {
description = "Size Family for the Host Group"
default = "DCSv2-Type1"
}
variable "vm_size" {
description = "VM Size"
# 8 vCPUs, 32 GiB memory
default = "Standard_DC8_v2"
}
variable "number_of_vms" {
description = "Number of VMs to create"
default = 1
}
locals {
common_tags = {
Id = "N8nBenchmark"
Terraform = "true"
Owner = "Catalysts"
CreatedAt = timestamp()
}
}


@@ -0,0 +1,48 @@
{
"definitions": {
"ScenarioData": {
"type": "object",
"properties": {
"workflowFiles": {
"type": "array",
"items": {
"type": "string"
}
},
"credentialFiles": {
"type": "array",
"items": {
"type": "string"
}
}
},
"required": [],
"additionalProperties": false
}
},
"type": "object",
"properties": {
"$schema": {
"type": "string",
"description": "The JSON schema to validate this file"
},
"name": {
"type": "string",
"description": "The name of the scenario"
},
"description": {
"type": "string",
"description": "A longer description of the scenario"
},
"scriptPath": {
"type": "string",
"description": "Relative path to the k6 test script"
},
"scenarioData": {
"$ref": "#/definitions/ScenarioData",
"description": "Data to import before running the scenario"
}
},
"required": ["name", "description", "scriptPath", "scenarioData"],
"additionalProperties": false
}
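A scenario file satisfying this schema could look like the following (the file names are hypothetical):

```json
{
	"$schema": "./scenario.schema.json",
	"name": "single-webhook",
	"description": "Fires a single webhook-triggered workflow under load",
	"scriptPath": "./single-webhook.script.js",
	"scenarioData": {
		"workflowFiles": ["./single-webhook.workflow.json"]
	}
}
```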


@@ -0,0 +1,63 @@
#!/bin/bash
#
# Script to initialize the benchmark environment on a VM
#
set -euo pipefail;
CURRENT_USER=$(whoami)
# Mount the data disk
# First wait for the disk to become available
WAIT_TIME=0
MAX_WAIT_TIME=60
while [ ! -e /dev/sdc ]; do
if [ $WAIT_TIME -ge $MAX_WAIT_TIME ]; then
echo "Error: /dev/sdc did not become available within $MAX_WAIT_TIME seconds."
exit 1
fi
echo "Waiting for /dev/sdc to be available... ($WAIT_TIME/$MAX_WAIT_TIME)"
sleep 1
WAIT_TIME=$((WAIT_TIME + 1))
done
# Then mount it
if [ -d "/n8n" ]; then
echo "Data disk already mounted. Clearing it..."
sudo rm -rf /n8n/*
sudo rm -rf /n8n/.[!.]*
else
sudo mkdir -p /n8n
sudo parted /dev/sdc --script mklabel gpt mkpart xfspart xfs 0% 100%
sudo mkfs.xfs /dev/sdc1
sudo partprobe /dev/sdc1
sudo mount /dev/sdc1 /n8n
sudo chown -R "$CURRENT_USER":"$CURRENT_USER" /n8n
fi
### Remove unneeded dependencies
# TTY
sudo systemctl disable getty@tty1.service
sudo systemctl disable serial-getty@ttyS0.service
# Snap
sudo systemctl disable snapd.service
# Unattended upgrades
sudo systemctl disable unattended-upgrades.service
# Cron
sudo systemctl disable cron.service
# Include nodejs v20 repository
curl -fsSL https://deb.nodesource.com/setup_20.x -o nodesource_setup.sh
sudo -E bash nodesource_setup.sh
# Install docker, docker compose and nodejs
sudo DEBIAN_FRONTEND=noninteractive apt-get update -yq
sudo DEBIAN_FRONTEND=noninteractive apt-get install -yq docker.io docker-compose nodejs
# Add the current user to the docker group
sudo usermod -aG docker "$CURRENT_USER"
# Install zx
npm install zx


@@ -0,0 +1,86 @@
#!/usr/bin/env zx
/**
* Script that deletes all resources created by the benchmark environment.
*
 * This script tries to delete resources created by Terraform. If the Terraform
 * state file is not found, it falls back to deleting resources using the Azure CLI.
 * The Terraform state is not persisted, so we need to support both cases.
*/
// @ts-check
import { $, minimist } from 'zx';
import { TerraformClient } from './clients/terraform-client.mjs';
const RESOURCE_GROUP_NAME = 'n8n-benchmarking';
const args = minimist(process.argv.slice(3), {
boolean: ['debug'],
});
const isVerbose = !!args.debug;
async function main() {
const terraformClient = new TerraformClient({ isVerbose });
if (terraformClient.hasTerraformState()) {
await terraformClient.destroyEnvironment();
} else {
await destroyUsingAz();
}
}
async function destroyUsingAz() {
const resourcesResult =
await $`az resource list --resource-group ${RESOURCE_GROUP_NAME} --query "[?tags.Id == 'N8nBenchmark'].{id:id, createdAt:tags.CreatedAt}" -o json`;
const resources = JSON.parse(resourcesResult.stdout);
const resourcesToDelete = resources.map((resource) => resource.id);
if (resourcesToDelete.length === 0) {
console.log('No resources found in the resource group.');
return;
}
await deleteResources(resourcesToDelete);
}
async function deleteResources(resourceIds) {
	// We don't know the order in which resources should be deleted.
	// Here's a poor person's approach: retry deletions until they all complete
const MAX_ITERATIONS = 100;
let i = 0;
const toDelete = [...resourceIds];
console.log(`Deleting ${resourceIds.length} resources...`);
while (toDelete.length > 0) {
const resourceId = toDelete.shift();
const deleted = await deleteById(resourceId);
if (!deleted) {
toDelete.push(resourceId);
}
if (i++ > MAX_ITERATIONS) {
console.log(
`Max iterations reached. Exiting. Could not delete ${toDelete.length} resources.`,
);
process.exit(1);
}
}
}
async function deleteById(id) {
try {
await $`az resource delete --ids ${id}`;
return true;
} catch (error) {
return false;
}
}
main().catch((error) => {
console.error('An error occurred destroying cloud env:');
console.error(error);
process.exit(1);
});
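The deletion loop above can be isolated as a generic retry queue: a deletion that fails (for example because a dependent resource still exists) goes back to the end of the queue, and the loop keeps going until everything succeeds or an iteration cap is hit. A sketch of that logic:

```typescript
// Retry deletions until the queue drains or maxIterations is exceeded.
// Returns the ids that could not be deleted (empty array on success).
async function deleteWithRetry(
  ids: string[],
  deleteById: (id: string) => Promise<boolean>,
  maxIterations = 100,
): Promise<string[]> {
  const toDelete = [...ids];
  let iterations = 0;
  while (toDelete.length > 0) {
    const id = toDelete.shift()!;
    const deleted = await deleteById(id);
    if (!deleted) toDelete.push(id); // failed: re-queue and try later
    if (iterations++ > maxIterations) return toDelete; // give up, report leftovers
  }
  return [];
}
```

This mirrors the trade-off in the script: no dependency graph is computed, so worst-case work is quadratic, but for a handful of Azure resources that is simpler than ordering deletions explicitly.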


@@ -0,0 +1,36 @@
#!/usr/bin/env zx
/**
* Provisions the cloud benchmark environment
*
* NOTE: Must be run in the root of the package.
*/
// @ts-check
import { which, minimist } from 'zx';
import { TerraformClient } from './clients/terraform-client.mjs';
const args = minimist(process.argv.slice(3), {
boolean: ['debug'],
});
const isVerbose = !!args.debug;
export async function provision() {
await ensureDependencies();
const terraformClient = new TerraformClient({
isVerbose,
});
await terraformClient.provisionEnvironment();
}
async function ensureDependencies() {
await which('terraform');
}
provision().catch((error) => {
console.error('An error occurred while provisioning cloud env:');
console.error(error);
process.exit(1);
});


@@ -0,0 +1,186 @@
#!/usr/bin/env zx
/**
* Script to run benchmarks either on the cloud benchmark environment or locally.
* The cloud environment needs to be provisioned using Terraform before running the benchmarks.
*
* NOTE: Must be run in the root of the package.
*/
// @ts-check
import fs from 'fs';
import minimist from 'minimist';
import path from 'path';
import { runInCloud } from './run-in-cloud.mjs';
import { runLocally } from './run-locally.mjs';
const paths = {
n8nSetupsDir: path.join(path.resolve('scripts'), 'n8n-setups'),
};
async function main() {
const config = await parseAndValidateConfig();
const n8nSetupsToUse =
config.n8nSetupToUse === 'all' ? readAvailableN8nSetups() : [config.n8nSetupToUse];
console.log('Using n8n tag', config.n8nTag);
console.log('Using benchmark cli tag', config.benchmarkTag);
console.log('Using environment', config.env);
console.log('Using n8n setups', n8nSetupsToUse.join(', '));
console.log('');
if (config.env === 'cloud') {
await runInCloud({
benchmarkTag: config.benchmarkTag,
isVerbose: config.isVerbose,
k6ApiToken: config.k6ApiToken,
resultWebhookUrl: config.resultWebhookUrl,
resultWebhookAuthHeader: config.resultWebhookAuthHeader,
n8nLicenseCert: config.n8nLicenseCert,
n8nTag: config.n8nTag,
n8nSetupsToUse,
vus: config.vus,
duration: config.duration,
});
} else if (config.env === 'local') {
await runLocally({
benchmarkTag: config.benchmarkTag,
isVerbose: config.isVerbose,
k6ApiToken: config.k6ApiToken,
resultWebhookUrl: config.resultWebhookUrl,
resultWebhookAuthHeader: config.resultWebhookAuthHeader,
n8nLicenseCert: config.n8nLicenseCert,
n8nTag: config.n8nTag,
runDir: config.runDir,
n8nSetupsToUse,
vus: config.vus,
duration: config.duration,
});
} else {
console.error('Invalid env:', config.env);
printUsage();
process.exit(1);
}
}
function readAvailableN8nSetups() {
const setups = fs.readdirSync(paths.n8nSetupsDir);
return setups;
}
/**
* @typedef {Object} Config
* @property {boolean} isVerbose
* @property {'cloud' | 'local'} env
* @property {string} n8nSetupToUse
* @property {string} n8nTag
* @property {string} benchmarkTag
* @property {string} [k6ApiToken]
* @property {string} [resultWebhookUrl]
* @property {string} [resultWebhookAuthHeader]
* @property {string} [n8nLicenseCert]
* @property {string} [runDir]
* @property {string} [vus]
* @property {string} [duration]
*
* @returns {Promise<Config>}
*/
async function parseAndValidateConfig() {
const args = minimist(process.argv.slice(3), {
boolean: ['debug', 'help'],
});
if (args.help) {
printUsage();
process.exit(0);
}
const n8nSetupToUse = await getAndValidateN8nSetup(args);
const isVerbose = args.debug || false;
const n8nTag = args.n8nTag || process.env.N8N_DOCKER_TAG || 'latest';
const benchmarkTag = args.benchmarkTag || process.env.BENCHMARK_DOCKER_TAG || 'latest';
const k6ApiToken = args.k6ApiToken || process.env.K6_API_TOKEN || undefined;
const resultWebhookUrl =
args.resultWebhookUrl || process.env.BENCHMARK_RESULT_WEBHOOK_URL || undefined;
const resultWebhookAuthHeader =
args.resultWebhookAuthHeader || process.env.BENCHMARK_RESULT_WEBHOOK_AUTH_HEADER || undefined;
const n8nLicenseCert = args.n8nLicenseCert || process.env.N8N_LICENSE_CERT || undefined;
const runDir = args.runDir || undefined;
const env = args.env || 'local';
const vus = args.vus;
const duration = args.duration;
if (!env) {
printUsage();
process.exit(1);
}
return {
isVerbose,
env,
n8nSetupToUse,
n8nTag,
benchmarkTag,
k6ApiToken,
resultWebhookUrl,
resultWebhookAuthHeader,
n8nLicenseCert,
runDir,
vus,
duration,
};
}
/**
* @param {minimist.ParsedArgs} args
*/
async function getAndValidateN8nSetup(args) {
// Last parameter is the n8n setup to use
const n8nSetupToUse = args._[args._.length - 1];
if (!n8nSetupToUse || n8nSetupToUse === 'all') {
return 'all';
}
const availableSetups = readAvailableN8nSetups();
if (!availableSetups.includes(n8nSetupToUse)) {
printUsage();
process.exit(1);
}
return n8nSetupToUse;
}
function printUsage() {
const availableSetups = readAvailableN8nSetups();
console.log(`Usage: zx scripts/${path.basename(__filename)} [n8n setup name]`);
console.log(` eg: zx scripts/${path.basename(__filename)}`);
console.log('');
console.log('Options:');
console.log(
` [n8n setup name] Against which n8n setup to run the benchmarks. One of: ${['all', ...availableSetups].join(', ')}. Default is all`,
);
console.log(
' --env Env where to run the benchmarks. Either cloud or local. Default is local.',
);
console.log(' --debug Enable verbose output');
console.log(' --n8nTag Docker tag for n8n image. Default is latest');
console.log(' --benchmarkTag Docker tag for benchmark cli image. Default is latest');
console.log(' --vus How many concurrent requests to make');
console.log(' --duration Test duration, e.g. 1m or 30s');
console.log(
' --k6ApiToken API token for k6 cloud. Default is read from K6_API_TOKEN env var. If omitted, k6 cloud will not be used',
);
console.log(
' --runDir Directory to share with the n8n container for storing data. Needed only for local runs.',
);
console.log('');
}
main().catch((error) => {
console.error('An error occurred while running the benchmarks:');
console.error(error);
process.exit(1);
});


@@ -0,0 +1,158 @@
#!/usr/bin/env zx
/**
* This script runs the benchmarks for the given n8n setup.
*/
// @ts-check
import path from 'path';
import { $, argv, fs } from 'zx';
import { DockerComposeClient } from './clients/docker-compose-client.mjs';
import { flagsObjectToCliArgs } from './utils/flags.mjs';
const paths = {
n8nSetupsDir: path.join(__dirname, 'n8n-setups'),
mockApiDataPath: path.join(__dirname, 'mock-api'),
};
const N8N_ENCRYPTION_KEY = 'very-secret-encryption-key';
async function main() {
const [n8nSetupToUse] = argv._;
validateN8nSetup(n8nSetupToUse);
const composeFilePath = path.join(paths.n8nSetupsDir, n8nSetupToUse);
const setupScriptPath = path.join(paths.n8nSetupsDir, n8nSetupToUse, 'setup.mjs');
const n8nTag = argv.n8nDockerTag || process.env.N8N_DOCKER_TAG || 'latest';
const benchmarkTag = argv.benchmarkDockerTag || process.env.BENCHMARK_DOCKER_TAG || 'latest';
const k6ApiToken = argv.k6ApiToken || process.env.K6_API_TOKEN || undefined;
const resultWebhookUrl =
argv.resultWebhookUrl || process.env.BENCHMARK_RESULT_WEBHOOK_URL || undefined;
const resultWebhookAuthHeader =
argv.resultWebhookAuthHeader || process.env.BENCHMARK_RESULT_WEBHOOK_AUTH_HEADER || undefined;
const baseRunDir = argv.runDir || process.env.RUN_DIR || '/n8n';
const n8nLicenseCert = argv.n8nLicenseCert || process.env.N8N_LICENSE_CERT || undefined;
const n8nLicenseActivationKey = process.env.N8N_LICENSE_ACTIVATION_KEY || undefined;
const n8nLicenseTenantId = argv.n8nLicenseTenantId || process.env.N8N_LICENSE_TENANT_ID || '1';
const envTag = argv.env || 'local';
const vus = argv.vus;
const duration = argv.duration;
const hasN8nLicense = !!n8nLicenseCert || !!n8nLicenseActivationKey;
if (n8nSetupToUse === 'scaling-multi-main' && !hasN8nLicense) {
console.error(
'n8n license is required to run the multi-main scaling setup. Please provide N8N_LICENSE_CERT or N8N_LICENSE_ACTIVATION_KEY (and N8N_LICENSE_TENANT_ID if needed)',
);
process.exit(1);
}
if (!fs.existsSync(baseRunDir)) {
console.error(
`The run directory "${baseRunDir}" does not exist. Please specify a valid directory using --runDir`,
);
process.exit(1);
}
const runDir = path.join(baseRunDir, n8nSetupToUse);
fs.emptyDirSync(runDir);
const dockerComposeClient = new DockerComposeClient({
$: $({
cwd: composeFilePath,
verbose: true,
env: {
PATH: process.env.PATH,
N8N_VERSION: n8nTag,
N8N_LICENSE_CERT: n8nLicenseCert,
N8N_LICENSE_ACTIVATION_KEY: n8nLicenseActivationKey,
N8N_LICENSE_TENANT_ID: n8nLicenseTenantId,
N8N_ENCRYPTION_KEY,
BENCHMARK_VERSION: benchmarkTag,
K6_API_TOKEN: k6ApiToken,
BENCHMARK_RESULT_WEBHOOK_URL: resultWebhookUrl,
BENCHMARK_RESULT_WEBHOOK_AUTH_HEADER: resultWebhookAuthHeader,
RUN_DIR: runDir,
MOCK_API_DATA_PATH: paths.mockApiDataPath,
},
}),
});
// Run the setup script if it exists
if (fs.existsSync(setupScriptPath)) {
const setupScript = await import(setupScriptPath);
await setupScript.setup({ runDir });
}
try {
await dockerComposeClient.$('up', '-d', '--remove-orphans', 'n8n');
const tags = Object.entries({
Env: envTag,
N8nVersion: n8nTag,
N8nSetup: n8nSetupToUse,
})
.map(([key, value]) => `${key}=${value}`)
.join(',');
const cliArgs = flagsObjectToCliArgs({
scenarioNamePrefix: n8nSetupToUse,
vus,
duration,
tags,
});
await dockerComposeClient.$('run', 'benchmark', 'run', ...cliArgs);
} catch (error) {
console.error('An error occurred while running the benchmarks:');
console.error(error.message);
console.error('');
await printContainerStatus(dockerComposeClient);
} finally {
await dumpLogs(dockerComposeClient);
await dockerComposeClient.$('down');
}
}
async function printContainerStatus(dockerComposeClient) {
console.error('Container statuses:');
await dockerComposeClient.$('ps', '-a');
}
async function dumpLogs(dockerComposeClient) {
console.info('Container logs:');
await dockerComposeClient.$('logs');
}
function printUsage() {
const availableSetups = getAllN8nSetups();
console.log('Usage: zx runForN8nSetup.mjs --runDir /path/for/n8n/data <n8n setup to use>');
console.log(` eg: zx runForN8nSetup.mjs --runDir /path/for/n8n/data ${availableSetups[0]}`);
console.log('');
console.log('Flags:');
console.log(
' --runDir <path> Directory to share with the n8n container for storing data. Default is /n8n',
);
console.log(' --n8nDockerTag <tag> Docker tag for n8n image. Default is latest');
console.log(
' --benchmarkDockerTag <tag> Docker tag for benchmark cli image. Default is latest',
);
console.log(' --k6ApiToken <token> K6 API token to upload the results');
console.log('');
console.log('Available setups:');
console.log(availableSetups.join(', '));
}
/**
* @returns {string[]}
*/
function getAllN8nSetups() {
return fs.readdirSync(paths.n8nSetupsDir);
}
function validateN8nSetup(givenSetup) {
const availableSetups = getAllN8nSetups();
if (!availableSetups.includes(givenSetup)) {
printUsage();
process.exit(1);
}
}
main();
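`flagsObjectToCliArgs` is imported from `./utils/flags.mjs`, which is not part of this diff. Its assumed contract, judging from the call sites above, is to turn an options object into `--key value` pairs while dropping entries whose value is `undefined`. A sketch of that contract:

```typescript
// Hypothetical reimplementation of flagsObjectToCliArgs: the real helper
// in ./utils/flags.mjs may differ (e.g. in boolean-flag handling).
function flagsObjectToCliArgs(
  flags: Record<string, string | number | undefined>,
): string[] {
  return Object.entries(flags)
    .filter(([, value]) => value !== undefined) // skip unset options
    .flatMap(([key, value]) => [`--${key}`, String(value)]);
}
```

Under this contract, `{ vus: 5, duration: undefined, env: 'cloud' }` becomes `['--vus', '5', '--env', 'cloud']`, which is the shape `dockerComposeClient.$('run', 'benchmark', 'run', ...cliArgs)` expects.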


@@ -0,0 +1,154 @@
#!/usr/bin/env zx
/**
* Script to run benchmarks on the cloud benchmark environment.
* This script will:
* 1. Provision a benchmark environment using Terraform.
* 2. Run the benchmarks on the VM.
* 3. Destroy the cloud environment.
*
* NOTE: Must be run in the root of the package.
*/
// @ts-check
import { sleep, which, $, tmpdir } from 'zx';
import path from 'path';
import { SshClient } from './clients/ssh-client.mjs';
import { TerraformClient } from './clients/terraform-client.mjs';
import { flagsObjectToCliArgs } from './utils/flags.mjs';
/**
* @typedef {Object} BenchmarkEnv
* @property {string} vmName
* @property {string} ip
* @property {string} sshUsername
* @property {string} sshPrivateKeyPath
*/
/**
* @typedef {Object} Config
* @property {boolean} isVerbose
* @property {string[]} n8nSetupsToUse
* @property {string} n8nTag
* @property {string} benchmarkTag
* @property {string} [k6ApiToken]
* @property {string} [resultWebhookUrl]
* @property {string} [resultWebhookAuthHeader]
* @property {string} [n8nLicenseCert]
* @property {string} [vus]
* @property {string} [duration]
*
* @param {Config} config
*/
export async function runInCloud(config) {
await ensureDependencies();
const terraformClient = new TerraformClient({
isVerbose: config.isVerbose,
});
const benchmarkEnv = await terraformClient.getTerraformOutputs();
await runBenchmarksOnVm(config, benchmarkEnv);
}
async function ensureDependencies() {
await which('terraform');
await which('az');
}
/**
* @param {Config} config
* @param {BenchmarkEnv} benchmarkEnv
*/
async function runBenchmarksOnVm(config, benchmarkEnv) {
console.log(`Setting up the environment...`);
const sshClient = new SshClient({
ip: benchmarkEnv.ip,
username: benchmarkEnv.sshUsername,
privateKeyPath: benchmarkEnv.sshPrivateKeyPath,
verbose: config.isVerbose,
});
await ensureVmIsReachable(sshClient);
const scriptsDir = await transferScriptsToVm(sshClient, config);
// Bootstrap the environment with dependencies
console.log('Running bootstrap script...');
const bootstrapScriptPath = path.join(scriptsDir, 'bootstrap.sh');
await sshClient.ssh(`chmod a+x ${bootstrapScriptPath} && ${bootstrapScriptPath}`);
// Give some time for the VM to be ready
await sleep(1000);
for (const n8nSetup of config.n8nSetupsToUse) {
await runBenchmarkForN8nSetup({
config,
sshClient,
scriptsDir,
n8nSetup,
});
}
}
/**
* @param {{ config: Config; sshClient: any; scriptsDir: string; n8nSetup: string; }} opts
*/
async function runBenchmarkForN8nSetup({ config, sshClient, scriptsDir, n8nSetup }) {
console.log(`Running benchmarks for ${n8nSetup}...`);
const runScriptPath = path.join(scriptsDir, 'run-for-n8n-setup.mjs');
const cliArgs = flagsObjectToCliArgs({
n8nDockerTag: config.n8nTag,
benchmarkDockerTag: config.benchmarkTag,
k6ApiToken: config.k6ApiToken,
resultWebhookUrl: config.resultWebhookUrl,
resultWebhookAuthHeader: config.resultWebhookAuthHeader,
n8nLicenseCert: config.n8nLicenseCert,
vus: config.vus,
duration: config.duration,
env: 'cloud',
});
const flagsString = cliArgs.join(' ');
await sshClient.ssh(`npx zx ${runScriptPath} ${flagsString} ${n8nSetup}`, {
// Test run should always log its output
verbose: true,
});
}
async function ensureVmIsReachable(sshClient) {
try {
await sshClient.ssh('echo "VM is reachable"');
} catch (error) {
console.error(`VM is not reachable: ${error.message}`);
console.error(
`Did you provision the cloud environment first with 'pnpm provision-cloud-env'? You can also run the benchmarks locally with 'pnpm run benchmark-locally'.`,
);
process.exit(1);
}
}
/**
* @returns Path where the scripts are located on the VM
*/
async function transferScriptsToVm(sshClient, config) {
const cwd = process.cwd();
const tarFilename = 'scripts.tar.gz';
const scriptsTarPath = path.join(tmpdir('n8n-benchmark'), tarFilename);
const $$ = $({ verbose: config.isVerbose });
// Compress the ./scripts folder relative to the package root so it
// extracts as ~/scripts on the VM
await $$`tar -czf ${scriptsTarPath} -C ${cwd} ./scripts`;
// Transfer the scripts to the VM
await sshClient.scp(scriptsTarPath, `~/${tarFilename}`);
// Extract the scripts on the VM
await sshClient.ssh(`tar -xzf ~/${tarFilename}`);
return '~/scripts';
}
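The `flagsObjectToCliArgs` helper imported at the top of this script is not part of this diff. Judging from the call sites above, it maps an object of flag names to `--name=value` CLI arguments and drops entries without a value, so optional flags such as `k6ApiToken` simply disappear from the command line. A minimal sketch of that assumed behavior (the real helper may differ):

```javascript
// Hypothetical sketch of the flagsObjectToCliArgs helper used above.
// Assumption: undefined/null values are skipped entirely rather than
// emitted as empty flags.
function flagsObjectToCliArgs(flags) {
	return Object.entries(flags)
		.filter(([, value]) => value !== undefined && value !== null)
		.map(([name, value]) => `--${name}=${value}`);
}
```

For example, `flagsObjectToCliArgs({ n8nDockerTag: 'latest', k6ApiToken: undefined })` would yield `['--n8nDockerTag=latest']`.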


@@ -0,0 +1,71 @@
#!/usr/bin/env zx
/**
 * Script to run benchmarks locally against the selected n8n setups,
 * invoking the run-for-n8n-setup script once per setup.
 *
 * NOTE: Must be run in the root of the package.
 */
// @ts-check
import { $ } from 'zx';
import path from 'path';
import { flagsObjectToCliArgs } from './utils/flags.mjs';
/**
* @typedef {Object} BenchmarkEnv
* @property {string} vmName
*/
const paths = {
scriptsDir: path.join(path.resolve('scripts')),
};
/**
* @typedef {Object} Config
* @property {boolean} isVerbose
* @property {string[]} n8nSetupsToUse
* @property {string} n8nTag
* @property {string} benchmarkTag
* @property {string} [runDir]
* @property {string} [k6ApiToken]
* @property {string} [resultWebhookUrl]
* @property {string} [resultWebhookAuthHeader]
* @property {string} [n8nLicenseCert]
* @property {string} [vus]
* @property {string} [duration]
*
* @param {Config} config
*/
export async function runLocally(config) {
const runScriptPath = path.join(paths.scriptsDir, 'run-for-n8n-setup.mjs');
const cliArgs = flagsObjectToCliArgs({
n8nDockerTag: config.n8nTag,
benchmarkDockerTag: config.benchmarkTag,
runDir: config.runDir,
vus: config.vus,
duration: config.duration,
env: 'local',
});
try {
for (const n8nSetup of config.n8nSetupsToUse) {
console.log(`Running benchmarks for n8n setup: ${n8nSetup}`);
await $({
env: {
...process.env,
K6_API_TOKEN: config.k6ApiToken,
BENCHMARK_RESULT_WEBHOOK_URL: config.resultWebhookUrl,
BENCHMARK_RESULT_WEBHOOK_AUTH_HEADER: config.resultWebhookAuthHeader,
N8N_LICENSE_CERT: config.n8nLicenseCert,
},
})`npx zx ${runScriptPath} ${cliArgs} ${n8nSetup}`;
}
} catch (error) {
console.error('An error occurred while running the benchmarks:');
console.error(error);
}
}


@@ -0,0 +1,144 @@
/* eslint-disable @typescript-eslint/no-explicit-any */
import axios from 'axios';
import type { AxiosRequestConfig, AxiosResponse } from 'axios';
import { Agent } from 'https';
import * as qs from 'querystring';
import type { ClientOAuth2TokenData } from './client-oauth2-token';
import { ClientOAuth2Token } from './client-oauth2-token';
import { CodeFlow } from './code-flow';
import { CredentialsFlow } from './credentials-flow';
import type { Headers, OAuth2AccessTokenErrorResponse } from './types';
import { getAuthError } from './utils';
export interface ClientOAuth2RequestObject {
url: string;
method: 'DELETE' | 'GET' | 'HEAD' | 'PATCH' | 'POST' | 'PUT';
body?: Record<string, any>;
query?: qs.ParsedUrlQuery;
headers?: Headers;
ignoreSSLIssues?: boolean;
}
export interface ClientOAuth2Options {
clientId: string;
clientSecret?: string;
accessTokenUri: string;
authentication?: 'header' | 'body';
authorizationUri?: string;
redirectUri?: string;
scopes?: string[];
scopesSeparator?: ',' | ' ';
authorizationGrants?: string[];
state?: string;
additionalBodyProperties?: Record<string, any>;
body?: Record<string, any>;
query?: qs.ParsedUrlQuery;
ignoreSSLIssues?: boolean;
}
export class ResponseError extends Error {
constructor(
readonly status: number,
readonly body: unknown,
readonly code = 'ESTATUS',
readonly message = `HTTP status ${status}`,
) {
super(message);
}
}
const sslIgnoringAgent = new Agent({ rejectUnauthorized: false });
/**
* Construct an object that can handle the multiple OAuth 2.0 flows.
*/
export class ClientOAuth2 {
code: CodeFlow;
credentials: CredentialsFlow;
constructor(readonly options: ClientOAuth2Options) {
this.code = new CodeFlow(this);
this.credentials = new CredentialsFlow(this);
}
/**
* Create a new token from existing data.
*/
createToken(data: ClientOAuth2TokenData, type?: string): ClientOAuth2Token {
return new ClientOAuth2Token(this, {
...data,
...(typeof type === 'string' ? { token_type: type } : type),
});
}
/**
* Request an access token from the OAuth2 server.
*
* @throws {ResponseError} If the response is an unexpected status code.
* @throws {AuthError} If the response is an authentication error.
*/
async accessTokenRequest(options: ClientOAuth2RequestObject): Promise<ClientOAuth2TokenData> {
let url = options.url;
const query = qs.stringify(options.query);
if (query) {
url += (url.indexOf('?') === -1 ? '?' : '&') + query;
}
const requestConfig: AxiosRequestConfig = {
url,
method: options.method,
data: qs.stringify(options.body),
headers: options.headers,
transformResponse: (res: unknown) => res,
// Axios rejects the promise by default for all status codes 4xx.
// We override this to reject promises only on 5xxs
validateStatus: (status) => status < 500,
};
if (options.ignoreSSLIssues) {
requestConfig.httpsAgent = sslIgnoringAgent;
}
const response = await axios.request(requestConfig);
if (response.status >= 400) {
const body = this.parseResponseBody<OAuth2AccessTokenErrorResponse>(response);
const authErr = getAuthError(body);
if (authErr) throw authErr;
else throw new ResponseError(response.status, response.data);
}
if (response.status >= 300) {
throw new ResponseError(response.status, response.data);
}
return this.parseResponseBody<ClientOAuth2TokenData>(response);
}
/**
* Attempt to parse response body based on the content type.
*/
private parseResponseBody<T extends object>(response: AxiosResponse<unknown>): T {
const contentType = (response.headers['content-type'] as string) ?? '';
const body = response.data as string;
if (contentType.startsWith('application/json')) {
return JSON.parse(body) as T;
}
if (contentType.startsWith('application/x-www-form-urlencoded')) {
return qs.parse(body) as T;
}
throw new ResponseError(
response.status,
body,
undefined,
`Unsupported content type: ${contentType}`,
);
}
}
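The URL-building step at the top of `accessTokenRequest` (append the serialized query with `?` or `&` depending on whether the URL already has a query string) can be illustrated in isolation:

```javascript
// Standalone version of the query-append logic in accessTokenRequest.
// `query` is an already-serialized string such as 'a=1&b=2'.
function appendQuery(url, query) {
	if (!query) return url;
	return url + (url.indexOf('?') === -1 ? '?' : '&') + query;
}
```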


@@ -0,0 +1,113 @@
import * as a from 'node:assert';
import type { ClientOAuth2, ClientOAuth2Options, ClientOAuth2RequestObject } from './client-oauth2';
import { DEFAULT_HEADERS } from './constants';
import { auth, expects, getRequestOptions } from './utils';
export interface ClientOAuth2TokenData extends Record<string, string | undefined> {
token_type?: string | undefined;
access_token: string;
refresh_token: string;
expires_in?: string;
scope?: string | undefined;
}
/**
* General purpose client token generator.
*/
export class ClientOAuth2Token {
readonly tokenType?: string;
readonly accessToken: string;
readonly refreshToken: string;
private expires: Date;
constructor(
readonly client: ClientOAuth2,
readonly data: ClientOAuth2TokenData,
) {
this.tokenType = data.token_type?.toLowerCase() ?? 'bearer';
this.accessToken = data.access_token;
this.refreshToken = data.refresh_token;
this.expires = new Date();
this.expires.setSeconds(this.expires.getSeconds() + Number(data.expires_in));
}
/**
* Sign a standardized request object with user authentication information.
*/
sign(requestObject: ClientOAuth2RequestObject): ClientOAuth2RequestObject {
if (!this.accessToken) {
throw new Error('Unable to sign without access token');
}
requestObject.headers = requestObject.headers ?? {};
if (this.tokenType === 'bearer') {
requestObject.headers.Authorization = 'Bearer ' + this.accessToken;
} else {
const parts = requestObject.url.split('#');
const token = 'access_token=' + this.accessToken;
const url = parts[0].replace(/[?&]access_token=[^&#]*/, '');
const fragment = parts[1] ? '#' + parts[1] : '';
// Prepend the correct query string parameter to the url.
requestObject.url = url + (url.indexOf('?') > -1 ? '&' : '?') + token + fragment;
// Attempt to avoid storing the url in proxies, since the access token
// is exposed in the query parameters.
requestObject.headers.Pragma = 'no-store';
requestObject.headers['Cache-Control'] = 'no-store';
}
return requestObject;
}
/**
* Refresh a user access token with the refresh token.
* As in RFC 6749 Section 6: https://www.rfc-editor.org/rfc/rfc6749.html#section-6
*/
async refresh(opts?: ClientOAuth2Options): Promise<ClientOAuth2Token> {
const options = { ...this.client.options, ...opts };
expects(options, 'clientSecret');
a.ok(this.refreshToken, 'refreshToken is required');
const { clientId, clientSecret } = options;
const headers = { ...DEFAULT_HEADERS };
const body: Record<string, string> = {
refresh_token: this.refreshToken,
grant_type: 'refresh_token',
};
if (options.authentication === 'body') {
body.client_id = clientId;
body.client_secret = clientSecret;
} else {
headers.Authorization = auth(clientId, clientSecret);
}
const requestOptions = getRequestOptions(
{
url: options.accessTokenUri,
method: 'POST',
headers,
body,
},
options,
);
const responseData = await this.client.accessTokenRequest(requestOptions);
return this.client.createToken({ ...this.data, ...responseData });
}
/**
* Check whether the token has expired.
*/
expired(): boolean {
return Date.now() > this.expires.getTime();
}
}
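The expiry bookkeeping in the constructor and `expired()` boils down to "now plus `expires_in` seconds, compared against the current time"; sketched without the class wrapper:

```javascript
// Mirrors ClientOAuth2Token's expiry handling: the expiry instant is
// computed once at construction time from the expires_in field
// (a string number of seconds, per the token response).
function computeExpiry(expiresInSeconds, now = new Date()) {
	const expires = new Date(now);
	expires.setSeconds(expires.getSeconds() + Number(expiresInSeconds));
	return expires;
}

// Mirrors ClientOAuth2Token.expired().
function isExpired(expires, nowMs = Date.now()) {
	return nowMs > expires.getTime();
}
```

Note that an `expires_in` of `undefined` makes `Number(...)` yield `NaN`, so such a token never reports as expired; the class above shares that behavior.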


@@ -0,0 +1,123 @@
import * as qs from 'querystring';
import type { ClientOAuth2, ClientOAuth2Options } from './client-oauth2';
import type { ClientOAuth2Token } from './client-oauth2-token';
import { DEFAULT_HEADERS, DEFAULT_URL_BASE } from './constants';
import { auth, expects, getAuthError, getRequestOptions } from './utils';
interface CodeFlowBody {
code: string | string[];
grant_type: 'authorization_code';
redirect_uri?: string;
client_id?: string;
}
/**
* Support authorization code OAuth 2.0 grant.
*
* Reference: http://tools.ietf.org/html/rfc6749#section-4.1
*/
export class CodeFlow {
constructor(private client: ClientOAuth2) {}
/**
* Generate the uri for doing the first redirect.
*/
getUri(opts?: Partial<ClientOAuth2Options>): string {
const options: ClientOAuth2Options = { ...this.client.options, ...opts };
// Check the required parameters are set.
expects(options, 'clientId', 'authorizationUri');
const url = new URL(options.authorizationUri);
const queryParams = {
...options.query,
client_id: options.clientId,
redirect_uri: options.redirectUri,
response_type: 'code',
state: options.state,
...(options.scopes ? { scope: options.scopes.join(options.scopesSeparator ?? ' ') } : {}),
};
for (const [key, value] of Object.entries(queryParams)) {
if (value !== null && value !== undefined) {
url.searchParams.append(key, value);
}
}
return url.toString();
}
/**
* Get the code token from the redirected uri and make another request for
* the user access token.
*/
async getToken(
urlString: string,
opts?: Partial<ClientOAuth2Options>,
): Promise<ClientOAuth2Token> {
const options: ClientOAuth2Options = { ...this.client.options, ...opts };
expects(options, 'clientId', 'accessTokenUri');
const url = new URL(urlString, DEFAULT_URL_BASE);
if (
typeof options.redirectUri === 'string' &&
typeof url.pathname === 'string' &&
url.pathname !== new URL(options.redirectUri, DEFAULT_URL_BASE).pathname
) {
throw new TypeError('Redirected path should match configured path, but got: ' + url.pathname);
}
if (!url.search?.substring(1)) {
throw new TypeError(`Unable to process uri: ${urlString}`);
}
const data =
typeof url.search === 'string' ? qs.parse(url.search.substring(1)) : url.search || {};
// eslint-disable-next-line @typescript-eslint/ban-ts-comment
// @ts-ignore
const error = getAuthError(data);
if (error) throw error;
if (options.state && data.state !== options.state) {
// eslint-disable-next-line @typescript-eslint/restrict-template-expressions
throw new TypeError(`Invalid state: ${data.state}`);
}
// Check whether the response code is set.
if (!data.code) {
throw new TypeError('Missing code, unable to request token');
}
const headers = { ...DEFAULT_HEADERS };
const body: CodeFlowBody = {
code: data.code,
grant_type: 'authorization_code',
redirect_uri: options.redirectUri,
};
// `client_id`: REQUIRED, if the client is not authenticating with the
// authorization server as described in Section 3.2.1.
// Reference: https://tools.ietf.org/html/rfc6749#section-3.2.1
if (options.clientSecret) {
headers.Authorization = auth(options.clientId, options.clientSecret);
} else {
body.client_id = options.clientId;
}
const requestOptions = getRequestOptions(
{
url: options.accessTokenUri,
method: 'POST',
headers,
body,
},
options,
);
const responseData = await this.client.accessTokenRequest(requestOptions);
return this.client.createToken(responseData);
}
}
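The parameter assembly in `getUri` above (merge configured query params with the OAuth parameters, skip null/undefined values, join scopes with the configured separator) can be sketched independently of the class:

```javascript
// Simplified version of CodeFlow.getUri's parameter assembly.
// Assumption: scopes are joined with ' ' when no separator is configured,
// matching the `scopesSeparator ?? ' '` default above.
function buildAuthorizationUri(options) {
	const url = new URL(options.authorizationUri);
	const queryParams = {
		client_id: options.clientId,
		redirect_uri: options.redirectUri,
		response_type: 'code',
		state: options.state,
		...(options.scopes ? { scope: options.scopes.join(options.scopesSeparator ?? ' ') } : {}),
	};
	for (const [key, value] of Object.entries(queryParams)) {
		// Skip unset params so e.g. a missing state does not become 'state=undefined'
		if (value !== null && value !== undefined) {
			url.searchParams.append(key, value);
		}
	}
	return url.toString();
}
```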


@@ -0,0 +1,62 @@
import type { Headers } from './types';
export const DEFAULT_URL_BASE = 'https://example.org/';
/**
* Default headers for executing OAuth 2.0 flows.
*/
export const DEFAULT_HEADERS: Headers = {
Accept: 'application/json, application/x-www-form-urlencoded',
'Content-Type': 'application/x-www-form-urlencoded',
};
/**
* Format error response types to regular strings for displaying to clients.
*
* Reference: http://tools.ietf.org/html/rfc6749#section-4.1.2.1
*/
export const ERROR_RESPONSES: Record<string, string> = {
invalid_request: [
'The request is missing a required parameter, includes an',
'invalid parameter value, includes a parameter more than',
'once, or is otherwise malformed.',
].join(' '),
invalid_client: [
'Client authentication failed (e.g., unknown client, no',
'client authentication included, or unsupported',
'authentication method).',
].join(' '),
invalid_grant: [
'The provided authorization grant (e.g., authorization',
'code, resource owner credentials) or refresh token is',
'invalid, expired, revoked, does not match the redirection',
'URI used in the authorization request, or was issued to',
'another client.',
].join(' '),
unauthorized_client: [
'The client is not authorized to request an authorization',
'code using this method.',
].join(' '),
unsupported_grant_type: [
'The authorization grant type is not supported by the',
'authorization server.',
].join(' '),
access_denied: ['The resource owner or authorization server denied the request.'].join(' '),
unsupported_response_type: [
'The authorization server does not support obtaining',
'an authorization code using this method.',
].join(' '),
invalid_scope: ['The requested scope is invalid, unknown, or malformed.'].join(' '),
server_error: [
'The authorization server encountered an unexpected',
'condition that prevented it from fulfilling the request.',
'(This error code is needed because a 500 Internal Server',
'Error HTTP status code cannot be returned to the client',
'via an HTTP redirect.)',
].join(' '),
temporarily_unavailable: [
'The authorization server is currently unable to handle',
'the request due to a temporary overloading or maintenance',
'of the server.',
].join(' '),
};


@@ -0,0 +1,62 @@
import type { ClientOAuth2 } from './client-oauth2';
import type { ClientOAuth2Token } from './client-oauth2-token';
import { DEFAULT_HEADERS } from './constants';
import type { Headers } from './types';
import { auth, expects, getRequestOptions } from './utils';
interface CredentialsFlowBody {
client_id?: string;
client_secret?: string;
grant_type: 'client_credentials';
scope?: string;
}
/**
* Support client credentials OAuth 2.0 grant.
*
* Reference: http://tools.ietf.org/html/rfc6749#section-4.4
*/
export class CredentialsFlow {
constructor(private client: ClientOAuth2) {}
/**
* Request an access token using the client credentials.
*/
async getToken(): Promise<ClientOAuth2Token> {
const options = { ...this.client.options };
expects(options, 'clientId', 'clientSecret', 'accessTokenUri');
const headers: Headers = { ...DEFAULT_HEADERS };
const body: CredentialsFlowBody = {
grant_type: 'client_credentials',
...(options.additionalBodyProperties ?? {}),
};
if (options.scopes !== undefined) {
body.scope = options.scopes.join(options.scopesSeparator ?? ' ');
}
const clientId = options.clientId;
const clientSecret = options.clientSecret;
if (options.authentication === 'body') {
body.client_id = clientId;
body.client_secret = clientSecret;
} else {
headers.Authorization = auth(clientId, clientSecret);
}
const requestOptions = getRequestOptions(
{
url: options.accessTokenUri,
method: 'POST',
headers,
body,
},
options,
);
const responseData = await this.client.accessTokenRequest(requestOptions);
return this.client.createToken(responseData);
}
}


@@ -0,0 +1,3 @@
export { ClientOAuth2 } from './client-oauth2';
export type { ClientOAuth2Options, ClientOAuth2RequestObject } from './client-oauth2';
export { ClientOAuth2Token } from './client-oauth2-token';
export type { ClientOAuth2TokenData } from './client-oauth2-token';
export type * from './types';

View File

@@ -0,0 +1,31 @@
export type Headers = Record<string, string | string[]>;
export type OAuth2GrantType = 'pkce' | 'authorizationCode' | 'clientCredentials';
export interface OAuth2CredentialData {
clientId: string;
clientSecret?: string;
accessTokenUrl: string;
authentication?: 'header' | 'body';
authUrl?: string;
scope?: string;
authQueryParameters?: string;
additionalBodyProperties?: string;
grantType: OAuth2GrantType;
ignoreSSLIssues?: boolean;
oauthTokenData?: {
access_token: string;
refresh_token?: string;
};
}
/**
* The response from the OAuth2 server when the access token is not successfully
* retrieved. As specified in RFC 6749 Section 5.2:
* https://www.rfc-editor.org/rfc/rfc6749.html#section-5.2
*/
export interface OAuth2AccessTokenErrorResponse extends Record<string, unknown> {
error: string;
error_description?: string;
error_uri?: string;
}


@@ -0,0 +1,82 @@
/* eslint-disable @typescript-eslint/no-explicit-any */
import type { ClientOAuth2Options, ClientOAuth2RequestObject } from './client-oauth2';
import { ERROR_RESPONSES } from './constants';
/**
* Check if properties exist on an object and throw when they aren't.
*/
export function expects<Keys extends keyof ClientOAuth2Options>(
obj: ClientOAuth2Options,
...keys: Keys[]
): asserts obj is ClientOAuth2Options & {
[K in Keys]: NonNullable<ClientOAuth2Options[K]>;
} {
for (const key of keys) {
if (obj[key] === null || obj[key] === undefined) {
throw new TypeError('Expected "' + key + '" to exist');
}
}
}
export class AuthError extends Error {
constructor(
message: string,
readonly body: any,
readonly code = 'EAUTH',
) {
super(message);
}
}
/**
* Pull an authentication error from the response data.
*/
export function getAuthError(body: {
error: string;
error_description?: string;
}): Error | undefined {
const message: string | undefined =
ERROR_RESPONSES[body.error] ?? body.error_description ?? body.error;
if (message) {
return new AuthError(message, body);
}
return undefined;
}
/**
 * Ensure a value is a string, treating null and undefined as an empty string.
 */
function toString(str: string | null | undefined) {
return str == null ? '' : String(str);
}
/**
* Create basic auth header.
*/
export function auth(username: string, password: string): string {
return 'Basic ' + Buffer.from(toString(username) + ':' + toString(password)).toString('base64');
}
/**
* Merge request options from an options object.
*/
export function getRequestOptions(
{ url, method, body, query, headers }: ClientOAuth2RequestObject,
options: ClientOAuth2Options,
): ClientOAuth2RequestObject {
const rOptions = {
url,
method,
body: { ...body, ...options.body },
query: { ...query, ...options.query },
headers: headers ?? {},
ignoreSSLIssues: options.ignoreSSLIssues,
};
// if request authorization was overridden delete it from header
if (rOptions.headers.Authorization === '') {
delete rOptions.headers.Authorization;
}
return rOptions;
}
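The `auth` helper above produces a standard HTTP Basic header (RFC 7617): `Basic` followed by the base64 encoding of `clientId:clientSecret`. A self-contained version with the fixture credentials from the tests below:

```javascript
// Same construction as utils.auth: base64 over "username:password".
function basicAuth(username, password) {
	return 'Basic ' + Buffer.from(`${username}:${password}`).toString('base64');
}
```

With the test fixtures' `clientId`/`clientSecret`, `basicAuth('abc', '123')` evaluates to `'Basic YWJjOjEyMw=='`, which is the `Authorization` value the nock interceptors match against.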


@@ -0,0 +1,168 @@
import axios from 'axios';
import nock from 'nock';
import { ClientOAuth2, ResponseError } from '@/client-oauth2';
import { ERROR_RESPONSES } from '@/constants';
import { auth, AuthError } from '@/utils';
import * as config from './config';
describe('ClientOAuth2', () => {
const client = new ClientOAuth2({
clientId: config.clientId,
clientSecret: config.clientSecret,
accessTokenUri: config.accessTokenUri,
authentication: 'header',
});
beforeAll(async () => {
nock.disableNetConnect();
});
afterAll(() => {
nock.restore();
});
describe('accessTokenRequest', () => {
const authHeader = auth(config.clientId, config.clientSecret);
const makeTokenCall = async () =>
await client.accessTokenRequest({
url: config.accessTokenUri,
method: 'POST',
headers: {
Authorization: authHeader,
Accept: 'application/json',
'Content-Type': 'application/x-www-form-urlencoded',
},
body: {
refresh_token: 'test',
grant_type: 'refresh_token',
},
});
const mockTokenResponse = ({
status = 200,
headers,
body,
}: {
status: number;
body: string;
headers: Record<string, string>;
}) =>
nock(config.baseUrl).post('/login/oauth/access_token').once().reply(status, body, headers);
it('should send the correct request based on given options', async () => {
mockTokenResponse({
status: 200,
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({
access_token: config.accessToken,
refresh_token: config.refreshToken,
}),
});
const axiosSpy = jest.spyOn(axios, 'request');
await makeTokenCall();
expect(axiosSpy).toHaveBeenCalledWith(
expect.objectContaining({
url: config.accessTokenUri,
method: 'POST',
data: 'refresh_token=test&grant_type=refresh_token',
headers: {
Authorization: authHeader,
Accept: 'application/json',
'Content-Type': 'application/x-www-form-urlencoded',
},
}),
);
});
test.each([
{
contentType: 'application/json',
body: JSON.stringify({
access_token: config.accessToken,
refresh_token: config.refreshToken,
}),
},
{
contentType: 'application/json; charset=utf-8',
body: JSON.stringify({
access_token: config.accessToken,
refresh_token: config.refreshToken,
}),
},
{
contentType: 'application/x-www-form-urlencoded',
body: `access_token=${config.accessToken}&refresh_token=${config.refreshToken}`,
},
])('should parse response with content type $contentType', async ({ contentType, body }) => {
mockTokenResponse({
status: 200,
headers: { 'Content-Type': contentType },
body,
});
const response = await makeTokenCall();
expect(response).toEqual({
access_token: config.accessToken,
refresh_token: config.refreshToken,
});
});
test.each([
{
contentType: 'text/html',
body: '<html><body>Hello, world!</body></html>',
},
{
contentType: 'application/xml',
body: '<xml><body>Hello, world!</body></xml>',
},
{
contentType: 'text/plain',
body: 'Hello, world!',
},
])('should reject content type $contentType', async ({ contentType, body }) => {
mockTokenResponse({
status: 200,
headers: { 'Content-Type': contentType },
body,
});
const result = await makeTokenCall().catch((err) => err);
expect(result).toBeInstanceOf(Error);
expect(result.message).toEqual(`Unsupported content type: ${contentType}`);
});
it('should reject 4xx responses with auth errors', async () => {
mockTokenResponse({
status: 401,
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ error: 'access_denied' }),
});
const result = await makeTokenCall().catch((err) => err);
expect(result).toBeInstanceOf(AuthError);
expect(result.message).toEqual(ERROR_RESPONSES.access_denied);
expect(result.body).toEqual({ error: 'access_denied' });
});
it('should reject 3xx responses with response errors', async () => {
mockTokenResponse({
status: 302,
headers: {},
body: 'Redirected',
});
const result = await makeTokenCall().catch((err) => err);
expect(result).toBeInstanceOf(ResponseError);
expect(result.message).toEqual('HTTP status 302');
expect(result.body).toEqual('Redirected');
});
});
});


@@ -0,0 +1,192 @@
import nock from 'nock';
import { ClientOAuth2 } from '@/client-oauth2';
import { ClientOAuth2Token } from '@/client-oauth2-token';
import { AuthError } from '@/utils';
import * as config from './config';
describe('CodeFlow', () => {
beforeAll(async () => {
nock.disableNetConnect();
});
afterAll(() => {
nock.restore();
});
const uri = `/auth/callback?code=${config.code}&state=${config.state}`;
const githubAuth = new ClientOAuth2({
clientId: config.clientId,
clientSecret: config.clientSecret,
accessTokenUri: config.accessTokenUri,
authorizationUri: config.authorizationUri,
authorizationGrants: ['code'],
redirectUri: config.redirectUri,
scopes: ['notifications'],
});
describe('#getUri', () => {
it('should return a valid uri', () => {
expect(githubAuth.code.getUri()).toEqual(
`${config.authorizationUri}?client_id=abc&` +
`redirect_uri=${encodeURIComponent(config.redirectUri)}&` +
'response_type=code&scope=notifications',
);
});
describe('when scopes are undefined', () => {
it('should not include scope in the uri', () => {
const authWithoutScopes = new ClientOAuth2({
clientId: config.clientId,
clientSecret: config.clientSecret,
accessTokenUri: config.accessTokenUri,
authorizationUri: config.authorizationUri,
authorizationGrants: ['code'],
redirectUri: config.redirectUri,
});
expect(authWithoutScopes.code.getUri()).toEqual(
`${config.authorizationUri}?client_id=abc&` +
`redirect_uri=${encodeURIComponent(config.redirectUri)}&` +
'response_type=code',
);
});
});
it('should include empty scopes array as an empty string', () => {
const authWithEmptyScopes = new ClientOAuth2({
clientId: config.clientId,
clientSecret: config.clientSecret,
accessTokenUri: config.accessTokenUri,
authorizationUri: config.authorizationUri,
authorizationGrants: ['code'],
redirectUri: config.redirectUri,
scopes: [],
});
expect(authWithEmptyScopes.code.getUri()).toEqual(
`${config.authorizationUri}?client_id=abc&` +
`redirect_uri=${encodeURIComponent(config.redirectUri)}&` +
'response_type=code&scope=',
);
});
describe('when authorizationUri contains query parameters', () => {
it('should preserve query string parameters', () => {
const authWithParams = new ClientOAuth2({
clientId: config.clientId,
clientSecret: config.clientSecret,
accessTokenUri: config.accessTokenUri,
authorizationUri: `${config.authorizationUri}?bar=qux`,
authorizationGrants: ['code'],
redirectUri: config.redirectUri,
scopes: ['notifications'],
});
expect(authWithParams.code.getUri()).toEqual(
`${config.authorizationUri}?bar=qux&client_id=abc&` +
`redirect_uri=${encodeURIComponent(config.redirectUri)}&` +
'response_type=code&scope=notifications',
);
});
});
});
describe('#getToken', () => {
const mockTokenCall = () =>
nock(config.baseUrl)
.post(
'/login/oauth/access_token',
({ code, grant_type, redirect_uri }) =>
code === config.code &&
grant_type === 'authorization_code' &&
redirect_uri === config.redirectUri,
)
.once()
.reply(200, {
access_token: config.accessToken,
refresh_token: config.refreshToken,
});
it('should request the token', async () => {
mockTokenCall();
const user = await githubAuth.code.getToken(uri);
expect(user).toBeInstanceOf(ClientOAuth2Token);
expect(user.accessToken).toEqual(config.accessToken);
expect(user.tokenType).toEqual('bearer');
});
it('should reject with auth errors', async () => {
let errored = false;
try {
await githubAuth.code.getToken(`${config.redirectUri}?error=invalid_request`);
} catch (err) {
errored = true;
expect(err).toBeInstanceOf(AuthError);
if (err instanceof AuthError) {
expect(err.code).toEqual('EAUTH');
expect(err.body.error).toEqual('invalid_request');
}
}
expect(errored).toEqual(true);
});
describe('#sign', () => {
it('should be able to sign a standard request object', async () => {
mockTokenCall();
const token = await githubAuth.code.getToken(uri);
const requestOptions = token.sign({
method: 'GET',
url: 'http://api.github.com/user',
});
expect(requestOptions.headers?.Authorization).toEqual(`Bearer ${config.accessToken}`);
});
});
describe('#refresh', () => {
const mockRefreshCall = () =>
nock(config.baseUrl)
.post(
'/login/oauth/access_token',
({ refresh_token, grant_type }) =>
refresh_token === config.refreshToken && grant_type === 'refresh_token',
)
.once()
.reply(200, {
access_token: config.refreshedAccessToken,
refresh_token: config.refreshedRefreshToken,
});
it('should make a request to get a new access token', async () => {
mockTokenCall();
const token = await githubAuth.code.getToken(uri, { state: config.state });
expect(token.refreshToken).toEqual(config.refreshToken);
mockRefreshCall();
const token1 = await token.refresh();
expect(token1).toBeInstanceOf(ClientOAuth2Token);
expect(token1.accessToken).toEqual(config.refreshedAccessToken);
expect(token1.refreshToken).toEqual(config.refreshedRefreshToken);
expect(token1.tokenType).toEqual('bearer');
});
});
});
});


@@ -0,0 +1,15 @@
export const baseUrl = 'https://mock.auth.service';
export const accessTokenUri = baseUrl + '/login/oauth/access_token';
export const authorizationUri = baseUrl + '/login/oauth/authorize';
export const redirectUri = 'http://example.com/auth/callback';
export const accessToken = '4430eb1615fb6127cbf828a8e403';
export const refreshToken = 'def456token';
export const refreshedAccessToken = 'f456okeendt';
export const refreshedRefreshToken = 'f4f6577c0f3af456okeendt';
export const clientId = 'abc';
export const clientSecret = '123';
export const code = 'fbe55d970377e0686746';
export const state = '7076840850058943';


@@ -0,0 +1,215 @@
import nock from 'nock';
import { ClientOAuth2, type ClientOAuth2Options } from '@/client-oauth2';
import { ClientOAuth2Token } from '@/client-oauth2-token';
import type { Headers } from '@/types';
import * as config from './config';
describe('CredentialsFlow', () => {
beforeAll(async () => {
nock.disableNetConnect();
});
afterAll(() => {
nock.restore();
});
beforeEach(() => jest.clearAllMocks());
describe('#getToken', () => {
const createAuthClient = ({
scopes,
authentication,
}: Pick<ClientOAuth2Options, 'scopes' | 'authentication'> = {}) =>
new ClientOAuth2({
clientId: config.clientId,
clientSecret: config.clientSecret,
accessTokenUri: config.accessTokenUri,
authentication,
authorizationGrants: ['credentials'],
scopes,
});
const mockTokenCall = async ({ requestedScope }: { requestedScope?: string } = {}) => {
const nockScope = nock(config.baseUrl)
.post(
'/login/oauth/access_token',
({ scope, grant_type }) =>
scope === requestedScope && grant_type === 'client_credentials',
)
.once()
.reply(200, {
access_token: config.accessToken,
refresh_token: config.refreshToken,
scope: requestedScope,
});
return await new Promise<{ headers: Headers; body: unknown }>((resolve) => {
nockScope.once('request', (req) => {
resolve({
headers: req.headers,
body: req.requestBodyBuffers.toString('utf-8'),
});
});
});
};
it('should request the token', async () => {
const authClient = createAuthClient({ scopes: ['notifications'] });
const requestPromise = mockTokenCall({ requestedScope: 'notifications' });
const user = await authClient.credentials.getToken();
expect(user).toBeInstanceOf(ClientOAuth2Token);
expect(user.accessToken).toEqual(config.accessToken);
expect(user.tokenType).toEqual('bearer');
expect(user.data.scope).toEqual('notifications');
const { headers, body } = await requestPromise;
expect(headers.authorization).toBe('Basic YWJjOjEyMw==');
expect(body).toEqual('grant_type=client_credentials&scope=notifications');
});
it('when scopes are undefined, it should not send scopes to an auth server', async () => {
const authClient = createAuthClient();
const requestPromise = mockTokenCall();
const user = await authClient.credentials.getToken();
expect(user).toBeInstanceOf(ClientOAuth2Token);
expect(user.accessToken).toEqual(config.accessToken);
expect(user.tokenType).toEqual('bearer');
expect(user.data.scope).toEqual(undefined);
const { body } = await requestPromise;
expect(body).toEqual('grant_type=client_credentials');
});
it('when scopes is an empty array, it should send empty scope string to an auth server', async () => {
const authClient = createAuthClient({ scopes: [] });
const requestPromise = mockTokenCall({ requestedScope: '' });
const user = await authClient.credentials.getToken();
expect(user).toBeInstanceOf(ClientOAuth2Token);
expect(user.accessToken).toEqual(config.accessToken);
expect(user.tokenType).toEqual('bearer');
expect(user.data.scope).toEqual('');
const { body } = await requestPromise;
expect(body).toEqual('grant_type=client_credentials&scope=');
});
it('should handle authentication = "header"', async () => {
const authClient = createAuthClient({ scopes: [] });
const requestPromise = mockTokenCall({ requestedScope: '' });
await authClient.credentials.getToken();
const { headers, body } = await requestPromise;
expect(headers?.authorization).toBe('Basic YWJjOjEyMw==');
expect(body).toEqual('grant_type=client_credentials&scope=');
});
it('should handle authentication = "body"', async () => {
const authClient = createAuthClient({ scopes: [], authentication: 'body' });
const requestPromise = mockTokenCall({ requestedScope: '' });
await authClient.credentials.getToken();
const { headers, body } = await requestPromise;
expect(headers?.authorization).toBe(undefined);
expect(body).toEqual('grant_type=client_credentials&scope=&client_id=abc&client_secret=123');
});
describe('#sign', () => {
it('should be able to sign a standard request object', async () => {
const authClient = createAuthClient({ scopes: ['notifications'] });
void mockTokenCall({ requestedScope: 'notifications' });
const token = await authClient.credentials.getToken();
const requestOptions = token.sign({
method: 'GET',
url: `${config.baseUrl}/test`,
});
expect(requestOptions.headers?.Authorization).toEqual(`Bearer ${config.accessToken}`);
});
});
describe('#refresh', () => {
const mockRefreshCall = async () => {
const nockScope = nock(config.baseUrl)
.post(
'/login/oauth/access_token',
({ refresh_token, grant_type }) =>
refresh_token === config.refreshToken && grant_type === 'refresh_token',
)
.once()
.reply(200, {
access_token: config.refreshedAccessToken,
refresh_token: config.refreshedRefreshToken,
});
return await new Promise<{ headers: Headers; body: unknown }>((resolve) => {
nockScope.once('request', (req) => {
resolve({
headers: req.headers,
body: req.requestBodyBuffers.toString('utf-8'),
});
});
});
};
it('should make a request to get a new access token', async () => {
const authClient = createAuthClient({ scopes: ['notifications'] });
void mockTokenCall({ requestedScope: 'notifications' });
const token = await authClient.credentials.getToken();
expect(token.accessToken).toEqual(config.accessToken);
const requestPromise = mockRefreshCall();
const token1 = await token.refresh();
await requestPromise;
expect(token1).toBeInstanceOf(ClientOAuth2Token);
expect(token1.accessToken).toEqual(config.refreshedAccessToken);
expect(token1.tokenType).toEqual('bearer');
});
it('should make a request to get a new access token with authentication = "body"', async () => {
const authClient = createAuthClient({ scopes: ['notifications'], authentication: 'body' });
void mockTokenCall({ requestedScope: 'notifications' });
const token = await authClient.credentials.getToken();
expect(token.accessToken).toEqual(config.accessToken);
const requestPromise = mockRefreshCall();
const token1 = await token.refresh();
const { headers, body } = await requestPromise;
expect(token1).toBeInstanceOf(ClientOAuth2Token);
expect(token1.accessToken).toEqual(config.refreshedAccessToken);
expect(token1.tokenType).toEqual('bearer');
expect(headers?.authorization).toBe(undefined);
expect(body).toEqual(
'refresh_token=def456token&grant_type=refresh_token&client_id=abc&client_secret=123',
);
});
it('should make a request to get a new access token with authentication = "header"', async () => {
const authClient = createAuthClient({
scopes: ['notifications'],
authentication: 'header',
});
void mockTokenCall({ requestedScope: 'notifications' });
const token = await authClient.credentials.getToken();
expect(token.accessToken).toEqual(config.accessToken);
const requestPromise = mockRefreshCall();
const token1 = await token.refresh();
const { headers, body } = await requestPromise;
expect(token1).toBeInstanceOf(ClientOAuth2Token);
expect(token1.accessToken).toEqual(config.refreshedAccessToken);
expect(token1.tokenType).toEqual('bearer');
expect(headers?.authorization).toBe('Basic YWJjOjEyMw==');
expect(body).toEqual('refresh_token=def456token&grant_type=refresh_token');
});
});
});
});
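The `Basic YWJjOjEyMw==` value these tests repeatedly assert is not arbitrary: it is the HTTP Basic scheme applied to the mock `clientId:clientSecret` pair from `config.ts` (`abc` / `123`). A minimal sketch of how such a header is derived (Node's `Buffer` assumed):

```typescript
// HTTP Basic auth: base64-encode "clientId:clientSecret".
// Values copied from the mock config above ('abc' / '123').
const clientId = 'abc';
const clientSecret = '123';

const authorization = 'Basic ' + Buffer.from(`${clientId}:${clientSecret}`).toString('base64');

console.log(authorization); // → Basic YWJjOjEyMw==
```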


@@ -0,0 +1 @@
export { parserWithMetaData, n8nLanguage } from './expressions';


@@ -0,0 +1,19 @@
abstract class StringArray<T extends string> extends Array<T> {
constructor(str: string, delimiter: string) {
super();
const parsed = str.split(delimiter) as this;
return parsed.filter((i) => typeof i === 'string' && i.length);
}
}
export class CommaSeparatedStringArray<T extends string> extends StringArray<T> {
constructor(str: string) {
super(str, ',');
}
}
export class ColonSeparatedStringArray<T extends string = string> extends StringArray<T> {
constructor(str: string) {
super(str, ':');
}
}
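One quirk worth noting in `StringArray` above: the constructor returns an object other than `this`, which JavaScript permits, so each "instance" is really the filtered result of the split. A standalone sketch of the same pattern (class names copied from the file, reduced to one subclass for illustration):

```typescript
// The constructor returns the filtered split result; that returned array
// replaces `this`, so empty segments never appear in the instance.
abstract class StringArray<T extends string> extends Array<T> {
	constructor(str: string, delimiter: string) {
		super();
		const parsed = str.split(delimiter) as this;
		return parsed.filter((i) => typeof i === 'string' && i.length);
	}
}

class CommaSeparatedStringArray<T extends string> extends StringArray<T> {
	constructor(str: string) {
		super(str, ',');
	}
}

const scopes = new CommaSeparatedStringArray('a,b,,c,');
console.log([...scopes]); // → [ 'a', 'b', 'c' ]
```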


@@ -0,0 +1,118 @@
import 'reflect-metadata';
import { Container, Service } from '@n8n/di';
import { readFileSync } from 'fs';
import { z } from 'zod';
// eslint-disable-next-line @typescript-eslint/no-restricted-types
type Class = Function;
type Constructable<T = unknown> = new (rawValue: string) => T;
type PropertyKey = string | symbol;
type PropertyType = number | boolean | string | Class;
interface PropertyMetadata {
type: PropertyType;
envName?: string;
schema?: z.ZodType<unknown>;
}
const globalMetadata = new Map<Class, Map<PropertyKey, PropertyMetadata>>();
const readEnv = (envName: string) => {
if (envName in process.env) return process.env[envName];
// Read the value from a file, if "_FILE" environment variable is defined
const filePath = process.env[`${envName}_FILE`];
if (filePath) return readFileSync(filePath, 'utf8');
return undefined;
};
export const Config: ClassDecorator = (ConfigClass: Class) => {
const factory = function (...args: unknown[]) {
const config = new (ConfigClass as new (...a: unknown[]) => Record<PropertyKey, unknown>)(
...args,
);
const classMetadata = globalMetadata.get(ConfigClass);
if (!classMetadata) {
throw new Error('Invalid config class: ' + ConfigClass.name);
}
for (const [key, { type, envName, schema }] of classMetadata) {
if (typeof type === 'function' && globalMetadata.has(type)) {
config[key] = Container.get(type as Constructable);
} else if (envName) {
const value = readEnv(envName);
if (value === undefined) continue;
if (schema) {
const result = schema.safeParse(value);
if (result.error) {
console.warn(
`Invalid value for ${envName} - ${result.error.issues[0].message}. Falling back to default value.`,
);
continue;
}
config[key] = result.data;
} else if (type === Number) {
const parsed = Number(value);
if (isNaN(parsed)) {
console.warn(`Invalid number value for ${envName}: ${value}`);
} else {
config[key] = parsed;
}
} else if (type === Boolean) {
if (['true', '1'].includes(value.toLowerCase())) {
config[key] = true;
} else if (['false', '0'].includes(value.toLowerCase())) {
config[key] = false;
} else {
console.warn(`Invalid boolean value for ${envName}: ${value}`);
}
} else if (type === Date) {
const timestamp = Date.parse(value);
if (isNaN(timestamp)) {
console.warn(`Invalid timestamp value for ${envName}: ${value}`);
} else {
config[key] = new Date(timestamp);
}
} else if (type === String) {
config[key] = value.trim().replace(/^(['"])(.*)\1$/, '$2');
} else {
config[key] = new (type as Constructable)(value);
}
}
}
if (typeof config.sanitize === 'function') config.sanitize();
return config;
};
// eslint-disable-next-line @typescript-eslint/no-unsafe-return
return Service({ factory })(ConfigClass);
};
export const Nested: PropertyDecorator = (target: object, key: PropertyKey) => {
const ConfigClass = target.constructor;
const classMetadata = globalMetadata.get(ConfigClass) ?? new Map<PropertyKey, PropertyMetadata>();
const type = Reflect.getMetadata('design:type', target, key) as PropertyType;
classMetadata.set(key, { type });
globalMetadata.set(ConfigClass, classMetadata);
};
export const Env =
(envName: string, schema?: PropertyMetadata['schema']): PropertyDecorator =>
(target: object, key: PropertyKey) => {
const ConfigClass = target.constructor;
const classMetadata =
globalMetadata.get(ConfigClass) ?? new Map<PropertyKey, PropertyMetadata>();
const type = Reflect.getMetadata('design:type', target, key) as PropertyType;
const isZodSchema = schema instanceof z.ZodType;
if (type === Object && !isZodSchema) {
throw new Error(
`Invalid decorator metadata on key "${key as string}" on ${ConfigClass.name}\n Please use explicit typing on all config fields`,
);
}
classMetadata.set(key, { type, envName, schema });
globalMetadata.set(ConfigClass, classMetadata);
};
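The `Config` factory above coerces each raw env string according to the property's design-time type. Those branches can be isolated into a standalone sketch (this helper is illustrative only; in the real code the logic lives inline in the factory loop):

```typescript
// Illustrative re-statement of the coercion branches in the Config factory:
// invalid numbers/booleans are skipped (the factory warns and keeps the
// default), and strings are trimmed with one matching quote pair stripped.
function coerceEnvValue(type: 'number' | 'boolean' | 'string', value: string): unknown {
	if (type === 'number') {
		const parsed = Number(value);
		return isNaN(parsed) ? undefined : parsed;
	}
	if (type === 'boolean') {
		if (['true', '1'].includes(value.toLowerCase())) return true;
		if (['false', '0'].includes(value.toLowerCase())) return false;
		return undefined;
	}
	// strings: trim whitespace, then strip a single matching pair of quotes
	return value.trim().replace(/^(['"])(.*)\1$/, '$2');
}

console.log(coerceEnvValue('number', '5678')); // → 5678
console.log(coerceEnvValue('boolean', 'TRUE')); // → true
console.log(coerceEnvValue('string', ' "localhost" ')); // → localhost
```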


@@ -0,0 +1,211 @@
import { z } from 'zod';
import { AiAssistantConfig } from './configs/ai-assistant.config';
import { AiConfig } from './configs/ai.config';
import { AuthConfig } from './configs/auth.config';
import { CacheConfig } from './configs/cache.config';
import { CredentialsConfig } from './configs/credentials.config';
import { DatabaseConfig } from './configs/database.config';
import { DeploymentConfig } from './configs/deployment.config';
import { DiagnosticsConfig } from './configs/diagnostics.config';
import { EndpointsConfig } from './configs/endpoints.config';
import { EventBusConfig } from './configs/event-bus.config';
import { ExecutionsConfig } from './configs/executions.config';
import { ExternalHooksConfig } from './configs/external-hooks.config';
import { GenericConfig } from './configs/generic.config';
import { HiringBannerConfig } from './configs/hiring-banner.config';
import { LicenseConfig } from './configs/license.config';
import { LoggingConfig } from './configs/logging.config';
import { MfaConfig } from './configs/mfa.config';
import { MultiMainSetupConfig } from './configs/multi-main-setup.config';
import { NodesConfig } from './configs/nodes.config';
import { PartialExecutionsConfig } from './configs/partial-executions.config';
import { PersonalizationConfig } from './configs/personalization.config';
import { PublicApiConfig } from './configs/public-api.config';
import { RedisConfig } from './configs/redis.config';
import { TaskRunnersConfig } from './configs/runners.config';
import { ScalingModeConfig } from './configs/scaling-mode.config';
import { SecurityConfig } from './configs/security.config';
import { SentryConfig } from './configs/sentry.config';
import { SsoConfig } from './configs/sso.config';
import { TagsConfig } from './configs/tags.config';
import { TemplatesConfig } from './configs/templates.config';
import { UserManagementConfig } from './configs/user-management.config';
import { VersionNotificationsConfig } from './configs/version-notifications.config';
import { WorkflowHistoryConfig } from './configs/workflow-history.config';
import { WorkflowsConfig } from './configs/workflows.config';
import { Config, Env, Nested } from './decorators';
export { Config, Env, Nested } from './decorators';
export { DatabaseConfig } from './configs/database.config';
export { InstanceSettingsConfig } from './configs/instance-settings-config';
export { TaskRunnersConfig } from './configs/runners.config';
export { SecurityConfig } from './configs/security.config';
export { ExecutionsConfig } from './configs/executions.config';
export { LOG_SCOPES } from './configs/logging.config';
export type { LogScope } from './configs/logging.config';
export { WorkflowsConfig } from './configs/workflows.config';
export * from './custom-types';
export { DeploymentConfig } from './configs/deployment.config';
export { MfaConfig } from './configs/mfa.config';
export { HiringBannerConfig } from './configs/hiring-banner.config';
export { PersonalizationConfig } from './configs/personalization.config';
export { NodesConfig } from './configs/nodes.config';
export { CronLoggingConfig } from './configs/logging.config';
const protocolSchema = z.enum(['http', 'https']);
export type Protocol = z.infer<typeof protocolSchema>;
@Config
export class GlobalConfig {
@Nested
auth: AuthConfig;
@Nested
database: DatabaseConfig;
@Nested
credentials: CredentialsConfig;
@Nested
userManagement: UserManagementConfig;
@Nested
versionNotifications: VersionNotificationsConfig;
@Nested
publicApi: PublicApiConfig;
@Nested
externalHooks: ExternalHooksConfig;
@Nested
templates: TemplatesConfig;
@Nested
eventBus: EventBusConfig;
@Nested
nodes: NodesConfig;
@Nested
workflows: WorkflowsConfig;
@Nested
sentry: SentryConfig;
/** Path n8n is deployed to */
@Env('N8N_PATH')
path: string = '/';
/** Host name n8n can be reached */
@Env('N8N_HOST')
host: string = 'localhost';
/** HTTP port n8n can be reached */
@Env('N8N_PORT')
port: number = 5678;
/** IP address n8n should listen on */
@Env('N8N_LISTEN_ADDRESS')
listen_address: string = '::';
/** HTTP Protocol via which n8n can be reached */
@Env('N8N_PROTOCOL', protocolSchema)
protocol: Protocol = 'http';
@Nested
endpoints: EndpointsConfig;
@Nested
cache: CacheConfig;
@Nested
queue: ScalingModeConfig;
@Nested
logging: LoggingConfig;
@Nested
taskRunners: TaskRunnersConfig;
@Nested
multiMainSetup: MultiMainSetupConfig;
@Nested
generic: GenericConfig;
@Nested
license: LicenseConfig;
@Nested
security: SecurityConfig;
@Nested
executions: ExecutionsConfig;
@Nested
diagnostics: DiagnosticsConfig;
@Nested
aiAssistant: AiAssistantConfig;
@Nested
tags: TagsConfig;
@Nested
partialExecutions: PartialExecutionsConfig;
@Nested
workflowHistory: WorkflowHistoryConfig;
@Nested
deployment: DeploymentConfig;
@Nested
mfa: MfaConfig;
@Nested
hiringBanner: HiringBannerConfig;
@Nested
personalization: PersonalizationConfig;
@Nested
sso: SsoConfig;
/** Default locale for the UI. */
@Env('N8N_DEFAULT_LOCALE')
defaultLocale: string = 'en';
/** Whether to hide the page that shows active workflows and executions count. */
@Env('N8N_HIDE_USAGE_PAGE')
hideUsagePage: boolean = false;
/** Number of reverse proxies n8n is running behind. */
@Env('N8N_PROXY_HOPS')
proxy_hops: number = 0;
/** SSL key for HTTPS protocol. */
@Env('N8N_SSL_KEY')
ssl_key: string = '';
/** SSL cert for HTTPS protocol. */
@Env('N8N_SSL_CERT')
ssl_cert: string = '';
/** Public URL where the editor is accessible. Also used for emails sent from n8n. */
@Env('N8N_EDITOR_BASE_URL')
editorBaseUrl: string = '';
/** URLs to external frontend hooks files, separated by semicolons. */
@Env('EXTERNAL_FRONTEND_HOOKS_URLS')
externalFrontendHooksUrls: string = '';
@Nested
redis: RedisConfig;
@Nested
ai: AiConfig;
}


@@ -0,0 +1,486 @@
import { Container } from '@n8n/di';
import fs from 'fs';
import { mock } from 'jest-mock-extended';
import type { UserManagementConfig } from '../src/configs/user-management.config';
import { GlobalConfig } from '../src/index';
jest.mock('fs');
const mockFs = mock<typeof fs>();
fs.readFileSync = mockFs.readFileSync;
const consoleWarnMock = jest.spyOn(console, 'warn').mockImplementation(() => {});
describe('GlobalConfig', () => {
beforeEach(() => {
Container.reset();
jest.clearAllMocks();
});
const originalEnv = process.env;
afterEach(() => {
process.env = originalEnv;
});
const defaultConfig: GlobalConfig = {
path: '/',
host: 'localhost',
port: 5678,
listen_address: '::',
protocol: 'http',
auth: {
cookie: {
samesite: 'lax',
secure: true,
},
},
defaultLocale: 'en',
hideUsagePage: false,
deployment: {
type: 'default',
},
mfa: {
enabled: true,
},
hiringBanner: {
enabled: true,
},
personalization: {
enabled: true,
},
proxy_hops: 0,
ssl_key: '',
ssl_cert: '',
editorBaseUrl: '',
database: {
logging: {
enabled: false,
maxQueryExecutionTime: 0,
options: 'error',
},
mysqldb: {
database: 'n8n',
host: 'localhost',
password: '',
port: 3306,
user: 'root',
},
postgresdb: {
database: 'n8n',
host: 'localhost',
password: '',
poolSize: 2,
port: 5432,
schema: 'public',
connectionTimeoutMs: 20_000,
ssl: {
ca: '',
cert: '',
enabled: false,
key: '',
rejectUnauthorized: true,
},
user: 'postgres',
idleTimeoutMs: 30_000,
},
sqlite: {
database: 'database.sqlite',
enableWAL: false,
executeVacuumOnStartup: false,
poolSize: 0,
},
tablePrefix: '',
type: 'sqlite',
isLegacySqlite: true,
pingIntervalSeconds: 2,
},
credentials: {
defaultName: 'My credentials',
overwrite: {
data: '{}',
endpoint: '',
},
},
userManagement: {
jwtSecret: '',
jwtSessionDurationHours: 168,
jwtRefreshTimeoutHours: 0,
emails: {
mode: 'smtp',
smtp: {
host: '',
port: 465,
secure: true,
sender: '',
startTLS: true,
auth: {
pass: '',
user: '',
privateKey: '',
serviceClient: '',
},
},
template: {
'credentials-shared': '',
'user-invited': '',
'password-reset-requested': '',
'workflow-shared': '',
'project-shared': '',
},
},
} as UserManagementConfig,
eventBus: {
checkUnsentInterval: 0,
crashRecoveryMode: 'extensive',
logWriter: {
keepLogCount: 3,
logBaseName: 'n8nEventLog',
maxFileSizeInKB: 10240,
},
},
externalHooks: {
files: [],
},
nodes: {
errorTriggerType: 'n8n-nodes-base.errorTrigger',
include: [],
exclude: [],
pythonEnabled: true,
},
publicApi: {
disabled: false,
path: 'api',
swaggerUiDisabled: false,
},
templates: {
enabled: true,
host: 'https://api.n8n.io/api/',
},
versionNotifications: {
enabled: true,
endpoint: 'https://api.n8n.io/api/versions/',
whatsNewEnabled: true,
whatsNewEndpoint: 'https://api.n8n.io/api/whats-new',
infoUrl: 'https://docs.n8n.io/hosting/installation/updating/',
},
workflows: {
defaultName: 'My workflow',
callerPolicyDefaultOption: 'workflowsFromSameOwner',
activationBatchSize: 1,
},
endpoints: {
metrics: {
enable: false,
prefix: 'n8n_',
includeWorkflowIdLabel: false,
includeWorkflowNameLabel: false,
includeDefaultMetrics: true,
includeMessageEventBusMetrics: false,
includeNodeTypeLabel: false,
includeCacheMetrics: false,
includeApiEndpoints: false,
includeApiPathLabel: false,
includeApiMethodLabel: false,
includeCredentialTypeLabel: false,
includeApiStatusCodeLabel: false,
includeQueueMetrics: false,
queueMetricsInterval: 20,
activeWorkflowCountInterval: 60,
},
additionalNonUIRoutes: '',
disableProductionWebhooksOnMainProcess: false,
disableUi: false,
form: 'form',
formTest: 'form-test',
formWaiting: 'form-waiting',
mcp: 'mcp',
mcpTest: 'mcp-test',
payloadSizeMax: 16,
formDataFileSizeMax: 200,
rest: 'rest',
webhook: 'webhook',
webhookTest: 'webhook-test',
webhookWaiting: 'webhook-waiting',
},
cache: {
backend: 'auto',
memory: {
maxSize: 3145728,
ttl: 3600000,
},
redis: {
prefix: 'cache',
ttl: 3600000,
},
},
queue: {
health: {
active: false,
port: 5678,
address: '::',
},
bull: {
redis: {
db: 0,
host: 'localhost',
password: '',
port: 6379,
timeoutThreshold: 10_000,
username: '',
clusterNodes: '',
tls: false,
dualStack: false,
},
gracefulShutdownTimeout: 30,
prefix: 'bull',
settings: {
lockDuration: 30_000,
lockRenewTime: 15_000,
stalledInterval: 30_000,
maxStalledCount: 1,
},
},
},
taskRunners: {
enabled: false,
mode: 'internal',
path: '/runners',
authToken: '',
listenAddress: '127.0.0.1',
maxPayload: 1024 * 1024 * 1024,
port: 5679,
maxOldSpaceSize: '',
maxConcurrency: 10,
taskTimeout: 300,
heartbeatInterval: 30,
insecureMode: false,
},
sentry: {
backendDsn: '',
frontendDsn: '',
environment: '',
deploymentName: '',
},
logging: {
level: 'info',
format: 'text',
outputs: ['console'],
file: {
fileCountMax: 100,
fileSizeMax: 16,
location: 'logs/n8n.log',
},
scopes: [],
cron: {
activeInterval: 0,
},
},
multiMainSetup: {
enabled: false,
ttl: 10,
interval: 3,
},
generic: {
timezone: 'America/New_York',
releaseChannel: 'dev',
gracefulShutdownTimeout: 30,
},
license: {
serverUrl: 'https://license.n8n.io/v1',
autoRenewalEnabled: true,
detachFloatingOnShutdown: true,
activationKey: '',
tenantId: 1,
cert: '',
},
security: {
restrictFileAccessTo: '',
blockFileAccessToN8nFiles: true,
daysAbandonedWorkflow: 90,
contentSecurityPolicy: '{}',
contentSecurityPolicyReportOnly: false,
disableWebhookHtmlSandboxing: false,
},
executions: {
pruneData: true,
pruneDataMaxAge: 336,
pruneDataMaxCount: 10_000,
pruneDataHardDeleteBuffer: 1,
pruneDataIntervals: {
hardDelete: 15,
softDelete: 60,
},
concurrency: {
productionLimit: -1,
evaluationLimit: -1,
},
queueRecovery: {
interval: 180,
batchSize: 100,
},
saveDataOnError: 'all',
saveDataOnSuccess: 'all',
saveExecutionProgress: false,
saveDataManualExecutions: true,
},
diagnostics: {
enabled: true,
frontendConfig: '1zPn9bgWPzlQc0p8Gj1uiK6DOTn;https://telemetry.n8n.io',
backendConfig: '1zPn7YoGC3ZXE9zLeTKLuQCB4F6;https://telemetry.n8n.io',
posthogConfig: {
apiKey: 'phc_4URIAm1uYfJO7j8kWSe0J8lc8IqnstRLS7Jx8NcakHo',
apiHost: 'https://ph.n8n.io',
},
},
aiAssistant: {
baseUrl: '',
},
tags: {
disabled: false,
},
partialExecutions: {
version: 2,
},
workflowHistory: {
enabled: true,
pruneTime: -1,
},
sso: {
justInTimeProvisioning: true,
redirectLoginToSso: true,
saml: {
loginEnabled: false,
loginLabel: '',
},
oidc: {
loginEnabled: false,
},
ldap: {
loginEnabled: false,
loginLabel: '',
},
},
redis: {
prefix: 'n8n',
},
externalFrontendHooksUrls: '',
ai: {
enabled: false,
},
};
it('should use all default values when no env variables are defined', () => {
process.env = {};
const config = Container.get(GlobalConfig);
// Makes sure the objects are structurally equal while respecting getters,
// which `toEqual` and `toBe` do not do.
expect(defaultConfig).toMatchObject(config);
expect(config).toMatchObject(defaultConfig);
expect(mockFs.readFileSync).not.toHaveBeenCalled();
});
it('should use values from env variables when defined', () => {
process.env = {
DB_POSTGRESDB_HOST: 'some-host',
DB_POSTGRESDB_USER: 'n8n',
DB_POSTGRESDB_IDLE_CONNECTION_TIMEOUT: '10000',
DB_TABLE_PREFIX: 'test_',
DB_PING_INTERVAL_SECONDS: '2',
NODES_INCLUDE: '["n8n-nodes-base.hackerNews"]',
DB_LOGGING_MAX_EXECUTION_TIME: '0',
N8N_METRICS: 'TRUE',
N8N_TEMPLATES_ENABLED: '0',
};
const config = Container.get(GlobalConfig);
expect(structuredClone(config)).toEqual({
...defaultConfig,
database: {
logging: defaultConfig.database.logging,
mysqldb: defaultConfig.database.mysqldb,
postgresdb: {
...defaultConfig.database.postgresdb,
host: 'some-host',
user: 'n8n',
idleTimeoutMs: 10_000,
},
sqlite: defaultConfig.database.sqlite,
tablePrefix: 'test_',
type: 'sqlite',
pingIntervalSeconds: 2,
},
endpoints: {
...defaultConfig.endpoints,
metrics: {
...defaultConfig.endpoints.metrics,
enable: true,
},
},
nodes: {
...defaultConfig.nodes,
include: ['n8n-nodes-base.hackerNews'],
},
templates: {
...defaultConfig.templates,
enabled: false,
},
});
expect(mockFs.readFileSync).not.toHaveBeenCalled();
});
it('should read values from files using _FILE env variables', () => {
const passwordFile = '/path/to/postgres/password';
process.env = {
DB_POSTGRESDB_PASSWORD_FILE: passwordFile,
};
mockFs.readFileSync.calledWith(passwordFile, 'utf8').mockReturnValueOnce('password-from-file');
const config = Container.get(GlobalConfig);
const expected = {
...defaultConfig,
database: {
...defaultConfig.database,
postgresdb: {
...defaultConfig.database.postgresdb,
password: 'password-from-file',
},
},
};
// Makes sure the objects are structurally equal while respecting getters,
// which `toEqual` and `toBe` do not do.
expect(config).toMatchObject(expected);
expect(expected).toMatchObject(config);
expect(mockFs.readFileSync).toHaveBeenCalled();
});
it('should handle invalid numbers', () => {
process.env = {
DB_LOGGING_MAX_EXECUTION_TIME: 'abcd',
};
const config = Container.get(GlobalConfig);
expect(config.database.logging.maxQueryExecutionTime).toEqual(0);
expect(consoleWarnMock).toHaveBeenCalledWith(
'Invalid number value for DB_LOGGING_MAX_EXECUTION_TIME: abcd',
);
});
describe('string unions', () => {
it('on invalid value, should warn and fall back to default value', () => {
process.env = {
N8N_RUNNERS_MODE: 'non-existing-mode',
N8N_RUNNERS_ENABLED: 'true',
DB_TYPE: 'postgresdb',
};
const globalConfig = Container.get(GlobalConfig);
expect(globalConfig.taskRunners.mode).toEqual('internal');
expect(consoleWarnMock).toHaveBeenCalledWith(
expect.stringContaining(
"Invalid value for N8N_RUNNERS_MODE - Invalid enum value. Expected 'internal' | 'external', received 'non-existing-mode'. Falling back to default value.",
),
);
expect(globalConfig.taskRunners.enabled).toEqual(true);
expect(globalConfig.database.type).toEqual('postgresdb');
});
});
});


@@ -0,0 +1,25 @@
import { CommaSeparatedStringArray, ColonSeparatedStringArray } from '../src/custom-types';
describe('CommaSeparatedStringArray', () => {
it('should parse comma-separated string into array', () => {
const result = new CommaSeparatedStringArray('a,b,c');
expect(result).toEqual(['a', 'b', 'c']);
});
it('should handle empty strings', () => {
const result = new CommaSeparatedStringArray('a,b,,,');
expect(result).toEqual(['a', 'b']);
});
});
describe('ColonSeparatedStringArray', () => {
it('should parse colon-separated string into array', () => {
const result = new ColonSeparatedStringArray('a:b:c');
expect(result).toEqual(['a', 'b', 'c']);
});
it('should handle empty strings', () => {
const result = new ColonSeparatedStringArray('a::b:::');
expect(result).toEqual(['a', 'b']);
});
});


@@ -0,0 +1,22 @@
import { Container } from '@n8n/di';
import { Config, Env } from '../src/decorators';
describe('decorators', () => {
beforeEach(() => {
Container.reset();
});
it('should throw when explicit typing is missing', () => {
expect(() => {
@Config
class InvalidConfig {
@Env('STRING_VALUE')
value = 'string';
}
Container.get(InvalidConfig);
}).toThrowError(
'Invalid decorator metadata on key "value" on InvalidConfig\n Please use explicit typing on all config fields',
);
});
});


@@ -0,0 +1,123 @@
import { Container } from '@n8n/di';
import { GlobalConfig } from '../src/index';
beforeEach(() => {
Container.reset();
jest.clearAllMocks();
});
const originalEnv = process.env;
afterEach(() => {
process.env = originalEnv;
});
it('should strip double quotes from string values', () => {
process.env = {
GENERIC_TIMEZONE: '"America/Bogota"',
N8N_HOST: '"localhost"',
};
const config = Container.get(GlobalConfig);
expect(config.generic.timezone).toBe('America/Bogota');
expect(config.host).toBe('localhost');
});
it('should strip single quotes from string values', () => {
process.env = {
GENERIC_TIMEZONE: "'America/Bogota'",
N8N_HOST: "'localhost'",
};
const config = Container.get(GlobalConfig);
expect(config.generic.timezone).toBe('America/Bogota');
expect(config.host).toBe('localhost');
});
it('should trim whitespace from quoted values', () => {
process.env = {
GENERIC_TIMEZONE: ' "America/Bogota" ',
N8N_HOST: " 'localhost' ",
};
const config = Container.get(GlobalConfig);
expect(config.generic.timezone).toBe('America/Bogota');
expect(config.host).toBe('localhost');
});
it('should trim whitespace from unquoted values', () => {
process.env = {
GENERIC_TIMEZONE: ' America/Bogota ',
N8N_HOST: ' localhost ',
};
const config = Container.get(GlobalConfig);
expect(config.generic.timezone).toBe('America/Bogota');
expect(config.host).toBe('localhost');
});
it('should leave mismatched quotes unchanged', () => {
process.env = {
GENERIC_TIMEZONE: '"America/Bogota\'',
N8N_HOST: '\'localhost"',
};
const config = Container.get(GlobalConfig);
expect(config.generic.timezone).toBe('"America/Bogota\'');
expect(config.host).toBe('\'localhost"');
});
it('should handle empty quotes', () => {
process.env = {
GENERIC_TIMEZONE: '""',
N8N_HOST: "''",
};
const config = Container.get(GlobalConfig);
expect(config.generic.timezone).toBe('');
expect(config.host).toBe('');
});
it('should handle single character in quotes', () => {
process.env = {
GENERIC_TIMEZONE: '"A"',
N8N_HOST: "'B'",
};
const config = Container.get(GlobalConfig);
expect(config.generic.timezone).toBe('A');
expect(config.host).toBe('B');
});
it('should handle values with spaces in quotes', () => {
process.env = {
GENERIC_TIMEZONE: '"America/New York"',
N8N_HOST: "'my host name'",
};
const config = Container.get(GlobalConfig);
expect(config.generic.timezone).toBe('America/New York');
expect(config.host).toBe('my host name');
});
it('should handle nested quotes', () => {
process.env = {
GENERIC_TIMEZONE: '"America/\'Bogota\'"',
N8N_HOST: '\'"localhost"\'',
};
const config = Container.get(GlobalConfig);
expect(config.generic.timezone).toBe("America/'Bogota'");
expect(config.host).toBe('"localhost"');
});
it('should handle only opening or closing quotes', () => {
process.env = {
GENERIC_TIMEZONE: '"America/Bogota',
N8N_HOST: 'localhost"',
};
const config = Container.get(GlobalConfig);
expect(config.generic.timezone).toBe('"America/Bogota');
expect(config.host).toBe('localhost"');
});
it('should handle multiple quote pairs', () => {
process.env = {
GENERIC_TIMEZONE: '""America/Bogota""',
N8N_HOST: "''localhost''",
};
const config = Container.get(GlobalConfig);
expect(config.generic.timezone).toBe('"America/Bogota"'); // should strip only outer quotes
expect(config.host).toBe("'localhost'");
});


@@ -0,0 +1,14 @@
{
"extends": "@n8n/typescript-config/tsconfig.common.json",
"compilerOptions": {
"rootDir": ".",
"emitDecoratorMetadata": true,
"experimentalDecorators": true,
"strictPropertyInitialization": false,
"types": ["node", "jest"],
"baseUrl": "src",
"tsBuildInfoFile": "dist/typecheck.tsbuildinfo"
},
"include": ["src/**/*.ts", "test/**/*.ts"],
"references": [{ "path": "../di/tsconfig.build.json" }]
}


@@ -5,7 +5,7 @@
"emitDecoratorMetadata": true,
"experimentalDecorators": true,
"strictPropertyInitialization": false,
"types": ["node", "jest"],
"types": ["node"],
"baseUrl": "src",
"tsBuildInfoFile": "dist/typecheck.tsbuildinfo"
},


@@ -0,0 +1 @@
export const N8N_IO_BASE_URL = 'https://api.n8n.io/api/';


@@ -0,0 +1 @@
export const BROWSER_ID_STORAGE_KEY = 'n8n-browserId';


@@ -0,0 +1 @@
export const NPM_COMMUNITY_NODE_SEARCH_API_URL = 'https://api.npms.io/v2/';


@@ -0,0 +1 @@
export const TOOL_EXECUTOR_NODE_NAME = 'PartialExecutionToolExecutor';


@@ -0,0 +1,111 @@
export * from './api';
export * from './browser';
export * from './community-nodes';
export * from './instance';
export * from './execution';
export const LICENSE_FEATURES = {
SHARING: 'feat:sharing',
LDAP: 'feat:ldap',
SAML: 'feat:saml',
OIDC: 'feat:oidc',
MFA_ENFORCEMENT: 'feat:mfaEnforcement',
LOG_STREAMING: 'feat:logStreaming',
ADVANCED_EXECUTION_FILTERS: 'feat:advancedExecutionFilters',
VARIABLES: 'feat:variables',
SOURCE_CONTROL: 'feat:sourceControl',
API_DISABLED: 'feat:apiDisabled',
EXTERNAL_SECRETS: 'feat:externalSecrets',
SHOW_NON_PROD_BANNER: 'feat:showNonProdBanner',
WORKFLOW_HISTORY: 'feat:workflowHistory',
DEBUG_IN_EDITOR: 'feat:debugInEditor',
BINARY_DATA_S3: 'feat:binaryDataS3',
MULTIPLE_MAIN_INSTANCES: 'feat:multipleMainInstances',
WORKER_VIEW: 'feat:workerView',
ADVANCED_PERMISSIONS: 'feat:advancedPermissions',
PROJECT_ROLE_ADMIN: 'feat:projectRole:admin',
PROJECT_ROLE_EDITOR: 'feat:projectRole:editor',
PROJECT_ROLE_VIEWER: 'feat:projectRole:viewer',
AI_ASSISTANT: 'feat:aiAssistant',
ASK_AI: 'feat:askAi',
COMMUNITY_NODES_CUSTOM_REGISTRY: 'feat:communityNodes:customRegistry',
AI_CREDITS: 'feat:aiCredits',
FOLDERS: 'feat:folders',
INSIGHTS_VIEW_SUMMARY: 'feat:insights:viewSummary',
INSIGHTS_VIEW_DASHBOARD: 'feat:insights:viewDashboard',
INSIGHTS_VIEW_HOURLY_DATA: 'feat:insights:viewHourlyData',
API_KEY_SCOPES: 'feat:apiKeyScopes',
WORKFLOW_DIFFS: 'feat:workflowDiffs',
} as const;
export const LICENSE_QUOTAS = {
TRIGGER_LIMIT: 'quota:activeWorkflows',
VARIABLES_LIMIT: 'quota:maxVariables',
USERS_LIMIT: 'quota:users',
WORKFLOW_HISTORY_PRUNE_LIMIT: 'quota:workflowHistoryPrune',
TEAM_PROJECT_LIMIT: 'quota:maxTeamProjects',
AI_CREDITS: 'quota:aiCredits',
INSIGHTS_MAX_HISTORY_DAYS: 'quota:insights:maxHistoryDays',
INSIGHTS_RETENTION_MAX_AGE_DAYS: 'quota:insights:retention:maxAgeDays',
INSIGHTS_RETENTION_PRUNE_INTERVAL_DAYS: 'quota:insights:retention:pruneIntervalDays',
WORKFLOWS_WITH_EVALUATION_LIMIT: 'quota:evaluations:maxWorkflows',
} as const;
export const UNLIMITED_LICENSE_QUOTA = -1;
export type BooleanLicenseFeature = (typeof LICENSE_FEATURES)[keyof typeof LICENSE_FEATURES];
export type NumericLicenseFeature = (typeof LICENSE_QUOTAS)[keyof typeof LICENSE_QUOTAS];
export const LDAP_FEATURE_NAME = 'features.ldap';
export type ConnectionSecurity = 'none' | 'tls' | 'startTls';
export interface LdapConfig {
loginEnabled: boolean;
loginLabel: string;
connectionUrl: string;
allowUnauthorizedCerts: boolean;
connectionSecurity: ConnectionSecurity;
connectionPort: number;
baseDn: string;
bindingAdminDn: string;
bindingAdminPassword: string;
firstNameAttribute: string;
lastNameAttribute: string;
emailAttribute: string;
loginIdAttribute: string;
ldapIdAttribute: string;
userFilter: string;
synchronizationEnabled: boolean;
synchronizationInterval: number; // minutes
searchPageSize: number;
searchTimeout: number;
}
export const LDAP_DEFAULT_CONFIGURATION: LdapConfig = {
loginEnabled: false,
loginLabel: '',
connectionUrl: '',
allowUnauthorizedCerts: false,
connectionSecurity: 'none',
connectionPort: 389,
baseDn: '',
bindingAdminDn: '',
bindingAdminPassword: '',
firstNameAttribute: '',
lastNameAttribute: '',
emailAttribute: '',
loginIdAttribute: '',
ldapIdAttribute: '',
userFilter: '',
synchronizationEnabled: false,
synchronizationInterval: 60,
searchPageSize: 0,
searchTimeout: 60,
};
export { Time } from './time';
export const MIN_PASSWORD_CHAR_LENGTH = 8;
export const MAX_PASSWORD_CHAR_LENGTH = 64;


@@ -0,0 +1,8 @@
export const INSTANCE_ID_HEADER = 'n8n-instance-id';
export const INSTANCE_VERSION_HEADER = 'n8n-version';
export const INSTANCE_TYPES = ['main', 'webhook', 'worker'] as const;
export type InstanceType = (typeof INSTANCE_TYPES)[number];
export const INSTANCE_ROLES = ['unset', 'leader', 'follower'] as const;
export type InstanceRole = (typeof INSTANCE_ROLES)[number];


@@ -0,0 +1,23 @@
/**
* Convert time from any time unit to any other unit
*/
export const Time = {
milliseconds: {
toMinutes: 1 / (60 * 1000),
toSeconds: 1 / 1000,
},
seconds: {
toMilliseconds: 1000,
},
minutes: {
toMilliseconds: 60 * 1000,
},
hours: {
toMilliseconds: 60 * 60 * 1000,
toSeconds: 60 * 60,
},
days: {
toSeconds: 24 * 60 * 60,
toMilliseconds: 24 * 60 * 60 * 1000,
},
};
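The multipliers compose by plain multiplication; a minimal usage sketch, with `Time` repeated so it runs standalone:

```typescript
// The conversion map from above, repeated so this sketch is self-contained.
const Time = {
	milliseconds: { toMinutes: 1 / (60 * 1000), toSeconds: 1 / 1000 },
	seconds: { toMilliseconds: 1000 },
	minutes: { toMilliseconds: 60 * 1000 },
	hours: { toMilliseconds: 60 * 60 * 1000, toSeconds: 60 * 60 },
	days: { toSeconds: 24 * 60 * 60, toMilliseconds: 24 * 60 * 60 * 1000 },
};

// Multiply a quantity by the factor named after the target unit.
const fiveMinutesInMs = 5 * Time.minutes.toMilliseconds; // 300000
const ninetySecondsInMs = 90 * Time.seconds.toMilliseconds; // 90000
const twoDaysInSeconds = 2 * Time.days.toSeconds; // 172800
```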


@@ -0,0 +1,9 @@
#!/usr/bin/env node
import { spawnSync } from 'node:child_process';
const result = spawnSync('n8n-node', ['new', ...process.argv.slice(2)], {
stdio: 'inherit',
});
process.exit(result.status ?? 1);


@@ -0,0 +1,38 @@
export {
WithStringId,
WithTimestamps,
WithTimestampsAndStringId,
jsonColumnType,
datetimeColumnType,
dbType,
JsonColumn,
DateTimeColumn,
} from './entities/abstract-entity';
export { generateNanoId } from './utils/generators';
export { isStringArray } from './utils/is-string-array';
export { isValidEmail } from './utils/is-valid-email';
export { separate } from './utils/separate';
export { sql } from './utils/sql';
export { idStringifier, lowerCaser, objectRetriever, sqlite } from './utils/transformers';
export * from './entities';
export * from './entities/types-db';
export { NoXss } from './utils/validators/no-xss.validator';
export { NoUrl } from './utils/validators/no-url.validator';
export * from './repositories';
export * from './subscribers';
export { Column as DslColumn } from './migrations/dsl/column';
export { CreateTable } from './migrations/dsl/table';
export { sqliteMigrations } from './migrations/sqlite';
export { mysqlMigrations } from './migrations/mysqldb';
export { postgresMigrations } from './migrations/postgresdb';
export { wrapMigration } from './migrations/migration-helpers';
export * from './migrations/migration-types';
export { DbConnection } from './connection/db-connection';
export { DbConnectionOptions } from './connection/db-connection-options';
export { AuthRolesService } from './services/auth.roles.service';


@@ -0,0 +1,37 @@
import debounce from 'lodash/debounce';
/**
* Debounce a class method using `lodash/debounce`.
*
* @param waitMs - Number of milliseconds to debounce method by.
*
* @example
* ```
* class MyClass {
* @Debounce(1000)
* async myMethod() {
* // debounced
* }
* }
* ```
*/
export const Debounce =
(waitMs: number): MethodDecorator =>
<T>(
_: object,
methodName: string | symbol,
originalDescriptor: PropertyDescriptor,
): TypedPropertyDescriptor<T> => ({
configurable: true,
get() {
const debouncedFn = debounce(originalDescriptor.value, waitMs);
Object.defineProperty(this, methodName, {
configurable: false,
value: debouncedFn,
});
return debouncedFn as T;
},
});


@@ -0,0 +1,7 @@
import { UnexpectedError } from 'n8n-workflow';
export class NonMethodError extends UnexpectedError {
constructor(name: string) {
super(`${name} must be a method on a class to use this decorator`);
}
}


@@ -0,0 +1,12 @@
export * from './controller';
export * from './command';
export { Debounce } from './debounce';
export * from './execution-lifecycle';
export { Memoized } from './memoized';
export * from './module';
export * from './multi-main';
export * from './pubsub';
export { Redactable } from './redactable';
export * from './shutdown';
export * from './module/module-metadata';
export { Timed, TimedOptions } from './timed';


@@ -0,0 +1,41 @@
import assert from 'node:assert';
/**
* A decorator that implements memoization for class property getters.
*
* The decorated getter will only be executed once and its value cached for subsequent access
*
* @example
* class Example {
* @Memoized
* get computedValue() {
* // This will only run once and the result will be cached
* return heavyComputation();
* }
* }
*
* @throws If decorator is used on something other than a getter
*/
export function Memoized<T = unknown>(
target: object,
propertyKey: string | symbol,
descriptor?: TypedPropertyDescriptor<T>,
): TypedPropertyDescriptor<T> {
const originalGetter = descriptor?.get;
assert(originalGetter, '@Memoized can only be used on getters');
// Replace the original getter for the first call
descriptor.get = function (this: typeof target.constructor): T {
const value = originalGetter.call(this);
// Define an own property on the instance so later reads bypass the prototype getter
Object.defineProperty(this, propertyKey, {
value,
configurable: false,
enumerable: false,
writable: false,
});
return value;
};
return descriptor;
}
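The same own-property shadowing trick can be shown without decorator syntax; this is a standalone sketch of the caching strategy, not the real `@Memoized` implementation:

```typescript
// Count how often the expensive computation actually runs.
let computations = 0;

class Example {
	// Prototype getter: on first access, cache the result as an own
	// property so later reads never reach this getter again.
	get value(): number {
		computations++;
		const result = 40 + 2;
		Object.defineProperty(this, 'value', {
			value: result,
			configurable: false,
			enumerable: false,
			writable: false,
		});
		return result;
	}
}

const example = new Example();
const first = example.value; // runs the getter
const second = example.value; // served from the cached own property
// computations is 1; first === second === 42
```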


@@ -0,0 +1,66 @@
import { UnexpectedError } from 'n8n-workflow';
type UserLike = {
id: string;
email?: string;
firstName?: string;
lastName?: string;
role: string;
};
export class RedactableError extends UnexpectedError {
constructor(fieldName: string, args: string) {
super(
`Failed to find "${fieldName}" property in argument "${args.toString()}". Please set the decorator \`@Redactable()\` only on \`LogStreamingEventRelay\` methods where the argument contains a "${fieldName}" property.`,
);
}
}
function toRedactable(userLike: UserLike) {
return {
userId: userLike.id,
_email: userLike.email,
_firstName: userLike.firstName,
_lastName: userLike.lastName,
globalRole: userLike.role,
};
}
type FieldName = 'user' | 'inviter' | 'invitee';
/**
* Mark redactable properties in a `{ user: UserLike }` field in an `LogStreamingEventRelay`
* method arg. These properties will be later redacted by the log streaming
* destination based on user prefs. Only for `n8n.audit.*` logs.
*
* Also transform `id` to `userId` and `role` to `globalRole`.
*
* @example
*
* { id: '123'; email: 'test@example.com', role: 'some-role' } ->
* { userId: '123'; _email: 'test@example.com', globalRole: 'some-role' }
*/
export const Redactable =
(fieldName: FieldName = 'user'): MethodDecorator =>
(_target, _propertyName, propertyDescriptor: PropertyDescriptor) => {
// eslint-disable-next-line @typescript-eslint/no-restricted-types
const originalMethod = propertyDescriptor.value as Function;
type MethodArgs = Array<{ [fieldName: string]: UserLike }>;
propertyDescriptor.value = function (...args: MethodArgs) {
const index = args.findIndex((arg) => arg[fieldName] !== undefined);
if (index === -1) throw new RedactableError(fieldName, args.toString());
const userLike = args[index]?.[fieldName];
// @ts-expect-error Transformation
if (userLike) args[index][fieldName] = toRedactable(userLike);
// eslint-disable-next-line @typescript-eslint/no-unsafe-return
return originalMethod.apply(this, args);
};
return propertyDescriptor;
};
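What the decorator does to an event payload can be sketched standalone (abridged `UserLike`, hypothetical values):

```typescript
type UserLike = { id: string; email?: string; role: string };

// Mirrors toRedactable above (firstName/lastName omitted for brevity):
// `id` becomes `userId`, `role` becomes `globalRole`, and redactable
// fields gain a leading underscore.
const toRedactable = (u: UserLike) => ({
	userId: u.id,
	_email: u.email,
	globalRole: u.role,
});

const payload = { user: { id: '123', email: 'test@example.com', role: 'owner' } };
const redacted = { ...payload, user: toRedactable(payload.user) };
// redacted.user → { userId: '123', _email: 'test@example.com', globalRole: 'owner' }
```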


@@ -0,0 +1,42 @@
export interface TimedOptions {
/** Duration (in ms) above which to log a warning. Defaults to `100`. */
threshold?: number;
/** Whether to include method parameters in the log. Defaults to `false`. */
logArgs?: boolean;
}
interface Logger {
warn(message: string, meta?: object): void;
}
/**
* Factory to create decorators to warn when method calls exceed a duration threshold.
*/
export const Timed =
(logger: Logger, msg = 'Slow method call') =>
(options: TimedOptions = {}): MethodDecorator =>
(_target, propertyKey, descriptor: PropertyDescriptor) => {
const originalMethod = descriptor.value as (...args: unknown[]) => unknown;
const thresholdMs = options.threshold ?? 100;
const logArgs = options.logArgs ?? false;
descriptor.value = async function (...args: unknown[]) {
const methodName = `${this.constructor.name}.${String(propertyKey)}`;
const start = performance.now();
const result = await originalMethod.apply(this, args);
const durationMs = performance.now() - start;
if (durationMs > thresholdMs) {
logger.warn(msg, {
method: methodName,
durationMs: Math.round(durationMs),
thresholdMs,
params: logArgs ? args : '[hidden]',
});
}
return result;
};
return descriptor;
};


@@ -0,0 +1,14 @@
export type Class<T = object, A extends unknown[] = unknown[]> = new (...args: A) => T;
type EventHandlerFn = () => Promise<void> | void;
export type EventHandlerClass = Class<Record<string, EventHandlerFn>>;
export type EventHandler<T extends string> = {
/** Class holding the method to call on an event. */
eventHandlerClass: EventHandlerClass;
/** Name of the method to call on an event. */
methodName: string;
/** Name of the event to listen to. */
eventName: T;
};


@@ -0,0 +1,143 @@
import 'reflect-metadata';
/**
* Represents a class constructor type that can be instantiated with 'new'
* @template T The type of instance the constructor creates
*/
// eslint-disable-next-line @typescript-eslint/no-explicit-any
export type Constructable<T = unknown> = new (...args: any[]) => T;
type AbstractConstructable<T = unknown> = abstract new (...args: unknown[]) => T;
type ServiceIdentifier<T = unknown> = Constructable<T> | AbstractConstructable<T>;
type Factory<T = unknown> = (...args: unknown[]) => T;
interface Metadata<T = unknown> {
instance?: T;
factory?: Factory<T>;
}
interface Options<T> {
factory?: Factory<T>;
}
const instances = new Map<ServiceIdentifier, Metadata>();
/**
* Decorator that marks a class as available for dependency injection.
* @param options Configuration options for the injectable class
* @param options.factory Optional factory function to create instances of this class
* @returns A class decorator to be applied to the target class
*/
// eslint-disable-next-line @typescript-eslint/no-restricted-types
export function Service<T = unknown>(): Function;
// eslint-disable-next-line @typescript-eslint/no-restricted-types
export function Service<T = unknown>(options: Options<T>): Function;
export function Service<T>({ factory }: Options<T> = {}) {
return function (target: Constructable<T>) {
instances.set(target, { factory });
return target;
};
}
class DIError extends Error {
constructor(message: string) {
super(`[DI] ${message}`);
}
}
class ContainerClass {
/** Stack to track types being resolved to detect circular dependencies */
private readonly resolutionStack: ServiceIdentifier[] = [];
/**
* Checks if a type is registered in the container
* @template T The type to check for
* @param type The constructor of the type to check
* @returns True if the type is registered (has metadata), false otherwise
*/
has<T>(type: ServiceIdentifier<T>): boolean {
return instances.has(type);
}
/**
* Retrieves or creates an instance of the specified type from the container
* @template T The type of instance to retrieve
* @param type The constructor of the type to retrieve
* @returns An instance of the specified type with all dependencies injected
* @throws {DIError} If circular dependencies are detected or if the type is not injectable
*/
get<T>(type: ServiceIdentifier<T>): T {
const { resolutionStack } = this;
const metadata = instances.get(type) as Metadata<T>;
if (!metadata) {
// Special case: Allow undefined returns for non-decorated constructor params
// when resolving a dependency chain (i.e., resolutionStack not empty)
if (resolutionStack.length) return undefined as T;
throw new DIError(`${type.name} is not decorated with ${Service.name}`);
}
if (metadata?.instance) return metadata.instance as T;
// Add current type to resolution stack before resolving dependencies
resolutionStack.push(type);
try {
let instance: T;
const paramTypes = (Reflect.getMetadata('design:paramtypes', type) ?? []) as Constructable[];
const dependencies = paramTypes.map(<P>(paramType: Constructable<P>, index: number) => {
if (paramType === undefined) {
throw new DIError(
`Circular dependency detected in ${type.name} at index ${index}.\n${resolutionStack.map((t) => t.name).join(' -> ')}\n`,
);
}
return this.get(paramType);
});
if (metadata?.factory) {
instance = metadata.factory(...dependencies);
} else {
// Create new instance with resolved dependencies
instance = new (type as Constructable)(...dependencies) as T;
}
instances.set(type, { ...metadata, instance });
return instance;
} catch (error) {
if (error instanceof TypeError && error.message.toLowerCase().includes('abstract')) {
throw new DIError(`${type.name} is an abstract class, and cannot be instantiated`);
}
throw error;
} finally {
resolutionStack.pop();
}
}
/**
* Manually sets an instance for a specific type in the container
* @template T The type of instance being set
* @param type The constructor of the type to set. This can also be an abstract class
* @param instance The instance to store in the container
*/
set<T>(type: ServiceIdentifier<T>, instance: T): void {
// Preserve any existing metadata (like factory) when setting new instance
const metadata = instances.get(type) ?? {};
instances.set(type, { ...metadata, instance });
}
/** Clears all instantiated instances from the container while preserving type registrations */
reset(): void {
for (const metadata of instances.values()) {
delete metadata.instance;
}
}
}
/**
* Global dependency injection container instance
* Used to retrieve and manage class instances and their dependencies
*/
export const Container = new ContainerClass();
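The singleton behavior `Container.get` provides can be illustrated with a stripped-down sketch (no decorator metadata, no constructor injection, just the cache-on-first-resolve idea):

```typescript
// Minimal cache-on-first-resolve container, for illustration only.
class MiniContainer {
	private readonly instances = new Map<new () => unknown, unknown>();

	get<T>(type: new () => T): T {
		let instance = this.instances.get(type) as T | undefined;
		if (instance === undefined) {
			instance = new type();
			this.instances.set(type, instance);
		}
		return instance;
	}
}

class Config {
	host = 'localhost';
}

const container = new MiniContainer();
const a = container.get(Config);
const b = container.get(Config);
// a === b: repeated lookups return the same cached instance
```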


@@ -0,0 +1,36 @@
import type { Event } from '@sentry/node';
import callsites from 'callsites';
import type { ErrorLevel, ReportingOptions } from './types';
/**
* @deprecated Use `UserError`, `OperationalError` or `UnexpectedError` instead.
*/
export class ApplicationError extends Error {
level: ErrorLevel;
readonly tags: NonNullable<Event['tags']>;
readonly extra?: Event['extra'];
readonly packageName?: string;
constructor(
message: string,
{ level, tags = {}, extra, ...rest }: ErrorOptions & ReportingOptions = {},
) {
super(message, rest);
this.level = level ?? 'error';
this.tags = tags;
this.extra = extra;
try {
const filePath = callsites()[2].getFileName() ?? '';
// eslint-disable-next-line no-useless-escape
const match = /packages\/([^\/]+)\//.exec(filePath)?.[1];
if (match) this.tags.packageName = match;
// eslint-disable-next-line no-empty
} catch {}
}
}


@@ -0,0 +1,2 @@
export { ApplicationError } from './application.error';
export * from './types';


@@ -0,0 +1,16 @@
import type { Event } from '@sentry/node';
export type ErrorLevel = 'fatal' | 'error' | 'warning' | 'info';
export type ErrorTags = NonNullable<Event['tags']>;
export type ReportingOptions = {
/** Whether the error should be reported to Sentry */
shouldReport?: boolean;
/** Whether the error should be logged (defaults to true) */
shouldBeLogged?: boolean;
level?: ErrorLevel;
tags?: ErrorTags;
extra?: Event['extra'];
executionId?: string;
};


@@ -0,0 +1,30 @@
import type { ESLint } from 'eslint';
import { rules } from './rules/index.js';
const plugin = {
meta: {
name: 'n8n-local-rules',
},
configs: {},
// @ts-expect-error Rules type does not match for typescript-eslint and eslint
rules: rules as ESLint.Plugin['rules'],
} satisfies ESLint.Plugin;
export const localRulesPlugin = {
...plugin,
configs: {
recommended: {
plugins: {
'n8n-local-rules': plugin,
},
rules: {
'n8n-local-rules/no-uncaught-json-parse': 'error',
'n8n-local-rules/no-json-parse-json-stringify': 'error',
'n8n-local-rules/no-unneeded-backticks': 'error',
'n8n-local-rules/no-interpolation-in-regular-string': 'error',
'n8n-local-rules/no-unused-param-in-catch-clause': 'error',
'n8n-local-rules/no-useless-catch-throw': 'error',
},
},
},
} satisfies ESLint.Plugin;


@@ -0,0 +1 @@
declare module 'eslint-plugin-lodash';


@@ -0,0 +1,22 @@
import { extensionManifestSchema } from '../src/schema';
import { zodToJsonSchema } from 'zod-to-json-schema';
import { writeFile } from 'fs/promises';
import { dirname, resolve } from 'path';
import { fileURLToPath } from 'url';
import { format, resolveConfig } from 'prettier';
const __dirname = dirname(fileURLToPath(import.meta.url));
const rootDir = resolve(__dirname, '..');
const jsonSchema = zodToJsonSchema(extensionManifestSchema, {
name: 'N8nExtensionSchema',
nameStrategy: 'title',
});
(async () => {
const filepath = 'schema.json';
const schema = JSON.stringify(jsonSchema);
const config = await resolveConfig(filepath);
const formattedSchema = await format(schema, { ...config, filepath });
await writeFile(resolve(rootDir, filepath), formattedSchema);
})();


@@ -0,0 +1 @@
export * from './schema';


@@ -0,0 +1,96 @@
import { z } from 'zod';
/**
* Schema for the extension configuration.
*/
export const extensionManifestSchema = z.object({
/**
* Name of the extension package.
*/
name: z.string(),
/**
* The display name of the extension.
*/
displayName: z.string(),
/**
* Description of the extension package.
*/
description: z.string(),
/**
* Publisher of the extension.
*/
publisher: z.string(),
/**
* Version of the extension package.
*/
version: z.string(),
/**
* Category the extension belongs to.
*/
categories: z.array(z.string()),
/**
* Setup paths for backend and frontend code entry points.
*/
entry: z.object({
/**
* Path to the backend entry file.
*/
backend: z.string(),
/**
* Path to the frontend entry file.
*/
frontend: z.string(),
}),
/**
* Minimum SDK version required to run the extension.
*/
minSDKVersion: z.string(),
/**
* Permissions object specifying allowed access for frontend and backend.
*/
permissions: z.object({
/**
* List of frontend permissions (array of strings).
*/
frontend: z.array(z.string()),
/**
* List of backend permissions (array of strings).
*/
backend: z.array(z.string()),
}),
/**
* List of events that the extension listens to.
*/
events: z.array(z.string()),
/**
* Define extension points for existing functionalities.
*/
extends: z.object({
/**
* Extends the views configuration.
*/
views: z.object({
/**
* Extends the workflows view configuration.
*/
workflows: z.object({
/**
* Header component for the workflows view.
*/
header: z.string(),
}),
}),
}),
});
export type ExtensionManifest = z.infer<typeof extensionManifestSchema>;


@@ -0,0 +1 @@
/// <reference types="vite/client" />


@@ -0,0 +1,27 @@
export abstract class ImapError extends Error {}
/** Error thrown when a connection attempt has timed out */
export class ConnectionTimeoutError extends ImapError {
constructor(
/** timeout in milliseconds that the connection waited before timing out */
readonly timeout?: number,
) {
let message = 'connection timed out';
if (timeout) {
message += `. timeout = ${timeout} ms`;
}
super(message);
}
}
export class ConnectionClosedError extends ImapError {
constructor() {
super('Connection closed unexpectedly');
}
}
export class ConnectionEndedError extends ImapError {
constructor() {
super('Connection ended unexpectedly');
}
}


@@ -0,0 +1,203 @@
import { EventEmitter } from 'events';
import type Imap from 'imap';
import { type ImapMessage } from 'imap';
import { getMessage } from './helpers/get-message';
import { PartData } from './part-data';
import type { Message, MessagePart, SearchCriteria } from './types';
const IMAP_EVENTS = ['alert', 'mail', 'expunge', 'uidvalidity', 'update', 'close', 'end'] as const;
export class ImapSimple extends EventEmitter {
/** flag to determine whether we should suppress ECONNRESET from bubbling up to listener */
private ending = false;
constructor(private readonly imap: Imap) {
super();
// pass most node-imap `Connection` events through 1:1
IMAP_EVENTS.forEach((event) => {
this.imap.on(event, this.emit.bind(this, event));
});
// special handling for `error` event
this.imap.on('error', (e: Error & { code?: string }) => {
// if .end() has been called and an 'ECONNRESET' error is received, don't bubble
if (e && this.ending && e.code?.toUpperCase() === 'ECONNRESET') {
return;
}
this.emit('error', e);
});
}
/** disconnect from the imap server */
end(): void {
// set state flag to suppress 'ECONNRESET' errors that are triggered when .end() is called.
// it is a known issue that has no known fix. This just temporarily ignores that error.
// https://github.com/mscdex/node-imap/issues/391
// https://github.com/mscdex/node-imap/issues/395
this.ending = true;
// using 'close' event to unbind ECONNRESET error handler, because the node-imap
// maintainer claims it is the more reliable event between 'end' and 'close'.
// https://github.com/mscdex/node-imap/issues/394
this.imap.once('close', () => {
this.ending = false;
});
this.imap.end();
}
/**
* Search the currently open mailbox, and retrieve the results
*
* Results are in the form:
*
* [{
* attributes: object,
* parts: [ { which: string, size: number, body: string }, ... ]
* }, ...]
*
* See node-imap's ImapMessage signature for information about `attributes`, `which`, `size`, and `body`.
* For any message part that is a `HEADER`, the body is automatically parsed into an object.
*/
async search(
/** Criteria to use to search. Passed to node-imap's .search() 1:1 */
searchCriteria: SearchCriteria[],
/** Criteria to use to fetch the search results. Passed to node-imap's .fetch() 1:1 */
fetchOptions: Imap.FetchOptions,
/** Optional limit to restrict the number of messages fetched */
limit?: number,
) {
return await new Promise<Message[]>((resolve, reject) => {
this.imap.search(searchCriteria, (e, uids) => {
if (e) {
reject(e);
return;
}
if (uids.length === 0) {
resolve([]);
return;
}
// If limit is specified, take only the first N UIDs
let uidsToFetch = uids;
if (limit && limit > 0 && uids.length > limit) {
uidsToFetch = uids.slice(0, limit);
}
const fetch = this.imap.fetch(uidsToFetch, fetchOptions);
let messagesRetrieved = 0;
const messages: Message[] = [];
const fetchOnMessage = async (message: Imap.ImapMessage, seqNo: number) => {
const msg: Message = await getMessage(message);
msg.seqNo = seqNo;
messages.push(msg);
messagesRetrieved++;
if (messagesRetrieved === uidsToFetch.length) {
resolve(messages.filter((m) => !!m));
}
};
const fetchOnError = (error: Error) => {
fetch.removeListener('message', fetchOnMessage);
fetch.removeListener('end', fetchOnEnd);
reject(error);
};
const fetchOnEnd = () => {
fetch.removeListener('message', fetchOnMessage);
fetch.removeListener('error', fetchOnError);
};
fetch.on('message', fetchOnMessage);
fetch.once('error', fetchOnError);
fetch.once('end', fetchOnEnd);
});
});
}
/** Download a "part" (either a portion of the message body, or an attachment) */
async getPartData(
/** The message returned from `search()` */
message: Message,
/** The message part to be downloaded, from the `message.attributes.struct` Array */
part: MessagePart,
) {
return await new Promise<PartData>((resolve, reject) => {
const fetch = this.imap.fetch(message.attributes.uid, {
bodies: [part.partID],
struct: true,
});
const fetchOnMessage = async (msg: ImapMessage) => {
const result = await getMessage(msg);
if (result.parts.length !== 1) {
reject(new Error('Got ' + result.parts.length + ' parts, should get 1'));
return;
}
const data = result.parts[0].body as string;
const encoding = part.encoding.toUpperCase();
resolve(PartData.fromData(data, encoding));
};
const fetchOnError = (error: Error) => {
fetch.removeListener('message', fetchOnMessage);
fetch.removeListener('end', fetchOnEnd);
reject(error);
};
const fetchOnEnd = () => {
fetch.removeListener('message', fetchOnMessage);
fetch.removeListener('error', fetchOnError);
};
fetch.once('message', fetchOnMessage);
fetch.once('error', fetchOnError);
fetch.once('end', fetchOnEnd);
});
}
/** Adds the provided flag(s) to the specified message(s). */
async addFlags(
/** The messages uid */
uid: number[],
/** The flags to add to the message(s). */
flags: string | string[],
) {
return await new Promise<void>((resolve, reject) => {
this.imap.addFlags(uid, flags, (e) => (e ? reject(e) : resolve()));
});
}
/** Returns a list of mailboxes (folders). */
async getBoxes() {
return await new Promise<Imap.MailBoxes>((resolve, reject) => {
this.imap.getBoxes((e, boxes) => (e ? reject(e) : resolve(boxes)));
});
}
/** Open a mailbox */
async openBox(
/** The name of the box to open */
boxName: string,
): Promise<Imap.Box> {
return await new Promise((resolve, reject) => {
this.imap.openBox(boxName, (e, result) => (e ? reject(e) : resolve(result)));
});
}
/** Close a mailbox */
async closeBox(
/** If autoExpunge is true, any messages marked as Deleted in the currently open mailbox will be removed @default true */
autoExpunge = true,
) {
return await new Promise<void>((resolve, reject) => {
this.imap.closeBox(autoExpunge, (e) => (e ? reject(e) : resolve()));
});
}
}


@@ -0,0 +1,266 @@
import { EventEmitter } from 'events';
import Imap, { type Box, type MailBoxes } from 'imap';
import { Readable } from 'stream';
import type { Mocked } from 'vitest';
import { mock } from 'vitest-mock-extended';
import { ImapSimple } from './imap-simple';
import { PartData } from './part-data';
type MockImap = EventEmitter & {
connect: Mocked<() => unknown>;
fetch: Mocked<() => unknown>;
end: Mocked<() => unknown>;
search: Mocked<(...args: Parameters<Imap['search']>) => unknown>;
sort: Mocked<(...args: Parameters<Imap['sort']>) => unknown>;
openBox: Mocked<
(boxName: string, onOpen: (error: Error | null, box?: Box) => unknown) => unknown
>;
closeBox: Mocked<(...args: Parameters<Imap['closeBox']>) => unknown>;
getBoxes: Mocked<(onBoxes: (error: Error | null, boxes?: MailBoxes) => unknown) => unknown>;
addFlags: Mocked<(...args: Parameters<Imap['addFlags']>) => unknown>;
};
vi.mock('imap', () => {
return {
default: class InlineMockImap extends EventEmitter implements MockImap {
connect = vi.fn();
fetch = vi.fn();
end = vi.fn();
search = vi.fn();
sort = vi.fn();
openBox = vi.fn();
closeBox = vi.fn();
addFlags = vi.fn();
getBoxes = vi.fn();
},
};
});
vi.mock('./part-data', () => ({
// eslint-disable-next-line @typescript-eslint/naming-convention
PartData: { fromData: vi.fn(() => 'decoded') },
}));
describe('ImapSimple', () => {
function createImap() {
const imap = new Imap({ user: 'testuser', password: 'testpass' });
return { imapSimple: new ImapSimple(imap), mockImap: imap as unknown as MockImap };
}
describe('constructor', () => {
it('should forward nonerror events', () => {
const { imapSimple, mockImap } = createImap();
const onMail = vi.fn();
imapSimple.on('mail', onMail);
mockImap.emit('mail', 3);
expect(onMail).toHaveBeenCalledWith(3);
});
it('should suppress ECONNRESET errors if ending', () => {
const { imapSimple, mockImap } = createImap();
const onError = vi.fn();
imapSimple.on('error', onError);
imapSimple.end();
mockImap.emit('error', { message: 'reset', code: 'ECONNRESET' });
expect(onError).not.toHaveBeenCalled();
});
it('should forward ECONNRESET errors if not ending', () => {
const { imapSimple, mockImap } = createImap();
const onError = vi.fn();
imapSimple.on('error', onError);
const error = { message: 'reset', code: 'ECONNRESET' };
mockImap.emit('error', error);
expect(onError).toHaveBeenCalledWith(error);
});
});
describe('search', () => {
it('should resolve with messages returned from fetch', async () => {
const { imapSimple, mockImap } = createImap();
const fetchEmitter = new EventEmitter();
const mockMessages = [{ uid: 1 }, { uid: 2 }, { uid: 3 }];
vi.mocked(mockImap.search).mockImplementation((_criteria, onResult) =>
onResult(
null as unknown as Error,
mockMessages.map((m) => m.uid),
),
);
mockImap.fetch = vi.fn(() => fetchEmitter);
const searchPromise = imapSimple.search(['UNSEEN', ['FROM', 'test@n8n.io']], {
bodies: ['BODY'],
});
expect(mockImap.search).toHaveBeenCalledWith(
['UNSEEN', ['FROM', 'test@n8n.io']],
expect.any(Function),
);
for (const message of mockMessages) {
const messageEmitter = new EventEmitter();
const body = 'body' + message.uid;
const bodyStream = Readable.from(body);
fetchEmitter.emit('message', messageEmitter, message.uid);
messageEmitter.emit('body', bodyStream, { which: 'TEXT', size: Buffer.byteLength(body) });
messageEmitter.emit('attributes', { uid: message.uid });
await new Promise((resolve) => {
bodyStream.on('end', resolve);
});
messageEmitter.emit('end');
}
fetchEmitter.emit('end');
const messages = await searchPromise;
expect(messages).toEqual([
{
attributes: { uid: 1 },
parts: [{ body: 'body1', size: 5, which: 'TEXT' }],
seqNo: 1,
},
{
attributes: { uid: 2 },
parts: [{ body: 'body2', size: 5, which: 'TEXT' }],
seqNo: 2,
},
{
attributes: { uid: 3 },
parts: [{ body: 'body3', size: 5, which: 'TEXT' }],
seqNo: 3,
},
]);
});
});
describe('getPartData', () => {
it('should return decoded part data', async () => {
const { imapSimple, mockImap } = createImap();
const fetchEmitter = new EventEmitter();
mockImap.fetch = vi.fn(() => fetchEmitter);
const message = { attributes: { uid: 123 } };
const part = { partID: '1.2', encoding: 'BASE64' };
const partDataPromise = imapSimple.getPartData(mock(message), mock(part));
const body = 'encoded-body';
const messageEmitter = new EventEmitter();
const bodyStream = Readable.from(body);
fetchEmitter.emit('message', messageEmitter);
messageEmitter.emit('body', bodyStream, {
which: part.partID,
size: Buffer.byteLength(body),
});
messageEmitter.emit('attributes', {});
await new Promise((resolve) => bodyStream.on('end', resolve));
messageEmitter.emit('end');
fetchEmitter.emit('end');
const result = await partDataPromise;
expect(PartData.fromData).toHaveBeenCalledWith('encoded-body', 'BASE64');
expect(result).toBe('decoded');
});
});
describe('openBox', () => {
it('should open the mailbox', async () => {
const { imapSimple, mockImap } = createImap();
const box = mock<Box>({ name: 'INBOX' });
vi.mocked(mockImap.openBox).mockImplementation((_boxName, onOpen) =>
onOpen(null as unknown as Error, box),
);
await expect(imapSimple.openBox('INBOX')).resolves.toEqual(box);
});
it('should reject on error', async () => {
const { imapSimple, mockImap } = createImap();
vi.mocked(mockImap.openBox).mockImplementation((_boxName, onOpen) =>
onOpen(new Error('nope')),
);
await expect(imapSimple.openBox('INBOX')).rejects.toThrow('nope');
});
});
describe('closeBox', () => {
it('should close the mailbox with default autoExpunge=true', async () => {
const { imapSimple, mockImap } = createImap();
vi.mocked(mockImap.closeBox).mockImplementation((_expunge, onClose) =>
onClose(null as unknown as Error),
);
await expect(imapSimple.closeBox()).resolves.toBeUndefined();
expect(mockImap.closeBox).toHaveBeenCalledWith(true, expect.any(Function));
});
it('should close the mailbox with autoExpunge=false', async () => {
const { imapSimple, mockImap } = createImap();
vi.mocked(mockImap.closeBox).mockImplementation((_expunge, onClose) =>
onClose(null as unknown as Error),
);
await expect(imapSimple.closeBox(false)).resolves.toBeUndefined();
expect(mockImap.closeBox).toHaveBeenCalledWith(false, expect.any(Function));
});
it('should reject on error', async () => {
const { imapSimple, mockImap } = createImap();
vi.mocked(mockImap.closeBox).mockImplementation((_expunge, onClose) =>
onClose(new Error('fail')),
);
await expect(imapSimple.closeBox()).rejects.toThrow('fail');
});
});
describe('addFlags', () => {
it('should add flags to messages and resolve', async () => {
const { imapSimple, mockImap } = createImap();
vi.mocked(mockImap.addFlags).mockImplementation((_uids, _flags, onAdd) =>
onAdd(null as unknown as Error),
);
await expect(imapSimple.addFlags([1, 2], ['\\Seen'])).resolves.toBeUndefined();
expect(mockImap.addFlags).toHaveBeenCalledWith([1, 2], ['\\Seen'], expect.any(Function));
});
it('should reject on error', async () => {
const { imapSimple, mockImap } = createImap();
vi.mocked(mockImap.addFlags).mockImplementation((_uids, _flags, onAdd) =>
onAdd(new Error('add flags failed')),
);
await expect(imapSimple.addFlags([1], '\\Seen')).rejects.toThrow('add flags failed');
});
});
describe('getBoxes', () => {
it('should resolve with list of mailboxes', async () => {
const { imapSimple, mockImap } = createImap();
// eslint-disable-next-line @typescript-eslint/naming-convention
const boxes = mock<MailBoxes>({ INBOX: {}, Archive: {} });
vi.mocked(mockImap.getBoxes).mockImplementation((onBoxes) =>
onBoxes(null as unknown as Error, boxes),
);
await expect(imapSimple.getBoxes()).resolves.toEqual(boxes);
expect(mockImap.getBoxes).toHaveBeenCalledWith(expect.any(Function));
});
it('should reject on error', async () => {
const { imapSimple, mockImap } = createImap();
vi.mocked(mockImap.getBoxes).mockImplementation((onBoxes) =>
onBoxes(new Error('getBoxes failed')),
);
await expect(imapSimple.getBoxes()).rejects.toThrow('getBoxes failed');
});
});
});


@@ -0,0 +1,100 @@
/* eslint-disable @typescript-eslint/no-unsafe-member-access */
import Imap from 'imap';
import { ConnectionClosedError, ConnectionEndedError, ConnectionTimeoutError } from './errors';
import { ImapSimple } from './imap-simple';
import type { ImapSimpleOptions, MessagePart } from './types';
/**
* Connect to an IMAP server, returning an ImapSimple instance, which is a wrapper over node-imap that simplifies its API for common use cases.
*/
export async function connect(options: ImapSimpleOptions): Promise<ImapSimple> {
const authTimeout = options.imap.authTimeout ?? 2000;
options.imap.authTimeout = authTimeout;
const imap = new Imap(options.imap);
return await new Promise<ImapSimple>((resolve, reject) => {
const cleanUp = () => {
imap.removeListener('ready', imapOnReady);
imap.removeListener('error', imapOnError);
imap.removeListener('close', imapOnClose);
imap.removeListener('end', imapOnEnd);
};
const imapOnReady = () => {
cleanUp();
resolve(new ImapSimple(imap));
};
const imapOnError = (e: Error & { source?: string }) => {
if (e.source === 'timeout-auth') {
e = new ConnectionTimeoutError(authTimeout);
}
cleanUp();
reject(e);
};
const imapOnEnd = () => {
cleanUp();
reject(new ConnectionEndedError());
};
const imapOnClose = () => {
cleanUp();
reject(new ConnectionClosedError());
};
imap.once('ready', imapOnReady);
imap.once('error', imapOnError);
imap.once('close', imapOnClose);
imap.once('end', imapOnEnd);
if (options.onMail) {
imap.on('mail', options.onMail);
}
if (options.onExpunge) {
imap.on('expunge', options.onExpunge);
}
if (options.onUpdate) {
imap.on('update', options.onUpdate);
}
imap.connect();
});
}
/**
* Given the `message.attributes.struct`, retrieve a flattened array of `parts` objects that describe the structure of
* the different parts of the message's body. Useful for getting a simple list to iterate for the purposes of,
* for example, finding all attachments.
*
* Code taken from http://stackoverflow.com/questions/25247207/how-to-read-and-save-attachments-using-node-imap
*
* @returns {Array} a flattened array of `parts` objects that describe the structure of the different parts of the
* message's body
*/
export function getParts(
/** The `message.attributes.struct` value from the message you wish to retrieve parts for. */
// eslint-disable-next-line @typescript-eslint/no-explicit-any
struct: any,
/** The list of parts to push to. */
parts: MessagePart[] = [],
): MessagePart[] {
for (let i = 0; i < struct.length; i++) {
if (Array.isArray(struct[i])) {
getParts(struct[i], parts);
} else if (struct[i].partID) {
parts.push(struct[i] as MessagePart);
}
}
return parts;
}
export * from './imap-simple';
export * from './errors';
export * from './types';
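The recursive flattening in `getParts` can be exercised on a hand-built `struct` value. The sketch below copies the same traversal logic so it runs standalone; the nested-array shape mirrors what node-imap produces for multipart messages, but the sample data itself is invented for illustration.

```typescript
// Standalone copy of the getParts traversal, for illustration; in the real
// package you would import getParts from the module above.
interface Part {
	partID: string;
}

// eslint-disable-next-line @typescript-eslint/no-explicit-any
function flattenParts(struct: any, parts: Part[] = []): Part[] {
	for (let i = 0; i < struct.length; i++) {
		if (Array.isArray(struct[i])) {
			// Nested arrays represent multipart sub-structures; recurse into them.
			flattenParts(struct[i], parts);
		} else if (struct[i].partID) {
			parts.push(struct[i] as Part);
		}
	}
	return parts;
}

// A hypothetical multipart/mixed structure: a text body plus one attachment.
const struct = [
	{ type: 'mixed' }, // container entry without a partID is skipped
	[{ partID: '1', type: 'TEXT', subtype: 'plain' }],
	[{ partID: '2', type: 'application', subtype: 'pdf' }],
];

console.log(flattenParts(struct).map((p) => p.partID)); // → [ '1', '2' ]
```

The container entry carries no `partID`, so only the leaf parts survive the flattening, which is exactly what makes the result convenient for attachment scanning.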


@@ -0,0 +1,83 @@
import * as iconvlite from 'iconv-lite';
import * as qp from 'quoted-printable';
import * as utf8 from 'utf8';
import * as uuencode from 'uuencode';
export abstract class PartData {
constructor(readonly buffer: Buffer) {}
toString() {
return this.buffer.toString();
}
static fromData(data: string, encoding: string, charset?: string): PartData {
if (encoding === 'BASE64') {
return new Base64PartData(data);
}
if (encoding === 'QUOTED-PRINTABLE') {
return new QuotedPrintablePartData(data, charset);
}
if (encoding === '7BIT') {
return new SevenBitPartData(data);
}
if (encoding === '8BIT' || encoding === 'BINARY') {
return new BinaryPartData(data, charset);
}
if (encoding === 'UUENCODE') {
return new UuencodedPartData(data);
}
// if it gets here, the encoding is not currently supported
throw new Error('Unknown encoding ' + encoding);
}
}
export class Base64PartData extends PartData {
constructor(data: string) {
super(Buffer.from(data, 'base64'));
}
}
export class QuotedPrintablePartData extends PartData {
constructor(data: string, charset?: string) {
const decoded =
charset?.toUpperCase() === 'UTF-8' ? utf8.decode(qp.decode(data)) : qp.decode(data);
super(Buffer.from(decoded));
}
}
export class SevenBitPartData extends PartData {
constructor(data: string) {
super(Buffer.from(data));
}
toString() {
return this.buffer.toString('ascii');
}
}
export class BinaryPartData extends PartData {
constructor(
data: string,
readonly charset: string = 'utf-8',
) {
super(Buffer.from(data));
}
toString() {
return iconvlite.decode(this.buffer, this.charset);
}
}
export class UuencodedPartData extends PartData {
constructor(data: string) {
const parts = data.split('\n'); // split on newline characters
const merged = parts.splice(1, parts.length - 4).join(''); // keep only the payload lines (skip the 'begin' header and the trailing 'end' lines) and join them
const decoded = uuencode.decode(merged);
super(decoded);
}
}
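`PartData.fromData` above dispatches on the MIME transfer encoding; the BASE64 branch reduces to Node's built-in `Buffer`, which the following standalone sketch demonstrates without the class wrapper:

```typescript
// Minimal standalone sketch of the BASE64 branch of PartData.fromData:
// decoding is simply Buffer.from(data, 'base64').
const encoded = Buffer.from('Hello, world!', 'utf-8').toString('base64');
console.log(encoded); // → SGVsbG8sIHdvcmxkIQ==

const decoded = Buffer.from(encoded, 'base64').toString();
console.log(decoded); // → Hello, world!
```

The other branches follow the same shape but lean on third-party decoders (`quoted-printable`, `iconv-lite`, `uuencode`), which is why each gets its own small subclass rather than a single decode function.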


@@ -0,0 +1,88 @@
import {
PartData,
Base64PartData,
QuotedPrintablePartData,
SevenBitPartData,
BinaryPartData,
UuencodedPartData,
} from '../src/part-data';
describe('PartData', () => {
describe('fromData', () => {
it('should return an instance of Base64PartData when encoding is BASE64', () => {
const result = PartData.fromData('data', 'BASE64');
expect(result).toBeInstanceOf(Base64PartData);
});
it('should return an instance of QuotedPrintablePartData when encoding is QUOTED-PRINTABLE', () => {
const result = PartData.fromData('data', 'QUOTED-PRINTABLE');
expect(result).toBeInstanceOf(QuotedPrintablePartData);
});
it('should return an instance of SevenBitPartData when encoding is 7BIT', () => {
const result = PartData.fromData('data', '7BIT');
expect(result).toBeInstanceOf(SevenBitPartData);
});
it('should return an instance of BinaryPartData when encoding is 8BIT or BINARY', () => {
let result = PartData.fromData('data', '8BIT');
expect(result).toBeInstanceOf(BinaryPartData);
result = PartData.fromData('data', 'BINARY');
expect(result).toBeInstanceOf(BinaryPartData);
});
it('should return an instance of UuencodedPartData when encoding is UUENCODE', () => {
const result = PartData.fromData('data', 'UUENCODE');
expect(result).toBeInstanceOf(UuencodedPartData);
});
it('should throw an error when encoding is not supported', () => {
expect(() => PartData.fromData('data', 'UNSUPPORTED')).toThrow(
'Unknown encoding UNSUPPORTED',
);
});
});
});
describe('Base64PartData', () => {
it('should correctly decode base64 data', () => {
const data = Buffer.from('Hello, world!', 'utf-8').toString('base64');
const partData = new Base64PartData(data);
expect(partData.toString()).toBe('Hello, world!');
});
});
describe('QuotedPrintablePartData', () => {
it('should correctly decode quoted-printable data', () => {
const data = '=48=65=6C=6C=6F=2C=20=77=6F=72=6C=64=21'; // 'Hello, world!' in quoted-printable
const partData = new QuotedPrintablePartData(data);
expect(partData.toString()).toBe('Hello, world!');
});
});
describe('SevenBitPartData', () => {
it('should correctly decode 7bit data', () => {
const data = 'Hello, world!';
const partData = new SevenBitPartData(data);
expect(partData.toString()).toBe('Hello, world!');
});
});
describe('BinaryPartData', () => {
it('should correctly decode binary data', () => {
const data = Buffer.from('Hello, world!', 'utf-8').toString();
const partData = new BinaryPartData(data);
expect(partData.toString()).toBe('Hello, world!');
});
});
describe('UuencodedPartData', () => {
it('should correctly decode uuencoded data', () => {
const data = Buffer.from(
'YmVnaW4gNjQ0IGRhdGEKLTImNUw7JlxMKCc9TzxGUUQoMGBgCmAKZW5kCg==',
'base64',
).toString('binary');
const partData = new UuencodedPartData(data);
expect(partData.toString()).toBe('Hello, world!');
});
});


@@ -0,0 +1,43 @@
import type { Config, ImapMessageBodyInfo, ImapMessageAttributes } from 'imap';
export interface ImapSimpleOptions {
/** Options to pass to node-imap constructor. */
imap: Config;
/** Server event emitted when new mail arrives in the currently open mailbox. */
onMail?: ((numNewMail: number) => void) | undefined;
/** Server event emitted when a message was expunged externally. seqNo is the sequence number (instead of the unique UID) of the message that was expunged. If you are caching sequence numbers, all sequence numbers higher than this value MUST be decremented by 1 in order to stay synchronized with the server and to keep correct continuity. */
onExpunge?: ((seqNo: number) => void) | undefined;
/** Server event emitted when message metadata (e.g. flags) changes externally. */
onUpdate?:
| ((seqNo: number, info: { num: number | undefined; text: unknown }) => void)
| undefined;
}
export interface MessagePart {
partID: string;
encoding: 'BASE64' | 'QUOTED-PRINTABLE' | '7BIT' | '8BIT' | 'BINARY' | 'UUENCODE';
type: 'TEXT';
subtype: string;
params?: {
charset?: string;
};
disposition?: {
type: string;
};
}
export interface MessageBodyPart extends ImapMessageBodyInfo {
/** A string when which === 'TEXT'; a complex object when which === 'HEADER'. */
body: string | object;
}
export interface Message {
attributes: ImapMessageAttributes;
parts: MessageBodyPart[];
seqNo?: number;
}
export type SearchCriteria = string | [string, string];
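The `onExpunge` doc comment above prescribes how cached sequence numbers must be kept in sync: drop the expunged message and decrement every cached number higher than it. A small sketch of that bookkeeping (the cache shape here is hypothetical, not part of the package API):

```typescript
// Hypothetical client-side cache of message sequence numbers.
let cachedSeqNos = [1, 2, 3, 4, 5];

// Handler suitable for ImapSimpleOptions.onExpunge: remove the expunged
// message and shift every higher sequence number down by 1 so the cache
// stays synchronized with the server's renumbering.
function onExpunge(seqNo: number): void {
	cachedSeqNos = cachedSeqNos
		.filter((n) => n !== seqNo)
		.map((n) => (n > seqNo ? n - 1 : n));
}

onExpunge(3);
console.log(cachedSeqNos); // → [ 1, 2, 3, 4 ]
```

Skipping the decrement step would leave the cache pointing at the wrong messages after the next fetch, which is why the doc comment marks it as mandatory.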


@@ -0,0 +1,2 @@
export type * from './types';
export { jsonSchemaToZod } from './json-schema-to-zod';


@@ -0,0 +1,15 @@
import type { z } from 'zod';
import { parseSchema } from './parsers/parse-schema';
import type { JsonSchemaToZodOptions, JsonSchema } from './types';
export const jsonSchemaToZod = <T extends z.ZodTypeAny = z.ZodTypeAny>(
schema: JsonSchema,
options: JsonSchemaToZodOptions = {},
): T => {
return parseSchema(schema, {
path: [],
seen: new Map(),
...options,
}) as T;
};
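`jsonSchemaToZod` delegates all real work to `parseSchema`, which recursively selects a parser based on the schema's shape. As a toy illustration of that dispatch pattern, the sketch below maps a tiny subset of JSON Schema to plain predicate functions instead of zod types, so it runs standalone; the real parsers live under `./parsers/` and return `ZodTypeAny` values.

```typescript
// Toy dispatch sketch: map a small subset of JSON Schema to a validator
// predicate, mirroring how parseSchema picks a parser per schema shape.
type Validator = (value: unknown) => boolean;

interface MiniSchema {
	type?: 'string' | 'number' | 'array';
	items?: MiniSchema;
}

function miniSchemaToValidator(schema: MiniSchema): Validator {
	switch (schema.type) {
		case 'string':
			return (v) => typeof v === 'string';
		case 'number':
			return (v) => typeof v === 'number';
		case 'array': {
			// Recurse into the item schema, just like the real array parser.
			const itemCheck = schema.items ? miniSchemaToValidator(schema.items) : () => true;
			return (v) => Array.isArray(v) && v.every(itemCheck);
		}
		default:
			// No recognized type: accept anything, like an empty schema.
			return () => true;
	}
}

const isStringArray = miniSchemaToValidator({ type: 'array', items: { type: 'string' } });
console.log(isStringArray(['a', 'b'])); // → true
console.log(isStringArray(['a', 1])); // → false
```

The `Refs` object in the real implementation (see `types.ts` below) additionally threads a `path` and a `seen` map through this recursion, which is what lets it handle cyclic schemas that a naive sketch like this one cannot.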


@@ -0,0 +1,82 @@
import type { ZodTypeAny } from 'zod';
export type Serializable =
| { [key: string]: Serializable }
| Serializable[]
| string
| number
| boolean
| null;
export type JsonSchema = JsonSchemaObject | boolean;
export type JsonSchemaObject = {
// left permissive by design
type?: string | string[];
// object
properties?: { [key: string]: JsonSchema };
additionalProperties?: JsonSchema;
unevaluatedProperties?: JsonSchema;
patternProperties?: { [key: string]: JsonSchema };
minProperties?: number;
maxProperties?: number;
required?: string[] | boolean;
propertyNames?: JsonSchema;
// array
items?: JsonSchema | JsonSchema[];
additionalItems?: JsonSchema;
minItems?: number;
maxItems?: number;
uniqueItems?: boolean;
// string
minLength?: number;
maxLength?: number;
pattern?: string;
format?: string;
// number
minimum?: number;
maximum?: number;
exclusiveMinimum?: number | boolean;
exclusiveMaximum?: number | boolean;
multipleOf?: number;
// unions
anyOf?: JsonSchema[];
allOf?: JsonSchema[];
oneOf?: JsonSchema[];
if?: JsonSchema;
then?: JsonSchema;
else?: JsonSchema;
// shared
const?: Serializable;
enum?: Serializable[];
errorMessage?: { [key: string]: string | undefined };
description?: string;
default?: Serializable;
readOnly?: boolean;
not?: JsonSchema;
contentEncoding?: string;
nullable?: boolean;
};
export type ParserSelector = (schema: JsonSchemaObject, refs: Refs) => ZodTypeAny;
export type ParserOverride = (schema: JsonSchemaObject, refs: Refs) => ZodTypeAny | undefined;
export type JsonSchemaToZodOptions = {
withoutDefaults?: boolean;
withoutDescribes?: boolean;
parserOverride?: ParserOverride;
depth?: number;
};
export type Refs = JsonSchemaToZodOptions & {
path: Array<string | number>;
seen: Map<object | boolean, { n: number; r: ZodTypeAny | undefined }>;
};


@@ -0,0 +1,143 @@
{
"$schema": "http://json-schema.org/draft-07/schema",
"properties": {
"allOf": {
"allOf": [
{
"type": "boolean"
},
{
"type": "number"
},
{
"type": "string"
}
]
},
"anyOf": {
"anyOf": [
{
"type": "boolean"
},
{
"type": "number"
},
{
"type": "string"
}
]
},
"oneOf": {
"oneOf": [
{
"type": "boolean"
},
{
"type": "number"
},
{
"type": "string"
}
]
},
"array": {
"type": "array",
"items": {
"type": "string"
},
"minItems": 2,
"maxItems": 3
},
"tuple": {
"type": "array",
"items": [
{
"type": "boolean"
},
{
"type": "number"
},
{
"type": "string"
}
],
"minItems": 2,
"maxItems": 3
},
"const": {
"const": "xbox"
},
"enum": {
"enum": ["ps4", "ps5"]
},
"ifThenElse": {
"if": {
"type": "string"
},
"then": {
"const": "x"
},
"else": {
"enum": [1, 2, 3]
}
},
"null": {
"type": "null"
},
"multiple": {
"type": ["array", "boolean"]
},
"objAdditionalTrue": {
"type": "object",
"properties": {
"x": {
"type": "string"
}
},
"additionalProperties": true
},
"objAdditionalFalse": {
"type": "object",
"properties": {
"x": {
"type": "string"
}
},
"additionalProperties": false
},
"objAdditionalNumber": {
"type": "object",
"properties": {
"x": {
"type": "string"
}
},
"additionalProperties": {
"type": "number"
}
},
"objAdditionalOnly": {
"type": "object",
"additionalProperties": {
"type": "number"
}
},
"patternProps": {
"type": "object",
"patternProperties": {
"^x": {
"type": "string"
},
"^y": {
"type": "number"
}
},
"properties": {
"z": {
"type": "string"
}
},
"additionalProperties": false
}
}
}
