chore: clean up duplicate files created by macOS sync

Details:
- Deleted 352 duplicate files with numeric suffixes
- Updated .gitignore to prevent such files in the future
- These files were created by iCloud or other sync-service conflicts
- No impact on project functionality; this only removes redundant files
Yep_Q
2025-09-08 12:06:01 +08:00
parent 1564396449
commit d6f48d6d14
365 changed files with 2039 additions and 68301 deletions

View File

@@ -1,205 +0,0 @@
# AI Workflow Builder Evaluations
This module provides an evaluation framework for testing the AI Workflow Builder's ability to generate correct n8n workflows from natural language prompts.
## Architecture Overview
The evaluation system is split into two distinct modes:
1. **CLI Evaluation** - Runs predefined test cases locally with progress tracking
2. **Langsmith Evaluation** - Integrates with Langsmith for dataset-based evaluation and experiment tracking
### Directory Structure
```
evaluations/
├── cli/ # CLI evaluation implementation
│ ├── runner.ts # Main CLI evaluation orchestrator
│ └── display.ts # Console output and progress tracking
├── langsmith/ # Langsmith integration
│ ├── evaluator.ts # Langsmith-compatible evaluator function
│ └── runner.ts # Langsmith evaluation orchestrator
├── core/ # Shared evaluation logic
│ ├── environment.ts # Test environment setup and configuration
│ └── test-runner.ts # Core test execution logic
├── types/ # Type definitions
│ ├── evaluation.ts # Evaluation result schemas
│ ├── test-result.ts # Test result interfaces
│ └── langsmith.ts # Langsmith-specific types and guards
├── chains/ # LLM evaluation chains
│ ├── test-case-generator.ts # Dynamic test case generation
│ └── workflow-evaluator.ts # LLM-based workflow evaluation
├── utils/ # Utility functions
│ ├── evaluation-calculator.ts # Metrics calculation
│ ├── evaluation-helpers.ts # Common helper functions
│   └── evaluation-reporter.ts  # Report generation
└── index.ts # Main entry point
```
## Implementation Details
### Core Components
#### 1. Test Runner (`core/test-runner.ts`)
The core test runner handles individual test execution:
- Generates workflows using the WorkflowBuilderAgent
- Validates generated workflows using type guards
- Evaluates workflows against test criteria
- Returns structured test results with error handling
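A minimal sketch of that flow, with stand-in types and injected dependencies (the real `runSingleTest` in `core/test-runner.ts` has its own signatures, and the 0.5 pass threshold here is purely illustrative):
```typescript
// Illustrative sketch of the test-runner flow; types and threshold are stand-ins.
interface TestCase { id: string; name: string; prompt: string; }
interface TestResult { testCase: TestCase; passed: boolean; score?: number; error?: string; }

async function runSingleTestSketch(
	testCase: TestCase,
	generateWorkflow: (prompt: string) => Promise<unknown>,        // wraps the WorkflowBuilderAgent
	isWorkflow: (value: unknown) => value is { nodes: unknown[] }, // type guard
	evaluate: (workflow: { nodes: unknown[] }, tc: TestCase) => Promise<number>,
): Promise<TestResult> {
	try {
		const workflow = await generateWorkflow(testCase.prompt); // 1. generate
		if (!isWorkflow(workflow)) {                               // 2. validate via type guard
			return { testCase, passed: false, error: 'Generated output is not a valid workflow' };
		}
		const score = await evaluate(workflow, testCase);          // 3. evaluate against criteria
		return { testCase, passed: score >= 0.5, score };          // 4. structured result
	} catch (error) {
		return { testCase, passed: false, error: String(error) };  // errors are captured, not thrown
	}
}
```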
#### 2. Environment Setup (`core/environment.ts`)
Centralizes environment configuration:
- LLM initialization with API key validation
- Langsmith client setup
- Node types loading
- Concurrency and test generation settings
#### 3. Langsmith Integration
The Langsmith integration provides two key components:
**Evaluator (`langsmith/evaluator.ts`):**
- Converts Langsmith Run objects to evaluation inputs
- Validates all data using type guards before processing
- Safely extracts usage metadata without type coercion
- Returns structured evaluation results
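In outline, the evaluator has roughly this shape. The types below are local stand-ins for the Langsmith SDK types, and the feedback key is hypothetical:
```typescript
// Sketch: turning a Langsmith run into a structured evaluation result.
interface RunLike { inputs?: Record<string, unknown>; outputs?: Record<string, unknown>; }
interface EvalFeedback { key: string; score: number; comment?: string; }

// Type guard: validate before processing, never coerce (shallow check for brevity).
function hasWorkflowOutput(value: unknown): value is { workflow: { nodes: unknown[] } } {
	return typeof value === 'object' && value !== null && 'workflow' in value;
}

async function evaluateRunSketch(
	run: RunLike,
	scoreWorkflow: (workflow: { nodes: unknown[] }) => Promise<number>, // LLM-based evaluator chain
): Promise<EvalFeedback> {
	if (!hasWorkflowOutput(run.outputs)) {
		return { key: 'workflow_quality', score: 0, comment: 'Run produced no valid workflow' };
	}
	return { key: 'workflow_quality', score: await scoreWorkflow(run.outputs.workflow) };
}
```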
**Runner (`langsmith/runner.ts`):**
- Creates workflow generation functions compatible with Langsmith
- Validates message content before processing
- Extracts usage metrics safely from message metadata
- Handles dataset verification and error reporting
#### 4. CLI Evaluation
The CLI evaluation provides local testing capabilities:
**Runner (`cli/runner.ts`):**
- Orchestrates parallel test execution with concurrency control
- Manages test case generation when enabled
- Generates detailed reports and saves results
**Display (`cli/display.ts`):**
- Progress bar management for real-time feedback
- Console output formatting
- Error display and reporting
### Evaluation Metrics
The system evaluates workflows across five categories:
1. **Functionality** (30% weight)
- Does the workflow achieve the intended goal?
- Are the right nodes selected?
2. **Connections** (25% weight)
- Are nodes properly connected?
- Is data flow logical?
3. **Expressions** (20% weight)
- Are n8n expressions syntactically correct?
- Do they reference valid data paths?
4. **Node Configuration** (15% weight)
- Are node parameters properly set?
- Are required fields populated?
5. **Structural Similarity** (10% weight, optional)
- How closely does the structure match a reference workflow?
- Only evaluated when reference workflow is provided
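Assuming each category is scored in [0, 1], the weights combine along these lines (a hedged sketch; the actual math lives in `utils/evaluation-calculator.ts`, and the re-normalization step is an assumption):
```typescript
// Illustrative weighted-score combination; category scores assumed in 0..1.
const weights = {
	functionality: 0.3,
	connections: 0.25,
	expressions: 0.2,
	nodeConfiguration: 0.15,
	structuralSimilarity: 0.1, // counted only when a reference workflow exists
};
type Category = keyof typeof weights;

function overallScore(scores: Record<Category, number>, hasReference: boolean): number {
	const entries = (Object.entries(weights) as Array<[Category, number]>).filter(
		([category]) => hasReference || category !== 'structuralSimilarity',
	);
	const totalWeight = entries.reduce((sum, [, w]) => sum + w, 0);
	const weighted = entries.reduce((sum, [category, w]) => sum + w * scores[category], 0);
	return weighted / totalWeight; // re-normalize when structural similarity is skipped
}
```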
### Violation Severity Levels
Violations are categorized by severity:
- **Critical** (-40 to -50 points): Workflow-breaking issues
- **Major** (-15 to -25 points): Significant problems affecting functionality
- **Minor** (-5 to -15 points): Non-critical issues or inefficiencies
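For instance, deductions might be applied to a 100-point category score like this (illustrative midpoint values; the actual penalties come from the evaluator chain):
```typescript
// Illustrative severity deductions applied to a 100-point score.
type Severity = 'critical' | 'major' | 'minor';
const deductions: Record<Severity, number> = { critical: -45, major: -20, minor: -10 };

function applyViolations(base: number, violations: Severity[]): number {
	const total = violations.reduce((sum, severity) => sum + deductions[severity], 0);
	return Math.max(0, base + total); // clamp so a score never goes negative
}

applyViolations(100, ['major', 'minor']); // => 70
```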
## Running Evaluations
### CLI Evaluation
```bash
# Run with default settings
pnpm eval
# With additional generated test cases
GENERATE_TEST_CASES=true pnpm eval
# With custom concurrency
EVALUATION_CONCURRENCY=10 pnpm eval
```
### Langsmith Evaluation
```bash
# Set required environment variables
export LANGSMITH_API_KEY=your_api_key
# Optionally specify dataset
export LANGSMITH_DATASET_NAME=your_dataset_name
# Run evaluation
pnpm eval:langsmith
```
## Configuration
### Required Files
#### nodes.json
**IMPORTANT**: The evaluation framework requires a `nodes.json` file in the evaluations root directory (`evaluations/nodes.json`).
This file contains all n8n node type definitions and is used by the AI Workflow Builder agent to:
- Know what nodes are available in n8n
- Understand node parameters and their schemas
- Generate valid workflows with proper node configurations
**Why is this required?**
The AI Workflow Builder agent needs access to node definitions to generate workflows. In a normal n8n runtime, these definitions are loaded automatically. However, since the evaluation framework instantiates the agent without a running n8n instance, we must provide the node definitions manually via `nodes.json`.
**How to generate nodes.json:**
1. Run your n8n instance
2. Download the node definitions from the locally running n8n instance (http://localhost:5678/types/nodes.json)
3. Save the node definitions to `evaluations/nodes.json`
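If you prefer to script step 2, something along these lines works (a sketch; assumes Node 18+ with built-in `fetch` and an ESM context for top-level `await`):
```typescript
// fetch-nodes.ts - sketch for downloading node definitions from a local n8n instance.
import { writeFileSync } from 'fs';

const url = 'http://localhost:5678/types/nodes.json';
const response = await fetch(url);
if (!response.ok) throw new Error(`Failed to fetch ${url}: ${response.status}`);
writeFileSync('evaluations/nodes.json', JSON.stringify(await response.json(), null, 2));
console.log('Saved evaluations/nodes.json');
```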
The evaluation will fail with a clear error message if `nodes.json` is missing.
### Environment Variables
- `N8N_AI_ANTHROPIC_KEY` - Required for LLM access
- `LANGSMITH_API_KEY` - Required for Langsmith evaluation
- `USE_LANGSMITH_EVAL` - Set to "true" to use Langsmith mode
- `LANGSMITH_DATASET_NAME` - Override default dataset name
- `EVALUATION_CONCURRENCY` - Number of parallel test executions (default: 5)
- `GENERATE_TEST_CASES` - Set to "true" to generate additional test cases
- `LLM_MODEL` - Model identifier for metadata tracking
## Output
### CLI Evaluation Output
- **Console Display**: Real-time progress, test results, and summary statistics
- **Markdown Report**: `results/evaluation-report-[timestamp].md`
- **JSON Results**: `results/evaluation-results-[timestamp].json`
### Langsmith Evaluation Output
- Results are stored in Langsmith dashboard
- Experiment name format: `workflow-builder-evaluation-[date]`
- Includes detailed metrics for each evaluation category
## Adding New Test Cases
Test cases are defined in `chains/test-case-generator.ts`. Each test case requires:
- `id`: Unique identifier
- `name`: Descriptive name
- `prompt`: Natural language description of the workflow to generate
- `referenceWorkflow` (optional): Expected workflow structure for comparison
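For example, a new case might look like this (hypothetical entry; match the exact shape used by the existing cases in `chains/test-case-generator.ts`):
```typescript
// Hypothetical test case; referenceWorkflow is optional.
const exampleTestCase = {
	id: 'http-to-slack',
	name: 'HTTP request summarized to Slack',
	prompt: 'Fetch the latest orders from an HTTP API and post a summary message to Slack',
	// referenceWorkflow: { ... } // when provided, structural similarity is also scored
};
```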
## Extending the Framework
To add new evaluation metrics:
1. Update the `EvaluationResult` schema in `types/evaluation.ts`
2. Modify the evaluation logic in `chains/workflow-evaluator.ts`
3. Update the evaluator in `langsmith/evaluator.ts` to include new metrics
4. Adjust weight calculations in `utils/evaluation-calculator.ts`
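As a starting point, a hypothetical `security` metric could begin with a type extension like this (a sketch only; follow whatever schema style `types/evaluation.ts` already uses):
```typescript
// Hypothetical extension of the evaluation result with a new `security` category.
interface EvaluationResultExtended {
	functionality: number;
	connections: number;
	expressions: number;
	nodeConfiguration: number;
	structuralSimilarity?: number; // optional, reference-workflow dependent
	security: number; // new metric; rebalance weights so they still sum to 1
}
```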

View File

@@ -1,27 +0,0 @@
import { runCliEvaluation } from './cli/runner.js';
import { runLangsmithEvaluation } from './langsmith/runner.js';
// Re-export for external use if needed
export { runCliEvaluation } from './cli/runner.js';
export { runLangsmithEvaluation } from './langsmith/runner.js';
export { runSingleTest } from './core/test-runner.js';
export { setupTestEnvironment, createAgent } from './core/environment.js';
/**
* Main entry point for evaluation
* Determines which evaluation mode to run based on environment variables
*/
async function main(): Promise<void> {
const useLangsmith = process.env.USE_LANGSMITH_EVAL === 'true';
if (useLangsmith) {
await runLangsmithEvaluation();
} else {
await runCliEvaluation();
}
}
// Run if called directly
if (require.main === module) {
main().catch(console.error);
}

View File

@@ -1,106 +0,0 @@
import { readFileSync, existsSync } from 'fs';
import { jsonParse, type INodeTypeDescription } from 'n8n-workflow';
import { join } from 'path';
interface NodeWithVersion extends INodeTypeDescription {
version: number | number[];
defaultVersion?: number;
}
export function loadNodesFromFile(): INodeTypeDescription[] {
console.log('Loading nodes from nodes.json...');
const nodesPath = join(__dirname, 'nodes.json');
// Check if nodes.json exists
if (!existsSync(nodesPath)) {
const errorMessage = `
ERROR: nodes.json file not found at ${nodesPath}
The nodes.json file is required for evaluations to work properly.
Please ensure nodes.json is present in the evaluations root directory.
To generate nodes.json:
1. Run the n8n instance
2. Export the node definitions to evaluations/nodes.json
3. This file contains all available n8n node type definitions needed for validation
Without nodes.json, the evaluator cannot validate node types and parameters.
`;
console.error(errorMessage);
throw new Error('nodes.json file not found. See console output for details.');
}
const nodesData = readFileSync(nodesPath, 'utf-8');
const allNodes = jsonParse<NodeWithVersion[]>(nodesData);
console.log(`Total nodes loaded: ${allNodes.length}`);
// Group nodes by name
const nodesByName = new Map<string, NodeWithVersion[]>();
for (const node of allNodes) {
const existing = nodesByName.get(node.name) ?? [];
existing.push(node);
nodesByName.set(node.name, existing);
}
console.log(`Unique node types: ${nodesByName.size}`);
// Extract latest version for each node
const latestNodes: INodeTypeDescription[] = [];
let multiVersionCount = 0;
for (const [_nodeName, versions] of nodesByName.entries()) {
if (versions.length > 1) {
multiVersionCount++;
// Find the node with the default version
let selectedNode: NodeWithVersion | undefined;
for (const node of versions) {
// Select the node that matches the default version
if (node.defaultVersion !== undefined) {
if (Array.isArray(node.version)) {
// For array versions, check if it includes the default version
if (node.version.includes(node.defaultVersion)) {
selectedNode = node;
}
} else if (node.version === node.defaultVersion) {
selectedNode = node;
}
}
}
// If we found a matching node, use it; otherwise use the first one
if (selectedNode) {
latestNodes.push(selectedNode);
} else {
latestNodes.push(versions[0]);
}
} else {
// Single version node
latestNodes.push(versions[0]);
}
}
console.log(`\nNodes with multiple versions: ${multiVersionCount}`);
console.log(`Final node count: ${latestNodes.length}`);
// Filter out hidden nodes
const visibleNodes = latestNodes.filter((node) => !node.hidden);
console.log(`Visible nodes (after filtering hidden): ${visibleNodes.length}\n`);
return visibleNodes;
}
// Helper function to get specific node version for testing
export function getNodeVersion(nodes: INodeTypeDescription[], nodeName: string): string {
const node = nodes.find((n) => n.name === nodeName);
if (!node) return 'not found';
const version = (node as NodeWithVersion).version;
if (Array.isArray(version)) {
return `[${version.join(', ')}]`;
}
return version?.toString() || 'unknown';
}

View File

@@ -1,184 +0,0 @@
import type { BaseChatModel } from '@langchain/core/language_models/chat_models';
import { LangChainTracer } from '@langchain/core/tracers/tracer_langchain';
import { MemorySaver } from '@langchain/langgraph';
import { Logger } from '@n8n/backend-common';
import { Service } from '@n8n/di';
import { AiAssistantClient } from '@n8n_io/ai-assistant-sdk';
import { Client } from 'langsmith';
import { INodeTypes } from 'n8n-workflow';
import type { IUser, INodeTypeDescription } from 'n8n-workflow';
import { LLMServiceError } from './errors';
import { anthropicClaudeSonnet4, gpt41mini } from './llm-config';
import { WorkflowBuilderAgent, type ChatPayload } from './workflow-builder-agent';
@Service()
export class AiWorkflowBuilderService {
private parsedNodeTypes: INodeTypeDescription[] = [];
private llmSimpleTask: BaseChatModel | undefined;
private llmComplexTask: BaseChatModel | undefined;
private tracingClient: Client | undefined;
private checkpointer = new MemorySaver();
private agent: WorkflowBuilderAgent | undefined;
constructor(
private readonly nodeTypes: INodeTypes,
private readonly client?: AiAssistantClient,
private readonly logger?: Logger,
private readonly instanceUrl?: string,
) {
this.parsedNodeTypes = this.getNodeTypes();
}
private async setupModels(user?: IUser) {
try {
if (this.llmSimpleTask && this.llmComplexTask) {
return;
}
// If client is provided, use it for API proxy
if (this.client && user) {
const authHeaders = await this.client.generateApiProxyCredentials(user);
// Extract baseUrl from client configuration
const baseUrl = this.client.getApiProxyBaseUrl();
this.llmSimpleTask = await gpt41mini({
baseUrl: baseUrl + '/openai',
// When using the api-proxy the key is populated automatically; we just pass a placeholder
apiKey: '-',
headers: {
Authorization: authHeaders.apiKey,
},
});
this.llmComplexTask = await anthropicClaudeSonnet4({
baseUrl: baseUrl + '/anthropic',
apiKey: '-',
headers: {
Authorization: authHeaders.apiKey,
'anthropic-beta': 'prompt-caching-2024-07-31',
},
});
this.tracingClient = new Client({
apiKey: '-',
apiUrl: baseUrl + '/langsmith',
autoBatchTracing: false,
traceBatchConcurrency: 1,
fetchOptions: {
headers: {
Authorization: authHeaders.apiKey,
},
},
});
return;
}
// If base URL is not set, use environment variables
this.llmSimpleTask = await gpt41mini({
apiKey: process.env.N8N_AI_OPENAI_API_KEY ?? '',
});
this.llmComplexTask = await anthropicClaudeSonnet4({
apiKey: process.env.N8N_AI_ANTHROPIC_KEY ?? '',
headers: {
'anthropic-beta': 'prompt-caching-2024-07-31',
},
});
} catch (error) {
const llmError = new LLMServiceError('Failed to connect to LLM Provider', {
cause: error,
tags: {
hasClient: !!this.client,
hasUser: !!user,
},
});
throw llmError;
}
}
private getNodeTypes(): INodeTypeDescription[] {
// These types are ignored because they tend to cause issues when generating workflows
const ignoredTypes = [
'@n8n/n8n-nodes-langchain.toolVectorStore',
'@n8n/n8n-nodes-langchain.documentGithubLoader',
'@n8n/n8n-nodes-langchain.code',
];
const nodeTypesKeys = Object.keys(this.nodeTypes.getKnownTypes());
const nodeTypes = nodeTypesKeys
.filter((nodeType) => !ignoredTypes.includes(nodeType))
.map((nodeName) => {
try {
return { ...this.nodeTypes.getByNameAndVersion(nodeName).description, name: nodeName };
} catch (error) {
this.logger?.error('Error getting node type', {
nodeName,
error: error instanceof Error ? error.message : 'Unknown error',
});
return undefined;
}
})
.filter(
(nodeType): nodeType is INodeTypeDescription =>
nodeType !== undefined && nodeType.hidden !== true,
)
.map((nodeType, _index, nodeTypes: INodeTypeDescription[]) => {
// If the node type is a tool, we need to find the corresponding non-tool node type
// and merge the two node types to get the full node type description.
const isTool = nodeType.name.endsWith('Tool');
if (!isTool) return nodeType;
const nonToolNode = nodeTypes.find((nt) => nt.name === nodeType.name.replace('Tool', ''));
if (!nonToolNode) return nodeType;
return {
...nonToolNode,
...nodeType,
};
});
return nodeTypes;
}
private async getAgent(user?: IUser) {
if (!this.llmComplexTask || !this.llmSimpleTask) {
await this.setupModels(user);
}
if (!this.llmComplexTask || !this.llmSimpleTask) {
throw new LLMServiceError('Failed to initialize LLM models');
}
this.agent ??= new WorkflowBuilderAgent({
parsedNodeTypes: this.parsedNodeTypes,
// We use Sonnet both for simple and complex tasks
llmSimpleTask: this.llmComplexTask,
llmComplexTask: this.llmComplexTask,
logger: this.logger,
checkpointer: this.checkpointer,
tracer: this.tracingClient
? new LangChainTracer({ client: this.tracingClient, projectName: 'n8n-workflow-builder' })
: undefined,
instanceUrl: this.instanceUrl,
});
return this.agent;
}
async *chat(payload: ChatPayload, user?: IUser, abortSignal?: AbortSignal) {
const agent = await this.getAgent(user);
for await (const output of agent.chat(payload, user?.id?.toString(), abortSignal)) {
yield output;
}
}
async getSessions(workflowId: string | undefined, user?: IUser) {
const agent = await this.getAgent(user);
return await agent.getSessions(workflowId, user?.id?.toString());
}
}

View File

@@ -1,3 +0,0 @@
export const MAX_AI_BUILDER_PROMPT_LENGTH = 1000; // characters
export const DEFAULT_AUTO_COMPACT_THRESHOLD_TOKENS = 20_000; // Tokens threshold for auto-compacting the conversation

View File

@@ -1,3 +0,0 @@
export * from './ai-workflow-builder-agent.service';
export * from './types';
export * from './workflow-state';

View File

@@ -1,60 +0,0 @@
// Different LLMConfig type for this file - specific to LLM providers
interface LLMProviderConfig {
apiKey: string;
baseUrl?: string;
headers?: Record<string, string>;
}
export const o4mini = async (config: LLMProviderConfig) => {
const { ChatOpenAI } = await import('@langchain/openai');
return new ChatOpenAI({
model: 'o4-mini-2025-04-16',
apiKey: config.apiKey,
configuration: {
baseURL: config.baseUrl,
defaultHeaders: config.headers,
},
});
};
export const gpt41mini = async (config: LLMProviderConfig) => {
const { ChatOpenAI } = await import('@langchain/openai');
return new ChatOpenAI({
model: 'gpt-4.1-mini-2025-04-14',
apiKey: config.apiKey,
temperature: 0,
maxTokens: -1,
configuration: {
baseURL: config.baseUrl,
defaultHeaders: config.headers,
},
});
};
export const gpt41 = async (config: LLMProviderConfig) => {
const { ChatOpenAI } = await import('@langchain/openai');
return new ChatOpenAI({
model: 'gpt-4.1-2025-04-14',
apiKey: config.apiKey,
temperature: 0.3,
maxTokens: -1,
configuration: {
baseURL: config.baseUrl,
defaultHeaders: config.headers,
},
});
};
export const anthropicClaudeSonnet4 = async (config: LLMProviderConfig) => {
const { ChatAnthropic } = await import('@langchain/anthropic');
return new ChatAnthropic({
model: 'claude-sonnet-4-20250514',
apiKey: config.apiKey,
temperature: 0,
maxTokens: 16000,
anthropicApiUrl: config.baseUrl,
clientOptions: {
defaultHeaders: config.headers,
},
});
};

View File

@@ -1,500 +0,0 @@
import type { BaseChatModel } from '@langchain/core/language_models/chat_models';
import type { ToolMessage } from '@langchain/core/messages';
import { AIMessage, HumanMessage, RemoveMessage } from '@langchain/core/messages';
import type { RunnableConfig } from '@langchain/core/runnables';
import type { LangChainTracer } from '@langchain/core/tracers/tracer_langchain';
import { StateGraph, MemorySaver, END, GraphRecursionError } from '@langchain/langgraph';
import type { Logger } from '@n8n/backend-common';
import {
ApplicationError,
type INodeTypeDescription,
type IRunExecutionData,
type IWorkflowBase,
type NodeExecutionSchema,
} from 'n8n-workflow';
import { workflowNameChain } from '@/chains/workflow-name';
import { DEFAULT_AUTO_COMPACT_THRESHOLD_TOKENS, MAX_AI_BUILDER_PROMPT_LENGTH } from '@/constants';
import { conversationCompactChain } from './chains/conversation-compact';
import { LLMServiceError, ValidationError } from './errors';
import { createAddNodeTool } from './tools/add-node.tool';
import { createConnectNodesTool } from './tools/connect-nodes.tool';
import { createNodeDetailsTool } from './tools/node-details.tool';
import { createNodeSearchTool } from './tools/node-search.tool';
import { mainAgentPrompt } from './tools/prompts/main-agent.prompt';
import { createRemoveNodeTool } from './tools/remove-node.tool';
import { createUpdateNodeParametersTool } from './tools/update-node-parameters.tool';
import type { SimpleWorkflow } from './types/workflow';
import { processOperations } from './utils/operations-processor';
import { createStreamProcessor, formatMessages } from './utils/stream-processor';
import { extractLastTokenUsage } from './utils/token-usage';
import { executeToolsInParallel } from './utils/tool-executor';
import { WorkflowState } from './workflow-state';
export interface WorkflowBuilderAgentConfig {
parsedNodeTypes: INodeTypeDescription[];
llmSimpleTask: BaseChatModel;
llmComplexTask: BaseChatModel;
logger?: Logger;
checkpointer?: MemorySaver;
tracer?: LangChainTracer;
autoCompactThresholdTokens?: number;
instanceUrl?: string;
}
export interface ChatPayload {
message: string;
workflowContext?: {
executionSchema?: NodeExecutionSchema[];
currentWorkflow?: Partial<IWorkflowBase>;
executionData?: IRunExecutionData['resultData'];
};
}
export class WorkflowBuilderAgent {
private checkpointer: MemorySaver;
private parsedNodeTypes: INodeTypeDescription[];
private llmSimpleTask: BaseChatModel;
private llmComplexTask: BaseChatModel;
private logger?: Logger;
private tracer?: LangChainTracer;
private autoCompactThresholdTokens: number;
private instanceUrl?: string;
constructor(config: WorkflowBuilderAgentConfig) {
this.parsedNodeTypes = config.parsedNodeTypes;
this.llmSimpleTask = config.llmSimpleTask;
this.llmComplexTask = config.llmComplexTask;
this.logger = config.logger;
this.checkpointer = config.checkpointer ?? new MemorySaver();
this.tracer = config.tracer;
this.autoCompactThresholdTokens =
config.autoCompactThresholdTokens ?? DEFAULT_AUTO_COMPACT_THRESHOLD_TOKENS;
this.instanceUrl = config.instanceUrl;
}
private createWorkflow() {
const tools = [
createNodeSearchTool(this.parsedNodeTypes),
createNodeDetailsTool(this.parsedNodeTypes),
createAddNodeTool(this.parsedNodeTypes),
createConnectNodesTool(this.parsedNodeTypes, this.logger),
createRemoveNodeTool(this.logger),
createUpdateNodeParametersTool(
this.parsedNodeTypes,
this.llmComplexTask,
this.logger,
this.instanceUrl,
),
];
// Create a map for quick tool lookup
const toolMap = new Map(tools.map((tool) => [tool.name, tool]));
const callModel = async (state: typeof WorkflowState.State) => {
if (!this.llmSimpleTask) {
throw new LLMServiceError('LLM not setup');
}
if (typeof this.llmSimpleTask.bindTools !== 'function') {
throw new LLMServiceError('LLM does not support tools', {
llmModel: this.llmSimpleTask._llmType(),
});
}
const prompt = await mainAgentPrompt.invoke({
...state,
executionData: state.workflowContext?.executionData ?? {},
executionSchema: state.workflowContext?.executionSchema ?? [],
instanceUrl: this.instanceUrl,
});
const response = await this.llmSimpleTask.bindTools(tools).invoke(prompt);
return { messages: [response] };
};
const shouldAutoCompact = ({ messages }: typeof WorkflowState.State) => {
const tokenUsage = extractLastTokenUsage(messages);
if (!tokenUsage) {
this.logger?.debug('No token usage metadata found');
return false;
}
const tokensUsed = tokenUsage.input_tokens + tokenUsage.output_tokens;
this.logger?.debug('Token usage', {
inputTokens: tokenUsage.input_tokens,
outputTokens: tokenUsage.output_tokens,
totalTokens: tokensUsed,
});
return tokensUsed > this.autoCompactThresholdTokens;
};
const shouldModifyState = (state: typeof WorkflowState.State) => {
const { messages, workflowContext } = state;
const lastHumanMessage = messages.findLast((m) => m instanceof HumanMessage)!; // There should always be at least one human message in the array
if (lastHumanMessage.content === '/compact') {
return 'compact_messages';
}
if (lastHumanMessage.content === '/clear') {
return 'delete_messages';
}
// If the workflow is empty (no nodes),
// we consider it initial generation request and auto-generate a name for the workflow.
if (workflowContext?.currentWorkflow?.nodes?.length === 0 && messages.length === 1) {
return 'create_workflow_name';
}
if (shouldAutoCompact(state)) {
return 'auto_compact_messages';
}
return 'agent';
};
const shouldContinue = ({ messages }: typeof WorkflowState.State) => {
const lastMessage: AIMessage = messages[messages.length - 1];
if (lastMessage.tool_calls?.length) {
return 'tools';
}
return END;
};
const customToolExecutor = async (state: typeof WorkflowState.State) => {
return await executeToolsInParallel({ state, toolMap });
};
function deleteMessages(state: typeof WorkflowState.State) {
const messages = state.messages;
const stateUpdate: Partial<typeof WorkflowState.State> = {
workflowOperations: null,
workflowContext: {},
messages: messages.map((m) => new RemoveMessage({ id: m.id! })) ?? [],
workflowJSON: {
nodes: [],
connections: {},
name: '',
},
};
return stateUpdate;
}
/**
* Compacts the conversation history by summarizing it
* and removing original messages.
* Might be triggered manually by the user with `/compact` message, or run automatically
* when the conversation history exceeds a certain token limit.
*/
const compactSession = async (state: typeof WorkflowState.State) => {
if (!this.llmSimpleTask) {
throw new LLMServiceError('LLM not setup');
}
const { messages, previousSummary } = state;
const lastHumanMessage = messages[messages.length - 1] satisfies HumanMessage;
const isAutoCompact = lastHumanMessage.content !== '/compact';
this.logger?.debug('Compacting conversation history', {
isAutoCompact,
});
const compactedMessages = await conversationCompactChain(
this.llmSimpleTask,
messages,
previousSummary,
);
// The summarized conversation history will become a part of system prompt
// and will be used in the next LLM call.
// We will remove all messages and replace them with a mock HumanMessage and AIMessage
// to indicate that the conversation history has been compacted.
// If this is an auto-compact, we will also keep the last human message, as it will continue executing the workflow.
return {
previousSummary: compactedMessages.summaryPlain,
messages: [
...messages.map((m) => new RemoveMessage({ id: m.id! })),
new HumanMessage('Please compress the conversation history'),
new AIMessage('Successfully compacted conversation history'),
...(isAutoCompact ? [new HumanMessage({ content: lastHumanMessage.content })] : []),
],
};
};
/**
* Creates a workflow name based on the initial user message.
*/
const createWorkflowName = async (state: typeof WorkflowState.State) => {
if (!this.llmSimpleTask) {
throw new LLMServiceError('LLM not setup');
}
const { workflowJSON, messages } = state;
if (messages.length === 1 && messages[0] instanceof HumanMessage) {
const initialMessage = messages[0] satisfies HumanMessage;
if (typeof initialMessage.content !== 'string') {
this.logger?.debug(
'Initial message content is not a string, skipping workflow name generation',
);
return {};
}
this.logger?.debug('Generating workflow name');
const { name } = await workflowNameChain(this.llmSimpleTask, initialMessage.content);
return {
workflowJSON: {
...workflowJSON,
name,
},
};
}
return {};
};
const workflow = new StateGraph(WorkflowState)
.addNode('agent', callModel)
.addNode('tools', customToolExecutor)
.addNode('process_operations', processOperations)
.addNode('delete_messages', deleteMessages)
.addNode('compact_messages', compactSession)
.addNode('auto_compact_messages', compactSession)
.addNode('create_workflow_name', createWorkflowName)
.addConditionalEdges('__start__', shouldModifyState)
.addEdge('tools', 'process_operations')
.addEdge('process_operations', 'agent')
.addEdge('auto_compact_messages', 'agent')
.addEdge('create_workflow_name', 'agent')
.addEdge('delete_messages', END)
.addEdge('compact_messages', END)
.addConditionalEdges('agent', shouldContinue);
return workflow;
}
async getState(workflowId: string, userId?: string) {
const workflow = this.createWorkflow();
const agent = workflow.compile({ checkpointer: this.checkpointer });
return await agent.getState({
configurable: { thread_id: `workflow-${workflowId}-user-${userId ?? new Date().getTime()}` },
});
}
static generateThreadId(workflowId?: string, userId?: string) {
return workflowId
? `workflow-${workflowId}-user-${userId ?? new Date().getTime()}`
: crypto.randomUUID();
}
private getDefaultWorkflowJSON(payload: ChatPayload): SimpleWorkflow {
return (
(payload.workflowContext?.currentWorkflow as SimpleWorkflow) ?? {
nodes: [],
connections: {},
}
);
}
async *chat(payload: ChatPayload, userId?: string, abortSignal?: AbortSignal) {
this.validateMessageLength(payload.message);
const { agent, threadConfig, streamConfig } = this.setupAgentAndConfigs(
payload,
userId,
abortSignal,
);
try {
const stream = await this.createAgentStream(payload, streamConfig, agent);
yield* this.processAgentStream(stream, agent, threadConfig);
} catch (error: unknown) {
this.handleStreamError(error);
}
}
private validateMessageLength(message: string): void {
if (message.length > MAX_AI_BUILDER_PROMPT_LENGTH) {
this.logger?.warn('Message exceeds maximum length', {
messageLength: message.length,
maxLength: MAX_AI_BUILDER_PROMPT_LENGTH,
});
throw new ValidationError(
`Message exceeds maximum length of ${MAX_AI_BUILDER_PROMPT_LENGTH} characters`,
);
}
}
private setupAgentAndConfigs(payload: ChatPayload, userId?: string, abortSignal?: AbortSignal) {
const agent = this.createWorkflow().compile({ checkpointer: this.checkpointer });
const workflowId = payload.workflowContext?.currentWorkflow?.id;
// Generate thread ID from workflowId and userId
// This ensures one session per workflow per user
const threadId = WorkflowBuilderAgent.generateThreadId(workflowId, userId);
const threadConfig: RunnableConfig = {
configurable: {
thread_id: threadId,
},
};
const streamConfig = {
...threadConfig,
streamMode: ['updates', 'custom'],
recursionLimit: 50,
signal: abortSignal,
callbacks: this.tracer ? [this.tracer] : undefined,
};
return { agent, threadConfig, streamConfig };
}
private async createAgentStream(
payload: ChatPayload,
streamConfig: RunnableConfig,
agent: ReturnType<ReturnType<typeof this.createWorkflow>['compile']>,
) {
return await agent.stream(
{
messages: [new HumanMessage({ content: payload.message })],
workflowJSON: this.getDefaultWorkflowJSON(payload),
workflowOperations: [],
workflowContext: payload.workflowContext,
},
streamConfig,
);
}
private handleStreamError(error: unknown): never {
const invalidRequestErrorMessage = this.getInvalidRequestError(error);
if (invalidRequestErrorMessage) {
throw new ValidationError(invalidRequestErrorMessage);
}
throw error;
}
private async *processAgentStream(
stream: AsyncGenerator<[string, unknown], void, unknown>,
agent: ReturnType<ReturnType<typeof this.createWorkflow>['compile']>,
threadConfig: RunnableConfig,
) {
try {
const streamProcessor = createStreamProcessor(stream);
for await (const output of streamProcessor) {
yield output;
}
} catch (error) {
await this.handleAgentStreamError(error, agent, threadConfig);
}
}
private async handleAgentStreamError(
error: unknown,
agent: ReturnType<ReturnType<typeof this.createWorkflow>['compile']>,
threadConfig: RunnableConfig,
): Promise<void> {
if (
error &&
typeof error === 'object' &&
'message' in error &&
typeof error.message === 'string' &&
// This is naive, but it's all we get from LangGraph AbortError
['Abort', 'Aborted'].includes(error.message)
) {
// eslint-disable-next-line @typescript-eslint/no-unsafe-member-access
const messages = (await agent.getState(threadConfig)).values.messages as Array<
AIMessage | HumanMessage | ToolMessage
>;
// Handle abort errors gracefully
const abortedAiMessage = new AIMessage({
content: '[Task aborted]',
id: crypto.randomUUID(),
});
// TODO: Should we clear tool calls that are in progress?
await agent.updateState(threadConfig, { messages: [...messages, abortedAiMessage] });
return;
}
// If it's not an abort error, check for GraphRecursionError
if (error instanceof GraphRecursionError) {
throw new ApplicationError(
'Workflow generation stopped: The AI reached the maximum number of steps while building your workflow. This usually means the workflow design became too complex or got stuck in a loop while trying to create the nodes and connections.',
);
}
// Re-throw any other errors
throw error;
}
private getInvalidRequestError(error: unknown): string | undefined {
if (
error instanceof Error &&
'error' in error &&
typeof error.error === 'object' &&
error.error
) {
const innerError = error.error;
if ('error' in innerError && typeof innerError.error === 'object' && innerError.error) {
const errorDetails = innerError.error;
if (
'type' in errorDetails &&
errorDetails.type === 'invalid_request_error' &&
'message' in errorDetails &&
typeof errorDetails.message === 'string'
) {
return errorDetails.message;
}
}
}
return undefined;
}
async getSessions(workflowId: string | undefined, userId?: string) {
// For now, we'll return the current session if we have a workflowId
// MemorySaver doesn't expose a way to list all threads, so we'll need to
// track this differently if we want to list all sessions
const sessions = [];
if (workflowId) {
const threadId = WorkflowBuilderAgent.generateThreadId(workflowId, userId);
const threadConfig: RunnableConfig = {
configurable: {
thread_id: threadId,
},
};
try {
// Try to get the checkpoint for this thread
const checkpoint = await this.checkpointer.getTuple(threadConfig);
if (checkpoint?.checkpoint) {
const messages =
(checkpoint.checkpoint.channel_values?.messages as Array<
AIMessage | HumanMessage | ToolMessage
>) ?? [];
sessions.push({
sessionId: threadId,
messages: formatMessages(messages),
lastUpdated: checkpoint.checkpoint.ts,
});
}
} catch (error) {
// Thread doesn't exist yet
this.logger?.debug('No session found for workflow:', { workflowId, error });
}
}
return { sessions };
}
}

View File

@@ -1,89 +0,0 @@
import type { BaseMessage } from '@langchain/core/messages';
import { HumanMessage } from '@langchain/core/messages';
import { Annotation, messagesStateReducer } from '@langchain/langgraph';
import type { SimpleWorkflow, WorkflowOperation } from './types/workflow';
import type { ChatPayload } from './workflow-builder-agent';
/**
* Reducer for collecting workflow operations from parallel tool executions.
* This reducer intelligently merges operations, avoiding duplicates and handling special cases.
*/
function operationsReducer(
current: WorkflowOperation[] | null,
update: WorkflowOperation[] | null | undefined,
): WorkflowOperation[] {
if (update === null) {
return [];
}
if (!update || update.length === 0) {
return current ?? [];
}
// For clear operations, we can reset everything
if (update.some((op) => op.type === 'clear')) {
return update.filter((op) => op.type === 'clear').slice(-1); // Keep only the last clear
}
if (!current && !update) {
return [];
}
// Otherwise, append new operations
return [...(current ?? []), ...update];
}
// Creates a reducer that trims the message history to keep only the last `maxUserMessages` HumanMessage instances
export function createTrimMessagesReducer(maxUserMessages: number) {
return (current: BaseMessage[]): BaseMessage[] => {
// Count HumanMessage instances and remember their indices
const humanMessageIndices: number[] = [];
current.forEach((msg, index) => {
if (msg instanceof HumanMessage) {
humanMessageIndices.push(index);
}
});
// If we have fewer than or equal to maxUserMessages, return as is
if (humanMessageIndices.length <= maxUserMessages) {
return current;
}
// Find the index of the first HumanMessage that we want to keep
const startHumanMessageIndex =
humanMessageIndices[humanMessageIndices.length - maxUserMessages];
// Slice from that HumanMessage onwards
return current.slice(startHumanMessageIndex);
};
}
export const WorkflowState = Annotation.Root({
messages: Annotation<BaseMessage[]>({
reducer: messagesStateReducer,
default: () => [],
}),
// The JSON representation of the workflow being built.
// A simple field without a custom reducer - all updates go through operations
workflowJSON: Annotation<SimpleWorkflow>({
reducer: (x, y) => y ?? x,
default: () => ({ nodes: [], connections: {}, name: '' }),
}),
// Operations to apply to the workflow - processed by a separate node
workflowOperations: Annotation<WorkflowOperation[] | null>({
reducer: operationsReducer,
default: () => [],
}),
// Latest workflow context
workflowContext: Annotation<ChatPayload['workflowContext'] | undefined>({
reducer: (x, y) => y ?? x,
}),
// Previous conversation summary (used for compressing long conversations)
previousSummary: Annotation<string>({
reducer: (x, y) => y ?? x, // Overwrite with the latest summary
default: () => 'EMPTY',
}),
});

View File

@@ -1,599 +0,0 @@
import type { ToolRunnableConfig } from '@langchain/core/tools';
import type { LangGraphRunnableConfig } from '@langchain/langgraph';
import { getCurrentTaskInput } from '@langchain/langgraph';
import type { MockProxy } from 'jest-mock-extended';
import { mock } from 'jest-mock-extended';
import type {
INode,
INodeTypeDescription,
INodeParameters,
IConnection,
NodeConnectionType,
} from 'n8n-workflow';
import { jsonParse } from 'n8n-workflow';
import type { ProgressReporter, ToolProgressMessage } from '../src/types/tools';
import type { SimpleWorkflow } from '../src/types/workflow';
export const mockProgress = (): MockProxy<ProgressReporter> => mock<ProgressReporter>();
// Mock state helpers
export const mockStateHelpers = () => ({
getNodes: jest.fn(() => [] as INode[]),
getConnections: jest.fn(() => ({}) as SimpleWorkflow['connections']),
updateNode: jest.fn((_id: string, _updates: Partial<INode>) => undefined),
addNodes: jest.fn((_nodes: INode[]) => undefined),
removeNode: jest.fn((_id: string) => undefined),
addConnections: jest.fn((_connections: IConnection[]) => undefined),
removeConnection: jest.fn((_sourceId: string, _targetId: string, _type?: string) => undefined),
});
export type MockStateHelpers = ReturnType<typeof mockStateHelpers>;
// Simple node creation helper
export const createNode = (overrides: Partial<INode> = {}): INode => ({
id: 'node1',
name: 'TestNode',
type: 'n8n-nodes-base.code',
typeVersion: 1,
position: [0, 0],
...overrides,
// Ensure parameters are properly merged if provided in overrides
parameters: overrides.parameters ?? {},
});
// Simple workflow builder
export const createWorkflow = (nodes: INode[] = []): SimpleWorkflow => {
const workflow: SimpleWorkflow = { nodes, connections: {}, name: 'Test workflow' };
return workflow;
};
// Create mock node type description
export const createNodeType = (
overrides: Partial<INodeTypeDescription> = {},
): INodeTypeDescription => ({
displayName: overrides.displayName ?? 'Test Node',
name: overrides.name ?? 'test.node',
group: overrides.group ?? ['transform'],
version: overrides.version ?? 1,
description: overrides.description ?? 'Test node description',
defaults: overrides.defaults ?? { name: 'Test Node' },
inputs: overrides.inputs ?? ['main'],
outputs: overrides.outputs ?? ['main'],
properties: overrides.properties ?? [],
...overrides,
});
// Common node types for testing
export const nodeTypes = {
code: createNodeType({
displayName: 'Code',
name: 'n8n-nodes-base.code',
group: ['transform'],
properties: [
{
displayName: 'JavaScript',
name: 'jsCode',
type: 'string',
typeOptions: {
editor: 'codeNodeEditor',
},
default: '',
},
],
}),
httpRequest: createNodeType({
displayName: 'HTTP Request',
name: 'n8n-nodes-base.httpRequest',
group: ['input'],
properties: [
{
displayName: 'URL',
name: 'url',
type: 'string',
default: '',
},
{
displayName: 'Method',
name: 'method',
type: 'options',
options: [
{ name: 'GET', value: 'GET' },
{ name: 'POST', value: 'POST' },
],
default: 'GET',
},
],
}),
webhook: createNodeType({
displayName: 'Webhook',
name: 'n8n-nodes-base.webhook',
group: ['trigger'],
inputs: [],
outputs: ['main'],
webhooks: [
{
name: 'default',
httpMethod: 'POST',
responseMode: 'onReceived',
path: 'webhook',
},
],
properties: [
{
displayName: 'Path',
name: 'path',
type: 'string',
default: 'webhook',
},
],
}),
agent: createNodeType({
displayName: 'AI Agent',
name: '@n8n/n8n-nodes-langchain.agent',
group: ['output'],
inputs: ['ai_agent'],
outputs: ['main'],
properties: [],
}),
openAiModel: createNodeType({
displayName: 'OpenAI Chat Model',
name: '@n8n/n8n-nodes-langchain.lmChatOpenAi',
group: ['output'],
inputs: [],
outputs: ['ai_languageModel'],
properties: [],
}),
setNode: createNodeType({
displayName: 'Set',
name: 'n8n-nodes-base.set',
group: ['transform'],
properties: [
{
displayName: 'Values to Set',
name: 'values',
type: 'collection',
default: {},
},
],
}),
ifNode: createNodeType({
displayName: 'If',
name: 'n8n-nodes-base.if',
group: ['transform'],
inputs: ['main'],
outputs: ['main', 'main'],
outputNames: ['true', 'false'],
properties: [
{
displayName: 'Conditions',
name: 'conditions',
type: 'collection',
default: {},
},
],
}),
mergeNode: createNodeType({
displayName: 'Merge',
name: 'n8n-nodes-base.merge',
group: ['transform'],
inputs: ['main', 'main'],
outputs: ['main'],
inputNames: ['Input 1', 'Input 2'],
properties: [
{
displayName: 'Mode',
name: 'mode',
type: 'options',
options: [
{ name: 'Append', value: 'append' },
{ name: 'Merge By Index', value: 'mergeByIndex' },
{ name: 'Merge By Key', value: 'mergeByKey' },
],
default: 'append',
},
],
}),
vectorStoreNode: createNodeType({
displayName: 'Vector Store',
name: '@n8n/n8n-nodes-langchain.vectorStore',
subtitle: '={{$parameter["mode"] === "retrieve" ? "Retrieve" : "Insert"}}',
group: ['transform'],
inputs: `={{ ((parameter) => {
function getInputs(parameters) {
const mode = parameters?.mode;
const inputs = [];
if (mode === 'retrieve-as-tool') {
inputs.push({
displayName: 'Embedding',
type: 'ai_embedding',
required: true
});
} else {
inputs.push({
displayName: '',
type: 'main'
});
inputs.push({
displayName: 'Embedding',
type: 'ai_embedding',
required: true
});
}
return inputs;
};
return getInputs(parameter)
})($parameter) }}`,
outputs: `={{ ((parameter) => {
function getOutputs(parameters) {
const mode = parameters?.mode;
if (mode === 'retrieve-as-tool') {
return ['ai_tool'];
} else if (mode === 'retrieve') {
return ['ai_document'];
} else {
return ['main'];
}
};
return getOutputs(parameter)
})($parameter) }}`,
properties: [
{
displayName: 'Mode',
name: 'mode',
type: 'options',
options: [
{ name: 'Insert', value: 'insert' },
{ name: 'Retrieve', value: 'retrieve' },
{ name: 'Retrieve (As Tool)', value: 'retrieve-as-tool' },
],
default: 'insert',
},
// Many more properties would be here in reality
],
}),
};
// Helper to create connections
export const createConnection = (
_fromId: string,
toId: string,
type: NodeConnectionType = 'main',
index: number = 0,
) => ({
node: toId,
type,
index,
});
// Generic chain interface
interface Chain<TInput = Record<string, unknown>, TOutput = Record<string, unknown>> {
invoke: (input: TInput) => Promise<TOutput>;
}
// Generic mock chain factory with proper typing
export const mockChain = <
TInput = Record<string, unknown>,
TOutput = Record<string, unknown>,
>(): MockProxy<Chain<TInput, TOutput>> => {
return mock<Chain<TInput, TOutput>>();
};
// Convenience factory for parameter updater chain
export const mockParameterUpdaterChain = () => {
return mockChain<Record<string, unknown>, { parameters: Record<string, unknown> }>();
};
// Helper to assert node parameters
export const expectNodeToHaveParameters = (
node: INode,
expectedParams: Partial<INodeParameters>,
): void => {
expect(node.parameters).toMatchObject(expectedParams);
};
// Helper to assert connections exist
export const expectConnectionToExist = (
connections: SimpleWorkflow['connections'],
fromId: string,
toId: string,
type: string = 'main',
): void => {
expect(connections[fromId]).toBeDefined();
expect(connections[fromId][type]).toBeDefined();
expect(connections[fromId][type]).toContainEqual(
expect.arrayContaining([expect.objectContaining({ node: toId })]),
);
};
// ========== LangGraph Testing Utilities ==========
// Types for mocked Command results
export type MockedCommandResult = { content: string };
// Common parsed content structure for tool results
export interface ParsedToolContent {
update: {
messages: Array<{ kwargs: { content: string } }>;
workflowOperations?: Array<{
type: string;
nodes?: INode[];
[key: string]: unknown;
}>;
};
}
// Setup LangGraph mocks
export const setupLangGraphMocks = () => {
const mockGetCurrentTaskInput = getCurrentTaskInput as jest.MockedFunction<
typeof getCurrentTaskInput
>;
jest.mock('@langchain/langgraph', () => ({
getCurrentTaskInput: jest.fn(),
Command: jest.fn().mockImplementation((params: Record<string, unknown>) => ({
content: JSON.stringify(params),
})),
}));
return { mockGetCurrentTaskInput };
};
// Parse tool result with double-wrapped content handling
export const parseToolResult = <T = ParsedToolContent>(result: unknown): T => {
const parsed = jsonParse<{ content?: string }>((result as MockedCommandResult).content);
return parsed.content ? jsonParse<T>(parsed.content) : (parsed as T);
};
// ========== Progress Message Utilities ==========
// Extract progress messages from mockWriter
export const extractProgressMessages = (
mockWriter: jest.Mock,
): Array<ToolProgressMessage<string>> => {
const progressCalls: Array<ToolProgressMessage<string>> = [];
mockWriter.mock.calls.forEach((call) => {
// eslint-disable-next-line @typescript-eslint/no-unsafe-assignment
const [arg] = call;
progressCalls.push(arg as ToolProgressMessage<string>);
});
return progressCalls;
};
// Find specific progress message by type
export const findProgressMessage = (
messages: Array<ToolProgressMessage<string>>,
status: 'running' | 'completed' | 'error',
updateType?: string,
): ToolProgressMessage<string> | undefined => {
return messages.find(
(msg) => msg.status === status && (!updateType || msg.updates[0]?.type === updateType),
);
};
// ========== Tool Config Helpers ==========
// Create basic tool config
export const createToolConfig = (
toolName: string,
callId: string = 'test-call',
): ToolRunnableConfig => ({
toolCall: { id: callId, name: toolName, args: {} },
});
// Create tool config with writer for progress tracking
export const createToolConfigWithWriter = (
toolName: string,
callId: string = 'test-call',
): ToolRunnableConfig & LangGraphRunnableConfig & { writer: jest.Mock } => {
const mockWriter = jest.fn();
return {
toolCall: { id: callId, name: toolName, args: {} },
writer: mockWriter,
};
};
// ========== Workflow State Helpers ==========
// Setup workflow state with mockGetCurrentTaskInput
export const setupWorkflowState = (
mockGetCurrentTaskInput: jest.MockedFunction<typeof getCurrentTaskInput>,
workflow: SimpleWorkflow = createWorkflow([]),
) => {
mockGetCurrentTaskInput.mockReturnValue({
workflowJSON: workflow,
});
};
// ========== Common Tool Assertions ==========
// Expect tool success message
export const expectToolSuccess = (
content: ParsedToolContent,
expectedMessage: string | RegExp,
): void => {
const message = content.update.messages[0]?.kwargs.content;
expect(message).toBeDefined();
if (typeof expectedMessage === 'string') {
expect(message).toContain(expectedMessage);
} else {
expect(message).toMatch(expectedMessage);
}
};
// Expect tool error message
export const expectToolError = (
content: ParsedToolContent,
expectedError: string | RegExp,
): void => {
const message = content.update.messages[0]?.kwargs.content;
if (typeof expectedError === 'string') {
expect(message).toBe(expectedError);
} else {
expect(message).toMatch(expectedError);
}
};
// Expect workflow operation of specific type
export const expectWorkflowOperation = (
content: ParsedToolContent,
operationType: string,
matcher?: Record<string, unknown>,
): void => {
const operation = content.update.workflowOperations?.[0];
expect(operation).toBeDefined();
expect(operation?.type).toBe(operationType);
if (matcher) {
expect(operation).toMatchObject(matcher);
}
};
// Expect node was added
export const expectNodeAdded = (content: ParsedToolContent, expectedNode: Partial<INode>): void => {
expectWorkflowOperation(content, 'addNodes');
const addedNode = content.update.workflowOperations?.[0]?.nodes?.[0];
expect(addedNode).toBeDefined();
expect(addedNode).toMatchObject(expectedNode);
};
// Expect node was removed
export const expectNodeRemoved = (content: ParsedToolContent, nodeId: string): void => {
expectWorkflowOperation(content, 'removeNode', { nodeIds: [nodeId] });
};
// Expect connections were added
export const expectConnectionsAdded = (
content: ParsedToolContent,
expectedCount?: number,
): void => {
expectWorkflowOperation(content, 'addConnections');
if (expectedCount !== undefined) {
const connections = content.update.workflowOperations?.[0]?.connections;
expect(connections).toHaveLength(expectedCount);
}
};
// Expect node was updated
export const expectNodeUpdated = (
content: ParsedToolContent,
nodeId: string,
expectedUpdates?: Record<string, unknown>,
): void => {
expectWorkflowOperation(content, 'updateNode', {
nodeId,
...(expectedUpdates ? { updates: expect.objectContaining(expectedUpdates) } : {}),
});
};
// ========== Test Data Builders ==========
// Build add node input
export const buildAddNodeInput = (overrides: {
nodeType: string;
name?: string;
connectionParametersReasoning?: string;
connectionParameters?: Record<string, unknown>;
}) => ({
nodeType: overrides.nodeType,
name: overrides.name ?? 'Test Node',
connectionParametersReasoning:
overrides.connectionParametersReasoning ??
'Standard node with static inputs/outputs, no connection parameters needed',
connectionParameters: overrides.connectionParameters ?? {},
});
// Build connect nodes input
export const buildConnectNodesInput = (overrides: {
sourceNodeId: string;
targetNodeId: string;
sourceOutputIndex?: number;
targetInputIndex?: number;
}) => ({
sourceNodeId: overrides.sourceNodeId,
targetNodeId: overrides.targetNodeId,
sourceOutputIndex: overrides.sourceOutputIndex ?? 0,
targetInputIndex: overrides.targetInputIndex ?? 0,
});
// Build node search query
export const buildNodeSearchQuery = (
queryType: 'name' | 'subNodeSearch',
query?: string,
connectionType?: NodeConnectionType,
) => ({
queryType,
...(query && { query }),
...(connectionType && { connectionType }),
});
// Build update node parameters input
export const buildUpdateNodeInput = (nodeId: string, changes: string[]) => ({
nodeId,
changes,
});
// Build node details input
export const buildNodeDetailsInput = (overrides: {
nodeName: string;
withParameters?: boolean;
withConnections?: boolean;
}) => ({
nodeName: overrides.nodeName,
withParameters: overrides.withParameters ?? false,
withConnections: overrides.withConnections ?? true,
});
// Expect node details in response
export const expectNodeDetails = (
content: ParsedToolContent,
expectedDetails: Partial<{
name: string;
displayName: string;
description: string;
subtitle?: string;
}>,
): void => {
const message = content.update.messages[0]?.kwargs.content;
expect(message).toBeDefined();
// Check for expected XML-like tags in formatted output
if (expectedDetails.name) {
expect(message).toContain(`<name>${expectedDetails.name}</name>`);
}
if (expectedDetails.displayName) {
expect(message).toContain(`<display_name>${expectedDetails.displayName}</display_name>`);
}
if (expectedDetails.description) {
expect(message).toContain(`<description>${expectedDetails.description}</description>`);
}
if (expectedDetails.subtitle) {
expect(message).toContain(`<subtitle>${expectedDetails.subtitle}</subtitle>`);
}
};
// Helper to validate XML-like structure in output
export const expectXMLTag = (
content: string,
tagName: string,
expectedValue?: string | RegExp,
): void => {
const tagRegex = new RegExp(`<${tagName}>([\\s\\S]*?)</${tagName}>`);
const match = content.match(tagRegex);
expect(match).toBeDefined();
if (expectedValue) {
if (typeof expectedValue === 'string') {
expect(match?.[1]?.trim()).toBe(expectedValue);
} else {
expect(match?.[1]).toMatch(expectedValue);
}
}
};
// Common reasoning strings
export const REASONING = {
STATIC_NODE: 'Node has static inputs/outputs, no connection parameters needed',
DYNAMIC_AI_NODE: 'AI node has dynamic inputs, setting connection parameters',
TRIGGER_NODE: 'Trigger node, no connection parameters needed',
WEBHOOK_NODE: 'Webhook is a trigger node, no connection parameters needed',
} as const;