Overview
The StatefulRuleEngine extends the base RuleEngine with state tracking capabilities, enabling you to detect changes in data over time and trigger actions based on state transitions.
Perfect for monitoring systems, event-driven workflows, and reactive business logic.
Key Features
State Tracking: Maintains previous states for comparison across evaluations
Event System: Subscribe to rule state changes with event listeners
Change Detection: Specialized operators for detecting value changes
History Management: Optional storage of evaluation history
Creating a Stateful Engine
import { createRuleEngine, StatefulRuleEngine } from 'rule-engine-js';

const baseEngine = createRuleEngine();
const statefulEngine = new StatefulRuleEngine(baseEngine, {
  triggerOnEveryChange: false, // Trigger only on false → true
  storeHistory: true,          // Keep evaluation history
  maxHistorySize: 100,         // Limit history entries
});
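Once created, the stateful engine is used like the base engine, except every evaluation carries a rule ID so state can be compared across calls. A minimal usage sketch (the 'cart-total' ID and the gt rule are illustrative, and the awaits assume an async context):

statefulEngine.on('triggered', (event) => {
  console.log(`${event.ruleId} triggered`);
});

// Reuse the same rule ID so the engine can compare states between evaluations
await statefulEngine.evaluate('cart-total', { gt: ['cart.total', 100] }, { cart: { total: 80 } });
await statefulEngine.evaluate('cart-total', { gt: ['cart.total', 100] }, { cart: { total: 120 } }); // false → true, listener fires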
Configuration Options
Core Options
triggerOnEveryChange: When false, triggers only on false → true transitions. When true, triggers on any state change.
storeHistory: Enable storage of evaluation history for analysis and debugging.
maxHistorySize: Global limit for total history entries across all rules. Uses a FIFO queue (legacy mode).
maxHistoryPerRule: Per-rule history limit. Recommended for multi-rule scenarios so a single rule's history cannot dominate the store.
Phase 3.1: State Management Options
stateExpirationMs: Time-to-live for rule states in milliseconds. Set to null for no expiration.
cleanupIntervalMs: Interval for automatic expired-state cleanup, in milliseconds.
enableDeepCopy: Deep-copies contexts to prevent mutation. Handles circular references automatically.
maxListeners: Threshold for listener-count warnings, used to detect potential memory leaks.
Phase 3.2: Concurrency Control Options
concurrency: Configure concurrent evaluation behavior. Properties:
  maxConcurrent (number, default: 10): Maximum concurrent evaluations per rule
  timeout (number, default: 30000): Evaluation timeout in milliseconds
  onTimeout (function): Callback when evaluation times out
  onQueueFull (function): Callback when queue reaches capacity
Phase 3.3: Error Recovery Options
errorRecovery: Configure error recovery strategies. Properties:
  enabled (boolean, default: true): Enable error recovery
  retry (object): Retry configuration
    enabled (boolean): Enable retry mechanism
    maxAttempts (number, default: 3): Maximum retry attempts
    strategy (string): 'exponential', 'fixed', or 'linear'
    initialDelay (number, default: 100): Initial delay in ms
    maxDelay (number, default: 5000): Maximum delay cap
    onRetry (function): Callback on each retry
  circuitBreaker (object): Circuit breaker configuration
    enabled (boolean): Enable circuit breaker
    failureThreshold (number, default: 5): Failures before opening
    resetTimeout (number, default: 60000): Time before half-open state
    onCircuitOpen (function): Callback when circuit opens
  fallback (object): Fallback configuration
    enabled (boolean): Enable fallback strategies
    defaultValue (any): Default fallback value
    onFallback (function): Callback when fallback used
Event System
Subscribe to rule state changes. The engine emits triggered, changed, and evaluated events; the example below handles the triggered event:
statefulEngine.on('triggered', (event) => {
  console.log(`Rule ${event.ruleId} was triggered!`);
  console.log('Previous state:', event.previousState);
  console.log('Current state:', event.currentState);
  console.log('Context:', event.context);
});
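The changed and evaluated events are subscribed to the same way. The sketch below assumes their payloads expose the same ruleId field; the rest of their payload shape is not documented here:

// 'changed' is assumed to fire on any state change, 'evaluated' on every evaluation
statefulEngine.on('changed', (event) => {
  console.log(`Rule ${event.ruleId} changed state`);
});

statefulEngine.on('evaluated', (event) => {
  console.log(`Rule ${event.ruleId} was evaluated`);
});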
State Change Operators
changed - Detect Any Change
const rule = { changed: ['user.email'] };

// First evaluation
statefulEngine.evaluate('email-change', rule, { user: { email: 'old@example.com' } });

// Second evaluation - triggers because email changed
statefulEngine.evaluate('email-change', rule, { user: { email: 'new@example.com' } });
changedBy - Detect Numeric Change
const rule = { changedBy: ['temperature', 5] };

// First evaluation
statefulEngine.evaluate('temp', rule, { temperature: 20 });

// Triggers when temperature changes by 5 or more
statefulEngine.evaluate('temp', rule, { temperature: 26 }); // true
changedFrom - Detect Change from Value
const rule = { changedFrom: ['status', 'pending'] };

// Triggers when status changes from 'pending' to anything else
statefulEngine.evaluate('status', rule, { status: 'pending' });
statefulEngine.evaluate('status', rule, { status: 'approved' }); // triggers
changedTo - Detect Change to Value
const rule = { changedTo: ['status', 'completed'] };

// Triggers when status changes to 'completed'
statefulEngine.evaluate('status', rule, { status: 'processing' });
statefulEngine.evaluate('status', rule, { status: 'completed' }); // triggers
increased - Detect Numeric Increase
const rule = { increased: ['stock'] };

// Triggers when stock quantity increases
statefulEngine.evaluate('stock', rule, { stock: 100 });
statefulEngine.evaluate('stock', rule, { stock: 150 }); // triggers
decreased - Detect Numeric Decrease
const rule = { decreased: ['balance'] };

// Triggers when balance decreases
statefulEngine.evaluate('balance', rule, { balance: 1000 });
statefulEngine.evaluate('balance', rule, { balance: 800 }); // triggers
Real-World Example
// Order processing workflow
const orderRules = {
  'payment-received': { changedTo: ['order.paymentStatus', 'paid'] },
  'inventory-low': {
    and: [
      { decreased: ['product.stock'] },
      { lte: ['product.stock', 10] }
    ]
  },
  'price-drop': {
    and: [
      { decreased: ['product.price'] },
      { changedBy: ['product.price', 5] }
    ]
  },
};

// Event handlers
statefulEngine.on('triggered', (event) => {
  switch (event.ruleId) {
    case 'payment-received':
      processOrder(event.context);
      break;
    case 'inventory-low':
      reorderStock(event.context.product);
      break;
    case 'price-drop':
      notifyCustomers(event.context.product);
      break;
  }
});

// Evaluate all rules
const orderData = {
  order: { paymentStatus: 'paid' },
  product: { stock: 8, price: 95 },
};

statefulEngine.evaluateBatch(orderRules, orderData);
Methods
Core Methods
evaluate()
Evaluate a single rule with state tracking:
const result = await statefulEngine.evaluate(ruleId, rule, context);
All evaluation methods are async as of Phase 3.2.
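Because evaluation is asynchronous, await each call (inside an async function) so the stored state reflects the previous evaluation before the next one runs. A minimal sketch using the lte operator from the order example above; the 'stock-low' ID is illustrative:

// Await sequential evaluations so state comparisons see the earlier result
await statefulEngine.evaluate('stock-low', { lte: ['product.stock', 10] }, { product: { stock: 50 } });
await statefulEngine.evaluate('stock-low', { lte: ['product.stock', 10] }, { product: { stock: 5 } }); // false → true; 'triggered' listeners fire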
evaluateBatch()
Evaluate multiple rules at once with error handling:
const results = await statefulEngine.evaluateBatch(rulesObject, context, {
  stopOnError: false,  // Continue processing all rules even if one fails
  collectErrors: true, // Gather detailed error information
});

// Result structure
{
  results: { ruleId: { success, triggered, ... }, ... },
  success: true,
  successCount: 5,
  errorCount: 0,
  totalCount: 5,
  errors: [] // Array of error details if collectErrors: true
}
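Based on the structure above, the per-rule entries can be walked to react to failures and triggers. A sketch reusing orderRules and orderData from the earlier example:

const batch = await statefulEngine.evaluateBatch(orderRules, orderData, { collectErrors: true });

for (const [ruleId, entry] of Object.entries(batch.results)) {
  if (!entry.success) {
    console.warn(`Rule ${ruleId} failed`);
  } else if (entry.triggered) {
    console.log(`Rule ${ruleId} triggered`);
  }
}

if (batch.errorCount > 0) {
  console.error('Batch errors:', batch.errors);
}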
getRuleState()
Get current state of a specific rule:
const state = statefulEngine.getRuleState(ruleId);
clearRuleState()
Clear state for a specific rule:
statefulEngine.clearRuleState(ruleId);
getHistory()
Get evaluation history (if enabled):
const history = statefulEngine.getHistory(ruleId);
Phase 3.1: State Management Methods
getStateStats()
Get comprehensive state statistics:
const stats = statefulEngine.getStateStats();
// {
//   totalRules: 42,
//   historySize: 100,
//   listenerCounts: { triggered: 5, changed: 3, ... },
//   oldestStateAge: 3245000,
//   memoryEstimate: { states: '~42KB', history: '~100KB', total: '~142KB' }
// }
cleanupExpiredStates()
Manually trigger cleanup of expired states:
const result = statefulEngine.cleanupExpiredStates();
// { removedCount: 5, removedRules: ['old-rule-1', ...], timestamp: '...' }
getListenerCount()
Get listener count for a specific event:
const count = statefulEngine.getListenerCount('triggered');
getAllListenerCounts()
Get listener counts for all events:
const counts = statefulEngine.getAllListenerCounts();
// { triggered: 5, changed: 3, evaluated: 2, untriggered: 1 }
startCleanupTimer() / stopCleanupTimer()
Control automatic state cleanup:
statefulEngine.stopCleanupTimer();  // Stop automatic cleanup
statefulEngine.startCleanupTimer(); // Restart automatic cleanup
destroy()
Complete resource cleanup (timers, listeners, state):
await statefulEngine.destroy();
Always call destroy() when shutting down to prevent memory leaks.
Phase 3.2: Concurrency Methods
getConcurrencyStats()
Get concurrency statistics for all rules:
const stats = statefulEngine.getConcurrencyStats();
// {
//   'rule-1': { active: 2, queued: 5, completed: 100, timeout: 1 },
//   'rule-2': { active: 0, queued: 0, completed: 50, timeout: 0 }
// }
getConcurrencyState()
Get current concurrency state for a specific rule:
const state = statefulEngine.getConcurrencyState('rule-1');
// { active: 2, queued: 5, completed: 100, timeout: 1 }
Phase 3.3: Error Recovery Methods
registerFallbackRule()
Register a fallback rule for a specific rule:
const primaryRule = { gt: ['temperature', 100] };
const fallbackRule = { gte: ['temperature', 90] };

statefulEngine.registerFallbackRule('temp-check', fallbackRule);
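The primary rule is then evaluated under the same rule ID; the assumption here is that the registered fallback rule is only consulted when that evaluation fails and error recovery falls back:

// If evaluating primaryRule fails, error recovery may evaluate fallbackRule instead (assumed behavior)
await statefulEngine.evaluate('temp-check', primaryRule, { temperature: 95 });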
registerFallbackValue()
Register a fallback value for a specific rule:
statefulEngine.registerFallbackValue('temp-check', { fallback: true, value: false });
getErrorRecoveryStats()
Get comprehensive error recovery statistics:
const stats = statefulEngine.getErrorRecoveryStats();
// {
//   enabled: true,
//   retry: { enabled: true, strategy: 'exponential', maxAttempts: 3, ... },
//   circuitBreaker: { enabled: true, totalCircuits: 5, circuitStates: {...} },
//   fallback: { enabled: true, fallbackRulesCount: 2, ... },
//   errorTracking: { totalRulesWithErrors: 3, totalErrorsRecorded: 15, ... }
// }
getCircuitState()
Get circuit breaker state for a specific rule:
const state = statefulEngine.getCircuitState('rule-1');
// Returns: 'closed', 'open', or 'half-open'
resetCircuit()
Manually reset circuit breaker:
statefulEngine.resetCircuit('rule-1'); // Reset specific rule
statefulEngine.resetCircuit();         // Reset all circuits
getErrorHistory()
Get error history for a specific rule:
const history = statefulEngine.getErrorHistory('rule-1');
// [{ message: '...', operator: '...', timestamp: ... }, ...]
getErrorRate()
Get error rate for a specific rule:
const rate = statefulEngine.getErrorRate('rule-1');
// {
//   errorCount: 5,
//   successCount: 95,
//   total: 100,
//   rate: 0.05,
//   windowStart: timestamp,
//   windowDuration: 60000
// }
Best Practices
Use Meaningful IDs: Use descriptive rule IDs that indicate the rule's purpose.
Batch Evaluations: Use evaluateBatch() for multiple related rules.
Clean Up State: Periodically clear state for inactive rules (see the sketch after this list).
Limit History: Set an appropriate maxHistoryPerRule to manage memory per rule.
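For the "Clean Up State" point, a sketch that removes state for rule IDs your application has retired; retiredRuleIds is a hypothetical list you maintain, and stateExpirationMs (Phase 3.1) is the automated alternative:

// Hypothetical list of rule IDs that are no longer evaluated
const retiredRuleIds = ['legacy-discount-rule', 'old-inventory-check'];

for (const ruleId of retiredRuleIds) {
  statefulEngine.clearRuleState(ruleId);
}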
Phase 3 Best Practices
Memory Management (Phase 3.1)
const statefulEngine = new StatefulRuleEngine(baseEngine, {
  // Set TTL for applications with many unique rule IDs
  stateExpirationMs: 3600000, // 1 hour

  // Run cleanup every 5 minutes
  cleanupIntervalMs: 300000,

  // Enable deep copy for safety (default)
  enableDeepCopy: true,

  // Monitor listener counts
  maxListeners: 50,
});

// Monitor memory in production
setInterval(() => {
  const stats = statefulEngine.getStateStats();
  if (stats.totalRules > 10000) {
    console.warn('High rule count:', stats);
  }
}, 60000);

// Graceful shutdown
process.on('SIGTERM', async () => {
  await statefulEngine.destroy();
  process.exit(0);
});
Concurrency Control (Phase 3.2)
const statefulEngine = new StatefulRuleEngine(baseEngine, {
  concurrency: {
    maxConcurrent: 5, // Limit concurrent evaluations
    timeout: 5000,    // 5 second timeout
    onTimeout: (ruleId) => {
      console.error(`Rule ${ruleId} timed out`);
    },
    onQueueFull: (ruleId, queueSize) => {
      console.warn(`Queue full for ${ruleId}: ${queueSize}`);
    },
  },
});

// Monitor concurrency
const stats = statefulEngine.getConcurrencyStats();
console.log('Active evaluations:', stats);
Error Recovery (Phase 3.3)
const statefulEngine = new StatefulRuleEngine(baseEngine, {
  errorRecovery: {
    retry: {
      enabled: true,
      maxAttempts: 3,
      strategy: 'exponential',
      initialDelay: 100,
      onRetry: (attempt, error, ruleId) => {
        console.log(`Retry ${attempt} for ${ruleId}`);
      },
    },
    circuitBreaker: {
      enabled: true,
      failureThreshold: 5,
      resetTimeout: 60000,
      onCircuitOpen: (ruleId, info) => {
        console.error(`Circuit opened for ${ruleId}`, info);
      },
    },
    fallback: {
      enabled: true,
      defaultValue: { success: false, fallback: true },
      onFallback: (ruleId, type, value) => {
        console.log(`Fallback used for ${ruleId}: ${type}`);
      },
    },
  },
});

// Register fallbacks
statefulEngine.registerFallbackRule('critical-rule', fallbackRule);
statefulEngine.registerFallbackValue('backup-rule', { safe: true });

// Monitor error rates
const errorRate = statefulEngine.getErrorRate('critical-rule');
if (errorRate && errorRate.rate > 0.1) {
  console.warn('High error rate:', errorRate);
}
// Complete production-ready setup
const statefulEngine = new StatefulRuleEngine(baseEngine, {
  // Core options
  triggerOnEveryChange: false,
  storeHistory: false, // Disable if not needed

  // Phase 3.1: Memory Management
  stateExpirationMs: 3600000,
  cleanupIntervalMs: 300000,
  enableDeepCopy: true,
  maxListeners: 100,

  // Phase 3.2: Concurrency
  concurrency: {
    maxConcurrent: 10,
    timeout: 30000,
    onTimeout: (ruleId) => logger.error(`Timeout: ${ruleId}`),
  },

  // Phase 3.3: Error Recovery
  errorRecovery: {
    retry: {
      enabled: true,
      maxAttempts: 3,
      strategy: 'exponential',
    },
    circuitBreaker: {
      enabled: true,
      failureThreshold: 5,
    },
    fallback: {
      enabled: true,
      defaultValue: { success: false, fallback: true },
    },
  },
});

// Health check endpoint
app.get('/health/rules', (req, res) => {
  const stats = {
    state: statefulEngine.getStateStats(),
    concurrency: statefulEngine.getConcurrencyStats(),
    errorRecovery: statefulEngine.getErrorRecoveryStats(),
  };
  res.json(stats);
});
Next Steps