Shadow Trading
This is a very simple script that identifies daily candles with big tails and enters the market if the low or the high of that same candle is violated... this is the idea behind it. I hope you will find it useful ;)
Swing Hull/RSI/EMA Strategy
A swing trading strategy that uses a combination of indicators: the Hull moving average to get the trend direction, while the EMA and RSI do the rest. Use it at your own risk, especially at the end of any Hull trend.
Past Performance Does Not Guarantee Future Results
Dimensional Resonance Protocol
🌀 CORE INNOVATION: PHASE SPACE RECONSTRUCTION & EMERGENCE DETECTION
The Dimensional Resonance Protocol represents a paradigm shift from traditional technical analysis to complexity science. Rather than measuring price levels or indicator crossovers, DRP reconstructs the hidden attractor governing market dynamics using Takens' embedding theorem, then detects emergence—the rare moments when multiple dimensions of market behavior spontaneously synchronize into coherent, predictable states.
The Complexity Hypothesis:
Markets are not simple oscillators or random walks—they are complex adaptive systems existing in high-dimensional phase space. Traditional indicators see only shadows (one-dimensional projections) of this higher-dimensional reality. DRP reconstructs the full phase space using time-delay embedding, revealing the true structure of market dynamics.
Takens' Embedding Theorem (1981):
A profound mathematical result from dynamical systems theory: Given a time series from a complex system, we can reconstruct its full phase space by creating delayed copies of the observation.
Mathematical Foundation:
From single observable x(t), create embedding vectors:
X(t) = [x(t), x(t-τ), x(t-2τ), ..., x(t-(d-1)τ)]
Where:
• d = Embedding dimension (default 5)
• τ = Time delay (default 3 bars)
• x(t) = Price or return at time t
Key Insight: If d ≥ 2D+1 (where D is the true attractor dimension), this embedding is topologically equivalent to the actual system dynamics. We've reconstructed the hidden attractor from a single price series.
Why This Matters:
Markets appear random in one dimension (price chart). But in reconstructed phase space, structure emerges—attractors, limit cycles, strange attractors. When we identify these structures, we can detect:
• Stable regions : Predictable behavior (trade opportunities)
• Chaotic regions : Unpredictable behavior (avoid trading)
• Critical transitions : Phase changes between regimes
Phase Space Magnitude Calculation:
phase_magnitude = sqrt( Σ x(t-iτ)² for i = 0 to d-1 )
This measures the "energy" or "momentum" of the market trajectory through phase space. High magnitude = strong directional move. Low magnitude = consolidation.
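The construction is easy to sketch in code. Below is a minimal NumPy illustration of the time-delay embedding and the phase-magnitude calculation above; the function names and the toy random input series are assumptions for demonstration, not the indicator's actual source.
```python
import numpy as np

def takens_embed(x, d=5, tau=3):
    """Time-delay embedding: the row for bar t holds the delayed copies
    [x(t-(d-1)*tau), ..., x(t-tau), x(t)]."""
    n = len(x) - (d - 1) * tau
    if n <= 0:
        raise ValueError("series too short for this (d, tau)")
    # Column i is the series shifted by i*tau bars.
    return np.column_stack([x[i * tau : i * tau + n] for i in range(d)])

def phase_magnitude(X):
    """Euclidean norm of each embedding vector (the 'energy' measure above)."""
    return np.linalg.norm(X, axis=1)

# Toy usage on a synthetic return series (illustrative only).
rng = np.random.default_rng(0)
returns = rng.normal(0.0, 1.0, 300)
X = takens_embed(returns, d=5, tau=3)
print(X.shape, phase_magnitude(X)[-1])
```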
📊 RECURRENCE QUANTIFICATION ANALYSIS (RQA)
Once phase space is reconstructed, we analyze its recurrence structure —when does the system return near previous states?
Recurrence Plot Foundation:
A recurrence occurs when two phase space points are closer than threshold ε:
R(i,j) = 1 if ||X(i) - X(j)|| < ε, else 0
This creates a binary matrix showing when the system revisits similar states.
Key RQA Metrics:
1. Recurrence Rate (RR):
RR = (Number of recurrent points) / (Total possible pairs)
• RR near 0: System never repeats (highly stochastic)
• RR = 0.1-0.3: Moderate recurrence (tradeable patterns)
• RR > 0.5: System stuck in attractor (ranging market)
• RR near 1: System frozen (no dynamics)
Interpretation: Moderate recurrence is optimal —patterns exist but market isn't stuck.
2. Determinism (DET):
Measures what fraction of recurrences form diagonal structures in the recurrence plot. Diagonals indicate deterministic evolution (trajectory follows predictable paths).
DET = (Recurrence points on diagonals) / (Total recurrence points)
• DET < 0.3: Random dynamics
• DET = 0.3-0.7: Moderate determinism (patterns with noise)
• DET > 0.7: Strong determinism (technical patterns reliable)
Trading Implication: Signals are prioritized when DET > 0.3 (deterministic state) and RR is moderate (not stuck).
Threshold Selection (ε):
Default ε = 0.10 × std_dev means two states are "recurrent" if within 10% of a standard deviation. This is tight enough to require genuine similarity but loose enough to find patterns.
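For concreteness, here is a small Python sketch of the RR and DET calculations on embedded vectors. The ε scaling from the distance spread and `l_min = 2` for diagonal lines are common RQA conventions assumed here, not documented settings of this script.
```python
import numpy as np

def rqa_metrics(X, eps_frac=0.10, l_min=2):
    """Recurrence rate and determinism for embedded vectors X (n x d)."""
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    eps = eps_frac * dist.std()              # threshold ~ 0.10 x spread
    R = (dist < eps).astype(int)
    np.fill_diagonal(R, 0)                   # drop trivial self-recurrence
    n = len(X)
    total = R.sum()
    rr = total / (n * n - n)                 # recurrence rate

    # DET: fraction of recurrent points on diagonals of length >= l_min.
    det_points = 0
    for k in range(-(n - 1), n):
        if k == 0:
            continue                          # main diagonal already zeroed
        run = 0
        for v in np.diagonal(R, offset=k):
            if v:
                run += 1
            else:
                if run >= l_min:
                    det_points += run
                run = 0
        if run >= l_min:
            det_points += run
    det = det_points / total if total else 0.0
    return rr, det
```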
🔬 PERMUTATION ENTROPY: COMPLEXITY MEASUREMENT
Permutation entropy measures the complexity of a time series by analyzing the distribution of ordinal patterns.
Algorithm (Bandt & Pompe, 2002):
1. Take overlapping windows of length n (default n=4)
2. For each window, record the rank order pattern
Example: window (1.2, 0.8, 1.5, 1.1) → pattern (1, 3, 0, 2) (indices ranked from lowest to highest value)
3. Count frequency of each possible pattern
4. Calculate Shannon entropy of pattern distribution
Mathematical Formula:
H_perm = -Σ p(π) · ln(p(π))
Where π ranges over all n! possible permutations, p(π) is the probability of pattern π.
Normalized to [0, 1]:
H_norm = H_perm / ln(n!)
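The algorithm fits in a few lines of Python. This sketch follows the four steps above (ordinal patterns via `argsort`); names are chosen for illustration.
```python
import math
from collections import Counter
import numpy as np

def permutation_entropy(x, order=4):
    """Normalized permutation entropy in [0, 1] (Bandt & Pompe, 2002)."""
    x = np.asarray(x, dtype=float)
    # Rank-order pattern of each overlapping window of length `order`.
    patterns = Counter(
        tuple(np.argsort(x[i:i + order])) for i in range(len(x) - order + 1)
    )
    total = sum(patterns.values())
    h = -sum((c / total) * math.log(c / total) for c in patterns.values())
    return h / math.log(math.factorial(order))   # divide by ln(n!)
```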
Interpretation:
• H < 0.3 : Very ordered, crystalline structure (strong trending)
• H = 0.3-0.5 : Ordered regime (tradeable with patterns)
• H = 0.5-0.7 : Moderate complexity (mixed conditions)
• H = 0.7-0.85 : Complex dynamics (challenging to trade)
• H > 0.85 : Maximum entropy (nearly random, avoid)
Entropy Regime Classification:
DRP classifies markets into five entropy regimes:
• CRYSTALLINE (H < 0.3): Maximum order, persistent trends
• ORDERED (H < 0.5): Clear patterns, momentum strategies work
• MODERATE (H < 0.7): Mixed dynamics, adaptive required
• COMPLEX (H < 0.85): High entropy, mean reversion better
• CHAOTIC (H ≥ 0.85): Near-random, minimize trading
Why Permutation Entropy?
Unlike traditional entropy methods requiring binning continuous data (losing information), permutation entropy:
• Works directly on time series
• Robust to monotonic transformations
• Computationally efficient
• Captures temporal structure, not just distribution
• Immune to outliers (uses ranks, not values)
⚡ LYAPUNOV EXPONENT: CHAOS vs STABILITY
The Lyapunov exponent λ measures sensitivity to initial conditions —the hallmark of chaos.
Physical Meaning:
Two trajectories starting infinitesimally close will diverge at an exponential rate e^(λt):
Distance(t) ≈ Distance(0) × e^(λt)
Interpretation:
• λ > 0 : Positive Lyapunov exponent = CHAOS
- Small errors grow exponentially
- Long-term prediction impossible
- System is sensitive, unpredictable
- AVOID TRADING
• λ ≈ 0 : Near-zero = CRITICAL STATE
- Edge of chaos
- Transition zone between order and disorder
- Moderate predictability
- PROCEED WITH CAUTION
• λ < 0 : Negative Lyapunov exponent = STABLE
- Small errors decay
- Trajectories converge
- System is predictable
- OPTIMAL FOR TRADING
Estimation Method:
DRP estimates λ by tracking how quickly nearby states diverge over a rolling window (default 20 bars):
For each bar i in window:
    δ₀ = |x(i) - x(i-1)| (current separation)
    δ₁ = |x(i-1) - x(i-2)| (previous separation)
    if δ₁ > 0:
        ratio = δ₀ / δ₁
        log_ratios += ln(ratio)
λ ≈ average(log_ratios)
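In Python, the same estimator might look like this. It is a rough local estimate under the stated window convention, not the script's exact code:
```python
import numpy as np

def lyapunov_estimate(x, window=20):
    """Average log-ratio of consecutive one-step separations over a window.
    Negative -> separations shrinking (stable); positive -> growing (chaotic)."""
    x = np.asarray(x, dtype=float)[-window:]
    log_ratios = []
    for i in range(2, len(x)):
        d0 = abs(x[i] - x[i - 1])        # current separation
        d1 = abs(x[i - 1] - x[i - 2])    # previous separation
        if d0 > 0 and d1 > 0:
            log_ratios.append(np.log(d0 / d1))
    return float(np.mean(log_ratios)) if log_ratios else 0.0
```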
Stability Classification:
• STABLE : λ < 0 (negative growth rate)
• CRITICAL : |λ| < 0.1 (near neutral)
• CHAOTIC : λ > 0.2 (strong positive growth)
Signal Filtering:
By default, DRP requires λ < 0 (stable regime) for signal confirmation. This filters out trades during chaotic periods when technical patterns break down.
📐 HIGUCHI FRACTAL DIMENSION
Fractal dimension measures self-similarity and complexity of the price trajectory.
Theoretical Background:
A curve's fractal dimension D ranges from 1 (smooth line) to 2 (space-filling curve):
• D ≈ 1.0 : Smooth, persistent trending
• D ≈ 1.5 : Random walk (Brownian motion)
• D ≈ 2.0 : Highly irregular, space-filling
Higuchi Method (1988):
For a time series of length N, construct k different curves by taking every k-th point:
L_m(k) = (1/k) × [ Σ |x(m+ik) - x(m+(i-1)k)| for i = 1 to ⌊(N-m)/k⌋ ] × (N-1)/(⌊(N-m)/k⌋ × k)
For different values of k (1 to k_max), calculate L(k). The fractal dimension is the slope of log(L(k)) vs log(1/k):
D = slope of log(L) vs log(1/k)
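A compact reference implementation of the Higuchi procedure (standard algorithm with illustrative names; a pure random walk should come out near D ≈ 1.5):
```python
import numpy as np

def higuchi_fd(x, k_max=8):
    """Higuchi fractal dimension: slope of log L(k) vs log(1/k)."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    log_inv_k, log_L = [], []
    for k in range(1, k_max + 1):
        lengths = []
        for m in range(k):                       # start offsets 0..k-1
            idx = np.arange(m, N, k)
            if len(idx) < 2:
                continue
            num_seg = len(idx) - 1
            dist = np.abs(np.diff(x[idx])).sum()
            # Normalization (N-1)/(num_seg*k), then divide by k per the formula.
            lengths.append(dist * (N - 1) / (num_seg * k) / k)
        if lengths:
            log_inv_k.append(np.log(1.0 / k))
            log_L.append(np.log(np.mean(lengths)))
    slope, _ = np.polyfit(log_inv_k, log_L, 1)
    return slope
```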
Market Interpretation:
• D < 1.35 : Strong trending, persistent (Hurst > 0.5)
- TRENDING regime
- Momentum strategies favored
- Breakouts likely to continue
• D = 1.35-1.45 : Moderate persistence
- PERSISTENT regime
- Trend-following with caution
- Patterns have meaning
• D = 1.45-1.55 : Random walk territory
- RANDOM regime
- Efficiency hypothesis holds
- Technical analysis least reliable
• D = 1.55-1.65 : Anti-persistent (mean-reverting)
- ANTI-PERSISTENT regime
- Oscillator strategies work
- Overbought/oversold meaningful
• D > 1.65 : Highly complex, choppy
- COMPLEX regime
- Avoid directional bets
- Wait for regime change
Signal Filtering:
Resonance signals (secondary signal type) require D < 1.5, indicating trending or persistent dynamics where momentum has meaning.
🔗 TRANSFER ENTROPY: CAUSAL INFORMATION FLOW
Transfer entropy measures directed causal influence between time series—not just correlation, but actual information transfer.
Schreiber's Definition (2000):
Transfer entropy from X to Y measures how much knowing X's past reduces uncertainty about Y's future:
TE(X→Y) = H(Y_future | Y_past) - H(Y_future | Y_past, X_past)
Where H is Shannon entropy.
Key Properties:
1. Directional : TE(X→Y) ≠ TE(Y→X) in general
2. Non-linear : Detects complex causal relationships
3. Model-free : No assumptions about functional form
4. Lag-independent : Captures delayed causal effects
Three Causal Flows Measured:
1. Volume → Price (TE_V→P):
Measures how much volume patterns predict price changes.
• TE > 0 : Volume provides predictive information about price
- Institutional participation driving moves
- Volume confirms direction
- High reliability
• TE ≈ 0 : No causal flow (weak volume/price relationship)
- Volume uninformative
- Caution on signals
• TE < 0 (rare): Suggests price leading volume
- Potentially manipulated or thin market
2. Volatility → Momentum (TE_σ→M):
Does volatility expansion predict momentum changes?
• Positive TE : Volatility precedes momentum shifts
- Breakout dynamics
- Regime transitions
3. Structure → Price (TE_S→P):
Do support/resistance patterns causally influence price?
• Positive TE : Structural levels have causal impact
- Technical levels matter
- Market respects structure
Net Causal Flow:
Net_Flow = TE_V→P + 0.5·TE_σ→M + TE_S→P
• Net > +0.1 : Bullish causal structure
• Net < -0.1 : Bearish causal structure
• |Net| < 0.1 : Neutral/unclear causation
Causal Gate:
For signal confirmation, DRP requires:
• Buy signals : TE_V→P > 0 AND Net_Flow > 0.05
• Sell signals : TE_V→P > 0 AND Net_Flow < -0.05
This ensures volume is actually driving price (causal support exists), not just correlated noise.
Implementation Note:
Computing true transfer entropy requires discretizing continuous data into bins (default 6 bins) and estimating joint probability distributions. DRP uses a hybrid approach combining TE theory with autocorrelation structure and lagged cross-correlation to approximate information transfer in a computationally efficient manner.
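As a reference for what that hybrid method approximates, here is a direct plug-in estimator of binned transfer entropy in Python. It implements Schreiber's definition on discretized data; the indicator itself uses the faster approximation described above, so treat this as the textbook baseline, not its implementation.
```python
from collections import Counter
import numpy as np

def transfer_entropy(x, y, bins=6, lag=1):
    """TE(X -> Y) = H(Y_t | Y_{t-lag}) - H(Y_t | Y_{t-lag}, X_{t-lag})."""
    xd = np.digitize(x, np.histogram_bin_edges(x, bins)[1:-1])
    yd = np.digitize(y, np.histogram_bin_edges(y, bins)[1:-1])
    yt, yp, xp = yd[lag:], yd[:-lag], xd[:-lag]

    def joint_entropy(*cols):
        counts = Counter(zip(*cols))
        n = len(cols[0])
        return -sum((c / n) * np.log(c / n) for c in counts.values())

    # H(A | B) = H(A, B) - H(B), applied twice.
    h_given_past = joint_entropy(yt, yp) - joint_entropy(yp)
    h_given_both = joint_entropy(yt, yp, xp) - joint_entropy(yp, xp)
    return h_given_past - h_given_both
```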
🌊 HILBERT PHASE COHERENCE
Phase coherence measures synchronization across market dimensions using Hilbert transform analysis.
Hilbert Transform Theory:
For a signal x(t), the Hilbert transform H[x](t) creates an analytic signal:
z(t) = x(t) + i·H[x](t) = A(t)·e^(iφ(t))
Where:
• A(t) = Instantaneous amplitude
• φ(t) = Instantaneous phase
Instantaneous Phase:
φ(t) = arctan(H[x](t) / x(t))
The phase represents where the signal is in its natural cycle—analogous to position on a unit circle.
Four Dimensions Analyzed:
1. Momentum Phase : Phase of price rate-of-change
2. Volume Phase : Phase of volume intensity
3. Volatility Phase : Phase of ATR cycles
4. Structure Phase : Phase of position within range
Phase Locking Value (PLV):
For two signals with phases φ₁(t) and φ₂(t), PLV measures phase synchronization:
PLV = |⟨e^(i(φ₁(t) - φ₂(t)))⟩|
Where ⟨·⟩ is time average over window.
Interpretation:
• PLV = 0 : Completely random phase relationship (no synchronization)
• PLV = 0.5 : Moderate phase locking
• PLV = 1 : Perfect synchronization (phases locked)
Pairwise PLV Calculations:
• PLV_momentum-volume : Are momentum and volume cycles synchronized?
• PLV_momentum-structure : Are momentum cycles aligned with structure?
• PLV_volume-structure : Are volume and structural patterns in phase?
Overall Phase Coherence:
Coherence = (PLV_mom-vol + PLV_mom-struct + PLV_vol-struct) / 3
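A Python sketch of PLV and the three-pair coherence average, using SciPy's FFT-based Hilbert transform. Pine Script has no FFT, so the indicator presumably approximates the phase extraction; this shows the reference calculation.
```python
import numpy as np
from scipy.signal import hilbert

def plv(a, b):
    """Phase locking value: |time average of e^{i(phi1 - phi2)}|."""
    phi1 = np.angle(hilbert(a - np.mean(a)))
    phi2 = np.angle(hilbert(b - np.mean(b)))
    return float(np.abs(np.mean(np.exp(1j * (phi1 - phi2)))))

def phase_coherence(momentum, volume, structure):
    """Average of the three pairwise PLVs, as in the formula above."""
    return (plv(momentum, volume)
            + plv(momentum, structure)
            + plv(volume, structure)) / 3.0
```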
Signal Confirmation:
Emergence signals require coherence ≥ threshold (default 0.70):
• Below 0.70: Dimensions not synchronized, no coherent market state
• Above 0.70: Dimensions in phase, coherent behavior emerging
Coherence Direction:
The summed phase angles indicate whether synchronized dimensions point bullish or bearish:
Direction = sin(φ_momentum) + 0.5·sin(φ_volume) + 0.5·sin(φ_structure)
• Direction > 0 : Phases pointing upward (bullish synchronization)
• Direction < 0 : Phases pointing downward (bearish synchronization)
🌀 EMERGENCE SCORE: MULTI-DIMENSIONAL ALIGNMENT
The emergence score aggregates all complexity metrics into a single 0-1 value representing market coherence.
Eight Components with Weights:
1. Phase Coherence (20%):
Direct contribution: coherence × 0.20
Measures dimensional synchronization.
2. Entropy Regime (15%):
Contribution: (0.6 - H_perm) / 0.6 × 0.15 if H < 0.6, else 0
Rewards low entropy (ordered, predictable states).
3. Lyapunov Stability (12%):
• λ < 0 (stable): +0.12
• |λ| < 0.1 (critical): +0.08
• λ > 0.2 (chaotic): +0.0
Requires stable, predictable dynamics.
4. Fractal Dimension Trending (12%):
Contribution: (1.45 - D) / 0.45 × 0.12 if D < 1.45, else 0
Rewards trending fractal structure (D < 1.45).
5. Dimensional Resonance (12%):
Contribution: |dimensional_resonance| × 0.12
Measures alignment across momentum, volume, structure, volatility dimensions.
6. Causal Flow Strength (9%):
Contribution: |net_causal_flow| × 0.09
Rewards strong causal relationships.
7. Phase Space Embedding (10%):
Contribution: min(|phase_magnitude_norm|, 3.0) / 3.0 × 0.10 if |magnitude| > 1.0
Rewards strong trajectory in reconstructed phase space.
8. Recurrence Quality (10%):
Contribution: determinism × 0.10 if DET > 0.3 AND 0.1 < RR < 0.8
Rewards deterministic patterns with moderate recurrence.
Total Emergence Score:
E = Σ(components) ∈ [0, 1]
Capped at 1.0 maximum.
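Restating the eight weighted components as one function (thresholds and weights taken from the list above; variable names are illustrative, not the script's identifiers):
```python
def emergence_score(coherence, h_perm, lyap, fd, resonance,
                    net_flow, phase_mag_norm, det, rr):
    """Aggregate the eight components; result capped at 1.0."""
    e = coherence * 0.20                                   # 1. phase coherence
    if h_perm < 0.6:                                       # 2. entropy regime
        e += (0.6 - h_perm) / 0.6 * 0.15
    if lyap < 0:                                           # 3. Lyapunov stability
        e += 0.12
    elif abs(lyap) < 0.1:
        e += 0.08
    if fd < 1.45:                                          # 4. fractal trending
        e += (1.45 - fd) / 0.45 * 0.12
    e += abs(resonance) * 0.12                             # 5. dimensional resonance
    e += abs(net_flow) * 0.09                              # 6. causal flow strength
    if abs(phase_mag_norm) > 1.0:                          # 7. phase-space embedding
        e += min(abs(phase_mag_norm), 3.0) / 3.0 * 0.10
    if det > 0.3 and 0.1 < rr < 0.8:                       # 8. recurrence quality
        e += det * 0.10
    return min(e, 1.0)
```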
Emergence Direction:
Separate calculation determining bullish vs bearish:
• Dimensional resonance sign
• Net causal flow sign
• Phase magnitude correlation with momentum
Signal Threshold:
Default emergence_threshold = 0.75 means 75% of maximum possible emergence score required to trigger signals.
Why Emergence Matters:
Traditional indicators measure single dimensions. Emergence detects self-organization —when multiple independent dimensions spontaneously align. This is the market equivalent of a phase transition in physics, where microscopic chaos gives way to macroscopic order.
These are the highest-probability trade opportunities because the entire system is resonating in the same direction.
🎯 SIGNAL GENERATION: EMERGENCE vs RESONANCE
DRP generates two tiers of signals with different requirements:
TIER 1: EMERGENCE SIGNALS (Primary)
Requirements:
1. Emergence score ≥ threshold (default 0.75)
2. Phase coherence ≥ threshold (default 0.70)
3. Emergence direction > 0.2 (bullish) or < -0.2 (bearish)
4. Causal gate passed (if enabled): TE_V→P > 0 and net_flow confirms direction
5. Stability zone (if enabled): λ < 0 or |λ| < 0.1
6. Price confirmation: Close > open (bulls) or close < open (bears)
7. Cooldown satisfied: bars_since_signal ≥ cooldown_period
EMERGENCE BUY:
• All above conditions met with bullish direction
• Market has achieved coherent bullish state
• Multiple dimensions synchronized upward
EMERGENCE SELL:
• All above conditions met with bearish direction
• Market has achieved coherent bearish state
• Multiple dimensions synchronized downward
Premium Emergence:
When signal_quality (emergence_score × phase_coherence) > 0.7:
• Displayed as ★ star symbol
• Highest conviction trades
• Maximum dimensional alignment
Standard Emergence:
When signal_quality is 0.5-0.7:
• Displayed as ◆ diamond symbol
• Strong signals but not perfect alignment
TIER 2: RESONANCE SIGNALS (Secondary)
Requirements:
1. Dimensional resonance > +0.6 (bullish) or < -0.6 (bearish)
2. Fractal dimension < 1.5 (trending/persistent regime)
3. Price confirmation matches direction
4. NOT in chaotic regime (λ < 0.2)
5. Cooldown satisfied
6. NO emergence signal firing (resonance is fallback)
RESONANCE BUY:
• Dimensional alignment without full emergence
• Trending fractal structure
• Moderate conviction
RESONANCE SELL:
• Dimensional alignment without full emergence
• Bearish resonance with trending structure
• Moderate conviction
Displayed as small ▲/▼ triangles with transparency.
Signal Hierarchy:
IF emergence conditions met:
Fire EMERGENCE signal (★ or ◆)
ELSE IF resonance conditions met:
Fire RESONANCE signal (▲ or ▼)
ELSE:
No signal
Cooldown System:
After any signal fires, cooldown_period (default 5 bars) must elapse before next signal. This prevents signal clustering during persistent conditions.
Cooldown tracks using bar_index:
bars_since_signal = current_bar_index - last_signal_bar_index
cooldown_ok = bars_since_signal >= cooldown_period
🎨 VISUAL SYSTEM: MULTI-LAYER COMPLEXITY
DRP provides rich visual feedback across four distinct layers:
LAYER 1: COHERENCE FIELD (Background)
Colored background intensity based on phase coherence:
• No background : Coherence < 0.5 (incoherent state)
• Faint glow : Coherence 0.5-0.7 (building coherence)
• Stronger glow : Coherence > 0.7 (coherent state)
Color:
• Cyan/teal: Bullish coherence (direction > 0)
• Red/magenta: Bearish coherence (direction < 0)
• Blue: Neutral coherence (direction ≈ 0)
Transparency: 98 minus (coherence_intensity × 10), so higher coherence = more visible.
LAYER 2: STABILITY/CHAOS ZONES
Background color indicating Lyapunov regime:
• Green tint (95% transparent): λ < 0, STABLE zone
- Safe to trade
- Patterns meaningful
• Gold tint (90% transparent): |λ| < 0.1, CRITICAL zone
- Edge of chaos
- Moderate risk
• Red tint (85% transparent): λ > 0.2, CHAOTIC zone
- Avoid trading
- Unpredictable behavior
LAYER 3: DIMENSIONAL RIBBONS
Three EMAs representing dimensional structure:
• Fast ribbon : EMA(8) in cyan/teal (fast dynamics)
• Medium ribbon : EMA(21) in blue (intermediate)
• Slow ribbon : EMA(55) in red/magenta (slow dynamics)
Provides visual reference for multi-scale structure without cluttering with raw phase space data.
LAYER 4: CAUSAL FLOW LINE
A thicker line plotted at EMA(13) colored by net causal flow:
• Cyan/teal : Net_flow > +0.1 (bullish causation)
• Red/magenta : Net_flow < -0.1 (bearish causation)
• Gray : |Net_flow| < 0.1 (neutral causation)
Shows real-time direction of information flow.
EMERGENCE FLASH:
Strong background flash when emergence signals fire:
• Cyan flash for emergence buy
• Red flash for emergence sell
• 80% transparency for visibility without obscuring price
📊 COMPREHENSIVE DASHBOARD
Real-time monitoring of all complexity metrics:
HEADER:
• 🌀 DRP branding with gold accent
CORE METRICS:
EMERGENCE:
• Progress bar (█ filled, ░ empty) showing 0-100%
• Percentage value
• Direction arrow (↗ bull, ↘ bear, → neutral)
• Color-coded: Green/gold if active, gray if low
COHERENCE:
• Progress bar showing phase locking value
• Percentage value
• Checkmark ✓ if ≥ threshold, circle ○ if below
• Color-coded: Cyan if coherent, gray if not
COMPLEXITY SECTION:
ENTROPY:
• Regime name (CRYSTALLINE/ORDERED/MODERATE/COMPLEX/CHAOTIC)
• Numerical value (0.00-1.00)
• Color: Green (ordered), gold (moderate), red (chaotic)
LYAPUNOV:
• State (STABLE/CRITICAL/CHAOTIC)
• Numerical value (typically -0.5 to +0.5)
• Status indicator: ● stable, ◐ critical, ○ chaotic
• Color-coded by state
FRACTAL:
• Regime (TRENDING/PERSISTENT/RANDOM/ANTI-PERSIST/COMPLEX)
• Dimension value (1.0-2.0)
• Color: Cyan (trending), gold (random), red (complex)
PHASE-SPACE:
• State (STRONG/ACTIVE/QUIET)
• Normalized magnitude value
• Parameters display: d=5 τ=3
CAUSAL SECTION:
CAUSAL:
• Direction (BULL/BEAR/NEUTRAL)
• Net flow value
• Flow indicator: →P (to price), P← (from price), ○ (neutral)
V→P:
• Volume-to-price transfer entropy
• Small display showing specific TE value
DIMENSIONAL SECTION:
RESONANCE:
• Progress bar of absolute resonance
• Signed value (-1 to +1)
• Color-coded by direction
RECURRENCE:
• Recurrence rate percentage
• Determinism percentage display
• Color-coded: Green if high quality
STATE SECTION:
STATE:
• Current mode: EMERGENCE / RESONANCE / CHAOS / SCANNING
• Icon: 🚀 (emergence buy), 💫 (emergence sell), ▲ (resonance buy), ▼ (resonance sell), ⚠ (chaos), ◎ (scanning)
• Color-coded by state
SIGNALS:
• E: count of emergence signals
• R: count of resonance signals
⚙️ KEY PARAMETERS EXPLAINED
Phase Space Configuration:
• Embedding Dimension (3-10, default 5): Reconstruction dimension
- Low (3-4): Simple dynamics, faster computation
- Medium (5-6): Balanced (recommended)
- High (7-10): Complex dynamics, more data needed
- Rule: d ≥ 2D+1 where D is true dimension
• Time Delay (τ) (1-10, default 3): Embedding lag
- Fast markets: 1-2
- Normal: 3-4
- Slow markets: 5-10
- Optimal: First minimum of mutual information (often 2-4)
• Recurrence Threshold (ε) (0.01-0.5, default 0.10): Phase space proximity
- Tight (0.01-0.05): Very similar states only
- Medium (0.08-0.15): Balanced
- Loose (0.20-0.50): Liberal matching
Entropy & Complexity:
• Permutation Order (3-7, default 4): Pattern length
- Low (3): 6 patterns, fast but coarse
- Medium (4-5): 24-120 patterns, balanced
- High (6-7): 720-5040 patterns, fine-grained
- Note: Requires window ≫ order! (factorial of the order) for stability
• Entropy Window (15-100, default 30): Lookback for entropy
- Short (15-25): Responsive to changes
- Medium (30-50): Stable measure
- Long (60-100): Very smooth, slow adaptation
• Lyapunov Window (10-50, default 20): Stability estimation window
- Short (10-15): Fast chaos detection
- Medium (20-30): Balanced
- Long (40-50): Stable λ estimate
Causal Inference:
• Enable Transfer Entropy (default ON): Causality analysis
- Keep ON for full system functionality
• TE History Length (2-15, default 5): Causal lookback
- Short (2-4): Quick causal detection
- Medium (5-8): Balanced
- Long (10-15): Deep causal analysis
• TE Discretization Bins (4-12, default 6): Binning granularity
- Few (4-5): Coarse, robust, needs less data
- Medium (6-8): Balanced
- Many (9-12): Fine-grained, needs more data
Phase Coherence:
• Enable Phase Coherence (default ON): Synchronization detection
- Keep ON for emergence detection
• Coherence Threshold (0.3-0.95, default 0.70): PLV requirement
- Loose (0.3-0.5): More signals, lower quality
- Balanced (0.6-0.75): Recommended
- Strict (0.8-0.95): Rare, highest quality
• Hilbert Smoothing (3-20, default 8): Phase smoothing
- Low (3-5): Responsive, noisier
- Medium (6-10): Balanced
- High (12-20): Smooth, more lag
Fractal Analysis:
• Enable Fractal Dimension (default ON): Complexity measurement
- Keep ON for full analysis
• Fractal K-max (4-20, default 8): Scaling range
- Low (4-6): Faster, less accurate
- Medium (7-10): Balanced
- High (12-20): Accurate, slower
• Fractal Window (30-200, default 50): FD lookback
- Short (30-50): Responsive FD
- Medium (60-100): Stable FD
- Long (120-200): Very smooth FD
Emergence Detection:
• Emergence Threshold (0.5-0.95, default 0.75): Minimum coherence
- Sensitive (0.5-0.65): More signals
- Balanced (0.7-0.8): Recommended
- Strict (0.85-0.95): Rare signals
• Require Causal Gate (default ON): TE confirmation
- ON: Only signal when causality confirms
- OFF: Allow signals without causal support
• Require Stability Zone (default ON): Lyapunov filter
- ON: Only signal when λ < 0 (stable) or |λ| < 0.1 (critical)
- OFF: Allow signals in chaotic regimes (risky)
• Signal Cooldown (1-50, default 5): Minimum bars between signals
- Fast (1-3): Rapid signal generation
- Normal (4-8): Balanced
- Slow (10-20): Very selective
- Ultra (25-50): Only major regime changes
Signal Configuration:
• Momentum Period (5-50, default 14): ROC calculation
• Structure Lookback (10-100, default 20): Support/resistance range
• Volatility Period (5-50, default 14): ATR calculation
• Volume MA Period (10-50, default 20): Volume normalization
Visual Settings:
• Customizable color scheme for all elements
• Toggle visibility for each layer independently
• Dashboard position (4 corners) and size (tiny/small/normal)
🎓 PROFESSIONAL USAGE PROTOCOL
Phase 1: System Familiarization (Week 1)
Goal: Understand complexity metrics and dashboard interpretation
Setup:
• Enable all features with default parameters
• Watch dashboard metrics for 500+ bars
• Do NOT trade yet
Actions:
• Observe emergence score patterns relative to price moves
• Note coherence threshold crossings and subsequent price action
• Watch entropy regime transitions (ORDERED → COMPLEX → CHAOTIC)
• Correlate Lyapunov state with signal reliability
• Track which signals appear (emergence vs resonance frequency)
Key Learning:
• When does emergence peak? (usually before major moves)
• What entropy regime produces best signals? (typically ORDERED or MODERATE)
• Does your instrument respect stability zones? (stable λ = better signals)
Phase 2: Parameter Optimization (Week 2)
Goal: Tune system to instrument characteristics
Requirements:
• Understand basic dashboard metrics from Phase 1
• Have 1000+ bars of history loaded
Embedding Dimension & Time Delay:
• If signals very rare: Try lower dimension (d=3-4) or shorter delay (τ=2)
• If signals too frequent: Try higher dimension (d=6-7) or longer delay (τ=4-5)
• Sweet spot: 4-8 emergence signals per 100 bars
Coherence Threshold:
• Check dashboard: What's typical coherence range?
• If coherence rarely exceeds 0.70: Lower threshold to 0.60-0.65
• If coherence often >0.80: Can raise threshold to 0.75-0.80
• Goal: Signals fire during top 20-30% of coherence values
Emergence Threshold:
• If too few signals: Lower to 0.65-0.70
• If too many signals: Raise to 0.80-0.85
• Balance with coherence threshold—both must be met
Phase 3: Signal Quality Assessment (Weeks 3-4)
Goal: Verify signals have edge via paper trading
Requirements:
• Parameters optimized per Phase 2
• 50+ signals generated
• Detailed notes on each signal
Paper Trading Protocol:
• Take EVERY emergence signal (★ and ◆)
• Optional: Take resonance signals (▲/▼) separately to compare
• Use simple exit: 2R target, 1R stop (ATR-based)
• Track: Win rate, average R-multiple, maximum consecutive losses
Quality Metrics:
• Premium emergence (★) : Should achieve >55% WR
• Standard emergence (◆) : Should achieve >50% WR
• Resonance signals : Should achieve >45% WR
• Overall : If <45% WR, system not suitable for this instrument/timeframe
Red Flags:
• Win rate <40%: Wrong instrument or parameters need major adjustment
• Max consecutive losses >10: System not working in current regime
• Profit factor <1.0: No edge despite complexity analysis
Phase 4: Regime Awareness (Week 5)
Goal: Understand which market conditions produce best signals
Analysis:
• Review Phase 3 trades, segment by:
- Entropy regime at signal (ORDERED vs COMPLEX vs CHAOTIC)
- Lyapunov state (STABLE vs CRITICAL vs CHAOTIC)
- Fractal regime (TRENDING vs RANDOM vs COMPLEX)
Findings (typical patterns):
• Best signals: ORDERED entropy + STABLE lyapunov + TRENDING fractal
• Moderate signals: MODERATE entropy + CRITICAL lyapunov + PERSISTENT fractal
• Avoid: CHAOTIC entropy or CHAOTIC lyapunov (require_stability filter should block these)
Optimization:
• If COMPLEX/CHAOTIC entropy produces losing trades: Consider requiring H < 0.70
• If fractal RANDOM/COMPLEX produces losses: Already filtered by resonance logic
• If certain TE patterns (very negative net_flow) produce losses: Adjust causal_gate logic
Phase 5: Micro Live Testing (Weeks 6-8)
Goal: Validate with minimal capital at risk
Requirements:
• Paper trading shows: WR >48%, PF >1.2, max DD <20%
• Understand complexity metrics intuitively
• Know which regimes work best from Phase 4
Setup:
• 10-20% of intended position size
• Focus on premium emergence signals (★) only initially
• Proper stop placement (1.5-2.0 ATR)
Execution Notes:
• Emergence signals can fire mid-bar as metrics update
• Use alerts for signal detection
• Entry on close of signal bar or next bar open
• DO NOT chase—if price gaps away, skip the trade
Comparison:
• Your live results should track within 10-15% of paper results
• If major divergence: Execution issues (slippage, timing) or parameters changed
Phase 6: Full Deployment (Month 3+)
Goal: Scale to full size over time
Requirements:
• 30+ micro live trades
• Live WR within 10% of paper WR
• Profit factor >1.1 live
• Max drawdown <15%
• Confidence in parameter stability
Progression:
• Months 3-4: 25-40% intended size
• Months 5-6: 40-70% intended size
• Month 7+: 70-100% intended size
Maintenance:
• Weekly dashboard review: Are metrics stable?
• Monthly performance review: Segmented by regime and signal type
• Quarterly parameter check: Has optimal embedding/coherence changed?
Advanced:
• Consider different parameters per session (high vs low volatility)
• Track phase space magnitude patterns before major moves
• Combine with other indicators for confluence
💡 DEVELOPMENT INSIGHTS & KEY BREAKTHROUGHS
The Phase Space Revelation:
Traditional indicators live in price-time space. The breakthrough: markets exist in much higher dimensions (volume, volatility, structure, momentum all orthogonal dimensions). Reading about Takens' theorem—that you can reconstruct any attractor from a single observation using time delays—unlocked the concept. Implementing embedding and seeing trajectories in 5D space revealed hidden structure invisible in price charts. Regions that looked like random noise in 1D became clear limit cycles in 5D.
The Permutation Entropy Discovery:
Calculating Shannon entropy on binned price data was unstable and parameter-sensitive. Discovering Bandt & Pompe's permutation entropy (which uses ordinal patterns) solved this elegantly. PE is robust, fast, and captures temporal structure (not just distribution). Testing showed PE < 0.5 periods had 18% higher signal win rate than PE > 0.7 periods. Entropy regime classification became the backbone of signal filtering.
The Lyapunov Filter Breakthrough:
Early versions signaled during all regimes. Win rate hovered at 42%—barely better than random. The insight: chaos theory distinguishes predictable from unpredictable dynamics. Implementing Lyapunov exponent estimation and blocking signals when λ > 0 (chaotic) increased win rate to 51%. Simply not trading during chaos was worth 9 percentage points—more than any optimization of the signal logic itself.
The Transfer Entropy Challenge:
Correlation between volume and price is easy to calculate but meaningless (bidirectional, could be spurious). Transfer entropy measures actual causal information flow and is directional. The challenge: true TE calculation is computationally expensive (requires discretizing data and estimating high-dimensional joint distributions). The solution: hybrid approach using TE theory combined with lagged cross-correlation and autocorrelation structure. Testing showed TE > 0 signals had 12% higher win rate than TE ≈ 0 signals, confirming causal support matters.
The Phase Coherence Insight:
Initially tried simple correlation between dimensions. Not predictive. Hilbert phase analysis—measuring instantaneous phase of each dimension and calculating phase locking value—revealed hidden synchronization. When PLV > 0.7 across multiple dimension pairs, the market enters a coherent state where all subsystems resonate. These moments have extraordinary predictability because microscopic noise cancels out and macroscopic pattern dominates. Emergence signals require high PLV for this reason.
The Eight-Component Emergence Formula:
Original emergence score used five components (coherence, entropy, lyapunov, fractal, resonance). Performance was good but not exceptional. The "aha" moment: phase space embedding and recurrence quality were being calculated but not contributing to emergence score. Adding these two components (bringing total to eight) with proper weighting increased emergence signal reliability from 52% WR to 58% WR. All calculated metrics must contribute to the final score. If you compute something, use it.
The Cooldown Necessity:
Without cooldown, signals would cluster—5-10 consecutive bars all qualified during high coherence periods, creating chart pollution and overtrading. Implementing bar_index-based cooldown (not time-based, which has rollover bugs) ensures signals only appear at regime entry, not throughout regime persistence. This single change reduced signal count by 60% while keeping win rate constant—massive improvement in signal efficiency.
🚨 LIMITATIONS & CRITICAL ASSUMPTIONS
What This System IS NOT:
• NOT Predictive : DRP doesn't forecast prices. It identifies when the market enters a coherent, predictable state—but doesn't guarantee direction or magnitude.
• NOT Holy Grail : Typical performance is 50-58% win rate with 1.5-2.0 avg R-multiple. This is probabilistic edge from complexity analysis, not certainty.
• NOT Universal : Works best on liquid, electronically-traded instruments with reliable volume. Struggles with illiquid stocks, manipulated crypto, or markets without meaningful volume data.
• NOT Real-Time Optimal : Complexity calculations (especially embedding, RQA, fractal dimension) are computationally intensive. Dashboard updates may lag by 1-2 seconds on slower connections.
• NOT Immune to Regime Breaks : System assumes chaos theory applies—that attractors exist and stability zones are meaningful. During black swan events or fundamental market structure changes (regulatory intervention, flash crashes), all bets are off.
Core Assumptions:
1. Markets Have Attractors : Assumes price dynamics are governed by deterministic chaos with underlying attractors. Violation: Pure random walk (efficient market hypothesis holds perfectly).
2. Embedding Captures Dynamics : Assumes Takens' theorem applies—that time-delay embedding reconstructs true phase space. Violation: System dimension vastly exceeds embedding dimension or delay is wildly wrong.
3. Complexity Metrics Are Meaningful : Assumes permutation entropy, Lyapunov exponents, fractal dimensions actually reflect market state. Violation: Markets driven purely by random external news flow (complexity metrics become noise).
4. Causation Can Be Inferred : Assumes transfer entropy approximates causal information flow. Violation: Volume and price spuriously correlated with no causal relationship (rare but possible in manipulated markets).
5. Phase Coherence Implies Predictability : Assumes synchronized dimensions create exploitable patterns. Violation: Coherence by chance during random period (false positive).
6. Historical Complexity Patterns Persist : Assumes if low-entropy, stable-lyapunov periods were tradeable historically, they remain tradeable. Violation: Fundamental regime change (market structure shifts, e.g., transition from floor trading to HFT).
Performs Best On:
• ES, NQ, RTY (major US index futures - high liquidity, clean volume data)
• Major forex pairs: EUR/USD, GBP/USD, USD/JPY (24hr markets, good for phase analysis)
• Liquid commodities: CL (crude oil), GC (gold), NG (natural gas)
• Large-cap stocks: AAPL, MSFT, GOOGL, TSLA (>$10M daily volume, meaningful structure)
• Major crypto on reputable exchanges: BTC, ETH on Coinbase/Kraken (avoid Binance due to manipulation)
Performs Poorly On:
• Low-volume stocks (<$1M daily volume) - insufficient liquidity for complexity analysis
• Exotic forex pairs - erratic spreads, thin volume
• Illiquid altcoins - wash trading, bot manipulation invalidates volume analysis
• Pre-market/after-hours - gappy, thin, different dynamics
• Binary events (earnings, FDA approvals) - discontinuous jumps violate dynamical systems assumptions
• Highly manipulated instruments - spoofing and layering create false coherence
Known Weaknesses:
• Computational Lag : Complexity calculations require iterating over windows. On slow connections, dashboard may update 1-2 seconds after bar close. Signals may appear delayed.
• Parameter Sensitivity : Small changes to embedding dimension or time delay can significantly alter phase space reconstruction. Requires careful calibration per instrument.
• Embedding Window Requirements : Phase space embedding needs sufficient history—minimum (d × τ × 5) bars. If embedding_dimension=5 and time_delay=3, need 75+ bars. Early bars will be unreliable.
• Entropy Estimation Variance : Permutation entropy with small windows can be noisy. Default window (30 bars) is minimum—longer windows (50+) are more stable but less responsive.
• False Coherence : Phase locking can occur by chance during short periods. Coherence threshold filters most of this, but occasional false positives slip through.
• Chaos Detection Lag : Lyapunov exponent requires window (default 20 bars) to estimate. Market can enter chaos and produce bad signal before λ > 0 is detected. Stability filter helps but doesn't eliminate this.
• Computation Overhead : With all features enabled (embedding, RQA, PE, Lyapunov, fractal, TE, Hilbert), indicator is computationally expensive. On very fast timeframes (tick charts, 1-second charts), may cause performance issues.
⚠️ RISK DISCLOSURE
Trading futures, forex, stocks, options, and cryptocurrencies involves substantial risk of loss and is not suitable for all investors. Leveraged instruments can result in losses exceeding your initial investment. Past performance, whether backtested or live, is not indicative of future results.
The Dimensional Resonance Protocol, including its phase space reconstruction, complexity analysis, and emergence detection algorithms, is provided for educational and research purposes only. It is not financial advice, investment advice, or a recommendation to buy or sell any security or instrument.
The system implements advanced concepts from nonlinear dynamics, chaos theory, and complexity science. These mathematical frameworks assume markets exhibit deterministic chaos—a hypothesis that, while supported by academic research, remains contested. Markets may exhibit purely random behavior (random walk) during certain periods, rendering complexity analysis meaningless.
Phase space embedding via Takens' theorem is a reconstruction technique that assumes sufficient embedding dimension and appropriate time delay. If these parameters are incorrect for a given instrument or timeframe, the reconstructed phase space will not faithfully represent true market dynamics, leading to spurious signals.
Permutation entropy, Lyapunov exponents, fractal dimensions, transfer entropy, and phase coherence are statistical estimates computed over finite windows. All have inherent estimation error. Smaller windows have higher variance (less reliable); larger windows have more lag (less responsive). There is no universally optimal window size.
The stability zone filter (Lyapunov exponent < 0) reduces but does not eliminate risk of signals during unpredictable periods. Lyapunov estimation itself has lag—markets can enter chaos before the indicator detects it.
Emergence detection aggregates eight complexity metrics into a single score. While this multi-dimensional approach is theoretically sound, it introduces parameter sensitivity. Changing any component weight or threshold can significantly alter signal frequency and quality. Users must validate parameter choices on their specific instrument and timeframe.
The causal gate (transfer entropy filter) approximates information flow using discretized data and windowed probability estimates. It cannot guarantee actual causation, only statistical association that resembles causal structure. Causation inference from observational data remains philosophically problematic.
Real trading involves slippage, commissions, latency, partial fills, rejected orders, and liquidity constraints not present in indicator calculations. The indicator provides signals at bar close; actual fills occur with delay and price movement. Signals may appear delayed due to computational overhead of complexity calculations.
Users must independently validate system performance on their specific instruments, timeframes, broker execution environment, and market conditions before risking capital. Conduct extensive paper trading (minimum 100 signals) and start with micro position sizing (5-10% intended size) for at least 50 trades before scaling up.
Never risk more capital than you can afford to lose completely. Use proper position sizing (0.5-2% risk per trade maximum). Implement stop losses on every trade. Maintain adequate margin/capital reserves. Understand that most retail traders lose money. Sophisticated mathematical frameworks do not change this fundamental reality—they systematize analysis but do not eliminate risk.
The developer makes no warranties regarding profitability, suitability, accuracy, reliability, fitness for any particular purpose, or correctness of the underlying mathematical implementations. Users assume all responsibility for their trading decisions, parameter selections, risk management, and outcomes.
By using this indicator, you acknowledge that you have read, understood, and accepted these risk disclosures and limitations, and you accept full responsibility for all trading activity and potential losses.
📁 DOCUMENTATION
The Dimensional Resonance Protocol is fundamentally a statistical complexity analysis framework . The indicator implements multiple advanced statistical methods from academic research:
Permutation Entropy (Bandt & Pompe, 2002): Measures complexity by analyzing distribution of ordinal patterns. Pure statistical concept from information theory.
Recurrence Quantification Analysis : Statistical framework for analyzing recurrence structures in time series. Computes recurrence rate, determinism, and diagonal line statistics.
Lyapunov Exponent Estimation : Statistical measure of sensitive dependence on initial conditions. Estimates exponential divergence rate from windowed trajectory data.
Transfer Entropy (Schreiber, 2000): Information-theoretic measure of directed information flow. Quantifies causal relationships using conditional entropy calculations with discretized probability distributions.
Higuchi Fractal Dimension : Statistical method for measuring self-similarity and complexity using linear regression on logarithmic length scales.
Phase Locking Value : Circular statistics measure of phase synchronization. Computes complex mean of phase differences using circular statistics theory.
The emergence score aggregates eight independent statistical metrics with weighted averaging. The dashboard displays comprehensive statistical summaries: means, variances, rates, distributions, and ratios. Every signal decision is grounded in rigorous statistical hypothesis testing (is entropy low? is lyapunov negative? is coherence above threshold?).
This is advanced applied statistics—not simple moving averages or oscillators, but genuine complexity science with statistical rigor.
Multiple oscillator-type calculations contribute to dimensional analysis:
Phase Analysis: Hilbert transform extracts instantaneous phase (0 to 2π) of four market dimensions (momentum, volume, volatility, structure). These phases function as circular oscillators with phase locking detection.
Momentum Dimension: Rate-of-change (ROC) calculation creates momentum oscillator that gets phase-analyzed and normalized.
Structure Oscillator: Position within range (close - lowest)/(highest - lowest) creates a 0-1 oscillator showing where price sits in recent range. This gets embedded and phase-analyzed.
Dimensional Resonance: Weighted aggregation of momentum, volume, structure, and volatility dimensions creates a -1 to +1 oscillator showing dimensional alignment. Similar to traditional oscillators but multi-dimensional.
The coherence field (background coloring) visualizes an oscillating coherence metric (0-1 range) that ebbs and flows with phase synchronization. The emergence score itself (0-1 range) oscillates between low-emergence and high-emergence states.
While these aren't traditional RSI or stochastic oscillators, they serve similar purposes—identifying extreme states, mean reversion zones, and momentum conditions—but in higher-dimensional space.
Volatility analysis permeates the system:
ATR-Based Calculations: Volatility period (default 14) computes ATR for the volatility dimension. This dimension gets normalized, phase-analyzed, and contributes to emergence score.
Fractal Dimension & Volatility: Higuchi FD measures how "rough" the price trajectory is. Higher FD (>1.6) correlates with higher volatility/choppiness. FD < 1.4 indicates smooth trends (lower effective volatility).
Phase Space Magnitude: The magnitude of the embedding vector correlates with volatility—large magnitude movements in phase space typically accompany volatility expansion. This is the "energy" of the market trajectory.
Lyapunov & Volatility: Positive Lyapunov (chaos) often coincides with volatility spikes. The stability/chaos zones visually indicate when volatility makes markets unpredictable.
Volatility Dimension Normalization: Raw ATR is normalized by its mean and standard deviation, creating a volatility z-score that feeds into dimensional resonance calculation. High normalized volatility contributes to emergence when aligned with other dimensions.
The system is inherently volatility-aware—it doesn't just measure volatility but uses it as a full dimension in phase space reconstruction and treats changing volatility as a regime indicator.
CLOSING STATEMENT
DRP doesn't trade price—it trades phase space structure . It doesn't chase patterns—it detects emergence . It doesn't guess at trends—it measures coherence .
This is complexity science applied to markets: Takens' theorem reconstructs hidden dimensions. Permutation entropy measures order. Lyapunov exponents detect chaos. Transfer entropy reveals causation. Hilbert phases find synchronization. Fractal dimensions quantify self-similarity.
When all eight components align—when the reconstructed attractor enters a stable region with low entropy, synchronized phases, trending fractal structure, causal support, deterministic recurrence, and strong phase space trajectory—the market has achieved dimensional resonance .
These are the highest-probability moments. Not because an indicator said so. Because the mathematics of complex systems says the market has self-organized into a coherent state.
Most indicators see shadows on the wall. DRP reconstructs the cave.
"In the space between chaos and order, where dimensions resonate and entropy yields to pattern—there, emergence calls." DRP
Taking you to school. — Dskyz, Trade with insight. Trade with anticipation.
EdgeFlow Pullback [CHE] — Icon & Visual Guide (Deep Dive)
TL;DR (1-minute read)
⏳ Hourglass = Pending verdict. A countdown runs from the signal bar until your Evaluation Window ends.
✔ Checkmark (green) = OK. After the evaluation window, price (HLC3) is on the correct side of the EMA144 for that signal’s direction.
✖ Cross (red) = Fail. After the evaluation window, price (HLC3) is on the wrong side of the EMA144.
▲ / ▼ Triangles = the actual PB Long/Short signal bar (sequence completed in time).
Small lime/red crosses = visual markers when HLC3 crosses EMA144 (context, not trade signals).
Orange line = EMA144 (baseline/trend filter).
T3 line color = Context signal: green when T3 is below HLC3, red when T3 is above HLC3.
Icon Glossary (What each symbol means)
1) ⏳ Hourglass — “Pending / Countdown”
Appears immediately when a PB signal fires (Long or Short).
Shows `⏳ currentBars / EvaluationBars` (e.g., `⏳ 7/30`).
The label stays anchored at the signal bar and its original price level (it does not drift with price).
During ⏳ you get no verdict yet. It’s simply the waiting period before grading.
2) ✔ Checkmark (green) — “Condition met”
Appears after the Evaluation Window completes.
Logic:
Long signal: HLC3 (typical price) is above EMA144 → ✔
Short signal: HLC3 is below EMA144 → ✔
The label turns green and text says “✔ … Condition met”.
This is rules-based grading, not PnL. It tells you if the post-signal structure behaved as expected.
3) ✖ Cross (red) — “Condition failed”
Appears after the Evaluation Window completes if the condition above is not met.
Label turns red with “✖ … Condition failed”.
Again: rules-based verdict, not a guarantee of profit or loss.
4) ▲ “PB Long” triangle (below bar)
Marks the exact bar where the 4-step Long sequence completed within the allowed window.
That bar is your signal bar for Long setups.
5) ▼ “PB Short” triangle (above bar, red)
Same as above, for Short setups.
6) Lime/Red “+” crosses (tiny cross markers)
Lime cross (below bar): HLC3 crosses above EMA144 (crossover).
Red cross (above bar): HLC3 crosses below EMA144 (crossunder).
These crosses are context markers; they’re not entry signals by themselves.
The Two Clocks (Don’t mix them up)
There are two different time windows at play:
1. Signal Window — “Max bars for full sequence”
A pullback signal (Long or Short) only fires if the 4-step sequence completes within this many bars.
If it takes too long: reset (no signal, no triangle, no label).
Purpose: avoid stale setups.
2. Evaluation Window — “Evaluation window after signal (bars)”
Starts after the signal bar. The label shows an ⏳ countdown.
When it reaches the set number of bars, the indicator checks whether HLC3 is on the correct side of EMA144 for the signal direction.
Then it stamps the signal with ✔ (OK) or ✖ (Fail).
Timeline sketch (Long example):
```
→ ▲ PB Long at bar t0
Label shows: ⏳ 0/EvalBars
t0+1, t0+2, ... t0+EvalBars-1 → still ⏳
At t0+EvalBars → Check HLC3 vs EMA144
Result → ✔ (green) or ✖ (red)
(Label remains anchored at t0 / signal price)
```
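In pseudocode-Python, the end-of-window verdict reduces to a single comparison (illustrative names, not the Pine source):
```python
def grade_signal(direction, hlc3_at_eval, ema144_at_eval):
    """Rules-based verdict at the end of the evaluation window:
    Long wants HLC3 above EMA144; Short wants it below."""
    if direction == "long":
        return "OK" if hlc3_at_eval > ema144_at_eval else "Fail"
    return "OK" if hlc3_at_eval < ema144_at_eval else "Fail"
```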
What Triggers the PB Signal (so you know why triangles appear)
LONG sequence (4 steps in order):
1. T3 falling (the pullback begins)
2. HLC3 crosses under EMA144
3. T3 rising (pullback ends)
4. HLC3 crosses over EMA144 → PB Long triangle
SHORT sequence (mirror):
1. T3 rising
2. HLC3 crosses over EMA144
3. T3 falling
4. HLC3 crosses under EMA144 → PB Short triangle
If steps 1→4 don’t complete in time (within Max bars for full sequence), the sequence is abandoned (no signal).
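The ordered-sequence logic behaves like a small state machine. The sketch below (Python, illustrative boolean streams with one value per bar, advancing at most one step per bar) shows the reset-on-timeout behavior:
```python
def detect_pb_long(t3_falling, cross_under, t3_rising, cross_over, max_bars):
    """Return the bar index where the 4-step long sequence completes,
    or None if it never completes within `max_bars` of step 1."""
    steps = [t3_falling, cross_under, t3_rising, cross_over]
    step, start = 0, None
    for bar in range(len(t3_falling)):
        if start is not None and bar - start > max_bars:
            step, start = 0, None          # sequence timed out: reset
        if steps[step][bar]:
            if step == 0:
                start = bar                # step 1: pullback begins
            step += 1
            if step == 4:
                return bar                 # PB Long triangle bar
    return None
```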
Lines & Colors (quick interpretation)
EMA144 (orange): your baseline trend filter.
T3 (green/red):
Green when T3 < HLC3 (price above the smoothed path; often supportive in up-moves)
Red when T3 > HLC3 (price below the smoothed path; often pressure in down-moves)
HLC3 (gray): the typical price the logic uses ( (H+L+C)/3 ).
Label Behavior (anchoring & cleanup)
Each signal creates one label at the signal bar with ⏳.
The label is position-locked: it stays at the same bar index and y-price it was born at.
After the evaluation check, the label text and color update to ✔/✖, but position stays fixed.
The indicator keeps only the last N labels (your “Show only the last N labels” input). Older ones are deleted to reduce clutter.
What You Can (and Can’t) Infer from ✔ / ✖
✔ OK: Structure behaved as intended during the evaluation window (HLC3 finished on the correct side of EMA144).
Inference: The pullback continued in the expected direction post-signal.
✖ Fail: Structure ended up opposite the expectation.
Inference: The pullback did not continue cleanly (chop, reversal, or insufficient follow-through).
> Important: ✔/✖ is not profit or loss. It’s an objective rule check. Use it to identify market regimes where your entries perform best.
Input Settings — How they change the visuals
T3 length:
Shorter → faster turns, more signals (and more noise).
Longer → smoother turns, fewer but cleaner sequences.
T3 volume factor (0–1, default 0.7):
Higher → more curvature/smoothing.
Typical sweet spot: 0.5–0.9.
EMA length (baseline) default 144:
Smaller → faster baseline, more cross events, more aggressive signals.
Larger → slower, stricter trend confirmation.
Max bars for full sequence (signal window):
Smaller → only fresh, snappy pullbacks can signal.
Larger → allows slower pullbacks to complete.
Evaluation window (after signal):
Smaller → verdict arrives quickly (less tolerance).
Larger → gives the trade more time to prove itself structurally.
Show only the last N labels:
Controls chart clutter. Increase for more history, decrease for focus.
(FYI: The “Debug” toggle exists but doesn’t draw extra overlays in this version.)
Practical Reading Flow (how to use visuals in seconds)
1. Triangles catch your eye: ▲ for Long, ▼ for Short. That’s the setup completion.
2. ⏳ label starts—don’t judge yet; let the evaluation run.
3. Watch EMA slope and T3 color for context (trend + pressure).
4. After the window: ✔/✖ stamps the outcome. Log what the market was like when you got ✔.
Common “Why did…?” Questions
Q: Why did I get no triangle even though T3 turned and EMA crossed?
A: The 4 steps must happen in order and within the Signal Window. If timing breaks, the sequence resets.
Q: Why did my label stay ⏳ for so long?
A: That’s by design until the Evaluation Window completes. The verdict only happens at the end of that window.
Q: Why is ✔/✖ different from my PnL?
A: It’s a structure check, not a profit check. It doesn’t know your entries/exits/stops.
Q: Do the small lime/red crosses mean buy/sell?
A: No. They’re context markers for HLC3↔EMA crosses, useful inside the sequence but not standalone signals.
Pro Tips (turn visuals into decisions)
Entry: Use the ▲/▼ triangle as your trigger, in trend direction (check EMA slope/market structure).
Stop: Behind the pullback swing around the signal bar.
Exit: Structure levels, R-multiples, or a reverse HLC3↔EMA cross as a trailing logic.
Tuning:
Intraday/volatile: shorter T3/EMA + tighter Signal Window.
Swing/slow: default 144 EMA + moderate windows.
Learn quickly: Filter your chart to show only ✔ or only ✖ windows in your notes; see which sessions, assets, and volatility regimes suit the system.
Disclaimer
No indicator guarantees profits. EdgeFlow Pullback [CHE] is a decision aid; always combine with solid risk management and your own judgment. Backtest, forward test, and size responsibly.
The content provided, including all code and materials, is strictly for educational and informational purposes only. It is not intended as, and should not be interpreted as, financial advice, a recommendation to buy or sell any financial instrument, or an offer of any financial product or service. All strategies, tools, and examples discussed are provided for illustrative purposes to demonstrate coding techniques and the functionality of Pine Script within a trading context.
Any results from strategies or tools provided are hypothetical, and past performance is not indicative of future results. Trading and investing involve high risk, including the potential loss of principal, and may not be suitable for all individuals. Before making any trading decisions, please consult with a qualified financial professional to understand the risks involved.
By using this script, you acknowledge and agree that any trading decisions are made solely at your discretion and risk.
Enhance your trading precision and confidence 🚀
Happy trading
Chervolino
J12Matic Builder by galgoom
A flexible Renko/tick strategy that lets you choose between two entry engines (Multi-Source 3-way or QBand+Moneyball), with a unified trailing/TP exit engine, NY-time trading windows with auto-flatten, daily profit/loss and trade-count limits (HALT mode), and clean webhook routing using {{strategy.order.alert_message}}.
Highlights
Two entry engines
Multi-Source (3): up to three long/short sources with Single / Dual / Triple logic and optional lookback.
QBand + Moneyball: Gate → Trigger workflow with timing windows, OR/AND trigger modes, per-window caps, optional same-bar fire.
Unified exit engine: Trailing by Bricks or Ticks, plus optional static TP/SL.
Session control (NY time): Evening / Overnight / NY Session windows; auto-flatten at end of any enabled window.
Day controls: Profit/Loss (USD) and Trade-count limits. When hit, strategy HALTS new entries, shows an on-chart label/background.
Alert routing designed for webhooks: Every order sets its alert_message= argument, so you can run alerts with:
Condition: this strategy
Notify on: Order fills only
Message: {{strategy.order.alert_message}}
Default JSONs or Custom payloads: If a Custom field is blank, a sensible default JSON is sent. Fill a field to override.
How to set up alerts (the 15-second version)
Create a TradingView alert with this strategy as Condition.
Notify on: Order fills only.
Message: {{strategy.order.alert_message}} (exactly).
If you want your own payloads, paste them into Inputs → 08) Custom Alert Payloads.
Leave blank → the strategy sends a default JSON.
Fill in → your text is sent as-is.
Note: Anything you type into the alert dialog’s Message box is ignored except the {{strategy.order.alert_message}} token, which forwards the payload supplied by the strategy at order time.
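For readers new to this pattern, here is a minimal Pine Script sketch of the mechanism: whatever string a strategy passes as alert_message on an order is exactly what the {{strategy.order.alert_message}} token expands to when the fill alert fires. The entry logic and JSON fields below are illustrative stand-ins, not J12Matic's actual code:
//@version=5
strategy("alert_message routing demo (sketch)", overlay=true)
// Illustrative payloads; in J12Matic these come from Inputs -> 08) Custom Alert Payloads.
string openJson  = '{"event":"open","side":"long","symbol":"' + syminfo.ticker + '"}'
string closeJson = '{"event":"close","side":"flat","reason":"tp","symbol":"' + syminfo.ticker + '"}'
goLong = strategy.position_size == 0 and ta.crossover(close, ta.ema(close, 21))
if goLong
    // This string is what {{strategy.order.alert_message}} expands to on the entry fill.
    strategy.entry("L", strategy.long, alert_message = openJson)
// The exit payload rides along the same way (profit distance is in ticks).
strategy.exit("TP", from_entry = "L", profit = 100, alert_message = closeJson)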
Publishing notes / best practices
Renko users: Make sure “Renko Brick Size” in Inputs matches your chart’s brick size exactly.
Ticks vs Bricks: Exit distances switch instantly when you toggle Exit Units.
Same-bar flips: If enabled, a new opposite signal will first close the open trade (with its exit payload), then enter the new side.
HALT mode: When day profit/loss limit or trade-count limit triggers, new entries are blocked for the rest of the session day. You’ll see a label and a soft background tint.
Session end flatten: Auto-closes positions at window ends; these exits use the “End of Session Window Exit” payload.
Bar magnifier: Strategy is configured for on-close execution; you can enable Bar Magnifier in Properties if needed.
Default JSONs (used when a Custom field is empty)
Open: {"event":"open","side":"long|short","symbol":""}
Close: {"event":"close","side":"long|short|flat","reason":"tp|sl|flip|session|limit_profit|limit_loss","symbol":""}
You can paste any text/JSON into the Custom fields; it will be forwarded as-is when that event occurs.
Input sections — user guide
01) Entries & Signals
Entry Logic: Choose Multi-Source (3) or QBand + Moneyball (pick one).
Enable Long/Short Signals: Master on/off switches for entering long/short.
Flip on opposite signal: If enabled, a new opposite signal will close the current position first, then open the other side.
Signal Logic (Multi-Source):
Single: any 1 of the 3 sources > 0
Dual: Source1 AND Source2 > 0
Triple (default): 1 AND 2 AND 3 > 0
Long/Short Signal Sources 1–3: Provide up to three series (often indicators). A positive value (> 0) is treated as a “pulse”.
Use Lookback: Keeps a source “true” for N bars after it pulses (helps catch late triggers).
Long/Short Lookback (bars): How many bars to remember that pulse.
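A minimal sketch of this pulse-plus-lookback idea, assuming a single source and reconstructing the logic from the description above (names and details are illustrative, not the published code):
//@version=5
indicator("Pulse lookback demo (sketch)")
src1     = input.source(close, "Long Signal Source 1")
lookback = input.int(3, "Long Lookback (bars)", minval=0)
pulse = src1 > 0  // a positive value is treated as a pulse
// keep the source "true" for `lookback` bars after it last pulsed
held = nz(ta.barssince(pulse), lookback + 1) <= lookback
plot(held ? 1 : 0, "Pulse held")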
01b) QBands + Moneyball (Gate -> Trigger)
Allow same-bar Gate->Trigger: If ON, a trigger can fire on the same bar as the gate pulse.
Trigger must fire within N bars after Gate: Size of the gate window (in bars).
Max signals per window (0 = unlimited): Cap the number of entries allowed while a gate window is open.
Buy/Sell Source 1 – Gate: Gate pulse sources that open the buy/sell window (often a regime/zone, e.g., QBands bull/bear).
Trigger Pulse Mode (Buy/Sell): How to detect a trigger pulse from the trigger sources (Change / Appear / Rise>0 / Fall<0).
Trigger A/B sources + Extend Bars: Primary/secondary triggers plus optional extension to persist their pulse for N bars.
Trigger Mode: Pick S2 only, S3 only, S2 OR S3, or S2 AND S3. AND mode remembers both pulses inside the window before firing.
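The Gate -> Trigger flow can be pictured with the following rough sketch; it assumes one gate source, one trigger source, and same-bar firing enabled, and it omits the per-window caps and AND-mode memory described above, so treat it as illustrative only:
//@version=5
indicator("Gate -> Trigger demo (sketch)")
winBars = input.int(5, "Trigger must fire within N bars after Gate", minval=1)
gateSrc = input.source(close, "Gate source")
trigSrc = input.source(close, "Trigger source")
gatePulse = gateSrc > 0
trigPulse = trigSrc > 0
var int gateBar = na
if gatePulse
    gateBar := bar_index  // a gate pulse (re)opens the timing window
windowOpen = not na(gateBar) and bar_index - gateBar <= winBars
fire = windowOpen and trigPulse  // a trigger only counts inside an open window
bgcolor(windowOpen ? color.new(color.blue, 90) : na)
plot(fire ? 1 : 0, "Fire")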
02) Exit Units (Trailing/TP)
Exit Units: Choose Bricks (Renko) or Ticks. All distances below switch accordingly.
03) Tick-based Trailing / Stops (active when Exit Units = Ticks)
Initial SL (ticks): Starting stop distance from entry.
Start Trailing After (ticks): Start trailing once price moves this far in your favor.
Trailing Distance (ticks): Offset of the trailing stop from peak/trough once trailing begins.
Take Profit (ticks): Optional static TP distance.
Stop Loss (ticks): Optional static SL distance (overrides trailing if enabled).
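As a sketch of how such tick-based trailing maps onto Pine's built-in order functions (the actual strategy may implement its trailing manually; the input names here are mine):
//@version=5
strategy("Tick trailing demo (sketch)", overlay=true)
initialSL  = input.int(40, "Initial SL (ticks)", minval=1)
startAfter = input.int(20, "Start Trailing After (ticks)", minval=1)
trailDist  = input.int(10, "Trailing Distance (ticks)", minval=1)
goLong = strategy.position_size == 0 and ta.crossover(close, ta.ema(close, 50))
if goLong
    strategy.entry("L", strategy.long)
// trail_points arms the trail once the trade is `startAfter` ticks in profit;
// trail_offset then follows the peak at `trailDist` ticks; `loss` is the initial stop.
strategy.exit("X", from_entry = "L", loss = initialSL, trail_points = startAfter, trail_offset = trailDist)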
04) Brick-based Trailing / Stops (active when Exit Units = Bricks)
Renko Brick Size: Must match your chart’s brick size.
Initial SL / Start Trailing After / Trailing Distance (bricks): Same definitions as tick mode, measured in bricks.
Take Profit / Stop Loss (bricks): Optional static distances.
05) TP / SL Switch
Enable Static Take Profit: If ON, closes the trade at the fixed TP distance.
Enable Static Stop Loss (Overrides Trailing): If ON, trailing is disabled and a fixed SL is used.
06) Trading Windows (NY time)
Use Trading Windows: Master toggle for all windows.
Evening / Overnight / NY Session: Define each session in NY time.
Flatten at End of Window: Auto-close any open position when a window ends (sends the Session Exit payload).
07) Day Controls & Limits
Enable Profit Limits / Profit Limit (Dollars): When daily net PnL ≥ limit → auto-flatten and HALT.
Enable Loss Limits / Loss Limit (Dollars): When daily net PnL ≤ −limit → auto-flatten and HALT.
Enable Trade Count Limits / Number of Trades Allowed: After N entries, HALT new entries (does not auto-flatten).
On-chart HUD: A label and soft background tint appear when HALTED; a compact status table shows Day PnL, trade count, and mode.
08) Custom Alert Payloads (used as strategy.order.alert_message)
Long/Short Entry: Payload sent on entries (if blank, a default open JSON is sent).
Regular Long/Short Exit: Payload sent on closes from SL/TP/flip (if blank, a default close JSON is sent).
End of Session Window Exit: Payload sent when any enabled window ends and positions are flattened.
Profit/Loss/Trade Limit Close: Payload sent when daily profit/loss limit causes auto-flatten.
Tip: Any tokens you include here are forwarded “as is”. If your downstream expects variables, do the substitution on the receiver side.
Known limitations
No bracket orders from Pine: This strategy doesn’t create OCO/attached brackets on the broker; it simulates exits with strategy logic and forwards your payloads for external automation.
alert_message is per order only: Alerts fire on order events. General status pings aren’t sent unless you wire a separate indicator/alert.
Renko specifics: Backtests on synthetic Renko can differ from live execution. Always forward-test on your instrument and settings.
Quick checklist before you publish
✅ Brick size in Inputs matches your Renko chart
✅ Exit Units set to Bricks or Ticks as you intend
✅ Day limits/Windows toggled as you want
✅ Custom payloads filled (or leave blank to use defaults)
✅ Your alert uses Order fills only + {{strategy.order.alert_message}}
Goertzel Browser [Loxx]As the financial markets become increasingly complex and data-driven, traders and analysts must leverage powerful tools to gain insights and make informed decisions. One such tool is the Goertzel Browser indicator, a sophisticated technical analysis indicator that detects cyclical patterns in financial data, helping traders make better predictions and optimize their trading strategies. With its unique combination of mathematical algorithms and advanced charting capabilities, this indicator has the potential to revolutionize the way we approach financial modeling and trading.
█ Brief Overview of the Goertzel Browser
The Goertzel Browser is a sophisticated technical analysis tool that utilizes the Goertzel algorithm to analyze and visualize cyclical components within a financial time series. By identifying these cycles and their characteristics, the indicator aims to provide valuable insights into the market's underlying price movements, which could potentially be used for making informed trading decisions.
The primary purpose of this indicator is to:
1. Detect and analyze the dominant cycles present in the price data.
2. Reconstruct and visualize the composite wave based on the detected cycles.
3. Project the composite wave into the future, providing a potential roadmap for upcoming price movements.
To achieve this, the indicator performs several tasks:
1. Detrending the price data: The indicator preprocesses the price data using various detrending techniques, such as Hodrick-Prescott filters, zero-lag moving averages, and linear regression, to remove the underlying trend and focus on the cyclical components.
2. Applying the Goertzel algorithm: The indicator applies the Goertzel algorithm to the detrended price data, identifying the dominant cycles and their characteristics, such as amplitude, phase, and cycle strength.
3. Constructing the composite wave: The indicator reconstructs the composite wave by combining the detected cycles, either by using a user-defined list of cycles or by selecting the top N cycles based on their amplitude or cycle strength.
4. Visualizing the composite wave: The indicator plots the composite wave, using solid lines for the past and dotted lines for the future projections. The color of the lines indicates whether the wave is increasing or decreasing.
5. Displaying cycle information: The indicator provides a table that displays detailed information about the detected cycles, including their rank, period, Bartels test results, amplitude, and phase.
This indicator is a powerful tool that employs the Goertzel algorithm to analyze and visualize the cyclical components within a financial time series. By providing insights into the underlying price movements and their potential future trajectory, the indicator aims to assist traders in making more informed decisions.
█ What is the Goertzel Algorithm?
The Goertzel algorithm, named after Gerald Goertzel, is a digital signal processing technique that is used to efficiently compute individual terms of the Discrete Fourier Transform (DFT). It was first introduced in 1958, and since then, it has found various applications in the fields of engineering, mathematics, and physics.
The Goertzel algorithm is primarily used to detect specific frequency components within a digital signal, making it particularly useful in applications where only a few frequency components are of interest. The algorithm is computationally efficient, as it requires fewer calculations than the Fast Fourier Transform (FFT) when detecting a small number of frequency components. This efficiency makes the Goertzel algorithm a popular choice in applications such as:
1. Telecommunications: The Goertzel algorithm is used for decoding Dual-Tone Multi-Frequency (DTMF) signals, which are the tones generated when pressing buttons on a telephone keypad. By identifying specific frequency components, the algorithm can accurately determine which button has been pressed.
2. Audio processing: The algorithm can be used to detect specific pitches or harmonics in an audio signal, making it useful in applications like pitch detection and tuning musical instruments.
3. Vibration analysis: In the field of mechanical engineering, the Goertzel algorithm can be applied to analyze vibrations in rotating machinery, helping to identify faulty components or signs of wear.
4. Power system analysis: The algorithm can be used to measure harmonic content in power systems, allowing engineers to assess power quality and detect potential issues.
The Goertzel algorithm is used in these applications because it offers several advantages over other methods, such as the FFT:
1. Computational efficiency: The Goertzel algorithm requires fewer calculations when detecting a small number of frequency components, making it more computationally efficient than the FFT in these cases.
2. Real-time analysis: The algorithm can be implemented in a streaming fashion, allowing for real-time analysis of signals, which is crucial in applications like telecommunications and audio processing.
3. Memory efficiency: The Goertzel algorithm requires less memory than the FFT, as it only computes the frequency components of interest.
4. Precision: The algorithm is less susceptible to numerical errors compared to the FFT, ensuring more accurate results in applications where precision is essential.
The Goertzel algorithm is an efficient digital signal processing technique that is primarily used to detect specific frequency components within a signal. Its computational efficiency, real-time capabilities, and precision make it an attractive choice for various applications, including telecommunications, audio processing, vibration analysis, and power system analysis. The algorithm has been widely adopted since its introduction in 1958 and continues to be an essential tool in the fields of engineering, mathematics, and physics.
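To make the mechanics concrete, here is a self-contained Pine Script sketch of the classic Goertzel recursion probing one candidate period over a fixed window of closes. The indicator itself sweeps every period up to MaxPer and adds detrending, so treat this as a simplified illustration rather than the published code:
//@version=5
indicator("Goertzel single-cycle probe (sketch)")
per   = input.int(20, "Candidate cycle period (bars)", minval=2)
nSamp = input.int(100, "Sample size (bars)", minval=10)
// Standard Goertzel recursion: one multiply-add per sample for a single frequency bin.
goertzel(int period, int n) =>
    float w     = 2.0 * math.pi / period
    float coeff = 2.0 * math.cos(w)
    float s1 = 0.0
    float s2 = 0.0
    for i = 0 to n - 1
        float x  = close[n - 1 - i]  // feed samples oldest-first
        float s0 = x + coeff * s1 - s2
        s2 := s1
        s1 := s0
    float re = s1 - s2 * math.cos(w)  // real part of the DFT bin
    float im = s2 * math.sin(w)       // imaginary part of the DFT bin
    [re * re + im * im, math.atan(im / re)]  // squared amplitude and phase
[power, phase] = goertzel(per, nSamp)
plot(power, "Squared amplitude at the chosen period")
plot(phase, "Phase (radians)", display = display.none)
Because each probed period costs only one multiply-add per sample, sweeping a handful of candidate periods this way is cheaper than computing a full FFT, which is exactly the efficiency argument made above.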
█ Goertzel Algorithm in Quantitative Finance: In-Depth Analysis and Applications
The Goertzel algorithm, initially designed for signal processing in telecommunications, has gained significant traction in the financial industry due to its efficient frequency detection capabilities. In quantitative finance, the Goertzel algorithm has been utilized for uncovering hidden market cycles, developing data-driven trading strategies, and optimizing risk management. This section delves deeper into the applications of the Goertzel algorithm in finance, particularly within the context of quantitative trading and analysis.
Unveiling Hidden Market Cycles:
Market cycles are prevalent in financial markets and arise from various factors, such as economic conditions, investor psychology, and market participant behavior. The Goertzel algorithm's ability to detect and isolate specific frequencies in price data helps traders and analysts identify hidden market cycles that may otherwise go unnoticed. By examining the amplitude, phase, and periodicity of each cycle, traders can better understand the underlying market structure and dynamics, enabling them to develop more informed and effective trading strategies.
Developing Quantitative Trading Strategies:
The Goertzel algorithm's versatility allows traders to incorporate its insights into a wide range of trading strategies. By identifying the dominant market cycles in a financial instrument's price data, traders can create data-driven strategies that capitalize on the cyclical nature of markets.
For instance, a trader may develop a mean-reversion strategy that takes advantage of the identified cycles. By establishing positions when the price deviates from the predicted cycle, the trader can profit from the subsequent reversion to the cycle's mean. Similarly, a momentum-based strategy could be designed to exploit the persistence of a dominant cycle by entering positions that align with the cycle's direction.
Enhancing Risk Management:
The Goertzel algorithm plays a vital role in risk management for quantitative strategies. By analyzing the cyclical components of a financial instrument's price data, traders can gain insights into the potential risks associated with their trading strategies.
By monitoring the amplitude and phase of dominant cycles, a trader can detect changes in market dynamics that may pose risks to their positions. For example, a sudden increase in amplitude may indicate heightened volatility, prompting the trader to adjust position sizing or employ hedging techniques to protect their portfolio. Additionally, changes in phase alignment could signal a potential shift in market sentiment, necessitating adjustments to the trading strategy.
Expanding Quantitative Toolkits:
Traders can augment the Goertzel algorithm's insights by combining it with other quantitative techniques, creating a more comprehensive and sophisticated analysis framework. For example, machine learning algorithms, such as neural networks or support vector machines, could be trained on features extracted from the Goertzel algorithm to predict future price movements more accurately.
Furthermore, the Goertzel algorithm can be integrated with other technical analysis tools, such as moving averages or oscillators, to enhance their effectiveness. By applying these tools to the identified cycles, traders can generate more robust and reliable trading signals.
The Goertzel algorithm offers invaluable benefits to quantitative finance practitioners by uncovering hidden market cycles, aiding in the development of data-driven trading strategies, and improving risk management. By leveraging the insights provided by the Goertzel algorithm and integrating it with other quantitative techniques, traders can gain a deeper understanding of market dynamics and devise more effective trading strategies.
█ Indicator Inputs
src: This is the source data for the analysis, typically the closing price of the financial instrument.
detrendornot: This input determines the method used for detrending the source data. Detrending is the process of removing the underlying trend from the data to focus on the cyclical components.
The available options are:
hpsmthdt: Detrend using Hodrick-Prescott filter centered moving average.
zlagsmthdt: Detrend using zero-lag moving average centered moving average.
logZlagRegression: Detrend using logarithmic zero-lag linear regression.
hpsmth: Detrend using Hodrick-Prescott filter.
zlagsmth: Detrend using zero-lag moving average.
DT_HPper1 and DT_HPper2: These inputs define the period range for the Hodrick-Prescott filter centered moving average when detrendornot is set to hpsmthdt.
DT_ZLper1 and DT_ZLper2: These inputs define the period range for the zero-lag moving average centered moving average when detrendornot is set to zlagsmthdt.
DT_RegZLsmoothPer: This input defines the period for the zero-lag moving average used in logarithmic zero-lag linear regression when detrendornot is set to logZlagRegression.
HPsmoothPer: This input defines the period for the Hodrick-Prescott filter when detrendornot is set to hpsmth.
ZLMAsmoothPer: This input defines the period for the zero-lag moving average when detrendornot is set to zlagsmth.
MaxPer: This input sets the maximum period for the Goertzel algorithm to search for cycles.
squaredAmp: This boolean input determines whether the amplitude should be squared in the Goertzel algorithm.
useAddition: This boolean input determines whether the Goertzel algorithm should use addition for combining the cycles.
useCosine: This boolean input determines whether the Goertzel algorithm should use cosine waves instead of sine waves.
UseCycleStrength: This boolean input determines whether the Goertzel algorithm should compute the cycle strength, which is a normalized measure of the cycle's amplitude.
WindowSizePast and WindowSizeFuture: These inputs define the window size for past and future projections of the composite wave.
FilterBartels: This boolean input determines whether the Bartels test should be applied to filter out non-significant cycles.
BartNoCycles: This input sets the number of cycles to be used in the Bartels test.
BartSmoothPer: This input sets the period for the moving average used in the Bartels test.
BartSigLimit: This input sets the significance limit for the Bartels test, below which cycles are considered insignificant.
SortBartels: This boolean input determines whether the cycles should be sorted by their Bartels test results.
UseCycleList: This boolean input determines whether a user-defined list of cycles should be used for constructing the composite wave. If set to false, the top N cycles will be used.
Cycle1, Cycle2, Cycle3, Cycle4, and Cycle5: These inputs define the user-defined list of cycles when 'UseCycleList' is set to true. If using a user-defined list, each of these inputs represents the period of a specific cycle to include in the composite wave.
StartAtCycle: This input determines the starting index for selecting the top N cycles when UseCycleList is set to false. This allows you to skip a certain number of cycles from the top before selecting the desired number of cycles.
UseTopCycles: This input sets the number of top cycles to use for constructing the composite wave when UseCycleList is set to false. The cycles are ranked based on their amplitudes or cycle strengths, depending on the UseCycleStrength input.
SubtractNoise: This boolean input determines whether to subtract the noise (remaining cycles) from the composite wave. If set to true, the composite wave will only include the top N cycles specified by UseTopCycles.
█ Exploring Auxiliary Functions
The following functions demonstrate advanced techniques for analyzing financial markets, including zero-lag moving averages, Bartels probability, detrending, and Hodrick-Prescott filtering. This section examines each function in detail, explaining its purpose and methodology, how it contributes to the overall performance and effectiveness of the indicator, and how the functions work together to create a powerful analytical tool.
Zero-Lag Moving Average:
The zero-lag moving average function is designed to minimize the lag typically associated with moving averages. This is achieved through a two-step weighted linear regression process that emphasizes more recent data points. The function calculates a linearly weighted moving average (LWMA) on the input data and then applies another LWMA on the result. By doing this, the function creates a moving average that closely follows the price action, reducing the lag and improving the responsiveness of the indicator.
The zero-lag moving average function is used in the indicator to provide a responsive, low-lag smoothing of the input data. This function helps reduce the noise and fluctuations in the data, making it easier to identify and analyze underlying trends and patterns. By minimizing the lag associated with traditional moving averages, this function allows the indicator to react more quickly to changes in market conditions, providing timely signals and improving the overall effectiveness of the indicator.
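One common zero-lag construction consistent with this description is to run two LWMA passes and add back the difference between them to cancel the lag. The exact formula in the indicator may differ, so the sketch below is an assumption:
//@version=5
indicator("Zero-lag LWMA demo (sketch)", overlay=true)
len = input.int(20, "Length", minval=1)
lwma1 = ta.wma(close, len)        // first weighted pass
lwma2 = ta.wma(lwma1, len)        // second weighted pass (smoother, but laggier)
zlma  = lwma1 + (lwma1 - lwma2)   // add back the lag estimate to re-center the average
plot(zlma, "Zero-lag MA", color = color.teal)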
Bartels Probability:
The Bartels probability function calculates the probability of a given cycle being significant in a time series. It uses a mathematical test called the Bartels test to assess the significance of cycles detected in the data. The function calculates coefficients for each detected cycle and computes an average amplitude and an expected amplitude. By comparing these values, the Bartels probability is derived, indicating the likelihood of a cycle's significance. This information can help in identifying and analyzing dominant cycles in financial markets.
The Bartels probability function is incorporated into the indicator to assess the significance of detected cycles in the input data. By calculating the Bartels probability for each cycle, the indicator can prioritize the most significant cycles and focus on the market dynamics that are most relevant to the current trading environment. This function enhances the indicator's ability to identify dominant market cycles, improving its predictive power and aiding in the development of effective trading strategies.
Detrend Logarithmic Zero-Lag Regression:
The detrend logarithmic zero-lag regression function is used for detrending data while minimizing lag. It combines a zero-lag moving average with a linear regression detrending method. The function first calculates the zero-lag moving average of the logarithm of input data and then applies a linear regression to remove the trend. By detrending the data, the function isolates the cyclical components, making it easier to analyze and interpret the underlying market dynamics.
The detrend logarithmic zero-lag regression function is used in the indicator to isolate the cyclical components of the input data. By detrending the data, the function enables the indicator to focus on the cyclical movements in the market, making it easier to analyze and interpret market dynamics. This function is essential for identifying cyclical patterns and understanding the interactions between different market cycles, which can inform trading decisions and enhance overall market understanding.
Bartels Cycle Significance Test:
The Bartels cycle significance test is a function that combines the Bartels probability function and the detrend logarithmic zero-lag regression function to assess the significance of detected cycles. The function calculates the Bartels probability for each cycle and stores the results in an array. By analyzing the probability values, traders and analysts can identify the most significant cycles in the data, which can be used to develop trading strategies and improve market understanding.
The Bartels cycle significance test function is integrated into the indicator to provide a comprehensive analysis of the significance of detected cycles. By combining the Bartels probability function and the detrend logarithmic zero-lag regression function, this test evaluates the significance of each cycle and stores the results in an array. The indicator can then use this information to prioritize the most significant cycles and focus on the most relevant market dynamics. This function enhances the indicator's ability to identify and analyze dominant market cycles, providing valuable insights for trading and market analysis.
Hodrick-Prescott Filter:
The Hodrick-Prescott filter is a popular technique used to separate the trend and cyclical components of a time series. The function applies a smoothing parameter to the input data and calculates a smoothed series using a two-sided filter. This smoothed series represents the trend component, which can be subtracted from the original data to obtain the cyclical component. The Hodrick-Prescott filter is commonly used in economics and finance to analyze economic data and financial market trends.
The Hodrick-Prescott filter is incorporated into the indicator to separate the trend and cyclical components of the input data. By applying the filter to the data, the indicator can isolate the trend component, which can be used to analyze long-term market trends and inform trading decisions. Additionally, the cyclical component can be used to identify shorter-term market dynamics and provide insights into potential trading opportunities. The inclusion of the Hodrick-Prescott filter adds another layer of analysis to the indicator, making it more versatile and comprehensive.
Detrending Options: Detrend Centered Moving Average:
The detrend centered moving average function provides different detrending methods, including the Hodrick-Prescott filter and the zero-lag moving average, based on the selected detrending method. The function calculates two sets of smoothed values using the chosen method and subtracts one set from the other to obtain a detrended series. By offering multiple detrending options, this function allows traders and analysts to select the most appropriate method for their specific needs and preferences.
The detrend centered moving average function is integrated into the indicator to provide users with multiple detrending options, including the Hodrick-Prescott filter and the zero-lag moving average. By offering multiple detrending methods, the indicator allows users to customize the analysis to their specific needs and preferences, enhancing the indicator's overall utility and adaptability. This function ensures that the indicator can cater to a wide range of trading styles and objectives, making it a valuable tool for a diverse group of market participants.
The auxiliary functions discussed in this section demonstrate the power and versatility of mathematical techniques in analyzing financial markets. By understanding and implementing these functions, traders and analysts can gain valuable insights into market dynamics, improve their trading strategies, and make more informed decisions. Together, the zero-lag moving averages, Bartels probability, detrending methods, and the Hodrick-Prescott filter provide a comprehensive toolkit for analyzing and interpreting financial data, and their integration into a single indicator creates a powerful, versatile analytical tool for studying financial markets.
█ In-Depth Analysis of the Goertzel Browser Code
The Goertzel Browser code is an implementation of the Goertzel Algorithm, an efficient technique to perform spectral analysis on a signal. The code is designed to detect and analyze dominant cycles within a given financial market data set. This section will provide an extremely detailed explanation of the code, its structure, functions, and intended purpose.
Function signature and input parameters:
The Goertzel Browser function accepts numerous input parameters for customization, including source data (src), the current bar (forBar), sample size (samplesize), period (per), squared amplitude flag (squaredAmp), addition flag (useAddition), cosine flag (useCosine), cycle strength flag (UseCycleStrength), past and future window sizes (WindowSizePast, WindowSizeFuture), Bartels filter flag (FilterBartels), Bartels-related parameters (BartNoCycles, BartSmoothPer, BartSigLimit), sorting flag (SortBartels), and output buffers (goeWorkPast, goeWorkFuture, cyclebuffer, amplitudebuffer, phasebuffer, cycleBartelsBuffer).
Initializing variables and arrays:
The code initializes several float arrays (goeWork1, goeWork2, goeWork3, goeWork4), each with a length of twice the period (2 * per). These arrays store intermediate results during the execution of the algorithm.
Preprocessing input data:
The input data (src) undergoes preprocessing to remove linear trends. This step enhances the algorithm's ability to focus on cyclical components in the data. The linear trend is calculated by finding the slope between the first and last values of the input data within the sample.
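In other words, the preprocessing subtracts the straight line joining the oldest and newest closes in the sample. A sketch of that step, checking a single sample (my paraphrase of the described logic, not the published code):
//@version=5
indicator("Endpoint detrend demo (sketch)")
n = input.int(50, "Sample size (bars)", minval=3)
// Slope of the line through the first (oldest) and last (newest) closes in the window.
slope = (close - close[n - 1]) / (n - 1)
// Residual of the middle sample against that line; the full algorithm
// detrends every sample in the window this way before running Goertzel.
mid = n / 2  // integer division
resid = close[n - 1 - mid] - (close[n - 1] + slope * mid)
plot(resid, "Detrended mid-sample")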
Iterative calculation of Goertzel coefficients:
The core of the Goertzel Browser algorithm lies in the iterative calculation of Goertzel coefficients for each frequency bin. These coefficients represent the spectral content of the input data at different frequencies. The code iterates through the range of frequencies, calculating the Goertzel coefficients using a nested loop structure.
Cycle strength computation:
The code calculates the cycle strength based on the Goertzel coefficients. This is an optional step, controlled by the UseCycleStrength flag. The cycle strength provides information on the relative influence of each cycle on the data per bar, considering both amplitude and cycle length. The algorithm computes the cycle strength either by squaring the amplitude (controlled by squaredAmp flag) or using the actual amplitude values.
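One plausible reading of "amplitude normalized by cycle length" is sketched below; the published scaling may differ, so this is purely illustrative:
//@version=5
indicator("Cycle strength demo (sketch)")
// Strength = (optionally squared) amplitude spread over the cycle's period,
// i.e., the cycle's contribution per bar (an assumed form, not the published one).
cycleStrength(float amp, float period, bool squared) =>
    (squared ? amp * amp : amp) / period
plot(cycleStrength(1.5, 20, true), "Example strength")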
Phase calculation:
The Goertzel Browser code computes the phase of each cycle, which represents the position of the cycle within the input data. The phase is calculated using the arctangent function (math.atan) based on the ratio of the imaginary and real components of the Goertzel coefficients.
Peak detection and cycle extraction:
The algorithm performs peak detection on the computed amplitudes or cycle strengths to identify dominant cycles. It stores the detected cycles in the cyclebuffer array, along with their corresponding amplitudes and phases in the amplitudebuffer and phasebuffer arrays, respectively.
Sorting cycles by amplitude or cycle strength:
The code sorts the detected cycles based on their amplitude or cycle strength in descending order. This allows the algorithm to prioritize cycles with the most significant impact on the input data.
Bartels cycle significance test:
If the FilterBartels flag is set, the code performs a Bartels cycle significance test on the detected cycles. This test determines the statistical significance of each cycle and filters out the insignificant cycles. The significant cycles are stored in the cycleBartelsBuffer array. If the SortBartels flag is set, the code sorts the significant cycles based on their Bartels significance values.
Waveform calculation:
The Goertzel Browser code calculates the waveform of the significant cycles for both past and future time windows. The past and future windows are defined by the WindowSizePast and WindowSizeFuture parameters, respectively. The algorithm uses either cosine or sine functions (controlled by the useCosine flag) to calculate the waveforms for each cycle. The useAddition flag determines whether the waveforms should be added or subtracted.
Storing waveforms in matrices:
The calculated waveforms for each cycle are stored in two matrices - goeWorkPast and goeWorkFuture. These matrices hold the waveforms for the past and future time windows, respectively. Each row in the matrices represents a time window position, and each column corresponds to a cycle.
Returning the number of cycles:
The Goertzel Browser function returns the total number of detected cycles (number_of_cycles) after processing the input data. This information can be used to further analyze the results or to visualize the detected cycles.
The Goertzel Browser code is a comprehensive implementation of the Goertzel Algorithm, specifically designed for detecting and analyzing dominant cycles within financial market data. The code offers a high level of customization, allowing users to fine-tune the algorithm based on their specific needs. The Goertzel Browser's combination of preprocessing, iterative calculations, cycle extraction, sorting, significance testing, and waveform calculation makes it a powerful tool for understanding cyclical components in financial data.
█ Generating and Visualizing Composite Waveform
The indicator calculates and visualizes the composite waveform for both past and future time windows based on the detected cycles. Here's a detailed explanation of this process:
Updating WindowSizePast and WindowSizeFuture:
The WindowSizePast and WindowSizeFuture are updated to ensure they are at least twice the MaxPer (maximum period).
Initializing matrices and arrays:
Two matrices, goeWorkPast and goeWorkFuture, are initialized to store the Goertzel results for past and future time windows. Multiple arrays are also initialized to store cycle, amplitude, phase, and Bartels information.
Preparing the source data (srcVal) array:
The source data is copied into an array, srcVal, and detrended using one of the selected methods (hpsmthdt, zlagsmthdt, logZlagRegression, hpsmth, or zlagsmth).
Goertzel function call:
The Goertzel function is called to analyze the detrended source data and extract cycle information. The output, number_of_cycles, contains the number of detected cycles.
Initializing arrays for past and future waveforms:
Three arrays, epgoertzel, goertzel, and goertzelFuture, are initialized to store the endpoint Goertzel, non-endpoint Goertzel, and future Goertzel projections, respectively.
Calculating composite waveform for past bars (goertzel array):
The past composite waveform is calculated by summing the selected cycles (either from the user-defined cycle list or the top cycles) and optionally subtracting the noise component.
Calculating composite waveform for future bars (goertzelFuture array):
The future composite waveform is calculated in a similar way as the past composite waveform.
Drawing past composite waveform (pvlines):
The past composite waveform is drawn on the chart using solid lines. The color of the lines is determined by the direction of the waveform (green for upward, red for downward).
Drawing future composite waveform (fvlines):
The future composite waveform is drawn on the chart using dotted lines. The color of the lines is determined by the direction of the waveform (fuchsia for upward, yellow for downward).
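Putting the calculation and drawing steps together: each composite value is simply a sum of sinusoids, one per selected cycle, colored by its bar-to-bar direction. A hypothetical sketch with hard-coded stand-ins for the indicator's cycle buffers:
//@version=5
indicator("Composite wave demo (sketch)")
// Hypothetical stand-ins for the cycle buffers (period, amplitude, phase).
var float[] periods = array.from(20.0, 45.0, 80.0)
var float[] amps    = array.from(1.0, 0.6, 0.4)
var float[] phases  = array.from(0.0, 1.2, 2.5)
float comp = 0.0
for k = 0 to array.size(periods) - 1
    comp += array.get(amps, k) * math.cos(2.0 * math.pi * bar_index / array.get(periods, k) + array.get(phases, k))
// Green when the wave is rising, red when falling, mirroring the past-wave coloring above.
plot(comp, "Composite wave", color = comp > comp[1] ? color.green : color.red)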
Displaying cycle information in a table (table3):
A table is created to display the cycle information, including the rank, period, Bartel value, amplitude (or cycle strength), and phase of each detected cycle.
Filling the table with cycle information:
The indicator iterates through the detected cycles and retrieves the relevant information (period, amplitude, phase, and Bartel value) from the corresponding arrays. It then fills the table with this information, displaying the values up to six decimal places.
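A minimal sketch of such a table, with a hypothetical layout and str.tostring formatting for six decimal places (the published table has more columns and one row per detected cycle):
//@version=5
indicator("Cycle table demo (sketch)", overlay=true)
var table t = table.new(position.top_right, 3, 2, border_width = 1)
if barstate.islast
    table.cell(t, 0, 0, "Rank")
    table.cell(t, 1, 0, "Period")
    table.cell(t, 2, 0, "Amplitude")
    table.cell(t, 0, 1, "1")
    table.cell(t, 1, 1, str.tostring(20))
    table.cell(t, 2, 1, str.tostring(0.1234567, "#.######"))  // -> "0.123457"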
To summarize, this indicator generates a composite waveform based on the detected cycles in the financial data. It calculates the composite waveforms for both past and future time windows and visualizes them on the chart using colored lines. Additionally, it displays detailed cycle information in a table, including the rank, period, Bartel value, amplitude (or cycle strength), and phase of each detected cycle.
█ Enhancing the Goertzel Algorithm-Based Script for Financial Modeling and Trading
The Goertzel algorithm-based script for detecting dominant cycles in financial data is a powerful tool for financial modeling and trading. It provides valuable insights into the past behavior of these cycles and potential future impact. However, as with any algorithm, there is always room for improvement. This section discusses potential enhancements to the existing script to make it even more robust and versatile for financial modeling, general trading, advanced trading, and high-frequency finance trading.
Enhancements for Financial Modeling
Data preprocessing: One way to improve the script's performance for financial modeling is to introduce more advanced data preprocessing techniques. This could include removing outliers, handling missing data, and normalizing the data to ensure consistent and accurate results.
Additional detrending and smoothing methods: Incorporating more sophisticated detrending and smoothing techniques, such as wavelet transform or empirical mode decomposition, can help improve the script's ability to accurately identify cycles and trends in the data.
Machine learning integration: Integrating machine learning techniques, such as artificial neural networks or support vector machines, can help enhance the script's predictive capabilities, leading to more accurate financial models.
Enhancements for General and Advanced Trading
Customizable indicator integration: Allowing users to integrate their own technical indicators can help improve the script's effectiveness for both general and advanced trading. By enabling the combination of the dominant cycle information with other technical analysis tools, traders can develop more comprehensive trading strategies.
Risk management and position sizing: Incorporating risk management and position sizing functionality into the script can help traders better manage their trades and control potential losses. This can be achieved by calculating the optimal position size based on the user's risk tolerance and account size.
Multi-timeframe analysis: Enhancing the script to perform multi-timeframe analysis can provide traders with a more holistic view of market trends and cycles. By identifying dominant cycles on different timeframes, traders can gain insights into the potential confluence of cycles and make better-informed trading decisions.
Enhancements for High-Frequency Finance Trading
Algorithm optimization: To ensure the script's suitability for high-frequency finance trading, optimizing the algorithm for faster execution is crucial. This can be achieved by employing efficient data structures and refining the calculation methods to minimize computational complexity.
Real-time data streaming: Integrating real-time data streaming capabilities into the script can help high-frequency traders react to market changes more quickly. By continuously updating the cycle information based on real-time market data, traders can adapt their strategies accordingly and capitalize on short-term market fluctuations.
Order execution and trade management: To fully leverage the script's capabilities for high-frequency trading, implementing functionality for automated order execution and trade management is essential. This can include features such as stop-loss and take-profit orders, trailing stops, and automated trade exit strategies.
While the existing Goertzel algorithm-based script is a valuable tool for detecting dominant cycles in financial data, there are several potential enhancements that can make it even more powerful for financial modeling, general trading, advanced trading, and high-frequency finance trading. By incorporating these improvements, the script can become a more versatile and effective tool for traders and financial analysts alike.
█ Understanding the Limitations of the Goertzel Algorithm
While the Goertzel algorithm-based script for detecting dominant cycles in financial data provides valuable insights, it is important to be aware of its limitations and drawbacks. Some of the key drawbacks of this indicator are:
Lagging nature:
As with many other technical indicators, the Goertzel algorithm-based script can suffer from lagging effects, meaning that it may not immediately react to real-time market changes. This lag can lead to late entries and exits, potentially resulting in reduced profitability or increased losses.
Parameter sensitivity:
The performance of the script can be sensitive to the chosen parameters, such as the detrending methods, smoothing techniques, and cycle detection settings. Improper parameter selection may lead to inaccurate cycle detection or increased false signals, which can negatively impact trading performance.
Complexity:
The Goertzel algorithm itself is relatively complex, making it difficult for novice traders or those unfamiliar with the concept of cycle analysis to fully understand and effectively utilize the script. This complexity can also make it challenging to optimize the script for specific trading styles or market conditions.
Overfitting risk:
As with any data-driven approach, there is a risk of overfitting when using the Goertzel algorithm-based script. Overfitting occurs when a model becomes too specific to the historical data it was trained on, leading to poor performance on new, unseen data. This can result in misleading signals and reduced trading performance.
No guarantee of future performance: While the script can provide insights into past cycles and potential future trends, it is important to remember that past performance does not guarantee future results. Market conditions can change, and relying solely on the script's predictions without considering other factors may lead to poor trading decisions.
Limited applicability: The Goertzel algorithm-based script may not be suitable for all markets, trading styles, or timeframes. Its effectiveness in detecting cycles may be limited in certain market conditions, such as during periods of extreme volatility or low liquidity.
While the Goertzel algorithm-based script offers valuable insights into dominant cycles in financial data, it is essential to consider its drawbacks and limitations when incorporating it into a trading strategy. Traders should always use the script in conjunction with other technical and fundamental analysis tools, as well as proper risk management, to make well-informed trading decisions.
█ Interpreting Results
The Goertzel Browser indicator can be interpreted by analyzing the plotted lines and the table presented alongside them. The indicator plots two lines: past and future composite waves. The past composite wave represents the composite wave of the past price data, and the future composite wave represents the projected composite wave for the next period.
The past composite wave line displays a solid line, with green indicating a bullish trend and red indicating a bearish trend. On the other hand, the future composite wave line is a dotted line with fuchsia indicating a bullish trend and yellow indicating a bearish trend.
The table presented alongside the indicator shows the top cycles with their corresponding rank, period, Bartels, amplitude or cycle strength, and phase. The amplitude is a measure of the strength of the cycle, while the phase is the position of the cycle within the data series.
Interpreting the Goertzel Browser indicator involves identifying the trend of the past and future composite wave lines and matching them with the corresponding bullish or bearish color. Additionally, traders can identify the top cycles with the highest amplitude or cycle strength and utilize them in conjunction with other technical indicators and fundamental analysis for trading decisions.
This indicator is considered a repainting indicator because the value of the indicator is calculated based on the past price data. As new price data becomes available, the indicator's value is recalculated, potentially causing the indicator's past values to change. This can create a false impression of the indicator's performance, as it may appear to have provided a profitable trading signal in the past when, in fact, that signal did not exist at the time.
The Goertzel indicator is also non-endpointed, meaning that it is not calculated up to the current bar or candle. Instead, it uses a fixed amount of historical data to calculate its values, which can make it difficult to use for real-time trading decisions. For example, if the indicator uses 100 bars of historical data to make its calculations, it cannot provide a signal until the current bar has closed and become part of the historical data. This can result in missed trading opportunities or delayed signals.
█ Conclusion
The Goertzel Browser indicator is a powerful tool for identifying and analyzing cyclical patterns in financial markets. Its ability to detect multiple cycles of varying frequencies and strengths makes it a valuable addition to any trader's technical analysis toolkit. However, it is important to keep in mind that the Goertzel Browser indicator should be used in conjunction with other technical analysis tools and fundamental analysis to achieve the best results. With continued refinement and development, the Goertzel Browser indicator has the potential to become a highly effective tool for financial modeling, general trading, advanced trading, and high-frequency finance trading. Its accuracy and versatility make it a promising candidate for further research and development.
█ Footnotes
What is the Bartels Test for Cycle Significance?
The Bartels Cycle Significance Test is a statistical method that determines whether the peaks and troughs of a time series are statistically significant. The test is named after its inventor, the geophysicist Julius Bartels, who developed it in the 1930s.
The Bartels test is designed to analyze the cyclical components of a time series, which can help traders and analysts identify trends and cycles in financial markets. The test calculates a Bartels statistic, which measures the degree of non-randomness or autocorrelation in the time series.
The Bartels statistic is calculated by first splitting the time series into two halves and calculating the range of the peaks and troughs in each half. The test then compares these ranges using a t-test, which measures the significance of the difference between the two ranges.
If the Bartels statistic is greater than a critical value, it indicates that the peaks and troughs in the time series are non-random and that there is a significant cyclical component to the data. Conversely, if the Bartels statistic is less than the critical value, it suggests that the peaks and troughs are random and that there is no significant cyclical component.
The Bartels Cycle Significance Test is particularly useful in financial analysis because it can help traders and analysts identify significant cycles in asset prices, which can in turn inform investment decisions. However, it is important to note that the test is not perfect and can produce false signals in certain situations, particularly in noisy or volatile markets. Therefore, it is always recommended to use the test in conjunction with other technical and fundamental indicators to confirm trends and cycles.
Deep-dive into the Hodrick-Prescott Filter
The Hodrick-Prescott (HP) filter is a statistical tool used in economics and finance to separate a time series into two components: a trend component and a cyclical component. It is a powerful tool for identifying long-term trends in economic and financial data and is widely used by economists, central banks, and financial institutions around the world.
The HP filter was introduced by economists Robert Hodrick and Edward Prescott in a 1981 working paper (formally published in 1997). It is a simple filter with a single smoothing parameter that separates a time series into a trend component and a cyclical component. The trend component represents the long-term behavior of the data, while the cyclical component captures the shorter-term fluctuations around the trend.
The HP filter works by choosing the trend values τ_t to minimize the following objective function:
Σ_t (y_t − τ_t)² + λ · Σ_t [(τ_(t+1) − τ_t) − (τ_t − τ_(t−1))]²
Where:
• y_t is the observed data and τ_t is the trend, so the first term measures the deviation of the data from the trend.
• The second term penalizes changes in the trend's slope (squared second differences), measuring the smoothness of the trend.
• λ is a smoothing parameter that determines the degree of smoothness of the trend.
The smoothing parameter λ is typically set to a value between 100 and 1600, depending on the frequency of the data. Higher values of λ lead to a smoother trend, while lower values lead to a more volatile trend.
The HP filter has several advantages over other smoothing techniques. It is a non-parametric method, meaning that it does not make any assumptions about the underlying distribution of the data. It also allows for easy comparison of trends across different time series and can be used with data of any frequency.
However, the HP filter also has some limitations. It assumes that the trend is a smooth function, which may not be the case in some situations. It can also be sensitive to changes in the smoothing parameter λ, which may result in different trends for the same data. Additionally, the filter may produce unrealistic trends for very short time series.
Despite these limitations, the HP filter remains a valuable tool for analyzing economic and financial data. It is widely used by central banks and financial institutions to monitor long-term trends in the economy, and it can be used to identify turning points in the business cycle. The filter can also be used to analyze asset prices, exchange rates, and other financial variables.
The Hodrick-Prescott filter is a powerful tool for analyzing economic and financial data. It separates a time series into a trend component and a cyclical component, allowing for easy identification of long-term trends and turning points in the business cycle. While it has some limitations, it remains a valuable tool for economists, central banks, and financial institutions around the world.
Goertzel Cycle Composite Wave [Loxx]As the financial markets become increasingly complex and data-driven, traders and analysts must leverage powerful tools to gain insights and make informed decisions. One such tool is the Goertzel Cycle Composite Wave indicator, a sophisticated technical analysis indicator that detects cyclical patterns in financial data, helping traders make better predictions and optimize their trading strategies. With its unique combination of mathematical algorithms and advanced charting capabilities, this indicator has the potential to revolutionize the way we approach financial modeling and trading.
*** To decrease the load time of this indicator, only the most recent XX bars are rendered to the chart. You can control this value with the setting "Number of Bars to Render". This has nothing to do with repainting or with the indicator being endpointed. ***
█ Brief Overview of the Goertzel Cycle Composite Wave
The Goertzel Cycle Composite Wave is a sophisticated technical analysis tool that utilizes the Goertzel algorithm to analyze and visualize cyclical components within a financial time series. By identifying these cycles and their characteristics, the indicator aims to provide valuable insights into the market's underlying price movements, which could potentially be used for making informed trading decisions.
The Goertzel Cycle Composite Wave is considered a non-repainting and endpointed indicator. This means that once a value has been calculated for a specific bar, that value will not change in subsequent bars, and the indicator is designed to have a clear start and end point. This is an important characteristic for indicators used in technical analysis, as it allows traders to make informed decisions based on historical data without the risk of hindsight bias or future changes in the indicator's values. This means traders can use this indicator for trading purposes.
The repainting version of this indicator with forecasting, cycle selection/elimination options, and data output table can be found here:
Goertzel Browser
The primary purpose of this indicator is to:
1. Detect and analyze the dominant cycles present in the price data.
2. Reconstruct and visualize the composite wave based on the detected cycles.
To achieve this, the indicator performs several tasks:
1. Detrending the price data: The indicator preprocesses the price data using various detrending techniques, such as Hodrick-Prescott filters, zero-lag moving averages, and linear regression, to remove the underlying trend and focus on the cyclical components.
2. Applying the Goertzel algorithm: The indicator applies the Goertzel algorithm to the detrended price data, identifying the dominant cycles and their characteristics, such as amplitude, phase, and cycle strength.
3. Constructing the composite wave: The indicator reconstructs the composite wave by combining the detected cycles, either by using a user-defined list of cycles or by selecting the top N cycles based on their amplitude or cycle strength.
4. Visualizing the composite wave: The indicator plots the composite wave, using solid lines for the cycles. The color of the lines indicates whether the wave is increasing or decreasing.
This indicator is a powerful tool that employs the Goertzel algorithm to analyze and visualize the cyclical components within a financial time series. By providing insights into the underlying price movements, the indicator aims to assist traders in making more informed decisions.
█ What is the Goertzel Algorithm?
The Goertzel algorithm, named after Gerald Goertzel, is a digital signal processing technique that is used to efficiently compute individual terms of the Discrete Fourier Transform (DFT). It was first introduced in 1958, and since then, it has found various applications in the fields of engineering, mathematics, and physics.
The Goertzel algorithm is primarily used to detect specific frequency components within a digital signal, making it particularly useful in applications where only a few frequency components are of interest. The algorithm is computationally efficient, as it requires fewer calculations than the Fast Fourier Transform (FFT) when detecting a small number of frequency components. This efficiency makes the Goertzel algorithm a popular choice in applications such as:
1. Telecommunications: The Goertzel algorithm is used for decoding Dual-Tone Multi-Frequency (DTMF) signals, which are the tones generated when pressing buttons on a telephone keypad. By identifying specific frequency components, the algorithm can accurately determine which button has been pressed.
2. Audio processing: The algorithm can be used to detect specific pitches or harmonics in an audio signal, making it useful in applications like pitch detection and tuning musical instruments.
3. Vibration analysis: In the field of mechanical engineering, the Goertzel algorithm can be applied to analyze vibrations in rotating machinery, helping to identify faulty components or signs of wear.
4. Power system analysis: The algorithm can be used to measure harmonic content in power systems, allowing engineers to assess power quality and detect potential issues.
The Goertzel algorithm is used in these applications because it offers several advantages over other methods, such as the FFT:
1. Computational efficiency: The Goertzel algorithm requires fewer calculations when detecting a small number of frequency components, making it more computationally efficient than the FFT in these cases.
2. Real-time analysis: The algorithm can be implemented in a streaming fashion, allowing for real-time analysis of signals, which is crucial in applications like telecommunications and audio processing.
3. Memory efficiency: The Goertzel algorithm requires less memory than the FFT, as it only computes the frequency components of interest.
4. Precision: The algorithm is less susceptible to numerical errors compared to the FFT, ensuring more accurate results in applications where precision is essential.
The Goertzel algorithm is an efficient digital signal processing technique that is primarily used to detect specific frequency components within a signal. Its computational efficiency, real-time capabilities, and precision make it an attractive choice for various applications, including telecommunications, audio processing, vibration analysis, and power system analysis. The algorithm has been widely adopted since its introduction in 1958 and continues to be an essential tool in the fields of engineering, mathematics, and physics.
█ Goertzel Algorithm in Quantitative Finance: In-Depth Analysis and Applications
The Goertzel algorithm, initially designed for signal processing in telecommunications, has gained significant traction in the financial industry due to its efficient frequency detection capabilities. In quantitative finance, the Goertzel algorithm has been utilized for uncovering hidden market cycles, developing data-driven trading strategies, and optimizing risk management. This section delves deeper into the applications of the Goertzel algorithm in finance, particularly within the context of quantitative trading and analysis.
Unveiling Hidden Market Cycles:
Market cycles are prevalent in financial markets and arise from various factors, such as economic conditions, investor psychology, and market participant behavior. The Goertzel algorithm's ability to detect and isolate specific frequencies in price data helps traders and analysts identify hidden market cycles that may otherwise go unnoticed. By examining the amplitude, phase, and periodicity of each cycle, traders can better understand the underlying market structure and dynamics, enabling them to develop more informed and effective trading strategies.
Developing Quantitative Trading Strategies:
The Goertzel algorithm's versatility allows traders to incorporate its insights into a wide range of trading strategies. By identifying the dominant market cycles in a financial instrument's price data, traders can create data-driven strategies that capitalize on the cyclical nature of markets.
For instance, a trader may develop a mean-reversion strategy that takes advantage of the identified cycles. By establishing positions when the price deviates from the predicted cycle, the trader can profit from the subsequent reversion to the cycle's mean. Similarly, a momentum-based strategy could be designed to exploit the persistence of a dominant cycle by entering positions that align with the cycle's direction.
Enhancing Risk Management:
The Goertzel algorithm plays a vital role in risk management for quantitative strategies. By analyzing the cyclical components of a financial instrument's price data, traders can gain insights into the potential risks associated with their trading strategies.
By monitoring the amplitude and phase of dominant cycles, a trader can detect changes in market dynamics that may pose risks to their positions. For example, a sudden increase in amplitude may indicate heightened volatility, prompting the trader to adjust position sizing or employ hedging techniques to protect their portfolio. Additionally, changes in phase alignment could signal a potential shift in market sentiment, necessitating adjustments to the trading strategy.
Expanding Quantitative Toolkits:
Traders can augment the Goertzel algorithm's insights by combining it with other quantitative techniques, creating a more comprehensive and sophisticated analysis framework. For example, machine learning algorithms, such as neural networks or support vector machines, could be trained on features extracted from the Goertzel algorithm to predict future price movements more accurately.
Furthermore, the Goertzel algorithm can be integrated with other technical analysis tools, such as moving averages or oscillators, to enhance their effectiveness. By applying these tools to the identified cycles, traders can generate more robust and reliable trading signals.
The Goertzel algorithm offers invaluable benefits to quantitative finance practitioners by uncovering hidden market cycles, aiding in the development of data-driven trading strategies, and improving risk management. By leveraging the insights provided by the Goertzel algorithm and integrating it with other quantitative techniques, traders can gain a deeper understanding of market dynamics and devise more effective trading strategies.
█ Indicator Inputs
src: This is the source data for the analysis, typically the closing price of the financial instrument.
detrendornot: This input determines the method used for detrending the source data. Detrending is the process of removing the underlying trend from the data to focus on the cyclical components.
The available options are:
hpsmthdt: Detrend using Hodrick-Prescott filter centered moving average.
zlagsmthdt: Detrend using zero-lag moving average centered moving average.
logZlagRegression: Detrend using logarithmic zero-lag linear regression.
hpsmth: Detrend using Hodrick-Prescott filter.
zlagsmth: Detrend using zero-lag moving average.
DT_HPper1 and DT_HPper2: These inputs define the period range for the Hodrick-Prescott filter centered moving average when detrendornot is set to hpsmthdt.
DT_ZLper1 and DT_ZLper2: These inputs define the period range for the zero-lag moving average centered moving average when detrendornot is set to zlagsmthdt.
DT_RegZLsmoothPer: This input defines the period for the zero-lag moving average used in logarithmic zero-lag linear regression when detrendornot is set to logZlagRegression.
HPsmoothPer: This input defines the period for the Hodrick-Prescott filter when detrendornot is set to hpsmth.
ZLMAsmoothPer: This input defines the period for the zero-lag moving average when detrendornot is set to zlagsmth.
MaxPer: This input sets the maximum period for the Goertzel algorithm to search for cycles.
squaredAmp: This boolean input determines whether the amplitude should be squared in the Goertzel algorithm.
useAddition: This boolean input determines whether the Goertzel algorithm should use addition for combining the cycles.
useCosine: This boolean input determines whether the Goertzel algorithm should use cosine waves instead of sine waves.
UseCycleStrength: This boolean input determines whether the Goertzel algorithm should compute the cycle strength, which is a normalized measure of the cycle's amplitude.
WindowSizePast: This input defines the window size for the composite wave.
FilterBartels: This boolean input determines whether the Bartels test should be applied to filter out non-significant cycles.
BartNoCycles: This input sets the number of cycles to be used in the Bartels test.
BartSmoothPer: This input sets the period for the moving average used in the Bartels test.
BartSigLimit: This input sets the significance limit for the Bartels test, below which cycles are considered insignificant.
SortBartels: This boolean input determines whether the cycles should be sorted by their Bartels test results.
StartAtCycle: This input determines the starting index for selecting the top N cycles when UseCycleList is set to false. This allows you to skip a certain number of cycles from the top before selecting the desired number of cycles.
UseTopCycles: This input sets the number of top cycles to use for constructing the composite wave when UseCycleList is set to false. The cycles are ranked based on their amplitudes or cycle strengths, depending on the UseCycleStrength input.
SubtractNoise: This boolean input determines whether to subtract the noise (remaining cycles) from the composite wave. If set to true, the composite wave will only include the top N cycles specified by UseTopCycles.
█ Exploring Auxiliary Functions
The following functions demonstrate advanced techniques for analyzing financial markets, including zero-lag moving averages, Bartels probability, detrending, and Hodrick-Prescott filtering. This section examines each function in detail, explaining its purpose, methodology, and applications in finance, and shows how the functions work together to make the indicator a powerful analytical tool.
Zero-Lag Moving Average:
The zero-lag moving average function is designed to minimize the lag typically associated with moving averages. This is achieved through a two-step weighted linear regression process that emphasizes more recent data points. The function calculates a linearly weighted moving average (LWMA) on the input data and then applies another LWMA on the result. By doing this, the function creates a moving average that closely follows the price action, reducing the lag and improving the responsiveness of the indicator.
The zero-lag moving average function is used in the indicator to provide a responsive, low-lag smoothing of the input data. This function helps reduce the noise and fluctuations in the data, making it easier to identify and analyze underlying trends and patterns. By minimizing the lag associated with traditional moving averages, this function allows the indicator to react more quickly to changes in market conditions, providing timely signals and improving the overall effectiveness of the indicator.
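As a rough illustration of the two-pass construction described above, the sketch below applies a linearly weighted moving average twice. It is a direct reading of the description, not the indicator's actual Pine code, and the names are illustrative.

```python
def lwma(values, length):
    """Linearly weighted MA: the newest bar gets the largest weight."""
    weights = list(range(1, length + 1))
    total = sum(weights)
    out = []
    for i in range(len(values)):
        if i + 1 < length:
            out.append(None)                 # not enough history yet
            continue
        window = values[i - length + 1 : i + 1]
        out.append(sum(w * v for w, v in zip(weights, window)) / total)
    return out

def zero_lag_ma(values, length):
    """Two-pass smoothing: LWMA of the LWMA, per the description above."""
    first = [v for v in lwma(values, length) if v is not None]
    second = lwma(first, length)
    return [None] * (len(values) - len(second)) + second
```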
Bartels Probability:
The Bartels probability function calculates the probability of a given cycle being significant in a time series. It uses a mathematical test called the Bartels test to assess the significance of cycles detected in the data. The function calculates coefficients for each detected cycle and computes an average amplitude and an expected amplitude. By comparing these values, the Bartels probability is derived, indicating the likelihood of a cycle's significance. This information can help in identifying and analyzing dominant cycles in financial markets.
The Bartels probability function is incorporated into the indicator to assess the significance of detected cycles in the input data. By calculating the Bartels probability for each cycle, the indicator can prioritize the most significant cycles and focus on the market dynamics that are most relevant to the current trading environment. This function enhances the indicator's ability to identify dominant market cycles, improving its predictive power and aiding in the development of effective trading strategies.
Detrend Logarithmic Zero-Lag Regression:
The detrend logarithmic zero-lag regression function is used for detrending data while minimizing lag. It combines a zero-lag moving average with a linear regression detrending method. The function first calculates the zero-lag moving average of the logarithm of input data and then applies a linear regression to remove the trend. By detrending the data, the function isolates the cyclical components, making it easier to analyze and interpret the underlying market dynamics.
The detrend logarithmic zero-lag regression function is used in the indicator to isolate the cyclical components of the input data. By detrending the data, the function enables the indicator to focus on the cyclical movements in the market, making it easier to analyze and interpret market dynamics. This function is essential for identifying cyclical patterns and understanding the interactions between different market cycles, which can inform trading decisions and enhance overall market understanding.
Bartels Cycle Significance Test:
The Bartels cycle significance test is a function that combines the Bartels probability function and the detrend logarithmic zero-lag regression function to assess the significance of detected cycles. The function calculates the Bartels probability for each cycle and stores the results in an array. By analyzing the probability values, traders and analysts can identify the most significant cycles in the data, which can be used to develop trading strategies and improve market understanding.
The Bartels cycle significance test function is integrated into the indicator to provide a comprehensive analysis of the significance of detected cycles. By combining the Bartels probability function and the detrend logarithmic zero-lag regression function, this test evaluates the significance of each cycle and stores the results in an array. The indicator can then use this information to prioritize the most significant cycles and focus on the most relevant market dynamics. This function enhances the indicator's ability to identify and analyze dominant market cycles, providing valuable insights for trading and market analysis.
Hodrick-Prescott Filter:
The Hodrick-Prescott filter is a popular technique used to separate the trend and cyclical components of a time series. The function applies a smoothing parameter to the input data and calculates a smoothed series using a two-sided filter. This smoothed series represents the trend component, which can be subtracted from the original data to obtain the cyclical component. The Hodrick-Prescott filter is commonly used in economics and finance to analyze economic data and financial market trends.
The Hodrick-Prescott filter is incorporated into the indicator to separate the trend and cyclical components of the input data. By applying the filter to the data, the indicator can isolate the trend component, which can be used to analyze long-term market trends and inform trading decisions. Additionally, the cyclical component can be used to identify shorter-term market dynamics and provide insights into potential trading opportunities. The inclusion of the Hodrick-Prescott filter adds another layer of analysis to the indicator, making it more versatile and comprehensive.
Detrending Options: Detrend Centered Moving Average:
The detrend centered moving average function provides different detrending methods, including the Hodrick-Prescott filter and the zero-lag moving average, based on the selected detrending method. The function calculates two sets of smoothed values using the chosen method and subtracts one set from the other to obtain a detrended series. By offering multiple detrending options, this function allows traders and analysts to select the most appropriate method for their specific needs and preferences.
The detrend centered moving average function is integrated into the indicator to provide users with multiple detrending options, including the Hodrick-Prescott filter and the zero-lag moving average. By offering multiple detrending methods, the indicator allows users to customize the analysis to their specific needs and preferences, enhancing the indicator's overall utility and adaptability. This function ensures that the indicator can cater to a wide range of trading styles and objectives, making it a valuable tool for a diverse group of market participants.
The auxiliary functions discussed in this section demonstrate the power and versatility of mathematical techniques in analyzing financial markets. By understanding and implementing these functions, traders and analysts can gain valuable insights into market dynamics, improve their trading strategies, and make more informed decisions. The combination of zero-lag moving averages, Bartels probability, detrending methods, and the Hodrick-Prescott filter provides a comprehensive toolkit for analyzing and interpreting financial data, and their integration into a single indicator creates a powerful, versatile analytical tool that can provide valuable insights into financial markets.
█ In-Depth Analysis of the Goertzel Cycle Composite Wave Code
The Goertzel Cycle Composite Wave code is an implementation of the Goertzel Algorithm, an efficient technique to perform spectral analysis on a signal. The code is designed to detect and analyze dominant cycles within a given financial market data set. This section will provide an extremely detailed explanation of the code, its structure, functions, and intended purpose.
Function signature and input parameters:
The Goertzel Cycle Composite Wave function accepts numerous input parameters for customization, including source data (src), the current bar (forBar), sample size (samplesize), period (per), squared amplitude flag (squaredAmp), addition flag (useAddition), cosine flag (useCosine), cycle strength flag (UseCycleStrength), past window size (WindowSizePast), Bartels filter flag (FilterBartels), Bartels-related parameters (BartNoCycles, BartSmoothPer, BartSigLimit), sorting flag (SortBartels), and output buffers (goeWorkPast, cyclebuffer, amplitudebuffer, phasebuffer, cycleBartelsBuffer).
Initializing variables and arrays:
The code initializes several float arrays (goeWork1, goeWork2, goeWork3, goeWork4) with the same length as twice the period (2 * per). These arrays store intermediate results during the execution of the algorithm.
Preprocessing input data:
The input data (src) undergoes preprocessing to remove linear trends. This step enhances the algorithm's ability to focus on cyclical components in the data. The linear trend is calculated by finding the slope between the first and last values of the input data within the sample.
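A hedged sketch of that endpoint-slope detrend, with illustrative names:

```python
def detrend_endpoints(values):
    """Subtract the straight line through the first and last samples."""
    n = len(values)
    slope = (values[-1] - values[0]) / (n - 1)
    return [v - (values[0] + slope * i) for i, v in enumerate(values)]

print(detrend_endpoints([100, 102, 101, 104, 108]))  # endpoints map to 0.0
```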
Iterative calculation of Goertzel coefficients:
The core of the Goertzel Cycle Composite Wave algorithm lies in the iterative calculation of Goertzel coefficients for each frequency bin. These coefficients represent the spectral content of the input data at different frequencies. The code iterates through the range of frequencies, calculating the Goertzel coefficients using a nested loop structure.
Cycle strength computation:
The code calculates the cycle strength based on the Goertzel coefficients. This is an optional step, controlled by the UseCycleStrength flag. The cycle strength provides information on the relative influence of each cycle on the data per bar, considering both amplitude and cycle length. The algorithm computes the cycle strength either by squaring the amplitude (controlled by squaredAmp flag) or using the actual amplitude values.
Phase calculation:
The Goertzel Cycle Composite Wave code computes the phase of each cycle, which represents the position of the cycle within the input data. The phase is calculated using the arctangent function (math.atan) based on the ratio of the imaginary and real components of the Goertzel coefficients.
Peak detection and cycle extraction:
The algorithm performs peak detection on the computed amplitudes or cycle strengths to identify dominant cycles. It stores the detected cycles in the cyclebuffer array, along with their corresponding amplitudes and phases in the amplitudebuffer and phasebuffer arrays, respectively.
Sorting cycles by amplitude or cycle strength:
The code sorts the detected cycles based on their amplitude or cycle strength in descending order. This allows the algorithm to prioritize cycles with the most significant impact on the input data.
Bartels cycle significance test:
If the FilterBartels flag is set, the code performs a Bartels cycle significance test on the detected cycles. This test determines the statistical significance of each cycle and filters out the insignificant cycles. The significant cycles are stored in the cycleBartelsBuffer array. If the SortBartels flag is set, the code sorts the significant cycles based on their Bartels significance values.
Waveform calculation:
The Goertzel Cycle Composite Wave code calculates the waveform of the significant cycles for the specified time window, defined by the WindowSizePast parameter. The algorithm uses either cosine or sine functions (controlled by the useCosine flag) to calculate the waveform for each cycle. The useAddition flag determines whether the waveforms are added or subtracted.
Storing waveforms in a matrix:
The calculated waveform for each cycle is stored in the goeWorkPast matrix. This matrix holds the waveforms for the specified time window: each row represents a window position, and each column corresponds to a cycle.
Returning the number of cycles:
The Goertzel Cycle Composite Wave function returns the total number of detected cycles (number_of_cycles) after processing the input data. This information can be used to further analyze the results or to visualize the detected cycles.
The Goertzel Cycle Composite Wave code is a comprehensive implementation of the Goertzel Algorithm, specifically designed for detecting and analyzing dominant cycles within financial market data. The code offers a high level of customization, allowing users to fine-tune the algorithm based on their specific needs. The Goertzel Cycle Composite Wave's combination of preprocessing, iterative calculations, cycle extraction, sorting, significance testing, and waveform calculation makes it a powerful tool for understanding cyclical components in financial data.
█ Generating and Visualizing Composite Waveform
The indicator calculates and visualizes the composite waveform for specified time windows based on the detected cycles. Here's a detailed explanation of this process:
Updating WindowSizePast:
WindowSizePast is updated to ensure it is at least twice MaxPer (the maximum period).
Initializing matrices and arrays:
The matrix goeWorkPast is initialized to store the Goertzel results for specified time windows. Multiple arrays are also initialized to store cycle, amplitude, phase, and Bartels information.
Preparing the source data (srcVal) array:
The source data is copied into an array, srcVal, and detrended using one of the selected methods (hpsmthdt, zlagsmthdt, logZlagRegression, hpsmth, or zlagsmth).
Goertzel function call:
The Goertzel function is called to analyze the detrended source data and extract cycle information. The output, number_of_cycles, contains the number of detected cycles.
Initializing arrays for waveforms:
The goertzel array is initialized to store the endpoint values of the Goertzel composite calculation.
Calculating composite waveform (goertzel array):
The composite waveform is calculated by summing the selected cycles (either from the user-defined cycle list or the top cycles) and optionally subtracting the noise component.
Drawing composite waveform (pvlines):
The composite waveform is drawn on the chart using solid lines. The color of the lines is determined by the direction of the waveform (green for upward, red for downward).
To summarize, this indicator generates a composite waveform from the detected cycles in the financial data and visualizes it on the chart using colored lines.
█ Enhancing the Goertzel Algorithm-Based Script for Financial Modeling and Trading
The Goertzel algorithm-based script for detecting dominant cycles in financial data is a powerful tool for financial modeling and trading. It provides valuable insights into the past behavior of these cycles. However, as with any algorithm, there is always room for improvement. This section discusses potential enhancements to the existing script to make it even more robust and versatile for financial modeling, general trading, advanced trading, and high-frequency finance trading.
Enhancements for Financial Modeling
Data preprocessing: One way to improve the script's performance for financial modeling is to introduce more advanced data preprocessing techniques. This could include removing outliers, handling missing data, and normalizing the data to ensure consistent and accurate results.
Additional detrending and smoothing methods: Incorporating more sophisticated detrending and smoothing techniques, such as wavelet transform or empirical mode decomposition, can help improve the script's ability to accurately identify cycles and trends in the data.
Machine learning integration: Integrating machine learning techniques, such as artificial neural networks or support vector machines, can help enhance the script's predictive capabilities, leading to more accurate financial models.
Enhancements for General and Advanced Trading
Customizable indicator integration: Allowing users to integrate their own technical indicators can help improve the script's effectiveness for both general and advanced trading. By enabling the combination of the dominant cycle information with other technical analysis tools, traders can develop more comprehensive trading strategies.
Risk management and position sizing: Incorporating risk management and position sizing functionality into the script can help traders better manage their trades and control potential losses. This can be achieved by calculating the optimal position size based on the user's risk tolerance and account size.
Multi-timeframe analysis: Enhancing the script to perform multi-timeframe analysis can provide traders with a more holistic view of market trends and cycles. By identifying dominant cycles on different timeframes, traders can gain insights into the potential confluence of cycles and make better-informed trading decisions.
Enhancements for High-Frequency Finance Trading
Algorithm optimization: To ensure the script's suitability for high-frequency finance trading, optimizing the algorithm for faster execution is crucial. This can be achieved by employing efficient data structures and refining the calculation methods to minimize computational complexity.
Real-time data streaming: Integrating real-time data streaming capabilities into the script can help high-frequency traders react to market changes more quickly. By continuously updating the cycle information based on real-time market data, traders can adapt their strategies accordingly and capitalize on short-term market fluctuations.
Order execution and trade management: To fully leverage the script's capabilities for high-frequency trading, implementing functionality for automated order execution and trade management is essential. This can include features such as stop-loss and take-profit orders, trailing stops, and automated trade exit strategies.
While the existing Goertzel algorithm-based script is a valuable tool for detecting dominant cycles in financial data, there are several potential enhancements that can make it even more powerful for financial modeling, general trading, advanced trading, and high-frequency finance trading. By incorporating these improvements, the script can become a more versatile and effective tool for traders and financial analysts alike.
█ Understanding the Limitations of the Goertzel Algorithm
While the Goertzel algorithm-based script for detecting dominant cycles in financial data provides valuable insights, it is important to be aware of its limitations and drawbacks. Some of the key drawbacks of this indicator are:
Lagging nature:
As with many other technical indicators, the Goertzel algorithm-based script can suffer from lagging effects, meaning that it may not immediately react to real-time market changes. This lag can lead to late entries and exits, potentially resulting in reduced profitability or increased losses.
Parameter sensitivity:
The performance of the script can be sensitive to the chosen parameters, such as the detrending methods, smoothing techniques, and cycle detection settings. Improper parameter selection may lead to inaccurate cycle detection or increased false signals, which can negatively impact trading performance.
Complexity:
The Goertzel algorithm itself is relatively complex, making it difficult for novice traders or those unfamiliar with the concept of cycle analysis to fully understand and effectively utilize the script. This complexity can also make it challenging to optimize the script for specific trading styles or market conditions.
Overfitting risk:
As with any data-driven approach, there is a risk of overfitting when using the Goertzel algorithm-based script. Overfitting occurs when a model becomes too specific to the historical data it was trained on, leading to poor performance on new, unseen data. This can result in misleading signals and reduced trading performance.
Limited applicability:
The Goertzel algorithm-based script may not be suitable for all markets, trading styles, or timeframes. Its effectiveness in detecting cycles may be limited in certain market conditions, such as during periods of extreme volatility or low liquidity.
While the Goertzel algorithm-based script offers valuable insights into dominant cycles in financial data, it is essential to consider its drawbacks and limitations when incorporating it into a trading strategy. Traders should always use the script in conjunction with other technical and fundamental analysis tools, as well as proper risk management, to make well-informed trading decisions.
█ Interpreting Results
The Goertzel Cycle Composite Wave indicator is interpreted by analyzing its plotted composite wave, which combines the dominant cycles detected in the price data.
The composite wave is displayed as a solid line, with green indicating a bullish swing and red indicating a bearish swing.
Interpreting the indicator therefore amounts to reading the direction of the composite wave and matching it with the corresponding bullish or bearish color.
█ Conclusion
The Goertzel Cycle Composite Wave indicator is a powerful tool for identifying and analyzing cyclical patterns in financial markets. Its ability to detect multiple cycles of varying frequencies and strengths makes it a valuable addition to any trader's technical analysis toolkit. However, it is important to keep in mind that the Goertzel Cycle Composite Wave indicator should be used in conjunction with other technical analysis tools and fundamental analysis to achieve the best results. With continued refinement and development, the Goertzel Cycle Composite Wave indicator has the potential to become a highly effective tool for financial modeling, general trading, advanced trading, and high-frequency finance trading. Its accuracy and versatility make it a promising candidate for further research and development.
█ Footnotes
What is the Bartels Test for Cycle Significance?
The Bartels Cycle Significance Test is a statistical method that determines whether the peaks and troughs of a time series are statistically significant. The test is named after its inventor, the geophysicist Julius Bartels, who developed it in the 1930s.
The Bartels test is designed to analyze the cyclical components of a time series, which can help traders and analysts identify trends and cycles in financial markets. The test calculates a Bartels statistic, which measures the degree of non-randomness or autocorrelation in the time series.
The Bartels statistic is calculated by first splitting the time series into two halves and calculating the range of the peaks and troughs in each half. The test then compares these ranges using a t-test, which measures the significance of the difference between the two ranges.
If the Bartels statistic is greater than a critical value, it indicates that the peaks and troughs in the time series are non-random and that there is a significant cyclical component to the data. Conversely, if the Bartels statistic is less than the critical value, it suggests that the peaks and troughs are random and that there is no significant cyclical component.
The Bartels Cycle Significance Test is particularly useful in financial analysis because it can help traders and analysts identify significant cycles in asset prices, which can in turn inform investment decisions. However, it is important to note that the test is not perfect and can produce false signals in certain situations, particularly in noisy or volatile markets. Therefore, it is always recommended to use the test in conjunction with other technical and fundamental indicators to confirm trends and cycles.
Deep-dive into the Hodrick-Prescott Filter
The Hodrick-Prescott (HP) filter is a statistical tool used in economics and finance to separate a time series into two components: a trend component and a cyclical component. It is a powerful tool for identifying long-term trends in economic and financial data and is widely used by economists, central banks, and financial institutions around the world.
The HP filter was introduced by economists Robert Hodrick and Edward Prescott in a 1981 working paper (formally published in 1997). It is a simple, single-parameter filter that separates a time series into a trend component and a cyclical component. The trend component represents the long-term behavior of the data, while the cyclical component captures the shorter-term fluctuations around the trend.
The HP filter works by minimizing the following objective function:
Minimize: (Sum of Squared Deviations) + λ (Sum of Squared Second Differences)
Where:
1. The first term represents the deviation of the data from the trend.
2. The second term represents the smoothness of the trend.
3. λ is a smoothing parameter that determines the degree of smoothness of the trend.
The smoothing parameter λ depends on the frequency of the data: common choices are 1,600 for quarterly data, 100 for annual data, and 14,400 for monthly data. Higher values of λ produce a smoother trend, while lower values allow the trend to track the data more closely.
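For readers who want to experiment with λ, here is a compact dense-matrix sketch in Python with NumPy. Production implementations exploit the banded structure of the system for speed; this version favors clarity, and the names are illustrative.

```python
import numpy as np

def hp_filter(y, lam=1600.0):
    """HP trend minimizing sum((y - t)^2) + lam * sum((second difference of t)^2).

    Solves the first-order condition (I + lam * D'D) t = y, where D is the
    (n-2) x n second-difference operator.
    """
    y = np.asarray(y, dtype=float)
    n = len(y)
    D = np.zeros((n - 2, n))
    for i in range(n - 2):
        D[i, i : i + 3] = (1.0, -2.0, 1.0)
    trend = np.linalg.solve(np.eye(n) + lam * (D.T @ D), y)
    return trend, y - trend                  # trend and cyclical components
```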
The HP filter has several advantages over other smoothing techniques. It is a non-parametric method, meaning that it does not make any assumptions about the underlying distribution of the data. It also allows for easy comparison of trends across different time series and can be used with data of any frequency.
However, the HP filter also has some limitations. It assumes that the trend is a smooth function, which may not be the case in some situations. It can also be sensitive to changes in the smoothing parameter λ, which may result in different trends for the same data. Additionally, the filter may produce unrealistic trends for very short time series.
Despite these limitations, the HP filter remains a valuable tool for analyzing economic and financial data. It is widely used by central banks and financial institutions to monitor long-term trends in the economy, and it can be used to identify turning points in the business cycle. The filter can also be used to analyze asset prices, exchange rates, and other financial variables.
The Hodrick-Prescott filter is a powerful tool for analyzing economic and financial data. It separates a time series into a trend component and a cyclical component, allowing for easy identification of long-term trends and turning points in the business cycle. While it has some limitations, it remains a valuable tool for economists, central banks, and financial institutions around the world.
STD/Clutter-Filtered, Variety FIR Filters [Loxx]
STD/Clutter-Filtered, Variety FIR Filters is an FIR filter explorer. The following FIR digital filters are included:
Rectangular - simple moving average
Hanning
Hamming
Blackman
Blackman/Harris
Linear weighted
Triangular
There are dozens of windowing functions like the ones listed above. This indicator will be updated over time as I create more windowing functions in Pine.
Uniform/Rectangular Window
The uniform window (also called the rectangular window) is a time window with unity amplitude for all time samples and has the same effect as not applying a window.
Use this window when leakage is not a concern, such as observing an entire transient signal.
The uniform window has a rectangular shape and does not attenuate any portion of the time record. It weights all parts of the time record equally. Because the uniform window does not force the signal to appear periodic in the time record, it is generally used only with functions that are already periodic within a time record, such as transients and bursts.
The uniform window is sometimes called a transient or boxcar window.
For sine waves that are exactly periodic within a time record, using the uniform window allows you to measure the amplitude exactly (to within hardware specifications) from the Spectrum trace.
Hanning Window
The Hanning window attenuates the input signal at both ends of the time record to zero. This forces the signal to appear periodic. The Hanning window offers good frequency resolution at the expense of some amplitude accuracy.
This window is typically used for broadband signals such as random noise. This window should not be used for burst or chirp source types or other strictly periodic signals. The Hanning window is sometimes called the Hann window or random window.
Hamming Window
Computers can't do computations with an infinite number of data points, so all signals are "cut off" at either end. This causes ripple on either side of spectral peaks. The Hamming window reduces this ripple, giving a more accurate picture of the original signal's frequency spectrum.
Blackman
The Blackman window is a taper formed by using the first three terms of a summation of cosines. It was designed to have close to the minimal leakage possible. It is close to optimal, only slightly worse than a Kaiser window.
Blackman-Harris
This is the original "Minimum 4-sample Blackman-Harris" window, as given in the classic window paper by Fredric Harris "On the Use of Windows for Harmonic Analysis with the Discrete Fourier Transform", Proceedings of the IEEE, vol 66, no. 1, pp. 51-83, January 1978. The maximum side-lobe level is -92.00974072 dB.
Linear Weighted
A Weighted Moving Average puts more weight on recent data and less on past data. This is done by multiplying each bar’s price by a weighting factor. Because of its unique calculation, WMA will follow prices more closely than a corresponding Simple Moving Average.
Triangular Weighted
Triangular windowing is known for very smooth results. The weights in the triangular moving average add more weight to the central values of the averaged data, so the coefficients follow a specific distribution. The following examples give a clear picture of the coefficient progression:
period 1 : 1
period 2 : 1 1
period 3 : 1 2 1
period 4 : 1 2 2 1
period 5 : 1 2 3 2 1
period 6 : 1 2 3 3 2 1
period 7 : 1 2 3 4 3 2 1
period 8 : 1 2 3 4 4 3 2 1
Read here to read about how each of these filters compare with each other: Windowing
What is a Finite Impulse Response Filter?
In signal processing, a finite impulse response (FIR) filter is a filter whose impulse response (or response to any finite length input) is of finite duration, because it settles to zero in finite time. This is in contrast to infinite impulse response (IIR) filters, which may have internal feedback and may continue to respond indefinitely (usually decaying).
The impulse response (that is, the output in response to a Kronecker delta input) of an Nth-order discrete-time FIR filter lasts exactly N+1 samples (from the first nonzero element through the last nonzero element) before it settles to zero.
FIR filters can be discrete-time or continuous-time, and digital or analog.
An FIR filter is (similar to, or) just a weighted moving average filter, where (unlike a typical equally weighted moving average filter) the weights of each delay tap are not constrained to be identical or even of the same sign. By changing the values in the array of weights (the impulse response, or a time-shifted and sampled version of it), the frequency response of an FIR filter can be completely changed.
An FIR filter simply CONVOLVES the input time series (price data) with its IMPULSE RESPONSE. The impulse response is just a set of weights (or "coefficients") that multiply each data point. Then you just add up all the products and divide by the sum of the weights and that is it; e.g., for a 10-bar SMA you just add up 10 bars of price data (each multiplied by 1) and divide by 10. For a weighted-MA you add up the product of the price data with triangular-number weights and divide by the total weight.
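A compact sketch of that normalized-convolution view, assuming NumPy. It also builds the triangular coefficient pattern listed earlier; the names are illustrative, not the indicator's own.

```python
import numpy as np

def triangular_weights(period):
    """Build the 1 2 3 ... 3 2 1 pattern shown above."""
    half = (period + 1) // 2
    rising = np.arange(1, half + 1, dtype=float)
    if period % 2 == 0:
        return np.concatenate([rising, rising[::-1]])
    return np.concatenate([rising, rising[-2::-1]])

def fir_filter(prices, weights):
    """Weighted average of the last len(weights) bars, divided by the weight sum."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    prices = np.asarray(prices, dtype=float)
    out = np.full(len(prices), np.nan)       # not enough history at the start
    for i in range(len(w) - 1, len(prices)):
        out[i] = np.dot(prices[i - len(w) + 1 : i + 1], w)
    return out

closes = 100 + np.cumsum(np.random.default_rng(0).normal(size=300))
sma10 = fir_filter(closes, np.ones(10))            # rectangular window = SMA
hann10 = fir_filter(closes, np.hanning(10))        # Hanning-weighted average
tri7 = fir_filter(closes, triangular_weights(7))   # 1 2 3 4 3 2 1
```

Swapping the weight vector is all it takes to move between the windows in the list above; the frequency response changes, but the mechanics do not.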
Ultra Low Lag Moving Average's weights are designed to have MAXIMUM possible smoothing and MINIMUM possible lag compatible with as-flat-as-possible phase response.
What is a Clutter Filter?
For our purposes here, this is a filter that compares the slope of the trading filter's output to a threshold to determine whether to shift trends. If the slope, whether up or down, does not exceed the threshold, the plot is colored gray, indicating a chop zone. If either the up or down slope exceeds the threshold, the trend turns green for up and red for down. For demonstration purposes, an EMA is used as the moving average; this acts to reduce the noise in the signal.
Included
Bar coloring
Loxx's Expanded Source Types
Signals
Alerts
Related Indicators
STD/Clutter-Filtered, Kaiser Window FIR Digital Filter
STD- and Clutter-Filtered, Non-Lag Moving Average
Clutter-Filtered, D-Lag Reducer, Spec. Ops FIR Filter
STD-Filtered, Ultra Low Lag Moving Average
Mystic Pulse V2.0 [CHE] Mystic Pulse V2.0 — Adaptive DI streaks with gradient intensity for clearer trend persistence
Summary
Mystic Pulse V2.0 measures directional persistence by counting how often the positive or negative directional index strengthens and dominates. These counts drive gradient colors for bars, wicks, and helper plots, so intensity reflects local momentum rather than absolute values. A windowed normalization and gamma control adapt the visuals to recent conditions, preventing one regime from overpowering the next. The result is an immediate, at-a-glance read of trend direction and stamina without relying on crossovers alone.
Motivation: Why this design?
Classical DI and ADX signals can flip during choppy phases or feel sluggish in calm regimes. This script focuses on persistence: it increments a positive or negative streak only when the corresponding directional pressure both strengthens compared with the prior bar and dominates the other side. Simple OHLC pre-smoothing reduces micro-noise, and local normalization keeps the scale relevant to the last segment of data, not a distant past.
What’s different vs. standard approaches?
Reference baseline: Traditional DI and ADX lines with crossovers and fixed-scale thresholds.
Architecture differences:
Wilder-style recursive smoothing on true range and directional movement.
Streak counters for positive and negative pressure that advance only on strengthening and dominance.
Windowed normalization and gamma shaping for visual intensity.
Wick coloring via `plotcandle` with forced overlay from a pane indicator.
Practical effect: Bars and wicks grow more vivid during sustained pressure and fade during indecision. The column plots show streak depth directly, which helps filter one-bar flips.
How it works (technical)
1. Pre-smoothing: Open, high, low, and close are averaged over a short simple moving window to dampen micro-ticks.
2. Directional inputs: True range and directional movement are formed from the smoothed prices, then recursively smoothed using a Wilder-style update that carries prior state forward.
3. DI comparison: The script derives positive and negative directional ratios relative to smoothed range. A side advances its streak when it increases compared with the previous bar and exceeds the opposite side. The other streak resets.
4. Trend score and color base: The difference between positive and negative streaks defines the active side.
5. Normalization and gamma: The absolute streak magnitude and each side’s streak are normalized within a rolling window. Gamma parameters reshape intensity so mid-range values are either compressed or emphasized (a sketch follows this list).
6. Rendering:
Two column plots show positive and negative streak counts in the pane with gradient colors.
A square marker at the bottom uses the global gradient as a compact heat cue.
Bar colors on the main chart use either the gradient, neutral trend colors, or no paint depending on toggles.
Wick, border, and candle overlays are colored via `plotcandle` with forced overlay.
7. State handling: Smoothed values and counters persist across bars; initialization uses first available values without lookahead. No higher-timeframe requests are used, so repaint risk is limited to normal live-bar evolution.
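A minimal sketch of steps three and five, assuming the directional ratios are already computed; the function names are illustrative, not taken from the script.

```python
def update_streaks(plus_di, minus_di, prev_plus, prev_minus, up, down):
    """Advance a streak only when its side strengthens AND dominates; reset the other."""
    if plus_di > prev_plus and plus_di > minus_di:
        return up + 1, 0
    if minus_di > prev_minus and minus_di > plus_di:
        return 0, down + 1
    return up, down                          # indecision: both streaks hold

def gradient_intensity(history, window=100, gamma=0.7):
    """Normalize the latest streak against its rolling min/max, then gamma-shape it."""
    recent = history[-window:]
    lo, hi = min(recent), max(recent)
    norm = 0.0 if hi == lo else (history[-1] - lo) / (hi - lo)
    return norm ** gamma                     # gamma below 1 brightens mid-range values
```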
Parameter Guide
Show neutral candles (fallback) — Paints main-chart bars in plain up or down colors when gradients are disabled — Default false — Use when you prefer simple up/down coloring.
Show last N shapes — Limits bottom square markers — Default 333 — Reduce if your chart gets cluttered.
ADX smoothing length — Controls the Wilder smoothing window for range and directional movement — Default 9 — Larger values increase stability but respond later.
OHLC SMA length — Pre-smoothing for inputs — Default 1 — Increase slightly on noisy assets to reduce flip risk.
Gradient barcolor — Enables gradient bar paint on the main chart — Default true — Turn off to use wicks only or neutral bars.
Wick coloring — Colors wicks, borders, and bodies via overlay — Default true — Disable if it conflicts with other overlays.
Gradient window — Lookback for local normalization — Default 100 — Shorter windows adapt faster; longer windows provide steadier intensity.
Gradient transparency — Overall transparency for gradient paints — Default 0 — Increase to make gradients subtler.
Gamma bars/shapes — Contrast for bar and shape intensity — Default 0.70 — Lower values brighten mid-tones; higher values compress them.
Gamma plots — Contrast for the column plots — Default 0.80 — Tune separately from bar intensity.
Wick transparency — Transparency for wick coloring — Default 0 — Raise to let price action show through.
Up/Down colors (dark and neon) — Base and accent colors for both directions — Defaults as provided — Adjust to match your chart theme.
Reading & Interpretation
Pane columns: The green column represents the positive streak count; the red column represents the negative streak count. Taller columns signal stronger persistence.
Gradient marker: The bottom square indicates the active side and persistence strength at a glance.
Main-chart bars and wicks: Color direction shows the dominant side; intensity reflects the normalized and gamma-shaped streak magnitude. Faded tones suggest weak or fading pressure.
Practical Workflows & Combinations
Trend following: Enter in the direction of the active side when the corresponding column expands over several bars. Confirm with structure such as higher highs and higher lows or lower highs and lower lows.
Exits and stops: Consider scaling out when intensity fades toward mid-range while structure stalls. Tighten stops after extended streaks or when wicks lose intensity.
Multi-asset/Multi-TF: Use defaults for liquid assets on intraday to swing timeframes. For highly volatile instruments, raise smoothing and the normalization window. For calm markets, lower them to regain sensitivity.
Behavior, Constraints & Performance
Repaint/confirmation: Values update during the live bar and stabilize after bar close. No historical repaint beyond normal live-bar updates.
security()/HTF: Not used; cross-timeframe repaint paths do not apply.
Resources: Declared `max_bars_back` two thousand; no explicit loops or arrays; plot and label limits are generous.
Known limits: Streak counters can remain elevated during slow reversals. Very short normalization windows can cause rapid intensity swings. Gaps or extreme spikes may temporarily distort intensity until the window adapts.
Sensible Defaults & Quick Tuning
Start with: ADX smoothing nine, OHLC SMA one, normalization window one hundred, gradient and wick coloring enabled, gamma around zero point seven to zero point eight.
Too many flips: Increase ADX smoothing and the normalization window; consider a small bump in OHLC SMA.
Too sluggish: Decrease ADX smoothing and the normalization window.
Colors overpower chart: Increase gradient and wick transparency or raise gamma to compress mid-tones.
What this indicator is—and isn’t
This is a visualization and signal layer that represents directional persistence and intensity. It does not issue trade entries or exits on its own and is not predictive. Use it alongside market structure, volume, and risk controls.
Disclaimer
The content, including any code, is for educational and informational purposes only and does not constitute financial advice or a recommendation to buy or sell any instrument. Trading involves substantial risk, including the possible loss of principal. Past performance is not indicative of future results. Always do your own research and consider consulting a qualified professional.
Single AHR DCA (HM) — AHR Pane (customized quantile)
Customized note
The log-regression window (LR length) controls the span of historical data from which the long-term fair value path is estimated.
The AHR window (AHR window length) controls the historical regime over which you measure whether the coin is “cheap / expensive”.
When you choose a log-regression window of length L (years) and an AHR window of length A (years), you can intuitively read the indicator as:
“Within the last A years of this regime, relative to the long-term trend estimated over the same A years, the current price is cheap / neutral / expensive.”
Guidelines:
In general, set the AHR window equal to or slightly longer than the LR window:
If the AHR window is much longer than LR, you mix different baselines (different LR regimes) into one distribution.
If the AHR window is much shorter than LR, quantiles mostly reflect a very local slice of history.
For BTC / ETH and other BTC-like assets, you can use relatively long horizons (e.g. LR ≈ 3–5 years, AHR window ≈ 3–8 years).
For major altcoins (BNB / SOL / XRP and similar high-beta assets), it is recommended to use equal or slightly shorter horizons, e.g. LR ≈ 2–3 years, AHR window ≈ 2–3 years.
1. Price series & windows
Working timeframe: daily (1D).
Let the daily close of the current symbol on day t be P_t .
Main length parameters:
HM window: L_HM = maLen (default 200 days)
Log-regression window: L_LR = lrLen (default 1095 days ≈ 3 years)
AHR window (regime window): W = windowLen (default 1095 days ≈ 3 years)
2. Harmonic moving average (HM)
On a window of length L_HM, define the harmonic mean:
HM_t = ( (1 / L_HM) * sum_{k = t-L_HM+1 .. t} ( 1 / max(P_k, eps) ) )^(-1)
Here eps = 1e-10 is used to avoid division by zero.
Intuition: HM is more sensitive to low prices – an extremely low price inside the window will drag HM down significantly.
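A one-line sketch of the harmonic mean, matching the definition above:

```python
def harmonic_mean(prices, eps=1e-10):
    """Reciprocal of the average reciprocal price over the window."""
    return len(prices) / sum(1.0 / max(p, eps) for p in prices)

print(harmonic_mean([4.0, 4.0, 1.0]))  # 2.0; the low print drags HM below the 3.0 arithmetic mean
```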
3. Log-regression baseline (LR)
On a window of length L_LR, perform a linear regression on log price:
Over the last L_LR bars, build the series
x_k = log( max(P_k, eps) ), for k = t-L_LR+1 ... t, and fit
x_k ≈ a + b * k.
The fitted value at the current index t is
log_P_hat_t = a + b * t.
Exponentiate to get the log-regression baseline:
LR_t = exp( log_P_hat_t ).
Interpretation: LR_t is the long-term trend / fair value path of the current regime over the past L_LR days.
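A sketch of the log-space least-squares fit and its exponentiated endpoint, under the same definitions; the names are illustrative.

```python
import math

def log_regression_baseline(prices, eps=1e-10):
    """Fit log(P) = a + b*k over the window; return exp of the fitted last value."""
    x = [math.log(max(p, eps)) for p in prices]
    n = len(x)
    k_mean = (n - 1) / 2.0
    x_mean = sum(x) / n
    cov = sum((k - k_mean) * (xv - x_mean) for k, xv in enumerate(x))
    var = sum((k - k_mean) ** 2 for k in range(n))
    b = cov / var                            # trend slope in log space
    a = x_mean - b * k_mean
    return math.exp(a + b * (n - 1))         # LR_t at the most recent bar
```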
4. HM-based AHR (valuation ratio)
At each time t, build an HM-based AHR (valuation multiple):
AHR_t = ( P_t / HM_t ) * ( P_t / LR_t )
Interpretation:
P_t / HM_t : deviation of price from the mid-term HM (e.g. 200-day harmonic mean).
P_t / LR_t : deviation of price from the long-term log-regression trend.
Multiplying them means:
if price is above both HM and LR, “expensiveness” is amplified;
if price is below both, “cheapness” is amplified.
Typical reading:
AHR_t < 1 : price is below both mid-term mean and long-term trend → statistically cheaper.
AHR_t > 1 : price is above both mid-term mean and long-term trend → statistically more expensive.
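A tiny worked example with illustrative numbers only:

```python
P_t, HM_t, LR_t = 52_000.0, 48_000.0, 45_000.0   # hypothetical values
AHR_t = (P_t / HM_t) * (P_t / LR_t)
print(round(AHR_t, 3))   # 1.252: above both references, i.e. statistically "expensive"
```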
5. Empirical quantile thresholds (Opp / Risk)
On each new day, whenever AHR_t is valid, add it into a rolling array:
A_t_window = { AHR_{t-W+1}, ..., AHR_t } (at most W = windowLen elements)
On this empirical distribution, define two quantiles:
Opportunity quantile: q_opp (default 15%)
Risk quantile: q_risk (default 65%)
Using standard percentile computation (order statistics + linear interpolation), we get:
Opp threshold:
theta_opp = Percentile( A_t_window, q_opp )
Risk threshold:
theta_risk = Percentile( A_t_window, q_risk )
We also compute the percentile rank of the current AHR inside the same history:
q_now = PercentileRank( A_t_window, AHR_t ) ∈ [0, 100]
This yields three valuation zones:
Opportunity zone: AHR_t <= theta_opp
(corresponds to roughly the cheapest ~q_opp% of historical states in the last W days.)
Neutral zone: theta_opp < AHR_t < theta_risk
Risk zone: AHR_t >= theta_risk
(corresponds to roughly the most expensive ~(100 - q_risk)% of historical states in the last W days.)
All quantiles are purely empirical and symbol-specific: they are computed only from the current asset’s own history, without reusing BTC thresholds or assuming cross-asset similarity.
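A sketch of the zone classification using NumPy's percentile routine, which applies the linear-interpolated order statistics described above; the rank convention shown is one common choice, and the names are illustrative.

```python
import numpy as np

def classify_zone(ahr_window, ahr_now, q_opp=15, q_risk=65):
    """Empirical thresholds plus the current value's percentile rank."""
    a = np.asarray(ahr_window, dtype=float)
    theta_opp = np.percentile(a, q_opp)      # Opp threshold
    theta_risk = np.percentile(a, q_risk)    # Risk threshold
    q_now = 100.0 * np.mean(a <= ahr_now)    # rank of AHR_t in [0, 100]
    if ahr_now <= theta_opp:
        return "opportunity", q_now
    if ahr_now >= theta_risk:
        return "risk", q_now
    return "neutral", q_now
```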
6. DCA simulation (lightweight, rolling window)
Given:
a daily budget B (input: budgetPerDay), and
a DCA simulation window H (input: dcaWindowLen, default 900 days ≈ 2.5 years),
The script applies the following rule on each new day t (a runnable sketch follows this section):
If thresholds are unavailable or AHR_t > theta_risk
→ classify as Risk zone → buy = 0
If AHR_t <= theta_opp
→ classify as Opportunity zone → buy = 2B (double size)
Otherwise (Neutral zone)
→ buy = B (normal DCA)
Daily invested cash:
C_t ∈ {0, B, 2B}
Daily bought quantity:
DeltaQ_t = C_t / P_t
The script keeps rolling sums over the last H days:
Cumulative position:
Q_H = sum_{k=t-H+1..t} DeltaQ_k
Cumulative invested cash:
C_H = sum_{k=t-H+1..t} C_k
Current portfolio value:
PortVal_t = Q_H * P_t
Cumulative P&L:
PnL_t = PortVal_t - C_H
Active days:
number of days in the last H with C_k > 0.
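A hedged sketch of the budget rule and the rolling bookkeeping, assuming plain Python lists; the Pine implementation may differ in detail and the names are illustrative.

```python
def daily_budget(ahr, theta_opp, theta_risk, base_budget):
    """Zone-dependent spend: 0 in the risk zone, 2B in the opportunity zone, else B."""
    if theta_opp is None or theta_risk is None or ahr > theta_risk:
        return 0.0
    if ahr <= theta_opp:
        return 2.0 * base_budget
    return base_budget

def rolling_pnl(prices, spends, horizon):
    """Rolling position, invested cash, and P&L over the last `horizon` days."""
    qty = [c / p for c, p in zip(spends, prices)]   # DeltaQ_t = C_t / P_t
    q_h = sum(qty[-horizon:])                       # cumulative position Q_H
    c_h = sum(spends[-horizon:])                    # cumulative invested cash C_H
    return q_h * prices[-1] - c_h                   # PnL = PortVal - C_H
```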
These results are only used to visualize how this AHR-quantile-driven DCA rule would have behaved over the recent regime, and do not constitute financial advice.
ICT Killzones & Macros
ICT Killzones & Macros (v1.1.5) — configurable ICT session windows + refined “macro” windows with live High/Low levels, optional extensions, next-window previews, and lightweight opening-price lines. Built to be clock-robust, timezone-aware, and performant on intraday charts.
Tip: All times are interpreted in your chosen IANA timezone (default: America/New_York) and auto-handle DST. You can rename, recolor, enable/disable, and retime every window.
What it plots
- Killzones (5) : Asia (19:00–02:00), London (02:00–05:00), NY AM (07:00–09:30), London Close (10:00–12:00), NY PM (13:30–16:00) — full-height boxes with optional header.
- Macros (8) (defaults tailored for common ICT “refined” windows): Asia-1 (18:00–21:00), Asia-2 (21:00–00:00), London-1 (01:00–04:00), AM-1 (09:45–10:15), AM-2 (10:45–11:15), Lunch (12:00–13:00), PM-1 (13:30–14:30), Power Hour (15:10–16:00).
- Live High/Low lines for the current Macro/Killzone window.
- Optional HL extension to the right until price crosses or the trading day rolls (style selectable).
- “Next” previews : earliest upcoming Macro and Killzone header; optional next-window background band.
- Opening Prices (3 lightweight time lines) : defaults 00:00, 08:30, 09:30 with right-edge labels, scoped to a session you choose (auto-cleans at session end).
Key inputs & styling
- General : Timezone (IANA), “Sessions to show” (per window) to keep only the last N completed windows.
- Header : height (ticks), gap (ticks), fill opacity, border width/style, text size/color, toggle “Next Macro/Killzone” headers.
- Boxes : global fill opacity, global border width/style (used by both Macros & Killzones).
- High/Low : show HL, HL line style, extend on/off + extension style, optional extension labels.
- Opening Prices : enable Time 1/2/3, set HH:MM for each, session window, per-line colors, style (dotted/dashed/solid), width.
- Per-window controls : each Macro/Killzone has Enable, Session (HHMM-HHMM), Label, Fill color.
How to use (quick start)
- Set Timezone to your preference (default America/New_York).
- Toggle on the Macros and Killzones you trade. Adjust session times if needed.
- (Optional) Turn on Extend High/Low to project levels until crossed/day-roll.
- (Optional) Enable Next… headers to see the next upcoming window at a glance.
- (Optional) Configure Opening Prices (00:00 / 08:30 / 09:30 by default) and the session over which they appear.
Behavior & notes
- Time windows are computed by clock, not by guessing bar timestamps, making them robust across brokers and timeframes (see the sketch after these notes).
- With HL extension on, the current window’s levels extend until crossed or the end of the trading day (in your timezone). With it off, completed windows keep static HL markers (limited by “Sessions to show”).
- “Sessions to show” applies per Macro/Killzone to automatically prune older windows and keep charts snappy.
- Opening-price lines exist only within the chosen “Opening Prices Session” and are removed when it ends (keeps charts clean).
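To illustrate the clock-based, timezone-aware window test (including overnight wrap), here is a small Python sketch using the standard zoneinfo module; the function name and calling convention are assumptions, not the indicator's code.

```python
from datetime import datetime, time
from zoneinfo import ZoneInfo

def in_session(ts, start_hhmm, end_hhmm, tz="America/New_York"):
    """True if `ts` falls inside the HHMM-HHMM window, wall-clock in `tz`.

    Handles overnight windows (e.g. 1900-0200) that wrap past midnight.
    """
    local = ts.astimezone(ZoneInfo(tz)).time()
    start = time(start_hhmm // 100, start_hhmm % 100)
    end = time(end_hhmm // 100, end_hhmm % 100)
    if start <= end:
        return start <= local < end
    return local >= start or local < end     # window wraps midnight

# 01:30 New York time sits inside the Asia killzone (19:00-02:00)
stamp = datetime(2024, 3, 11, 1, 30, tzinfo=ZoneInfo("America/New_York"))
print(in_session(stamp, 1900, 200))          # True
```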
Defaults (color cues)
Killzones: Asia (blue), London (purple), NY AM (green), London Close (yellow), NY PM (orange).
Macros: neutral greys with Lunch and PM accents out of the box (all customizable).
Performance tips
- Reduce “Sessions to show” if you scroll far back in history.
- Disable “Next…” previews and/or extension labels on very slow machines.
- Narrow the “Opening Prices Session” window to exactly when you need those lines.
Changelog highlights
- v1.1.5 : Internal refinements and stability.
- v1.1.3 : Live High/Low lines for current windows + optional extension.
- v1.1.2 : Added “next Killzone” preview (to match “next Macro”).
- v1.1.0 : Defaults updated (5 KZ, 8 Macros). Removed “snap-to-killzone” behavior.
- v1.0.0 : Independent Macro vs. Killzone rendering; cleaner header logic.
Known limitations
If your chart warns about drawings, trim “Sessions to show”.
If your broker session times differ from NY hours, adjust the sessions or change the indicator timezone.
Credits & intent
Inspired by ICT timing concepts; provided for education/mark-up, not financial advice.
Built to be flexible so you can mirror your personal playbook and journaling workflow.
TopBot [CHE] TopBot — Structure pivots with buffered acceptance and gradient trend visualization
Summary
TopBot detects swing structure from confirmed pivot highs and lows, derives support and resistance levels, and switches trend only after a buffered and accepted break. It renders labels for recent structure points, maintains dynamic support and resistance lines that freeze on contact, and colors candles using a gradient that reflects consecutive trend persistence. The gradient communicates strength without extra panels, while the buffered acceptance reduces fragile flips around key levels. Everything runs in the main chart for immediate context.
Motivation: Why this design?
Classical swing tools often flip on single-bar spikes and produce lines that extend forever without acknowledging when price invalidates them. This script addresses that by requiring a user-controlled buffer and a run of consecutive closes before changing trend, while also freezing lines once price interacts with them. The gradient color layer communicates regime persistence so users can quickly judge whether a move is maturing or just starting.
What’s different vs. standard approaches?
Baseline reference: Simple pivot labeling and unbuffered break-of-structure tools.
Architecture differences:
Buffered level testing using ticks, percent, or ATR.
Acceptance logic that requires multiple consecutive closes.
Synchronized structure labeling with a single Top and Bottom within the active set.
Progressive support and resistance management that freezes lines on first contact.
Gradient candle and wick coloring driven by consecutive trend counts with windowed normalization and gamma control.
Practical effect: Fewer whipsaw flips, clearer status of active levels, and visual feedback about trend persistence without a secondary pane.
How it works (technical)
The script confirms swing points using left and right bar pivots, then forms a current structure window to classify each pivot as higher high, lower high, higher low, or lower low. Recent labels are trimmed to a user cap, and a postprocess step ensures one highest and one lowest label while preserving side information for the others. Support updates on higher low events, resistance on lower high events. Trend flips only after the close has moved beyond the active level by a chosen buffer and this condition holds for a chosen number of consecutive bars. Lines for new levels extend to the right and freeze once price touches them. A running count of consecutive trend bars produces a strength score, which is normalized over a rolling window, shaped by gamma, and mapped to user-defined dark and neon colors for both up and down regimes. Wick coloring uses `plotcandle`; fallback bar coloring uses `barcolor`. No higher-timeframe data is requested. Signals confirm only after the right-bar lookback of the pivot function.
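A minimal Pine sketch of the buffered-acceptance idea for the upside case (the pivot-based level, names, and defaults are illustrative, not the published source; the downside is symmetric):
//@version=5
indicator("Buffered acceptance sketch", overlay = true)
acceptN = input.int(2, "Acceptance closes (n)")
atrMult = input.float(0.5, "ATR buffer multiple")
atrLen  = input.int(14, "ATR Len")
// Illustrative resistance: the last confirmed 5/5 pivot high.
float ph = ta.pivothigh(5, 5)
var float res = na
if not na(ph)
    res := ph
buffer = ta.atr(atrLen) * atrMult
// Count consecutive closes beyond the level plus the buffer.
var int runUp = 0
runUp := not na(res) and close > res + buffer ? runUp + 1 : 0
// The trend flips up only once n consecutive closes are accepted.
var int trend = 0
if runUp >= acceptN
    trend := 1
barcolor(trend == 1 ? color.teal : na)
plotshape(runUp == acceptN, "Accepted break", style = shape.triangleup, location = location.belowbar)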
Parameter Guide
Left Bars / Right Bars (default five each): Pivot sensitivity. Larger values confirm later and reduce noise; smaller values respond faster with more noise.
Draw S/R Lines (default true): Enables support and resistance line creation and updates.
Support / Resistance Colors (lime, red): Line colors for each side.
Line Style (Solid, Dashed, Dotted; default Dotted) and Width (default three): Visual style of S/R lines.
Max Labels & Lines (default ten): Cap for objects to control clutter and resource usage.
Change Bar Color (default true), Up/Down colors (blue, black): Fallback bar coloring when gradients or wick coloring are disabled.
Show Neutral Candles (default false): Optional coloring when no trend is active.
Enable Gradient Bar Colors (default true): Turns on gradient body coloring from the strength score.
Enable Wick Coloring (default true): Colors wicks and borders using `plotcandle`.
Collection Period (default one hundred): Rolling window used to scale the strength score. Shorter windows react faster but vary more.
Gamma Bars / Gamma Plots (defaults zero point seven and zero point eight): Shape the perceived contrast of bar and wick gradients. Lower values brighten early; higher values compress until stronger runs appear.
Gradient Transparency / Wick Transparency (default zero): Visual transparency for bodies and wicks.
Up/Down Trend Dark and Neon Colors: Endpoints for gradient mapping in each regime.
Acceptance closes (n) (default two): Number of consecutive closes beyond a level required before trend flips. Larger values reduce false breaks but react later.
Break buffer (None, Ticks, Percent, ATR; default ATR) and Value (default zero point five) and ATR Len (default fourteen): Defines the safety margin beyond the level. ATR mode adapts to volatility; Percent and Ticks are static.
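A sketch of how the three buffer modes could translate into price units (illustrative; the published implementation may differ):
//@version=5
indicator("Break buffer modes sketch", overlay = true)
mode   = input.string("ATR", "Break buffer", options = ["None", "Ticks", "Percent", "ATR"])
value  = input.float(0.5, "Value")
atrLen = input.int(14, "ATR Len")
atrVal = ta.atr(atrLen)
// Margin that price must exceed beyond a level before a break counts.
buffer(float level) =>
    switch mode
        "Ticks"   => value * syminfo.mintick
        "Percent" => level * value / 100.0
        "ATR"     => atrVal * value
        => 0.0
plot(buffer(close), "Current buffer size")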
Reading & Interpretation
Labels: “Top” and “Bottom” mark the most extreme points in the active set; “LT” and “HB” indicate side labels for lower top and higher bottom.
Lines: New support or resistance is drawn when structure confirms. A line freezes once price touches it, signaling that the dynamic phase ended.
Trend: Internal state switches to up or down only after buffered acceptance.
Colors: Brighter neon tones indicate stronger and more persistent runs; darker tones suggest early or weakening runs. When gradients are off, fallback bar colors indicate trend sign.
Practical Workflows & Combinations
Trend following: Wait for a buffered and accepted break through the most recent level, then use gradient intensity to stage entries or scale-ins.
Structure-first filtering: Trade only in the direction of the last accepted trend while price remains above support or below resistance.
Exits and stops: Consider exiting on loss of gradient intensity combined with a return through the most recent structure level.
Multi-asset / Multi-timeframe: Works on liquid symbols across common timeframes. Use larger pivot bars and higher acceptance on lower timeframes. No built-in higher-timeframe aggregation is used.
Behavior, Constraints & Performance
Repaint/confirmation: Pivot confirmation waits for the right bar window; trend acceptance is based on closes and can change during a live bar. Final signals stabilize on bar close.
security/HTF: Not used. No cross-timeframe data.
Resources: Arrays and loops are used for labels, lines, and structure search up to a capped historical span. Object counts are clamped by user input and platform limits.
Known limits: Delayed confirmation at sharp turns due to pivot windows; rapid gaps can jump over buffers; gradient scaling depends on the chosen collection period.
Sensible Defaults & Quick Tuning
Start with the defaults: pivot windows at five, ATR buffer with value near one half, acceptance at two, collection period near one hundred, gamma near zero point seven to zero point eight.
Too many flips: increase acceptance, increase buffer value, or increase pivot windows.
Too sluggish: reduce acceptance, reduce buffer value, or reduce pivot windows.
Colors too flat: lower gamma or shorten the collection period.
Visual clutter: reduce the max labels and lines cap or disable wicks.
What this indicator is—and isn’t
This is a visualization and signal layer that encodes swing structure, level state, and regime persistence. It is not a complete trading system, not predictive, and does not manage orders. Use it with broader context such as higher timeframe structure, session behavior, and defined risk controls.
Disclaimer
The content provided, including all code and materials, is strictly for educational and informational purposes only. It is not intended as, and should not be interpreted as, financial advice, a recommendation to buy or sell any financial instrument, or an offer of any financial product or service. All strategies, tools, and examples discussed are provided for illustrative purposes to demonstrate coding techniques and the functionality of Pine Script within a trading context.
Any results from strategies or tools provided are hypothetical, and past performance is not indicative of future results. Trading and investing involve high risk, including the potential loss of principal, and may not be suitable for all individuals. Before making any trading decisions, please consult with a qualified financial professional to understand the risks involved.
By using this script, you acknowledge and agree that any trading decisions are made solely at your discretion and risk.
Do not use this indicator on Heikin-Ashi, Renko, Kagi, Point-and-Figure, or Range charts, as these chart types can produce unrealistic results for signal markers and alerts.
Best regards and happy trading
Chervolino
Acknowledgment
Thanks to LonesomeTheBlue for the fantastic and inspiring "Higher High Lower Low Strategy" .
Original script:
Credit for the original concept and implementation goes to the author; any adaptations or errors here are mine.
Time-Based Fair Value Gaps (FVG) with Inversions (iFVG)
Overview
The Time-Based Fair Value Gaps (FVG) with Inversions (iFVG) (ICT/SMT) indicator is a specialized tool designed for traders using Inner Circle Trader (ICT) methodologies. Inspired by LuxAlgo's Fair Value Gap indicator, this script introduces significant enhancements by integrating ICT principles, focusing on precise time-based FVG detection, inversion tracking, and retest signals tailored for institutional trading strategies. Unlike LuxAlgo’s general FVG approach, this indicator filters FVGs within customizable 10-minute windows aligned with ICT’s macro timeframes and incorporates ICT-specific concepts like mitigation, liquidity grabs, and session-based gap prioritization.
This tool is optimized for 1–5 minute charts (and probably works best on 1 minute charts). It identifies bullish and bearish FVGs, tracks their mitigation into inverted FVGs (iFVGs) as key support/resistance zones, and generates retest signals with customizable “Close” or “Wick” confirmation. Features like ATR-based filtering, optional FVG labels, mitigation removal, and session-specific FVG detection (e.g., first FVG in AM/PM sessions) make it a powerful tool for ICT traders.
Originality and Improvements
While inspired by LuxAlgo’s FVG indicator (credit to LuxAlgo for their foundational work), this script significantly extends the original concept by:
1. Time-Based FVG Detection: Unlike LuxAlgo’s continuous FVG identification, this script filters FVGs within user-defined 10-minute windows each hour (:00–:10, :10–:20, etc.), aligning with ICT’s emphasis on specific periods of institutional activity, such as hourly opens/closes or kill zones (e.g., New York 7:00–11:00 AM EST). This ensures FVGs are relevant to high-probability ICT setups.
2. Session-Specific First FVG Option: A unique feature allows traders to display only the first FVG in ICT-defined AM (9:30–10:00 AM EST) or PM (1:30–2:00 PM EST) sessions, reflecting ICT’s focus on initial market imbalances during key liquidity events.
3. ICT-Driven Mitigation and Inversion Logic: The script tracks FVG mitigation (when price closes through a gap) and converts mitigated FVGs into iFVGs, which serve as ICT-style support/resistance zones. This aligns with ICT’s view that mitigated gaps become critical reversal points, unlike LuxAlgo’s simpler gap display.
4. Customizable Retest Signals: Retest signals for iFVGs are configurable for “Close” (conservative, requiring candle body confirmation) or “Wick” (faster, using highs/lows), catering to ICT traders’ need for precise entry timing during liquidity grabs or Judas swings.
5. ATR Filtering and Mitigation Removal: An optional ATR filter ensures only significant FVGs are displayed, reducing noise, while mitigation removal declutters the chart by removing filled gaps, aligning with ICT’s principle that mitigated gaps lose relevance unless inverted.
6. Timezone and Timeframe Safeguards: A timezone offset setting aligns FVG detection with EST for ICT’s New York-centric strategies, and a timeframe warning alerts users to avoid ≥1-hour charts, ensuring accuracy in time-based filtering.
These enhancements make the script a distinct tool that builds on LuxAlgo’s foundation while offering ICT traders a tailored, high-precision solution.
How It Works
FVG Detection
FVGs are identified when a candle’s low is higher than the high of two candles prior (bullish FVG) or a candle’s high is lower than the low of two candles prior (bearish FVG). Detection is restricted to:
• User-selected 10-minute windows (e.g., :00–:10, :50–:60) to capture ICT-relevant periods like hourly transitions.
• AM/PM session first FVGs (if enabled), focusing on 9:30–10:00 AM or 1:30–2:00 PM EST for key market opens.
An optional ATR filter (default: 0.25× ATR) ensures only gaps larger than the threshold are displayed, prioritizing significant imbalances.
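A minimal sketch of the three-candle FVG test with the ATR size filter (the time-window and session logic described above is omitted; names are illustrative):
//@version=5
indicator("FVG detection sketch", overlay = true)
useAtr  = input.bool(true, "Use ATR Filter")
atrMult = input.float(0.25, "ATR Multiplier")
atrVal  = ta.atr(14)
minGap  = useAtr ? atrVal * atrMult : 0.0
// Bullish: current low above the high two bars back; bearish: current high below the low two bars back.
bullFvg = low > high[2] and low - high[2] > minGap
bearFvg = high < low[2] and low[2] - high > minGap
if bullFvg
    box.new(bar_index - 2, low, bar_index, high[2], border_color = color.green, bgcolor = color.new(color.green, 80))
if bearFvg
    box.new(bar_index - 2, low[2], bar_index, high, border_color = color.red, bgcolor = color.new(color.red, 80))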
Mitigation and Inversion
When price closes through an FVG (e.g., below a bullish FVG’s bottom), the FVG is mitigated and becomes an iFVG, plotted as a support/resistance zone. iFVGs are critical in ICT for identifying reversal points where institutional orders accumulate.
Retest Signals
The script generates signals when price retests an iFVG:
• Close: Triggers when the candle body confirms the retest (conservative, lower noise).
• Wick: Triggers when the candle’s high/low touches the iFVG (faster, higher sensitivity). Signals are visualized with triangular markers (▲ for bullish, ▼ for bearish) and can trigger alerts.
Visualization
• FVGs: Displayed as colored boxes (green for bullish, red for bearish) with optional “Bull FVG”/“Bear FVG” labels.
• iFVGs: Shown as extended boxes with dashed midlines, limited to the user-defined number of recent zones (default: 5).
• Mitigation Removal: Mitigated FVGs/iFVGs are removed (if enabled) to keep the chart clean.
How to Use
Recommended Settings
• Timeframe: Use 1–5 minute charts for precision, avoiding ≥1-hour timeframes (a warning label appears if misconfigured).
• Time Windows: Enable :00–:10 and :50–:60 for hourly open/close FVGs, or use the “Show only 1st presented FVG” option for AM/PM session focus.
• ATR Filter: Keep enabled (multiplier 0.25–0.5) for significant gaps; disable on 1-minute charts for more FVGs during volatility.
• Signal Preference: Use “Close” for conservative entries, “Wick” for aggressive setups.
• Timezone Offset: Set to -5 for EST (or -4 for EDT) to align with ICT’s New York session.
Trading Strategy
1. Macro Timeframes: Focus on New York (7:00–11:00 AM EST) or London (2:00–5:00 AM EST) kill zones for high institutional activity.
2. FVG Entries: Trade bullish FVGs as support in uptrends or bearish FVGs as resistance in downtrends, especially in :00–:10 or :50–:60 windows.
3. iFVG Retests: Enter on retest signals (▲/▼) during liquidity grabs or Judas swings, using “Close” for confirmation or “Wick” for speed.
4. Session FVGs: Use the “Show only 1st presented FVG” option to target the first gap in AM/PM sessions, often tied to ICT’s market maker algorithms.
5. Risk Management: Combine with ICT concepts like order blocks or breaker blocks for confluence, and set stops beyond FVG/iFVG boundaries.
Alerts
Set alerts for:
• “Bullish FVG Detected”/“Bearish FVG Detected”: New FVGs in selected windows.
• “Bullish Signal”/“Bearish Signal”: iFVG retest confirmations.
Settings Description
• Show Last (1–100, default: 5): Number of recent iFVGs to display. Lower values reduce clutter.
• Show only 1st presented FVG : Limits FVGs to the first in 9:30–10:00 AM or 1:30–2:00 PM EST sessions (overrides time window checkboxes).
• Time Window Checkboxes: Enable/disable FVG detection in 10-minute windows (:00–:10, :10–:20, etc.). All enabled by default.
• Signal Preference: “Close” (default) or “Wick” for iFVG retest signals.
• Use ATR Filter: Enables ATR-based size filtering (default: true).
• ATR Multiplier (0–∞, default: 0.25): Sets FVG size threshold (higher values = larger gaps).
• Remove Mitigated FVGs: Removes filled FVGs/iFVGs (default: true).
• Show FVG Labels: Displays “Bull FVG”/“Bear FVG” labels (default: true).
• Timezone Offset (-12 to 12, default: -5): Aligns time windows with EST.
• Colors: Customize bullish (green), bearish (red), and midline (gray) colors.
Why Use This Indicator?
This indicator empowers ICT traders with a tool that goes beyond generic FVG detection, offering precise, time-filtered gaps and inversion tracking aligned with institutional trading principles. By focusing on ICT’s macro timeframes, session-specific imbalances, and customizable signal logic, it provides a clear edge for scalping, swing trading, or reversal setups in high-liquidity markets.
Daytrade Forex Scalper TwinPulse Auction Timer Indicator
What this indicator is
TwinPulse Auction Timer is a multi-component execution aid designed for liquid markets. It looks for two families of opportunities:
Breakouts that leave a compression area after a fresh sweep
Reversals that trigger after a sweep with strong wick polarity
It does not try to predict future prices. It measures present auction conditions with transparent rules and shows you when those conditions align. You get a simple table that says LONG, SHORT, or WAIT, optional session shading, clean entry and exit level visuals, and alerts you can wire to your workflow.
Why it is different
Most tools show a single signal. TwinPulse combines several independent signals into an Edge Score that you can tune. The components are
• Pulse. A signed measure of wick asymmetry with candle body direction
• Compression. Current true range compared with an average range
• Sweep timer. Bars elapsed since the most recent sweep of a prior high or low
• Bias. Direction of a higher timeframe candle
• Regime. Efficiency ratio and the relation of micro to macro volatility
• Location. Distance from the daily anchored VWAP
• Session. London and New York filter by time windows
Each component is visible in the inputs and in the table so you can understand why a suggestion appears. The script uses request.security() with lookahead off in all calls so it does not peek into the future. Shapes may move while a bar is open since price is still forming. They stop moving when the bar closes.
What you will see on the chart
• L and S shapes on entry bars
• An Exit shape at the price where a stop or the runner target would have been hit
• Four horizontal lines while a trade is active
Entry
Stop
TP1 at one R
TP2 at the runner target expressed in R
• Labels anchored to each line so you can instantly read Entry, SL, TP1, and TP2 with current values
• Optional shading during your session windows
• Optional daily VWAP line
The table in the top right shows
Action: LONG, SHORT, IN LONG, IN SHORT, or WAIT
Session: ON or OFF
Bias: UP, DOWN, or FLAT
Pulse value
Compression value
Edge L percent and Edge S percent
How it works in detail
Pulse
For each bar the script measures up wick minus down wick divided by range and multiplies that by the sign of the candle body. The result is averaged with pulse_len. Positive numbers indicate aggressive buying. Negative numbers indicate aggressive selling. You control the minimum absolute value with pulse_thr.
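A sketch of the stated pulse formula (names are illustrative):
//@version=5
indicator("Pulse sketch")
pulseLen = input.int(14, "pulse_len")
rng      = math.max(high - low, syminfo.mintick)   // guard against zero-range bars
upWick   = high - math.max(open, close)
dnWick   = math.min(open, close) - low
bodySign = close > open ? 1.0 : close < open ? -1.0 : 0.0
// Wick asymmetry signed by body direction, then averaged.
pulse = ta.sma((upWick - dnWick) / rng * bodySign, pulseLen)
plot(pulse, "Pulse")
hline(0)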
Compression
Compression is the ratio of current range to an average range. You can choose the range basis. HL SMA uses simple high minus low smoothed by range_len. ATR uses classic True Range smoothed by atr_len. Values below comp_thr indicate a coil.
Sweeps and the timer
A sweep occurs when price trades beyond the highest high or lowest low seen in the previous sweep_len bars. A strict sweep requires a close back inside that prior range. The timer measures how many bars have elapsed since the last sweep. Breakout setups require the timer to exceed timer_thr.
Bias on a confirmation timeframe
A higher timeframe candle is read with confirm_tf. If close is above open bias is UP. If close is below open bias is DOWN. This keeps breakouts aligned with the prevailing drift.
Regime filters
Efficiency ratio measures the straight line change over the sum of absolute bar to bar changes over er_len. It rises in trendy conditions and falls in noise. Minimum efficiency is controlled by er_min.
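A sketch of the efficiency ratio in Kaufman's formulation, as described above:
//@version=5
indicator("Efficiency ratio sketch")
erLen = input.int(10, "er_len")
// Net change over the sum of absolute bar-to-bar changes.
netMove  = math.abs(close - close[erLen])
noiseSum = math.sum(math.abs(ta.change(close)), erLen)
er = noiseSum > 0 ? netMove / noiseSum : 0.0
plot(er, "Efficiency ratio")
hline(0.2, "er_min (example)")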
Micro to macro volatility ratio compares a short lookback average range with a longer lookback average range using your chosen basis. For breakouts you usually want micro volatility to be near or above macro hence mvr_min. For reversals you often want micro volatility that is not overheated relative to macro hence mvr_max_rev.
VWAP distance gate
Daily anchored VWAP is rebuilt from the open of each session. The script computes the absolute distance from VWAP in units of your average range and requires that distance to exceed vwap_dist_thr when use_vwap_gate is true. This keeps entries away from the mean.
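A sketch of the distance gate, using the built-in session-resetting ta.vwap() as a stand-in for the script's own daily anchored VWAP:
//@version=5
indicator("VWAP distance gate sketch")
distThr = input.float(0.35, "vwap_dist_thr")
rngLen  = input.int(20, "Average range window")
dayVwap  = ta.vwap(hlc3)                 // resets each session on intraday charts
avgRange = ta.sma(high - low, rngLen)
// Distance from VWAP expressed in average-range units.
vwapDist = avgRange > 0 ? math.abs(close - dayVwap) / avgRange : na
gateOk   = vwapDist > distThr
plot(vwapDist, "VWAP distance (ranges)")
hline(0.35, "Threshold (example)")
bgcolor(gateOk ? na : color.new(color.gray, 85), title = "Too close to the mean")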
Edge Score
Each gate contributes a weight that you control. The script sums weights of the satisfied gates and divides by the sum of all weights to produce an Edge percent for long and an Edge percent for short. You can then require a minimum Edge percent using edge_min_pct. This turns the indicator into a step by step checklist that you can tune to your taste.
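A minimal sketch of the gate-weighting arithmetic with three stand-in gates (the real script uses the eight gates listed above, each with its own weight; the gate definitions here are purely illustrative):
//@version=5
indicator("Edge Score sketch")
wPulse = input.float(1.0, "Weight: pulse")
wComp  = input.float(1.0, "Weight: compression")
wBias  = input.float(1.0, "Weight: bias")
// Stand-in gates; each satisfied gate contributes its weight.
pulseOk = ta.rsi(close, 14) > 55
compOk  = (high - low) < ta.sma(high - low, 20) * 0.6
biasOk  = close > ta.ema(close, 50)
score   = (pulseOk ? wPulse : 0) + (compOk ? wComp : 0) + (biasOk ? wBias : 0)
edgePct = 100.0 * score / (wPulse + wComp + wBias)
plot(edgePct, "Edge %")
hline(55, "edge_min_pct (example)")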
Using the indicator step by step
Choose markets and timeframes
The logic is designed for liquid instruments. Major currency pairs, index futures and cash index CFDs, and the most liquid crypto pairs work well. On intraday use one to fifteen minutes for signals and fifteen to sixty minutes for confirmation. On swing use one hour to one day for signals and one day for confirmation.
Decide on entry mode
Breakouts require a compression area and a sweep timer. Reversals require a strict sweep and a strong pulse. If you are unsure leave the default which allows both.
Pick a range basis
For FX and crypto HL SMA is often stable. For indices and single name equities with gaps ATR can adapt better. If results look too reactive increase the window. If results are too slow reduce it.
Tune regime filters
If you trade trend continuation raise er_min and mvr_min. If you trade counter rotation lower them and rely on the reversal path with the strict sweep condition.
Set the VWAP gate
Enabling it helps you avoid entries at the mean. Push the threshold higher on range bound days. Reduce it in strong trend days.
Table driven decision
Watch Action and the Edge percents. If the script says WAIT you can read Pulse and Compression to see what is missing. Often the best trades appear when both Edge percents are well separated and your session switch is ON.
Use the visuals
When a suggestion triggers you will see entry stop and targets. You can mirror the levels in your own workflow or use alerts.
Consider bar close
Signals are computed in real time. For a strict process you can wait until the bar closes to reduce noise.
Inputs explained with quick guidance
Setup
Signal TF chooses where the logic is computed. Leave blank to use the chart.
Confirm TF sets the higher timeframe for bias.
Session filter restricts signals to the London and New York windows you specify.
Invert flips long and short. It is useful on inverse instruments.
Logic options
Entry mode allows Breakouts Reversals or Both.
Average range basis selects HL SMA or ATR.
ATR length is used when ATR is selected.
Pulse source can be Regular OHLC or Heikin Ashi. Heikin Ashi smooths noisy series, but the script still runs on regular bars and you should publish and use it on standard candles to respect the platform guidance.
Core numeric settings
Sweep lookback controls the size of the liquidity pool targeted by the sweep condition.
Pulse window smooths the wick polarity measure.
Average range window controls your base range when you use HL SMA.
Pulse threshold sets the minimum polarity required.
Compression threshold sets the maximum current range relative to average to consider the market coiled.
Expansion timer bars sets how much time has passed since the last sweep before you allow a breakout.
Regime filters
Efficiency ratio length and minimum value keep you out of aimless drift.
Micro and Macro range lengths feed the micro to macro ratio.
Minimum micro to macro for breakouts and maximum micro to macro for reversals steer the two entry families.
VWAP gate and distance threshold keep you away from the mean.
Levels and trade management visuals
Runner target in R sets TP2 as a multiple of initial risk.
Stop distance as average range multiple sets initial risk size for the visuals.
Move stop to entry after one R touch turns on break even logic once price has traveled one risk unit.
Trail buffer as R fraction uses the last sweep as an anchor and keeps a dynamic stop at a chosen fraction of R beyond it.
Cooldown after exit prevents immediate re entries.
Edge Score
Weights for pulse compression timer bias efficiency ratio micro to macro VWAP gate and session let you align the checklist with your style.
Minimum Edge percent to suggest applies a final filter to LONG or SHORT suggestions.
UI
Table and markers switch the compact dashboard and the shapes.
TP and SL lines and labels draw and name each level.
TP1 partial label percent is printed in the TP1 label for clarity.
Session shading helps with focus.
Daily VWAP line is optional.
Alerts
The script provides alerts for Long Short Exit and for Edge percent crossing the threshold on either side. Use them to drive notifications or to sync with webhooks and your broker integration. Alerts trigger in real time and will repaint during a bar. For conservative use trigger on bar close.
Recommended presets
Intraday trend continuation
Confirm TF fifteen minutes
Entry mode Breakouts
Range basis HL SMA
Pulse threshold near 0.10
Compression threshold near 0.60
Timer around 18
Minimum efficiency ratio near 0.20
Minimum micro to macro near 1.00
VWAP gate enabled with distance near 0.35
Edge minimum 50 or higher
Intraday mean reversion at sweeps
Entry mode Reversals
Pulse source Regular OHLC
Compression threshold can be a little higher
Maximum micro to macro near 1.60
Efficiency ratio minimum lower near 0.12
VWAP gate enabled
Edge minimum 40 to 60
Swing trend continuation
Signal TF one hour
Confirm TF one day
Range basis ATR
ATR length around 14
Average range window 20 to 30
Efficiency ratio minimum near 0.18
Micro to macro windows 12 and 60
Edge minimum 50 to 70
These are starting points only. Your instrument and timeframe will require small adjustments.
Limitations and honest warnings
No indicator is perfect. TwinPulse will mark attractive conditions that do not always lead to profitable trades. During economic releases or very thin liquidity the assumptions behind compression and sweeps may fail. In strong gap environments the HL SMA basis may lag while ATR may overreact. Heikin Ashi pulse can help in choppy markets but it will lag during sharp reversals. Session times use the exchange time of your chart. If you switch symbol or exchange verify the windows.
Edge percent is not a probability of profit. It is the fraction of satisfied gates with your chosen weights. Two traders can set different weights and see different Edge readings on the same bar. That is the design. The score is a guide that helps you act with discipline.
This indicator does not place orders or manage real risk. The lines and labels show a model entry a model stop and two model targets built from the average range at entry and from recent swing points. Use them as references and not as hard rules. Always test on historical data and demo first. Past results do not guarantee anything in the future.
Credits and originality
All code in this publication is original and written for this indicator. The concept of the efficiency ratio originates from Perry Kaufman. The use of a daily anchored volume weighted average price is a standard industry tool. The specific combination of pulse from wick polarity strict sweep timing compression and the tunable Edge Score is unique to this script at the time of publication. If you reuse parts of the open source code in your own work remember to credit the author and contribute meaningful improvements.
How to read the table at a glance
Action reflects your current state.
IN LONG or IN SHORT appears while a trade is active.
LONG or SHORT appears when conditions for entry are met and the Edge threshold is satisfied.
WAIT appears when at least one gate is missing.
Session shows ON during your chosen windows.
Bias shows the color of the confirmation candle.
Pulse is the smoothed polarity number.
Comp shows current range divided by the average range. Values below one mean compression.
Edge L percent and Edge S percent show the long and short checklists as percents.
Final thoughts
Markets move because orders accumulate at certain prices and at certain times. The indicator tries to measure two things that often matter at those turning points. One is the existence of a hidden imbalance revealed by wick polarity and by sweeps of prior extremes. The other is the presence of energy stored in a coil that can release in the direction of a drift. Neither force guarantees profit. Together they can improve your selection and your timing.
Use the defaults for a few days so you learn the personality of the signals. After that adjust one group at a time. Start with the session filter and the Edge threshold. Then tune compression and the timer. Finally adjust the regime filters. Keep notes. You will learn which weights matter for your market and timeframe. The result is a process you can apply with consistency.
Disclaimer
This script and description are for education and analysis. They are not investment advice and they do not promise future results. Use at your own risk. Test thoroughly on historical data and in simulation before considering any live use.
Hurst Exponent - Detrended Fluctuation Analysis
In stochastic processes, chaos theory and time series analysis, detrended fluctuation analysis (DFA) is a method for determining the statistical self-affinity of a signal. It is useful for analyzing time series that appear to be long-memory processes and noise.
█ OVERVIEW
We have introduced the concept of Hurst Exponent in our previous open indicator Hurst Exponent (Simple). It is an indicator that measures market state from autocorrelation. However, we apply a more advanced and accurate way to calculate Hurst Exponent rather than simple approximation. Therefore, we recommend using this version of Hurst Exponent over our previous publication going forward. The method we used here is called detrended fluctuation analysis. (For folks that are not interested in the math behind the calculation, feel free to skip to "features" and "how to use" section. However, it is recommended that you read it all to gain a better understanding of the mathematical reasoning).
█ Detrend Fluctuation Analysis
Detrended Fluctuation Analysis was first introduced by Peng, C.K. (Original Paper) in order to measure long-range power-law correlations in DNA sequences. DFA measures the scaling behavior of the second-moment fluctuations; the scaling exponent is a generalization of the Hurst exponent.
The traditional way of measuring the Hurst exponent is the rescaled range (RS) method. However, DFA provides the following benefits over it:
• Can be applied to non-stationary time series. While asset returns are generally stationary, DFA can measure Hurst more accurately in the instances where they are non-stationary.
• According to the asymptotic distributions of DFA and RS, the latter usually overestimates the Hurst exponent (even after the Anis–Lloyd correction), pushing the expected value of RS Hurst close to 0.54 instead of the 0.5 it should be, which makes it harder to judge autocorrelation from that threshold. With DFA, the expected value of the Hurst Exponent (HE) is significantly closer to 0.5, making the 0.5 threshold much more useful.
• Lastly, DFA requires a lower sample size than the RS method. While RS generally needs thousands of observations to reduce the variance of HE, DFA only needs a sample size greater than a hundred to accomplish the same.
█ Calculation
DFA is a modified root-mean-squares (RMS) analysis of a random walk. In short, DFA computes the RMS error of linear fits over progressively larger bins (non-overlapped “boxes” of similar size) of an integrated time series.
Our signal time series is the log returns. First we subtract the mean from the log returns to calculate the demeaned returns. Then we calculate the cumulative sum of the demeaned returns; the resulting cumulative sum is mean centered and we can use the DFA method on it. The subtraction of the mean eliminates the “global trend” of the signal. Applying scaling analysis to this signal profile instead of the signal itself allows the original signal to be non-stationary when needed. (For example, this process converts an i.i.d. white noise process into a random walk.)
We slice the cumulative sum into windows of equal size and run a linear regression on each window to measure the local linear trend. We then detrend the series by subtracting the regression line from the cumulative sum in each window. The fluctuation is the difference between the cumulative sum and the regression.
We use different window sizes on the same cumulative-sum series. The window sizes are log spaced, e.g. powers of 2: 2, 4, 8, 16... This is where the scale-free measurement comes in: it captures the fractal nature and self-similarity of the time series, as well as how well the smaller scales represent the larger ones.
As the window size decreases, we use more regression lines to measure the trend, so the regression fit improves and the fluctuation shrinks. It lets us zoom into the “picture” to see the details. The regression lines are like rulers: use more rulers to measure the smaller-scale details and you get a more precise measurement.
The exponent we are measuring captures the relationship between window size and regression fit (the rate of change). The more complex the time series, the more its measurement depends on decreasing window sizes (using more regression lines); the more trending the series, the less it depends on them. Fit is quantified by the average of the root mean square errors (RMS) of the regressions from each window.
Root mean square error is the square root of the mean of the squared differences between the cumulative sum and the regression line. The following chart displays the average RMS for different window sizes. As the chart shows, smaller window sizes show more detail because they resolve finer structure.
The last step is to measure the exponent. To extract the power-law exponent, we measure the slope on a log-log plot: the x axis is the log of the window size, the y axis is the log of the average RMS. We run a linear regression through the plotted points; the slope of that regression is the exponent. The relationship between RMS and window size is easy to see on the chart: a larger RMS means a worse regression fit. We know RMS increases (fit worsens) as window size increases (fewer regressions used to measure), so we focus on the rate at which RMS increases (how fast) as window size grows.
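In standard DFA notation, the steps above condense to (N is the number of points, \tilde{Y}_n the per-window linear fits for window size n):
Y(k) = \sum_{i=1}^{k} \left( x_i - \bar{x} \right)
F(n) = \sqrt{ \frac{1}{N} \sum_{k=1}^{N} \left( Y(k) - \tilde{Y}_n(k) \right)^2 }
F(n) \propto n^{H} \;\Rightarrow\; H = \text{slope of } \log F(n) \text{ versus } \log n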
If the slope is < 0.5, the RMS increases slowly as window size grows, so the fit is much better when measured by a large number of regression lines and the series is more complex (mean reversion, negative autocorrelation).
If the slope is > 0.5, the RMS increases quickly as window size grows, so even the broad trend of a large window is measured well by a small number of regression lines and the series has a trend with positive autocorrelation.
If the slope = 0.5, the series follows a random walk.
█ FEATURES
• Sample Size is the lookback period for the calculation. Even though DFA requires a smaller sample size than RS, a sample size > 50 is recommended for accurate measurement.
• When a large sample size is used (for example a 1000-bar lookback), loading may be slower due to the longer calculation. Date Range limits the number of historical calculation bars. When loading is too slow, change the date range from "all" to a number of weeks/days/hours to reduce loading time. (Credit to allanster)
• The “show filter” option applies a moving average to smooth the exponent.
• Log scale is my workaround for dynamic log-space scaling. Traditionally the smallest log spacing for bars is a power of 2, and at least 10 points are needed for an accurate regression, which would force a minimum lookback of 1024. I round the fractional log spacing into integer bar counts, which allows a log spacing of less than 2.
• For a more accurate calculation a larger "Base Scale" and "Max Scale" should be selected. However, when the sample size is small, larger values cause issues. The general rule: the larger the sample size, the larger the "Base Scale" and "Max Scale" can be. Try a larger scale and keep it as long as increasing the value doesn't cause issues.
The following chart shows how the value changes with various scales. As shown, increasing the scale can sometimes make the value messy and cause it to overshoot.
When using the lowest scale (4,2), the value seems stable. When we increase the scale to (8,2), the value is still alright. However, when we increase it to (8,4), it begins to look messy. And when we increase it to (16,4), it starts overshooting. Therefore, (8,2) seems to be optimal for our use.
█ How to Use
Similar to Hurst Exponent (Simple), 0.5 is the threshold for determining long-term memory.
• Under the efficient market hypothesis, the market follows a random walk and the Hurst exponent should be 0.5. When the Hurst Exponent is significantly different from 0.5, the market is inefficient.
• When the Hurst Exponent is > 0.5: positive autocorrelation. The market is trending. Positive returns tend to be followed by positive returns and vice versa.
• When the Hurst Exponent is < 0.5: negative autocorrelation. The market is mean reverting. Positive returns tend to be followed by negative returns and vice versa.
However, we can't really tell if the Hurst exponent value is generated by random chance by only looking at the 0.5 level. Even if we measure a pure random walk, the Hurst Exponent will never be exactly 0.5, it will be close like 0.506 but not equal to 0.5. That's why we need a level to tell us if Hurst Exponent is significant.
So we also computed the 95% confidence interval according to Monte Carlo simulation. The confidence level adjusts itself to the sample size. When the Hurst Exponent is above the top or below the bottom confidence level, the value has statistical significance: the efficient market hypothesis is rejected and the market has significant inefficiency.
The market state is painted in a different color, as the following chart shows. Users can also read the state from the table displayed on the right.
An important point is that the Hurst value only represents the market state according to past measurements, which means it only tells you the market state now and in the past. If the Hurst Exponent on a sample size of 100 shows a significant trend, it means that according to the past 100 bars the market is trending significantly. It doesn't mean the market will continue to trend; it is not forecasting the future market state.
However, this also suggests another way to use it. The market is not always random and it is not always inefficient; the state switches around from time to time. But there's one pattern: when the market stays inefficient for too long, the market participants see this and will try to take advantage of it. Therefore, the inefficiency will be traded away. That's why the Hurst exponent won't stay in significant trend or mean reversion for too long. When it's significant the market participants see that as well and the market adjusts itself back to normal.
The Hurst Exponent can be used as a mean-reverting oscillator itself. In a liquid market, the value tends to return inside the confidence interval after significant moves (in smaller markets, it can stay inefficient for a long time). So when the Hurst Exponent shows significant values, the market has just entered a significant trend or mean-reversion state. However, when it stays outside the confidence interval for too long, that suggests the market might instead be closer to the end of the trend or mean reversion.
A larger sample size makes the Hurst Exponent statistics more reliable. Therefore, if users want to know whether long-term memory exists in general on the selected ticker, they can use a large sample size and maximize the log scale, e.g. a 1024 sample size with scale (16,4).
The following chart is Bitcoin on the daily timeframe with a 1024 lookback. It suggests the Bitcoin market tends to have long-term memory in general: it frequently shows a significant trend and is more inefficient in a trend's early stage.
Session Markers - JDK Analysis
Session Markers is a tool designed to study how markets behave during specific, recurring time windows. Many traders know that price behaves differently depending on the day of the week, the time of the day, or particular market sessions such as the weekly open, the London session, or the New York open. This indicator makes those recurring windows visible on the chart and then analyzes what price typically does inside them. The result is a clear statistical understanding of how a chosen session behaves, both in direction and in strength.
The script works by allowing the trader to define any time window using a start day and time and an end day and time. Every time this window occurs on the chart, the indicator highlights it with a full-height vertical band. These visual markers reveal patterns that are otherwise difficult to detect manually, such as whether certain sessions tend to trend, reverse, consolidate, or create large imbalances. They also help the trader quickly scan through historical price action to see how the market has behaved under similar conditions.
For every completed session window, the indicator measures how much price changed from the moment the window began to the moment it ended. Instead of using raw price differences, it converts these changes into percentage moves. This makes the measurement consistent across different price ranges and market regimes. A one-percent move always has the same meaning, whether the asset is trading at 100 or 50,000. These percentage moves are collected for a user-selected number of past sessions, creating a dataset of how the market has behaved in the chosen time window.
Based on this dataset, the indicator generates several statistics. It counts how many past sessions closed higher and how many closed lower, producing a directional tendency. It also computes the probability of an upward session by dividing the number of positive sessions by the total. More importantly, it calculates the average percentage movement for all sessions in the lookback period. This average move reflects not just the direction but also the magnitude of price changes. A session with frequent small upward moves but occasional large downward moves will show a negative average movement, even if more sessions ended positive. This creates a more realistic representation of true market behavior.
Using this average movement, the script determines a “Bias” for the session. If the average percentage move is positive, the bias is considered bullish. If it is negative, the bias is bearish. If the values are very close to zero, the bias is neutral. This way, the indicator takes both frequency and impact into account, producing a magnitude-aware assessment instead of one that only counts wins and losses. A sequence such as +5%, –1% results in a bullish bias because the overall impact is strongly positive. On the other hand, a series of small gains followed by a large drop produces a bearish bias even if more sessions ended positive, because the large move dominates the average. This provides a far more truthful picture of what the market tends to do during the chosen window.
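A minimal Pine sketch of the magnitude-weighted bias, using whole days as a stand-in for the user-defined session window (names and the neutral threshold are illustrative):
//@version=5
indicator("Session bias sketch", overlay = true)
lookback = input.int(20, "Sessions to analyze")
var moves = array.new_float()
var float winOpen = na
newDay = ta.change(time("D")) != 0
if newDay
    if not na(winOpen)
        // Percent move of the window that just completed.
        array.push(moves, (close[1] - winOpen) / winOpen * 100.0)
        if array.size(moves) > lookback
            array.shift(moves)
    winOpen := open
avgMove = array.size(moves) > 0 ? array.avg(moves) : na
bias = na(avgMove) ? "n/a" : avgMove > 0.05 ? "Bullish" : avgMove < -0.05 ? "Bearish" : "Neutral"
var label lbl = na
if barstate.islast
    label.delete(lbl)
    lbl := label.new(bar_index, high, "Avg move: " + str.tostring(avgMove, "#.##") + "%  Bias: " + bias)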
All relevant statistics are displayed neatly in a small panel in the top-right corner of the chart. The panel updates in real time as new sessions complete and older ones fall out of the lookback range. It shows how many sessions were analyzed, how many ended up or down, the probability of an upward move, the average percentage change, and the final bias. The background color of the panel instantly reflects that bias, making it easy to interpret at a glance.
To use the tool effectively, the trader simply needs to define a time window of interest. This could be something like the weekly opening window from Sunday to Monday, the London open each day, or even a unique custom window. After selecting how many past sessions to analyze, the indicator takes care of the rest. The vertical session markers reveal the structure visually. The statistics summarize the historical behavior objectively. The magnitude-weighted bias provides a realistic indication of whether the window tends to produce upward or downward movement on average.
Session Markers is helpful because it translates repeated market timing behavior into measurable data. It exposes hidden tendencies that are easy to feel intuitively but hard to quantify manually. By analyzing both direction and magnitude, it prevents misleading interpretations that can arise from looking only at win rates. It helps traders understand whether a session typically produces meaningful moves or just small noise, whether it tends to trend or reverse, and whether its behavior has recently changed. Whether used for bias building, session filtering, or deeper market research, it offers a structured framework for understanding the market through time-based patterns.
Fear–Greed Index
📈 Fear–Greed Index
This indicator provides a sophisticated, multi-faceted measure of market sentiment, plotting it as an oscillator that ranges from -100 (Extreme Fear) to +100 (Extreme Greed).
Unlike standard indicators like RSI or MACD, this tool is built on principles from behavioral finance and social physics to model the complex psychology of the market. It does not use any of TradingView's built-in math functions and instead calculates everything from scratch.
🤔 How It Works: The Three-Model Approach
The final index is a comprehensive blend of three different academic models, each calculated across three distinct time horizons (Short, Mid, and Long) to capture sentiment at different scales.
Prospect Theory (CPT): This model, based on Nobel Prize-winning work, evaluates how traders perceive gains and losses. It assumes that the pain of a loss is felt more strongly than the pleasure of an equal gain, modeling the market's asymmetric emotional response (see the value function sketched after this list).
Herding (Brock–Durlauf): This component measures the "follow the crowd" instinct. It analyzes the synchronization of positive and negative returns to determine if traders are acting in a coordinated, "herd-like" manner, which is a classic sign of building fear or greed.
Social Impact Theory (SIT): This model assesses how social forces influence market participants.
It combines three factors:
Strength (S): The magnitude of recent price moves (volatility).
Immediacy (I): How recently the most significant price action occurred.
Number (N): The level of market participation (volume).
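For reference, the value function at the heart of CPT implementations is usually the Tversky–Kahneman form below. The description does not publish the script's exact parameterization, so the values shown are the canonical 1992 estimates, not necessarily the ones used here:
v(x) = \begin{cases} x^{\alpha}, & x \ge 0 \\ -\lambda\,(-x)^{\beta}, & x < 0 \end{cases} \qquad \alpha \approx \beta \approx 0.88, \quad \lambda \approx 2.25
The coefficient \lambda > 1 encodes loss aversion: a loss of x is weighted roughly 2.25 times as heavily as an equal gain.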
The indicator calculates all three models for a Short, Mid, and Long lookback period. It then aggregates these nine components (3 models x 3 timeframes) using customizable weights to produce a single, final Fear–Greed Index value.
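Assuming each of the nine component scores s_{m,h} is normalized to [-1, 1], the weighted aggregation described above can be written as:
\mathrm{FGI} \;=\; 100 \sum_{m \in \{\mathrm{CPT},\,\mathrm{Herd},\,\mathrm{SIT}\}} \; \sum_{h \in \{S,\,M,\,L\}} w_m \, w_h \, s_{m,h}, \qquad \sum_{m} w_m = \sum_{h} w_h = 1
which keeps the result inside the -100 to +100 band as long as each weight group sums to 1.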
How to Read the Index
Main Line: This is the final FGI score.
Lime/Green: Indicates Greed (positive values).
Red: Indicates Fear (negative values).
Fading Color: The color becomes more transparent as the index approaches the '0' (Neutral) line, and more solid as it moves toward the extremes.
Key Zones:
+100 to +30 (Extreme Greed): The market is highly euphoric and potentially overbought. This can be a contrarian signal for caution or profit-taking.
+30 to +18 (Greed Zone): Strong bullish sentiment.
+18 to -18 (Neutral Zone): The market is undecided, or fear and greed are in balance.
-18 to -30 (Fear Zone): Strong bearish sentiment.
-30 to -100 (Extreme Fear): The market is in a state of panic and may be oversold. This can be a contrarian signal for potential buying opportunities.
Reference Plots: The indicator also plots the aggregated scores for each of the three models (Herding, Prospect, and SIT) as faint, secondary lines. This allows you to see which component is driving the overall sentiment.
⚙️ Settings & Customization
This indicator is highly tunable, allowing you to adjust its sensitivity and component makeup.
Time Windows:
Short window: Lookback period for short-term sentiment.
Mid window: Lookback for medium-term sentiment.
Long window: Lookback for long-term sentiment.
Model Aggregation Weights:
Weight CPT, Weight Herding, Weight SIT: Control how much each of the three behavioral models contributes to the final score (they should sum to 1.0).
Cross-Horizon Weights:
Weight Short, Weight Mid, Weight Long: Control the influence of each timeframe on the final score (they should also sum to 1.0).
ORB + Session VWAP Pro (London & NY) — fixed
ORB + Session VWAP Pro (London & NY) — Listing copy (EN)
What it is
A clean, non-repainting intraday tool that fuses the classic Opening Range Breakout (ORB) with a session-anchored VWAP filter for London and New York. It highlights only the higher-quality breakouts (above/below session VWAP), adds an optional retest confirmation, and scores each signal with an intuitive Confidence metric (0–100).
Why it works
• ORB provides the day’s first actionable structure (range high/low).
• Session VWAP filters “cheap” breaks and favors flows aligned with session value.
• Optional retest reduces first-tick whipsaws.
• Confidence blends breakout depth (vs ATR), VWAP slope and band distance.
Key visuals
• LDN/NY OR High/Low (line break style) + optional OR boxes.
• Active Session VWAP (resets per signal window; falls back to daily VWAP outside).
• Optional VWAP bands (stdev or %).
• Session shading (London/NY windows).
• Signal markers (LDN BUY/SELL, NY BUY/SELL) fired with cooldown.
Signals
• London Long / Short: Break of LDN OR High/Low ± ATR buffer, aligned with VWAP side (see the sketch after this list).
• NY Long / Short: Same logic during NY window.
• Retest (optional): Requires a tag back to the OR level ± tolerance before confirmation.
• Confidence: 0–100; gate via Min Confidence (default 55).
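Tying the pieces together, a minimal sketch of the London breakout check (session times and the VWAP anchor are illustrative; retest, cooldown, and the Confidence gate are omitted):
//@version=5
indicator("ORB + session VWAP sketch", overlay = true)
atrBuf = input.float(0.2, "ATR buffer")
// 15-minute London opening range; the script exposes these times as inputs.
inOR = not na(time(timeframe.period, "0800-0815", "Europe/London"))
var float orHi = na
var float orLo = na
if ta.change(time("D")) != 0          // reset the range each day
    orHi := na
    orLo := na
if inOR                               // build the range during the window
    orHi := na(orHi) ? high : math.max(orHi, high)
    orLo := na(orLo) ? low : math.min(orLo, low)
buf   = ta.atr(14) * atrBuf
svwap = ta.vwap(hlc3)                 // stand-in for the session VWAP
longOk  = not inOR and close > orHi + buf and close > svwap
shortOk = not inOR and close < orLo - buf and close < svwap
plotshape(longOk, "LDN BUY", style = shape.triangleup, location = location.belowbar, color = color.green)
plotshape(shortOk, "LDN SELL", style = shape.triangledown, location = location.abovebar, color = color.red)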
Inputs that matter
• Open Range Length (min): Default 15.
• London/NY times & timezones.
• ATR buffer & retest tolerance.
• Bands mode: Stdev (with lookback) or % (e.g., 1%).
• Signal cooldown: Avoids clutter on fast moves.
Non-repaint policy
• OR lines build within fixed time windows using the current bar’s timestamp.
• VWAP is cumulative within the session window; no lookahead.
• All ta.crossover/ta.crossunder are precomputed every bar (no conditional execution).
• Signals are based on live bar values, not future bars.
⸻
Quick start (examples)
1) EURUSD, London momentum
• Chart: 5m or 15m.
• OR: 15 min starting 08:00 Europe/London.
• Signals: Use defaults; keep ATR buffer = 0.2 and Retest = ON, Min Confidence ≥ 55.
• Play:
• BUY when price breaks LDN OR High + buffer and stays above VWAP; retest confirms.
• Trail behind VWAP or band #1; partials into band #2.
2) NAS100, New York breakout & run
• Chart: 5m.
• NY window: 09:30 America/New_York, OR = 15 min.
• Retest OFF on high momentum days; Min Confidence ≥ 60.
• Use band mode Stdev, bandLen=50, show ±1/±2.
• Momentum continuation: add on pullbacks that hold above VWAP after the breakout.
3) XAUUSD, London fake & VWAP fade
• Chart: 5m.
• Keep Retest ON; accept only shorts where price breaks the OR Low but the retest fails back under VWAP.
• Confidence gate ≥ 50 to allow more mean-reversion setups.
⸻
Pro tips
• Adjust ATR buffer to the instrument: FX 0.15–0.25, indices 0.20–0.35, metals 0.20–0.30.
• Retest ON for choppy conditions; OFF for news momentum.
• Use VWAP bands: take partials at ±1; stretch targets at ±2/±3.
• Session timezones are explicit (London/New York). Ensure they match your instrument’s behavior.
• Pair with a higher-TF bias (e.g., 1H/4H trend) for directional filtering.
⸻
Alerts (ready to use)
• ORB+SVWAP — LDN Long, LDN Short, NY Long, NY Short
(Respect your cooldown; alerts fire only after confirmation and confidence gate.)
⸻
Known limits & notes
• Designed for intraday. On 1D+ charts, session windows compress.
• If your broker session differs from London/NY clocks on a holiday, adjust input times.
• Session-anchored VWAP uses the script’s signal window, not exchange sessions, by design.
chrono_utils
Library "chrono_utils"
📝 Description
Collection of objects and common functions related to datetime windows, session days, and time ranges. The main purpose of this library is to handle time-related functionality and make it easy to reason about a future bar, checking whether it will be part of a predefined session and/or inside a datetime window. All existing session functionality I found in the documentation, e.g. "not na(time(timeframe, session, timezone))", is not suitable for strategy scripts, since the execution of the orders is delayed by one bar due to the script execution happening at the bar close. Moreover, a history operator with a negative value that looks forward is not allowed in any Pine Script expression. So, a prediction for the next bar using the bars_back argument of "time()" and "time_close()" was necessary. Thus, I created this library to overcome this small but very important limitation. In the meantime, I added useful functionality to handle session-based behavior. An interesting utility that emerged from this development is data anomaly detection, where a comparison between the prediction and the actual value happens. If those two values differ, a data inconsistency exists between the prediction bar and the actual bar (probably due to a holiday, half session day, a timezone change etc.)
🤔 How to Guide
To use the functionality this library provides in your script you have to import it first!
Copy the import statement of the latest release by pressing the copy button below and then paste it into your script. Give a short name to this library so you can refer to it later on. The import statement should look like this:
import jason5480/chrono_utils/2 as chr
To check if a future bar will be inside a window first of all you have to initialize a DateTimeWindow object.
A code example is the following:
var dateTimeWindow = chr.DateTimeWindow.new().init(fromDateTime = timestamp('01 Jan 2023 00:00'), toDateTime = timestamp('01 Jan 2024 00:00'))
Then you have to "ask" the dateTimeWindow if the future bar defined by an offset (default is 1 that corresponds th the next bar), will be inside that window:
// Filter bars outside of the datetime window
bool dateFilterApproval = dateTimeWindow.is_bar_included()
You can visualize the result by drawing the background of the bars that are outside the given window:
bgcolor(color = dateFilterApproval ? na : color.new(color.fuchsia, 90), offset = 1, title = 'Datetime Window Filter')
In the same way, you can "ask" the Session if the future bar defined by an offset it will be inside that session.
First of all, you should initialize a Session object.
A code example is the following:
var sess = chr.Session.new().from_sess_string(sess = '0800-1700:23456', refTimezone = 'UTC')
Then check whether the given bar, defined by the offset (default is 1, which corresponds to the next bar), will be inside the session like that:
// Filter bars outside the sessions
bool sessionFilterApproval = view.sess.is_bar_included()
You can visualize the result by drawing the background of the bars that are outside the given session:
bgcolor(color = sessionFilterApproval ? na : color.new(color.red, 90), offset = 1, title = 'Session Filter')
In case you want to visualize multiple session ranges you can create a SessionView object like that:
var view = SessionView.new().init(SessionDays.new().from_sess_string('2345'), array.from(SessionTimeRange.new().from_sess_string('0800-1600'), SessionTimeRange.new().from_sess_string('1300-2200')), array.from('London', 'New York'), array.from(color.blue, color.orange))
and then call the draw method of the SessionView object like that:
view.draw()
🏋️♂️ Please refer to the "EXAMPLE DATETIME WINDOW FILTER" and "EXAMPLE SESSION FILTER" regions of the script for more advanced code examples of how to utilize the full potential of this library, including user input settings and advanced visualization!
⚠️ Caveats
As I mentioned in the description, there are some cases where the prediction of the next bar is not accurate. A wrong prediction will affect the outcome of the filtering. The main reasons this could happen are the following:
Public holidays when the market is closed
Half trading days usually before public holidays
Change in the daylight saving time (DST)
A data anomaly of the chart, where there are missing and/or inconsistent data.
A bug in this library (Please report by PM sending the symbol, timeframe, and settings)
Special thanks to @robbatt and @skinra for the constructive feedback 🏆. Without them, the exposed API of this library would be very lengthy and complicated to use. Thanks to them, now the user of this library will be able to get the most, with only a few lines of code!