PROTECTED SOURCE SCRIPT
Universal Normalizer

Universal Normalizer — PhD Grade
A comprehensive statistical normalization engine that transforms any external indicator into a standardized, comparable scale using nine distinct PhD-level mathematical methods, featuring robust outlier handling and advanced signal processing.
Overview
The Universal Normalizer solves the fundamental problem of indicator incomparability by converting any input—whether RSI, MACD, Bollinger Bands, or custom oscillators—into a consistent statistical framework. Unlike simple rescaling, this implementation offers nine distinct normalization methodologies drawn from academic statistics and machine learning, each designed to handle specific data characteristics such as outliers, non-normal distributions, or varying magnitudes. All calculations use first-principles mathematics with manual implementations (no reliance on built-in functions and their hidden assumptions), ensuring full transparency and customization.
Key Features
- Universal Input: Accepts any external indicator or price series via the Source input—normalize RSI, Stochastic, custom algorithms, or raw price
- Nine Normalization Methods: Complete statistical toolkit ranging from classical Z-scores to advanced power transforms and robust scaling
- Outlier Resistance: Dedicated methods (Robust Scale, Winsorized Z-Score) designed specifically for noisy financial data with fat tails
- Distribution Handling: Yeo-Johnson transform handles negative values and non-normal distributions where Box-Cox fails
- Optional Savitzky-Golay Smoothing: Polynomial least-squares smoothing filter that preserves signal shape better than moving averages
- Pure Mathematical Implementation: Manual calculation of standard deviations, medians, percentiles, and sorting algorithms—no hidden assumptions
- Dynamic Visual Feedback: Color-coded output ranging from purple (extreme negative) through gray (neutral) to red (extreme positive)
- Statistical Reference Bands: Visual guides at ±1σ (standard deviation) and ±2σ (extreme) with gradient background fills
Normalization Methods Reference
1. Z-Score (Adaptive)
The classical statistical standardization: (Value - Mean) / Standard Deviation. Transforms data to have mean 0 and standard deviation 1. Values beyond ±2 are statistical extremes (roughly 95% of values fall within ±2 under a normal distribution). Best for normally distributed indicators like RSI or MACD. This implementation calculates the population standard deviation manually over the lookback window.
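As an illustration only (the script's Pine source is protected, so this is a Python sketch with assumed names), a manual population-standard-deviation Z-score over the lookback window could look like:

```python
def z_score(values):
    """Z-score of the last element against the whole lookback window."""
    n = len(values)
    mean = sum(values) / n
    # Population (not sample) standard deviation, computed manually
    std = (sum((v - mean) ** 2 for v in values) / n) ** 0.5
    return (values[-1] - mean) / std if std > 0 else 0.0
```

With `[1, 2, 3, 4, 5]` the current value 5 scores √2 ≈ 1.41, i.e. about 1.4 standard deviations above the window mean; a flat window returns 0 rather than dividing by zero.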
2. Min-Max Dynamic
Rescales data linearly to the range [-1, 1] based on the minimum and maximum values within the lookback window: 2 × ((X - Min) / (Max - Min)) - 1. Sensitive to outliers (extremes affect the entire scale) but preserves exact relative positioning within the range. Ideal for bounded oscillators or when you need to see exact position within recent extremes.
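A minimal Python sketch of the same formula (illustrative names; the flat-window guard is an assumption about how the script handles a zero range):

```python
def min_max_dynamic(values):
    """Linear rescale of the last element to [-1, 1] over the window extremes."""
    lo, hi = min(values), max(values)
    if hi == lo:                 # flat window: no range to scale against
        return 0.0
    return 2 * (values[-1] - lo) / (hi - lo) - 1
```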
3. Robust Scale (Median-IQR)
Outlier-resistant normalization using median and interquartile range (IQR): (Value - Median) / IQR where IQR = Q3 - Q1 (75th percentile minus 25th percentile). Unlike Z-score, this ignores extreme tails (top/bottom 25%), making it ideal for noisy crypto markets or volatility spikes that would distort mean-based methods. The "PhD standard" for financial data with fat tails.
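A Python sketch of median/IQR scaling (the percentile interpolation convention below is an assumption; implementations differ on how Q1/Q3 are interpolated):

```python
def percentile(sorted_vals, p):
    """Percentile via linear interpolation between closest ranks (one common convention)."""
    idx = p * (len(sorted_vals) - 1)
    lo = int(idx)
    hi = min(lo + 1, len(sorted_vals) - 1)
    return sorted_vals[lo] + (sorted_vals[hi] - sorted_vals[lo]) * (idx - lo)

def robust_scale(values):
    """(current - median) / IQR; outliers barely move the median or the IQR."""
    s = sorted(values)
    median = percentile(s, 0.5)
    iqr = percentile(s, 0.75) - percentile(s, 0.25)
    return (values[-1] - median) / iqr if iqr > 0 else 0.0
```

In `[1, 2, 3, 4, 100]` the spike itself scores (100 − 3) / 2 = 48.5, but that same spike hardly shifts the median or IQR used to score the other values—unlike a mean/σ pair, which the spike would drag.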
4. Yeo-Johnson Power Transform
Advanced variance-stabilizing transformation that handles both positive and negative values (unlike Box-Cox which requires positive data). Applies power transformation ((X+1)^λ - 1)/λ for positive values and -((-X+1)^(2-λ) - 1)/(2-λ) for negative values, then Z-scores the result. The λ (lambda) parameter controls transformation strength:
- λ = 1: No transformation (linear)
- λ = 0: Log transform (for right-skewed data)
- λ = 0.5: Square root transform (moderate skew)
- λ = -1: Inverse transform (for heavy right tails)
Best for indicators with severe skewness or heteroscedasticity (changing variance over time).
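The two branches above translate directly to Python; this sketch transforms a single value and omits the subsequent Z-scoring step (the λ tolerance checks are assumptions to avoid division by zero at λ = 0 and λ = 2):

```python
import math

def yeo_johnson(x, lam):
    """Yeo-Johnson transform of a single value (Z-scoring of the result omitted)."""
    if x >= 0:
        if abs(lam) > 1e-9:
            return ((x + 1) ** lam - 1) / lam
        return math.log(x + 1)                      # lambda = 0: log branch
    if abs(lam - 2) > 1e-9:
        return -(((-x + 1) ** (2 - lam) - 1) / (2 - lam))
    return -math.log(-x + 1)                        # lambda = 2: log branch, negative side
```

At λ = 1 the transform is the identity on both sides, which is why λ = 1 is documented as "no transformation."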
5. Quantile Rank
Non-parametric normalization that converts values to their percentile rank within the lookback window, then scales the rank from [0, 1] to [-1, 1]. A raw rank of 0.6 (scaled output 0.2) means the current reading is higher than 60% of recent history. Makes no assumptions about distribution shape—purely rank-based. Excellent for comparing indicators with completely different underlying distributions (e.g., comparing Volume RSI to Price RSI).
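A Python sketch of the rank-and-rescale step (tie handling is an assumption—"strictly below" is one convention; others count ties as half):

```python
def quantile_rank(values):
    """Percentile rank of the last element, scaled from [0, 1] to [-1, 1]."""
    current = values[-1]
    # "Strictly below" is one convention; ties could also count as half
    rank = sum(1 for v in values if v < current) / len(values)
    return 2 * rank - 1
```

Note the rescaling: in `[1, 2, 3, 4, 5]` the current value 5 has raw rank 0.8 (above 80% of the window), which maps to a scaled output of 0.6.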
6. Sigmoid Logistic
Maps Z-scores through the logistic sigmoid function: 2/(1 + e^(-Z)) - 1. Compresses extreme values asymptotically toward ±1 while maintaining linearity near zero. Probabilistic interpretation: values near ±1 represent "saturation" or high confidence in directional bias. Useful for creating smooth gradient inputs for machine learning models or when you want to bound output strictly between -1 and 1 regardless of how extreme the input becomes.
7. Vector L2 Normalize
Geometric normalization treating the lookback window as a vector in Hilbert space: X / ||X|| × √N. Calculates the Euclidean norm (square root of sum of squares) and projects the current value onto the unit sphere, then rescales by √N for interpretability. Emphasizes relative magnitude within the recent vector space rather than statistical properties. Useful for momentum indicators where you want to compare current "energy" relative to recent history.
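A Python sketch of the norm-and-rescale step (illustrative names; the zero-norm guard is an assumption):

```python
def l2_normalize(values):
    """Current value over the Euclidean norm of the window, rescaled by sqrt(N)."""
    norm = sum(v * v for v in values) ** 0.5
    if norm == 0:
        return 0.0
    return values[-1] / norm * len(values) ** 0.5
```

The √N factor makes the output comparable across window sizes: a window of identical values always maps to exactly 1.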
8. Decimal Scaling
Magnitude-based normalization that shifts the decimal point based on the maximum absolute value in the window: X / 10^k × 10 where k is the smallest integer such that max|X| < 10^k. Simple but effective for indicators with wildly different decimal places (e.g., Bitcoin prices vs. forex pip values). Preserves order of magnitude relationships while bringing values into comparable ranges.
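A Python sketch of the k-selection and rescale (computing k via log10 is an assumption about the method; the stated condition max|X| < 10^k is what matters):

```python
import math

def decimal_scale(values):
    """x / 10**k * 10 with k the smallest integer such that max|x| < 10**k."""
    peak = max(abs(v) for v in values)
    if peak == 0:
        return 0.0
    k = math.floor(math.log10(peak)) + 1
    return values[-1] / 10 ** k * 10
```

The trailing × 10 lands every series in roughly (−10, 10) regardless of whether the raw values were Bitcoin prices or pip fractions.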
9. Winsorized Z-Score
Hybrid approach that combines outlier trimming with Z-scoring. First caps (Winsorizes) extreme values at the specified percentiles (default 5th and 95th percentiles), then calculates mean and standard deviation on the trimmed data, finally applies standard Z-score. Eliminates the influence of black swan events while maintaining the interpretability of Z-scores. The best choice for mean-reversion strategies in volatile markets.
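The cap-then-Z-score sequence can be sketched as follows (the nearest-rank percentile indexing is an assumption; the default p matches the documented 5th/95th percentiles):

```python
def winsorized_z(values, p=0.05):
    """Cap at the p-th / (1-p)-th window percentiles, then Z-score the current value."""
    s = sorted(values)
    n = len(s)
    lo, hi = s[int(p * (n - 1))], s[int((1 - p) * (n - 1))]
    capped = [min(max(v, lo), hi) for v in values]
    mean = sum(capped) / n
    std = (sum((v - mean) ** 2 for v in capped) / n) ** 0.5
    current = min(max(values[-1], lo), hi)
    return (current - mean) / std if std > 0 else 0.0
```

With p = 0 this reduces to a plain Z-score; with p > 0 even a black-swan spike produces a bounded reading because both the spike and the statistics are computed on capped data.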
How It Works
1. Input Acquisition
The script accepts any series input via the Source parameter—this can be close price, an external indicator output, or a calculated series. This flexibility allows normalization of complex custom indicators.
2. Method Selection & Calculation
Based on the selected method, the engine:
- Builds a historical array of the last Lookback Window values
- Calculates required statistics (mean, std dev, median, percentiles, min/max) using manual first-principles algorithms
- Applies the selected transformation formula
- Handles edge cases (zero division, negative values for power transforms, etc.)
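The steps above can be sketched as a small pipeline (Python, illustrative names; a simple Z-score stands in for whichever method is selected):

```python
def normalize_last(series, lookback, transform):
    """Build the lookback window, guard the warm-up edge case, apply the transform."""
    if len(series) < lookback:
        return None                      # not enough history yet
    window = series[-lookback:]
    return transform(window)

def z(window):
    """Z-score of the window's last value, used here as the example transform."""
    m = sum(window) / len(window)
    sd = (sum((v - m) ** 2 for v in window) / len(window)) ** 0.5
    return (window[-1] - m) / sd if sd > 0 else 0.0
```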
3. Optional Smoothing
If Savitzky-Golay smoothing is enabled, applies a 5-point quadratic polynomial fit that preserves peak height and width better than traditional moving averages—ideal for cleaning noise without signal distortion.
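The 5-point quadratic Savitzky-Golay filter has the well-known fixed weights (−3, 12, 17, 12, −3)/35; a Python sketch (leaving the two endpoints on each side unsmoothed is an assumption about the script's boundary handling):

```python
def savgol_5pt(series):
    """5-point quadratic Savitzky-Golay smoothing (standard weights -3,12,17,12,-3 over 35)."""
    c = (-3, 12, 17, 12, -3)
    out = list(series)                   # endpoints left unsmoothed here
    for i in range(2, len(series) - 2):
        out[i] = sum(c[j] * series[i - 2 + j] for j in range(5)) / 35
    return out
```

Because the fit is quadratic, any quadratic (hence any linear) trend passes through unchanged—the property that preserves peak height and width where a moving average would flatten them.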
4. Output Scaling
All methods converge to a roughly comparable scale centered on 0, with typical ranges between -2 and +2 (though some methods strictly bound to ±1).
Settings Guide
- Source Indicator: Connect any external indicator or price series here. Click the dropdown and select "External Indicator" to link from other plots.
- Normalization Method: Select from the 9 methodologies based on your data characteristics:
  - Use Z-Score for standard Gaussian analysis
  - Use Robust Scale or Winsorized for noisy/outlier-prone data
  - Use Yeo-Johnson for skewed distributions (adjust λ between -5 and 5)
  - Use Quantile Rank for non-parametric ranking
  - Use Sigmoid for probabilistic bounded outputs
- Lookback Window: Statistical calculation period (10-500 bars). Longer windows = more stable norms but slower adaptation to regime changes.
- Savitzky-Golay Smoothing: Toggle polynomial smoothing for noise reduction without lag.
- Yeo-Johnson λ: Power parameter (-5 to 5). 0.5 for square-root-like, 0 for log-like, 1 for linear.
- Winsorization Percentile: Percentage of extreme values to trim (0.01 to 0.25). Higher values = more aggressive outlier removal.
Use this tool to create apples-to-apples comparisons between disparate indicators. For example, normalize both RSI (0-100) and MACD (-∞ to +∞) to Z-scores, then create a composite signal when both exceed +1.5. Or use Quantile Rank to identify when Volume is in its top 10% while Price is in its bottom 10% (divergence). For high-volatility crypto assets, prefer Robust Scale or Winsorized Z-Score over the standard Z-Score to prevent whale wicks from distorting your normalization baseline.
Protected script
This script is published closed-source. However, you may use it freely and without any restrictions — learn more here.
Disclaimer
The information and publications are not meant to be, and do not constitute, financial, investment, trading, or any other type of advice or recommendation supplied or endorsed by TradingView. Read more in the Terms of Use.