IV vs Realised Volatility (VIX/HV Comparator)
VIX / HV Comparator – Implied vs Realised Volatility
This indicator compares Implied Volatility (IV) from a volatility index (VIX, India VIX, etc.) with the Realised / Historical Volatility (HV) of the current chart symbol.
It helps you see whether options are pricing volatility as rich or cheap relative to what the underlying is actually doing.
What it does
Pulls IV from any user-selected vol index symbol (e.g. CBOE:VIX for SPX, NSEINDIA:INDIAVIX for Nifty).
Calculates realised volatility from the chart’s price data using returns over a user-defined lookback.
Annualises HV so IV and HV are displayed on the same percentage scale, on any timeframe (intraday or higher).
Optionally shows an IV/HV ratio in a separate pane to highlight when options are rich or cheap relative to realised volatility.
How to read it
Main panel:
Orange line – Implied Volatility (IV) from your chosen vol index.
Aqua line – Realised / Historical Volatility (HV) of the current chart symbol.
Fill between lines:
Green shading -> IV > HV -> options are priced richer than what the underlying is currently realising.
Red shading -> HV > IV -> realised vol is higher than the options market is implying.
Sub-panel (optional):
IV / HV ratio
- Above 1 -> IV > HV (vol rich).
- Below 1 -> IV < HV (vol cheap).
- Horizontal guides (for example 1.2 / 0.8) help frame “significantly rich/cheap” zones.
A small label on the latest bar displays the current IV, HV and their difference in vol points.
Inputs (key ones)
IV Index Symbol – choose the volatility index that corresponds to your underlying (VIX, India VIX, etc.).
Realised Vol Lookback – number of bars used to compute HV (for example 20).
Trading Days per Year and Active Hours per Day – used for annualising HV so it stays consistent across timeframes.
IV Scale Factor – adjust if your IV index is quoted in decimals (0.15) instead of points (15).
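As a sketch of the HV and ratio math described above (the indicator itself runs in Pine Script; the function names, the close-to-close method, and the defaults here are illustrative assumptions, not the script's exact code):

```python
import math

def realised_vol(closes, lookback=20, periods_per_year=252):
    """Annualised close-to-close historical volatility, in percent.

    closes: price series, oldest first. Uses the last `lookback` log returns.
    """
    rets = [math.log(closes[i] / closes[i - 1]) for i in range(1, len(closes))]
    window = rets[-lookback:]
    mean = sum(window) / len(window)
    var = sum((r - mean) ** 2 for r in window) / (len(window) - 1)
    return math.sqrt(var) * math.sqrt(periods_per_year) * 100

def iv_hv_ratio(iv_percent, hv_percent):
    """Above 1 -> IV > HV (vol rich); below 1 -> IV < HV (vol cheap)."""
    return iv_percent / hv_percent
```

With daily bars, `periods_per_year` is just the trading days per year; for intraday charts the indicator also scales by active hours per day, which is omitted here for brevity.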
Practical uses
Context for options trades – Quickly see if current IV is high or low relative to realised volatility when deciding on strategies (premium selling vs buying, spreads, hedges).
Vol regime analysis – Track shifts where HV starts to rise above IV (real stress building) or IV spikes far above HV (fear premium / insurance bid).
Cross-timeframe checks – Use on intraday charts for short-term trading context, or on daily/weekly charts for bigger picture vol regimes.
This tool is not a stand-alone signal generator. It is meant to be a volatility dashboard you combine with your usual price action, trend, and options strategy rules to understand how the options market is pricing risk vs what the underlying is actually delivering.
Volatility-Dynamic Risk Manager MNQ [HERMAN]
Title: Volatility-Dynamic Risk Manager MNQ
Description:
The Volatility-Dynamic Risk Manager is a dedicated risk management utility designed specifically for traders of Micro Nasdaq 100 Futures (MNQ).
Many traders struggle with position sizing because they use a fixed Stop Loss size regardless of market conditions. A 10-point stop might be safe in a slow market but easily stopped out in a high-volatility environment. This indicator solves that problem by monitoring real-time volatility (using ATR) and automatically suggesting the appropriate Stop Loss size and Position Size (Contracts) to keep your dollar risk constant.
Note: This tool is hardcoded for MNQ (Micro Nasdaq), using a point value of $2 per point in its calculations.
📈 How It Works
-This script operates on a logical flow that adapts to market behavior:
-Volatility Measurement: It calculates the Average True Range (ATR) over a user-defined length (Default: 14) to gauge the current "speed" of the market.
-State Detection: Based on the current ATR, the script classifies the market into one of three states:
Low Volatility: The market is chopping or moving slowly.
Normal Volatility: Standard trading conditions.
High Volatility: The market is moving aggressively.
Dynamic Stop Loss Selection: Depending on the detected state, the script selects a pre-defined Stop Loss (in points) that you have configured for that specific environment.
Position Sizing Calculation: Finally, it calculates how many MNQ contracts you can trade so that if your Stop Loss is hit, you do not lose more than your defined "Max Risk per Trade."
🧮 Methodology & Calculations
Since this script handles risk management, transparency in calculation is vital.
Here is the exact math used:
ATR: Average True Range over the selected length (the volatility input).
Risk per Contract ($) = Selected Stop Loss (points) × $2 (MNQ point value).
Contracts = floor(Max Risk per Trade / Risk per Contract)
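A minimal Python sketch of the state detection and sizing logic (assuming the $2-per-point MNQ value stated above; the function names and threshold handling are mine, not the script's):

```python
import math

MNQ_DOLLARS_PER_POINT = 2.0  # stated point value for MNQ

def classify_volatility(atr, low_upper, high_lower):
    """Three-state volatility classification from the current ATR."""
    if atr < low_upper:
        return "LOW"
    if atr > high_lower:
        return "HIGH"
    return "NORMAL"

def position_size(max_risk_usd, sl_points):
    """Contracts such that a full stop-out loses no more than max_risk_usd."""
    risk_per_contract = sl_points * MNQ_DOLLARS_PER_POINT
    contracts = math.floor(max_risk_usd / risk_per_contract)
    est_loss = contracts * risk_per_contract  # usually below max risk (rounded down)
    return contracts, est_loss
```

For example, with $100 max risk and a 15-point stop, risk per contract is $30, so the suggestion is 3 contracts with an estimated loss of $90, matching the "Est. Loss slightly less than Max Risk" behaviour on the dashboard.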
⚙️ Settings
You can fully customize the behavior of the risk manager via the settings panel:
Risk Management
-Max Risk per Trade ($): The maximum amount of USD you are willing to lose on a single trade.
Volatility Thresholds (ATR)
-ATR Length: The lookback period for volatility calculation.
-Upper Limit for LOW Volatility: If ATR is below this number, the market is "Low Volatility."
-Lower Limit for HIGH Volatility: If ATR is above this number, the market is "High Volatility." (Anything between Low and High is considered "Normal").
Stop Loss Settings (Points)
-SL for Low/Normal/High: Define how wide your stop loss should be in points for each of the three market states.
Visual Settings
-Color Theme: Switch between Light and Dark modes.
-Panel Position: Move the dashboard to any corner or center of your chart.
-Panel Size: Adjust the scale (Tiny to Large) to fit your screen resolution.
📊 Dashboard Overview
-The on-screen panel provides a quick-glance summary for live execution:
-Market State: Color-coded status (Green = Low Vol, Orange = Normal, Red = High Vol).
-Current ATR: The live volatility reading.
-Suggested SL: The Stop Loss size you should enter in your execution platform.
-CONTRACTS: The calculated position size.
-Est. Loss: The actual dollar amount you will lose if the stop is hit (usually slightly less than your Max Risk due to rounding down).
Who is this for?
-Discretionary and systematic futures traders on MNQ (/MNQ or MES also works with small adjustments)
-Anyone who wants perfect risk consistency regardless of whether the market is asleep or exploding
-Traders who hate manual position-size calculations on every trade
No repainting
Works on any timeframe
Real-time updates on every bar
Overlay indicator (no signals, pure risk-management tool)
⚠️ Disclaimer
This tool is for informational and educational purposes only. It calculates mathematical position sizes based on user inputs. It does not execute trades, nor does it guarantee profits. Past performance (volatility) is not indicative of future results. Always manually verify your order size before executing trades on your broker platform.
Volatility Risk Premium
THE INSURANCE PREMIUM OF THE STOCK MARKET
Every day, millions of investors face a fundamental question that has puzzled economists for decades: how much should protection against market crashes cost? The answer lies in a phenomenon called the Volatility Risk Premium, and understanding it may fundamentally change how you interpret market conditions.
Think of the stock market like a neighborhood where homeowners buy insurance against fire. The insurance company charges premiums based on their estimates of fire risk. But here is the interesting part: insurance companies systematically charge more than the actual expected losses. This difference between what people pay and what actually happens is the insurance premium. The same principle operates in financial markets, but instead of fire insurance, investors buy protection against market volatility through options contracts.
The Volatility Risk Premium, or VRP, measures exactly this difference. It represents the gap between what the market expects volatility to be (implied volatility, as reflected in options prices) and what volatility actually turns out to be (realized volatility, calculated from actual price movements). This indicator quantifies that gap and transforms it into actionable intelligence.
THE FOUNDATION
The academic study of volatility risk premiums began gaining serious traction in the early 2000s, though the phenomenon itself had been observed by practitioners for much longer. Three research papers form the backbone of this indicator's methodology.
Peter Carr and Liuren Wu published their seminal work "Variance Risk Premiums" in the Review of Financial Studies in 2009. Their research established that variance risk premiums exist across virtually all asset classes and persist over time. They documented that on average, implied volatility exceeds realized volatility by approximately three to four percentage points annualized. This is not a small number. It means that sellers of volatility insurance have historically collected a substantial premium for bearing this risk.
Tim Bollerslev, George Tauchen, and Hao Zhou extended this research in their 2009 paper "Expected Stock Returns and Variance Risk Premia," also published in the Review of Financial Studies. Their critical contribution was demonstrating that the VRP is a statistically significant predictor of future equity returns. When the VRP is high, meaning investors are paying substantial premiums for protection, future stock returns tend to be positive. When the VRP collapses or turns negative, it often signals that realized volatility has spiked above expectations, typically during market stress periods.
Gurdip Bakshi and Nikunj Kapadia provided additional theoretical grounding in their 2003 paper "Delta-Hedged Gains and the Negative Market Volatility Risk Premium." They demonstrated through careful empirical analysis why volatility sellers are compensated: the risk is not diversifiable and tends to materialize precisely when investors can least afford losses.
HOW THE INDICATOR CALCULATES VOLATILITY
The calculation begins with two separate measurements that must be compared: implied volatility and realized volatility.
For implied volatility, the indicator uses the CBOE Volatility Index, commonly known as the VIX. The VIX represents the market's expectation of 30-day forward volatility on the S&P 500, calculated from a weighted average of out-of-the-money put and call options. It is often called the "fear gauge" because it rises when investors rush to buy protective options.
Realized volatility requires more careful consideration. The indicator offers three distinct calculation methods, each with specific advantages rooted in academic literature.
The Close-to-Close method is the most straightforward approach. It calculates the standard deviation of logarithmic daily returns over a specified lookback period, then annualizes this figure by multiplying by the square root of 252, the approximate number of trading days in a year. This method is intuitive and widely used, but it only captures information from closing prices and ignores intraday price movements.
The Parkinson estimator, developed by Michael Parkinson in 1980, improves efficiency by incorporating high and low prices. The mathematical formula calculates variance as the sum of squared log ratios of daily highs to lows, divided by four times the natural logarithm of two, times the number of observations. This estimator is theoretically about five times more efficient than the close-to-close method because high and low prices contain additional information about the volatility process.
The Garman-Klass estimator, published by Mark Garman and Michael Klass in 1980, goes further by incorporating opening, high, low, and closing prices. The formula combines half the squared log ratio of high to low prices minus a factor involving the log ratio of close to open. This method achieves the minimum variance among estimators using only these four price points, making it particularly valuable for markets where intraday information is meaningful.
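The three estimators can be sketched as follows (Python for illustration; daily-bar annualisation is assumed and the variable names are mine):

```python
import math

ANNUALISE = math.sqrt(252)  # daily bars assumed

def close_to_close_vol(closes):
    """Sample standard deviation of log daily returns, annualised."""
    rets = [math.log(closes[i] / closes[i - 1]) for i in range(1, len(closes))]
    mean = sum(rets) / len(rets)
    var = sum((r - mean) ** 2 for r in rets) / (len(rets) - 1)
    return math.sqrt(var) * ANNUALISE

def parkinson_vol(highs, lows):
    """Parkinson (1980): sum of squared log high/low ratios over 4·n·ln(2)."""
    n = len(highs)
    s = sum(math.log(h / l) ** 2 for h, l in zip(highs, lows))
    return math.sqrt(s / (4 * n * math.log(2))) * ANNUALISE

def garman_klass_vol(opens, highs, lows, closes):
    """Garman-Klass (1980): half the squared log H/L term minus the C/O term."""
    n = len(opens)
    s = sum(0.5 * math.log(h / l) ** 2
            - (2 * math.log(2) - 1) * math.log(c / o) ** 2
            for o, h, l, c in zip(opens, highs, lows, closes))
    return math.sqrt(s / n) * ANNUALISE
```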
THE CORE VRP CALCULATION
Once both volatility measures are obtained, the VRP calculation is straightforward: subtract realized volatility from implied volatility. A positive result means the market is paying a premium for volatility insurance. A negative result means realized volatility has exceeded expectations, typically indicating market stress.
The raw VRP signal receives slight smoothing through an exponential moving average to reduce noise while preserving responsiveness. The default smoothing period of five days balances signal clarity against lag.
INTERPRETING THE REGIMES
The indicator classifies market conditions into five distinct regimes based on VRP levels.
The EXTREME regime occurs when VRP exceeds ten percentage points. This represents an unusual situation where the gap between implied and realized volatility is historically wide. Markets are pricing in significantly more fear than is materializing. Research suggests this often precedes positive equity returns as the premium normalizes.
The HIGH regime, between five and ten percentage points, indicates elevated risk aversion. Investors are paying above-average premiums for protection. This often occurs after market corrections when fear remains elevated but realized volatility has begun subsiding.
The NORMAL regime covers VRP between zero and five percentage points. This represents the long-term average state of markets where implied volatility modestly exceeds realized volatility. The insurance premium is being collected at typical rates.
The LOW regime, between negative two and zero percentage points, suggests either unusual complacency or that realized volatility is catching up to implied volatility. The premium is shrinking, which can precede either calm continuation or increased stress.
The NEGATIVE regime occurs when realized volatility exceeds implied volatility. This is relatively rare and typically indicates active market stress. Options were priced for less volatility than actually occurred, meaning volatility sellers are experiencing losses. Historically, deeply negative VRP readings have often coincided with market bottoms, though timing the reversal remains challenging.
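The five regimes reduce to a simple threshold ladder on VRP = IV − RV. A sketch (the exact handling of boundary values such as −2 is my assumption; the script may differ):

```python
def vrp_regime(iv, rv):
    """Classify VRP = IV - RV (in vol points) into the five regimes above."""
    vrp = iv - rv
    if vrp > 10:
        return "EXTREME"   # unusually wide fear premium
    if vrp > 5:
        return "HIGH"      # elevated risk aversion
    if vrp >= 0:
        return "NORMAL"    # typical insurance premium
    if vrp >= -2:
        return "LOW"       # premium shrinking / complacency
    return "NEGATIVE"      # realised vol exceeding implied: active stress
```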
TERM STRUCTURE ANALYSIS
Beyond the basic VRP calculation, sophisticated market participants analyze how volatility behaves across different time horizons. The indicator calculates VRP using both short-term (default ten days) and long-term (default sixty days) realized volatility windows.
Under normal market conditions, short-term realized volatility tends to be lower than long-term realized volatility. This produces what traders call contango in the term structure, analogous to futures markets where later delivery dates trade at premiums. The RV Slope metric quantifies this relationship.
When markets enter stress periods, the term structure often inverts. Short-term realized volatility spikes above long-term realized volatility as markets experience immediate turmoil. This backwardation condition serves as an early warning signal that current volatility is elevated relative to historical norms.
The academic foundation for term structure analysis comes from Scott Mixon's 2007 paper "The Implied Volatility Term Structure" in the Journal of Derivatives, which documented the predictive power of term structure dynamics.
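A minimal sketch of the RV Slope logic described above (the sign convention, long minus short, and the state labels are my assumptions):

```python
def rv_term_structure(rv_short, rv_long):
    """Slope of the realised-vol term structure.

    Positive slope -> contango (normal conditions);
    negative slope -> backwardation (short-term stress warning).
    """
    slope = rv_long - rv_short
    if slope > 0:
        state = "CONTANGO"
    elif slope < 0:
        state = "BACKWARDATION"
    else:
        state = "FLAT"
    return slope, state
```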
MEAN REVERSION CHARACTERISTICS
One of the most practically useful properties of the VRP is its tendency to mean-revert. Extreme readings, whether high or low, tend to normalize over time. This creates opportunities for systematic trading strategies.
The indicator tracks VRP in statistical terms by calculating its Z-score relative to the trailing one-year distribution. A Z-score above two indicates that current VRP is more than two standard deviations above its mean, a statistically unusual condition. Similarly, a Z-score below negative two indicates VRP is unusually low.
Mean reversion signals trigger when VRP reaches extreme Z-score levels and then shows initial signs of reversal. A buy signal occurs when VRP recovers from oversold conditions (Z-score below negative two and rising), suggesting that the period of elevated realized volatility may be ending. A sell signal occurs when VRP contracts from overbought conditions (Z-score above two and falling), suggesting the fear premium may be excessive and due for normalization.
These signals should not be interpreted as standalone trading recommendations. They indicate probabilistic conditions based on historical patterns. Market context and other factors always matter.
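A sketch of the Z-score and the signal conditions described above (Python for illustration; the script's exact trigger logic may differ):

```python
import statistics

def zscore(series):
    """Z-score of the latest value against the trailing distribution."""
    mu = statistics.fmean(series)
    sd = statistics.stdev(series)
    return (series[-1] - mu) / sd

def mean_reversion_signal(z_now, z_prev, band=2.0):
    """BUY when VRP recovers from oversold (below -band and rising);
    SELL when it rolls over from overbought (above band and falling)."""
    if z_now < -band and z_now > z_prev:
        return "BUY"
    if z_now > band and z_now < z_prev:
        return "SELL"
    return None
```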
MOMENTUM ANALYSIS
The rate of change in VRP carries its own information content. Rapidly rising VRP suggests fear is building faster than volatility is materializing, often seen in the early stages of corrections before realized volatility catches up. Rapidly falling VRP indicates either calming conditions or rising realized volatility eating into the premium.
The indicator tracks VRP momentum as the difference between current VRP and VRP from a specified number of bars ago. Positive momentum with positive acceleration suggests strengthening risk aversion. Negative momentum with negative acceleration suggests intensifying stress or rapid normalization from elevated levels.
PRACTICAL APPLICATION
For equity investors, the VRP provides context for risk management decisions. High VRP environments historically favor equity exposure because the market is pricing in more pessimism than typically materializes. Low or negative VRP environments suggest either reducing exposure or hedging, as markets may be underpricing risk.
For options traders, understanding VRP is fundamental to strategy selection. Strategies that sell volatility, such as covered calls, cash-secured puts, or iron condors, tend to profit when VRP is elevated and compress toward its mean. Strategies that buy volatility tend to profit when VRP is low and risk materializes.
For systematic traders, VRP provides a regime filter for other strategies. Momentum strategies may benefit from different parameters in high versus low VRP environments. Mean reversion strategies in VRP itself can form the basis of a complete trading system.
LIMITATIONS AND CONSIDERATIONS
No indicator provides perfect foresight, and the VRP is no exception. Several limitations deserve attention.
The VRP measures a relationship between two estimates, each subject to measurement error. The VIX represents expectations that may prove incorrect. Realized volatility calculations depend on the chosen method and lookback period.
Mean reversion tendencies hold over longer time horizons but provide limited guidance for short-term timing. VRP can remain extreme for extended periods, and mean reversion signals can generate losses if the extremity persists or intensifies.
The indicator is calibrated for equity markets, specifically the S&P 500. Application to other asset classes requires recalibration of thresholds and potentially different data sources.
Historical relationships between VRP and subsequent returns, while statistically robust, do not guarantee future performance. Structural changes in markets, options pricing, or investor behavior could alter these dynamics.
STATISTICAL OUTPUTS
The indicator presents comprehensive statistics including current VRP level, implied volatility from VIX, realized volatility from the selected method, current regime classification, number of bars in the current regime, percentile ranking over the lookback period, Z-score relative to recent history, mean VRP over the lookback period, realized volatility term structure slope, VRP momentum, mean reversion signal status, and overall market bias interpretation.
Color coding throughout the indicator provides immediate visual interpretation. Green tones indicate elevated VRP associated with fear and potential opportunity. Red tones indicate compressed or negative VRP associated with complacency or active stress. Neutral tones indicate normal market conditions.
ALERT CONDITIONS
The indicator provides alerts for regime transitions, extreme statistical readings, term structure inversions, mean reversion signals, and momentum shifts. These can be configured through the TradingView alert system for real-time monitoring across multiple timeframes.
REFERENCES
Bakshi, G., and Kapadia, N. (2003). Delta-Hedged Gains and the Negative Market Volatility Risk Premium. Review of Financial Studies, 16(2), 527-566.
Bollerslev, T., Tauchen, G., and Zhou, H. (2009). Expected Stock Returns and Variance Risk Premia. Review of Financial Studies, 22(11), 4463-4492.
Carr, P., and Wu, L. (2009). Variance Risk Premiums. Review of Financial Studies, 22(3), 1311-1341.
Garman, M. B., and Klass, M. J. (1980). On the Estimation of Security Price Volatilities from Historical Data. Journal of Business, 53(1), 67-78.
Mixon, S. (2007). The Implied Volatility Term Structure of Stock Index Options. Journal of Empirical Finance, 14(3), 333-354.
Parkinson, M. (1980). The Extreme Value Method for Estimating the Variance of the Rate of Return. Journal of Business, 53(1), 61-65.
Bubbles + Clusters + Sweeps
Indicator for Bubbles + Clusters + Sweeps
✔ Volume bubbles
✔ Delta coloring (green/red intensity)
✔ Auto supply/demand zones
✔ Volume-profile style blocks inside zones
✔ Liquidity sweep markers
✔ Box drawings extending until filled
✔ Optional bubble filters (min-volume threshold)
Average Candle Size
I created this indicator because I couldn't find a simple tool that calculates just the average candle size without additional complexity. Built for traders who want a straightforward volatility measure they can fully understand. How it works:
1. Calculate high-low for each candle
2. Sum all results
3. Divide by the total number of candles
Simple math to get the average candle size of the period specified in Length.
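The three steps above amount to a one-line average. A sketch (the parameter name `length` mirrors the Length input, but is my naming):

```python
def average_candle_size(highs, lows, length=20):
    """Mean high-low range over the last `length` candles."""
    ranges = [h - l for h, l in zip(highs[-length:], lows[-length:])]
    return sum(ranges) / len(ranges)
```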
CapitalFlowsResearch: Returns Regime Map — Two-Asset Behaviour & Correlation Lens
CapitalFlowsResearch: Returns Regime Map is a two-asset regime overlay that shows how a primary market and a linked macro series are really moving together over short rolling windows. Instead of just eyeballing two separate charts, the tool classifies each bar into one of four states based on the combined direction of recent returns:
Up / Up
Up / Down
Down / Up
Down / Down
These states are calculated from aggregated, windowed returns (using configurable return definitions for each asset), then painted directly onto the price chart as background regimes. On top of that, the indicator monitors the correlation of the same return streams and can optionally tint periods where correlation sits within a user-defined “low-correlation” band—highlighting moments when the usual relationship between the two series is weak, unstable, or breaking down.
In practice, this turns the chart into a compact co-movement map: you can see at a glance whether price and rates (or any two chosen markets) are trending together, diverging in a meaningful way, or moving in choppy, low-conviction fashion. It’s especially powerful for macro traders who need to frame trades in terms of “risk asset vs. rates,” “index vs. volatility,” or similar pairs—while keeping the actual construction details of the regime logic abstracted.
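A sketch of the four-state classification (simple windowed returns are assumed here; the indicator's configurable return definitions and correlation tinting are abstracted away):

```python
def window_return(closes, window):
    """Aggregated simple return over the last `window` bars."""
    return closes[-1] / closes[-1 - window] - 1.0

def comovement_regime(asset_a, asset_b, window=5):
    """Four-state label from the sign of each asset's windowed return."""
    a_up = window_return(asset_a, window) >= 0
    b_up = window_return(asset_b, window) >= 0
    return ("Up" if a_up else "Down") + " / " + ("Up" if b_up else "Down")
```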
Intraday Volatility Map (Bajrang Bali Indicator)
Indicator Name
Bollinger Bands on Historical Volatility (BB-HV) – Intraday Volatility Map
Concept
This indicator applies Bollinger Bands directly on Historical Volatility (HV) instead of price.
HV tells you the “energy” behind the move
Bollinger Bands on HV tell you when volatility is compressing or expanding
Works on all intraday timeframes for indices and stocks
You are not watching price alone — you are watching the strength behind price movement.
---
Core Logic
Historical Volatility is calculated using log returns, annualized
Bollinger Bands are plotted on the HV line (separate pane)
Upper/Lower bands show volatility expansion or contraction
User inputs: HV length, BB length, BB multiplier
---
What This Indicator Shows
Volatility Squeeze (Low HV) – HV stays below lower BB → quiet market, breakout coming
Volatility Expansion (High HV) – HV breaches upper BB → trend day or news-driven move
Normal Regime – HV oscillates between bands → balanced intraday structure
---
How to Use – Indices (NIFTY / BankNifty)
Trend Day Detection – Early HV breakout above upper band suggests strong directional movement
Range Day Identification – HV hugging lower band implies consolidation and mean reversion
Event Risk Mapping – Sudden HV spikes warn of macro data, gaps, policy events
---
How to Use – Stocks
Stock Selection – Choose stocks where HV is rising above mid-band → active, tradable
Entry Confirmation – Breakouts with rising HV have stronger follow-through
Risk Management – High HV → wide stops, low size; Low HV → tight stops, fakeouts possible
---
Signals Generated (For Alerts)
Squeeze Signal – HV < lower band for a set number of bars
Expansion Signal – HV crossing above upper band
Reversion Signal – HV falls back inside bands after a volatility spike
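A sketch of the band and signal logic above (Python for illustration; the population standard deviation and the single-bar trigger are my simplifications of the script's multi-bar squeeze condition):

```python
import statistics

def bollinger(series, length=20, mult=2.0):
    """Bollinger Bands computed on the HV series itself, not on price."""
    window = series[-length:]
    mid = statistics.fmean(window)
    sd = statistics.pstdev(window)
    return mid - mult * sd, mid, mid + mult * sd

def hv_signal(hv_series, length=20, mult=2.0):
    """EXPANSION above the upper band, SQUEEZE below the lower, else NORMAL."""
    lower, mid, upper = bollinger(hv_series, length, mult)
    hv = hv_series[-1]
    if hv > upper:
        return "EXPANSION"
    if hv < lower:
        return "SQUEEZE"
    return "NORMAL"
```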
---
Best Timeframes
1m, 3m, 5m, 15m intraday
Works best on indices and liquid stocks
---
Suggested Inputs
HV Length: 20
BB Length: 20
BB Multiplier: 2.0
Squeeze Bars: 5–10
Expansion Filter: HV closing above upper band
---
Who Should Use This Indicator?
Scalpers – Identify high-energy zones
Day Traders – Ride trend days via volatility regime shifts
Option Traders – Read intraday realized volatility patterns
---
Trading Notes
This is a volatility-regime tool, not a buy/sell signal generator
Works best when combined with VWAP, structure, volume, or trend indicators
Always use personal judgment and risk management
---
Disclaimer
This script is for educational and research use only. It does not provide investment advice or guaranteed trading outcomes. Use at your own risk.
Expected Move Bands
Expected move is the amount that an asset is predicted to increase or decrease from its current price, based on the current levels of volatility.
In this model, we assume asset price follows a log-normal distribution and the log return follows a normal distribution.
Note: Normal distribution is just an assumption, it's not the real distribution of return
Settings:
"Estimation Period Selection" is for selecting the period we want to construct the prediction interval.
For "Current Bar", the interval is calculated from data as of the previous bar's close, so changes in the current price have little effect on the range. The estimated range is for where this bar will close. E.g., if the timeframe is 4 hours and 1 hour has passed, the interval covers the time this bar has left, in this case 3 hours.
For "Future Bars", the interval is calculated from the current close, so the range is strongly affected by changes in the current price: if price moves up, the range moves up, and vice versa. "Future Bars" estimates the range for at least one bar ahead.
There are also other source selections based on high/low.
The Time setting is used when "Future Bars" is chosen. Its value is how many bars ahead of the current bar the range is estimated for. When time = 1, the interval is constructed for 1 bar ahead. E.g., if the timeframe is 4 hours, it estimates the next 4-hour range no matter how much time has passed in the current bar.
Note: It's probably better to use "probability cone" for visual presentation when time > 1
Volatility Models :
Sample SD: the traditional sample standard deviation; the most commonly used, it divides by (n-1) to correct the bias.
Parkinson: uses high/low to estimate volatility; assumes continuous trading with no gaps and zero mean (no drift); about 5 times more efficient than close-to-close.
Garman Klass: uses OHLC; assumes zero drift and no jumps; about 7 times more efficient.
Yang-Zhang Garman-Klass extension: adds a jump term to Garman Klass; has the same value as Garman Klass on markets with no gaps; about 8 times more efficient.
Rogers: uses OHLC; assumes a non-zero mean and handles drift, but does not handle jumps; about 8 times more efficient.
EWMA: exponentially weighted volatility; weights recent volatility more heavily, making it more reactive and better at capturing volatility autocorrelation and clustering.
Yang-Zhang: uses OHLC; combines Rogers and Garman Klass; handles both drift and jumps; about 14 times more efficient; alpha is the constant weighting the Rogers component to minimize variance.
Median absolute deviation: a more direct way of measuring volatility that does not use the standard deviation; the MAD used here is adjusted to be an unbiased estimator.
Volatility Period is the sample size for variance estimation. A longer period makes the estimated range more stable and less reactive to recent prices, and the distribution is more significant at larger sample sizes. A shorter period makes the range more responsive to recent prices, which might be better during high-volatility clusters.
Standard deviations:
Standard Deviation One shows the estimated range where the closing price will be about 68% of the time.
Standard Deviation two shows the estimated range where the closing price will be about 95% of the time.
Standard Deviation three shows the estimated range where the closing price will be about 99.7% of the time.
Note: All these probabilities are based on the normal distribution assumption for returns. It's the estimated probability, not the actual probability.
Manually Entered Standard Deviation shows the range of any entered standard deviation. The probability of that range will be presented on the panel.
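Under the log-normal assumption, the k-standard-deviation band is close × exp(±k·σ·√t). A sketch (zero drift and iid returns assumed; the per-bar volatility input stands in for whichever volatility model is selected):

```python
import math

def expected_move(close, per_bar_vol, k=1.0, t=1):
    """Log-normal price band: close * exp(+/- k * sigma * sqrt(t)).

    per_bar_vol: return stdev per bar; k: number of standard deviations;
    t: bars ahead. Assumes zero mean (no drift) and iid returns.
    """
    half_width = k * per_bar_vol * math.sqrt(t)
    return close * math.exp(-half_width), close * math.exp(half_width)
```

Note that the bounds are symmetric in log space, not in price: the lower and upper bounds multiply back to close², so the upper bound sits slightly further from the close than the lower one.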
People usually assume the mean of returns to be zero. To be more accurate, we can account for drift in price by calculating the geometric mean of returns. Drift matters in the long run, so short lookback periods are not recommended for it. Assuming zero mean is recommended when time is not greater than 1.
When estimating the future range for time > 1, we typically assume constant volatility and returns that are independent and identically distributed, and scale the volatility in terms of time to get the future range. However, when there is autocorrelation in returns (when returns are not independent), that assumption fails to account for its effect, and volatility scaled with autocorrelation is required. We use an AR(1) model with the first-order autocorrelation to adjust for this effect. Returns typically do not have significant autocorrelation, so the adjustment is usually not needed. A long length is recommended for the autocorrelation calculation.
Note: The significance of autocorrelation can be checked on an ACF indicator.
ACF
The multi-timeframe option lets you use a higher-timeframe expected move on a lower timeframe. Only use a timeframe higher than the current one for this input; an error warning appears when the input TF is lower. The input format is multiplier * time unit, e.g. 1D.
Unit: M for months, W for weeks, D for days, S for seconds; integers with no unit are minutes (e.g. 240 = 240 minutes).
The smoothing option applies a filter to smooth out the range. The filter used here is John Ehlers' SuperSmoother, an advanced smoothing technique that removes aliasing noise. Its effect is similar to a simple moving average with half the lookback length, but smoother and with less lag.
Note: after smoothing, the range no longer represents the probability.
Panel positions can be adjusted in the settings.
X position adjusts the horizontal position of the panel. Higher X moves panel to the right and lower X moves panel to the left.
Y position adjusts the vertical position of the panel. Higher Y moves panel up and lower Y moves panel down.
Step line display changes the style of the bands from line to step line. Step line is recommended because it removes the directional bias introduced by the slope of the expected move when displaying the bands.
Warnings:
People should not blindly trust the probability, and should be aware of the risk involved in using the normal distribution assumption. Real returns have skewness and high kurtosis. While the skewness is not very significant, the high kurtosis should be noted: real returns have much fatter tails than the normal distribution, which also makes the peak higher. As a result, tail ranges (such as those beyond 2 SD) highly underestimate the actual range, while body ranges (such as 1 SD) slightly overestimate it. Ranges beyond 2 SD should not be trusted; beware of extreme events in the tails.
Different volatility models have different properties. If you are interested in the accuracy and fit of the expected move, try the Expected Move Occurrence indicator. (Its results also demonstrate the drawback of the normal distribution assumption noted above.)
Expected move Occurrence Test
The prediction interval applies only to the closing price, not to wicks: it estimates the probability of the price closing within the range, not trading within it. For example, if the 1 SD range is 100-200, the price can reach 80 or 230 intrabar, but as long as the bar closes within 100-200, it is still considered a 68% one-standard-deviation move.
Omega Ratio
The Omega Ratio is a risk-return performance measure of an investment asset, portfolio, or strategy. It is defined as the probability-weighted ratio of gains versus losses for some threshold return target. The ratio is an alternative to the widely used Sharpe ratio and is based on information the Sharpe ratio discards.
█ OVERVIEW
As we have mentioned many times, stock market returns are usually not normally distributed. Therefore the models that assume a normal distribution of returns may provide us with misleading information. The Omega Ratio improves upon the common normality assumption among other risk-return ratios by taking into account the distribution as a whole.
█ CONCEPTS
Two distributions with the same mean and variance would, according to the most commonly used Sharpe Ratio, suggest that the underlying assets offer the same risk-return trade-off. But as we mentioned in our Moments indicator, variance and standard deviation are not sufficient measures of risk in the stock market, since other shape features of a distribution, like skewness and excess kurtosis, come into play. The Omega Ratio tackles this problem by employing the distribution as a whole, thereby taking into account differences in shape features. Another important feature of the Omega Ratio is that it requires no parameter estimation but is calculated directly from the observed data. This gives it an advantage over standard statistical estimators, which require parameter estimation and therefore introduce sampling uncertainty into the calculation.
█ WAYS TO USE THIS INDICATOR
Omega calculates a probability-adjusted ratio of gains to losses, relative to the Minimum Acceptable Return (MAR). This means that at a given MAR, using the simple rule of preferring more to less, an asset with a higher value of Omega is preferable to one with a lower value. The indicator displays the values of Omega at increasing MAR levels, creating the so-called Omega Curve. Knowing this, one can compare the Omega Curves of different assets and decide which is preferable given the MAR of your strategy. The indicator plots two Omega Curves: one for the on-chart symbol and another for an off-chart symbol that you can use for comparison.
When comparing curves of different assets make sure their trading days are the same in order to ensure the same period for the Omega calculations. Value interpretation: Omega<1 will indicate that the risk outweighs the reward and therefore there are more excess negative returns than positive. Omega>1 will indicate that the reward outweighs the risk and that there are more excess positive returns than negative. Omega=1 will indicate that the minimum acceptable return equals the mean return of an asset. And that the probability of gain is equal to the probability of loss.
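The definition above can be computed directly from observed returns. Here is an illustrative Python sketch (the indicator itself is Pine Script; the function names are ours):

```python
import numpy as np

def omega_ratio(returns, mar=0.0):
    """Probability-weighted gains over losses relative to a
    Minimum Acceptable Return (MAR), computed directly from data."""
    r = np.asarray(returns, dtype=float)
    gains = np.maximum(r - mar, 0.0).mean()    # expected excess gain
    losses = np.maximum(mar - r, 0.0).mean()   # expected excess loss
    return gains / losses

def omega_curve(returns, mars):
    """Omega evaluated at increasing MAR levels (the 'Omega Curve')."""
    return [omega_ratio(returns, m) for m in mars]
```

Note that Omega equals exactly 1 when the MAR equals the mean return, matching the interpretation given above.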
█ FEATURES
• "Low-Risk security" lets you select the security that you want to use as a benchmark for Omega calculations.
• "Omega Period" is the size of the sample that is used for the calculations.
• “Increments” is the number of Minimal Acceptable Return levels the calculation is carried out on.
• “Other Symbol” lets you select the source of the second curve.
• “Color Settings” lets you set the color for each curve.
Linear Moments
█ OVERVIEW
The Linear Moments indicator, also known as L-moments, is a statistical tool used to estimate the properties of a probability distribution. It is an alternative to conventional moments and is more robust to outliers and extreme values.
█ CONCEPTS
█ Four moments of a distribution
We have mentioned the concept of the moments of a distribution in one of our previous posts. The method of Linear Moments allows us to calculate more robust measures that describe the shape features of a distribution and are analogous to those of conventional moments. L-moments therefore provide estimates of the location, scale, skewness, and kurtosis of a probability distribution.
The first L-moment, λ₁, is equivalent to the sample mean and represents the location of the distribution. The second L-moment, λ₂, is a measure of the dispersion of the distribution, similar to the sample standard deviation. The third and fourth L-moments, λ₃ and λ₄, respectively, are the measures of skewness and kurtosis of the distribution. Higher order L-moments can also be calculated to provide more detailed information about the shape of the distribution.
One advantage of using L-moments over conventional moments is that they are less affected by outliers and extreme values. This is because L-moments are based on order statistics, which are more resistant to the influence of outliers. By contrast, conventional moments are based on the deviations of each data point from the sample mean, and outliers can have a disproportionate effect on these deviations, leading to skewed or biased estimates of the distribution parameters.
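The first four sample L-moments can be computed from probability-weighted moments of the sorted data. This is an illustrative Python sketch of the standard unbiased estimator (not the indicator's Pine Script source):

```python
import numpy as np
from math import comb

def sample_l_moments(data):
    """Unbiased sample L-moments via probability-weighted moments b_r.
    Returns (L-location, L-scale, L-skewness, L-kurtosis)."""
    x = np.sort(np.asarray(data, dtype=float))   # order statistics
    n = len(x)
    # b_r = (1/n) * sum_i C(i, r) / C(n-1, r) * x_(i), i = 0..n-1
    b = [sum(comb(i, r) / comb(n - 1, r) * x[i] for i in range(n)) / n
         for r in range(4)]
    l1 = b[0]
    l2 = 2 * b[1] - b[0]
    l3 = 6 * b[2] - 6 * b[1] + b[0]
    l4 = 20 * b[3] - 30 * b[2] + 12 * b[1] - b[0]
    return l1, l2, l3 / l2, l4 / l2
```

For a symmetric sample the L-skewness comes out as zero, mirroring how conventional skewness behaves, while L-scale plays the role of the standard deviation.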
█ Order Statistics
L-moments are statistical measures that are based on linear combinations of order statistics, which are the sorted values in a dataset. This approach makes L-moments more resistant to the influence of outliers and extreme values. However, the computation of L-moments requires sorting the order statistics, which can lead to a higher computational complexity.
To address this issue, we have implemented an Online Sorting Algorithm that efficiently obtains the sorted dataset of order statistics, reducing the time complexity of the indicator. The Online Sorting Algorithm is an efficient method for sorting large datasets that can be updated incrementally, making it well-suited for use in trading applications where data is often streamed in real-time. By using this algorithm to compute L-moments, we can obtain robust estimates of distribution parameters while minimizing the computational resources required.
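An incremental sorted window of the kind described above can be sketched with a binary-search insert, so each new bar costs far less than a full re-sort (an illustrative Python sketch; the script's actual online sorting algorithm may differ):

```python
import bisect
from collections import deque

class SortedWindow:
    """Maintain a sorted view of the last `size` values incrementally:
    insert the new value by binary search, evict the oldest one."""
    def __init__(self, size):
        self.size = size
        self.window = deque()   # arrival order, for eviction
        self.sorted = []        # order statistics of the window
    def push(self, value):
        self.window.append(value)
        bisect.insort(self.sorted, value)
        if len(self.window) > self.size:
            old = self.window.popleft()
            self.sorted.pop(bisect.bisect_left(self.sorted, old))
        return self.sorted
```

Each push is O(n) in the worst case because of the list insert, but avoids the O(n log n) re-sort per bar that a naive approach would require.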
█ Bias and efficiency of an estimator
One of the key advantages of L-moments over conventional moments is that they approach their asymptotic normal distribution more closely than conventional moments do. This means that as the sample size increases, the L-moments provide more accurate estimates of the distribution parameters.
Asymptotic normality is a statistical property that describes the behavior of an estimator as the sample size increases. As the sample size gets larger, the distribution of the estimator approaches a normal distribution, which is a bell-shaped curve. The mean and variance of the estimator are also related to the true mean and variance of the population, and these relationships become more accurate as the sample size increases.
The concept of asymptotic normality is important because it allows us to make inferences about the population based on the properties of the sample. If an estimator is asymptotically normal, we can use the properties of the normal distribution to calculate the probability of observing a particular value of the estimator, given the sample size and other relevant parameters.
In the case of L-moments, the fact that they approach their asymptotic normal distribution more closely than conventional moments means that they provide more accurate estimates of the distribution parameters as the sample size increases. This is especially useful when the sample size is small, as is common with financial data. By using L-moments to estimate the properties of a distribution, traders can make more informed decisions about their investments and manage their risk more effectively.
Below we can see the empirical distributions of the Variance and L-scale estimators. We ran 10000 simulations with a sample size of 100. Here we can clearly see how the L-moment estimator approaches the normal distribution more closely and how such an estimator can be more representative of the underlying population.
█ WAYS TO USE THIS INDICATOR
The Linear Moments indicator can be used to estimate the L-moments of a dataset and provide insights into the underlying probability distribution. By analyzing the L-moments, traders can make inferences about the shape of the distribution, such as whether it is symmetric or skewed, and the degree of its spread and peakedness. This information can be useful in predicting future market movements and developing trading strategies.
One can also compare the L-moments of the dataset at hand with those of certain commonly used probability distributions. Finance is especially known for the use of certain fat-tailed distributions such as Laplace or Student-t. We have built in the theoretical L-kurtosis values for several common distributions, so you can compare the observed L-kurtosis with that of the selected theoretical distribution.
█ FEATURES
Source Settings
Source - Select the source you wish the indicator to calculate on
Source Selection - Select whether you wish to calculate on the source value or its log return
Moments Settings
Moments Selection - Select the L-moment you wish to be displayed
Lookback - Determine the sample size you wish the L-moments to be calculated with
Theoretical Distribution - This setting is only for investigating the kurtosis of the dataset. You can compare the observed kurtosis with the kurtosis of a selected theoretical distribution.
Historical Volatility Estimators
Historical volatility is a statistical measure of the dispersion of returns for a given security or market index over a given period. This indicator provides several historical volatility model estimators with percentile gradient coloring and a volatility stats panel.
█ OVERVIEW There are multiple ways to estimate historical volatility beyond the traditional close-to-close estimator. This indicator provides range-based volatility estimators that take the open, high, and low into account, as well as estimators that use other statistical measures instead of standard deviation. The gradient coloring and stats panel give an overview of how high or low the current volatility is compared to its historical values.
█ CONCEPTS We have mentioned the concepts of historical volatility in our previous indicators, Historical Volatility, Historical Volatility Rank, and Historical Volatility Percentile. You can check the definition of these scripts. The basic calculation is just the sample standard deviation of log return scaled with the square root of time. The main focus of this script is the difference between volatility models.
Close-to-Close HV Estimator: Close-to-Close is the traditional historical volatility calculation. It uses the sample standard deviation. Note: TradingView's built-in historical volatility value is slightly off because it uses the population standard deviation instead of the sample standard deviation; N − 1 should be used to remove the sampling bias.
Pros:
• Close-to-Close HV estimators are the most commonly used estimators in finance. The calculation is straightforward and easy to understand. When people reference historical volatility, most of the time they are talking about the close to close estimator.
Cons:
• The Close-to-Close estimator calculates volatility based only on closing prices. It does not take intraday movement (high, low) into account, nor the jump when the open and the prior close differ.
• Close-to-Close weights past volatility equally during the lookback period, while there are other ways to weight the historical data.
• Close-to-Close is based on standard deviation, so it is vulnerable to returns that are not normally distributed and have fat tails. Mean and median absolute deviation make the historical volatility more stable against extreme values.
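As an illustrative Python sketch of the close-to-close calculation described above (not the indicator's Pine Script source; the annualisation constant is an assumption for equities):

```python
import numpy as np

def close_to_close_hv(closes, lookback=20, trading_days=252):
    """Annualised close-to-close HV: sample (ddof=1) standard deviation
    of log returns, scaled by the square root of time."""
    r = np.diff(np.log(np.asarray(closes, dtype=float)))[-lookback:]
    return np.std(r, ddof=1) * np.sqrt(trading_days)
```

`ddof=1` is the N − 1 correction mentioned in the note: omitting it reproduces the biased population-standard-deviation behavior.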
Parkinson Hv Estimator:
• Parkinson was one of the first to come up with improvements to historical volatility calculation. • Parkinson suggests using the High and Low of each bar can represent volatility better as it takes into account intraday volatility. So Parkinson HV is also known as Parkinson High Low HV. • It is about 5.2 times more efficient than Close-to-Close estimator. But it does not take account into jumps and drift. Therefore, it underestimates volatility. Note: By Dividing the Parkinson Volatility by Close-to-Close volatility you can get a similar result to Variance Ratio Test. It is called the Parkinson number. It can be used to test if the market follows a random walk. (It is mentioned in Nassim Taleb's Dynamic Hedging book but it seems like he made a mistake and wrote the ratio wrongly.)
Garman-Klass Estimator:
• Garman-Klass expanded on Parkinson's estimator. Instead of using only the high and low, Garman-Klass uses the open, close, high, and low to find the minimum-variance estimate.
• The estimator is about 7.4 times more efficient than the traditional estimator. But like Parkinson HV, it ignores jumps and drift and therefore underestimates volatility.
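An illustrative Python sketch of the Garman-Klass per-bar variance (not the script's Pine code):

```python
import numpy as np

def garman_klass_hv(o, h, l, c, trading_days=252):
    """Garman-Klass per-bar variance:
    0.5 * ln(H/L)^2 - (2 ln 2 - 1) * ln(C/O)^2, averaged and annualised."""
    o, h, l, c = (np.asarray(a, dtype=float) for a in (o, h, l, c))
    var = 0.5 * np.log(h / l)**2 - (2.0 * np.log(2.0) - 1.0) * np.log(c / o)**2
    return np.sqrt(np.mean(var) * trading_days)
```

Note that the close-to-open term enters with a negative weight; it corrects the range term rather than adding a jump component, which is why gaps between bars are still ignored.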
Rogers-Satchell Estimator:
• Rogers and Satchell found drawbacks in Garman-Klass's estimator: it assumes price follows Brownian motion with zero drift.
• The Rogers Satchell Estimator calculates based on open, close, high, and low. And it can also handle drift in the financial series.
• Rogers-Satchell HV is more efficient than Garman-Klass HV when there is drift in the data, though slightly less efficient when drift is zero. The estimator does not handle jumps, so it still underestimates volatility.
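An illustrative Python sketch of the Rogers-Satchell per-bar variance (not the script's Pine code):

```python
import numpy as np

def rogers_satchell_hv(o, h, l, c, trading_days=252):
    """Rogers-Satchell per-bar variance:
    ln(H/C)*ln(H/O) + ln(L/C)*ln(L/O), averaged and annualised.
    Drift-independent by construction."""
    o, h, l, c = (np.asarray(a, dtype=float) for a in (o, h, l, c))
    rs = np.log(h / c) * np.log(h / o) + np.log(l / c) * np.log(l / o)
    return np.sqrt(np.mean(rs) * trading_days)
```

Each term pairs a high or low against both the open and the close, which is what cancels the drift contribution that biases Garman-Klass.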
Garman-Klass Yang-Zhang extension:
• Yang-Zhang expanded Garman-Klass HV so that it can handle jumps. However, unlike the Rogers-Satchell estimator, it cannot handle drift. It is about 8 times more efficient than the traditional estimator.
• The Garman-Klass Yang-Zhang extension HV has the same value as Garman-Klass when there are no gaps in the data, such as in cryptocurrencies.
Yang-Zhang Estimator:
• The Yang-Zhang estimator combines the Garman-Klass and Rogers-Satchell estimators: it is based on the open, close, high, and low, handles non-zero drift, and extends the calculation to handle overnight jumps in the data.
• This estimator is the most powerful among the range-based estimators. It has the minimum variance error among them and is 14 times more efficient than the close-to-close estimator. When overnight and intraday volatility are correlated, it might underestimate volatility a little.
• According to their paper, 1.34 is the optimal value for alpha; the alpha constant in the calculation can be adjusted in the settings. Note: several volatility estimators are already coded on TradingView, some right and some wrong, but I have not seen a correct version of the Yang-Zhang estimator on TV.
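Putting the pieces together, the Yang-Zhang combination can be sketched as follows (an illustrative Python sketch under the stated alpha = 1.34 weighting, not the script's Pine code; conventions for the first bar vary between implementations):

```python
import numpy as np

def yang_zhang_hv(o, h, l, c, trading_days=252, alpha=1.34):
    """Yang-Zhang: overnight variance + k * open-to-close variance
    + (1 - k) * Rogers-Satchell variance,
    with k = (alpha - 1) / (alpha + (n + 1) / (n - 1))."""
    o, h, l, c = (np.asarray(a, dtype=float) for a in (o, h, l, c))
    n = len(c) - 1                               # full periods used
    overnight = np.log(o[1:] / c[:-1])           # jump component
    open_close = np.log(c[1:] / o[1:])           # intraday drift component
    rs = (np.log(h / c) * np.log(h / o)
          + np.log(l / c) * np.log(l / o))[1:]   # range component
    k = (alpha - 1.0) / (alpha + (n + 1.0) / (n - 1.0))
    var = (np.var(overnight, ddof=1) + k * np.var(open_close, ddof=1)
           + (1.0 - k) * np.mean(rs))
    return np.sqrt(var * trading_days)
```

With alpha = 1.34 the numerator of k becomes 0.34, the constant quoted in the original paper; raising alpha shifts weight from the Rogers-Satchell term toward the open-to-close term.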
EWMA Estimator:
• EWMA stands for Exponentially Weighted Moving Average. The Close-to-Close and all other estimators here are equally weighted.
• EWMA weights recent volatility more and older volatility less. The benefit is that volatility is usually autocorrelated, with close to exponential decay, as you can see by running an Autocorrelation Function indicator on absolute or squared returns. This autocorrelation causes volatility clustering, which makes recent volatility more informative, so exponentially weighted volatility suits the properties of volatility well.
• RiskMetrics uses 0.94 for lambda, which corresponds to roughly a 30-bar lookback. In this indicator, lambda is coded to adjust with the lookback. EWMA also makes it easy to forecast volatility one period ahead.
• However, EWMA volatility is not often used because there are better options to weight volatility such as ARCH and GARCH.
Adjusted Mean Absolute Deviation Estimator:
• This estimator does not use standard deviation to calculate volatility. It uses the distance of the log return from its moving average as the volatility measure.
• It’s a simple and effective way to calculate volatility. The difference is that this estimator does not square the log returns to get the volatility. The paper suggests this estimator has more predictive power.
• The mean absolute deviation here is adjusted to get rid of the bias. It scales the value so that it can be comparable to the other historical volatility estimators.
• In Nassim Taleb’s paper, he mentions that people sometimes confuse MAD with standard deviation as a volatility measure, and he suggests using mean absolute deviation instead of standard deviation when talking about volatility.
Adjusted Median Absolute Deviation Estimator:
• This is another estimator that does not use standard deviation to measure volatility.
• Using the median gives a more robust estimator when there are extreme values in the returns; it works better for fat-tailed distributions.
• The median absolute deviation is adjusted by maximum likelihood estimation so that its value is scaled to be comparable to other volatility estimators.
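Both deviation-based estimators can be sketched together. The scaling constants below are the standard normal-consistency factors (sqrt(pi/2) for the mean absolute deviation, about 1.4826 for the median absolute deviation); this is an illustrative Python sketch, not the script's Pine code, and the exact adjustment used by the indicator may differ:

```python
import numpy as np

def adjusted_mad_vol(returns, trading_days=252):
    """Mean absolute deviation scaled by sqrt(pi/2) so it matches
    the standard deviation under a normal distribution."""
    r = np.asarray(returns, dtype=float)
    mad = np.mean(np.abs(r - r.mean()))
    return mad * np.sqrt(np.pi / 2.0) * np.sqrt(trading_days)

def adjusted_median_ad_vol(returns, trading_days=252):
    """Median absolute deviation scaled by ~1.4826 (normal-consistency
    constant) so it is comparable to the other estimators."""
    r = np.asarray(returns, dtype=float)
    mad = np.median(np.abs(r - np.median(r)))
    return mad * 1.4826 * np.sqrt(trading_days)
```

On clean normal data all three measures agree; on fat-tailed data the median-based version reacts least to outliers, the standard deviation most.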
█ FEATURES
• You can select the volatility estimator models in the Volatility Model input
• Historical Volatility is annualized. You can type in the numbers of trading days in a year in the Annual input based on the asset you are trading.
• Alpha is used to adjust the Yang Zhang volatility estimator value.
• Percentile Length adjusts the lookback used for the percentile coloring.
• The gradient coloring will be based on the percentile value (0- 100). The higher the percentile value, the warmer the color will be, which indicates high volatility. The lower the percentile value, the colder the color will be, which indicates low volatility.
• When percentile coloring is off, it won’t show the gradient color.
• You can also use invert color to give high volatility a cold color and low volatility a warm color. Volatility has some mean-reversion properties: when volatility is very low and the color is close to aqua, you would expect it to expand soon; when volatility is very high and close to red, you would expect it to contract and cool down.
• When the background signal is on, it gives a signal when HVP is very low, warning that there might be a volatility expansion soon.
• You can choose the plot style, such as lines, columns, areas in the plotstyle input.
• When the show information panel is on, a small panel will display on the right.
• The information panel displays the historical volatility model name, the 50th percentile of HV, and the HV percentile. The 50th percentile of HV is its median; comparing it with the current HV value shows how far above or below the median HV is, giving an idea of how high or low HV is. The HV Percentile value runs from 0 to 100 and tells us the percentage of periods over the entire lookback in which historical volatility traded below the current level. The higher the HVP, the higher HV is relative to its history. The gradient color is also based on this value.
█ HOW TO USE If you haven’t used the HVP indicator, we suggest starting with it. This indicator is essentially historical volatility with HVP coloring: it displays HVP values in the color and panel, but it is not range-bound like HVP and it displays HV values. The gradient color gives a quick sense of how high or low current volatility is compared to its historical values, and users can also time the market based on volatility mean reversion: high volatility suggests volatility will contract soon (move about to end, market will cool down), and low volatility suggests expansion soon (market about to move).
█ FINAL THOUGHTS
HV vs ATR: The volatility estimator concepts above trace the history of historical volatility estimation in quantitative finance, a timeline of range-based estimators from Parkinson volatility to Yang-Zhang volatility. We hope these descriptions show that even though ATR is the most popular volatility indicator in technical analysis, it is not the best estimator; almost no one in quant finance uses ATR to measure volatility (otherwise these papers would be about improving ATR measurements instead of HV). As you can see, there are much more advanced volatility estimators that also take the open, close, high, and low into account. HV values are based on log returns with some calculation adjustments, and HV can be scaled in terms of price just like ATR. For profit-taking ranges, ATR is not based on probabilities, whereas historical volatility can be used in a probability distribution function to calculate range probabilities, as in the Expected Move indicator.
Other estimators: There are also more advanced historical volatility estimators. High-frequency sampled HV uses intraday data to calculate volatility; we will publish a high-frequency volatility estimator in the future. ARCH and GARCH models take volatility clustering into account, but GARCH requires maximum likelihood estimation, which needs a solver to find the best weights for each component; this is currently not practical on TV due to the computational requirements. The indicators on TV that claim to be GARCH are incorrect.
Smart Accumulation Pro – US SmallCap Edition v2
Smart Accumulation Pro v2 — US SmallCap Edition
Institutional Footprint and Structural Behavior Engine
Overview
Smart Accumulation Pro v2 detects structural behavior, internal liquidity shifts, and multi-phase accumulation footprints that are not visible through momentum or volatility indicators. The engine focuses on underlying institutional habits rather than reacting to price alone.
ULTRA — High-Threshold Structural Trigger
ULTRA appears only when multiple internal phases align simultaneously. It is not a momentum spike or volume anomaly. It represents compression pressure, phase readiness, and structural alignment. ULTRA does not repaint. When this signal appears, internal liquidity has already transitioned into an acceleration phase.
PRE — Early Structural Drift (Not a Buy Signal)
PRE should not be interpreted as a buy signal. It indicates gradual accumulation or controlled liquidity positioning. PRE usually appears during stable or quiet phases but rarely appears during panic drops or disorderly downtrends.
ACC — Transitional Footprint Signal
ACC identifies late-stage structural footprints. It is not intended as a standalone buy trigger. ACC highlights that structural preparation is underway, but direction and timing require user validation. ACC often precedes larger institutional behavior.
Philosophy
This engine does not attempt to cover every market pattern. It focuses on the highest-probability institutional habits. Exit timing, risk management, and execution remain user responsibility. The tool minimizes noise and emphasizes rare, high-impact structural zones.
Preset Modes
1) Conservative
For ETFs or stable large-cap instruments. Minimal noise and lower signal frequency.
2) Normal
Optimized for US mid-cap and small-cap behavior. Balanced and recommended as the default mode.
3) Aggressive
For volatile or thematic instruments. Higher frequency, higher risk.
Usage Notes
This indicator does not provide financial advice. It highlights structural conditions that often precede institutional movement. Execution and risk decisions depend on the user.
License Notice
Unauthorized copying, redistribution, or sharing is prohibited. Invite-Only access requires your TradingView username. One purchase equals one user license.
------------------------------------------------------------
Korean Summary (한국어 요약본)
------------------------------------------------------------
Smart Accumulation Pro v2 is an indicator that tracks "invisible internal structure" such as institutional habits, liquidity shifts, and compression phases. It analyzes patterns that conventional momentum-based indicators cannot capture.
The ULTRA signal is an extremely rare trigger that appears only when multiple internal phases align simultaneously. It does not repaint; by the time the signal appears, the internal structure has already entered an acceleration phase.
PRE is not a buy signal. It is a subtle early footprint that appears when institutions begin to move gradually or realign liquidity.
ACC is the last footprint before a major move. It is not a standalone buy signal and should be interpreted only as an indication that a larger structural change may follow.
This indicator does not try to catch every pattern. It narrowly tracks only the high-probability structures that institutions have used repeatedly. Exit strategy and risk management are the user's responsibility.
The presets consist of three modes, Conservative, Normal, and Aggressive, designed for stable, balanced, and volatile instruments respectively.
This indicator does not provide financial advice, and unauthorized sharing or redistribution is prohibited. It is Invite-Only, with one license per user.
Orbital Barycenter Matrix @darshakssc
The Orbital Barycenter Matrix is a visual, informational-only tool that models how price behaves around a dynamically calculated barycenter, a type of moving equilibrium derived entirely from historical price data.
Instead of focusing on signals, this indicator focuses on market structure symmetry, distance, compression, expansion, and volatility-adjusted movement.
This script does not predict future price and does not provide buy/sell signals .
All values and visuals come solely from confirmed historical data , in full compliance with TradingView policy.
📘 How the Indicator Works
1. Dynamic Barycenter (Core Mean Line)
The barycenter is calculated from a smoothed blend of historical price components.
It represents the center of mass around which price tends to oscillate.
This is not a forecast line—only a representation of historical average behavior.
2. Orbital Rings (Distance Zones)
Around the barycenter, the indicator draws several “orbital rings.”
Each ring shows a volatility-scaled distance from the barycenter using ATR-based calculations.
These rings help visualize:
How far price has drifted from its historical center
Whether price is moving in an inner, mid, or outer region
How volatility influences the spacing of the rings
Rings do not imply future targets and are informational only.
3. Orbital Extension Range
Beyond the outermost ring, a wider band (extension range) shows a high-volatility reference distance.
It represents extended displacement relative to past price behavior—not a projected target.
4. Orbit Trail (Motion Trace)
The Orbit Trail plots small circles behind price, helping visualize how price has moved through the orbital regions over time.
Colors adjust with “pressure” (distance from center), making compression and expansion easy to observe.
5. Satellite Nodes (Swing Markers)
Confirmed swing highs and lows (using fixed pivots) are marked as small dots.
Their color reflects the orbital zone they formed in, giving context to how significant or extended each pivot was.
These swing markers do not repaint because they use confirmed pivots.
6. Pressure & Distance Calculations
The indicator converts price displacement away from the barycenter into a pressure metric, scaled between 0%–100%.
Higher pressure means price is further from its historical center relative to volatility.
The dashboard displays:
Zone classification
ATR-based distance
Pressure level
A small intensity gauge
All are informational readings—no direction or forecast.
📊 Key Features
✔ Dynamic barycenter core
✔ Up to four orbital rings
✔ Informational orbital extension band
✔ Visual orbit trail showing recent movement
✔ Non-repainting satellite swing nodes
✔ Distance & pressure analytics
✔ Fully adjustable HUD
✔ Always-visible floating dashboard (screen-anchored)
✔ Zero repainting on confirmed elements
✔ 100% sourced from historical data only
✔ Policy-safe: no predictions, no signals, no targets
🎯 What to Look For
1. How close price is to the barycenter
This can reveal whether price is in:
The inner region
The mid zone
The outer region
The extended field
2. Pressure level
Shows how “stretched” price is relative to its past behavior.
3. Satellite nodes
Indicate where confirmed pivots formed and in which orbital band.
4. Ring interactions
Observe how price moves between rings—inside, outside, or oscillating around them.
5. Color changes in the orbit trail
These show changes in market compression/expansion.
🧭 How to Read the Indicator
Inner Orbit
Price close to its historical equilibrium.
Mid Orbit
Moderate displacement from typical range.
Outer Orbit
Historically extended movement.
Beyond Extension Field
Price has moved further than usual relative to historical volatility.
These are descriptive conditions only , not trade recommendations.
🛠 How to Apply It on the Chart
Use the barycenter to understand where price has historically balanced.
Observe how volatility changes the spacing between rings.
Use pressure readings to identify when price is compressed, neutral, or extended.
Use swing nodes to contextualize historical pivot formation.
Watch how price interacts with rings to better understand rhythm, velocity, and structural behavior.
This tool is meant to enhance visual understanding—not to generate trade entries or exits.
⚠️ Important Disclosure
This indicator is strictly informational.
It does not predict or project future price movement.
It does not provide buy/sell/long/short signals.
All lines, zones, and values are derived solely from past market data.
Any interpretation is at the user’s discretion.
WeeklyDealingRange Pro+ (Fib Edition)
Weekly Dealing Range Indicator
Overview
The Weekly Dealing Range indicator identifies range + volatility based pivot levels that form at the close of the first trading session and extend for the entire week. This tool provides key reference points for both trending and range-bound market conditions.
What It Provides
Range High & Low: Weekly session extremes
Median Level: Mid-point of the weekly range
Weekly Open: First session opening price
Fibonacci Extensions: Calculated levels above the high and below the low
Practical Application
These levels serve as:
Reversal zones for mean reversion setups
Support/resistance reference points
Target levels for existing positions
Framework for building trade ideas around high-probability pivot areas
Key Features
Optional function based alerts
Traditional price crosses level alerts
Automatically updates each week
Clean, uncluttered chart display
Works across all timeframes
Suitable for all markets and instruments
WeeklyDealingRange Pro+
Weekly Dealing Range Indicator
Overview
The Weekly Dealing Range indicator identifies range + volatility based pivot levels that form at the close of the first trading session and extend for the entire week. This tool provides key reference points for both trending and range-bound market conditions.
What It Provides
Range High & Low: Weekly session extremes
Median Level: Mid-point of the weekly range
Weekly Open: First session opening price
Standard Deviation Extensions: Calculated levels above the high and below the low
Practical Application
These levels serve as:
Reversal zones for mean reversion setups
Support/resistance reference points
Target levels for existing positions
Framework for building trade ideas around high-probability pivot areas
Key Features
Optional function based alerts
Traditional price crosses level alerts
Automatically updates each week
Clean, uncluttered chart display
Works across all timeframes
Suitable for all markets and instruments
VIX Calm vs Choppy (Bar Version, VIX High Threshold)
This indicator tracks market stability by measuring how long the VIX stays below or above a chosen intraday threshold. Instead of looking at VIX closes, it uses the VIX high, so even a brief intraday spike will flip the regime to “choppy.”
The tool builds a running clock of consecutive bars spent in each regime:
Calm regime: VIX high stays below the threshold
Choppy regime: VIX high hits or exceeds the threshold
Calm streaks plot as positive bars (light blue background).
Choppy streaks plot as negative bars (dark pink background).
This gives a clean picture of how long the market has been stable vs volatile — useful for trend traders, breakout traders, and anyone who watches risk-on/risk-off conditions. A table shows the current regime and streak length for quick reference.
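The running clock can be sketched in Python (a minimal illustration; the actual script works bar-by-bar in Pine):

```python
def regime_streaks(vix_highs, threshold):
    """Consecutive-bar regime clock: positive = calm streak (VIX high
    below threshold), negative = choppy streak (VIX high at/above it)."""
    streaks, streak = [], 0
    for h in vix_highs:
        if h < threshold:                       # calm bar
            streak = streak + 1 if streak > 0 else 1
        else:                                   # choppy bar
            streak = streak - 1 if streak < 0 else -1
        streaks.append(streak)
    return streaks
```

A single intraday spike resets a long calm streak to a choppy streak of one bar, exactly as described above.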
HTF Ranges - AWR/AMR/AYR [bilal]
📊 Overview
Professional higher timeframe range indicator for swing and position traders. Calculate Average Weekly Range (AWR), Average Monthly Range (AMR), and Average Yearly Range (AYR) with precision projection levels.
✨ Key Features
📅 Three Timeframe Modes
AWR (Average Weekly Range): Weekly swing targets - Default 4 weeks
AMR (Average Monthly Range): Monthly position targets - Default 6 months
AYR (Average Yearly Range): Yearly extremes - Default 9 years
🎯 Dual Anchor Options
Period Open: Week/Month/Year opening price
RTH Open: First RTH session (09:30 NY) of the period
📐 Projection Levels
100% Range Levels: Upper and lower targets from anchor
Fractional Levels: 33% and 66% zones for partial targets
Custom Mirrored Levels: Set any percentage (0-200%) with automatic mirroring
Example: 25% shows both 25% and 75%
Example: 150% shows both 150% and -50%
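The mirroring rule (a level at pct% of the average range above the anchor, paired with its 100 − pct counterpart) can be sketched as:

```python
def mirrored_levels(anchor, avg_range, pct):
    """Custom level plus its automatic mirror: pct -> 100 - pct,
    both measured from the anchor as a fraction of the average range."""
    mirror = 100 - pct
    return {f"{pct}%": anchor + avg_range * pct / 100,
            f"{mirror}%": anchor + avg_range * mirror / 100}
```

This reproduces both examples above: 25% pairs with 75%, and 150% pairs with −50% (a level below the anchor).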
📊 Information Table
Active range type (AWR/AMR/AYR)
Average range value for selected period
Current period range and percentage used
Distance remaining to targets (up/down)
Color-coded progress (green/orange/red)
🎨 Fully Customizable
Orange theme by default (differentiates from daily indicators)
Line colors, styles (solid/dashed/dotted), and widths
Toggle labels on/off
Adjustable lookback periods for each timeframe
Independent settings for each range type
⚡ Smart Features
Lines start at actual period open (not fixed lookback)
Automatically tracks current period high/low
Works on any chart timeframe
Real-time range tracking
Alert conditions when targets reached or exceeded
🎯 Use Cases
AWR (Weekly Ranges):
Swing trade targets (3-7 day holds)
Weekly support/resistance zones
Identify weekly trend vs rotation
Compare daily moves to weekly context
AMR (Monthly Ranges):
Position trade targets (2-4 week holds)
Monthly breakout levels
Institutional-level zones
Earnings play targets
AYR (Yearly Ranges):
Major reversal zones
Long-term support/resistance
Identify macro trend strength
Annual high/low projections
💡 Trading Strategies
AWR Strategy (Swing Trading):
Week opens near AWR lower level = potential long setup
Target AWR 66% and 100% levels
Week hits AWR upper in first 2 days = watch for reversal
Use fractional levels as scale-in/scale-out points
AMR Strategy (Position Trading):
Month opens near AMR extremes = fade setup
Month breaks AMR in week 1 = expansion (trend) month
Target opposite AMR extreme for swing positions
Use 33%/66% for partial profit taking
AYR Strategy (Long-term Context):
Price near AYR extremes = major reversal zones
Breaking AYR levels = historic moves (rare)
Use for macro trend confirmation
Great for yearly forecasting and planning
📊 Range Interpretation
<33% Range Used: Early in period, room for expansion
33-66% Range Used: Normal progression
66-100% Range Used: Extended, approaching extremes
>100% Range Used: Expansion period - trending or high volatility
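A sketch of the range-used reading, assuming it is simply the current period's high-low span as a percentage of the average range:

```python
def range_used(period_high, period_low, avg_range):
    """Percent of the average range consumed so far this period,
    bucketed per the interpretation table above."""
    used = (period_high - period_low) / avg_range * 100
    if used < 33:
        note = "early in period, room for expansion"
    elif used < 66:
        note = "normal progression"
    elif used <= 100:
        note = "extended, approaching extremes"
    else:
        note = "expansion period"
    return used, note
```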
⚙️ Settings Guide
Lookback Periods:
AWR: 4 weeks (standard) - adjust to 8-12 for smoother average
AMR: 6 months (standard) - seasonal patterns
AYR: 9 years (standard) - captures full cycles
Anchor Type:
Period Open: Use for clean week/month/year open reference
RTH Open: Use if you only trade the day session; ignores overnight gaps
Custom Levels:
25% = quartile targets
75% = three-quarter targets
80% = "danger zone" for reversals
111% = extended breakout target
🔄 Combine with ADR Indicator
Run both indicators together for complete multi-timeframe analysis:
ADR for intraday precision
AWR/AMR/AYR for swing/position context
See if today's ADR move is significant in weekly/monthly context
Multi-timeframe confluence = highest probability setups
💼 Ideal For
Swing Traders: Use AWR for 3-10 day holds
Position Traders: Use AMR for 2-8 week holds
Long-term Investors: Use AYR for macro context
Index Futures Traders: ES, NQ, YM, RTY
Multi-timeframe Analysis: Combine with daily ADR
💎 Liquidity & Momentum Master (LMM)
A professional dual-system indicator that combines:
📦 High-Volume Support/Resistance Zones and
📊 RSI + Bollinger Band Combo Signals — to visualize both smart money footprints and momentum reversals in one clean tool.
🧱 1. High-Volume Liquidity Zones (Support/Resistance Boxes)
Conditions
Visible only on 1H and higher timeframes (1H, 4H, 1D, etc.)
Detects candles with abnormally high volume and strong ATR-based range
Separates bullish (support) and bearish (resistance) zones
Visualization
All boxes are white, with adjustable transparency (alphaW, alphaBorder)
Each box extends to the right automatically
Only the most important (Top-N) zones are kept — weaker ones are removed automatically
Interpretation
White boxes = price areas with heavy liquidity and volume concentration
Price approaching these zones often leads to bounces or rejections
Narrow spacing = consolidation, wide spacing = potential large move
💎 2. RSI Exit + BB-RSI Combo Signals
RSI Exit (Overbought/Oversold Recovery)
RSI drops from overbought (>70) → plots red “RSI” above the candle
RSI rises from oversold (<30) → plots green “RSI” below the candle
Works on 15m, 30m, 1H, 4H, 1D
→ Indicates short-term exhaustion recovery
BB-RSI Combo (Momentum Reversal Confirmation)
Active on 1H and higher only
Requires both:
✅ RSI divergence (bullish or bearish)
✅ Bollinger Band re-entry (after temporary breakout)
Combo Buy (Green Diamond)
Bullish RSI divergence
Candle closes back above lower Bollinger Band
Combo Sell (Red Diamond)
Bearish RSI divergence
Candle closes back below upper Bollinger Band
→ Confirms stronger reversal momentum compared to standard RSI signals
Exact-Month Avg & Vol (USDT) + Winsor/Clip/RV - V0.95d by McToga
What the indicator does
1) Data basis & window
Uses the chart symbol by default (or a manually entered symbol via input). Works internally with daily closing prices (request.security(..., "D", ...)) regardless of whether your chart is on D/W/M. Exact monthly logic: the evaluation window is determined by actual calendar months (1/3/6/12 or custom), not by fixed 30/90-day spans.
2) Returns & preprocessing
Return definition selectable: logarithmic (default) or simple return, each in %/day. Winsorizing (lower/upper percentile) and clipping (|r| ≤ X percentage points) are optional, to limit outliers.
3) Key figures (not annualized)
Arithmetic daily volatility: mean of absolute daily returns in %. Median volatility: median of absolute daily returns in %. Standard deviation (StdDev) of daily returns in %. Realized variance (RV) = Σ r², and √RV (indicative of cumulative realized vol).
4) Annualization (optional)
Basis of 252 or 365 days. For |r|-based measures (arithmetic/median), an optional Abs→Sigma factor √(π/2) brings them closer to StdDev. Provides arithmetic p.a., median p.a., StdDev p.a., and RV p.a.
5) Additional metrics
Avg Price: arithmetic average of daily closes in the window. Days (effDays): number of trading days actually considered in the window.
6) Display & operation
Only active on D/W/M (everything is suppressed intraday). Short mode (default): compact table with Avg Price • Days • Arith Vol • Median Vol • StdDev • √RV • Arith p.a. • Median p.a. • StdDev p.a. • RV p.a. Full mode: detailed table; individual rows can be toggled (Avg Price, Arith/Median/StdDev, RV, Annuals). Positions: the table can be placed in any of the 4 corners and moved without restarting. Label (optional): compact text summary on the chart (corner selectable). Price plot (optional): 1D close as an overlay.
7) Help, debug & alerts
Online help: 3 lines in the status bar (Help-On variant). Debug table (optional): shows RetCount, Winsor limits, clip status, and key figures. Alert: triggers when arithmetic daily volatility ≥ the threshold value.
8) Robustness
Table/label handles are correctly deleted and recreated when the TF, mode, or position changes. No prohibited global writes in functions; clean typing (no na on bool).
Typical use
Select the monthly window (e.g., 3M), optionally setting Winsor/clip. Choose short or full mode as needed and select a position. Toggle annualization on/off, activating Abs→Sigma if necessary. Optionally switch on the label, price plot, or debug table, and set an alert threshold.
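As an illustration of sections 2-4, here is a hedged Python sketch of the daily-return figures (winsorizing is omitted for brevity; clipping, the √(π/2) Abs→Sigma factor, and square-root-of-time annualization are shown — the script's exact formulas may differ):

```python
import math

def daily_vol_stats(closes, clip_pp=None, annualize_days=252, abs_to_sigma=True):
    # Daily log returns in %/day
    rets = [100 * math.log(b / a) for a, b in zip(closes, closes[1:])]
    # Optional clipping: cap |r| at clip_pp percentage points
    if clip_pp is not None:
        rets = [max(-clip_pp, min(clip_pp, r)) for r in rets]
    abs_rets = sorted(abs(r) for r in rets)
    n = len(abs_rets)
    arith = sum(abs_rets) / n                       # mean of |r|
    median = (abs_rets[n // 2] if n % 2
              else (abs_rets[n // 2 - 1] + abs_rets[n // 2]) / 2)
    mean_r = sum(rets) / n
    stdev = math.sqrt(sum((r - mean_r) ** 2 for r in rets) / n)
    rv = sum(r * r for r in rets)                   # realized variance = sum r^2
    factor = math.sqrt(math.pi / 2) if abs_to_sigma else 1.0
    ann = math.sqrt(annualize_days)
    return {"arith": arith, "median": median, "stdev": stdev,
            "sqrt_rv": math.sqrt(rv),
            "arith_pa": arith * factor * ann, "stdev_pa": stdev * ann}
```

The Abs→Sigma factor √(π/2) is the usual correction that maps the mean absolute deviation of a normal variable onto its standard deviation.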
Volatility Dashboard (ATR-Based)
Here's a brief description of what this indicator does:
- Measures the current symbol's volatility using ATR (Average True Range) and plots it against a smoothed ATR baseline (an SMA of ATR over the same period).
- It categorizes the market into one of three regimes based on the ATR / baseline ratio:
- High Volatility (ratio > 1.2)
- Normal Volatility (ratio between 0.8 and 1.2)
- Low Volatility (ratio < 0.8)
- For each regime, it recommends a Value Area (VA) coverage to use: roughly 60-65% in high-volatility regimes, 70% in normal regimes, and 80-85% in low-volatility regimes.
* What you’ll see on the chart:
- Compact dashboard in the top-right corner featuring:
- ATR (current, default length 20)
- ATR Avg (the ATR baseline)
- The identified volatility regime (color-coded background) and the recommended VA coverage.
Key adjustable inputs:
- ATR Length (default 20)
- High/Low volatility thresholds (defaults: 1.2 / 0.8)
- The VA coverage recommendation text for each regime
Purpose: quickly determine whether volatility is above or below average and adjust Value Area coverage accordingly.
If you're trading GC1!, use an ATR Length of 14; for ES or NQ, use the default setting (20).
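The regime logic reduces to a single ratio test; a minimal sketch (the VA strings are the recommendations quoted above, and the function name is hypothetical):

```python
def vol_regime(atr, atr_avg, hi=1.2, lo=0.8):
    """Classify the ATR / baseline ratio into one of three regimes
    and return the suggested Value Area coverage for it."""
    ratio = atr / atr_avg
    if ratio > hi:
        return "High Volatility", "60-65% VA"
    if ratio < lo:
        return "Low Volatility", "80-85% VA"
    return "Normal Volatility", "70% VA"
```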
**Volatility Cones - Interactive**
This indicator visualizes volatility cones based on historical or manual volatility and projects them up to 252 trading days into the future.
**Features:**
- Automatic start at the first trading day of the year (customizable)
- Volatility calculation from historical data or manual input
- Display of ±1σ, ±2σ, and ±3σ bands
- Projection of expected price movements based on volatility
**Use Case:**
Ideal for options traders and risk management to assess expected price movements over different time horizons.
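A sketch of one common cone construction — lognormal square-root-of-time scaling of an annualized volatility — which this indicator's projection plausibly resembles (an assumption, not the script's verified formula):

```python
import math

def cone_bands(spot, annual_vol, days_ahead, trading_days=252):
    """+/-1, 2, 3 sigma price bands 'days_ahead' trading days out,
    scaling annual vol by sqrt(t) under a lognormal assumption."""
    sigma_t = annual_vol * math.sqrt(days_ahead / trading_days)
    return {n: (spot * math.exp(-n * sigma_t), spot * math.exp(n * sigma_t))
            for n in (1, 2, 3)}
```

Each band pair widens with the horizon, producing the familiar cone shape out to 252 days.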
MTF K-Means Price Regimes [matteovesperi]
⚠️ The preview uses a custom example that identifies support/resistance zones; because this indicator clusters price conditions, such a use is possible. The example was set up in a hurry, so it may contain inaccuracies. When configuring the indicator, it is extremely important to select the correct parameters and verify them against the chosen history.
📊 OVERVIEW
Purpose
MTF K-Means Price Regimes is a TradingView indicator that automatically identifies and classifies the current market regime based on the K-Means machine learning algorithm. The indicator uses data from a higher timeframe (Multi-TimeFrame, MTF) to build stable classification and applies it to the working timeframe in real-time.
Key Features
✅ Automatic market regime detection — the algorithm finds clusters of similar market conditions
✅ Multi-timeframe (MTF) — clustering on higher TF, application on lower TF
✅ Adaptive — model recalculates when a new HTF bar appears with a rolling window
✅ Non-Repainting — classification is performed only on closed bars
✅ Visualization — bar coloring + information panel with cluster characteristics
✅ Flexible settings — from 2 to 10 clusters, customizable feature periods, HTF selection
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
🔬 TECHNICAL DETAILS
K-Means Clustering Algorithm
What is K-Means?
K-Means is one of the most popular clustering algorithms (unsupervised machine learning). It divides a dataset into K groups (clusters) so that elements within a cluster are similar to each other, while elements in different clusters are dissimilar.
Algorithm objective:
Minimize within-cluster variance (sum of squared distances from points to their cluster center).
How Does K-Means Work in Our Indicator?
Step 1: Data Collection
The indicator accumulates history from the higher timeframe (HTF):
RSI (Relative Strength Index) — overbought/oversold indicator
ATR% (Average True Range as % of price) — volatility indicator
ΔP% (Price Change in %) — trend strength and direction indicator
By default, 200 HTF bars are accumulated (clusterLookback parameter).
Step 2: Creating Feature Vectors
Each HTF bar is described by a three-dimensional vector:
Vector = [RSI, ATR%, ΔP%]
Step 3: Normalization (Z-Score)
All features are normalized to bring them to a common scale:
Normalized_Value = (Value - Mean) / StdDev
This is critically important, as RSI is in the range 0-100, while ATR% and ΔP% have different scales. Without normalization, one feature would dominate over others.
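Step 3 in Python form (a direct transcription of the formula above, using the population standard deviation):

```python
def zscore(values):
    """Z-score normalization: center on the mean, scale by the stdev."""
    mean = sum(values) / len(values)
    sd = (sum((v - mean) ** 2 for v in values) / len(values)) ** 0.5
    return [(v - mean) / sd for v in values] if sd else [0.0] * len(values)
```

Applied per feature (RSI, ATR%, ΔP%), this puts all three on a comparable scale before distances are computed.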
Step 4: K-Means++ Centroid Initialization
Instead of random selection of K initial centers, an improved K-Means++ method is used:
First centroid is randomly selected from the data
Each subsequent centroid is selected with probability proportional to the square of the distance to the nearest already selected centroid
This ensures better initial centroid distribution and faster convergence
Step 5: Iterative Optimization (Lloyd's Algorithm)
Repeat until convergence (or maxIterations):
1. Assignment step:
For each point find the nearest centroid and assign it to this cluster
2. Update step:
Recalculate centroids as the average of all points in each cluster
3. Convergence check:
If centroids shifted less than 0.001 → STOP
Euclidean distance in 3D space is used:
Distance = sqrt((RSI1 - RSI2)² + (ATR1 - ATR2)² + (ΔP1 - ΔP2)²)
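Steps 4-5 can be condensed into a minimal Lloyd's-algorithm sketch (K-Means++ seeding is replaced here by caller-supplied initial centroids for brevity):

```python
import math

def kmeans(points, centroids, max_iters=20, tol=1e-3):
    """Minimal Lloyd's algorithm over small feature vectors."""
    centroids = [list(c) for c in centroids]
    assign = []
    for _ in range(max_iters):
        # Assignment step: each point joins its nearest centroid (Euclidean)
        assign = [min(range(len(centroids)),
                      key=lambda k: math.dist(p, centroids[k]))
                  for p in points]
        # Update step: each centroid becomes the mean of its members
        new = []
        for k in range(len(centroids)):
            members = [p for p, a in zip(points, assign) if a == k]
            new.append([sum(c) / len(members) for c in zip(*members)]
                       if members else centroids[k])
        # Convergence check: stop when centroids barely move
        shift = max(math.dist(a, b) for a, b in zip(centroids, new))
        centroids = new
        if shift < tol:
            break
    return centroids, assign
```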
Step 6: Adaptive Update
With each new HTF bar:
The oldest bar is removed from history (rolling window method)
New bar is added to history
K-Means algorithm is executed again on updated data
Model remains relevant for current market conditions
Real-Time Classification
After building the model (clusters + centroids), the indicator works in classification mode:
On each closed bar of the current timeframe, RSI, ATR%, ΔP% are calculated
Feature vector is normalized using HTF statistics (Mean/StdDev)
Distance to all K centroids is calculated
Bar is assigned to the cluster with minimum distance
Bar is colored with the corresponding cluster color
Important: Classification occurs only on a closed bar (barstate.isconfirmed), which guarantees no repainting.
Data Architecture
Persistent variables (var):
├── featureVectors - Normalized HTF feature vectors
├── centroids - Cluster center coordinates (K * 3 values)
├── assignments - Assignment of each HTF bar to a cluster
├── htfRsiHistory - History of RSI values from HTF
├── htfAtrHistory - History of ATR values from HTF
├── htfPcHistory - History of price changes from HTF
├── htfCloseHistory - History of close prices from HTF
├── htfRsiMean, htfRsiStd - Statistics for RSI normalization
├── htfAtrMean, htfAtrStd - Statistics for ATR normalization
├── htfPcMean, htfPcStd - Statistics for Price Change normalization
├── isCalculated - Model readiness flag
└── currentCluster - Current active cluster
All arrays are synchronized and updated atomically when a new HTF bar appears.
Computational Complexity
Data collection: O(1) per bar
K-Means (one pass):
- Assignment: O(N * K) where N = number of points, K = number of clusters
- Update: O(N * K)
- Total: O(N * K * I) where I = number of iterations (usually 5-20)
Example: With N=200 HTF bars, K=5 clusters, I=20 iterations:
200 * 5 * 20 = 20,000 operations (executes quickly)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
📖 USER GUIDE
Quick Start
1. Adding the Indicator
TradingView → Indicators → Favorites → MTF K-Means Price Regimes
Or copy the code from mtf_kmeans_price_regimes.pine into Pine Editor.
2. First Launch
When adding the indicator to the chart, you'll see a table in the upper right corner:
┌─────────────────────────┐
│ Status │ Collecting HTF │
├─────────────────────────┤
│ Collected│ 15 / 50 │
└─────────────────────────┘
This means the indicator is accumulating history from the higher timeframe. Wait until the counter reaches the minimum (default 50 bars for K=5).
3. Active Operation
After data collection is complete, the main table with cluster information will appear:
┌────┬──────┬──────┬──────┬──────────────┬────────┐
│ ID │ RSI │ ATR% │ ΔP% │ Description │Current │
├────┼──────┼──────┼──────┼──────────────┼────────┤
│ 1 │ 68.5 │ 2.15 │ 1.2 │ High Vol,Bull│ │
│ 2 │ 52.3 │ 0.85 │ 0.1 │ Low Vol,Flat │ ► │
│ 3 │ 35.2 │ 1.95 │ -1.5 │ High Vol,Bear│ │
└────┴──────┴──────┴──────┴──────────────┴────────┘
The arrow ► indicates the current active regime. Chart bars are colored with the corresponding cluster color.
Customizing for Your Strategy
Choosing Higher Timeframe (HTF)
Rule: HTF should be at least 4 times higher than the working timeframe.
| Working TF | Recommended HTF |
|------------|-----------------|
| 1 min | 15 min - 1H |
| 5 min | 1H - 4H |
| 15 min | 4H - D |
| 1H | D - W |
| 4H | D - W |
| D | W - M |
HTF Selection Effect:
Lower HTF (closer to working TF): More sensitive, frequently changing classification
Higher HTF (much larger than working TF): More stable, long-term regime assessment
Number of Clusters (K)
K = 2-3: Rough division (e.g., "uptrend", "downtrend", "flat")
K = 4-5: Optimal for most cases (DEFAULT: 5)
K = 6-8: Detailed segmentation (requires more data)
K = 9-10: Very fine division (only for long-term analysis with large windows)
Important constraint:
clusterLookback ≥ numClusters * 10
I.e., for K=5 you need at least 50 HTF bars, for K=10 — at least 100 bars.
Clustering Depth (clusterLookback)
This is the rolling window size for building the model.
50-100 HTF bars: Fast adaptation to market changes
200 HTF bars: Optimal balance (DEFAULT)
500-1000 HTF bars: Long-term, stable model
If you get an "Insufficient data" error:
Decrease clusterLookback
Or select a lower HTF (e.g., "4H" instead of "D")
Or decrease numClusters
Color Scheme
Default 10 colors:
Red → Often: strong bearish, high volatility
Orange → Transition, medium volatility
Yellow → Neutral, decreasing activity
Green → Often: strong bullish, high volatility
Blue → Medium bullish, medium volatility
Purple → Oversold, possible reversal
Fuchsia → Overbought, possible reversal
Lime → Strong upward momentum
Aqua → Consolidation, low volatility
White → Undefined regime (rare)
Important: Cluster colors are assigned randomly at each model recalculation! Don't rely on "red = bearish". Instead, look at the description in the table (RSI, ATR%, ΔP%).
You can customize colors in the "Colors" settings section.
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
⚙️ INDICATOR PARAMETERS
Main Parameters
Higher Timeframe (htf)
Type: Timeframe selection
Default: "D" (daily)
Description: Timeframe on which the clustering model is built
Recommendation: At least 4 times larger than your working TF
Clustering Depth (clusterLookback)
Type: Integer
Range: 50 - 2000
Default: 200
Description: Number of HTF bars for building the model (rolling window size)
Recommendation:
- Increase for more stable long-term model
- Decrease for fast adaptation or if there's insufficient historical data
Number of Clusters (K) (numClusters)
Type: Integer
Range: 2 - 10
Default: 5
Description: Number of market regimes the algorithm will identify
Recommendation:
- K=3-4 for simple strategies (trending/ranging)
- K=5-6 for universal strategies
- K=7-10 only when clusterLookback ≥ 100*K
Max K-Means Iterations (maxIterations)
Type: Integer
Range: 5 - 50
Default: 20
Description: Maximum number of algorithm iterations
Recommendation:
- 10-20 is sufficient for most cases
- Increase to 30-50 if using K > 7
Feature Parameters
RSI Period (rsiLength)
Type: Integer
Default: 14
Description: Period for RSI calculation (overbought/oversold feature)
Recommendation:
- 14 — standard
- 7-10 — more sensitive
- 20-25 — more smoothed
ATR Period (atrLength)
Type: Integer
Default: 14
Description: Period for ATR calculation (volatility feature)
Recommendation: Usually kept equal to rsiLength
Price Change Period (pcLength)
Type: Integer
Default: 5
Description: Period for percentage price change calculation (trend feature)
Recommendation:
- 3-5 — short-term trend
- 10-20 — medium-term trend
Visualization
Show Info Panel (showDashboard)
Type: Checkbox
Default: true
Description: Enables/disables the information table on the chart
Cluster Color 1-10
Type: Color selection
Description: Customize colors for visual cluster distinction
Recommendation: Use contrasting colors for better readability
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
📊 INTERPRETING RESULTS
Reading the Information Table
┌────┬──────┬──────┬──────┬──────────────┬────────┐
│ ID │ RSI │ ATR% │ ΔP% │ Description │Current │
├────┼──────┼──────┼──────┼──────────────┼────────┤
│ 1 │ 68.5 │ 2.15 │ 1.2 │ High Vol,Bull│ │
│ 2 │ 52.3 │ 0.85 │ 0.1 │ Low Vol,Flat │ ► │
│ 3 │ 35.2 │ 1.95 │ -1.5 │ High Vol,Bear│ │
│ 4 │ 45.0 │ 1.20 │ -0.3 │ Low Vol,Bear │ │
│ 5 │ 72.1 │ 3.05 │ 2.8 │ High Vol,Bull│ │
└────┴──────┴──────┴──────┴──────────────┴────────┘
"ID" Column
Cluster number (1-K). Order doesn't matter.
"RSI" Column
Average RSI value in the cluster (0-100):
< 30: Oversold zone
30-45: Bearish sentiment
45-55: Neutral zone
55-70: Bullish sentiment
> 70: Overbought zone
"ATR%" Column
Average volatility in the cluster (as % of price):
< 1%: Low volatility (consolidation, narrow range)
1-2%: Normal volatility
2-3%: Elevated volatility
> 3%: High volatility (strong movements, impulses)
This value is compared to the average volatility across all clusters to determine "High Vol" or "Low Vol".
"ΔP%" Column
Average price change in the cluster (in % over pcLength period):
> +0.05%: Bullish regime
-0.05% ... +0.05%: Flat (sideways movement)
< -0.05%: Bearish regime
"Description" Column
Automatic interpretation:
"High Vol, Bull" → Strong upward momentum, high activity
"Low Vol, Flat" → Consolidation, narrow range, uncertainty
"High Vol, Bear" → Strong decline, panic, high activity
"Low Vol, Bull" → Slow growth, low activity
"Low Vol, Bear" → Slow decline, low activity
"Current" Column
Arrow ► shows which cluster the last closed bar of your working timeframe is in.
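The "Description" logic can be sketched as follows, using the ΔP% thresholds listed above and the cross-cluster average ATR% for the volatility label (the helper name is hypothetical):

```python
def describe_cluster(atr_pct, dp_pct, avg_atr_pct):
    """Volatility is judged against the cross-cluster average ATR%,
    direction against the +/-0.05% dP thresholds."""
    vol = "High Vol" if atr_pct > avg_atr_pct else "Low Vol"
    if dp_pct > 0.05:
        trend = "Bull"
    elif dp_pct < -0.05:
        trend = "Bear"
    else:
        trend = "Flat"
    return f"{vol},{trend}"
```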
Typical Cluster Patterns
Example 1: Trend/Flat Division (K=3)
Cluster 1: RSI=65, ATR%=2.5, ΔP%=+1.5 → Bullish trend
Cluster 2: RSI=50, ATR%=0.8, ΔP%=0.0 → Flat/Consolidation
Cluster 3: RSI=35, ATR%=2.3, ΔP%=-1.4 → Bearish trend
Strategy: Open positions when regime changes Flat → Trend, avoid flat.
Example 2: Volatility Breakdown (K=5)
Cluster 1: RSI=72, ATR%=3.5, ΔP%=+2.5 → Strong bullish impulse (high risk)
Cluster 2: RSI=60, ATR%=1.5, ΔP%=+0.8 → Moderate bullish (optimal entry point)
Cluster 3: RSI=50, ATR%=0.7, ΔP%=0.0 → Flat
Cluster 4: RSI=40, ATR%=1.4, ΔP%=-0.7 → Moderate bearish
Cluster 5: RSI=28, ATR%=3.2, ΔP%=-2.3 → Strong bearish impulse (panic)
Strategy: Enter in Cluster 2 or 4, avoid extremes (1, 5).
Example 3: Mixed Regimes (K=7+)
With large K, clusters can represent condition combinations:
High RSI + Low volatility → "Quiet overbought"
Neutral RSI + High volatility → "Uncertainty with high activity"
Etc.
Requires individual analysis of each cluster.
Regime Changes
Important signal: Transition from one cluster to another!
Trading situation examples:
Flat → Bullish trend → Buy signal
Bullish trend → Flat → Take profit, close longs
Flat → Bearish trend → Sell signal
Bearish trend → Flat → Close shorts, wait
You can build a trading system based on the current active cluster and transitions between them.
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
💡 USAGE EXAMPLES
Example 1: Scalping with HTF Filter
Task: Scalping on 5-minute charts, but only enter in the direction of the daily regime.
Settings:
Working TF: 5 min
HTF: D (daily)
K: 3 (simple division)
clusterLookback: 100
Logic:
IF current cluster = "Bullish" (ΔP% > 0.5)
→ Look for long entry points on 5M
IF current cluster = "Bearish" (ΔP% < -0.5)
→ Look for short entry points on 5M
IF current cluster = "Flat"
→ Don't trade / reduce risk
Example 2: Swing Trading with Volatility Filtering
Task: Swing trading on 4H, enter only in regimes with medium volatility.
Settings:
Working TF: 4H
HTF: D (daily)
K: 5
clusterLookback: 200
Logic:
Allowed clusters for entry:
- ATR% from 1.5% to 2.5% (not too quiet, not too chaotic)
- ΔP% with clear direction (|ΔP%| > 0.5)
Prohibited clusters:
- ATR% > 3% → Too risky (possible gaps, sharp reversals)
- ATR% < 1% → Too quiet (small movements, commissions eat profit)
Example 3: Portfolio Rotation
Task: Managing a portfolio of multiple assets, allocate capital depending on regimes.
Settings:
Working TF: D (daily)
HTF: W (weekly)
K: 4
clusterLookback: 100
Logic:
For each asset in portfolio:
IF regime = "Strong trend + Low volatility"
→ Increase asset weight in portfolio (40-50%)
IF regime = "Medium trend + Medium volatility"
→ Standard weight (20-30%)
IF regime = "Flat" or "High volatility without trend"
→ Minimum weight or exclude (0-10%)
Example 4: Combining with Other Indicators
MTF K-Means as a filter:
Main strategy: MA Crossover
Filter: MTF K-Means on higher TF
Rule:
IF MA_fast > MA_slow AND Cluster = "Bullish regime"
→ LONG
IF MA_fast < MA_slow AND Cluster = "Bearish regime"
→ SHORT
ELSE
→ Don't trade (regime doesn't confirm signal)
This dramatically reduces false signals in unsuitable market conditions.
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
📈 OPTIMIZATION RECOMMENDATIONS
Optimal Settings for Different Styles
Day Trading
Working TF: 5M - 15M
HTF: 1H - 4H
numClusters: 4-5
clusterLookback: 100-150
Swing Trading
Working TF: 1H - 4H
HTF: D
numClusters: 5-6
clusterLookback: 150-250
Position Trading
Working TF: D
HTF: W - M
numClusters: 4-5
clusterLookback: 100-200
Scalping
Working TF: 1M - 5M
HTF: 15M - 1H
numClusters: 3-4
clusterLookback: 50-100
Backtesting
To evaluate effectiveness:
Load historical data (minimum 2x clusterLookback HTF bars)
Apply the indicator with your settings
Study cluster change history:
- Do changes coincide with actual trend transitions?
- How often do false signals occur?
Optimize parameters:
- If too much noise → increase HTF or clusterLookback
- If reaction too slow → decrease HTF or increase numClusters
Combining with Other Techniques
Regime-Based Approach:
MTF K-Means (regime identification)
↓
+---+---+---+
| | | |
v v v v
Trend Flat High_Vol Low_Vol
↓ ↓ ↓ ↓
Strategy_A Strategy_B Don't_trade
Examples:
Trend: Use trend-following strategies (MA crossover, Breakout)
Flat: Use mean-reversion strategies (RSI, Bollinger Bands)
High volatility: Reduce position sizes, widen stops
Low volatility: Expect breakout, don't open positions inside range
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
📞 SUPPORT
Report an Issue
If you found a bug or have a suggestion for improvement:
Describe the problem in as much detail as possible
Specify your indicator settings
Attach a screenshot (if possible)
Specify the asset and timeframe where the problem is observed