Alerts
█ OVERVIEW
This library is a Pine Script™ programmer's tool that provides functions to simplify the creation of compound conditions and alert messages. With these functions, scripts can use comma-separated "string" lists to specify condition groups from arbitrarily large "bool" arrays, offering a convenient way to provide highly flexible alert creation to script users without requiring numerous inputs in the "Settings/Inputs" menu.
█ CONCEPTS
Compound conditions
Compound conditions are essentially groups of two or more conditions, where each required condition must occur to produce a `true` result. Traders often combine conditions, including signals from various indicators, to drive and reinforce trade decisions. Similarly, programmers use compound conditions in logical operations to create scripts that respond dynamically to groups of events.
Condition conundrum
Providing flexible condition combinations to script users for signals and alerts often poses a significant challenge: input complexity. Conventionally, such flexibility comes at the cost of an extensive list of separate inputs for toggling individual conditions and customizing their properties, often resulting in complicated input menus that are difficult for users to navigate effectively. Furthermore, managing all those inputs usually entails tediously handling many extra variables and logical expressions, making such projects more complex for programmers.
Condensing complexity
This library introduces a technique using parsed strings to reference groups of elements from "bool" arrays, helping to simplify and streamline the construction of compound conditions and alert messages. With this approach, programmers can provide one or more "string" inputs in their scripts where users can list numbers corresponding to the conditions they want to combine.
For example, suppose you have a script that creates alert triggers based on a combination of up to 20 individual conditions, and you want to make inputs for users to choose which conditions to combine. Instead of creating 20 separate checkboxes in the "Settings/Inputs" tab and manually adding associated logic for each one, you can store the conditional values in arrays, make one or more "string" inputs that accept values listing the array item locations (e.g., "1,4,8,11"), and then pass the inputs to these functions to determine the compound conditions formed by the specified groups.
This approach condenses the input space, improving navigability and utility. Additionally, it helps provide high-level simplicity to complex conditional code, making it easier to maintain and expand over time.
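As a hedged sketch of that idea (the `cond` alias and the commented call are placeholders for the library import, not its published example):
string groupList = input.string("1,4,8,11", "Conditions to combine")
// ...populate a 20-element "bool" array named `conditions`, then evaluate the whole group in one call:
// bool signal = cond.getCompoundCondition(conditions, groupList, na, false)  // `false` = one-based numbering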
█ CALCULATIONS AND USE
This library contains three functions for evaluating compound conditions: `getCompoundCondition()`, `getCompoundConditionsArray()`, and `compoundAlertMessage()`. Each function has two overloads that evaluate compound conditions based on groups of items from one or two "bool" arrays. The sections below explain the functions' calculations and how to use them.
Referencing conditions using "string" index lists
Each function processes "string" values containing comma-separated lists of numerals representing the indices of the "bool" array items to use in its calculations (e.g., "4, 8, 12"). The functions split each supplied "string" list by its commas, then iterate over those specified indices in the "bool" arrays to determine each group's combined `true` or `false` state.
For convenience, the numbers in the "string" lists can represent zero-based indices (where the first item is at index 0) or one-based indices (where the first item is at index 1), depending on the function's `zeroIndex` parameter. For example, an index list of "0, 2, 4" with a `zeroIndex` value of `true` specifies that the condition group uses the first, third, and fifth "bool" values in the array, ignoring all others. If the `zeroIndex` value is `false`, the list "1, 3, 5" also refers to those same elements.
Zero-based indexing is convenient for programmers because Pine arrays always use this index format. However, one-based indexing is often more convenient and familiar for script users, especially non-programmers.
Evaluating one or many condition groups
The `getCompoundCondition()` function evaluates singular condition groups determined by its `indexList` parameter, returning `true` values whenever the specified array elements are `true`. This function is helpful when a script has to evaluate specific groups of conditions and does not require many combinations.
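For instance, a minimal hedged usage sketch (it assumes the library is imported under the alias `cond`; use the import statement shown on the library's page):
array<bool> conditions = array.from(close > open, ta.rsi(close, 14) > 50, volume > ta.sma(volume, 20))
bool groupActive = cond.getCompoundCondition(conditions, "0, 2")          // items 0 and 2 must both be `true`
bool anyTwoOfThree = cond.getCompoundCondition(conditions, "0, 1, 2", 2)  // at least two of the three must be `true`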
In contrast, the `getCompoundConditionsArray()` function can evaluate numerous condition groups, one for each "string" included in its `indexLists` argument. It returns arrays containing `true` or `false` states for each listed group. This function is helpful when a script requires multiple condition combinations in additional calculations or logic.
The `compoundAlertMessage()` function is similar to the `getCompoundConditionsArray()` function. It also evaluates a separate compound condition group for each "string" in its `indexLists` array, but it returns "string" values containing the marker (name) of each group with a `true` result. You can use these returned values as the `message` argument in alert() calls, display them in labels and other drawing objects, or even use them in additional calculations and logic.
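A hedged sketch of the multi-group functions (same hypothetical `cond` alias):
array<bool>   conditions  = array.from(close > open, ta.rsi(close, 14) > 50, volume > ta.sma(volume, 20))
array<string> groups      = array.from("0,1", "2", "0,1,2")
array<bool>   states      = cond.getCompoundConditionsArray(conditions, groups)  // one `true`/`false` state per group
string        messageText = cond.compoundAlertMessage(conditions, groups)        // e.g. "M1, M3" when groups 1 and 3 are active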
Directional condition pairs
The first overload of each function operates on a single `conditions` array, returning values representing one or more compound conditions from groups in that array. These functions are ideal for general-purpose condition groups that may or may not represent direction information.
The second overloads accept two arrays representing upward and downward conditions separately: `upConditions` and `downConditions`. These overloads evaluate opposing directional conditions in pairs (e.g., RSI is above/below a level) and return upward and downward condition information separately in a tuple.
When using the directional overloads, ensure the `upConditions` and `downConditions` arrays are the same size, with the intended condition pairs at the same indices. For instance, if you have a specific upward RSI condition's value at the first index in the `upConditions` array, include the opposing downward RSI condition's value at that same index in the `downConditions` array. If a condition can apply to both directions (e.g., rising volume), include its value at the same index in both arrays.
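For example, a hedged sketch of aligned directional arrays (hypothetical `cond` alias):
float fastMa = ta.ema(close, 9)
float slowMa = ta.ema(close, 21)
bool  risingVolume = volume > volume[1]
array<bool> upConditions   = array.from(close > open, fastMa > slowMa, risingVolume)
array<bool> downConditions = array.from(close < open, fastMa < slowMa, risingVolume)
[upGroup, downGroup] = cond.getCompoundCondition(upConditions, downConditions, "0, 1, 2")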
Group markers
To simplify the generation of informative alert messages, the `compoundAlertMessage()` function assigns "string" markers to each condition group, where "marker" refers to the group's name. The `groupMarkers` parameter allows you to assign custom markers to each listed group. If not specified, the function generates default group markers in the form "M" followed by the group number (starting from 1), where "M" is short for "Marker". For example, the default marker for the first group specified in the `indexLists` array is "M1".
The function's returned "string" values contain a comma-separated list with markers for each activated condition group (e.g., "M1, M4"). The function's second overload, which processes directional pairs of conditions, also appends extra characters to the markers to signify the direction. The default for upward groups is "▲" (e.g., "M1▲") and the default for downward ones is "▼" (e.g., "M1▼"). You can customize these appended characters with the `upChar` and `downChar` parameters.
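A hedged sketch of custom markers and direction characters, passing the optional arguments positionally in the documented order (hypothetical `cond` alias):
array<bool>   upConds   = array.from(close > open, ta.rsi(close, 14) > 50)
array<bool>   downConds = array.from(close < open, ta.rsi(close, 14) < 50)
array<string> groups    = array.from("0", "0,1")
array<string> markers   = array.from("Bar", "Bar+RSI")
// Arguments: upConditions, downConditions, indexLists, allowUp, allowDown, zeroIndex, groupMarkers, upChar, downChar.
[bullMsg, bearMsg] = cond.compoundAlertMessage(upConds, downConds, groups, true, true, true, markers, " UP", " DN")
// A non-empty `bullMsg` such as "Bar UP, Bar+RSI UP" lists every bullish group that is currently active.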
Designing customizable alerts
We recommend following these primary steps when using this library to design flexible alerts for script users:
1. Create text inputs for users to specify comma-separated lists of conditions with the input.string() or input.text_area() functions, and then collect all the input values in a "string" array. Note that each separate "string" in the array will represent a distinct condition group.
2. Create arrays of "bool" values representing the possible conditions to choose from. If your script will process pairs of upward and downward conditions, ensure the related elements in the arrays align at the same indices.
3. Call `compoundAlertMessage()` using the arrays from steps 1 and 2 as arguments to get the alert message text. If your script will use the text for alerts only, not historical display or calculation purposes, the call is necessary only on realtime bars.
4. Pass the calculated "string" values as the `message` argument in alert() calls. We recommend calling the function only when the "string" is not empty (i.e., `messageText != ""`). To avoid repainting alerts on open bars, use barstate.isconfirmed in the condition to allow alert triggers only on each bar's close.
5. Test the alerts. Open the "Create Alert" dialog box and select "Any alert() function call" in the "Condition" field. It is also helpful to inspect the strings with Pine Logs.
NOTE: Because the techniques in this library use lists of numbers to specify conditions, we recommend including a tooltip for the "string" inputs that lists the available numbers and the conditions they represent. This tooltip provides a legend for script users, making the available options simple to understand and use. To create the tooltip, declare a "const string" listing the options and pass it to the `input.*()` call's `tooltip` parameter. See the library's example code for a simple demonstration.
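The following minimal sketch ties the steps above together (the import path, inputs, and conditions are illustrative placeholders, not the library's published example code):
//@version=5
indicator("Compound condition alerts sketch")
import TradingView/Alerts/1 as cond  // Hypothetical import path; use the path and version shown on the library's page.
// Step 1: "string" inputs, one per condition group, with a tooltip legend (one-based numbering).
string TT = "1 = Bar up/down\n2 = Fast EMA above/below slow EMA\n3 = Volume above average"
string group1List = input.string("1,2", "Group 1 conditions", tooltip = TT)
string group2List = input.string("3",   "Group 2 conditions", tooltip = TT)
array<string> conditionGroups = array.from(group1List, group2List)
// Step 2: aligned "bool" arrays of the individual upward and downward conditions.
float fastMa = ta.ema(close, 9)
float slowMa = ta.ema(close, 21)
bool  highVolume = volume > ta.sma(volume, 20)
array<bool> bullConditions = array.from(close > open, fastMa > slowMa, highVolume)
array<bool> bearConditions = array.from(close < open, fastMa < slowMa, highVolume)
// Steps 3 and 4: build the alert text and call `alert()` on confirmed bars only.
// Arguments: upConditions, downConditions, indexLists, allowUp, allowDown, zeroIndex (false = one-based lists).
[bullText, bearText] = cond.compoundAlertMessage(bullConditions, bearConditions, conditionGroups, true, true, false)
if barstate.isconfirmed
    if bullText != ""
        alert("Bullish groups: " + bullText)
    if bearText != ""
        alert("Bearish groups: " + bearText)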
█ EXAMPLE CODE
This library's example code demonstrates one possible way to offer a selection of compound conditions with "string" inputs and these functions. It uses three input.string() calls, each accepting a comma-separated list representing a distinct condition group. The title of each input represents the default group marker that appears in the label and alert text. The code collects these three input values in a `conditionGroups` array for use with the `compoundAlertMessage()` function.
In this code, we created two "bool" arrays to store six arbitrary condition pairs for demonstration:
1. Bar up/down: The bar's close price must be above the open price for upward conditions, and vice versa for downward conditions.
2. Fast EMA above/below slow EMA: The 9-period Exponential Moving Average of close prices must be above the 21-period EMA for upward conditions, and vice versa for downward conditions.
3. Volume above average: The bar's volume must exceed its 20-bar average to activate an upward or downward condition.
4. Volume rising: The volume must exceed that of the previous bar to activate an upward or downward condition.
5. RSI trending up/down: The 14-period Relative Strength Index of close prices must be between 50 and 70 for upward conditions, and between 30 and 50 for downward conditions.
6. High volatility: The 7-period Average True Range (ATR) must be above the 40-period ATR to activate an upward or downward condition.
We included a `tooltip` argument for the third input.string() call that displays the condition numbers and titles, where 1 is the first condition number.
The `bullConditions` array contains the `true` or `false` states of all individual upward conditions, and the `bearConditions` array contains all downward condition states. For the conditions that filter either direction because they are non-directional, such as "High volatility", both arrays contain the condition's `true` or `false` value at the same index. If you use these conditions alone, they activate upward and downward alert conditions simultaneously.
The example code calls `compoundAlertMessage()` using the `bullConditions`, `bearConditions`, and `conditionGroups` arrays to create a tuple of strings containing the directional markers for each activated group. On confirmed bars, it displays non-empty strings in labels and uses them in alert() calls. For the text shown in the labels, we used str.replace_all() to replace commas with newline characters, aligning the markers vertically in the display.
Look first. Then leap.
█ FUNCTIONS
This library exports the following functions:
getCompoundCondition(conditions, indexList, minRequired, zeroIndex)
(Overload 1 of 2) Determines a compound condition based on selected elements from a `conditions` array.
Parameters:
conditions (array) : (array) An array containing the possible "bool" values to use in the compound condition.
indexList (string) : (series string) A "string" containing a comma-separated list of whole numbers representing the group of `conditions` elements to use in the compound condition. For example, if the value is `"0, 2, 4"`, and `minRequired` is `na`, the function returns `true` only if the `conditions` elements at index 0, 2, and 4 are all `true`. If the value is an empty "string", the function returns `false`.
minRequired (int) : (series int) Optional. Determines the minimum number of selected conditions required to activate the compound condition. For example, if the value is 2, the function returns `true` if at least two of the specified `conditions` elements are `true`. If the value is `na`, the function returns `true` only if all specified elements are `true`. The default is `na`.
zeroIndex (bool) : (series bool) Optional. Specifies whether the `indexList` represents zero-based array indices. If `true`, a value of "0" in the list represents the first array index. If `false`, a value of "1" represents the first index. The default is `true`.
Returns: (bool) `true` if `conditions` elements in the group specified by the `indexList` are `true`, `false` otherwise.
getCompoundCondition(upConditions, downConditions, indexList, minRequired, allowUp, allowDown, zeroIndex)
(Overload 2 of 2) Determines upward and downward compound conditions based on selected elements from `upConditions` and `downConditions` arrays.
Parameters:
upConditions (array) : (array) An array containing the possible "bool" values to use in the upward compound condition.
downConditions (array) : (array) An array containing the possible "bool" values to use in the downward compound condition.
indexList (string) : (series string) A "string" containing a comma-separated list of whole numbers representing the `upConditions` and `downConditions` elements to use in the compound conditions. For example, if the value is `"0, 2, 4"` and `minRequired` is `na`, the function returns `true` for the first value only if the `upConditions` elements at index 0, 2, and 4 are all `true`. If the value is an empty "string", the function returns `false` for both tuple values.
minRequired (int) : (series int) Optional. Determines the minimum number of selected conditions required to activate either compound condition. For example, if the value is 2, the function returns `true` for its first value if at least two of the specified `upConditions` elements are `true`. If the value is `na`, the function returns `true` only if all specified elements are `true`. The default is `na`.
allowUp (bool) : (series bool) Optional. Controls whether the function considers upward compound conditions. If `false`, the function ignores the `upConditions` array, and the first item in the returned tuple is `false`. The default is `true`.
allowDown (bool) : (series bool) Optional. Controls whether the function considers downward compound conditions. If `false`, the function ignores the `downConditions` array, and the second item in the returned tuple is `false`. The default is `true`.
zeroIndex (bool) : (series bool) Optional. Specifies whether the `indexList` represents zero-based array indices. If `true`, a value of "0" in the list represents the first array index. If `false`, a value of "1" represents the first index. The default is `true`.
Returns: ( ) A tuple containing two "bool" values representing the upward and downward compound condition states, respectively.
getCompoundConditionsArray(conditions, indexLists, zeroIndex)
(Overload 1 of 2) Creates an array of "bool" values representing compound conditions formed by selected elements from a `conditions` array.
Parameters:
conditions (array) : (array) An array containing the possible "bool" values to use in each compound condition.
indexLists (array) : (array) An array of strings containing comma-separated lists of whole numbers representing the `conditions` elements to use in each compound condition. For example, if an item is `"0, 2, 4"`, the corresponding item in the returned array is `true` only if the `conditions` elements at index 0, 2, and 4 are all `true`. If an item is an empty "string", the item in the returned array is `false`.
zeroIndex (bool) : (series bool) Optional. Specifies whether the "string" lists in the `indexLists` represent zero-based array indices. If `true`, a value of "0" in a list represents the first array index. If `false`, a value of "1" represents the first index. The default is `true`.
Returns: (array) An array of "bool" values representing compound condition states for each condition group. An item in the array is `true` only if all the `conditions` elements specified by the corresponding `indexLists` item are `true`. Otherwise, the item is `false`.
getCompoundConditionsArray(upConditions, downConditions, indexLists, allowUp, allowDown, zeroIndex)
(Overload 2 of 2) Creates two arrays of "bool" values representing compound upward and downward conditions formed by selected elements from `upConditions` and `downConditions` arrays.
Parameters:
upConditions (array) : (array) An array containing the possible "bool" values to use in each upward compound condition.
downConditions (array) : (array) An array containing the possible "bool" values to use in each downward compound condition.
indexLists (array) : (array) An array of strings containing comma-separated lists of whole numbers representing the `upConditions` and `downConditions` elements to use in each compound condition. For example, if an item is `"0, 2, 4"`, the corresponding item in the first returned array is `true` only if the `upConditions` elements at index 0, 2, and 4 are all `true`. If an item is an empty "string", the items in both returned arrays are `false`.
allowUp (bool) : (series bool) Optional. Controls whether the function considers upward compound conditions. If `false`, the function ignores the `upConditions` array, and all elements in the first returned array are `false`. The default is `true`.
allowDown (bool) : (series bool) Optional. Controls whether the function considers downward compound conditions. If `false`, the function ignores the `downConditions` array, and all elements in the second returned array are `false`. The default is `true`.
zeroIndex (bool) : (series bool) Optional. Specifies whether the "string" lists in the `indexLists` represent zero-based array indices. If `true`, a value of "0" in a list represents the first array index. If `false`, a value of "1" represents the first index. The default is `true`.
Returns: ( ) A tuple containing two "bool" arrays:
- The first array contains values representing upward compound condition states determined using the `upConditions`.
- The second array contains values representing downward compound condition states determined using the `downConditions`.
compoundAlertMessage(conditions, indexLists, zeroIndex, groupMarkers)
(Overload 1 of 2) Creates a "string" message containing a comma-separated list of markers representing active compound conditions formed by specified element groups from a `conditions` array.
Parameters:
conditions (array) : (array) An array containing the possible "bool" values to use in each compound condition.
indexLists (array) : (array) An array of strings containing comma-separated lists of whole numbers representing the `conditions` elements to use in each compound condition. For example, if an item is `"0, 2, 4"`, the corresponding marker for that item appears in the returned "string" only if the `conditions` elements at index 0, 2, and 4 are all `true`.
zeroIndex (bool) : (series bool) Optional. Specifies whether the "string" lists in the `indexLists` represent zero-based array indices. If `true`, a value of "0" in a list represents the first array index. If `false`, a value of "1" represents the first index. The default is `true`.
groupMarkers (array) : (array) Optional. If specified, sets the marker (name) for each condition group specified in the `indexLists` array. If `na`, the function uses markers in the form "M" followed by the group's one-based index (e.g., the marker for the first listed group is "M1"), where "M" is short for "Marker". The default is `na`.
Returns: (string) A "string" containing a list of markers corresponding to each active compound condition.
compoundAlertMessage(upConditions, downConditions, indexLists, allowUp, allowDown, zeroIndex, groupMarkers, upChar, downChar)
(Overload 2 of 2) Creates two "string" messages containing comma-separated lists of markers representing active upward and downward compound conditions formed by specified element groups from `upConditions` and `downConditions` arrays.
Parameters:
upConditions (array) : An array containing the possible "bool" values to use in each upward compound condition.
downConditions (array) : An array containing the possible "bool" values to use in each downward compound condition.
indexLists (array) : An array of strings containing comma-separated lists of whole numbers representing the `upConditions` and `downConditions` element groups to use in each compound condition. For example, if an item is `"0, 2, 4"`, the corresponding group marker for that item appears in the first returned "string" only if the `upConditions` elements at index 0, 2, and 4 are all `true`.
allowUp (bool) : Optional. Controls whether the function considers upward compound conditions. If `false`, the function ignores the `upConditions` array and returns an empty "string" for the first tuple element. The default is `true`.
allowDown (bool) : Optional. Controls whether the function considers downward compound conditions. If `false`, the function ignores the `downConditions` array and returns an empty "string" for the second tuple element. The default is `true`.
zeroIndex (bool) : Optional. Specifies whether the "string" lists in the `indexLists` represent zero-based array indices. If `true`, a value of "0" in a list represents the first array index. If `false`, a value of "1" represents the first index. The default is `true`.
groupMarkers (array) : Optional. If specified, sets the name (marker) of each condition group specified in the `indexLists` array. If `na`, the function uses markers in the form "M" followed by the group's one-based index (e.g., the marker for the first listed group is "M1"), where "M" is short for "Marker". The default is `na`.
upChar (string) : Optional. A "string" appended to all group markers for upward conditions to signify direction. The default is "▲".
downChar (string) : Optional. A "string" appended to all group markers for downward conditions to signify direction. The default is "▼".
Returns: A tuple of "string" values containing lists of markers corresponding to active upward and downward compound conditions, respectively.
MTF_Drawings
Library 'MTF_Drawings'
This library helps with drawing indicators and candle charts on all timeframes.
FEATURES
CHART DRAWING : Library provides functions for drawing High Time Frame (HTF) and Low Time Frame (LTF) candles.
INDICATOR DRAWING : Library provides functions for drawing various types of HTF and LTF indicators.
CUSTOM COLOR DRAWING : Library allows coloring candles and indicators based on specific conditions.
LINEFILLS : Library provides functions for drawing linefills.
CATEGORIES
The functions are named in a way that indicates their purpose:
{Ind} : Function is meant only for indicators.
{Hist} : Function is meant only for histograms.
{Candle} : Function is meant only for candles.
{Draw} : Function draws indicators, histograms and candle charts.
{Populate} : Function generates necessary arrays required by drawing functions.
{LTF} : Function is meant only for lower timeframes.
{HTF} : Function is meant only for higher timeframes.
{D} : Function draws indicators that are composed of two lines.
{CC} : Function draws custom colored indicators.
USAGE
Import the library into your script.
Before using any {Draw} function it is necessary to use a {Populate} function.
Choose the appropriate one based on the category, provide the necessary arguments, and then use the {Draw} function, forwarding the arrays generated by the {Populate} function.
This doesn't apply to {Draw_Lines}, {LineFill}, or {Barcolor} functions.
EXAMPLE
import Spacex_trader/MTF_Drawings/1 as tf
//Request lower timeframe data.
Security(simple string Ticker, simple string New_LTF, float Ind) =>
    float[] Value = request.security_lower_tf(Ticker, New_LTF, Ind)
    Value

Timeframe = input.timeframe('1', 'Timeframe: ')
tf.Draw_Ind(tf.Populate_LTF_Ind(Security(syminfo.tickerid, Timeframe, ta.rsi(close, 14)), 498, color.purple), 1, true)
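For HTF drawings the workflow is similar, but the data typically comes from request.security(). The lines below are a hedged sketch only (the "60" timeframe, lengths, and the assumption that HTF_Bar_Index expects the higher timeframe's bar_index obtained this way are illustrative; verify against the library source):
//Request higher timeframe data (60-minute RSI and its bar_index).
[HTF_Rsi, HTF_Index] = request.security(syminfo.tickerid, '60', [ta.rsi(close, 14), bar_index])
tf.Draw_Ind(tf.Populate_HTF_Ind(HTF_Rsi, 200, color.orange, HTF_Index), 1, true)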
FUNCTION LIST
HTF_Candle(BarsBack, BodyBear, BodyBull, BordersBear, BordersBull, WickBear, WickBull, LineStyle, BoxStyle, LineWidth, HTF_Open, HTF_High, HTF_Low, HTF_Close, HTF_Bar_Index)
Populates two arrays with drawing data of the HTF candles.
Parameters:
BarsBack (int) : Number of bars to display.
BodyBear (color) : Candle body bear color.
BodyBull (color) : Candle body bull color.
BordersBear (color) : Candle border bear color.
BordersBull (color) : Candle border bull color.
WickBear (color) : Candle wick bear color.
WickBull (color) : Candle wick bull color.
LineStyle (string) : Wick style (Solid-Dotted-Dashed).
BoxStyle (string) : Border style (Solid-Dotted-Dashed).
LineWidth (int) : Wick width.
HTF_Open (float) : HTF open price.
HTF_High (float) : HTF high price.
HTF_Low (float) : HTF low price.
HTF_Close (float) : HTF close price.
HTF_Bar_Index (int) : HTF bar_index.
Returns: Two arrays with drawing data of the HTF candles.
LTF_Candle(BarsBack, BodyBear, BodyBull, BordersBear, BordersBull, WickBear, WickBull, LineStyle, BoxStyle, LineWidth, LTF_Open, LTF_High, LTF_Low, LTF_Close)
Populates two arrays with drawing data of the LTF candles.
Parameters:
BarsBack (int) : Number of bars to display.
BodyBear (color) : Candle body bear color.
BodyBull (color) : Candle body bull color.
BordersBear (color) : Candle border bear color.
BordersBull (color) : Candle border bull color.
WickBear (color) : Candle wick bear color.
WickBull (color) : Candle wick bull color.
LineStyle (string) : Wick style (Solid-Dotted-Dashed).
BoxStyle (string) : Border style (Solid-Dotted-Dashed).
LineWidth (int) : Wick width.
LTF_Open (float[]) : LTF open price.
LTF_High (float[]) : LTF high price.
LTF_Low (float[]) : LTF low price.
LTF_Close (float[]) : LTF close price.
Returns: Two arrays with drawing data of the LTF candles.
Draw_Candle(Box, Line, Offset)
Draws HTF or LTF candles.
Parameters:
Box (box[]) : Box array with drawing data.
Line (line[]) : Line array with drawing data.
Offset (int) : Offset of the candles.
Returns: Drawing of the candles.
Populate_HTF_Ind(IndValue, BarsBack, IndColor, HTF_Bar_Index)
Populates one array with drawing data of the HTF indicator.
Parameters:
IndValue (float) : Indicator value.
BarsBack (int) : Indicator lines to display.
IndColor (color) : Indicator color.
HTF_Bar_Index (int) : HTF bar_index.
Returns: An array with drawing data of the HTF indicator.
Populate_LTF_Ind(IndValue, BarsBack, IndColor)
Populates one array with drawing data of the LTF indicator.
Parameters:
IndValue (float[]) : Indicator value.
BarsBack (int) : Indicator lines to display.
IndColor (color) : Indicator color.
Returns: An array with drawing data of the LTF indicator.
Draw_Ind(Line, Mult, Exe)
Draws one HTF or LTF indicator.
Parameters:
Line (line[]) : Line array with drawing data.
Mult (int) : Coordinates multiplier.
Exe (bool) : Display the indicator.
Returns: Drawing of the indicator.
Populate_HTF_Ind_D(IndValue_1, IndValue_2, BarsBack, IndColor_1, IndColor_2, HTF_Bar_Index)
Populates two arrays with drawing data of the HTF indicators.
Parameters:
IndValue_1 (float) : First indicator value.
IndValue_2 (float) : Second indicator value.
BarsBack (int) : Indicator lines to display.
IndColor_1 (color) : First indicator color.
IndColor_2 (color) : Second indicator color.
HTF_Bar_Index (int) : HTF bar_index.
Returns: Two arrays with drawing data of the HTF indicators.
Populate_LTF_Ind_D(IndValue_1, IndValue_2, BarsBack, IndColor_1, IndColor_2)
Populates two arrays with drawing data of the LTF indicators.
Parameters:
IndValue_1 (float[]) : First indicator value.
IndValue_2 (float[]) : Second indicator value.
BarsBack (int) : Indicator lines to display.
IndColor_1 (color) : First indicator color.
IndColor_2 (color) : Second indicator color.
Returns: Two arrays with drawing data of the LTF indicators.
Draw_Ind_D(Line_1, Line_2, Mult, Exe_1, Exe_2)
Draws two LTF or HTF indicators.
Parameters:
Line_1 (line[]) : First line array with drawing data.
Line_2 (line[]) : Second line array with drawing data.
Mult (int) : Coordinates multiplier.
Exe_1 (bool) : Display the first indicator.
Exe_2 (bool) : Display the second indicator.
Returns: Drawings of the indicators.
Barcolor(Box, Line, BarColor)
Colors the candles based on indicator output.
Parameters:
Box (box[]) : Candle box array.
Line (line[]) : Candle line array.
BarColor (color[]) : Indicator color array.
Returns: Colored candles.
Populate_HTF_Ind_D_CC(IndValue_1, IndValue_2, BarsBack, BullColor, BearColor, IndColor_1, HTF_Bar_Index)
Populates two arrays with drawing data of the HTF indicators with color based on: IndValue_1 >= IndValue_2 ? BullColor : BearColor.
Parameters:
IndValue_1 (float) : First indicator value.
IndValue_2 (float) : Second indicator value.
BarsBack (int) : Indicator lines to display.
BullColor (color) : Bull color.
BearColor (color) : Bear color.
IndColor_1 (color) : First indicator color.
HTF_Bar_Index (int) : HTF bar_index.
Returns: Three arrays with drawing and color data of the HTF indicators.
Populate_LTF_Ind_D_CC(IndValue_1, IndValue_2, BarsBack, BullColor, BearColor, IndColor_1)
Populates two arrays with drawing data of the LTF indicators with color based on: IndValue_1 >= IndValue_2 ? BullColor : BearColor.
Parameters:
IndValue_1 (float[]) : First indicator value.
IndValue_2 (float[]) : Second indicator value.
BarsBack (int) : Indicator lines to display.
BullColor (color) : Bull color.
BearColor (color) : Bear color.
IndColor_1 (color) : First indicator color.
Returns: Three arrays with drawing and color data of the LTF indicators.
Populate_HTF_Hist_CC(HistValue, IndValue_1, IndValue_2, BarsBack, BullColor, BearColor, HTF_Bar_Index)
Populates one array with drawing data of the HTF histogram with color based on: IndValue_1 >= IndValue_2 ? BullColor : BearColor.
Parameters:
HistValue (float) : Indicator value.
IndValue_1 (float) : First indicator value.
IndValue_2 (float) : Second indicator value.
BarsBack (int) : Indicator lines to display.
BullColor (color) : Bull color.
BearColor (color) : Bear color.
HTF_Bar_Index (int) : HTF bar_index.
Returns: Two arrays with drawing and color data of the HTF histogram.
Populate_LTF_Hist_CC(HistValue, IndValue_1, IndValue_2, BarsBack, BullColor, BearColor)
Populates one array with drawing data of the LTF histogram with color based on: IndValue_1 >= IndValue_2 ? BullColor : BearColor.
Parameters:
HistValue (float[]) : Indicator value.
IndValue_1 (float[]) : First indicator value.
IndValue_2 (float[]) : Second indicator value.
BarsBack (int) : Indicator lines to display.
BullColor (color) : Bull color.
BearColor (color) : Bear color.
Returns: Two arrays with drawing and color data of the LTF histogram.
Populate_LTF_Hist_CC_VA(HistValue, Value, BarsBack, BullColor, BearColor)
Populates one array with drawing data of the LTF histogram with color based on: HistValue >= Value ? BullColor : BearColor.
Parameters:
HistValue (float[]) : Indicator value.
Value (float) : Value to compare the histogram against.
BarsBack (int) : Indicator lines to display.
BullColor (color) : Bull color.
BearColor (color) : Bear color.
Returns: Two arrays with drawing and color data of the LTF histogram.
Populate_HTF_Ind_CC(IndValue, IndValue_1, BarsBack, BullColor, BearColor, HTF_Bar_Index)
Populates one array with drawing data of the HTF indicator with color based on: IndValue >= IndValue_1 ? BullColor : BearColor.
Parameters:
IndValue (float) : Indicator value.
IndValue_1 (float) : Second indicator value.
BarsBack (int) : Indicator lines to display.
BullColor (color) : Bull color.
BearColor (color) : Bear color.
HTF_Bar_Index (int) : HTF bar_index.
Returns: Two arrays with drawing and color data of the HTF indicator.
Populate_LTF_Ind_CC(IndValue, IndValue_1, BarsBack, BullColor, BearColor)
Populates one array with drawing data of the LTF indicator with color based on: IndValue >= IndValue_1 ? BullColor : BearColor.
Parameters:
IndValue (float[]) : Indicator value.
IndValue_1 (float[]) : Second indicator value.
BarsBack (int) : Indicator lines to display.
BullColor (color) : Bull color.
BearColor (color) : Bear color.
Returns: Two arrays with drawing and color data of the LTF indicator.
Draw_Lines(BarsBack, y1, y2, LineType, Fill)
Draws price lines on indicators.
Parameters:
BarsBack (int) : Indicator lines to display.
y1 (float) : Coordinates of the first line.
y2 (float) : Coordinates of the second line.
LineType (string) : Line type.
Fill (color) : Fill color.
Returns: Drawing of the lines.
LineFill(Upper, Lower, BarsBack, FillColor)
Fills the space between two HTF or LTF lines with a linefill.
Parameters:
Upper (line[]) : Upper line.
Lower (line[]) : Lower line.
BarsBack (int) : Indicator lines to display.
FillColor (color) : Fill color.
Returns: Linefill of the lines.
Populate_LTF_Hist(HistValue, BarsBack, HistColor)
Populates one array with drawing data of the LTF histogram.
Parameters:
HistValue (float[]) : Indicator value.
BarsBack (int) : Indicator lines to display.
HistColor (color) : Indicator color.
Returns: One array with drawing data of the LTF histogram.
Populate_HTF_Hist(HistValue, BarsBack, HistColor, HTF_Bar_Index)
Populates one array with drawing data of the HTF histogram.
Parameters:
HistValue (float) : Indicator value.
BarsBack (int) : Indicator lines to display.
HistColor (color) : Indicator color.
HTF_Bar_Index (int) : HTF bar_index.
Returns: One array with drawing data of the HTF histogram.
Draw_Hist(Box, Mult, Exe)
Draws HTF or LTF histogram.
Parameters:
Box (box[]) : Box array.
Mult (int) : Coordinates multiplier.
Exe (bool) : Display the histogram.
Returns: Drawing of the histogram.
Statistical Package for the Trading Sciences [SS]
This is SPTS.
It stands for Statistical Package for the Trading Sciences.
It's a play on SPSS (Statistical Package for the Social Sciences) by IBM (software that, prior to Pinescript, I would use on a daily basis for trading).
Let's preface this indicator first:
This isn't so much an indicator as it is a project. A passion project really.
This has been in the works for months and I still feel like it's incomplete. But the plan here is to continue to add functionality to it and actually have the Pinecoding and Tradingview community contribute to it.
As a math based trader, I relied on Excel, SPSS and R constantly to plan my trades. Since learning a functional amount of Pinescript and coding a lot of what I previously relied on SPSS, Excel and R for, I now use them perhaps a few times a week.
This indicator, or package, has some of the key things I used Excel and SPSS for on a daily and weekly basis. This also adds a lot of, I would say, fairly complex math functionality to Pinescript. Because this is adding functionality not necessarily native to Pinescript, I have placed most, if not all, of the functionality into actual exportable functions. I have also set it up as a kind of library, with explanations and tips on how other coders can take these functions and implement them into other scripts.
The hope here is that other coders will take it, build upon it, improve it and hopefully share additional functionality that can be added into this package. Hence why I call it a project. Okay, let's get into an overview:
Current Functions of SPTS:
SPTS currently has the following functionality (further explanations will be offered below):
Ability to Perform a One-Tailed, Two-Tailed and Paired Sample T-Test, with corresponding P value.
Standard Pearson Correlation (with functionality to be able to calculate the Pearson Correlation between 2 arrays).
Quadratic (or Curvilinear) correlation assessments.
R squared Assessments.
Standard Linear Regression.
Multiple Regression of 2 independent variables.
Tests of Normality (with Kurtosis and Skewness) and recognition of up to 7 Different Distributions.
ARIMA Modeller (Sort of, more details below)
Okay, so let's go over each of them!
T-Tests
So traditionally, most correlation assessments on Pinescript are done with a generic Pearson Correlation using the "ta.correlation" function. However, this is not always the best test to use for assessing correlations and determining effects. One approach to correlation assessments used frequently in economics is the T-Test assessment.
The t-test is a statistical hypothesis test used to determine if there is a significant difference between the means of two groups. It assesses whether the sample means are likely to have come from populations with the same mean. The test produces a t-statistic, which is then compared to a critical value from the t-distribution to determine statistical significance. Lower p-values indicate stronger evidence against the null hypothesis of equal means.
A significant t-test result, indicating the rejection of the null hypothesis, suggests that there is statistical evidence to support that there is a significant difference between the means of the two groups being compared. In practical terms, it means that the observed difference in sample means is unlikely to have occurred by random chance alone. Researchers typically interpret this as evidence that there is a real, meaningful difference between the groups being studied.
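For illustration only (this is not SPTS' internal code), a Welch-style two-sample t-statistic can be computed from two arrays like this:
tStat(array<float> a, array<float> b) =>
    float n1 = array.size(a)
    float n2 = array.size(b)
    float v1 = array.variance(a)
    float v2 = array.variance(b)
    (array.avg(a) - array.avg(b)) / math.sqrt(v1 / n1 + v2 / n2)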
Some uses of the T-Test in finance include:
Risk Assessment: The t-test can be used to compare the risk profiles of different financial assets or portfolios. It helps investors assess whether the differences in returns or volatility are statistically significant.
Pairs Trading: Traders often apply the t-test when engaging in pairs trading, a strategy that involves trading two correlated securities. It helps determine when the price spread between the two assets is statistically significant and may revert to the mean.
Volatility Analysis: Traders and risk managers use t-tests to compare the volatility of different assets or portfolios, assessing whether one is significantly more or less volatile than another.
Market Efficiency Tests: Financial researchers use t-tests to test the Efficient Market Hypothesis by assessing whether stock price movements follow a random walk or if there are statistically significant deviations from it.
Value at Risk (VaR) Calculation: Risk managers use t-tests to calculate VaR, a measure of potential losses in a portfolio. It helps assess whether a portfolio's value is likely to fall below a certain threshold.
There are many other applications, but these are a few of the highlights. SPTS permits 3 different types of T-Test analyses, these being the One Tailed T-Test (if you want to test a single direction), two tailed T-Test (if you are unsure of which direction is significant) and a paired sample t-test.
Which T is the Right T?
Generally, a one-tailed t-test is used to determine if a sample mean is significantly greater than or less than a specified population mean, whereas a two-tailed t-test assesses if the sample mean is significantly different (either greater or less) from the population mean. In contrast, a paired sample t-test compares two sets of paired observations (e.g., before and after treatment) to assess if there's a significant difference in their means, typically used when the data points in each pair are related or dependent.
So which do you use? Well, it depends on what you want to know. As a general rule, a one-tailed t-test is sufficient and will help you pinpoint the directionality of the relationship (that one ticker or economic indicator has a significant effect on another in a linear way).
A two tailed is more broad and looks for significance in either direction.
A paired sample t-test usually looks at identical groups to see if one group has a statistically different outcome. This is usually used in clinical trials to compare treatment interventions in identical groups. Its use in finance is somewhat limited, but it is invaluable when you want to compare equities that track the same thing (for example SPX vs SPY vs ES1!) or you want to test a hypothesis about an index and a leveraged share (for example, the relationship between FNGU and, say, MSFT or NVDA).
Statistical Significance
In general, with a t-test you would need to reference a T-Table to determine the statistical significance from the degrees of freedom and the T-Statistic.
However, because I wanted Pinescript to fully replace SPSS and Excel, I went ahead and threw the T-Table into an array, so that Pinescript can make the determination itself of the actual P value for a t-test, no cross referencing required :-).
Left tail (Significant):
Both tails (Significant):
Distributed throughout (insignificant):
As you can see in the images above, the t-test will also display a bell-curve analysis of where the significance falls (left tail, both tails or insignificant, distributed throughout).
That said, I have not included this function for the paired sample t-test because that is a bit more nuanced. But for the one and two tailed assessments, the indicator will provide you the P value.
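As a hedged illustration of the array-based T-Table idea described above (not the script's actual table), a row of two-tailed critical t values for 30 degrees of freedom can be stored and scanned to bracket the p-value:
var array<float> critT   = array.from(1.697, 2.042, 2.457, 2.750)  // p = 0.10, 0.05, 0.02, 0.01 (two-tailed, df = 30)
var array<float> pLevels = array.from(0.10, 0.05, 0.02, 0.01)
pValueBracket(float tAbs) =>
    string result = "p > 0.10"
    for i = 0 to array.size(critT) - 1
        if tAbs >= array.get(critT, i)
            result := "p <= " + str.tostring(array.get(pLevels, i))
    result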
Pearson Correlation Assessment
I don't think I need to go into too much detail on this one.
I have put in functionality to quickly calculate the Pearson Correlation of two arrays, which is not currently possible with the "ta.correlation" function.
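A hedged sketch of that idea (illustrative, not necessarily SPTS' exact implementation):
pearson(array<float> xs, array<float> ys) =>
    float mx  = array.avg(xs)
    float my  = array.avg(ys)
    float sxy = 0.0
    float sxx = 0.0
    float syy = 0.0
    for i = 0 to array.size(xs) - 1
        float dx = array.get(xs, i) - mx
        float dy = array.get(ys, i) - my
        sxy += dx * dy
        sxx += dx * dx
        syy += dy * dy
    sxy / math.sqrt(sxx * syy)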
Quadratic (Curvilinear) Correlation
Not everything in life is linear, sometimes things are curved!
The Pearson Correlation is great for linear assessments, but tends to under-estimate the degree of the relationship in curved relationships. There currently is no native function to test for quadratic/curvilinear relationships, so I went ahead and created one.
You can see an example of how Quadratic and Pearson Correlations vary when you look at CME_MINI:ES1! against AMEX:DIA for the past 10 ish months:
Pearson Correlation:
Quadratic Correlation:
One or the other is not always the best, so it is important to check both!
R-Squared Assessments:
The R-squared value, or the square of the Pearson correlation coefficient (r), is used to measure the proportion of variance in one variable that can be explained by the linear relationship with another variable. It represents the goodness-of-fit of a linear regression model with a single predictor variable.
R-Squared is offered in 3 separate forms within this indicator. First, there is the generic R squared, which is obtained by squaring a Pearson correlation coefficient to assess the variance explained.
The next is the R-Squared which is calculated from an actual linear regression model done within the indicator.
The third is the R-Squared which is calculated from a multiple regression model done within the indicator.
Regardless of which R-Squared value you are using, the meaning is the same. R-Square assesses the variance between the variables under assessment and can offer an insight into the goodness of fit and the ability of the model to account for the degree of variance.
Here is the R Squared assessment of the SPX against the US Money Supply:
Standard Linear Regression
The indicator contains the ability to do a standard linear regression model. You can regress one stock, ticker, or economic indicator onto another stock, ticker, or economic indicator. The indicator will provide you with all of the expected information from a linear regression model, including the coefficients, intercept, error assessments, correlation and R2 value.
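For reference, a hedged sketch of the underlying least-squares arithmetic on two arrays (not SPTS' exact code):
linReg(array<float> xs, array<float> ys) =>
    float mx  = array.avg(xs)
    float my  = array.avg(ys)
    float sxy = 0.0
    float sxx = 0.0
    for i = 0 to array.size(xs) - 1
        float dx = array.get(xs, i) - mx
        sxy += dx * (array.get(ys, i) - my)
        sxx += dx * dx
    float slope     = sxy / sxx
    float intercept = my - slope * mx
    [slope, intercept]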
Here is AAPL and MSFT as an example:
Multiple Regression
Oh man, this was something I really wanted in Pinescript, and now we have it!
I have created a function for multiple regression, which, if you export the function, will permit you to perform multiple regression on any variables available in Pinescript!
Using this functionality in the indicator, you will need to select two independent variables and a single dependent variable.
Here is an example of multiple regression for NASDAQ:AAPL using NASDAQ:MSFT and NASDAQ:NVDA :
And an example of SPX using the US Money Supply (M2) and AMEX:GLD :
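For the curious, here is a hedged sketch of two-predictor least squares via the normal equations on mean-centered data (illustrative only, not SPTS' exact code):
mreg2(array<float> y, array<float> x1, array<float> x2) =>
    float my  = array.avg(y)
    float m1  = array.avg(x1)
    float m2  = array.avg(x2)
    float s11 = 0.0
    float s22 = 0.0
    float s12 = 0.0
    float s1y = 0.0
    float s2y = 0.0
    for i = 0 to array.size(y) - 1
        float d1 = array.get(x1, i) - m1
        float d2 = array.get(x2, i) - m2
        float dy = array.get(y, i) - my
        s11 += d1 * d1
        s22 += d2 * d2
        s12 += d1 * d2
        s1y += d1 * dy
        s2y += d2 * dy
    float det = s11 * s22 - s12 * s12
    float b1  = (s1y * s22 - s2y * s12) / det
    float b2  = (s2y * s11 - s1y * s12) / det
    float b0  = my - b1 * m1 - b2 * m2
    [b0, b1, b2]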
Tests of Normality:
Many indicators perform a lot of functions on the assumption of normality, yet there are no indicators that actually test that assumption!
So, I have included a function to assess for normality. It uses the Kurtosis and Skewness to determine up to 7 different distribution types and it will explain the implication of the distribution. Here is an example of SP:SPX on the Monthly Perspective since 2010:
And NYSE:BA since the 60s:
And NVDA since 2015:
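A hedged sketch of the moment calculations behind such a test (skewness and excess kurtosis from an array; not SPTS' exact code):
skewKurt(array<float> xs) =>
    int   n = array.size(xs)
    float m = array.avg(xs)
    float s2 = 0.0
    float s3 = 0.0
    float s4 = 0.0
    for i = 0 to n - 1
        float d = array.get(xs, i) - m
        s2 += d * d
        s3 += d * d * d
        s4 += d * d * d * d
    float variance = s2 / n
    float skew = (s3 / n) / math.pow(variance, 1.5)
    float kurt = (s4 / n) / (variance * variance) - 3.0  // excess kurtosis; near 0 is consistent with normal tails
    [skew, kurt]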
ARIMA Modeller
Okay, so let me disclose, this isn't a full-fledged ARIMA modeller. I took some shortcuts.
True ARIMA modelling would involve decomposing the seasonality from the trend. I omitted this step for simplicity's sake. Instead, you can select between using an EMA or SMA based approach, and it will perform an autoregressive type analysis on the EMA or SMA.
I have tested it on lookback with results provided by SPSS and this actually works better than SPSS' ARIMA function. So I am actually kind of impressed.
You will need to input your parameters for the ARIMA model, I usually would do a 14, 21 and 50 day EMA of the close price, and it will forecast out that range over the length of the EMA.
So for example, if you select the EMA 50 on the daily, it will plot out the forecast for the next 50 days based on an autoregressive model created on the EMA 50. Here is how it looks on AMEX:SPY :
You can also elect to plot the upper and lower confidence bands:
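To make the autoregressive idea concrete, here is a deliberately simplified AR(1)-style sketch on a smoothed series. This is NOT the script's actual model; it only illustrates regressing a series on its own prior value and iterating the fit forward:
arForecast(float smoothed, int lookback, int steps) =>
    float mean = ta.sma(smoothed, lookback)
    float sxy  = 0.0
    float sxx  = 0.0
    for i = 0 to lookback - 2
        float dx = smoothed[i + 1] - mean
        sxy += dx * (smoothed[i] - mean)
        sxx += dx * dx
    float phi = sxy / sxx
    array<float> path = array.new<float>()
    float f = smoothed
    for s = 1 to steps
        f := mean + phi * (f - mean)
        array.push(path, f)
    path
Calling, say, arForecast(ta.ema(close, 50), 200, 50) would return 50 projected values from an AR(1) fit over the last 200 bars of the 50-period EMA.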
Closing Remarks
So that is the indicator/package.
I do hope to continue expanding its functionality, but as of now, it does already have quite a lot of functionality.
I really hope you enjoy it and find it helpful. This. Has. Taken. AGES! No joke. Between referencing my old statistics textbooks, trying to remember how to calculate some of these things, and wanting to throw my computer against the wall because of errors in the code, this was a task, that's for sure. So I really hope you find some usefulness in it all and enjoy the ability to be able to do functions that previously could really only be done in external software.
As always, leave your comments, suggestions and feedback below!
Take care!
[Excalibur] Ehlers AutoCorrelation Periodogram Modified
Keep your coins folks, I don't need them, don't want them. If you wish to be generous, I do hope that charitable peoples worldwide with surplus food stocks may consider stocking local food banks before stuffing monetary bank vaults, for the crusade of remedying the needs of less than fortunate children, parents, elderly, homeless veterans, and everyone else who deserves nutritional sustenance for the soul.
DEDICATION:
This script is dedicated to the memory of Nikolai Dmitriyevich Kondratiev (Никола́й Дми́триевич Кондра́тьев) as tribute for being a pioneering economist and statistician, paving the way for modern econometrics by advocation of rigorous and empirical methodologies. One of his most substantial contributions to the study of business cycle theory include a revolutionary hypothesis recognizing the existence of dynamic cycle-like phenomenon inherent to economies that are characterized by distinct phases of expansion, stagnation, recession and recovery, what we now know as "Kondratiev Waves" (K-waves). Kondratiev was one of the first economists to recognize the vital significance of applying quantitative analysis on empirical data to evaluate economic dynamics by means of statistical methods. His understanding was that conceptual models alone were insufficient to adequately interpret real-world economic conditions, and that sophisticated analysis was necessary to better comprehend the nature of trending/cycling economic behaviors. Additionally, he recognized prosperous economic cycles were predominantly driven by a combination of technological innovations and infrastructure investments that resulted in profound implications for economic growth and development.
I will mention this... nation's economies MUST be supported and defended to continuously evolve incrementally in order to flourish in perpetuity OR suffer through eras with lasting ramifications of societal stagnation and implosion.
Analogous to the realm of economics, aperiodic cycles/frequencies, both enduring and ephemeral, do exist in all facets of life, every second of every day. To name a few that any blind man can naturally see are: heartbeat (cardiac cycles), respiration rates, circadian rhythms of sleep, powerful magnetic solar cycles, seasonal cycles, lunar cycles, weather patterns, vegetative growth cycles, and ocean waves. Do not pretend for one second that these basic aforementioned examples do not affect business cycle fluctuations in minuscule and monumental ways hour to hour, day to day, season to season, year to year, and decade to decade in every nation on the planet. Kondratiev's original seminal theories in macroeconomics from nearly a century ago have proven remarkably prescient with many of his antiquated elementary observations/notions/hypotheses in macroeconomics being scholastically studied and topically researched further. Therefore, I am compelled to honor and recognize his statistical insight and foresight.
If only.. Kondratiev could hold a pocket sized computer in the cup of both hands bearing the TradingView logo and platform services, I truly believe he would be amazed in marvelous delight with a GARGANTUAN smile on his face.
INTRODUCTION:
Firstly, this is NOT technically speaking an indicator like most others. I would describe it as an advanced cycle period detector to obtain market data spectral estimates with low latency and moderate frequency resolution. Developers can take advantage of this detector by creating scripts that utilize a "Dominant Cycle Source" input to adaptively govern algorithms. Be forewarned, I would only recommend this for advanced developers, not novice code dabbling. Although, there is some Pine wizardry introduced here for novice Pine enthusiasts to witness and learn from. AI did describe the code into one super-crunched sentence as, "a rare feat of exceptionally formatted code masterfully balancing visual clarity, precision, and complexity to provide immense educational value for both programming newcomers and expert Pine coders alike."
Understand all of the above aforementioned? Buckle up and proceed for a lengthy read of verbose complexity...
This is my enhanced and heavily modified version of autocorrelation periodogram (ACP) for Pine Script v5.0. It was originally devised by the mathemagician John Ehlers for detecting dominant cycles (frequencies) in an asset's price action. I have been sitting on code similar to this for a long time, but I decided to unleash the advanced code with my fashion. Originally Ehlers released this with multiple versions, one in a 2016 TASC article and the other in his last published 2013 book "Cycle Analytics for Traders", chapter 8. He wasn't joking about "concepts of advanced technical trading" and ACP is nowhere near to his most intimidating and ingenious calculations in code. I will say the book goes into many finer details about the original periodogram, so if you wish to delve into even more elaborate info regarding Ehlers' original ACP form AND how you may adapt algorithms, you'll have to obtain one. Note to reader, comparing Ehlers' original code to my chimeric code embracing the "Power of Pine", you will notice they have little resemblance.
What you see is a new species of autocorrelation periodogram combining Ehlers' innovation with my fascinations of what ACP could be in a Pine package. One other intention of this script's code is to pay homage to Ehlers' lifelong works. Like Kondratiev, Ehlers is also a hardcore cycle enthusiast. I intend to carry on the fire Ehlers envisioned and I believe that is literally displayed here as a pleasant "fiery" example endowed with Pine. With that said, I tried to make the code as computationally efficient as possible, without going into dozens of more crazy lines of code to speed things up even more. There's also a few creative modifications I made by making alterations to the originating formulas that I felt were improvements, one of them being lag reduction. By recently questioning every single thing I thought I knew about ACP, combined with the accumulation of my current knowledge base, this is the innovative revision I came up with. I could have improved it more but decided not to mind thrash too many TV members, maybe later...
I am now confident Pine should have adequate overhead left over to attach various indicators to the dominant cycle via input.source(). TV, I apologize in advance if in the future a server cluster combusts into a raging inferno... Coders, be fully prepared to build entire algorithms from pure raw code, because not all of the built-in Pine functions fully support dynamic periods (e.g. length=ANYTHING). Many of them do, as this was requested and granted a while ago, but some functions are just inherently finicky due to implementation combinations and MUST be emulated via raw code. I would imagine some comprehensive library or numerous authored scripts have portions of raw code for Pine built-ins some where on TV if you look diligently enough.
Notice: Unfortunately, I will not provide any integration support into member's projects at all. I have my own projects that require way too much of my day already. While I was refactoring my life (forgoing many other "important" endeavors) in the early half of 2023, I primarily focused on this code over and over in my surplus time. During that same time I was working on other innovations that are far above and beyond what this code is. I hope you understand.
The best way programmatically may be to incorporate this code into your private Pine project directly, after brutal testing of course, but that may be too challenging for many in early development. Being able to see the periodogram is also beneficial, so input sourcing may be the "better" avenue to tether portions of the dominant cycle to algorithms. Unique indication being able to utilize the dominantCycle may be advantageous when tethering this script to those algorithms. The easiest way is to manually set your indicators to what ACP recognizes as the dominant cycle, but that's actually not considered dynamic real time adaption of an indicator. Different indicators may need a proportion of the dominantCycle, say half it's value, while others may need the full value of it. That's up to you to figure that out in practice. Sourcing one or more custom indicators dynamically to one detector's dominantCycle may require code like this: `int sourceDC = int(math.max(6, math.min(49, input.source(close, "Dominant Cycle Source"))))`. Keep in mind, some algos can use a float, while algos with a for loop require an integer.
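For instance, a hedged two-line sketch of sourcing the detector from a separate //@version=5 indicator script might look like this (the input must be manually pointed at this script's dominant cycle output; ta.sma() is used because it accepts a dynamic "series int" length):
int sourceDC = int(math.max(6, math.min(49, input.source(close, "Dominant Cycle Source"))))
plot(ta.sma(close, sourceDC), "Adaptive SMA")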
I have witnessed a few attempts by talented TV members for a Pine based autocorrelation periodogram, but not in this caliber. Trust me, coding ACP is no ordinary task to accomplish in Pine and modifying it blessed with applicable improvements is even more challenging. For over 4 years, I have been slowly improving this code here and there randomly. It is beautiful just like a real flame, but... this one can still burn you! My mind was fried to charcoal black a few times wrestling with it in the distant past. My very first attempt at translating ACP was a month long endeavor because PSv3 simply didn't have arrays back then. Anyways, this is ACP with a newer engine, I hope you enjoy it. Any TV subscriber can utilize this code as they please. If you are capable of sufficiently using it properly, please use it wisely with intended good will. That is all I beg of you.
Lastly, you now see how I have rasterized my Pine with Ehlers' swami-like tech. Yep, this whole time I have been using hline() since PSv3, not plot(). Evidently, plot() still has a deficiency limited to only 32 plots when it comes to creating intense eye candy indicators, the last I checked. The use of hline() is the optimal choice for rasterizing Ehlers styled heatmaps. This does only contain two color schemes of the many I have formerly created, but that's all that is essentially needed for this gizmo. Anything else is generally for a spectacle or seeing how brutal Pine can be color treated. The real hurdle is being able to manipulate colors dynamically with Merlin like capabilities from multiple algo results. That's the true challenging part of these heatmap contraptions to obtain multi-colored "predator vision" level indication. You now have basic hline() food for thought empowerment to wield as you can imaginatively dream in Pine projects.
PERIODOGRAM UTILITY IN REAL WORLD SCENARIOS:
This code is a testament to abilities that have yet to be fully realized with indication advancements. Periodograms, spectrograms, and heatmaps are powerful tools with real-world applications in various fields such as financial markets, electrical engineering, astronomy, seismology, and neuro/medical applications. For instance, among these diverse fields, they may help traders and investors identify market cycles/periodicities in financial markets, support engineers in optimizing electrical or acoustic systems, aid astronomers in understanding celestial object attributes, assist seismologists with predicting earthquake risks, help medical researchers with neurological disorder identification, and aid in the detection of asymptomatic cardiovascular clotting in the vaxxed via full body thermography. In any of these fields of study, technologies akin to periodograms may very well provide us with a better sliver of analysis beyond what was formerly invented. Periodograms can identify dominant cycles and frequency components in data, which may provide valuable insights and possibly lead to better-informed decisions. By utilizing periodograms within aspects of market analytics, individuals and organizations can potentially refrain from making blinded decisions and leverage data-driven insights instead.
PERIODOGRAM INTERPRETATION:
The periodogram renders the power spectrum of a signal, with the y-axis representing the periodicity (frequencies/wavelengths) and the x-axis representing time. The y-axis is divided into periods, with each elevation representing a period. In this periodogram, the y-axis ranges from 6 at the very bottom to 49 at the top, with intermediate values in between, all indicating the power of the corresponding frequency component by color. The higher the position on the y-axis, the longer the period or the lower the frequency. The x-axis of the periodogram represents time and is divided into equal intervals, with each vertical column on the axis corresponding to the time interval when the signal was measured. The most recent values/colors are on the right side.
The intensity of the colors on the periodogram indicates the power level of the corresponding frequency or period. The fire color scheme is distinctly like the heat intensity of any casual flame witnessed in a small fire from a lighter, match, or campfire. The most intense power is indicated by the brightest yellow, while the lowest power is indicated by the darkest shade of red or just black. By analyzing the pattern of colors across different periods, one may gain insights into the dominant frequency components of the signal and visually identify recurring cycles/patterns of periodicity.
SETTINGS CONFIGURATIONS BRIEFLY EXPLAINED:
Source Options: These settings allow you to choose the data source for the analysis. Using the `Source` selection, you may tether to additional data streams (e.g. close, hlcc4, hl2), which may also include samples from any other indicator. For example, this could be my "Chirped Sine Wave Generator" script found in my member profile. By using the `SineWave` selection, you may analyze a theoretical sinusoidal wave with a user-defined period, something already incorporated into the code. The `SineWave` will be displayed on top of the periodogram.
Roofing Filter Options: These inputs control the range of the passband for ACP to analyze. Ehlers had two versions of his highpass filters for his releases, so I included an option for you to see the obvious difference when performing a comparison of both. You may choose between 1st and 2nd order high-pass filters.
Spectral Controls: These settings control the core functionality of the spectral analysis results. You can adjust the autocorrelation lag, adjust the level of smoothing for the Fourier coefficients, and control the contrast/behavior of the heatmap displaying the power spectra. Two color schemes are provided, selectable by checking or unchecking a checkbox.
Dominant Cycle Options: These settings allow you to customize the various types of dominant cycle values. You can choose between floating-point and integer values, and select the rounding method used to derive the final dominantCycle values. Also, you may control the level of smoothing applied to the dominant cycle values.
DOMINANT CYCLE VALUE SELECTIONS:
External to the acs() function, the code takes the dominant cycle value returned from acs() and changes its numeric form based on a specified type and form chosen within the indicator settings. The dominant cycle value can be represented as an integer or a decimal number, depending on the attached algorithm's requirements. For example, FIR filters will require an integer while many IIR filters can use a float. The float forms can be either rounded, smoothed, or floored. If the resulting value needs to be an integer, it can be rounded up/down or simply cast to integer form, depending on how your algorithm utilizes it.
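As a rough illustration only (the input and variable names below are hypothetical, not this script's actual identifiers), the conversion could look something like this, where `dcRaw` stands in for the value returned by acs():
float dcRaw = 20.5 // stand-in for the raw dominant cycle returned by acs()
string dcForm = input.string("Round", "Dominant Cycle Form", options = ["Round", "Floor", "Float"])
float dcSmooth = ta.ema(dcRaw, 3) // optional smoothing of the dominant cycle values
float dominantCycle = dcForm == "Round" ? math.round(dcSmooth) : dcForm == "Floor" ? math.floor(dcSmooth) : dcSmooth
int dominantCycleInt = int(dominantCycle) // FIR filters and for-loop based algos require an integer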
AUTOCORRELATION SPECTRUM FUNCTION BASICALLY EXPLAINED:
In the beginning of the acs() code, the population of caches for precalculated angular frequency factors and smoothing coefficients occurs. By precalculating these factors/coefs only once and then storing them in an array, the indicator saves time and computational resources when performing the subsequent calculations that require them later.
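A minimal sketch of that caching idea (not this script's exact variables) could look like this, storing one angular frequency per period in the 6-49 passband on the very first bar:
var float[] omega = array.new_float()
if barstate.isfirst
    for period = 6 to 49
        array.push(omega, 2.0 * math.pi / period)
// later loops can reuse math.cos(array.get(omega, period - 6) * lag) instead of recomputing the factor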
In the following code block, labeled "Calculate AutoCorrelations", autocorrelations are calculated for each period within the passband width. The calculation involves numerous summations of values extracted from the roofing filter. Finally, a correlation values array is populated with the resulting values, which are normalized correlation coefficients.
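A hedged sketch of the general approach (not the author's exact code) follows; `filt` is only a stand-in for the roofing filter's output, and the correlation is Pearson's r between the filter and a lagged copy of itself over a short averaging window:
float filt = close - ta.sma(close, 10) // stand-in only; the real script uses its roofing filter output
int avgLen = 3
var float[] correlations = array.new_float(50, 0.0)
for lag = 0 to 49
    float sx = 0.0
    float sy = 0.0
    float sxx = 0.0
    float syy = 0.0
    float sxy = 0.0
    for i = 0 to avgLen - 1
        float x = filt[i]
        float y = filt[lag + i]
        sx += x
        sy += y
        sxx += x * x
        syy += y * y
        sxy += x * y
    float denom = (avgLen * sxx - sx * sx) * (avgLen * syy - sy * sy)
    array.set(correlations, lag, denom > 0 ? (avgLen * sxy - sx * sy) / math.sqrt(denom) : 0.0)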
Moving on to the next block of code, labeled "Decompose Fourier Components", Fourier decomposition is performed on the autocorrelation coefficients. This time it iterates through the applicable period range of 6 to 49, calculating the real and imaginary parts of the Fourier components. Periods of 6 to 49 are the primary focus of interest for this periodogram. Using the precalculated angular frequency factors, the resulting real and imaginary parts are then utilized to calculate the spectral Fourier components, which are stored in an array for later use.
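Continuing the hedged sketch from above, the decomposition step might look roughly like this, projecting the correlation coefficients onto cosine/sine terms for each period in the 6-49 range and storing the resulting power:
var float[] power = array.new_float(50, 0.0)
for period = 6 to 49
    float cosPart = 0.0
    float sinPart = 0.0
    for lag = 3 to 48
        float corr = array.get(correlations, lag)
        cosPart += corr * math.cos(2.0 * math.pi * lag / period)
        sinPart += corr * math.sin(2.0 * math.pi * lag / period)
    array.set(power, period, cosPart * cosPart + sinPart * sinPart)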
The next section of code smooths the noise-ridden Fourier components between the periods of 6 and 49 with a selected filter. This version also employs numerous SuperSmoothers to condition the noisy Fourier components. One of the big differences is that Ehlers' versions used basic EMAs in this section of code; I decided to add SuperSmoothers instead.
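For reference, here is a standalone sketch of a single SuperSmoother (Ehlers' two-pole filter); in this script, filters of this kind condition each noisy Fourier component in place of a basic EMA, and the exact constants/placement may differ from what is shown:
superSmoother(float src, simple float period) =>
    float a1 = math.exp(-1.414 * math.pi / period)
    float b1 = 2.0 * a1 * math.cos(1.414 * math.pi / period)
    float c2 = b1
    float c3 = -a1 * a1
    float c1 = 1.0 - c2 - c3
    float filt = 0.0
    filt := c1 * (src + nz(src[1])) / 2.0 + c2 * nz(filt[1]) + c3 * nz(filt[2])
    filt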
The final sections of the acs() code determine the peak power component for normalization and then compute the dominant cycle period from the smoothed Fourier components. It first identifies the single spectral component with the highest power value and assigns it as the peak power. Next, it normalizes the spectral components using the peak power value as a denominator. It then calculates the average dominant cycle period from the normalized spectral components using Ehlers' "Center of Gravity" calculation. Finally, the function returns the dominant cycle period along with the normalized spectral components for later external use to plot the periodogram.
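To round out the hedged sketch, the peak-power normalization and "Center of Gravity" step could be approximated as follows (the threshold and names are illustrative, not the script's actual values):
float peakPower = 0.0
for period = 6 to 49
    peakPower := math.max(peakPower, array.get(power, period))
float spx = 0.0
float sp = 0.0
for period = 6 to 49
    float normPwr = peakPower > 0 ? array.get(power, period) / peakPower : 0.0
    if normPwr >= 0.5 // only weight sufficiently strong components
        spx += period * normPwr
        sp += normPwr
var float dcRaw = 10.0
dcRaw := sp != 0 ? spx / sp : dcRaw // dominant cycle period, carried forward when the spectrum is empty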
POST SCRIPT:
Concluding, I have to acknowledge a newly found analyst for assistance that I couldn't receive from anywhere else. For one, Claude doesn't know much about Pine, is unfortunately color blind, and can't even see the Pine reference, but it was able to intuitively shred my code with laser precise realizations. Not only that, formulating and reformulating my description needed crucial finesse applied to it, and I couldn't have provided what you have read here without that artificial insight. Finding the right order of words to convey the complexity of ACP and the elaborate accompanying content was a daunting task. No code in my life has ever absorbed so much time and hard fricking work as what you witness here, an ACP gem cut pristinely. I'm unveiling my version of ACP for an empowering cause, in the hopes a future global army of code wielders will tether it to highly functional computational contraptions they might possess. Here is ACP fully blessed poetically with the "Power of Pine" in sublime code. ENJOY!
[ChasinAlts]Top-Wicked Good S/R LinesHello Tradeurs, as per usual, I hope everyone is having a FAN-FRIGGIN-TASTIC day. With the soon incoming bull market approaching fast (Nov 7, 2022), there are a few ideas that I've really been trying to push out to help nail a few coins as they are near their very bottom of this closing Bear Market. This one may seem very similar to the last one I posted, but I think this one takes the cake... esp. when you see the next script from my 'Market Overview' series that I will be publishing shortly after this one, as it utilizes this new script for a market scanner that will be SUPER legit and profitable. Though it is always nice to be noticed, I'm glad that I'm relatively unpopular so the few people that are now following me can have some time to make some money with some of these scripts I'm trying to pump out for the benefit of the community. I will rarely give my full analysis of how I take in and utilize these scripts, but I can tell you, QUITE A FEW of them are money in the bank. Esp. these last few I've done/am doing and even more so the ones that are soon to come (I'm speaking of about the next 3-4 that I will be attempting to pump out in this next VERY IMPORTANT week). One more thing I'll add before getting to the script is a little alpha (I'm pretty certain this is the way it is going, but NOTHING is EVER 100% in life). What I believe should be realized is the bottoming out of MANY of the crypto coins at the VERY bottom of a LONG TERM Cup and Handle (so it seems, but that can still change in the blink of an eye). Thus there are quite a few coins that I believe have already bottomed and won't be returning to said bottom for a few years or so, but there are also quite a few still at the brink of the bottommost part before the real market breakout occurs. My goal with these scripts coming out this week is to help you all find those coins that have yet to hit their very bottom (thus the ATH/ATL script recently published). Going back in history looking for the lowest points of long term Cup & Handles, I will point out 2 key things. Near the center/bottommost part of these historical CnH you will see either Double Bottoms OR a huge dump and then its V-shaped recovery. After these print, the point of no return has occurred, where only a few coins will be going lower than these Double Bottoms/V-shaped recoveries. So the time is at hand. Now that many coins are seemingly pumping after this long consolidation, I believe we need to keep a keen eye out for THE FINAL RUG PULL (as soon as enough degenerates are leveraging Long their entire savings). What I'm saying is: be ready for this final rug pull and to finally be seeing these Double Bottoms/V-shaped recoveries VERY soon. DO NOT waste all your capital yet and MAKE SURE to use stop losses, or else rather than stop losses you will be burdened with MUCH WORSE losses. I'm currently not even in the market because I am waiting on said rug pull. Ok, on to the script now.
This script is similar to the last one, but with the previous one, one general set of settings could produce VASTLY different results (it might have 2 S/R lines on one coin and 80 on another). I wanted to fix that with this script, turn it into a "Market Overview" Scanner, and create alerts for the MO Scanner to be able to get alerted any time a coin is passing its largest-wick S/R levels because, DULY NOTE... it is VERY rare that a coin will blow past such a level if it hasn't approached it recently. That means that a small retrace of 3-5% (or more) is EASY to acquire (with leverage that can really add up with how many coins are in the Kucoin Margin Coin list that I have in my scanners). Now, once price does shoot through a level, you best be sure to be looking down the line for a retest of the S/R level it blew past, as these are MANY times the retest level and price will be coming back to it before continuing
in the direction it was going. Depending on the TF you're using, this could be a few hours to a few days to a few weeks... you get it. With this script you can choose to draw S/R lines 2 ways: 1) by having it plot S/R lines at the end of the largest 2 (3, 4, 5... however many you choose) wicks that the chart has access to. For the scanner I'll just be putting the largest 2-3 wicks and setting alerts when coming up to them/crossing them, and 2) by having it draw S/R lines at the ends of the largest X% of wicks. It will be erasing the lines and drawing new ones on each new candle occurrence, so the same general settings will no longer be producing VASTLY different amounts of S/R lines and will be way more consistent amongst the coins for better utilization with the scanner (when I publish it). There is also a Wick Max Cutoff % so, for those coins that had their first few hours printing 100% sized wicks, you can choose to ignore them so they are not taking up one of your top spots for the S/R lines. There is similarly a Wick % Min Size that can be selected, so if you're using the top % setting, it will help decrease those coins that can still be plotting 30 lines even though the top 3% of the largest wicks are set in the settings. Hope I'm being clear, but it's easy enough. I believe in you and your capabilities of comprehending it all and getting it all figured out. So this script is a visualization for the scanner that I will be uploading soon after. It's always nice to get a few comments if my ideas/scripts have been helpful to you, and please don't hold back if you have something to tell me that I screwed up on (I am still rather new to this coding thing, but I like to think I at least have some fresh ideas that aren't out there in the public library). Talk to you soon and may the force be with your trades. Peace and love people... peace and love. -ChasinAlts out.
OrdinaryLeastSquaresLibrary "OrdinaryLeastSquares"
One of the most common ways to estimate the coefficients for a linear regression is to use the Ordinary Least Squares (OLS) method.
This library implements OLS in Pine. This implementation can be used to fit a linear regression of multiple independent variables onto one dependent variable,
as long as the assumptions behind OLS hold.
solve_xtx_inv(x, y) Solve a linear system of equations using the Ordinary Least Squares method.
This function returns both the estimated OLS solution and a matrix that essentially measures the model stability (linear dependence between the columns of 'x').
NOTE: The latter is an intermediate step when estimating the OLS solution, but it is also useful when calculating the covariance matrix, so it is returned here to save computation time;
this way, the step doesn't have to be calculated again when things like standard errors need to be computed.
Parameters:
x : The matrix containing the independent variables. Each column is regarded by the algorithm as one independent variable. The row count of 'x' and 'y' must match.
y : The matrix containing the dependent variable. This matrix can only contain one dependent variable and can therefore only contain one column. The row count of 'x' and 'y' must match.
Returns: Returns both the estimated OLS solution and a matrix that essentially measures the model stability (xtx_inv is equal to (X'X)^-1).
solve(x, y) Solve a linear system of equations using the Ordinary Least Squares method.
Parameters:
x : The matrix containing the independent variables. Each column is regarded by the algorithm as one independent variable. The row count of 'x' and 'y' must match.
y : The matrix containing the dependent variable. This matrix can only contain one dependent variable and can therefore only contain one column. The row count of 'x' and 'y' must match.
Returns: Returns the estimated OLS solution.
standard_errors(x, y, beta_hat, xtx_inv) Calculate the standard errors.
Parameters:
x : The matrix containing the independent variables. Each column is regarded by the algorithm as one independent variable. The row count of 'x' and 'y' must match.
y : The matrix containing the dependent variable. This matrix can only contain one dependent variable and can therefore only contain one column. The row count of 'x' and 'y' must match.
beta_hat : The Ordinary Least Squares (OLS) solution provided by solve_xtx_inv() or solve().
xtx_inv : This is (X'X)^-1, which means we take the transpose of the X matrix, multiply that by the X matrix and then take the inverse of the result.
This essentially measures the linear dependence between the columns of the X matrix.
Returns: The standard errors.
estimate(x, beta_hat) Estimate the next step of a linear model.
Parameters:
x : The matrix containing the independent variables. Each column is regarded by the algorithm as one independent variable. The row count of 'x' and 'y' must match.
beta_hat : The Ordinary Least Squares (OLS) solution provided by solve_xtx_inv() or solve().
Returns: Returns the new estimate of Y based on the linear model.
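As a purely hypothetical usage sketch (the import path below is a placeholder for the library's actual publisher and version, and the matrix setup is illustrative only), fitting the last ten closes against a constant and a trend term might look like this:
// import PublisherName/OrdinaryLeastSquares/1 as ols  <- placeholder, replace with the real import path
matrix<float> x = matrix.new<float>(10, 2, 1.0) // column 0: constant term, column 1: regressor
matrix<float> y = matrix.new<float>(10, 1, 0.0) // one dependent variable; row count matches x
for i = 0 to 9
    matrix.set(x, i, 1, i) // hypothetical regressor values 0..9
    matrix.set(y, i, 0, close[9 - i]) // dependent variable: the last ten closes
beta_hat = ols.solve(x, y) // estimated OLS coefficients
fitted = ols.estimate(x, beta_hat) // estimate of Y based on the fitted linear model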
Higher-timeframe requests█ OVERVIEW
This publication focuses on enhancing awareness of the best practices for accessing higher-timeframe (HTF) data via the request.security() function. Some "traditional" approaches, such as what we explored in our previous `security()` revisited publication, have shown limitations in their ability to retrieve non-repainting HTF data. The fundamental technique outlined in this script is currently the most effective in preventing repainting when requesting data from a higher timeframe. For detailed information about why it works, see this section in the Pine Script™ User Manual .
█ CONCEPTS
Understanding repainting
Repainting is a behavior that occurs when a script's calculations or outputs behave differently after restarting it. There are several types of repainting behavior, not all of which are inherently useless or misleading. The most prevalent form of repainting occurs when a script's calculations or outputs exhibit different behaviors on historical and realtime bars.
When a script calculates across historical data, it only needs to execute once per bar, as those values are confirmed and not subject to change. After each historical execution, the script commits the states of its calculations for later access.
On a realtime, unconfirmed bar, values are fluid . They are subject to change on each new tick from the data provider until the bar closes. A script's code can execute on each tick in a realtime bar, meaning its calculations and outputs are subject to realtime fluctuations, just like the underlying data it uses. Each time a script executes on an unconfirmed bar, it first reverts applicable values to their last committed states, a process referred to as rollback . It only commits the new values from a realtime bar after the bar closes. See the User Manual's Execution model page to learn more.
In essence, a script can repaint when it calculates on realtime bars due to fluctuations before a bar's confirmation, which it cannot reproduce on historical data. A common strategy to avoid repainting when necessary involves forcing only confirmed values on realtime bars, which remain unchanged until each bar's conclusion.
Repainting in higher-timeframe (HTF) requests
When working with a script that retrieves data from higher timeframes with request.security() , it's crucial to understand the differences in how such requests behave on historical and realtime bars .
The request.security() function executes all code required by its `expression` argument using data from the specified context (symbol, timeframe, or modifiers) rather than on the chart's data. As when executing code in the chart's context, request.security() only returns new historical values when a bar closes in the requested context. However, the values it returns on realtime HTF bars can also update before confirmation, akin to the rollback and recalculation process that scripts perform in the chart's context on the open bar. Similar to how scripts operate in the chart's context, request.security() only confirms new values after a realtime bar closes in its specified context.
Once a script's execution cycle restarts, what were previously realtime bars become historical bars, meaning the request.security() call will only return confirmed values from the HTF on those bars. Therefore, if the requested data fluctuates across an open HTF bar, the script will repaint those values after it restarts.
This behavior is not a bug; it's simply the default behavior of request.security() . In some cases, having the latest information from an unconfirmed HTF bar is precisely what a script needs. However, in many other cases, traders will require confirmed, stable values that do not fluctuate across an open HTF bar. Below, we explain the most reliable approach to achieve such a result.
Achieving consistent timing on all bars
One can retrieve non-fluctuating values with consistent timing across historical and realtime feeds by exclusively using request.security() to fetch the data from confirmed HTF bars. The best way to achieve this result is to offset the `expression` argument by at least one bar (e.g., `close[1]`) and use barmerge.lookahead_on as the `lookahead` argument.
We discourage the use of barmerge.lookahead_on alone since it prompts the function to look toward future values of HTF bars across historical data, which is heavily misleading. However, when paired with a requested `expression` that includes a one-bar historical offset, the "future" data the function retrieves is not from the future. Instead, it represents the last confirmed bar's values at the start of each HTF bar, thus preventing the results on realtime bars from fluctuating before confirmation from the timeframe.
For example, this line of code uses a request.security() call with barmerge.lookahead_on to request the close price from the "1D" timeframe, offset by one bar with the history-referencing operator [ ]. This line will return the daily price with consistent timing across all bars:
float htfClose = request.security(syminfo.tickerid, "1D", close[1], lookahead = barmerge.lookahead_on)
Note that:
• This technique only works as intended for higher-timeframe requests .
• When designing a script to work specifically with HTFs, we recommend including conditions to prevent request.security() from accessing timeframes equal to or lower than the chart's timeframe, especially if you intend to publish it. In this script, we included an if structure that raises a runtime error when the requested timeframe is too small.
• A necessary trade-off with this approach is that the script must wait for an HTF bar's confirmation to retrieve new data on realtime bars, thus delaying its availability until the open of the subsequent HTF bar. The time elapsed during such a delay varies with each market, but it's typically relatively small.
👉 Failing to offset the function's `expression` argument while using barmerge.lookahead_on will produce historical results with lookahead bias , as it will look to the future states of historical HTF bars, retrieving values before the times at which they're available in the feed. See the `lookahead` and Future leak with `request.security()` sections in the Pine Script™ User Manual for more information.
Evolving practices
The fundamental technique outlined in this publication is currently the only reliable approach to requesting non-repainting HTF data with request.security() . It is the superior approach because it avoids the pitfalls of other methods, such as the one introduced in the `security()` revisited publication. That publication proposed using a custom `f_security()` function, which applied offsets to the `expression` and the requested result based on historical and realtime bar states. At that time, we explored techniques that didn't carry the risk of lookahead bias if misused (i.e., removing the historical offset on the `expression` while using lookahead), as requests that look ahead to the future on historical bars exhibit dangerously misleading behavior.
Despite these efforts, we've unfortunately found that the bar state method employed by `f_security()` can produce inaccurate results with inconsistent timing in some scenarios, undermining its credibility as a universal non-repainting technique. As such, we've deprecated that approach, and the Pine Script™ User Manual no longer recommends it.
█ METHOD VARIANTS
In this script, all non-repainting requests employ the same underlying technique to avoid repainting. However, we've applied variants to cater to specific use cases, as outlined below:
Variant 1
Variant 1, which the script displays using a lime plot, demonstrates a non-repainting HTF request in its simplest form, aligning with the concept explained in the "Achieving consistent timing" section above. It uses barmerge.lookahead_on and offsets the `expression` argument in request.security() by one bar to retrieve the value from the last confirmed HTF bar. For detailed information about why this works, see the Avoiding Repainting section of the User Manual's Other timeframes and data page.
Variant 2
Variant 2 ( fuchsia ) introduces a custom function, `htfSecurity()`, which wraps the request.security() function to facilitate convenient repainting control. By specifying a value for its `repaint` parameter, users can determine whether to allow repainting HTF data. When the `repaint` value is `false`, the function applies lookahead and a one-bar offset to request the last confirmed value from the specified `timeframe`. When the value is `true`, the function requests the `expression` using the default behavior of request.security() , meaning the results can fluctuate across chart bars within realtime HTF bars and repaint when the script restarts.
Note that:
• This function exclusively handles HTF requests. If the requested timeframe is not higher than the chart's, it will raise a runtime error .
• We prefer this approach since it provides optional repainting control. Sometimes, a script's calculations need to respond immediately to realtime HTF changes, which `repaint = true` allows. In other cases, such as when issuing alerts, triggering strategy commands, and more, one will typically need stable values that do not repaint, in which case `repaint = false` will produce the desired behavior.
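A minimal sketch of a wrapper in this spirit follows (the published function's exact signature and internals may differ from this illustration):
htfSecurity(simple string tf, float expression, simple bool repaint) =>
    if timeframe.in_seconds(tf) <= timeframe.in_seconds()
        runtime.error("The requested timeframe must be higher than the chart's timeframe.")
    int offset = repaint ? 0 : 1
    request.security(syminfo.tickerid, tf, expression[offset], lookahead = repaint ? barmerge.lookahead_off : barmerge.lookahead_on)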
Variant 3
Variant 3 ( white ) builds upon the same fundamental non-repainting approach used by the first two. The difference in this variant is that it applies repainting control to tuples , which one cannot pass as the `expression` argument in our `htfSecurity()` function. Tuples are handy for consolidating `request.*()` calls when a script requires several values from the same context, as one can request a single tuple from the context rather than executing multiple separate request.security() calls.
This variant applies the internal logic of our `htfSecurity()` function in the script's global scope to request a tuple containing open and `srcInput` values from a higher timeframe with repainting control. Historically, Pine Script™ did not allow the history-referencing operator [ ] when requesting tuples unless the tuple came from a function call, which limited this technique. However, updates to Pine over time have lifted this restriction, allowing us to pass tuples with historical offsets directly as the `expression` in request.security() . By offsetting all items in a tuple `expression` by one bar and using barmerge.lookahead_on , we effectively retrieve a tuple of stable, non-repainting HTF values.
Since we cannot encapsulate this method within the `htfSecurity()` function and must execute the calculations in the global scope, the script's "Repainting" input directly controls the global `offset` and `lookahead` values to ensure it behaves as intended.
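A hedged sketch of that global-scope logic (the variable names here are illustrative; `htfString` stands for the higher timeframe in use and `srcInput` for the "Source" input) might read:
int tupleOffset = repaintInput ? 0 : 1
tupleLookahead = repaintInput ? barmerge.lookahead_off : barmerge.lookahead_on
[htfOpen, htfSrc] = request.security(syminfo.tickerid, htfString, [open[tupleOffset], srcInput[tupleOffset]], lookahead = tupleLookahead)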
Variant 4 (Control)
Variant 4, which the script displays as a translucent orange plot, uses a default request.security() call, providing a reference point to compare the difference between a repainting request and the non-repainting variants outlined above. Whenever the script restarts its execution cycle, realtime bars become historical bars, and the request.security() call here will repaint the results on those bars.
█ Inputs
Repainting
The "Repainting" input (`repaintInput` variable) controls whether Variant 2 and Variant 3 are allowed to use fluctuating values from an unconfirmed HTF bar. If its value is `false` (default), these requests will only retrieve stable values from the last confirmed HTF bar.
Source
The "Source" input (`srcInput` variable) determines the series the script will use in the `expression` for all HTF data requests. Its default value is close .
HTF Selection
This script features two ways to specify the higher timeframe for all its data requests, which users can control with the "HTF Selection" input (`tfTypeInput` variable):
1) If its value is "Fixed TF", the script uses the timeframe value specified by the "Fixed Higher Timeframe" input (`fixedTfInput` variable). The script will raise a runtime error if the selected timeframe is not larger than the chart's.
2) If the input's value is "Multiple of chart TF", the script multiplies the value of the "Timeframe Multiple" input (`tfMultInput` variable) by the chart's timeframe.in_seconds() value, then converts the result to a valid timeframe string via timeframe.from_seconds() .
Timeframe Display
This script features the option to display an "information box", i.e., a single-cell table that shows the higher timeframe the script is currently using. Users can toggle the display and determine the table's size, location, and color scheme via the inputs in the "Timeframe Display" group.
█ Outputs
This script produces the following outputs:
• It plots the results from all four of the above variants for visual comparison.
• It highlights the chart's background gray whenever a new bar starts on the higher timeframe, signifying when confirmations occur in the requested context.
• To demarcate which bars the script considers historical or realtime bars, it plots squares with contrasting colors corresponding to bar states at the bottom of the chart pane.
• It displays the higher timeframe string in a single-cell table with a user-specified size, location, and color scheme.
Look first. Then leap.
CandlestickPatternsLibrary "CandlestickPatterns"
This library provides a wide range of candlestick patterns, and each pattern is available for users to call individually. It's a comprehensive and common tool designed for traders seeking to elevate their technical analysis, and it may help users identify key turning points in the price action of financial instruments. Credit to the public technical "*All Candlestick Patterns*" indicator.
abandonedBaby(order, d1)
The "Abandoned Baby" candlestick pattern is a bullish/bearish pattern consists of three candles.
Parameters:
order (simple string) : (simple string) Pattern order type "bull" or "bear".
d1 (simple float) : (simple float) Previous candle's body percentage out of candle range. Optional argument, default is 5.
darkCloudCover(c1, n)
The "Dark Cloud Cover" is a bearish pattern consists of two candles.
Parameters:
c1 (simple bool) : (simple bool) Previous candle's body must be higher than average. Optional argument, default is true.
n (simple int) : (simple int) Length of average candle's body. Optional argument, default is 14.
doji(d0)
The "Doji" is neither bullish or bearish consists of one candles.
Parameters:
d0 (simple float) : (simple float) Current candle's body percentage out of candle range. Optional argument, default is 5.
dojiStar(order, c1, n, d0)
The "Doji Star" is a bullish/bearish pattern consists of two candles.
Parameters:
order (simple string) : (simple string) Pattern order type "bull" or "bear" .
c1 (simple bool) : (simple bool) Previous candle's body must be higher than average. Optional argument, default is true.
n (simple int) : (simple int) Length of average candle's body. Optional argument, default is 14.
d0 (simple float) : (simple float) Current candle's body percentage out of candle range. Optional argument, default is 5.
downsideTasukiGap(c2, c1, n)
The "Downside Tasuki Gap" is a bearish pattern consists of three candles.
Parameters:
c2 (simple bool) : (simple bool) Before previous candle's body must be higher than average. Optional argument, default is true.
c1 (simple bool) : (simple bool) Previous candle's body must be lower than average. Optional argument, default is true.
n (simple int) : (simple int) Length of average candle's body. Optional argument, default is 14.
dragonflyDoji(d0)
The "Dragon Fly Doji" is a bullish pattern consists of one candle.
Parameters:
d0 (simple float) : (simple float) Current candle's body percentage out of candle range. Optional argument, default is 5.
engulfing(order, c1, c0, n)
The "Engulfing" is a bullish/bearish pattern consists of two candles.
Parameters:
order (simple string) : (simple string) Pattern order type "bull" or "bear".
c1 (simple bool) : (simple bool) Previous candle's body must be lower than average. Optional argument, default is true.
c0 (simple bool) : (simple bool) Current candle's body must be higher than average. Optional argument, default is true.
n (simple int) : (simple int) Length of average candle's body. Optional argument, default is 14.
eveningDojiStar(c2, c0, d1, n)
The "Evening Doji Star" is a bearish pattern consists of three candles.
Parameters:
c2 (simple bool) : (simple bool) Before previous candle's body must be higher than average, default is true.
c0 (simple bool) : (simple bool) Current candle's body must be higher than average. Optional argument, default is true.
d1 (simple float) : (simple float) Previous candle's body percentage out of candle range. Optional argument, default is 5.
n (simple int) : (simple int) Length of average candle's body. Optional argument, default is 14.
eveningStar(c2, c1, c0, n)
The "Evening Star" is a bearish pattern consists of three candles.
Parameters:
c2 (simple bool) : (simple bool) Before previous candle's body must be higher than average. Optional argument, default is true.
c1 (simple bool) : (simple bool) Previous candle's body must be lower than average. Optional argument, default is true.
c0 (simple bool) : (simple bool) Current candle's body must be higher than average. Optional argument, default is true.
n (simple int) : (simple int) Length of average candle's body. Optional argument, default is 14.
fallingThreeMethods(c4, c3, c2, c1, c0, n)
The "Falling Three Methods" is a bearish pattern consists of five candles.
Parameters:
c4 (simple bool) : (simple bool) 5th candle ago body must be higher than average. Optional argument, default is true.
c3 (simple bool) : (simple bool) 4th candle ago body must be lower than average. Optional argument, default is true.
c2 (simple bool) : (simple bool) 3rd candle ago body must be lower than average. Optional argument, default is true.
c1 (simple bool) : (simple bool) 2nd candle ago body must be lower than average. Optional argument, default is true.
c0 (simple bool) : (simple bool) Current candle's body must be higher than average. Optional argument, default is true.
n (simple int) : (simple int) Length of average candle's body. Optional argument, default is 14.
Returns: (bool)
fallingWindow()
The "Falling Window" is a bearish pattern consists of two candles.
gravestoneDoji(d0)
The "Gravestone Doji" is a bearish pattern consists of one candle.
Parameters:
d0 (simple float) : (simple float) Current candle's body percentage out of candle range. Optional argument, default is 5.
hammer(c0, n)
The "Hammer" is a bullish pattern consists of one candle.
Parameters:
c0 (simple bool) : (simple bool) Current candle's body must be lower than average. Optional argument, default is true.
n (simple int) : (simple int) Length of average candle's body. Optional argument, default is 14.
hangingMan(c0, n)
The "Hanging Man" is a bearish pattern consists of one candle.
Parameters:
c0 (simple bool) : (simple bool) Current candle's body must be lower than average. Optional argument, default is true.
n (simple int) : (simple int) Length of average candle's body. Optional argument, default is 14.
haramiCross(order, c1, n)
The "Harami Cross" candlestick pattern is a bullish/bearish pattern consists of two candles.
Parameters:
order (string) : (simple string) Pattern order type "bull" or "bear".
c1 (simple bool) : (simple bool) Previous candle's body must be higher than average. Optional argument, default is true.
n (simple int) : (simple int) Length of average candle's body. Optional argument, default is 14.
harami(order, c1, c0, n)
The "Harami" candlestick pattern is a bullish/bearish pattern consists of two candles.
Parameters:
order (string) : (simple string) Pattern order type "bull" or "bear"
c1 (simple bool) : (simple bool) Previous candle's body must be higher than average. Optional argument, default is true.
c0 (simple bool) : (simple bool) Current candle's body must be lower than average. Optional argument, default is true.
n (simple int) : (simple int) Length of average candle's body. Optional argument, default is 14.
invertedHammer(c0, n)
The "Inverted Hammer" is a bullish pattern consists of one candle.
Parameters:
c0 (simple bool) : (simple bool) Current candle's body must be lower than average. Optional argument, default is true.
n (simple int) : (simple int) Length of average candle's body. Optional argument, default is 14.
kicking(order, c1, c0, n)
The "Kicking" candlestick pattern is a bullish/bearish pattern consists of two candles.
Parameters:
order (string) : (simple string) Pattern order type "bull" or "bear"
c1 (simple bool) : (simple bool) Previous candle's body must be higher than average. Optional argument, default is true.
c0 (simple bool) : (simple bool) Current candle's body must be higher than average. Optional argument, default is true.
n (simple int) : (simple int) Length of average candle's body. Optional argument, default is 14.
longLowerShadow(l0)
The "Long Lower Shadow" candlestick pattern is a bullish pattern consists of one candles.
Parameters:
l0 (simple float) : (simple float) Current candle's lower wick min percentage out of candle range. Optional argument, default is 75.
longUpperShadow(u0)
The "Long Upper Shadow" candlestick pattern is a bearish pattern consists of one candles.
Parameters:
u0 (simple float) : (simple float) Current candle's upper wick min percentage out of candle range. Optional argument, default is 75.
marubozuBlack(c0, n)
The "Marubozu Black" candlestick pattern is a bearish pattern consists of one candles.
Parameters:
c0 (simple bool) : (simple bool) Current candle's body must be higher than average. Optional argument, default is true.
n (simple int) : (simple int) Length of average candle's body. Optional argument, default is 14.
marubozuWhite(c0, n)
The "Marubozu White" candlestick pattern is a bullish pattern consists of one candles.
Parameters:
c0 (simple bool) : (simple bool) Current candle's body must be higher than average. Optional argument, default is true.
n (simple int) : (simple int) Length of average candle's body. Optional argument, default is 14.
morningDojiStar(c2, d1, c0, n)
The "Morning Doji Star" candlestick pattern is a bullish pattern consists of three candles.
Parameters:
c2 (simple bool) : (simple bool) Before previous candle's body must be higher than average. Optional argument, default is true.
d1 (simple float) : (simple float) Previous candle's body percentage out of candle range. Optional argument, default is 5.
c0 (simple bool) : (simple bool) Current candle's body must be higher than average. Optional argument, default is true.
n (simple int) : (simple int) Length of average candle's body. Optional argument, default is 14.
morningStar(c2, c1, c0, n)
The "Morning Star" candlestick pattern is a bullish pattern consists of three candles.
Parameters:
c2 (simple bool) : (simple bool) Before previous candle's body must be higher than average. Optional argument, default is true.
c1 (simple bool) : (simple bool) Previous candle's body must be lower than average. Optional argument, default is true.
c0 (simple bool) : (simple bool) Current candle's body must be higher than average. Optional argument, default is true.
n (simple int) : (simple int) Length of average candle's body. Optional argument, default is 14.
onNeck(c1, c0, n)
The "On Neck" candlestick pattern is a bearish pattern consists of two candles.
Parameters:
c1 (simple bool) : (simple bool) Previous candle's body must be higher than average. Optional argument, default is true.
c0 (simple bool) : (simple bool) Current candle's body must be lower than average. Optional argument, default is true.
n (simple int) : (simple int) Length of average candle's body. Optional argument, default is 14.
piercing(c1, n)
The "Piercing" candlestick pattern is a bullish pattern consists of two candles.
Parameters:
c1 (simple bool) : (simple bool) Previous candle's body must be higher than average. Optional argument, default is true.
n (simple int) : (simple int) Length of average candle's body. Optional argument, default is 14.
risingThreeMethods(c4, c3, c2, c1, c0, n)
The "Rising Three Methods" candlestick pattern is a bullish pattern consists of five candles.
Parameters:
c4 (simple bool) : (simple bool) 5th candle ago body must be higher than average. Optional argument, default is true.
c3 (simple bool) : (simple bool) 4th candle ago body must be lower than average. Optional argument, default is true.
c2 (simple bool) : (simple bool) 3rd candle ago body must be lower than average. Optional argument, default is true.
c1 (simple bool) : (simple bool) 2nd candle ago body must be lower than average. Optional argument, default is true.
c0 (simple bool) : (simple bool) Current candle's body must be higher than average. Optional argument, default is true.
n (simple int) : (simple int) Length of average candle's body. Optional argument, default is 14.
risingWindow()
The "Rising Window" candlestick pattern is a bullish pattern consists of two candle.
shootingStar(c0, n)
The "Shooting Star" candlestick pattern is a bearish pattern consists of one candle.
Parameters:
c0 (simple bool) : (simple bool) Current candle's body must be higher than average. Optional argument, default is true.
n (simple int) : (simple int) Length of average candle's body. Optional argument, default is 14.
spinningTopBlack(l0, u0)
The "Spinning Top Black" is neither bullish or bearish.
Parameters:
l0 (simple float) : (simple float) Current candle's lower wick min percentage out of candle range. Optional argument, default is 34.
u0 (simple float) : (simple float) Current candle's upper wick min percentage out of candle range. Optional argument, default is 34.
spinningTopWhite(l0, u0)
The "Spinning Top White" is neither bullish or bearish.
Parameters:
l0 (simple float) : (simple float) Current candle's lower wick min percentage out of candle range. Optional argument, default is 34.
u0 (simple float) : (simple float) Current candle's upper wick min percentage out of candle range. Optional argument, default is 34.
threeBlackCrows(c2, c1, c0, n)
The "Three Black Crows" candlestick pattern is a bearish pattern consists of three candles.
Parameters:
c2 (simple bool) : (simple bool) Before previous candle's body must be higher than average. Optional argument, default is true.
c1 (simple bool) : (simple bool) Previous candle's body must be higher than average. Optional argument, default is true.
c0 (simple bool) : (simple bool) Current candle's body must be higher than average. Optional argument, default is true.
n (simple int) : (simple int) Length of average candle's body. Optional argument, default is 14.
threeWhiteSoldiers(c2, c1, c0, n)
The "Three White Soldiers" candlestick pattern is a bullish pattern consists of three candles.
Parameters:
c2 (simple bool) : (simple bool) Before previous candle's body must be higher than average. Optional argument, default is true.
c1 (simple bool) : (simple bool) Previous candle's body must be higher than average. Optional argument, default is true.
c0 (simple bool) : (simple bool) Current candle's body must be higher than average. Optional argument, default is true.
n (simple int) : (simple int) Length of average candle's body. Optional argument, default is 14.
triStar(order, d2, d1, d0)
The "Tri Star" candlestick pattern is a bullish/bearish pattern consists of three candles.
Parameters:
order (simple string) : (simple string) Pattern order type "bull" or "bear".
d2 (simple float) : (simple float) Before previous candle's body percentage out of candle range. Optional argument, default is 5.
d1 (simple float) : (simple float) Previous candle's body percentage out of candle range. Optional argument, default is 5.
d0 (simple float) : (simple float) Current candle's body percentage out of candle range. Optional argument, default is 5.
tweezerBottom(c1, n)
The "Tweezer Bottom" candlestick pattern is a bullish pattern consists of two candles.
Parameters:
c1 (simple bool) : (simple bool) Previous candle's body must be higher than average. Optional argument, default is true.
n (simple int) : (simple int) Length of average candle's body. Optional argument, default is 14.
tweezerTop(c1, n)
The "Tweezer Top" candlestick pattern is a bearish pattern consists of two candles.
Parameters:
c1 (simple bool) : (simple bool) Previous candle's body must be higher than average. Optional argument, default is true.
n (simple int) : (simple int) Length of average candle's body. Optional argument, default is 14.
upsideTasukiGap(c2, c1, n)
The "Tri Star" candlestick pattern is a bullish pattern consists of three candles.
Parameters:
c2 (simple bool) : (simple bool) Before previous candle's body must be higher than average. Optional argument, default is true.
c1 (simple bool) : (simple bool) Previous candle's body must be lower than average. Optional argument, default is true.
n (simple int) : (simple int) Length of average candle's body. Optional argument, default is 14.
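As a purely hypothetical usage sketch (the import path is a placeholder for the library's actual publisher and version; the arguments follow the parameter documentation above):
// import PublisherName/CandlestickPatterns/1 as cp  <- placeholder, replace with the real import path
bool bullEngulfing = cp.engulfing("bull") // only the required `order` argument is supplied
bool bearShootingStar = cp.shootingStar() // all arguments are optional, so the documented defaults apply
plotshape(bullEngulfing, style = shape.triangleup, location = location.belowbar, color = color.green)
plotshape(bearShootingStar, style = shape.triangledown, location = location.abovebar, color = color.red)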
Realtime FootprintThe purpose of this script is to gain a better understanding of the order flow via the footprint. To that end, I have added unusual features in addition to the standard ones.
I use the main engine of "Real Time 5D Profile by LucF" to create the basic footprint (profile type) and added some popular features and my favorites.
This script can only be used in realtime, because TradingView doesn't provide historical Bid/Ask data.
The Bid/Ask data used in this script are up/down ticks.
This script can only be used on time-based charts (1m, 5m, 60m, daily, etc.).
This script uses many labels, and labels are limited to a maximum of 500, so you can't display many bars.
If you want to display footprint bars for longer, turn off the unused sub-display functions.
The default setting uses 25 labels for the footprint, 1 for the IB count, 1 for COT high and Ratio high, 1 for COT low and Ratio low, and 1 for Delta Box Ratio Volume, for a total of 29.
In addition, UA, IB stripes and the ladder fading marks use several labels.
///////// General Setting ///////////
Resets on Volume / Range bar
: If you want to use simple time based Resets on, please set Total Volume is 0.
Your timeframe is always the first condition. So if you set Total Volume is 1000, both conditions(Volume >= 1000 and your timeframe start next bar) must be met. (that is, new footprint bar doesn't start at when total volume = exactly 1000).
Ticks per row and Maximum row of Bar
: 1 is minimum size(tick). "Maximum row of Bar" decide the number of rows used in one footprint. 1 row is created from 1 label, so you need to reduce this number to display many footprints (Max label is 500).
Volume Filter and For Calculation and Display
: "Volume Filter" decide minimum size of using volume for this script.
"For Calculation and Display" is used to convert volume to an integer.
This script only use integer to make profile look better (I contained Bid number and Ask number in one row( one label) to saving labels. This require to make no difference in width by the number of digits and this script corresponds integers from 0 to 3 digits).
ex) The symbol's average volume size is from 0.0001 to 0.001. You decide to only use Volume >= 0.0005 via "Volume Filter".
Next, you convert the volume to an integer by setting "For Calculation and Display" to 1000 (0.0005 * 1000 = 5).
If 0.00052 → 5.2 → 5; 0.00058 → 5.8 → 6 (decimal numbers are rounded off).
This integer is used in all calculations in this script.
//////// Main Display ///////
Footprint, Total, Row Delta, Diagonal Delta and Profile
: "Footprint" display Ask and Bid per row. "Total" display Ask + Bid per row.
"Row Delta" display Ask - Bid per row. "Diagonal Delta" display Ask(row N) - Bid(row N -1) per row.
Profile display Total Volume(Ask + Bid) per row by using Block. Profile Block coloring are decided by Row Delta value(default: positive Row Delta (Ask > Bid) is greenish colors and negative Row Delta (Ask < Bid) is reddish colors.)
Volume per Profile Block, Row Imbalance Ratio and Delta Bull/Bear/Neutral Colors
: "Volume per Profile Block" decide one block contain how many total volume.
ex) When you set 20, Total volume 70 display 3 block.
The maximum number of blocks that can be used per low is 20.
So if you set 20, Total volume 400 is 20 blocks. total volume 800 is 20 blocks too.
"Row Imbalance Ratio" decide block coloring. The row imbalance is that the difference between Ask and Bid (row delta) is large.
default is x3, x2 and x1. The larger the difference, the brighter the color.
ex) Ask 30 Bid 10 is light green. Ask 20 Bid 10 is green. Ask 11 Bid 10 is dark green.
Ask 0 Bid 1 is light red. Ask 1 Bid 2 is red. Ask 30 Bid 59 is dark red.
Ask 10 Bid 10 is the neutral color (gray).
Profile coloring is reflected in the same row's other elements (Ask, Bid, Total and Delta) too.
This is because one label can only use one text color.
/////// Sub Display ///////
Delta, total and Commitment of Traders
: "Delta" is total Ask - total Bid in one footprint bar. Total is total Ask + total Bid in one footprint bar.
"Commitment of traders" is variation of "Delta". COT High is reset to 0 when current highest is touched. COT Low is opposite.
The basic concept of Delta is to compare price with Delta. Ordinarily, when price moves up, delta is positive; when price moves down, delta is negative.
This is because market orders move price and market orders are counted by Delta (although this description is not exactly correct).
But sometimes prices do not move even though many market orders are putting pressure on price, or conversely, price moves strongly without many market orders.
This is the key point. Big players absorb market orders with iceberg orders (subdividing large orders and pretending to be small limit orders.
Small limit orders look weak in the order book, but they are refilled each time they are hit, so they are more powerful than they look), so price doesn't move.
On the other hand, when the price is moving easily, smart players may be aiming to attract orders and counterattack at a better price for them.
It's more of a sport than a science, and there is never a single right response. Pay attention to the relationship between price, volume and delta.
ex) If COT Low is a large negative value, it means many sell market orders are coming, but iceberg orders are absorbing their attack at the limit.
You should not make a buy entry on this clue alone, but it is one of the hints.
"Delta, Box Ratio and Total texts is contained same label and its color are "Delta" coloring. Positive Delta is Delta Bull color(green),Negative Delta is Delta Bear Color
and Delta = 0 is Neutral Color(gray). When Delta direction and price direction are opposite is Delta Divergence Color(yellow).
I didn't add the cumulative volume delta because I prefer to display the CVD line on the price chart rather than the number.
Box Ratio, Box Ratio Divisor and Heavy Box Ratio Ratio
: This is not an ordinary footprint feature, but I like the concept so I added it.
Box Ratio, by Richard W. Arms, is a simple but useful tool. The calculation is "total volume (one bar) divided by bar range (highest - lowest)."
When bulls and bears are fighting fiercely this number becomes large, and then an important price move happens.
I make an average BR from something like a 5-period SMA, and if the current BR exceeds the average BR x (Heavy Box Ratio Ratio), the BR box mark will be filled.
Box Ratio Divisor is used for a better-looking display (BR multiplied by Box Ratio Divisor is rounded off and displayed as an integer).
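For illustration only (computed here on plain chart bars with made-up settings, whereas the script works per footprint bar with its integer-converted volume), the Box Ratio idea boils down to something like:
float boxRatio = high != low ? volume / (high - low) : 0.0
float avgBoxRatio = ta.sma(boxRatio, 5)
bool heavyBR = boxRatio > avgBoxRatio * 2.0 // a "Heavy Box Ratio Ratio" of 2 is assumed here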
Diagonal Imbalance Count, D IB Mark and D IB Stripes
: Diagonal Imbalance is defined by "Diagonal Imbalance Ratio".
ex) You set 2. When Ask(row N) 30 Bid(row N -1)10, it's 30 > 10*2, so positive Diagonal Imbalance.
When Ask(row N) 4 Bid(row N -1)9, it's 4*2 < 9, so negative Diagonal Imbalance.
This calculation does not use equals to avoid Ask(row N) 0 Bid(row N -1)0 became Diagonal Imbalance.
Ask(row N) 0 Bid(row N -1)0, it's 0 = 0*2, not Diagonal Imbalance. Ask(row N) 10 Bid(row N -1)5, it's 10 = 5*2, not Diagonal Imbalance.
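In code form, the test described above amounts to something like this hedged sketch (stand-in numbers; in the script the values come from the footprint rows and `ratio` is the "Diagonal Imbalance Ratio" input):
float ratio = 2.0
float askRowN = 30.0 // Ask at row N (stand-in value)
float bidRowNm1 = 10.0 // Bid at row N - 1 (stand-in value)
bool posDIB = askRowN > bidRowNm1 * ratio // strict ">" so 0 vs 0 never counts as an imbalance
bool negDIB = askRowN * ratio < bidRowNm1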
"D IB Mark" emphasize Ask or Bid number which is dominant side(Winner of Diagonal Imbalance calculation), by under line.
"Diagonal Imbalance Count" compare Ask side D IB Mark to Bid side D IB Mark in one footprint.
Coloring depend on which is more aggressive side (it has many IB Mark) and When Aggressive direction and price direction are opposite is Delta Divergence Color(yellow).
"D IB Stripes" is a function that further emphasizes with an arrow Mark, when a DIB mark is added on the same side for three consecutive row. Three consecutive arrow is added at third row.
Unfinished Auction, Ratio Bounds and Ladder fading Mark
: "Unfinished Auction" emphasize highest or lowest row which has both Ask and Bid, by Delta Divergence Color(yellow) XXXXXX mark.
Unfinished Auction sometimes has magnet effect, price may touch and breakout at UA side in the future.
This concept is famous as profit taking target than entry decision.
But, I'm interested in the case that Big player make fake breakout at UA side and trapped retail traders, and then do reversal with retail traders stop-loss hunt.
Anyway, it's not stand alone signal.
"Ratio Bounds" gauge decrease of pressure at extreme price. Ratio Bounds High is number which second highest ask is divided by highest ask.
Ratio Bounds Low is number which second lowest bid is divided by lowest bid. The larger the number, the less momentum the price has.
ex)first footprint bar has Ratio Bounds Low 2, second footprint bar has RBL 4, third footprint bar has RBL 20.
This indicates that the bear's power is gradually diminishing.
"Ladder fading mark" emphasizes the decrease of the value in 3 consecutive row at extreme price. I added two type Marks.
Ask/Bid type(triangle Mark) is Ask/Bid values are decreasing of three consecutive row at extreme price.
Row Imbalance type(Diamond Mark) are row Imbalance values are decreasing of three consecutive row at extreme price.
ex)Third lowest Bid 40, second lowest Bid 10 and lowest Bid 5 have triangle up Mark. That is bear's power is gradually diminishing.
(This Mark only check Bid value at lowest price and Ask value at highest price).
Third highest row delta + 60, second highest row delta + 5, highest delta - 20 have diamond Mark. That is Bull's power is gradually diminishing.
Sub display use Delta colors at bottom of Sub display section.
////// Candle & POC /////////
candle and POC
: Ordinary, "POC" Point of Control is row of largest total volume, but this script'POC is volume weighted average.
This is because the regular POC was visually displayed by the profile ,and I was influenced LucF's ideas.
POC coloring is decided in relation to the previous POC. When current POC is higher than previous POC, color is UP Bar Color(green).
In the opposite case, Down Bar color is used.
POC Divergence Color is used when Current POC is up but current bar close is lower than open (Down price Bar),or in the opposite case.
POC coloring has option also highlight background by Delta Divergence Color(yellow). but bg color is displayed at your time frame current price bar not current footprint bar.
The basic explanation is over.
I added some images to promote understanding of the basic ideas.
Delta Volume Candles [LucF]█ OVERVIEW
This indicator plots on-chart volume delta information using candles that can replace your normal candles, tops and bottoms appended to normal candles, optional MAs of those top and bottom levels, a divergence channel and a chart background. The indicator calculates volume delta using intrabar analysis, meaning that it uses the lower timeframe bars constituting each chart bar.
█ CONCEPTS
Volume Delta
The volume delta concept divides a bar's volume in "up" and "down" volumes. The delta is calculated by subtracting down volume from up volume. Many calculation techniques exist to isolate up and down volume within a bar. The simplest use the polarity of interbar price changes to assign their volume to up or down slots, e.g., On Balance Volume or the Klinger Oscillator . Others such as Chaikin Money Flow use assumptions based on a bar's OHLC values. The most precise calculation method uses tick data and assigns the volume of each tick to the up or down slot depending on whether the transaction occurs at the bid or ask price. While this technique is ideal, it requires huge amounts of data on historical bars, which considerably limits the historical depth of charts and the number of symbols for which tick data is available. Furthermore, historical tick data is not yet available on TradingView.
This indicator uses intrabar analysis to achieve a compromise between the simplest and most precise methods of calculating volume delta. It is currently the most precise method usable on TradingView charts. TradingView's Volume Profile built-in indicators use it, as do the CVD - Cumulative Volume Delta Candles and CVD - Cumulative Volume Delta (Chart) indicators published from the TradingView account . My Delta Volume Channels and Volume Delta Columns Pro indicators also use intrabar analysis. Other volume delta indicators such as my Realtime 5D Profile use realtime chart updates to calculate volume delta without intrabar analysis, but that type of indicator only works in real time; they cannot calculate on historical bars.
This is the logic I use to determine the polarity of intrabars, which determines the up or down slot where its volume is added:
• If the intrabar's open and close values are different, their relative position is used.
• If the intrabar's open and close values are the same, the difference between the intrabar's close and the previous intrabar's close is used.
• As a last resort, when there is no movement during an intrabar, and it closes at the same price as the previous intrabar, the last known polarity is used.
Once all intrabars making up a chart bar have been analyzed and the up or down property of each intrabar's volume determined, the up volumes are added, and the down volumes subtracted. The resulting value is volume delta for that chart bar, which can be used as an estimate of the buying/selling pressure on an instrument. Not all markets have volume information. Without it, this indicator is useless.
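The polarity rules above can be expressed in a few lines of Pine Script™. This is a minimal sketch that, for simplicity, applies the logic to chart bars; the published indicator applies it to each lower-timeframe intrabar, which this sketch does not reproduce.

```
//@version=5
indicator("Intrabar polarity sketch")

// Last resort: remember the most recent polarity.
var int lastPolarity = 1

// Rule order: open/close relationship first, then change versus the previous close, then the last known polarity.
polarity = close > open ? 1 : close < open ? -1 : close > close[1] ? 1 : close < close[1] ? -1 : lastPolarity
lastPolarity := polarity

// Volume goes to the up or down slot according to polarity; delta is up volume minus down volume.
upVolume   = polarity == 1  ? volume : 0.0
downVolume = polarity == -1 ? volume : 0.0
plot(upVolume - downVolume, "Volume delta (chart-bar approximation)", style = plot.style_columns)
```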
Intrabar analysis
Intrabars are chart bars at a lower timeframe than the chart's. The timeframe used to access intrabars determines the number of intrabars accessible for each chart bar. On a 1H chart, each chart bar of an active market will, for example, usually contain 60 bars at the lower timeframe of 1min, provided there was market activity during each minute of the hour.
This indicator automatically calculates an appropriate lower timeframe using the chart's timeframe and the settings you use in the script's "Intrabars" section of the inputs. As it can access lower timeframes as small as seconds when available, the indicator can be used on charts at relatively small timeframes such as 1min, provided the market is active enough to produce bars at second timeframes.
The quantity of intrabars analyzed in each chart bar determines:
• The precision of calculations (more intrabars yield more precise results).
• The chart coverage of calculations (there is a 100K limit to the quantity of intrabars that can be analyzed on any chart,
so the more intrabars you analyze per chart bar, the fewer chart bars the indicator can calculate).
The information box displayed at the bottom right of the chart shows the lower timeframe used for intrabars, as well as the average number of intrabars detected for chart bars and statistics on chart coverage.
Balances
This indicator calculates five balances from volume delta values. The balances are oscillators with a zero centerline; positive values are bullish, and negative values are bearish. It is important to understand the balances as they can be used to:
• Color candle bodies.
• Calculate body and top and bottom divergences.
• Color an EMA channel.
• Color the chart's background.
• Configure markers and alerts.
The five balances are:
1 — Bar Balance : This is the only balance using instant values; it is simply the subtraction of the down volume from the up volume on the bar, so the instant volume delta for that bar.
2 — Average Balance : Calculates a distinct EMA for both the up and down volumes, and subtracts the down EMA from the up EMA.
The result is akin to MACD's histogram because it is the subtraction of two moving averages.
3 — Momentum Balance : Starts by calculating, separately for both up and down volumes, the difference between the same EMAs used in "Average Balance" and
an SMA of twice the period used for the "Average Balance" EMAs. The difference for the up side is subtracted from the difference for the down side,
and an RSI of that value is calculated and brought over the −50/+50 scale.
4 — Relative Balance : The reference values used in the calculation are the up and down EMAs used in the "Average Balance".
From those, we calculate two intermediate values using how much the instant up and down volumes on the bar exceed their respective EMA — but with a twist.
If the bar's up volume does not exceed the EMA of up volume, a zero value is used. The same goes for the down volume with the EMA of down volume.
Once we have our two intermediate values for the up and down volumes exceeding their respective MA, we subtract them. The final value is an ALMA of that subtraction.
The rationale behind using zero values when the bar's up/down volume does not exceed its EMA is to only take into account the more significant volume.
If both instant volume values exceed their MA, then the difference between the two is the signal's value.
The signal is called "relative" because the intermediate values are the difference between the instant up/down volumes and their respective MA.
This balance flatlines when the bar's up/down volumes do not exceed their EMAs, which makes it useful to spot areas where trader interest dwindles, such as consolidations.
The smaller the period of the final value's ALMA, the more easily it will flatline. These flat zones should be considered no-trade zones.
5 — Percent Balance : This balance is the ALMA of the ratio of the "Bar Balance" over the total volume for that bar.
From the balances and marker conditions, two more values are calculated:
1 — Marker Bias : This sums the up/down (+1/‒1) occurrences of the markers 1 to 4 over a period you define, so it ranges from −4 to +4, times the period.
Its calculation will depend on the modes used to calculate markers 3 and 4.
2 — Combined Balances : This is the sum of the bull/bear (+1/−1) states of each of the five balances, so it ranges from −5 to +5.
The periods for all of these balances can be configured in the "Periods" section at the bottom of the script's inputs. As you cannot see the balances on the chart, you can use my Volume Delta Columns Pro indicator in a pane; it can plot the same balances, so you will be able to analyze them.
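As an illustration of the first two balances, here is a minimal Pine Script™ sketch. The up/down volume split below is a simple chart-bar approximation, not the intrabar calculation the indicator actually uses, and the period is an assumed value.

```
//@version=5
indicator("Balance sketch")

// Hypothetical up/down volume split; the real indicator derives these from intrabars.
up = close >= open ? volume : 0.0
dn = close <  open ? volume : 0.0

length = input.int(20, "Average Balance length")

// 1 — Bar Balance: instant volume delta for the bar.
barBalance = up - dn

// 2 — Average Balance: EMA of up volume minus EMA of down volume,
// akin to a MACD histogram built on volume.
avgBalance = ta.ema(up, length) - ta.ema(dn, length)

plot(barBalance, "Bar Balance", color.new(color.gray, 0), style = plot.style_columns)
plot(avgBalance, "Average Balance", color.orange)
hline(0)
```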
Divergences
In the context of this indicator, a divergence is any bar where the bear/bull state of a balance (above/below its zero centerline) diverges from the polarity of a chart bar. No directional bias is assigned to divergences when they occur. Candle bodies and tops/bottoms can each be colored differently on divergences detected from distinct balances.
Divergence Channel
The divergence channel is the space between two levels (by default, the bar's open and close ) saved when divergences occur. When price (by default the close ) has breached a channel and a new divergence occurs, a new channel is created. Until that new channel is breached, bars where additional divergences occur will expand the channel's levels if the bar's price points are outside the channel.
Prices breaches of the divergence channel will change its state. Divergence channels can be in one of three different states:
• Bull (green): Price has breached the channel to the upside.
• Bear (red): Price has breached the channel to the downside.
• Neutral (gray): The channel has not yet been breached.
█ HOW TO USE THE INDICATOR
I do not make videos to explain how to use my indicators. I do, however, try hard to include in their description everything one needs to understand what they do. From there, it's up to you to explore and figure out if they can be useful in your trading practice. Communicating in videos what this description and the script's tooltips contain would make for very long videos that would likely exceed the attention span of most people who find this description too long. There is no quick way to understand an indicator such as this one because it uses many different concepts and has quite a few settings one can use to modify its visuals and behavior — thus how one uses it. I will happily answer questions on the inner workings of the indicator, but I do not answer questions like "How do I trade using this indicator?" A useful answer to that question would require an in-depth analysis of who you are, your trading methodology and objectives, which I do not have time for. I do not teach trading.
Start by loading the indicator on an active chart containing volume information. See here if you need help.
The default configuration displays:
• Normal candles where the bodies are only colored if the bar's volume has increased since the last bar.
If you want to use this indicator's candles, you may want to disable your chart's candles by clicking the eye icon to the right of the symbol's name in the top left of the chart.
• A top or bottom appended to the normal candles. It represents the difference between up and down volume for that bar
and is positioned at the top or bottom, depending on its polarity. If up volume is greater than down volume, a top is displayed. If down volume is greater, a bottom is plotted.
The size of tops and bottoms is determined by calculating a factor, which is the proportion of volume delta over the bar's total volume.
That factor is then used to calculate the top or bottom size relative to a baseline of the average candle body size of the last 100 bars (a sketch of this sizing logic follows this list).
• An information box in the bottom right displaying intrabar and chart coverage information.
• A light red background when the intrabar volume differs from the chart's volume by more than 1%.
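Here is a minimal Pine Script™ sketch of the top/bottom sizing described in the list above. The `delta` series is a hypothetical placeholder (the published script derives it from intrabars), so only the factor-and-baseline arithmetic is taken from the description.

```
//@version=5
indicator("Top/bottom size sketch", overlay = true)

// `delta` stands in for the bar's volume delta; this placeholder is NOT the real calculation.
delta = ta.ema(volume, 5) - ta.ema(volume, 20)

// Factor: proportion of volume delta over the bar's total volume, clamped to ±1.
factor = volume > 0 ? math.max(-1.0, math.min(1.0, delta / volume)) : 0.0

// Size relative to a baseline: the average candle body size of the last 100 bars.
baseline = ta.sma(math.abs(close - open), 100)
extent   = math.abs(factor) * baseline

// A positive factor would plot a top above the bar; a negative one, a bottom below it.
plot(factor >= 0 ? high + extent : na, "Top",    color.lime, 2, plot.style_linebr)
plot(factor <  0 ? low  - extent : na, "Bottom", color.red,  2, plot.style_linebr)
```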
The script's inputs contain tooltips explaining most of the fields. I will not repeat them here. Following is a brief description of each section of the indicator's inputs which will give you an idea of what the indicator can do:
Normal Candles is where you configure the replacement candles plotted by the script. You can choose from different coloring schemes for their bodies and specify a unique color for bodies where a divergence calculated using the method you choose occurs.
Volume Tops & Bottoms is where you configure the display of tops and bottoms, and their EMAs. The EMAs are calculated from the high point of tops and the low point of bottoms. They can act as a channel to evaluate price, and you can choose to color the channel using a gradient reflecting the advances/declines in the balance of your choice.
Divergence Channel is where you set up the appearance and behavior of the divergence channel. These areas represent levels where price and volume delta information do not converge. They can be interpreted as regions with no clear direction from where one will look for breaches. You can configure the channel to take into account one or both types of divergences you have configured for candle bodies and tops/bottoms.
Background allows you to configure a gradient background color that reflects the advances/declines in the balance of your choice. You can use this to provide context to the volume delta values from bars. You can also control the background color displayed on volume discrepancies between the intrabar and the chart's timeframe.
Intrabars is where you choose the calculation mode determining the lower timeframe used to access intrabars. The indicator uses the chart's timeframe and the type of market you are on to calculate the lower timeframe. Your setting there should reflect which compromise you prefer between the precision of calculations and chart coverage. This is also where you control the display of the information box in the lower right corner of the chart.
Markers allows you to control the plotting of chart markers on different conditions. Their configuration determines when alerts generated from the indicator will fire. Note that in order to generate alerts from this script, they must be created from your chart. See this Help Center page to learn how. Only the last 500 markers will be visible on the chart, but this will not affect the generation of alerts.
Periods is where you configure the periods for the balances and the EMAs used in the indicator.
The raw values calculated by this script can be inspected using the Data Window.
█ INTERPRETATION
Rightly or wrongly, volume delta is considered by many a useful complement to the interpretation of price action. I use it extensively in an attempt to find convergence between my read of volume delta and price movement — not so much as a predictor of future price movement. No system or person can predict the future. Accordingly, I consider people who speak or act as if they know the future with certainty to be dangerous to themselves and others; they are charlatans, imprudent or blissfully ignorant.
I try to avoid elaborate volume delta interpretation schemes involving too many variables and prefer to keep things simple:
• Trends that have more chances of continuing should be accompanied by VD of the same polarity.
In trends, I am looking for "slow and steady". I work from the assumption that traders and systems often overreact, which translates into unproductive volatility.
Wild trends are more susceptible to overreactions.
• I prefer steady VD values over wildly increasing ones, as large VD increases often come with increased price volatility, which can backfire.
Large VD values caused by stopping volume will also often occur on trend reversals with abnormally high candles.
• Prices escaping divergence channels may be leading a trend in that direction, although there is no telling how long that trend will last; could be just a few bars or hundreds.
When price is in a channel, shifts in VD balances can sometimes give us an idea of the direction where price has the most chance of breaking.
• Dwindling VD will often indicate trend exhaustion and predate reversals by many bars, but the problem is that mere pauses in a trend will often produce the same behavior in VD.
I think it is too perilous to infer rigidly from VD decreases.
Divergence Channel
Here I have configured the divergence channels to be visible. First, I set the bodies to display divergences on the default Bar Balance. They are indicated by yellow bodies. Then I activated the divergence channels by choosing to draw levels on body divergences and checked the "Fill" checkbox to fill the channel with the same color as the levels. The divergence channel is best understood as a direction-less area from where a breach can be acted on if other variables converge with the breach's direction:
Tops and Bottoms EMAs
I find these EMAs rather interesting. They have no equivalent elsewhere, as they are calculated from the top and bottom values this indicator plots. The only similarity they have with volume-weighted MAs, including VWAP, is that they use price and volume. This indicator's Tops and Bottoms EMAs, however, use the price and volume delta. While the channel differs from other channels in how it is calculated, it can be used like others, as a baseline from which to evaluate price movement or, alternatively, as stop levels. Remember that you can change the period used for the EMAs in the "Periods" section of the inputs.
This chart shows the EMAs in action, filled with a gradient representing the advances/decline from the Momentum balance. Notice the anomaly in the chart's latest bars where the Momentum balance gradient has been indicating a bullish bias for some time, during which price was mostly below the EMAs. Price has just broken above the channel on positive VD. My interpretation of this situation would be that it is a risky opportunity for a long trade in the larger context where the market has been in a downtrend since the 5th. Intrepid traders choosing to enter here could do so with a "make or break" tight stop that will minimize their losses should the market continue its downtrend while hopefully preserving the potential upside of price continuing on the longer-term uptrend prevalent since the 28th:
█ NOTES
Volume
If you use indicators such as this one which depends on volume information, it is important to realize that the volume data they consume comes from data feeds, and that all data feeds are NOT created equally. Those who create the data feeds we use must make decisions concerning the nature of the transactions they tally and the way they are tallied in each feed, and these decisions affect the nature of our volume data. My Volume X-ray publication discusses some of the reasons why volume information from different timeframes, brokers/exchanges or sectors may vary considerably. I encourage you to read it. This indicator's display of a warning through a background color on volume discrepancies between the timeframe used to access intrabars and the chart's timeframe is an attempt to help you realize these variations in feeds. Don't take things for granted, and understand that the quality of a given feed's volume information affects the quality of the results this indicator calculates.
Markets as ecosystems
I believe it is perilous to think that behavioral patterns you discover in one market through the lens of this or any other indicator will necessarily port to other markets. While this may sometimes be the case, it will often not. Why is that? Because each market is its own ecosystem. As cities do, all markets share some common characteristics, but they also all have their idiosyncrasies. A proportion of a city's inhabitants is always composed of outsiders who come and go, but a core population of regulars and systems is usually the force that actually defines most of the city's observable characteristics. I believe markets work somewhat the same way; they may look the same, but if you live there for a while and pay attention, you will notice the idiosyncrasies. Some things that work in some markets will, accordingly, not work in others. Please keep that in mind when you draw conclusions.
On Up/Down or Buy/Sell Volume
Buying or selling volume are misnomers, as every unit of volume transacted is both bought and sold by two different traders. While this does not keep me from using the terms, there is no such thing as “buy only” or “sell only” volume. Trader lingo is riddled with peculiarities. Without access to order book information, traders work with the assumption that when price moves up during a bar, there was more buying pressure than selling pressure, just as when buy market orders take out limit ask orders in the order book at successively higher levels. The built-in volume indicator available on TradingView uses this logic to color the volume columns green or red. While this script’s calculations are more precise because it analyses intrabars to calculate its information, it uses pretty much the same imperfect logic. Until Pine scripts can have access to how much volume was transacted at the bid/ask prices, our volume delta calculations will remain a mere proxy.
Repainting
• The values calculated on the realtime bar will update as new information comes from the feed.
• Historical values may recalculate if the historical feed is updated or when calculations start from a new point in history.
• Markers and alerts will not repaint as they only occur on a bar's close. Keep this in mind when viewing markers on historical bars,
where one could understandably and incorrectly assume they appear at the bar's open.
To learn more about repainting, see the Pine Script™ User Manual's page on the subject .
Superfluity
In "The Bed of Procrustes", Nassim Nicholas Taleb writes: To bankrupt a fool, give him information . This indicator can display a lot of information. The inevitable adaptation period you will need to figure out how to use it should help you eliminate all the visuals you do not need. The more you eliminate, the easier it will be to focus on those that are the most useful to your trading practice. Don't be a fool.
█ THANKS
Thanks to alexgrover for his Dekidaka-Ashi indicator. His volume plots on candles were the inspiration for my top/bottom plots.
Kudos to PineCoders for their libraries. I use two of them in this script: Time and lower_tf .
The first versions of this script used functionality that I would not have known about were it not for these two guys:
— A guy called Kuan who commented on a Backtest Rookies presentation of their Volume Profile indicator.
— theheirophant , my partner in the exploration of the sometimes weird abysses of request.security() ’s behavior at lower timeframes.
Custom Buy/Sell Pattern BuilderAre you tired of using trading indicators that only let you follow fixed, pre-designed rules? Do you wish you could build your own “Buy” or “Sell” signals, experiment with your own ideas, or see instantly if your unique pattern works—without learning coding or hiring a developer?
The Custom Buy/Sell Pattern Builder is designed for YOU.
This TradingView indicator lets ANY trader—even a complete beginner—define exactly what kind of price and volume conditions should create a BUY or SELL label on any chart, in any market, at any timeframe.
You don’t need to know programming. You don’t need to know the definition of a hammer, doji, volume spike, or Engulfing pattern.
With a few clicks and easy dropdown choices, you can:
Make your own rules for buying or selling
Choose how many candles your pattern should look at
Decide if you want the biggest body, the lowest volume, the biggest movement, or any combination you can imagine
The result?
You’ll see clear “BUY” or “SELL” labels automatically show up on your chart whenever the exact rule YOU built matches current price action.
No more guessing. No more forced strategies. Just pure control and visual feedback!
Why Is This Powerful?
Traditional indicators (like MACD, RSI, or even classic candlestick scanners) work the same for everyone—and only as their inventors defined.
But every trader, and every market, is unique.
What if you could say:
“Show me a ‘SELL’ every time the newest candle is bigger than the one before, but with LESS volume, while the bar before that had an even smaller body—but more volume than all others?”
With this tool, it’s EASY!
You simply pick which candle you want to compare (most recent, previous, etc), what to compare (body or volume—body means the candle’s “thickness”, from open to close), choose “greater than”, “less than”, or “equal to”, and set a multiplier if you want (like “half as much”, “twice as big”, etc).
After this, if any bar on the chart fits all your rules, it will mark it as a BUY or SELL, depending on your selection.
This means—
Beginners can start experimenting with their intuition or small ideas, without tech hurdles
Experienced traders can visualize and fine-tune any possible logic, before they commit to backtesting or automating a real strategy
Every “what if” or “I wonder” setup is just 2–3 clicks away
How Does It Work? Simple Steps
1. Choose Your Signal Type
“Buy” or “Sell”
This tells the indicator whether to mark the qualifying bars with a green “BUY” or red “SELL” label
2. Pick How Many Candles To Use
“Pattern Candle Count” input (2, 3, or 4)
Example: If you use 4, the pattern will be applied to the most recent 4 candles at every step
3. Define Your Pattern With Inputs
For each candle (from newest “0” to oldest “3”), you can set:
Body Condition (example: “is this candle’s body bigger/smaller/equal to another?”)
Pick which candle to compare against
Pick “>”, “<”, “>=”, “<=”, or “=”
Set a multiplier if needed (like “0.5” to mean “half as big as” or “2” for “twice as big as”)
Volume Condition (exact same choices, but based on trading volume—not the candle’s price body)
For example:
“Candle0 Body > Candle2 Body”
means “the latest candle’s real-body (open–close) is bigger than the one two bars ago.”
“Candle1 Volume <= Candle2 Volume”
means “the previous candle’s volume is less than or equal to the volume of the bar two periods ago.”
You can leave a comparison blank if you don’t want to use it for a particular candle.
What Happens After You Set Your Rules?
Every bar on your chart is checked for your logic:
If ALL body AND volume conditions are true (for each candle you specified),
AND
The signal side (“Buy” or “Sell”) matches your dropdown,
Then a green “BUY” or red “SELL” label will show right on the bar, so you can visually spot exactly where your logic works!
Practical Example:
Suppose you want an entry setup that is:
“Sell whenever the newest candle’s body is bigger than two bars ago, body before that is bigger than three bars ago, AND the newest candle’s volume is less than or equal to two bars ago, AND the candle three bars ago’s volume is less than or equal to half the candle two bars ago’s volume.”
You’d set:
Pattern Candle Count: 4
Side: Sell
Candle0 Body Ref#: 2, Op: >, Mult: 1
Candle1 Body Ref#: 3, Op: >, Mult: 1
Candle0 Vol Ref#: 2, Op: <=, Mult: 1
Candle3 Vol Ref#: 2, Op: <=, Mult: 0.5
And the script will find all “SELL” bars on your chart matching these conditions.
Inputs Section: What Does Each Setting Do?
Let’s break down each input in the indicator’s Settings one by one, so even if you’re new, you’ll understand exactly how to use it!
1. Pattern Candle Count (2–4)
What is it?
This sets how many candles in a row you want your rule to look at.
Example:
“4” means your rules are based on the most recent candle and the 3 before it.
“2” means you are only comparing the current and previous candles.
Tip:
Beginners often use 4 to spot stronger patterns, but you can experiment!
2. Signal Side
What is it?
Choose “Buy” or “Sell”. The word you pick here decides which colored label (green for Buy, red for Sell) appears if your pattern matches.
Example:
Want to spot where “Sell” is likely? Pick “Sell”.
Change to “Buy” if you want bullish signals instead.
3. Body & Volume Comparison Settings (per Candle)
For each candle (#0 is newest/current, #3 is oldest in your pattern window):
Body Comparison
Candle# Body Ref#
Choose which other candle you want to compare this one’s body to.
“0” = newest, “1” = previous, “2” = two bars ago, “3” = three bars ago
Candle# Body Op (Operator; >, <, >=, <=, =)
How do you want to compare?
“>” means “greater than” (is bigger than)
“<” means “less than” (is smaller than)
“=” means “equal to”
Candle# Body Mult (Multiplier)
If you want relative comparisons. For example, with Mult=1:
“Candle0 body > Candle2 body x 1” means just “0 is larger than 2.”
“Candle0 body > Candle2 body x 2” means “0 is more than double 2.”
Volume Comparison
Candle# Vol Ref# / Op / Mult
Exact same logic as body, but works on the “Volume” of each candle (how much was traded during that bar).
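To show how one Ref# / Op / Mult row might be evaluated internally, here is a minimal Pine Script™ sketch mirroring the example comparisons above. The helper names, the hard-coded operator strings and the SELL condition are assumptions for illustration; they are not the script's actual inputs or code.

```
//@version=5
indicator("Comparison evaluation sketch", overlay = true)

// Real body and volume of the candle `i` bars back.
body(int i) => math.abs(close[i] - open[i])
vol(int i)  => volume[i]

// Apply the chosen operator; an unset operator counts as "condition not used".
compare(float a, string op, float b) =>
    switch op
        ">"  => a >  b
        "<"  => a <  b
        ">=" => a >= b
        "<=" => a <= b
        "="  => a == b
        => true

// Hypothetical settings: "Candle0 Body > Candle2 Body x 1" and "Candle0 Volume <= Candle2 Volume x 1".
bodyOk = compare(body(0), ">",  body(2) * 1.0)
volOk  = compare(vol(0),  "<=", vol(2)  * 1.0)

sellSignal = bodyOk and volOk
plotshape(sellSignal, "SELL", shape.labeldown, location.abovebar, color.red, text = "SELL")
```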
How to Set Up a Rule (Step by Step Example)
Say you want to mark a Sell every time:
The most recent candle’s real body is BIGGER than the candle 2 bars ago;
The previous candle’s body is also BIGGER than the candle 3 bars ago;
The current candle’s volume is LESS than or equal to the volume of candle 2;
The previous candle’s volume is LESS than or equal to candle 2’s volume;
The candle 3 bars ago’s volume is LESS than or equal to HALF candle 2’s volume.
You’d set:
Pattern Candle Count: 4
Side: "Sell"
Candle0 Body Ref#: 2, Op: “>”, Mult: 1
Candle1 Body Ref#: 3, Op: “>”, Mult: 1
Candle0 Vol Ref#: 2, Op: “<=”, Mult: 1
Candle1 Vol Ref#: 2, Op: “<=”, Mult: 1
Candle3 Vol Ref#: 2, Op: “<=”, Mult: 0.5
All other comparisons (operators) can be left blank if you don’t want to use them!
When these rules are met, a bright red “SELL” label will appear right above the bar matching all your conditions.
Practical Tips & FAQ for Beginners
What does “body” mean?
It’s the “true range” of the candle: the difference between open and close. This ignores wicks for simple setups.
What does “volume” mean?
This is the total trading activity during that candle/bar. Many traders believe that patterns with different volume “meaning” (such as low-volume up bars, or high-volume down bars) signal a meaningful change.
What if nothing shows on chart?
It just means your current rules are rarely or never matched! Try making your comparisons simpler (maybe just 2-body and 2-volume conditions to start).
You can always hit “Reset Settings” to go back to default.
Can I use this for both buying and selling?
YES! You can detect both bullish (Buy) and bearish (Sell) custom conditions; just switch “Signal Side.”
Do I need to know coding?
Not at all! Everything is in simple input panels.
Creative Use Cases, Example Recipes & Troubleshooting
Creative Ways to Use
Spotting Reversals
Example:
Buy when: the newest candle body is LARGER than the previous 3 bars, but ALL volumes are lower than their neighbors.
Why? Sometimes, a big candle with surprisingly low volume after a sequence of small bars can signal a reversal.
Finding Exhaustion Moves
Example:
Sell when: the current bar body is twice as big as two bars ago, but volume is half.
Why? A very big candle with very little volume compared to similar bars may show the move is “running out of steam.”
Custom “Breakout + Confirmation” Patterns
Example:
Buy when:
Candle 0’s body is greater than Candle 2’s by at least 1.5x,
Candle 0’s volume is greater than Candle 1 and Candle 2,
Candle 1’s volume is less than Candle 0.
Why? This could catch strong breakouts but filter out noisy moves.
Multi-bar Bias/Squeeze Filter
Use “Pattern Candle Count: 4”
Set all 4 volume conditions to “<” and each reference to the previous candle.
Now, a BUY or SELL only marks when each bar is “dryer”/less active than the last — a classic squeeze or low-volatility buildup.
Troubleshooting Guide
“I don’t see any Buy/Sell label; is something broken?”
Most likely, your rules are too strict or rare! Try using only two comparisons and leave other “Op” inputs blank as a test.
Double-check you have enough candles on the chart: you need at least as many bars as your pattern count.
“Why does a label appear but not where I expect?”
Remember, the script checks your rules for every NEW candle. The candle “0” is always the most recent, then “1” is one bar back, etc.
Check the color and type chosen: “Signal Side” must be “Buy” for green, “Sell” for red.
“What if I want a more complex pattern?”
Stack conditions! You can demand the body/volume of each candle in your window meet a different rule or all follow the same rule in sequence.
Mini Glossary — For Newcomers
Candle/Bar: Each bar on the chart, shows price movement during a fixed time (e.g., one minute, one hour, one day).
Body: The colored (or filled) part of the candle — the open-to-close price range.
Volume: How much of the asset was actually traded that candle/bar.
Reference Index: When you pick “2” as a reference, it means “the candle two bars ago in the pattern window.”
Operator (“Op”): The math symbol used to compare (>, <, =, etc).
Signal Side: Whether you want to highlight bullish (“Buy”) or bearish (“Sell”) bars.
Tips for Getting More Value
Start Simple—try just one or two conditions at first. See what lights up. Slowly add more logic as you get comfortable.
Watch the chart live as you change settings. The labels update instantly—this makes strategy design fast and visual!
Try flipping your ideas: If a certain pattern doesn’t work for buys, try reversing the direction for possible “sell” setups.
Remember: There is NO wrong idea. This indicator is only limited by your creativity—it’s a “strategy playground.”
Example Quick-Start Recipes
Classic Sell:
4 candles, side = Sell
Candle0 Body > Candle2; Candle1 Body > Candle3
Candle0 Vol <= Candle2; Candle1 Vol <= Candle2; Candle3 Vol <= Candle2 × 0.5
Simple Buy After Pause:
3 candles, side = Buy
Candle0 Body > Candle1; Candle0 Vol > Candle1
All other Ops blank
Low-Volume Pullback for Entry:
4 candles, side = Buy
Candle0 Body > Candle2
Candle0 Vol < Candle1; Candle1 Vol < Candle2; Candle2 Vol < Candle3
Final Words
Think of this as your “pattern lab.” No code, no guesswork—just experiment, see what the market actually gives, and design your own visual rulebook.
If you’re stuck, reset the script to defaults—it’s always safe to start again!
If you want more ready-made “recipes” for different strategies/styles, just ask and I’ll send some more setups for you.
Happy building—and may your edge always be YOUR edge!
Partial Profit Calculator [TFO]This indicator was built to help calculate the outcome of trades that utilize multiple profit targets and/or multiple entries.
In its simplest form, we can have a single entry and a single profit target. As shown below in this long trade example, the indicator will draw risk and reward boxes (red and green, respectively) with several annotations. On the left-hand side, all entries will be displayed (in this case there is only one entry, "E1"). On the bottom, the "SL" label indicates the trade's stop loss placement. On the top, all target prices are displayed (in this case there is only one target, "TP1"). Lastly, on the right-hand side a label will display the total R that is to be expected from a winning trade, where R is one's unit of risk.
In the following example, we have two target prices - one at 18600 and one at 18700. You can input as many target prices as you'd like, separated by commas, i.e. "18600,18700" in this example. Make sure the values are separated by commas only, and not spaces, new lines, etc. As a result, we can see that the indicator draws where our profit targets would be with respect to our entry, E1. The indicator assumes that equal parts of the trade position are taken off at each target price. In this example on Nasdaq futures (NQ1!), since we have 2 target prices, this would be equivalent to assuming that we take exactly half the trade position off at TP1, and the remaining half of the position at TP2.
If we wanted to take more of the position off at a certain target, we could simply duplicate the target price. Here I set the target prices to "18600,18600,18700" to enforce that two thirds of the position be taken off at TP1 and TP2, while the remaining third gets taken off at TP3.
We can also show outcome annotations to describe how much R is generated from each possible trade outcome. Using the below chart as an example, the stop loss indicates a -1R loss. The total R from this trade criteria is 1.33 R, and each target price shows how much R is being generated if one were to take off an equal part of the position at said target prices. In this case, we would generate 0.17 R from taking one third of the position off at TP1, another 0.5 R from taking one third of the position off at TP2, and another 0.67 R from taking the remaining one third of the position off at TP3, all adding up to the total R indicated on the right-hand side label.
Using multiple entries works the same way as using multiple target prices, where the input should indicate each entry price separated by commas. In this example I've used "18550,18450" to achieve an average price of 18500, as indicated by the "E_avg" label that appears when more than one entry price is utilized. We can also opt to display risk as dollars instead of R values, where you can input your desired risk per trade, and all values are shown as dollar amounts instead of R multiples, as shown below with a risk per trade of $100.
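For readers who want to see the arithmetic, here is a minimal Pine Script™ sketch of how total R could be computed from multiple entries and targets. The prices reuse example values from above, but the stop level and the equal-parts formula are assumptions for illustration, not the indicator's actual code.

```
//@version=5
indicator("Partial profit R sketch")

// Hypothetical long-trade inputs.
entries = array.from(18550.0, 18450.0)          // entry prices
targets = array.from(18600.0, 18600.0, 18700.0) // duplicate a target to weight it more
stop    = 18400.0                               // hypothetical stop loss

avgEntry = array.avg(entries)
risk     = avgEntry - stop

// Each target receives an equal part of the position; its R contribution is
// (target - average entry) / risk, divided by the number of targets.
totalR = 0.0
for i = 0 to array.size(targets) - 1
    totalR += (array.get(targets, i) - avgEntry) / risk / array.size(targets)

plot(totalR, "Total R")
```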
This is meant to be an educational tool for trades that utilize multiple profit targets and/or entries. Hope you like it!
ATR Bands with Optional Risk/Reward Colors█ OVERVIEW
This indicator projects ATR bands and, optionally, colors them based on a risk/reward advantage for those who trade breakouts/breakdowns using moving averages as partial or full exit points.
█ DEFINITIONS
► True Range
The True Range is a measure of the volatility of a financial asset and is defined as the maximum difference among one of the following values:
- The high of the current period minus the low of the current period.
- The absolute value of the high of the current period minus the closing price of the previous period.
- The absolute value of the low of the current period minus the closing price of the previous period.
► Average True Range
The Average True Range was developed by J. Welles Wilder Jr. and was introduced in his 1978 book titled "New Concepts in Technical Trading Systems". It is calculated as an average of the true range values over a certain number of periods (usually 14) and is commonly used to measure volatility and set stop-loss and profit targets (1).
For example, if you are looking at a daily chart and you want to calculate the 14-day ATR, you would take the True Range of the previous 14 days, calculate their average, and this would be the ATR for that day. The process is then repeated every day to obtain a series of ATR values over time.
The ATR can be smoothed using different methods, such as the Simple Moving Average (SMA), the Exponential Moving Average (EMA), or others, depending on the user's preferences or analysis needs.
► ATR Bands
The ATR bands are created by adding or subtracting the ATR from a reference point (usually the closing price). This process generates bands around the central point that expand and contract based on market volatility, allowing traders to assess dynamic support and resistance levels and to adapt their trading strategies to current market conditions.
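These definitions translate directly into Pine Script™. Below is a minimal sketch using the close as the reference point and Wilder's smoothing (RMA) for the ATR; the indicator itself offers several other reference points and smoothing methods, so this is an illustration of the concept rather than its implementation.

```
//@version=5
indicator("ATR bands sketch", overlay = true)

length = input.int(14, "ATR length")
mult   = input.float(1.0, "Multiplier")

// True Range: the largest of the three differences described above.
trueRange = math.max(high - low, math.abs(high - close[1]), math.abs(low - close[1]))

// ATR: here an RMA (Wilder's smoothing) of the True Range; other smoothings are possible.
atr = ta.rma(trueRange, length)

// Bands: ATR added to / subtracted from a reference point (the close in this sketch).
upperBand = close + atr * mult
lowerBand = close - atr * mult

plot(upperBand, "Upper ATR band", color.teal)
plot(lowerBand, "Lower ATR band", color.maroon)
```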
█ INDICATOR
► ATR Bands
The indicator provides all the essential parameters for calculating the ATR: period length, time frame, smoothing method, and multiplier.
It is then possible to choose the reference point from which to create the bands. The most commonly used reference points are Open, High, Low, and Close, but you can also choose the commonly used candle averages: HL2, HLC3, HLCC4, OHLC4. Among these, there is also a less common "OC2", which represents the average of the candle body. Additionally, two parameters have been specifically created for this indicator: Open/Close and High/Low.
With the "Open/Close" parameter, the upper band is calculated from the higher value between Open and Close, while the lower one is calculated from the lower value between Open and Close. In the case of bullish candles, therefore, the Close value is taken as the starting point for the upper band and the Open value for the lower one; conversely, in bearish candles, the Open value is used for the upper band and the Close value for the lower band. This setting can be useful for precautionally generating broader bands when trading with candlesticks like hammers or inverted hammers.
The "High/Low" parameter calculates the upper band starting from the High and the lower band starting from the Low. Among all the available options, this one allows drawing the widest bands.
Other possible options to improve the drawing of ATR bands, aligning them with the price action, are:
• Doji Smoothing: When the current candle is a doji (having the same Open and Close price), the bands assume the values they had on the previous candle. This can be useful to avoid steep fluctuations of the bands themselves.
• Extend to High/Low: Extends the bands to the High or Low values when they exceed the value of the band.
• Round Last Cent: Expands the upper band by one cent if the price ends with x.x9, and the lower band if the price ends with x.x1. This function only works when the asset's tick is 0.01.
► Risk/Reward Advantage
The indicator optionally colors the ATR bands after setting a breakpoint, one or two risk/reward ratios, and a series of moving averages. This function allows you to know in advance whether entering a trade can provide an advantage over the risk. The band is colored when the ratio between the distance from the break point to the band and the distance from the break point to the first available moving average reaches at least the set ratio value. It is possible to set two colorings, one for a minimum risk/reward ratio and one for an optimal risk/reward ratio.
The break point can be chosen between High/Low (High in case of breakout, Low in case of breakdown) or Open/Close (on breakouts, Close with bullish candles or Open with bearish candles; on breakdowns, Close with bearish candles or Open with bullish candles).
It is possible to choose up to 10 moving averages of various types, including the VWAP with the Anchor Period (2).
Depending on the "Price to MA" setting, the bands can be individually or simultaneously colored.
By selecting "Single Direction," the risk/reward calculation is performed only when all moving averages are above or below the break point, resulting in only one band being colored at a time. For this reason, when the break point is in between the moving averages, the calculation is not executed. This setting can be useful for strategies involving price movement from a level towards a series of specific moving averages (for example, in reversals starting from a certain level towards the VWAP with possible partial take profits on some previous moving averages, or simply in trend following towards one or more moving averages).
Choosing "Both Directions" the risk/reward ratio is calculated based on the first available moving averages both above and below the price. This setting is useful for those who operate in range bound markets or simply take advantage of movements between moving averages.
█ NOTE
This script may not be suitable for scalping strategies that require immediate entries due to the inability to know the ATR of a candle in advance until its closure. Once the candle is closed, you should have time to place a stop or stop-limit order, so your strategy should not anticipate an immediate start with the next candle. Even more conveniently, if your strategy involves an entry on a pullback, you can place a limit order at the breakout level.
(1) www.tradingview.com
(2) For convenience, the code for the Anchor Period has been entirely copied from the VWAP code provided by TradingView.
Candle Counter [theEccentricTrader]█ OVERVIEW
This indicator counts the number of confirmed candle scenarios on any given candlestick chart and displays the statistics in a table, which can be repositioned and resized at the user's discretion.
█ CONCEPTS
Green and Red Candles
A green candle is one that closes at a price equal to or above the price it opened at.
A red candle is one that closes at a price lower than the price it opened at.
Upper Candle Trends
A higher high candle is one that closes with a higher high price than the high price of the preceding candle.
A lower high candle is one that closes with a lower high price than the high price of the preceding candle.
A double-top candle is one that closes with a high price that is equal to the high price of the preceding candle.
Lower Candle Trends
A higher low candle is one that closes with a higher low price than the low price of the preceding candle.
A lower low candle is one that closes with a lower low price than the low price of the preceding candle.
A double-bottom candle is one that closes with a low price that is equal to the low price of the preceding candle.
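A minimal Pine Script™ sketch of these scenario definitions, assuming the close-versus-open convention described in the Limitations section; the running counts and marker shapes here are illustrative, not the indicator's exact table or plots.

```
//@version=5
indicator("Candle scenario sketch", overlay = true)

// Boolean translations of the definitions above.
greenCandle  = close >= open
redCandle    = close <  open

higherHigh   = high > high[1]
lowerHigh    = high < high[1]
doubleTop    = high == high[1]

higherLow    = low > low[1]
lowerLow     = low < low[1]
doubleBottom = low == low[1]

// Running counts over the chart, e.g. for a statistics table.
var int greenCount = 0
var int hhCount    = 0
if greenCandle
    greenCount += 1
if higherHigh
    hhCount += 1

plotshape(higherHigh, "Higher high", shape.triangleup,   location.abovebar, color.green, size = size.tiny)
plotshape(lowerLow,   "Lower low",   shape.triangledown, location.belowbar, color.red,   size = size.tiny)
```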
█ FEATURES
Inputs
Start Date
End Date
Position
Text Size
Show Sample Period
Show Plots
Table
The table is colour coded, consists of three columns and twenty-two rows. Blue cells denote all candle scenarios, green cells denote green candle scenarios and red cells denote red candle scenarios.
The candle scenarios are listed in the first column with their corresponding total counts to the right, in the second column. The last row in column one, row twenty-two, displays the sample period which can be adjusted or hidden via indicator settings.
Rows two and three in the third column of the table display the total green and red candles as percentages of total candles. Rows four to nine in column three, coloured blue, display the corresponding candle scenarios as percentages of total candles. Rows ten to fifteen in column three, coloured green, display the corresponding candle scenarios as percentages of total green candles. And lastly, rows sixteen to twenty-one in column three, coloured red, display the corresponding candle scenarios as percentages of total red candles.
Plots
I have added plots as a visual aid to the various candle scenarios listed in the table. Green up-arrows denote higher high candles when above bar and higher low candles when below bar. Red down-arrows denote lower high candles when above bar and lower low candles when below bar. Similarly, blue diamonds when above bar denote double-top candles and when below bar denote double-bottom candles. These plots can also be hidden via indicator settings.
█ HOW TO USE
This indicator is intended for research purposes and strategy development. I hope it will be useful in helping to gain a better understanding of the underlying dynamics at play on any given market and timeframe. It can, for example, give you an idea of any inherent biases such as a greater proportion of green candles to red. Or a greater proportion of higher low green candles to lower low green candles. Such information can be very useful when conducting top down analysis across multiple timeframes, or considering trailing stop loss methods.
What you do with these statistics and how far you decide to take your research is entirely up to you, the possibilities are endless.
This is just the first and most basic in a series of indicators that can be used to study objective price action scenarios and develop a systematic approach to trading.
█ LIMITATIONS
Some higher timeframe candles on tickers with larger lookbacks such as the DXY, do not actually contain all the open, high, low and close (OHLC) data at the beginning of the chart. Instead, they use the close price for open, high and low prices. So, while we can determine whether the close price is higher or lower than the preceding close price, there is no way of knowing what actually happened intra-bar for these candles. And by default candles that close at the same price as the open price, will be counted as green. You can avoid this problem by utilising the sample period filter.
The green and red candle calculations are based solely on differences between open and close prices, as such I have made no attempt to account for green candles that gap lower and close below the close price of the preceding candle, or red candles that gap higher and close above the close price of the preceding candle. I can only recommend using 24-hour markets, if and where possible, as there are far fewer gaps and, generally, more data to work with. Alternatively, you can replace the scenarios with your own logic to account for the gap anomalies, if you are feeling up to the challenge.
It is also worth noting that the sample size will be limited to your Trading View subscription plan. Premium users get 20,000 candles worth of data, pro+ and pro users get 10,000, and basic users get 5,000. If upgrading is currently not an option, you can always keep a rolling tally of the statistics in an excel spreadsheet or something of the like.
Dynamic Zone Range on OMA [Loxx]Dynamic Zone Range on OMA is a One More Moving Average oscillator with Dynamic Zones.
What is the One More Moving Average (OMA)?
The usual story goes something like this: which is the best moving average? Everyone that ever started to do any kind of technical analysis was pulled into this "game". Comparing, testing, looking for new ones, testing ...
The idea of this one is simple: it should not be itself, but it should be a kind of a chameleon - it should "imitate" as much other moving averages as it can. So the need for zillion different moving averages would diminish. And it should have some extra, of course:
The extras:
it has to be smooth
it has to be able to "change speed" without length change
it has to be able to adapt or not (since it has to "imitate" the non-adaptive as well as the adaptive ones)
The steps:
Smoothing - the basis and first step of this indicator is a smoothed simple moving average, with as little added lag as possible and staying as close to the original as possible. Speed 1 and non-adaptive are the reference for this basic setup.
Speed changing - the same chart with one more average added at "speeds" 2 and 3 (for comparison purposes only).
Finally - adapting: the same chart with the SMA compared to one more average with speed 1 but adaptive (these parameters make it a "smoothed adaptive simple average"). The adapting part is a modified Kaufman adaptation and may be subject to change in the future (it gives satisfactory results, but if or when I find a better way, it will be implemented here).
Some comparisons for different speed settings (all the comparisons are without adaptive turned on, and are approximate. The approximation comes from the fact that it is impossible to get exactly the same values from a single way of calculating, and frankly, I did not even try to get those same values).
speed 0.5 - T3 (0.618 Tilson)
speed 2.5 - T3 (0.618 Fulks/Matulich)
speed 1 - SMA , harmonic mean
speed 2 - LWMA
speed 7 - very similar to Hull and TEMA
speed 8 - very similar to LSMA and Linear regression value
Parameters:
Length - length (period) for averaging
Source - price to use for averaging
Speed - desired speed (I limited it to -1.5 on the lower side, but it does not even need that limit - some interesting results can be achieved with speeds less than 0)
Adaptive - does it adapt or not
Variety Moving Averages w/ Dynamic Zones contains 33 source types and 35+ moving averages with double dynamic zones levels.
What are Dynamic Zones?
As explained in "Stocks & Commodities V15:7 (306-310): Dynamic Zones by Leo Zamansky, Ph .D., and David Stendahl"
Most indicators use a fixed zone for buy and sell signals. Here’ s a concept based on zones that are responsive to past levels of the indicator.
One approach to active investing employs the use of oscillators to exploit tradable market trends. This investing style follows a very simple form of logic: Enter the market only when an oscillator has moved far above or below traditional trading lev- els. However, these oscillator- driven systems lack the ability to evolve with the market because they use fixed buy and sell zones. Traders typically use one set of buy and sell zones for a bull market and substantially different zones for a bear market. And therein lies the problem.
Once traders begin introducing their market opinions into trading equations, by changing the zones, they negate the system’s mechanical nature. The objective is to have a system automatically define its own buy and sell zones and thereby profitably trade in any market — bull or bear. Dynamic zones offer a solution to the problem of fixed buy and sell zones for any oscillator-driven system.
An indicator’s extreme levels can be quantified using statistical methods. These extreme levels are calculated for a certain period and serve as the buy and sell zones for a trading system. The repetition of this statistical process for every value of the indicator creates values that become the dynamic zones. The zones are calculated in such a way that the probability of the indicator value rising above, or falling below, the dynamic zones is equal to a given probability input set by the trader.
To better understand dynamic zones, let's first describe them mathematically and then explain their use. The dynamic zones definition:
Find V such that:
For dynamic zone buy: P{X <= V}=P1
For dynamic zone sell: P{X >= V}=P2
where P1 and P2 are the probabilities set by the trader, X is the value of the indicator for the selected period and V represents the value of the dynamic zone.
The probability input P1 and P2 can be adjusted by the trader to encompass as much or as little data as the trader would like. The smaller the probability, the fewer data values above and below the dynamic zones. This translates into a wider range between the buy and sell zones. If a 10% probability is used for P1 and P2, only those data values that make up the top 10% and bottom 10% for an indicator are used in the construction of the zones. Of the values, 80% will fall between the two extreme levels. Because dynamic zone levels are penetrated so infrequently, when this happens, traders know that the market has truly moved into overbought or oversold territory.
Calculating the Dynamic Zones
The algorithm for the dynamic zones is a series of steps. First, decide the value of the lookback period t. Next, decide the value of the probability Pbuy for buy zone and value of the probability Psell for the sell zone.
For i=1, to the last lookback period, build the distribution f(x) of the price during the lookback period i. Then find the value Vi1 such that the probability of the price less than or equal to Vi1 during the lookback period i is equal to Pbuy. Find the value Vi2 such that the probability of the price greater or equal to Vi2 during the lookback period i is equal to Psell. The sequence of Vi1 for all periods gives the buy zone. The sequence of Vi2 for all periods gives the sell zone.
In the algorithm description, we have: Build the distribution f(x) of the price during the lookback period i. The distribution here is empirical namely, how many times a given value of x appeared during the lookback period. The problem is to find such x that the probability of a price being greater or equal to x will be equal to a probability selected by the user. Probability is the area under the distribution curve. The task is to find such value of x that the area under the distribution curve to the right of x will be equal to the probability selected by the user. That x is the dynamic zone.
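As an approximation of this procedure, Pine Script™'s percentile function can stand in for the empirical-distribution step. The sketch below works under that assumption and is not the implementation used by the indicator; the input names and defaults are made up for illustration.

```
//@version=5
indicator("Dynamic zones sketch")

src    = input.source(close, "Indicator source")      // any oscillator could be used here
length = input.int(70, "Lookback period t")
pBuy   = input.float(10.0, "Buy-zone probability %",  minval = 0, maxval = 100)
pSell  = input.float(10.0, "Sell-zone probability %", minval = 0, maxval = 100)

// The buy zone is the value below which `pBuy` percent of the lookback data falls;
// the sell zone is the value above which `pSell` percent of the data falls.
buyZone  = ta.percentile_nearest_rank(src, length, pBuy)
sellZone = ta.percentile_nearest_rank(src, length, 100 - pSell)

plot(src, "Indicator")
plot(buyZone,  "Dynamic buy zone",  color.green)
plot(sellZone, "Dynamic sell zone", color.red)
```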
Included
4 signal types
Bar coloring
Alerts
Channels fill
Dekidaka-Ashi - Candles And Volume Teaming Up (Again)The introduction of candlestick methods for market price data visualization might be one of the most important events in the history of technical analysis, as it totally changed the way we look at trading charts. Candlestick charts are extremely efficient, as they allow the trader to visualize the opening, high, low and closing prices (OHLC) all at the same time, something impossible with a traditional line chart. Candlesticks are also cleaner than bar charts and make more efficient use of space. Japanese people are always better than everyone else at an incredible amount of stuff; look at what they made: candlestick/renko/kagi/heikin-ashi charts, the Ichimoku, manga, ecchi...
However, classical candlesticks only include historical market price data and leave out other types of data such as volume, which many investors consider key information for effective financial forecasting since volume is an indicator of trading activity. To tackle this problem, solutions were proposed, the most common being to adapt the width of the candle based on the amount of volume; this is the most widely accepted method for visualizing both volume and OHLC data using candlesticks.
So why propose an additional tool for volume data visualization? Because the classical width approach doesn't provide easily usable volume data (the volume is only encoded in the candle width). Therefore, a new candlestick-based trading tool that gives the trader access to volume information is proposed. The approach is based on rescaling the volume directly to the price without the direct use of user settings. We will also see that this tool allows creating support and resistance levels as well as providing signals based on a breakout methodology.
Dekidaka-Ashi - Kakatte Koi Yo!
"Dekidaka" (出来高) mean "Volume" in a financial context, while "Ashi" (足) mean "leg" or "bar". In general methods based on candlesticks will have "Ashi" in their name.
Now that the name of the indicator has been explained lets see how it works, the indicator should be overlayed directly to a candlestick chart. The proposed method don't alter the shape of the candlesticks and allow to visualize any information given by the candles. As you can see on the figure below the candle body of the proposed tool only return the border of the candle, this allow to show the high/low wick of the candle.
The body size of the candle is based on two things : the absolute close/open difference, and the volume, if the absolute close/open difference is high and the volume is high then the body of the candle will be clearly visible, if the volume is high but the absolute close/open difference is low, then the body will be less visible. This approach is used because of the rescaling method used, the volume is divided by the sum between the current volume value and the precedent volume value, this rescale the volume in a (0,1) range, this result is multiplied by the absolute close/open difference and added/subtracted to the high/low price. The original approach was based on normalization using the rolling maximum, but this approach would have led to repainting.
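Here is a minimal Pine Script™ sketch of a literal reading of that rescaling step. It illustrates the arithmetic only; the published script's exact plotting of the candle bodies may differ.

```
//@version=5
indicator("Dekidaka-Ashi rescaling sketch", overlay = true)

bodySize = input.float(1.0, "Body size")

// Volume rescaled into a (0,1) range using the current and previous volume values.
scaled = volume / (volume + volume[1])

// The rescaled volume is multiplied by the absolute close/open difference,
// then added to the high and subtracted from the low.
extension    = scaled * math.abs(close - open) * bodySize
dekidakaHigh = high + extension
dekidakaLow  = low  - extension

plot(dekidakaHigh, "Dekidaka-high", color.teal)
plot(dekidakaLow,  "Dekidaka-low",  color.maroon)
```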
You have access to certain settings that can help you obtain a better visualization, the first one being the body size setting, with higher values increasing the body amplitude.
In green, the body with size 2; in red, with size 1. The smooth parameter smooths the volume data before it is used, which allows creating more visible bodies.
Here smooth = 100.
Making Bands From The Dekidaka-Ashi
This tool is made so it outputs two rescaled volume values, the highest one being denoted "Dekidaka-high" and the lowest one "Dekidaka-low". In order to get bands, we must use two moving averages, one using Dekidaka-high as input and the other using Dekidaka-low. The body size parameter should be fairly high, therefore I will hide the tool as it could make it harder to visualize the bands.
Bands with both MA's of period 20 and the body size equal to 20. Larger periods of the MA's will require a larger amount of body size.
Breakout Signals
There is a wide variety of signals that can be made from candles; ones I personally like come from the HA candles. The proposed tool is no exception and can produce a wide variety of signals. The signals generated are basic ones based on a breakout methodology; here is each signal with its associated label (a minimal sketch follows the list):
Strong Bullish signal "⇈" : The high price crosses the Dekidaka-high and the closing price is greater than the opening price
Strong Bearish signal "⇊" : The low price crosses the Dekidaka-low and the closing price is lower than the opening price
Weak Bullish signal "↑" : The high price crosses the Dekidaka-high and the closing price is lower than the opening price
Weak Bearish signal "↓" : The low price crosses the Dekidaka-low and the closing price is greater than the opening price
Uncertain "↕" : The high price crosses the Dekidaka-high and the low price crosses the Dekidaka-low
In order to see the signals on the chart, check the "Show signals" option. Note that such signals are not based on an advanced study, and even though they follow a breakout methodology, we can see that volatile movements rarely produce signals; therefore signals mostly occur during low volume/volatility periods, which isn't necessarily a great thing.
Conclusion
A trading tool based on candlesticks that aims to include volume information has been presented and a brief methodology has been introduced. A study of the signals generated is required; however, I'm not confident at all in their accuracy, and I could work on that in the future. We have also seen how to make bands from the tool.
Candlesticks remain a beautiful charting technique that can provide an enormous amount of information to the trader, and even if the accuracy of patterns based on candlesticks is subject to debates, we can all agree that candlesticks will remain the most widely used type of financial chart.
On a side note, I mostly use a dark color for a bullish candle and a light gray for a bearish candle, with the border color being the same as the bullish candle color. This is in my opinion the best setup for a candlestick chart, as candles using the traditional green/red can kill the eyes, and because this setup allows applying a wide variety of colors to the plots of overlaid indicators without fear of conflicting with the candle colors.
Thanks for reading ! :3 Nya
A Word
This morning I received some hateful messages on Twitter, the users behind them certainly coming from TradingView, so let's be clear: I know I'm not the most liked person in this community, I know that perfectly well, but no one deserves to receive hateful messages. I'm not responsible for the losses of people using my indicators, nor is TradingView; using technical indicators does not guarantee long-term returns, and your ability to be profitable will mostly be based on the quality and quantity of knowledge you have.
FX Meter Script
A while ago, we wrote* about the usefulness of using a currency strength meter and how you can build one from scratch.
See here: www.globalprime.com.au
Now we've taken this little project to the next level by visually spotting, via color signals in a dashboard and alerts, when a potential new trend might be developing in a currency pair.
*It's critical that you first read that article before you jump into reading this one or else you could get easily lost.
The script gives a trigger every time two currencies show diverging flows via opposing moving average slopes.
The signals originate from a first chart where currency indexes can be found, calculated through a formula, as various thin lines. A moving average is then applied to each currency index to smooth out the lines (what I call micro moving averages, the thicker lines); it is usually a 4-5 period MA, with the key input to pay attention to being its slope. One can perform their own tests on what works best for their particular trading style. The smaller the period of the moving average, the more responsive it is to changes in biases, but the downside is that you will get a greater number of false moves. In the windows below the 1st chart, the stochRSI is calculated for each currency index (these values originate from the currency index and not from the applied MA). By default, a 25-period length is applied to both the RSI and the Stoch.
A 2nd chart following the same logic is also used to build this script, but instead of checking the micro trend, it applies a 25-period MA to the currency index, so it looks at what I call the slope of the macro trend. In this case, by default, a 125-period length is applied to both the RSI and the Stoch.
We had in mind to transition from just eye-balling and monitoring these charts manually to building a script on TradingView that makes the calculations in real time (whenever the change in the moving average slope first occurs, and not when the bar/line closes), so that one can decide whether or not it's a signal worth trading as part of a new trend emerging. Note, this is not so much a signal-triggering indicator but rather a tool to constantly monitor which currencies might start to develop trends.
The actual script consists of a dashboard with different colored rectangles being triggered depending on the quality of the signal.
We will be happy to discuss it further with anyone who is interested in exploiting all the benefits that it can offer.
The way you add the script to your TradingView chart is by first copying everything in the txt file. Then go to the Pine editor (bottom middle-left) in your TradingView chart, delete everything there, then paste the script. Then click "Add to Chart" (top right of the Pine editor).
Note, you should add via the Anchored Text function the following list of pairs below, in this alphabetic order, on the right-hand side of the chart, as demonstrated above:
AUDCAD
AUDJPY
AUDNZD
AUDUSD
CADJPY
EURAUD
EURJPY
EURCAD
EURNZD
EURGBP
EURUSD
GBPAUD
GBPCAD
GBPJPY
GBPNZD
GBPUSD
NZDCAD
NZDJPY
NZDUSD
USDCAD
USDJPY
There are only 2 rules for the script to trigger a signal (see below). However, as I will elaborate further down, there are up to 6 different colors with which we can grade a signal.
RULE 1 -> 2 moving averages, which are a calculation applied to a currency index as shown in the micro trend above, exhibit slopes in opposite directions.
RULE 2 -> The Stoch RSI cannot be in overbought conditions if the slope of the moving average points higher, or in oversold conditions if the slope points lower.
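Expressed in Pine v5, the two rules for a hypothetical EURAUD signal might look like the sketch below. The index series are placeholders (single pairs standing in for the full currency index formulas), so this is only an illustration of the logic, not the script itself:

//@version=5
indicator("FX meter rules sketch")
// Placeholder index series; the real script uses the full currency index formulas.
eurIndex = request.security("FX:EURUSD", timeframe.period, close)
audIndex = request.security("FX:AUDUSD", timeframe.period, close)
maLen    = input.int(5,  "Micro MA length")
rsiLen   = input.int(25, "RSI length")
stochLen = input.int(25, "Stoch length")
eurMA = ta.sma(eurIndex, maLen)
audMA = ta.sma(audIndex, maLen)
eurStoch = ta.stoch(ta.rsi(eurIndex, rsiLen), ta.rsi(eurIndex, rsiLen), ta.rsi(eurIndex, rsiLen), stochLen)
audStoch = ta.stoch(ta.rsi(audIndex, rsiLen), ta.rsi(audIndex, rsiLen), ta.rsi(audIndex, rsiLen), stochLen)
// Rule 1: opposing slopes. Rule 2: no overbought reading on the rising side, no oversold reading on the falling side.
longEURAUD  = eurMA > eurMA[1] and audMA < audMA[1] and eurStoch < 80 and audStoch > 20
shortEURAUD = eurMA < eurMA[1] and audMA > audMA[1] and eurStoch > 20 and audStoch < 80
plotshape(longEURAUD,  "Long EURAUD",  shape.triangleup,   location.bottom, color.green)
plotshape(shortEURAUD, "Short EURAUD", shape.triangledown, location.top,    color.red)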
Note 1: Even though the chart is on a 60m timeframe by default (it can be changed to any timeframe), one gets the signal the moment the change of slope is identified, which means the indicator monitors changes in price tick by tick, and not on a candle close; otherwise one would get the trigger too late.
As an example of the highest-graded signal triggering (in green), a few hours ago we were given the visual cue that GBPCAD was experiencing a change of behavior. If we crosscheck the time the green-colored trigger was given with the actual GBPCAD chart, this is what we can observe. The pair is 30p higher since the trigger.
HOW TO SETUP ALERTS
One can easily set up a notification window each time the above rules are met. For example, if the EUR MA slope changes to bullish and the AUD MA slope changes to bearish, and neither of the 2 currency index values corresponding to these 2 moving averages (EUR and AUD) shows a stoch RSI in overbought (above 80) in the case of the EUR or oversold (below 20) in the case of the AUD, then the notification pop-up will show a customized line: Long EURAUD.
Note 1: Recording the slope of the macro moving average, which is usually a 25-period MA applied to the currency index, is not included in the rules that trigger a signal, but it is taken into account to grade the quality of each signal.
Note 2: I recommend setting each signal to trigger only once, or if you prefer, simply monitor the chart visually for the change of colors via the dashboard. The calculation resets and can appear again the moment the slope changes to the opposite direction, so it’s a very dynamic indicator that will alert you the second a pair of currencies starts trending.
Note 3: When the signal is triggered, the indicator draws a colored rectangle. Each signal notification should be colored based on the following logic below.
LOGIC TO QUALIFY SIGNALS
-> Any long micro position with the macro MAs in full agreement (i.e. Long EURAUD, Macro EUR up, Macro AUD down) is highlighted with green color
-> Any long micro position with macro moving averages in partial agreement (for example Long EURAUD, Macro EUR up AUD up) is highlighted with blue color
-> Any long micro position with macro moving averages in full disagreement (for example Long EURAUD, Macro EUR down AUD up) is highlighted with magenta color
-> Any short micro position with macro moving averages in full agreement (for example Short EURAUD, Macro EUR down AUD up) is highlighted with red color
-> Any short micro position with macro moving averages in partial agreement (for example Short EURAUD, Macro EUR up AUD up) is highlighted with orange color
-> Any short micro position with macro moving averages in full disagreement (for example Short EURAUD, Macro EUR up AUD down) is highlighted with purple color
PARAMETERS IN THE SCRIPT SETTINGS
Overbought/oversold: One can modify the stoch RSI levels from which the indicator considers the value to be in overbought or oversold conditions. As a rule of thumb, consider 20/30 for oversold and 70/80 for overbought.
Slopes micro/macro MAs: One can edit the period of the micro MA (rule of thumb 4-5) and of the macro MA (by default 25).
Value StochRSI: The default inputs are K 3, D 3, RSI Length 25, Stoch Length 25 for the micro and 125 period for the macro.
Change colors: One can edit the assigned colors in the signals dashboard.
Timeframe applied: The indicator has the flexibility to be applied to any timeframe, not just the 60m default. Simply change the chart timeframe.
CURRENCY INDEXES FORMULAS
It is the responsibility of the user to keep the values of the indexes updated. Find a recent sample below, as per values in early April. What this means is that, at least once a week, in order not to let the values become outdated, you should update the script with the latest valuations in the denominators.
NZD INDEX -> FX_IDC:NZDAUD/0.96+FX:NZDJPY/75.81+FX:NZDUSD/0.68+FX_IDC:NZDEUR/0.6+FX_IDC:NZDGBP/0.52+FX:NZDCHF/0.69+FX:NZDCAD/0.9
EUR INDEX -> FX:EURUSD/1.13+FX:EURJPY/125.5+FX:EURGBP/0.87+FX:EURCHF/1.135+FX:EURCAD/1.49+FX:EURNZD/1.655+FX:EURAUD/1.59
JPY INDEX -> 1/(FX:USDJPY/110.5+FX:EURJPY/125.5+FX:AUDJPY/79+FX:NZDJPY/75.5+FX:GBPJPY/144.5+FX:CHFJPY/110.5+FX:CADJPY/84)
USD INDEX -> FX_IDC:USDEUR/0.88+FX:USDJPY/110.5+FX_IDC:USDGBP/0.77+FX:USDCHF+FX:USDCAD/1.315+FX_IDC:USDNZD/1.46+FX_IDC:USDAUD/1.4
CAD INDEX-> FX_IDC:CADAUD/1.07+FX_IDC:CADNZD/1.11+FX:CADJPY/84.27+FX_IDC:CADUSD/0.76+FX_IDC:CADEUR/0.67+FX:CADCHF/0.76+FX_IDC:CADGBP/0.58
GBP INDEX -> FX:GBPAUD/1.83+FX:GBPNZD/1.91+FX:GBPJPY/144.5+FX_IDC:GBPEUR/1.15+FX:GBPCHF/1.31+FX:GBPUSD/1.31+FX:GBPCAD/1.71
Remember, I have provided a manual on how to build a currency strength meter. That’s what you will need to do first if you want to obtain the actual currency indexes rather than just the indicator, which is only the visual cue that alerts you when the slopes turn.
Once you’ve created your indexes via TradingView, you then apply a moving average to each index. Then apply the 25-period stochRSI to each index. For the macro trend, I make the same calculations, but the period of the MA is 25 instead of 4, while the stochRSI is 125 periods vs 25 periods.
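As a sketch of what that first step can look like in Pine v5, here is the EUR index from the formula above, with the micro and macro MAs applied (the denominators are the early-April reference values quoted above and must be kept up to date):

//@version=5
indicator("EUR index sketch")
eurIndex = request.security("FX:EURUSD", timeframe.period, close) / 1.13 +
  request.security("FX:EURJPY", timeframe.period, close) / 125.5 +
  request.security("FX:EURGBP", timeframe.period, close) / 0.87 +
  request.security("FX:EURCHF", timeframe.period, close) / 1.135 +
  request.security("FX:EURCAD", timeframe.period, close) / 1.49 +
  request.security("FX:EURNZD", timeframe.period, close) / 1.655 +
  request.security("FX:EURAUD", timeframe.period, close) / 1.59
microMA = ta.sma(eurIndex, 5)    // micro trend: 4-5 period MA
macroMA = ta.sma(eurIndex, 25)   // macro trend: 25 period MA
plot(eurIndex, "EUR index", color.gray)
plot(microMA, "Micro MA", color.teal)
plot(macroMA, "Macro MA", color.orange)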
FINAL NOTE
This is a tool that should be interpreted as visual assistance, via the dashboard, to get that first cue when opposing micro slopes occur in the FX meter. However, you still need to check the technical context of the pair (levels marked, proj reached, etc.), but that first cue is a major time saver to constantly spot what's trending in FX. The permutations you can play with, as part of this script, are significant. You can tweak the timeframes you use, the periods of the moving averages, etc. I find the micro and macro trend combos when either a green or red signal is triggered the most reliable, with positions to be exploited via 15m and hourly charts under the right technical context.
Great Expectations [LucF]
Great Expectations helps traders answer the question: What is possible? It is a powerful question, yet exploration of the unknown always entails risk. A more complete set of questions better suited to traders could be:
What opportunity exists from any given point on a chart?
What portion of this opportunity can be realistically captured?
What risk will be incurred in trying to do so, and how long will it take?
Great Expectations is the result of an exploration of these questions. It is a trade simulator that generates visual and quantitative information to help strategy modelers visually identify and analyse areas of optimal expectation on charts, whether they are designing automated or discretionary strategies.
WARNING: Great Expectations is NOT an indicator that helps determine the current state of a market. It works by looking at points in the past from which the future is already known. It uses one definition of repainting extensively (i.e. it goes back in the past to print information that could not have been known at the time). Repainting understood that way is in fact almost all the indicator does! —albeit for what I hope is a noble cause. The indicator is of no use whatsoever in analyzing markets in real-time. If you do not understand what it does, please stay away!
This is an indicator—not a strategy that uses TradingView’s backtesting engine. It works by simulating trades, not unlike a backtest, but with the crucial difference that it assumes a trade (either long or short) is entered on all bars in the historic sample. It walks forward from each bar and determines possible outcomes, gathering individual trade statistics that in turn generate precious global statistics from all outcomes tested on the chart.
Great Expectations provides numbers summarizing trade results on all simulations run from the chart. Those numbers cannot be compared to backtest-produced numbers since all non-filtered bars are examined, even if an entry was taken on the bar immediately preceding the current one, which never happens in a backtest. This peculiarity does NOT invalidate Great Expectations calculations; it just entails that results be considered under a different light. Provided they are evaluated within the indicator’s context, they can be useful—sometimes even more than backtesting results, e.g. in evaluating the impact of parameter-fitting or variations in entry, exit or filtering strats.
Traders and strategy modelers are creatures of hope often suffering from blurred vision; my hope is that Great Expectations will help them appraise the validity of their setup and strat intuitions in a realistic fashion, preventing confirmation bias from obstructing perspective—and great expectations from turning into financial great deceptions.
USE CASES
You’ve identified what looks like a promising setup on other indicators. You load Great Expectations on the chart and evaluate if its high-expectation areas match locations where your setup’s conditions occur. Unless today is your lucky day, chances are the indicator will help you realize your setup is not as promising as you had hoped.
You want to get a rough estimate of the optimal trade duration for a chart and you don’t mind using the entry and exit strategies provided with the indicator. You use the trade length readouts of the indicator.
You’re experimenting with a new stop strategy and want to know how long it will keep you in trades, on average. You integrate your stop strategy in the indicator’s code and look at the average trade length it produces and the TST ratio to evaluate its performance.
You have put together your own entry and exit criteria and are looking for a filter that will help you improve backtesting results. You visually ascertain the suitability of your filter by looking at its results on the charts with Great Expectations, to see if your filter is choosing its areas correctly.
You have a strategy that shows backtested trades on your chart. Great Expectations can help you evaluate how well your strategy is benefitting from high-opportunity areas while avoiding poor expectation spots.
You want more complete statistics on your set of strategies than what backtesting will provide. You use Great Expectations, knowing that it tests all bars in the sample that correspond to your criteria, as opposed to backtesting results which are limited to a subset of all possible entries.
You want to fool your friends into thinking you’ve designed the holy grail of indicators, something that identifies optimal opportunities on any chart; you show them the P&L cloud.
FEATURES
For one trade
At any given point on the chart, assuming a trade is entered there, Great Expectations shows you information specific to that trade simulation both on the chart and in the Data Window.
The chart can display:
the P & L Cloud which shows whether the trade ended profitably or not, and by how much,
the Opportunity & Risk Cloud, which shows the maximum opportunity and risk the simulation encountered. When superimposed over the P & L cloud, you will see what I call the managed opportunity and risk, i.e. the portion of the maximum opportunity that was captured and the portion of the maximum risk that was incurred,
the target and if it was reached,
a background that uses a gradient to show different levels of trade length, P&L or how frequently the target was reached during simulation.
The Data Window displays more than 40 values on individual trades and global results. For any given trade you will know:
Entry/Exit levels, including slippage impact,
Its outcome and duration,
P/L achieved,
The fraction of the maximum opportunity/risk managed by the trade.
For all trades
After going through all the possible trades on the chart, the indicator will provide you with a rare view of all outcomes expressed with the P&L cloud, which allows us to instantly see the most/least profitable areas of a chart using trade data as support, while also showing its relationship with the opportunity/risk encountered during the simulation. The difference between the two clouds is the managed opportunity and risk.
The Data Window will present you with numbers which we will go through later. Some of them are: average stop size, P/L, win rate, % opportunity managed, trade lengths for different types of trade outcomes and the TST (Target:Stop Travel) ratio.
Let’s see Great Expectations in action… and remember to open your Data Window!
INPUTS
Trade direction : You must first choose if you wish to look at long or short trades. Because of the way the indicator works and the amount of visual information on the chart, it is only practical to look at one type of trades at a time. The default is Longs.
Maximum trade Length (MaxL) : This is the maximum walk-forward distance the simulator will go in analyzing outcomes from any given point in the past. It also determines the size of the dead zone among the chart’s last bars. A red background line identifies the beginning of the dead zone for which not enough bars have elapsed to analyze outcomes for the maximum trade length defined. If an ATR-based entry stop is used, that length is added to the wait time before beginning simulations, so that the first entry starts with a clean ATR value. On a sample of around 16000 bars, my tests show that the indicator runs into server errors at lengths of around 290, i.e. having completed ~4.6M simulation loop iterations. That is way too high a length anyway; 100 will usually be amply enough to wring all the possibilities out of a simulation, and on shorter time frames, 30 can be enough. While making it unduly small will prevent simulations from expressing the market’s potential, the less you use, the faster the indicator will run. The default is 40.
Unrealized P&L base at End of Trade (EOT) : When a simulation ends and the trade is still open, we calculate unrealized P&L from an exit order executed from either the last in-trade stop on the previous bar, or the close of the last bar. You can readily see the impact of this selection on the chart, with the P&L cloud. The default is on the close.
Display : The checkbox beside the title does nothing.
Show target : Shows a green line displaying the trade’s target expressed as a multiple of X, i.e. the amplitude of the entry stop. I call this value “X” and use it as a unit to express profit and loss on a trade (some call it “R”). The line is highlighted for trades where the close reached the target during the trade, whether the trade ended in profit or loss. This is also where you specify the multiple of X you wish to use in calculating targets. The multiple is used even if targets are not displayed.
Show P&L Cloud : The cloud allows traders to see right away the profitable areas of the chart. The only line printed with the cloud is the “end of trade line” (EOT). The EOT line is the only way one can see the level where a trade ended on the chart (in the Data Window you can see it as the “Exit Fill” value). The EOT level for the trade determines if the trade ended in a profit or a loss. Its value represents one of the following:
- fill from order executed at close of bar where stop is breached during trade (which produces “Realized P/L”),
- simulation of a pseudo-fill at the user-defined EOT level (last close or stop level) if the trade runs its course through MaxL bars without getting stopped (producing Unrealized P/L).
The EOT line and the cloud fill print in green when the trade’s outcome is profitable and in red when it is not. If the trade was closed after breaching the stop, the line appears brighter.
Show Opportunity&Risk Cloud : Displays the maximum opportunity/risk that was present during the trade, i.e. the maximum and minimum prices reached.
Background Color Scheme : Allows you to choose between 3 different color schemes for the background gradients, to accommodate different types of chart background/candles. Select “None” if you don’t want a background.
Background source : Determines what value will be used to generate the different intensities of the gradient. You can choose trade length (brighter is shorter), Trade P&L (brighter is higher) or the number of times the target was reached during simulation (brighter is higher). The default is Trade Length.
Entry strat : The checkbox beside the title does nothing. The default strat is All bars, meaning a trade will be simulated from all bars not excluded by the filters where a MaxL-bar future exists. For fun, I’ve included a pseudo-random entry strat (an indirect way of changing the seed is to vary the starting date of the simulation).
Show Filter State : Displays areas where the combination of filters you have selected are allowing entries. Filtering occurs as per your selection(s), whether the state is displayed or not. The effect of multiple selections is additive. The filters are:
1. Bar direction: Longs will only be entered if close>open and vice versa.
2. Rising Volume: Applies to both long and shorts.
3. Rising/falling MA of the length you choose over the number of bars you choose.
4. Custom indicator: You can feed your own filtering signal through this from another indicator. It must produce a signal of 1 to allow long entries and 0 to allow shorts.
Show Entry Stops :
1. Multiple of user-defined length ATR.
2. Fixed percentage.
3. Fixed value.
All entry stops are calculated using the entry fill price as a reference. The fill price is calculated from the current bar’s open, to which slippage is added if configured. This simulates the case where the strategy issued the entry signal on the previous bar for it to be executed at the next bar’s open.
The entry stop remains active until the in-trade stop becomes the more aggressive of the two stops. From then on, the entry stop will be ignored, unless a bar close breaches the in-trade stop, in which case the stop will be reset with a new entry stop and the process repeats.
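A minimal Pine v5 sketch of the entry fill and an ATR-multiple entry stop for the long side, with hypothetical input names (the indicator's actual parameters may differ):

//@version=5
indicator("Entry stop sketch", overlay = true)
atrLen  = input.int(14, "ATR length")
atrMult = input.float(2.0, "Entry stop ATR multiple")
slipPct = input.float(0.05, "Slippage (%)")
// The entry signal is assumed to have fired on the previous bar, so the fill is this bar's open plus slippage.
entryFill = open * (1 + slipPct / 100)
entryStop = entryFill - ta.atr(atrLen) * atrMult
plot(entryStop, "Entry stop", color.red, style = plot.style_circles)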
Show In-trade stops : Displays in bright red the selected in-trade stop (be sure to read the note in this section about them).
1. ATR multiple: added/subtracted from the average of the two previous bars minimum/maximum of open/close.
2. A trailing stop with a deviation expressed as a multiple of entry stop (X).
3. A fixed percentage trailing stop.
Trailing stops deviations are measured from the highest/lowest high/low reached during the trade.
Note: There is a twist with the in-trade stops: for any given bar, the in-trade stop can hold multiple values, as each successive pass of the advancing simulation loop goes over it from a different entry point. What is printed is the stop from the loop that ended on that bar, which may have nothing to do with other instances of the trade’s in-trade stop for the same bar when visited from other starting points in previous simulations. There is just no practical way to print all stop values that were used for any given bar. While the printed entry stops are the actual ones used on each bar, the in-trade stops shown are merely the last instance used among many.
Include Slippage : if checked, slippage will be added/subtracted from order price to yield the fill price. Slippage is in percentage. If you choose to include slippage in the simulations, remember to adjust it by considering the liquidity of the markets and the time frame you’ll be analyzing.
Include Fees : if checked, fees will be subtracted/added to both realized and unrealized trade profits/losses. Fees are in percentage. The default fees work well for crypto markets but will need adjusting for others—especially in Forex. Remember to modify them accordingly as they can have a major impact on results. Both fees and slippage are included to remind us of their importance, even if the global numbers produced by the indicator are not representative of a real trading scenario composed of sequential trades.
Date Range filtering : the usual. Just note that the checkbox has to be selected for date filtering to activate.
DATA WINDOW
Most of the information produced by this indicator is made available in the Data Window, which you bring up by using the icon below the Watchlist and Alerts buttons at the right of the TV UI. Here’s what’s there.
Some of the information presented in the Data Window is standard trade data; other values are not so standard, e.g. the notions of managed opportunity and risk and the Target:Stop Travel ratio. The interplay between all the values provided by Great Expectations is inherently complex, even for a static set of entry/filter/exit strats. The constant updating entailed by the habitual process of progressive refinement in building strategies (the lot of strategy modelers) no doubt adds another level of complexity to the analysis of this indicator’s values. While I don’t want to sound like Wolfram presenting A New Kind of Science , I do believe that if you are a serious strategy modeler and spend the time required to get used to using all the information this indicator makes available, you may find it useful.
Trade Information
Entry Order : This is the open of the bar where simulation starts. We suppose that an entry signal was generated at the previous bar.
Entry Fill (including slip.) : The actual entry price, including slippage. This is the base price from which other values will be calculated.
Exit Order : When a stop is breached, an exit order is executed from the close of the bar that breached the stop. While there is no “In-trade stop” value included in the Data Window (other than the End of trade Stop previously discussed), this “Exit Order” value is how we can know the level where the trade was stopped during the simulation. The “Trade Length” value will then show the bar where the stop was breached.
Exit Fill (including slip.) : When the exit order is simulated, slippage is added to the order level to create the fill.
Chart: Target : This is the target calculated at the beginning of the simulation. This value also appears on the chart in teal. It is controlled by the multiple of X defined under the “Show Target” checkbox in the Inputs.
Chart: Entry Stop : This value also appears on the chart (the red dots under points where a trade was simulated). Its value is controlled by the Entry Strat chosen in the Inputs.
X (% Fill, including Fees) and X (currency) : This is the stop’s amplitude (Entry Fill – Entry Stop) + Fees. It represents the risk incurred upon entry and will be used to express P&L. We will show X expressed in both a percentage of the Entry Fill level (this value) and in currency (the next value). This value represents the risk in the risk:reward ratio and is considered to be a unit of 1 so that RR can be expressed as a single value (i.e. “2” actually meaning “1:2”).
Trade Length : If the trade was stopped, it’s the number of bars elapsed until then. The trade is then considered “Closed”. If the trade ends without being stopped (there is no profit-taking strat implemented, so the stop is the only exit strat), then the trade is “Open”, the length is MaxL and it will show in orange. Otherwise the value will print in green/red to reflect whether the trade is winning/losing.
P&L (X) : The P&L of the trade, expressed as a multiple of X, which takes into account fees paid at entry and exit. Given our default target setting at 2 units of “X”, a trade that closes at its target will have produced a P&L of +2.0, i.e. twice the value of X (not counting fees paid at exit). A trade that gets stopped late, 50% further than the entry stop’s level, will produce a P&L of -1.5X.
P&L (currency, including Fees) : same value as above, but expressed in currency.
Target first reached at bar : If price closed above the target during the trade (even if it occurs after the trade was stopped), this will show when. This value will be used in calculating our TST ratio.
Times Stop/Target reached in sim. : Includes all occurrences during the complete simulation loop.
Opportunity (X) : The highest/lowest price reached during a simulation, i.e. the maximum opportunity encountered, whether the trade was previously stopped or not, expressed as a multiple of X.
Risk (X) : The lowest/highest price reached during a simulation, i.e. the maximum risk encountered, whether the trade was previously stopped or not, expressed as a multiple of X.
Risk:Opportunity : The greater this ratio, the greater Opportunity is, compared to Risk.
Managed Opportunity (%) : The portion of Opportunity that was captured by the highest/lowest stop position, even if it occurred after a previous stop closed the trade.
Managed Risk (%) : The portion of risk that was protected by the lowest/highest stop position, even if it occurred after a previous stop closed the trade. When this value is greater than 100%, it means the trade’s stop is protecting more than the maximum risk, which is frequent. You will, however, never see close to those values for the Managed Opportunity value, since the stop would have to be higher than the Maximum opportunity. It is much easier to alleviate the risk than it is to lock in profits.
Managed Risk:Opportunity : The ratio of the two preceding values.
Managed Opp. vs. Risk : The Managed Opportunity minus the Managed Risk. When it is negative, which it most often is, it means your strat is protecting a greater portion of the risk than the opportunity it captures.
Global Numbers
Win Rate(%) : Percentage of winning trades over all entries. Open trades are considered winning if their last stop/close (as per user selection) locks in profits.
Avg X%, Avg X (currency) : Averages of the previously described values.
Avg Profitability/Trade (APPT) : This measures expectation using: Average Profitability Per Trade = (Probability of Win × Average Win) − (Probability of Loss × Average Loss) . It quantifies the average expectation/trade, which RR alone can’t do, as the probabilities of each outcome (win/lose) must also be used to calculate expectancy. The APPT combines the RR with the win rate to yield the true expectancy of a strategy (a worked example follows this list). In my usual way of expressing risk with X, APPT is the equivalent of the average P&L per trade expressed in X. An APPT of -1.5 means that we lose on average 1.5X/trade.
Equity (X), Equity (currency) : The cumulative result of all trade outcomes, expressed as a multiple of X. Multiplied by the Average X in currency, this yields the Equity in currency.
Risk:Opportunity, Managed Risk:Opportunity, Managed Opp. vs. Risk : The global values of the ones previously described.
Avg Trade Length (TL) : One of the most important values derived by going through all the simulations. Again, it is composed of either the length of stopped trades, or MaxL when the trade isn’t stopped (open). This value can help systems modelers shape the characteristics of the components they use to build their strategies.
Avg Closed Win TL and Avg Closed Lose TL : The average lengths of winning/losing trades that were stopped.
Target reached? Avg bars to Stop and Target reached? Avg bars to Target : For the trades where the target was reached at some point in the simulation, the number of bars to the first point where the stop was breached and where the target was reached, respectively. These two values are used to calculate the next value.
TST (Target:Stop Travel Ratio) : This tracks the ratio between the two preceding values (Bars to first stop/Bars to first target), but only for trades where the target was reached somewhere in the loop. A ratio of 2 means targets are reached twice as fast as stops.
The next values of this section are counts or percentages and are self-explanatory.
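To illustrate the APPT formula referenced above with hypothetical numbers: a strategy with a 40% win rate, an average win of +2.0X and an average loss of −1.0X yields APPT = (0.40 × 2.0) − (0.60 × 1.0) = +0.2X, a positive expectancy of one fifth of the risk unit per trade. The same win rate with an average win of only +1.0X gives (0.40 × 1.0) − (0.60 × 1.0) = −0.2X, a losing proposition despite a seemingly reasonable hit rate.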
Chart Plots
Contains chart plots of values already described.
NOTES
Optimization/Overfitting: There is a fine line between optimizing and overfitting. Tools like this indicator can lead unsuspecting modelers down a path of overfitting that often turns strategies into over-specialized beasts that do not perform elegantly when confronted with the real world. Proven testing strategies like walk-forward analysis will go a long way in helping modelers alleviate this risk.
Input tuning: Because the results generated by the indicator will vary with the parameters used in the active entry, filtering and exit strats, it’s important to realize that although it may be fun at first, just slapping the default settings on a chart and time frame will not yield optimal nor reliable results. While using ATR as often as possible (as I do in this indicator) is a good way to make strat parametrization adaptable, it is not a foolproof solution.
There is no data for the last MaxL bars of the chart, since not enough trade future has elapsed to run a simulation from MaxL bars back.
Modifying the code: I have tried to structure the code modularly, even if that entails a larger code base, so that you can adapt it to your needs. I’ve included a few token components in each of the placeholders designed for entry strategies, filters, entry stops and in-trade stops. This will hopefully make it easier to add your own. In the same spirit, I have also commented liberally.
You will find in the code many instances of standard trade management tasks that can be lifted to code TV strategies where, as I do in mine, you manage everything yourself and don’t rely on built-in Pine strategy functions to act on your trades.
Enjoy!
THANKS
To @scarf who showed me how plotchar() could be used to plot values without ruining scale.
To @glaz for the suggestion to include a Chandelier stop strat; I will.
To @simpelyfe for the idea of using an indicator input for the filters (if some day TV lets us use more than one, it will be useful in other modules of the indicator).
To @RicardoSantos for the random generator used in the random entry strat.
To all scripters publishing open source on TradingView; their code is the best way to learn.
To my trading buddies Irving and Bruno; who showed me way back how pro traders get it done.
Free Stock Screener
Missing great trade opportunities is annoying, and unless you have 12 screens or only trade one market, you are missing a lot of trades. To fix that, we created this free stock screener so you get notified instantly of potential great trading conditions in real time, right on your chart.
You get notified of trading benchmarks being met by the value being displayed on the scanner as well as a color change so that it grabs your attention and makes you aware that you should take a look at the other market and look for a potential trade. It also has built in alerts so you can have an alert notification go off when any of your trading conditions are met instead of needing to watch the scanner for color changes.
The screener will change the ticker symbol background color to green or red when price is above or below the previous daily range and above or below both VWAPs, respectively. This signals that the ticker is trending, which typically means it is a great time to trade that market and follow the trend.
This free stock screener allows you to scan up to 10 different markets at the same time for various different conditions so you always know what is going on with your favorite trading symbols. If you want to scan more tickers, just add the indicator to your chart again and change the table position to the other side of the screen and update the tickers on the 2nd screener, allowing you to have 20 tickers at a time.
The scanner can be fully customized by changing the markets that it screens and turning on or off as many of them as you would like. You can also turn on or off any of the different data sets so that you only get information about trading conditions that matter to you.
The screener can provide data on any type of market, such as stocks, crypto, futures, forex and more. Each ticker can be adjusted to whatever market you would like it to scan for data in the settings panel, the only limitation is that it will not provide data for the VWAP and volume trend score if the ticker you are screening does not provide volume data.
Screener Features
The scanner will provide the following types of data for each ticker that is turned on:
Volume - Provides a volume score compared to the average volume and notifies you of higher than normal volume and volume spikes on individual bars by changing colors.
Volatility - Provides a volatility score compared to the average volatility and notifies you of higher than normal volatility by changing colors.
Oscillator - Choose between the RSI or CCI. The value of that oscillator will be displayed and will notify you when values are in extreme ranges such as overbought or oversold conditions according to the threshold values you enter in the settings panel. When those thresholds have been breached, you will be notified by it changing color.
Big Candles - Compares the current candle to average previous candle sizes, and changes color to notify you of big candles including a big top wick, big bottom wick, big candle body and big candle high to low range.
Daily Level Touches & Trends - Calculates and displays various daily candle and intraday open price levels that act as support and resistance. Notifies you when price is touching any of the daily levels that are turned on. The levels you can have on are as follows: previous day high, previous day low or previous day open. It also will notify you when price is touching the current day’s open, NY 930am open, Asia 8pm open, London 2am open and NY midnight 12am open. It will also say “Above” if price is above the previous day’s high or it will say “Below” if price is below the previous day’s low. The color of the cell will also change when a level touch is happening or price is above the previous day high or below the previous day low.
VWAP - Choose from 2 different VWAP lengths, default settings are daily and weekly VWAPs. You will get notified if price touches either of the VWAPs and they will also say “Above” or “Below” if price is currently above or below each VWAP.
How To Use The Screener To Help You Trade
The main purpose of the screener is to scan other markets and notify you of potential good trading opportunities such as price bouncing off of the daily levels or VWAPs. It can also be used to know when price is trending according to the VWAPs and daily levels. Lastly, you can use it to know how the volume and volatility trends are currently which gives you more confidence in taking a trade with this data when volume and volatility are present.
Volume Score
When volume is high, this represents a good time to trade because there are many market participants and price is likely to be volatile while there is high volume which can present a lot of good trade setups for you to take.
The volume score shown on the screener measures the current volume trend compared to previous volume trends and turns that into a score where 100 means the current trend equals the previous volume trend. So any value above 100 means higher volume than normal and any value less than 100 means lower volume than normal.
In the settings panel, you can adjust the volume threshold that needs to be met for a volume notification to show up. The default setting is at 120, so you will get notified when the current volume trend score is 120 or higher or you can adjust that threshold value to whatever value you prefer.
It also will notify you when there is a volume spike on the current bar. This is determined by calculating an average of the recent volume totals and then checking to see if the current bar is greater than or equal to that average multiplied by 3. So if a single bar has volume that is greater than 3 times what the average volume is, then you will get a notification that says “Spike” to make you aware of that volume spike.
The volume trend threshold, volume spike multiplier and lookback length for the average volume used in volume spike calculations can all be adjusted in the settings panel to fit your desired preferences.
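Since the exact averaging windows are not disclosed in the description, the Pine v5 sketch below uses assumed lookback lengths purely to illustrate the score and spike logic:

//@version=5
indicator("Volume score sketch")
trendLen  = input.int(20,  "Volume trend length")      // assumed window for the current trend
baseLen   = input.int(100, "Volume baseline length")   // assumed window for the normal/previous trend
threshold = input.float(120, "Volume trend threshold")
spikeMult = input.float(3.0, "Volume spike multiplier")
spikeLen  = input.int(20,  "Volume spike lookback")
// 100 = current volume trend equals the baseline; above 100 = higher than normal volume.
volScore = ta.sma(volume, trendLen) / ta.sma(volume, baseLen) * 100
highVol  = volScore >= threshold
volSpike = volume >= ta.sma(volume, spikeLen) * spikeMult
plot(volScore, "Volume trend score")
bgcolor(highVol  ? color.new(color.teal,   80) : na)
bgcolor(volSpike ? color.new(color.orange, 70) : na)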
Volatility Score
High volatility can mean it is a great time to trade because the market is moving quickly and providing large enough movements that you can get in and out in a short amount of time, while still accruing decent sized trade PnL.
The volatility score will calculate the current volatility for each market compared to previous conditions and then divide the current volatility by the average volatility to give you a volatility score. Anything over 100 means the market is decently volatile and you should look at that market to find potential trade setups to execute on. Anything below 100 means the market is not very volatile and it is usually best to just wait until volatility returns before you start trading again.
The screener will notify you when the volatility score is above the threshold you set. The default value is set to 90, but can be adjusted to your preference. Pay attention to any market that shows an alert and take a look at that chart because the high volatility may present a good trade setup for you in the near future.
Oscillator Score
The oscillator data can be switched between Relative Strength Index(RSI) and Commodity Channel Index(CCI).
The RSI provides a value between 0 and 100 that indicates the momentum and strength of the recent price action. Many traders use the extremes of the 0-100 range to signal overbought or oversold conditions and use that as a sign to look for price to reverse in the near future. The typical values used for this and the default settings to provide notifications are: 70 for overbought and 30 for oversold. The scanner will notify you when the RSI value is considered overbought or oversold so you know to take a look at the chart and analyze if it is ready for a trade to be taken.
The CCI provides a value that can be used to determine the trend strength of the underlying asset when the oscillator moves above 100 or below -100. These extreme values are outside of the normal accumulation range and signify that price is moving strongly in that direction so it may be a good time to take a trade in the direction of the trend. The scanner will show you the value of the CCI for each market and notify you if that value is above 100 or below -100.
Both RSI and CCI settings can be adjusted in the settings panel to your desired settings so you have the exact oscillator settings you prefer to use as well as the exact values that you want to use for being notified.
Big Candles
Big candles can mean that many traders are buying or selling at the same time and many times indicate a good signal to trade in that same direction. That is why we included this calculation in the screener, so you are always aware when a large candle prints.
It calculates the average size of the recent candles and then uses that average as the benchmark to determine if the current candle is considered big and worthy of notifying you to take a look at that chart.
You can adjust the multiplier used for the big candle threshold to whatever you desire, but the default setting is 3 which means the candle will be considered big and notify you if it is 3 times as large as an average candle.
The big candles data will track the following candle values and notify you with these labels:
High to Low candle size = HL
Candle Body from open to close candle size = OC
Top Wick size = TW
Bottom Wick size = BW
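A minimal Pine v5 sketch of those four measurements, using the default 3× multiplier and an assumed lookback for the averages:

//@version=5
indicator("Big candle sketch", overlay = true)
avgLen  = input.int(20, "Average candle lookback")   // assumed lookback
bigMult = input.float(3.0, "Big candle multiplier")
hlSize = high - low
ocSize = math.abs(close - open)
twSize = high - math.max(open, close)
bwSize = math.min(open, close) - low
bigHL = hlSize >= ta.sma(hlSize, avgLen) * bigMult
bigOC = ocSize >= ta.sma(ocSize, avgLen) * bigMult
bigTW = twSize >= ta.sma(twSize, avgLen) * bigMult
bigBW = bwSize >= ta.sma(bwSize, avgLen) * bigMult
plotchar(bigHL, "HL", "H", location.abovebar, color.yellow)
plotchar(bigOC, "OC", "O", location.abovebar, color.orange)
plotchar(bigTW, "TW", "T", location.abovebar, color.aqua)
plotchar(bigBW, "BW", "B", location.belowbar, color.fuchsia)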
Daily Level Touches & Trend
Daily level touches are excellent levels to watch for price to bounce because they often act as support and resistance levels for intraday trading. The scanner will track each market and notify you when the current candle is touching any of the daily levels that you have turned on in the settings panel.
The main levels that are turned on by default and are useful for all markets and how they will be labeled on the scanner are as follows:
Previous Day High = High
Previous Day Low = Low
Previous Day Open = < Open
Previous Day Close = Close
Current Day Open = Open
We also included some extra levels that are useful for futures traders. They are as follows:
NY 930am Open = 930am
NY 12am Midnight Open = 12am
Asia Open at 8pm NY time = Asia
London Open at 2am NY Time = London
Watch how price reacts to these levels and then trade the bounces off of these levels if the price action confirms that it is going to respect that level.
When price is currently above the previous day high, the scanner will say “Above” and show a green color, indicating a bullish trend and that price is above the previous daily candle’s high.
When price is currently below the previous day low, the scanner will say “Below” and show a red color, indicating a bearish trend and that price is below the previous daily candle’s low.
Pay attention to when price is trending above or below the previous daily candle as those trends can provide excellent trend trading opportunities.
The daily levels that you have turned on in the settings will also show as lines on the chart and include a label next to them, identifying each level so you know what each line represents. You can turn on or off all of the lines shown on the chart in the main settings or turn them off one by one in the style panel of the settings. Labels can also be turned on or off for all of the lines in the main settings panel. You can adjust the label positioning in the Label Offset section of the settings panel.
VWAP Touches & Trend
VWAP stands for volume weighted average price and is a very popular tool that traders use to determine trend direction based on volume as well as an excellent level to trade price bounces off of.
The typical VWAP time period used is Daily, which means the volume weighted average price will reset at the beginning of a new day. We set the first VWAP to be the daily VWAP by default and the second one to be the weekly VWAP. You can adjust both of the time periods to be any of the provided time lengths that you choose.
The screener will show “Above” with a green background color when price is above the VWAP, indicating a bullish trend. It will show “Below” with a red background color when price is below the VWAP, indicating a bearish trend. When both VWAPs are showing Above or Below, you can expect price to trend in that direction, so look for pullbacks you can trade in the direction of the trend. If the VWAPs are showing different directions, then you should expect price to bounce back and forth between the VWAPs, but be careful and watch out for price to break beyond either one and start a trend.
When the current candle is touching the VWAP, the scanner will change colors and say VWAP to notify you that price is touching the VWAP and you should look at that chart and analyze the market for a potential bounce off of the VWAP to trade.
Trending Market Signals
Strong trends are excellent markets to trade and can often provide excellent trading opportunities that don’t require expert price action reading skills to produce winning trades. That is why we included a signal to notify you of a strongly trending market.
The strong trending market will show up as a green or red background color for the ticker name. If the color of the ticker name is green, it is notifying you that the price is above the previous daily high, above VWAP 1 and above VWAP 2 and is a good market to look for bullish trend trades. If the color of the ticker name is red, it is notifying you that the price is below the previous daily low, below VWAP 1 and below VWAP 2 and is a good market to look for bearish trend trades.
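As a hedged Pine v5 sketch of that trend condition (with a hand-rolled weekly VWAP standing in for the screener's second VWAP, since the actual implementation is not shown):

//@version=5
indicator("Trend signal sketch", overlay = true)
prevDayHigh = request.security(syminfo.tickerid, "D", high[1], lookahead = barmerge.lookahead_on)
prevDayLow  = request.security(syminfo.tickerid, "D", low[1],  lookahead = barmerge.lookahead_on)
vwap1 = ta.vwap(hlc3)                      // daily (session) VWAP
// Simple weekly VWAP: cumulative price*volume over cumulative volume, reset at the start of each week.
var float sumPV = na
var float sumV  = na
newWeek = timeframe.change("W")
sumPV := newWeek ? hlc3 * volume : sumPV + hlc3 * volume
sumV  := newWeek ? volume        : sumV  + volume
vwap2 = sumPV / sumV
bullTrend = close > prevDayHigh and close > vwap1 and close > vwap2
bearTrend = close < prevDayLow  and close < vwap1 and close < vwap2
barcolor(bullTrend ? color.green : bearTrend ? color.red : na)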
Changing The Tickers It Scans
To change the tickers that the indicator scans, scroll near the bottom of the settings panel and select the ticker symbol you want to update and then search for the exact symbol you want to use. If you want to scan less tickers, then just turn some of the tickers off that you don’t need.
Scanning More Than 10 Tickers
If you want to scan more than 10 tickers, you can add the scanner to your chart again and then just change the table position to the other side of the screen. This will allow you to scan 10 more tickers that will show up separately. Then if you want even more, just add the indicator to your chart again and update the table position until you have as many markets as you want. The table position setting can be found at the bottom of the main settings panel.
Alerts
The screener has alerts that can be used to notify you when any of the data set thresholds have been met or if price is touching one of the levels. You can set alerts for the following events:
Bullish Trend Alert - Price is above the previous daily high and above both VWAPs.
Bearish Trend Alert - Price is below the previous daily low and below both VWAPs.
High Volume Alert - Volume is higher than the threshold or a volume spike is detected.
High Volatility Alert - Volatility is higher than the threshold.
Oscillator Is Extended Alert - Oscillator value has exceeded the upper or lower threshold.
Big Candle Alert - A big candle has been detected.
Daily Level Touch Alert - One of the daily levels that is turned on is being touched.
VWAP Touch Alert - One of the 2 VWAPs is being touched.
An alert will trigger when any one of the tickers on your scanner meets the alert conditions, so when you see the alert, you will need to go to your chart and look at the scanner to see which ticker it was and then navigate to that chart to look for potential trade setups.
The alerts will use the exact same settings you have configured in the settings panel to send you alert notifications. With normal settings, this could give you a lot of alerts, so if you only want alerts to fire when abnormal conditions are being met, try setting up a second screener on your chart that has very high threshold values and only has the most important level touches turned on. Then turn on the setting "Do Not Show The Screener On The Chart" so the calculations will still run and fire alerts but won't clog up your charts. This way you only get alert notifications when major events happen but still have your normal screener settings available on your chart.
Markets This Can Be Used On
This screener uses the price action and volume data so you can use it to scan any type of market you would like as long as the ticker you are scanning has price and volume data feeds. If a market does not have volume data, then it will just show NaN in the volume row and the VWAP rows will not show anything.
Dynamic Equity Allocation Model
"Cash is Trash"? Not Always. Here's Why Science Beats Guesswork.
Every retail trader knows the frustration: you draw support and resistance lines, you spot patterns, you follow market gurus on social media—and still, when the next bear market hits, your portfolio bleeds red. Meanwhile, institutional investors seem to navigate market turbulence with ease, preserving capital when markets crash and participating when they rally. What's their secret?
The answer isn't insider information or access to exotic derivatives. It's systematic, scientifically validated decision-making. While most retail traders rely on subjective chart analysis and emotional reactions, professional portfolio managers use quantitative models that remove emotion from the equation and process multiple streams of market information simultaneously.
This document presents exactly such a system—not a proprietary black box available only to hedge funds, but a fully transparent, academically grounded framework that any serious investor can understand and apply. The Dynamic Equity Allocation Model (DEAM) synthesizes decades of financial research from Nobel laureates and leading academics into a practical tool for tactical asset allocation.
Stop drawing colorful lines on your chart and start thinking like a quant. This isn't about predicting where the market goes next week—it's about systematically adjusting your risk exposure based on what the data actually tells you. When valuations scream danger, when volatility spikes, when credit markets freeze, when multiple warning signals align—that's when cash isn't trash. That's when cash saves your portfolio.
The irony of "cash is trash" rhetoric is that it ignores timing. Yes, being 100% cash for decades would be disastrous. But being 100% equities through every crisis is equally foolish. The sophisticated approach is dynamic: aggressive when conditions favor risk-taking, defensive when they don't. This model shows you how to make that decision systematically, not emotionally.
Whether you're managing your own retirement portfolio or seeking to understand how institutional allocation strategies work, this comprehensive analysis provides the theoretical foundation, mathematical implementation, and practical guidance to elevate your investment approach from amateur to professional.
The choice is yours: keep hoping your chart patterns work out, or start using the same quantitative methods that professionals rely on. The tools are here. The research is cited. The methodology is explained. All you need to do is read, understand, and apply.
The Dynamic Equity Allocation Model (DEAM) is a quantitative framework for systematic allocation between equities and cash, grounded in modern portfolio theory and empirical market research. The model integrates five scientifically validated dimensions of market analysis—market regime, risk metrics, valuation, sentiment, and macroeconomic conditions—to generate dynamic allocation recommendations ranging from 0% to 100% equity exposure. This work documents the theoretical foundations, mathematical implementation, and practical application of this multi-factor approach.
1. Introduction and Theoretical Background
1.1 The Limitations of Static Portfolio Allocation
Traditional portfolio theory, as formulated by Markowitz (1952) in his seminal work "Portfolio Selection," assumes an optimal static allocation where investors distribute their wealth across asset classes according to their risk aversion. This approach rests on the assumption that returns and risks remain constant over time. However, empirical research demonstrates that this assumption does not hold in reality. Fama and French (1989) showed that expected returns vary over time and correlate with macroeconomic variables such as the spread between long-term and short-term interest rates. Campbell and Shiller (1988) demonstrated that the price-earnings ratio possesses predictive power for future stock returns, providing a foundation for dynamic allocation strategies.
The academic literature on tactical asset allocation has evolved considerably over recent decades. Ilmanen (2011) argues in "Expected Returns" that investors can improve their risk-adjusted returns by considering valuation levels, business cycles, and market sentiment. The Dynamic Equity Allocation Model presented here builds on this research tradition and operationalizes these insights into a practically applicable allocation framework.
1.2 Multi-Factor Approaches in Asset Allocation
Modern financial research has shown that different factors capture distinct aspects of market dynamics and together provide a more robust picture of market conditions than individual indicators. Ross (1976) developed the Arbitrage Pricing Theory, a model that employs multiple factors to explain security returns. Following this multi-factor philosophy, DEAM integrates five complementary analytical dimensions, each tapping different information sources and collectively enabling comprehensive market understanding.
2. Data Foundation and Data Quality
2.1 Data Sources Used
The model draws its data exclusively from publicly available market data via the TradingView platform. This transparency and accessibility are a significant advantage over proprietary models that rely on non-public data. The data foundation encompasses several categories of market information, each capturing specific aspects of market dynamics.
First, price data for the S&P 500 Index is obtained through the SPDR S&P 500 ETF (ticker: SPY). Using a highly liquid ETF instead of the index itself is a practical choice, as ETF data is available in real time and reflects actual tradability. In addition to closing prices, high, low, and volume data are captured, which are required for calculating advanced volatility measures.
Fundamental corporate metrics are retrieved via TradingView's Financial Data API. These include earnings per share, price-to-earnings ratio, return on equity, debt-to-equity ratio, dividend yield, and share buyback yield. Cochrane (2011) emphasizes in "Presidential Address: Discount Rates" the central importance of valuation metrics for forecasting future returns, making these fundamental data a cornerstone of the model.
Volatility indicators are represented by the CBOE Volatility Index (VIX) and related metrics. The VIX, often referred to as the market's "fear gauge," measures the implied volatility of S&P 500 index options and serves as a proxy for market participants' risk perception. Whaley (2000) describes in "The Investor Fear Gauge" the construction and interpretation of the VIX and its use as a sentiment indicator.
Macroeconomic data includes yield curve information through US Treasury bonds of various maturities and credit risk premiums through the spread between high-yield bonds and risk-free government bonds. These variables capture the macroeconomic conditions and financing conditions relevant for equity valuation. Estrella and Hardouvelis (1991) showed that the shape of the yield curve has predictive power for future economic activity, justifying the inclusion of these data.
2.2 Handling Missing Data
A practical problem when working with financial data is dealing with missing or unavailable values. The model implements a fallback system where a plausible historical average value is stored for each fundamental metric. When current data is unavailable for a specific point in time, this fallback value is used. This approach ensures that the model remains functional even during temporary data outages and avoids systematic biases from missing data. The use of average values as fallback is conservative, as it generates neither overly optimistic nor pessimistic signals.
3. Component 1: Market Regime Detection
3.1 The Concept of Market Regimes
The idea that financial markets exist in different "regimes" or states that differ in their statistical properties has a long tradition in financial science. Hamilton (1989) developed regime-switching models that allow distinguishing between different market states with different return and volatility characteristics. The practical application of this theory consists of identifying the current market state and adjusting portfolio allocation accordingly.
DEAM classifies market regimes using a scoring system that considers three main dimensions: trend strength, volatility level, and drawdown depth. This multidimensional view is more robust than focusing on individual indicators, as it captures various facets of market dynamics. Classification occurs into six distinct regimes: Strong Bull, Bull Market, Neutral, Correction, Bear Market, and Crisis.
3.2 Trend Analysis Through Moving Averages
Moving averages are among the oldest and most widely used technical indicators and have also received attention in academic literature. Brock, Lakonishok, and LeBaron (1992) examined in "Simple Technical Trading Rules and the Stochastic Properties of Stock Returns" the profitability of trading rules based on moving averages and found evidence for their predictive power, although later studies questioned the robustness of these results when considering transaction costs.
The model calculates three moving averages with different time windows: a 20-day average (approximately one trading month), a 50-day average (approximately one quarter), and a 200-day average (approximately one trading year). The relationship of the current price to these averages and the relationship of the averages to each other provide information about trend strength and direction. When the price trades above all three averages and the short-term average is above the long-term, this indicates an established uptrend. The model assigns points based on these constellations, with longer-term trends weighted more heavily as they are considered more persistent.
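To make the scoring idea concrete, the following minimal Pine Script™ v6 sketch assigns placeholder point values to the moving-average conditions; DEAM's exact point values are not reproduced here, so the numbers are illustrative only.

```pine
//@version=6
indicator("Trend score sketch", overlay=false)

// Three moving averages on monthly, quarterly, and yearly horizons
float sma20  = ta.sma(close, 20)
float sma50  = ta.sma(close, 50)
float sma200 = ta.sma(close, 200)

// Placeholder point values: longer-term conditions are weighted more heavily
float trendScore = 0.0
trendScore += close > sma20  ? 10 : 0
trendScore += close > sma50  ? 15 : 0
trendScore += close > sma200 ? 25 : 0
trendScore += sma20 > sma50  ? 20 : 0  // short-term average above medium-term
trendScore += sma50 > sma200 ? 30 : 0  // medium-term average above long-term

plot(trendScore, "Trend score (0-100)", color.teal)
```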
3.3 Volatility Regimes
Volatility, understood as the standard deviation of returns, is a central concept of financial theory and serves as the primary risk measure. However, research has shown that volatility is not constant but changes over time and occurs in clusters—a phenomenon first documented by Mandelbrot (1963) and later formalized through ARCH and GARCH models (Engle, 1982; Bollerslev, 1986).
DEAM calculates volatility not only through the classic method of return standard deviation but also uses more advanced estimators such as the Parkinson estimator and the Garman-Klass estimator. These methods utilize intraday information (high and low prices) and are more efficient than simple close-to-close volatility estimators. The Parkinson estimator (Parkinson, 1980) uses the range between high and low of a trading day and is based on the recognition that this information reveals more about true volatility than just the closing price difference. The Garman-Klass estimator (Garman and Klass, 1980) extends this approach by additionally considering opening and closing prices.
The calculated volatility is annualized by multiplying it by the square root of 252 (the average number of trading days per year), enabling standardized comparability. The model compares current volatility with the VIX, the implied volatility from option prices. A low VIX (below 15) signals market comfort and increases the regime score, while a high VIX (above 35) indicates market stress and reduces the score. This interpretation follows the empirical observation that elevated volatility is typically associated with falling markets (Schwert, 1989).
3.4 Drawdown Analysis
A drawdown refers to the percentage decline from the highest point (peak) to the lowest point (trough) during a specific period. This metric is psychologically significant for investors as it represents the maximum loss experienced. Young (1991) introduced the Calmar Ratio, which relates return to maximum drawdown, underscoring the practical relevance of this metric.
The model calculates current drawdown as the percentage distance from the highest price of the last 252 trading days (one year). A drawdown below 3% is considered negligible and maximally increases the regime score. As drawdown increases, the score decreases progressively, with drawdowns above 20% classified as severe and indicating a crisis or bear market regime. These thresholds are empirically motivated by historical market cycles, in which corrections typically encompassed 5-10% drawdowns, bear markets 20-30%, and crises over 30%.
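A minimal sketch of this calculation, assuming the one-year high is taken from closing prices, follows below; the thresholds mirror those described above.

```pine
//@version=6
indicator("Drawdown sketch", overlay=false)

// Percentage distance from the highest close of the last 252 trading days
float peak252  = ta.highest(close, 252)
float drawdown = (peak252 - close) / peak252 * 100.0

// Thresholds from the text: below 3% negligible, above 20% severe
color ddColor = drawdown < 3 ? color.green : drawdown > 20 ? color.red : color.orange
plot(drawdown, "Drawdown from one-year high (%)", ddColor)
```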
3.5 Regime Classification
Final regime classification occurs through aggregation of scores from trend (40% weight), volatility (30%), and drawdown (30%). The higher weighting of trend reflects the empirical observation that trend-following strategies have historically delivered robust results (Moskowitz, Ooi, and Pedersen, 2012). A total score above 80 signals a strong bull market with established uptrend, low volatility, and minimal losses. At a score below 10, a crisis situation exists requiring defensive positioning. The six regime categories enable a differentiated allocation strategy that not only distinguishes binarily between bullish and bearish but allows gradual gradations.
4. Component 2: Risk-Based Allocation
4.1 Volatility Targeting as Risk Management Approach
The concept of volatility targeting is based on the idea that investors should maximize not returns but risk-adjusted returns. With the Sharpe Ratio, Sharpe (1966, 1994) defined the fundamental concept of return per unit of risk, measured as volatility. Volatility targeting goes a step further and adjusts portfolio allocation to achieve a constant target volatility. This means that in times of low market volatility, equity allocation is increased, and in times of high volatility, it is reduced.
Moreira and Muir (2017) showed in "Volatility-Managed Portfolios" that strategies that adjust their exposure based on volatility forecasts achieve higher Sharpe Ratios than passive buy-and-hold strategies. DEAM implements this principle by defining a target portfolio volatility (default 12% annualized) and adjusting equity allocation to achieve it. The mathematical foundation is simple: if market volatility is 20% and target volatility is 12%, equity allocation should be 60% (12/20 = 0.6), with the remaining 40% held in cash with zero volatility.
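The following sketch illustrates this relationship; it assumes a 20-day realized volatility estimate and caps the equity weight at 100%, which simplifies DEAM's actual multi-estimator approach.

```pine
//@version=6
indicator("Volatility targeting sketch", overlay=false)

float targetVol = input.float(12.0, "Target portfolio volatility (% p.a.)")

// Realized volatility: standard deviation of 20-day log returns, annualized with sqrt(252)
float logRet  = math.log(close / close[1])
float histVol = ta.stdev(logRet, 20) * math.sqrt(252) * 100.0

// Equity weight that would bring portfolio volatility to the target, capped at 100%
float equityWeight = math.min(1.0, targetVol / histVol) * 100.0
plot(equityWeight, "Equity allocation from volatility target (%)", color.blue)
```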
4.2 Market Volatility Calculation
Estimating current market volatility is central to the risk-based allocation approach. The model uses several volatility estimators in parallel and selects the higher value between traditional close-to-close volatility and the Parkinson estimator. This conservative choice ensures the model does not underestimate true volatility, which could lead to excessive risk exposure.
Traditional volatility calculation uses logarithmic returns, as these have mathematically convenient properties (they are additive across periods). The logarithmic return is calculated as ln(P_t / P_{t-1}), where P_t is the price at time t. The standard deviation of these returns over a rolling 20-trading-day window is then multiplied by √252 to obtain annualized volatility. This annualization assumes independently and identically distributed returns, which is an idealization but widely accepted in practice.
The Parkinson estimator uses additional information from the trading range (High minus Low) of each day. The formula is: σ_P = (1/√(4ln2)) × √(1/n × Σln²(H_i/L_i)) × √252, where H_i and L_i are high and low prices. Under ideal conditions, this estimator is approximately five times more efficient than the close-to-close estimator (Parkinson, 1980), as it uses more information per observation.
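The sketch below transcribes the Parkinson formula into Pine Script™ v6 and, following the conservative choice described in section 4.2, takes the maximum of the two estimators; the 20-day window is an assumption.

```pine
//@version=6
indicator("Parkinson volatility sketch", overlay=false)

int n = 20  // rolling window of roughly one trading month

// Parkinson (1980): variance = 1/(4*ln(2)) * mean of ln(High/Low)^2
float hlTerm    = math.pow(math.log(high / low), 2)
float parkVolPA = math.sqrt(ta.sma(hlTerm, n) / (4.0 * math.log(2.0))) * math.sqrt(252) * 100.0

// Classic close-to-close estimator for comparison
float ccVolPA = ta.stdev(math.log(close / close[1]), n) * math.sqrt(252) * 100.0

// Conservative choice: never underestimate volatility
float usedVol = math.max(parkVolPA, ccVolPA)
plot(usedVol, "Volatility estimate (% p.a.)", color.purple)
```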
4.3 Drawdown-Based Position Size Adjustment
In addition to volatility targeting, the model implements drawdown-based risk control. The logic is that deep market declines often signal further losses and therefore justify exposure reduction. This behavior corresponds with the concept of path-dependent risk tolerance: investors who have already suffered losses are typically less willing to take additional risk (Kahneman and Tversky, 1979).
The model defines a maximum portfolio drawdown as a target parameter (default 15%). Since portfolio volatility and portfolio drawdown are proportional to equity allocation (assuming cash has neither volatility nor drawdown), allocation-based control is possible. For example, if the market exhibits a 25% drawdown and target portfolio drawdown is 15%, equity allocation should be at most 60% (15/25).
4.4 Dynamic Risk Adjustment
An advanced feature of DEAM is dynamic adjustment of risk-based allocation through a feedback mechanism. The model continuously estimates what actual portfolio volatility and portfolio drawdown would result at the current allocation. If risk utilization (ratio of actual to target risk) exceeds 1.0, allocation is reduced by an adjustment factor that grows exponentially with overutilization. This implements a form of dynamic feedback that avoids overexposure.
Mathematically, a risk adjustment factor r_adjust is calculated: if risk utilization u > 1, then r_adjust = exp(-0.5 × (u - 1)). This exponential function ensures that moderate overutilization is gently corrected, while strong overutilization triggers drastic reductions. The factor 0.5 in the exponent was empirically calibrated to achieve a balanced ratio between sensitivity and stability.
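A sketch of this damping function, with the risk estimates replaced by simple inputs for illustration:

```pine
//@version=6
indicator("Risk adjustment sketch", overlay=false)

// Placeholder inputs: estimated portfolio risk at the current allocation vs. the target
float estimatedVol = input.float(14.0, "Estimated portfolio volatility (% p.a.)")
float targetVol    = input.float(12.0, "Target portfolio volatility (% p.a.)")

float utilization = estimatedVol / targetVol

// Exponential damping from the text, applied only when the target is exceeded
float riskAdjust = utilization > 1.0 ? math.exp(-0.5 * (utilization - 1.0)) : 1.0
plot(riskAdjust, "Risk adjustment factor", color.red)
```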
5. Component 3: Valuation Analysis
5.1 Theoretical Foundations of Fundamental Valuation
DEAM's valuation component is based on the fundamental premise that the intrinsic value of a security is determined by its future cash flows and that deviations between market price and intrinsic value are eventually corrected. Graham and Dodd (1934) established in "Security Analysis" the basic principles of fundamental analysis that remain relevant today. Translated into modern portfolio context, this means that markets with high valuation metrics (high price-earnings ratios) should have lower expected returns than cheaply valued markets.
Campbell and Shiller (1988) developed the Cyclically Adjusted P/E Ratio (CAPE), which smooths earnings over a full business cycle. Their empirical analysis showed that this ratio has significant predictive power for 10-year returns. Asness, Moskowitz, and Pedersen (2013) demonstrated in "Value and Momentum Everywhere" that value effects exist not only in individual stocks but also in asset classes and markets.
5.2 Equity Risk Premium as Central Valuation Metric
The Equity Risk Premium (ERP) is defined as the expected excess return of stocks over risk-free government bonds. It is the theoretical heart of valuation analysis, as it represents the compensation investors demand for bearing equity risk. Damodaran (2012) discusses in "Equity Risk Premiums: Determinants, Estimation and Implications" various methods for ERP estimation.
DEAM calculates ERP not through a single method but combines four complementary approaches with different weights. This multi-method strategy increases estimation robustness and avoids dependence on single, potentially erroneous inputs.
The first method (35% weight) uses the earnings yield, calculated as the inverse of the price-earnings ratio or directly from operating earnings data, and subtracts the 10-year Treasury yield. This method follows Fed Model logic (Yardeni, 2003), although that model has theoretical weaknesses because it does not treat inflation consistently (Asness, 2003).
The second method (30% weight) extends the earnings yield by the share buyback yield. Share buybacks are a form of capital return to shareholders and increase value per share. Boudoukh et al. (2007) showed in "On the Importance of Measuring Payout Yield" that the sum of dividend yield and buyback yield is a better predictor of future returns than dividend yield alone.
The third method (20% weight) implements the Gordon Growth Model (Gordon, 1962), which models stock value as the sum of discounted future dividends. Under the assumption of constant growth g: Expected Return = Dividend Yield + g. The model estimates sustainable growth as g = ROE × (1 - Payout Ratio), where ROE is return on equity and the payout ratio is the ratio of dividends to earnings. This formula follows from equity theory: retained earnings are reinvested at the ROE and generate additional earnings growth.
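A sketch of this method, with the fundamental metrics replaced by inputs (DEAM retrieves them from TradingView's financial data), might look as follows:

```pine
//@version=6
indicator("Gordon growth sketch", overlay=false)

// Placeholder fundamentals; DEAM sources these via TradingView's financial data
float dividendYield = input.float(1.5,  "Dividend yield (%)")
float roe           = input.float(18.0, "Return on equity (%)")
float payoutRatio   = input.float(35.0, "Payout ratio (%)")
float treasury10y   = input.float(4.0,  "10-year Treasury yield (%)")

// Sustainable growth: g = ROE * (1 - payout ratio)
float g = roe * (1.0 - payoutRatio / 100.0)

// Gordon model: expected equity return = dividend yield + growth
float expectedReturn = dividendYield + g

// Contribution to the equity risk premium: expected return minus the risk-free rate
float erpGordon = expectedReturn - treasury10y
plot(erpGordon, "ERP, Gordon method (%)", color.green)
```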
The fourth method (15% weight) combines total shareholder yield (Dividend + Buybacks) with implied growth derived from revenue growth. This method considers that companies with strong revenue growth should generate higher future earnings, even if current valuations do not yet fully reflect this.
The final ERP is the weighted average of these four methods. A high ERP (above 4%) signals attractive valuations and increases the valuation score to 95 out of 100 possible points. A negative ERP, where stocks have lower expected returns than bonds, results in a minimal score of 10.
5.3 Quality Adjustments to Valuation
Valuation metrics alone can be misleading if not interpreted in the context of company quality. A company with a low P/E may be cheap or fundamentally problematic. The model therefore implements quality adjustments based on growth, profitability, and capital structure.
Revenue growth above 10% annually adds 10 points to the valuation score, moderate growth above 5% adds 5 points. This adjustment reflects that growth has independent value (Modigliani and Miller, 1961, extended by later growth theory). Net margin above 15% signals pricing power and operational efficiency and increases the score by 5 points, while low margins below 8% indicate competitive pressure and subtract 5 points.
Return on equity (ROE) above 20% characterizes outstanding capital efficiency and increases the score by 5 points. Piotroski (2000) showed in "Value Investing: The Use of Historical Financial Statement Information" that fundamental quality signals such as high ROE can improve the performance of value strategies.
Capital structure is evaluated through the debt-to-equity ratio. A conservative ratio below 1.0 multiplies the valuation score by 1.2, while high leverage above 2.0 applies a multiplier of 0.8. This adjustment reflects that high debt constrains financial flexibility and can become problematic in crisis times (Korteweg, 2010).
6. Component 4: Sentiment Analysis
6.1 The Role of Sentiment in Financial Markets
Investor sentiment, defined as the collective psychological attitude of market participants, influences asset prices independently of fundamental data. Baker and Wurgler (2006, 2007) developed a sentiment index and showed that periods of high sentiment are followed by overvaluations that later correct. This insight justifies integrating a sentiment component into allocation decisions.
Sentiment is difficult to measure directly but can be proxied through market indicators. The VIX is the most widely used sentiment indicator, as it aggregates implied volatility from option prices. High VIX values reflect elevated uncertainty and risk aversion, while low values signal market comfort. Whaley (2009) refers to the VIX as the "Investor Fear Gauge" and documents its role as a contrarian indicator: extremely high values typically occur at market bottoms, while low values occur at tops.
6.2 VIX-Based Sentiment Assessment
DEAM uses statistical normalization of the VIX by calculating the Z-score: z = (VIX_current - VIX_average) / VIX_standard_deviation. The Z-score indicates how many standard deviations the current VIX is from the historical average. This approach is more robust than absolute thresholds, as it adapts to the average volatility level, which can vary over longer periods.
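A minimal sketch of the normalization, assuming the VIX is requested from the CBOE:VIX symbol and using a one-year lookback as a placeholder:

```pine
//@version=6
indicator("VIX z-score sketch", overlay=false)

int lookback = input.int(252, "Lookback for VIX statistics")

// Assumed symbol; the lookback window is a placeholder
float vix     = request.security("CBOE:VIX", timeframe.period, close)
float vixMean = ta.sma(vix, lookback)
float vixSd   = ta.stdev(vix, lookback)
float vixZ    = (vix - vixMean) / vixSd

// Scoring from the text: complacency adds points, extreme fear subtracts them
float sentimentPoints = vixZ < -1.5 ? 40 : vixZ > 1.5 ? -40 : 0
plot(vixZ, "VIX z-score", color.orange)
plot(sentimentPoints, "Sentiment points (sketch)", color.silver)
```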
A Z-score below -1.5 (VIX is 1.5 standard deviations below average) signals exceptionally low risk perception and adds 40 points to the sentiment score. This may seem counterintuitive: shouldn't low fear be bullish? However, the logic follows the contrarian principle: when no one is afraid, everyone is already invested, and there is limited further upside potential (Zweig, 1973). Conversely, a Z-score above 1.5 (extreme fear) subtracts 40 points, reflecting market panic but simultaneously suggesting potential buying opportunities.
6.3 VIX Term Structure as Sentiment Signal
The VIX term structure provides additional sentiment information. Normally, the VIX trades in contango, meaning longer-term VIX futures have higher prices than short-term. This reflects that short-term volatility is currently known, while long-term volatility is more uncertain and carries a risk premium. The model compares the VIX with VIX9D (9-day volatility) and identifies backwardation (VIX > 1.05 × VIX9D) and steep backwardation (VIX > 1.15 × VIX9D).
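A sketch of this comparison, assuming the CBOE:VIX and CBOE:VIX9D symbols are available and using the ratios stated in the text:

```pine
//@version=6
indicator("VIX term structure sketch", overlay=false)

// Assumed symbols: 30-day and 9-day implied volatility indices
float vix   = request.security("CBOE:VIX",   timeframe.period, close)
float vix9d = request.security("CBOE:VIX9D", timeframe.period, close)

// Conditions as described in the text
bool backwardation      = vix > 1.05 * vix9d
bool steepBackwardation = vix > 1.15 * vix9d

// Point deductions from the text
float termPoints = steepBackwardation ? -30 : backwardation ? -15 : 0
plot(termPoints, "Term-structure points (sketch)", color.red)
```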
Backwardation occurs when short-term implied volatility is higher than longer-term, which typically happens during market stress. Investors anticipate immediate turbulence but expect calming. Psychologically, this reflects acute fear. The model subtracts 15 points for backwardation and 30 for steep backwardation, as these constellations signal elevated risk. Simon and Wiggins (2001) analyzed the VIX futures curve and showed that backwardation is associated with market declines.
6.4 Safe-Haven Flows
During crisis times, investors flee from risky assets into safe havens: gold, US dollar, and Japanese yen. This "flight to quality" is a sentiment signal. The model calculates the performance of these assets relative to stocks over the last 20 trading days. When gold or the dollar strongly rise while stocks fall, this indicates elevated risk aversion.
The safe-haven component is calculated as the difference between safe-haven performance and stock performance. Positive values (safe havens outperform) subtract up to 20 points from the sentiment score, negative values (stocks outperform) add up to 10 points. The asymmetric treatment (larger deduction for risk-off than bonus for risk-on) reflects that risk-off movements are typically sharper and more informative than risk-on phases.
Baur and Lucey (2010) examined safe-haven properties of gold and showed that gold indeed exhibits negative correlation with stocks during extreme market movements, confirming its role as crisis protection.
7. Component 5: Macroeconomic Analysis
7.1 The Yield Curve as Economic Indicator
The yield curve, represented as yields of government bonds of various maturities, contains aggregated expectations about future interest rates, inflation, and economic growth. The slope of the yield curve has remarkable predictive power for recessions. Estrella and Mishkin (1998) showed that an inverted yield curve (short-term rates higher than long-term) predicts recessions with high reliability. This is because inverted curves reflect restrictive monetary policy: the central bank raises short-term rates to combat inflation, dampening economic activity.
DEAM calculates two spread measures: the 2-year-minus-10-year spread and the 3-month-minus-10-year spread. A steep, positive curve (spreads above 1.5% and 2% respectively) signals healthy growth expectations and generates the maximum yield curve score of 40 points. A flat curve (spreads near zero) reduces the score to 20 points. An inverted curve (negative spreads) is particularly alarming and results in only 10 points.
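A sketch of the 2-year-minus-10-year spread and its scoring, assuming the TVC:US10Y and TVC:US02Y yield symbols; the 3-month leg is omitted for brevity.

```pine
//@version=6
indicator("Yield curve sketch", overlay=false)

// Assumed TradingView symbols for Treasury yields
float y10 = request.security("TVC:US10Y", timeframe.period, close)
float y02 = request.security("TVC:US02Y", timeframe.period, close)

float spread2s10s = y10 - y02  // in percentage points

// Scoring in the spirit of the text: steep curve 40, flat 20, inverted 10
float curveScore = spread2s10s > 1.5 ? 40 : spread2s10s > 0 ? 20 : 10
plot(spread2s10s, "2y-10y spread (pp)", color.blue)
plot(curveScore, "Yield curve score (sketch)", color.gray)
```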
The choice of two different spreads increases analysis robustness. The 2-10 spread is most established in academic literature, while the 3M-10Y spread is often considered more sensitive, as the 3-month rate directly reflects current monetary policy (Ang, Piazzesi, and Wei, 2006).
7.2 Credit Conditions and Spreads
Credit spreads—the yield difference between risky corporate bonds and safe government bonds—reflect risk perception in the credit market. Gilchrist and Zakrajšek (2012) constructed an "Excess Bond Premium" that measures the component of credit spreads not explained by fundamentals and showed this is a predictor of future economic activity and stock returns.
The model approximates credit spread by comparing the yield of high-yield bond ETFs (HYG) with investment-grade bond ETFs (LQD). A narrow spread below 200 basis points signals healthy credit conditions and risk appetite, contributing 30 points to the macro score. Very wide spreads above 1000 basis points (as during the 2008 financial crisis) signal credit crunch and generate zero points.
Additionally, the model evaluates whether "flight to quality" is occurring, identified through strong performance of Treasury bonds (TLT) with simultaneous weakness in high-yield bonds. This constellation indicates elevated risk aversion and reduces the credit conditions score.
7.3 Financial Stability at Corporate Level
While the yield curve and credit spreads reflect macroeconomic conditions, financial stability evaluates the health of companies themselves. The model uses the aggregated debt-to-equity ratio and return on equity of the S&P 500 as proxies for corporate health.
A low leverage level below 0.5 combined with high ROE above 15% signals robust corporate balance sheets and generates 20 points. This combination is particularly valuable as it represents both defensive strength (low debt means crisis resistance) and offensive strength (high ROE means earnings power). High leverage above 1.5 generates only 5 points, as it implies vulnerability to interest rate increases and recessions.
Korteweg (2010) showed in "The Net Benefits to Leverage" that optimal debt maximizes firm value, but excessive debt increases distress costs. At the aggregated market level, high debt indicates fragilities that can become problematic during stress phases.
8. Component 6: Crisis Detection
8.1 The Need for Systematic Crisis Detection
Financial crises are rare but extremely impactful events that suspend normal statistical relationships. During normal market volatility, diversified portfolios and traditional risk management approaches function, but during systemic crises, seemingly independent assets suddenly correlate strongly, and losses exceed historical expectations (Longin and Solnik, 2001). This justifies a separate crisis detection mechanism that operates independently of regular allocation components.
Reinhart and Rogoff (2009) documented in "This Time Is Different: Eight Centuries of Financial Folly" recurring patterns in financial crises: extreme volatility, massive drawdowns, credit market dysfunction, and asset price collapse. DEAM operationalizes these patterns into quantifiable crisis indicators.
8.2 Multi-Signal Crisis Identification
The model uses a counter-based approach where various stress signals are identified and aggregated. This methodology is more robust than relying on a single indicator, as true crises typically occur simultaneously across multiple dimensions. A single signal may be a false alarm, but the simultaneous presence of multiple signals increases confidence.
The first indicator is a VIX above the crisis threshold (default 40), adding one point. A VIX above 60 (as in 2008 and March 2020) adds two additional points, as such extreme values are historically very rare. This tiered approach captures the intensity of volatility.
The second indicator is market drawdown. A drawdown above 15% adds one point, as corrections of this magnitude can be potential harbingers of larger crises. A drawdown above 25% adds another point, as historical bear markets typically encompass 25-40% drawdowns.
The third indicator is credit market spreads above 500 basis points, adding one point. Such wide spreads occur only during significant credit market disruptions, as in 2008 during the Lehman crisis.
The fourth indicator identifies simultaneous losses in stocks and bonds. Normally, Treasury bonds act as a hedge against equity risk (negative correlation), but when both fall simultaneously, this indicates systemic liquidity problems or inflation/stagflation fears. The model checks whether both SPY and TLT have fallen more than 10% and 5% respectively over 5 trading days, adding two points.
The fifth indicator is a volume spike combined with negative returns. Extreme trading volumes (above twice the 20-day average) with falling prices signal panic selling. This adds one point.
A crisis situation is diagnosed when at least 3 indicators trigger, a severe crisis at 5 or more indicators. These thresholds were calibrated through historical backtesting to identify true crises (2008, 2020) without generating excessive false alarms.
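The counter logic can be sketched as follows; the VIX and drawdown indicators follow the thresholds above, while the remaining signals (credit spreads, joint stock/bond losses, volume spikes) are only noted in comments to keep the example short.

```pine
//@version=6
indicator("Crisis counter sketch", overlay=false)

// Assumed VIX feed; drawdown is measured against the one-year high of the chart symbol
float vix   = request.security("CBOE:VIX", timeframe.period, close)
float peak  = ta.highest(close, 252)
float mktDD = (peak - close) / peak * 100.0

int crisisCount = 0
crisisCount += vix > 40 ? 1 : 0    // crisis-level implied volatility
crisisCount += vix > 60 ? 2 : 0    // extreme readings add two further points
crisisCount += mktDD > 15 ? 1 : 0  // meaningful market drawdown
crisisCount += mktDD > 25 ? 1 : 0  // bear-market-sized drawdown
// Credit spreads above 500 bp, simultaneous SPY/TLT losses, and volume spikes
// would add to the same counter; they are omitted here for brevity.

bool crisis       = crisisCount >= 3
bool severeCrisis = crisisCount >= 5
plot(crisisCount, "Crisis indicator count", severeCrisis ? color.maroon : crisis ? color.red : color.gray)
```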
8.3 Crisis-Based Allocation Override
When a crisis is detected, the system overrides the normal allocation recommendation and caps equity allocation at maximum 25%. In a severe crisis, the cap is set at 10%. This drastic defensive posture follows the empirical observation that crises typically require time to develop and that early reduction can avoid substantial losses (Faber, 2007).
This override logic implements a "safety first" principle: in situations of existential danger to the portfolio, capital preservation becomes the top priority. Roy (1952) formalized this approach in "Safety First and the Holding of Assets," arguing that investors should primarily minimize ruin probability.
9. Integration and Final Allocation Calculation
9.1 Component Weighting
The final allocation recommendation emerges through weighted aggregation of the five components. The standard weighting is: Market Regime 35%, Risk Management 25%, Valuation 20%, Sentiment 15%, Macro 5%. These weights reflect both theoretical considerations and empirical backtesting results.
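The aggregation itself is a weighted sum, sketched below with the component scores replaced by inputs; the 3-period smoothing described in section 9.3 is applied at the end.

```pine
//@version=6
indicator("Component aggregation sketch", overlay=false)

// Placeholder component scores on a 0-100 scale; in DEAM these come from the
// regime, risk, valuation, sentiment, and macro calculations described above
float regimeScore    = input.float(70.0, "Regime score",    minval = 0, maxval = 100)
float riskScore      = input.float(60.0, "Risk score",      minval = 0, maxval = 100)
float valuationScore = input.float(50.0, "Valuation score", minval = 0, maxval = 100)
float sentimentScore = input.float(55.0, "Sentiment score", minval = 0, maxval = 100)
float macroScore     = input.float(65.0, "Macro score",     minval = 0, maxval = 100)

// Standard weights from the text: 35 / 25 / 20 / 15 / 5
float rawAllocation = regimeScore * 0.35 + riskScore * 0.25 + valuationScore * 0.20 + sentimentScore * 0.15 + macroScore * 0.05

// Final smoothing over three periods (see section 9.3)
float smoothedAllocation = ta.sma(rawAllocation, 3)
plot(smoothedAllocation, "Recommended equity allocation (%)", color.blue)
```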
The highest weighting of market regime is based on evidence that trend-following and momentum strategies have delivered robust results across various asset classes and time periods (Moskowitz, Ooi, and Pedersen, 2012). Current market momentum is highly informative for the near future, although it provides no information about long-term expectations.
The substantial weighting of risk management (25%) follows from the central importance of risk control. Wealth preservation is the foundation of long-term wealth creation, and systematic risk management is demonstrably value-creating (Moreira and Muir, 2017).
The valuation component receives 20% weight, based on the long-term mean reversion of valuation metrics. While valuation has limited short-term predictive power (bull and bear markets can begin at any valuation), the long-term relationship between valuation and returns is robustly documented (Campbell and Shiller, 1988).
Sentiment (15%) and Macro (5%) receive lower weights, as these factors are subtler and harder to measure. Sentiment is valuable as a contrarian indicator at extremes but less informative in normal ranges. Macro variables such as the yield curve have strong predictive power for recessions, but the transmission from recessions to stock market performance is complex and temporally variable.
9.2 Model Type Adjustments
DEAM allows users to choose between four model types: Conservative, Balanced, Aggressive, and Adaptive. This choice modifies the final allocation through additive adjustments.
Conservative mode subtracts 10 percentage points from allocation, resulting in consistently more cautious positioning. This is suitable for risk-averse investors or those with limited investment horizons. Aggressive mode adds 10 percentage points, suitable for risk-tolerant investors with long horizons.
Adaptive mode implements procyclical adjustment based on short-term momentum: if the market has risen more than 5% in the last 20 days, 5 percentage points are added; if it has declined more than 5%, 5 points are subtracted. This logic follows the observation that short-term momentum persists (Jegadeesh and Titman, 1993), but the moderate size of adjustment avoids excessive timing bets.
Balanced mode makes no adjustment and uses raw model output. This neutral setting is suitable for investors who wish to trust model recommendations unchanged.
9.3 Smoothing and Stability
The allocation resulting from aggregation undergoes final smoothing through a simple moving average over 3 periods. This smoothing is crucial for model practicality, as it reduces frequent trading and thus transaction costs. Without smoothing, the model could fluctuate between adjacent allocations with every small input change.
The choice of 3 periods as smoothing window is a compromise between responsiveness and stability. Longer smoothing would excessively delay signals and impede response to true regime changes. Shorter or no smoothing would allow too much noise. Empirical tests showed that 3-period smoothing offers an optimal ratio between these goals.
10. Visualization and Interpretation
10.1 Main Output: Equity Allocation
DEAM's primary output is a time series from 0 to 100 representing the recommended percentage allocation to equities. This representation is intuitive: 100% means full investment in stocks (specifically: an S&P 500 ETF), 0% means complete cash position, and intermediate values correspond to mixed portfolios. A value of 60% means, for example: invest 60% of wealth in SPY, hold 40% in money market instruments or cash.
The time series is color-coded to enable quick visual interpretation. Green shades represent high allocations (above 80%, bullish), red shades low allocations (below 20%, bearish), and neutral colors middle allocations. The chart background is dynamically colored based on the signal, enhancing readability in different market phases.
10.2 Dashboard Metrics
A tabular dashboard presents key metrics compactly. This includes current allocation, cash allocation (complement), an aggregated signal (BULLISH/NEUTRAL/BEARISH), current market regime, VIX level, market drawdown, and crisis status.
Additionally, fundamental metrics are displayed: P/E Ratio, Equity Risk Premium, Return on Equity, Debt-to-Equity Ratio, and Total Shareholder Yield. This transparency allows users to understand model decisions and form their own assessments.
Component scores (Regime, Risk, Valuation, Sentiment, Macro) are also displayed, each normalized on a 0-100 scale. This shows which factors primarily drive the current recommendation. If, for example, the Risk score is very low (20) while other scores are moderate (50-60), this indicates that risk management considerations are pulling allocation down.
10.3 Component Breakdown (Optional)
Advanced users can display individual components as separate lines in the chart. This enables analysis of component dynamics: do all components move synchronously, or are there divergences? Divergences can be particularly informative. If, for example, the market regime is bullish (high score) but the valuation component is very negative, this signals an overbought market not fundamentally supported—a classic "bubble warning."
This feature is disabled by default to keep the chart clean but can be activated for deeper analysis.
10.4 Confidence Bands
The model optionally displays uncertainty bands around the main allocation line. These are calculated as ±1 standard deviation of allocation over a rolling 20-period window. Wide bands indicate high volatility of model recommendations, suggesting uncertain market conditions. Narrow bands indicate stable recommendations.
This visualization implements a concept of epistemic uncertainty—uncertainty about the model estimate itself, not just market volatility. In phases where various indicators send conflicting signals, the allocation recommendation becomes more volatile, manifesting in wider bands. Users can understand this as a warning to act more cautiously or consult alternative information sources.
11. Alert System
11.1 Allocation Alerts
DEAM implements an alert system that notifies users of significant events. Allocation alerts trigger when smoothed allocation crosses certain thresholds. An alert is generated when allocation reaches 80% (from below), signaling strong bullish conditions. Another alert triggers when allocation falls to 20%, indicating defensive positioning.
These thresholds are not arbitrary but correspond with boundaries between model regimes. An allocation of 80% roughly corresponds to a clear bull market regime, while 20% corresponds to a bear market regime. Alerts at these points are therefore informative about fundamental regime shifts.
11.2 Crisis Alerts
Separate alerts trigger upon detection of crisis and severe crisis. These alerts have highest priority as they signal large risks. A crisis alert should prompt investors to review their portfolio and potentially take defensive measures beyond the automatic model recommendation (e.g., hedging through put options, rebalancing to more defensive sectors).
11.3 Regime Change Alerts
An alert triggers upon change of market regime (e.g., from Neutral to Correction, or from Bull Market to Strong Bull). Regime changes are highly informative events that typically entail substantial allocation changes. These alerts enable investors to proactively respond to changes in market dynamics.
11.4 Risk Breach Alerts
A specialized alert triggers when actual portfolio risk utilization exceeds target parameters by 20%. This is a warning signal that the risk management system is reaching its limits, possibly because market volatility is rising faster than allocation can be reduced. In such situations, investors should consider manual interventions.
12. Practical Application and Limitations
12.1 Portfolio Implementation
DEAM generates a recommendation for allocation between equities (S&P 500) and cash. Implementation by an investor can take various forms. The most direct method is using an S&P 500 ETF (e.g., SPY, VOO) for equity allocation and a money market fund or savings account for cash allocation.
A rebalancing strategy is required to synchronize actual allocation with model recommendation. Two approaches are possible: (1) rule-based rebalancing at every 10% deviation between actual and target, or (2) time-based monthly rebalancing. Both have trade-offs between responsiveness and transaction costs. Empirical evidence (Jaconetti, Kinniry, and Zilbering, 2010) suggests rebalancing frequency has moderate impact on performance, and investors should optimize based on their transaction costs.
12.2 Adaptation to Individual Preferences
The model offers numerous adjustment parameters. Component weights can be modified if investors have more or less confidence in certain factors. A fundamentally oriented investor might increase the valuation weight, while a technical trader might increase the regime weight.
Risk target parameters (target volatility, max drawdown) should be adapted to individual risk tolerance. Younger investors with long investment horizons can choose higher target volatility (15-18%), while retirees may prefer lower volatility (8-10%). This adjustment systematically shifts average equity allocation.
Crisis thresholds can be adjusted based on preference for sensitivity versus specificity of crisis detection. Lower thresholds (e.g., VIX > 35 instead of 40) increase sensitivity (more crises are detected) but reduce specificity (more false alarms). Higher thresholds have the reverse effect.
12.3 Limitations and Disclaimers
DEAM is based on historical relationships between indicators and market performance. There is no guarantee these relationships will persist in the future. Structural changes in markets (e.g., through regulation, technology, or central bank policy) can break established patterns. This is the fundamental problem of induction in financial science (Taleb, 2007).
The model is optimized for US equities (S&P 500). Application to other markets (international stocks, bonds, commodities) would require recalibration. The indicators and thresholds are specific to the statistical properties of the US equity market.
The model cannot eliminate losses. Even with perfect crisis prediction, an investor following the model would lose money in bear markets—just less than a buy-and-hold investor. The goal is risk-adjusted performance improvement, not risk elimination.
Transaction costs are not modeled. In practice, spreads, commissions, and taxes reduce net returns. Frequent trading can cause substantial costs. Model smoothing helps minimize this, but users should consider their specific cost situation.
The model reacts to information; it does not anticipate it. During sudden shocks (e.g., 9/11, COVID-19 lockdowns), the model can only react after price movements, not before. This limitation is inherent to all reactive systems.
12.4 Relationship to Other Strategies
DEAM is a tactical asset allocation approach and should be viewed as a complement, not replacement, for strategic asset allocation. Brinson, Hood, and Beebower (1986) showed in their influential study "Determinants of Portfolio Performance" that strategic asset allocation (long-term policy allocation) explains the majority of portfolio performance, but this leaves room for tactical adjustments based on market timing.
The model can be combined with value and momentum strategies at the individual stock level. While DEAM controls overall market exposure, within-equity decisions can be optimized through stock-picking models. This separation between strategic (market exposure) and tactical (stock selection) levels follows classical portfolio theory.
The model does not replace diversification across asset classes. A complete portfolio should also include bonds, international stocks, real estate, and alternative investments. DEAM addresses only the US equity allocation decision within a broader portfolio.
13. Scientific Foundation and Evaluation
13.1 Theoretical Consistency
DEAM's components are based on established financial theory and empirical evidence. The market regime component follows from regime-switching models (Hamilton, 1989) and trend-following literature. The risk management component implements volatility targeting (Moreira and Muir, 2017) and modern portfolio theory (Markowitz, 1952). The valuation component is based on discounted cash flow theory and empirical value research (Campbell and Shiller, 1988; Fama and French, 1992). The sentiment component integrates behavioral finance (Baker and Wurgler, 2006). The macro component uses established business cycle indicators (Estrella and Mishkin, 1998).
This theoretical grounding distinguishes DEAM from purely data-mining-based approaches that identify patterns without causal theory. Theory-guided models have greater probability of functioning out-of-sample, as they are based on fundamental mechanisms, not random correlations (Lo and MacKinlay, 1990).
13.2 Empirical Validation
While this document does not present detailed backtest analysis, it should be noted that rigorous validation of a tactical asset allocation model should include several elements:
In-sample testing establishes whether the model functions at all in the data on which it was calibrated. Out-of-sample testing is crucial: the model should be tested in time periods not used for development. Walk-forward analysis, where the model is successively trained on rolling windows and tested in the next window, approximates real implementation.
Performance metrics should be risk-adjusted. Pure return consideration is misleading, as higher returns often only compensate for higher risk. Sharpe Ratio, Sortino Ratio, Calmar Ratio, and Maximum Drawdown are relevant metrics. Comparison with benchmarks (Buy-and-Hold S&P 500, 60/40 Stock/Bond portfolio) contextualizes performance.
Robustness checks test sensitivity to parameter variation. If the model only functions at specific parameter settings, this indicates overfitting. Robust models show consistent performance over a range of plausible parameters.
13.3 Comparison with Existing Literature
DEAM fits into the broader literature on tactical asset allocation. Faber (2007) presented a simple momentum-based timing system that goes long when the market is above its 10-month average, otherwise cash. This simple system avoided large drawdowns in bear markets. DEAM can be understood as a sophistication of this approach that integrates multiple information sources.
Ilmanen (2011) discusses various timing factors in "Expected Returns" and argues for multi-factor approaches. DEAM operationalizes this philosophy. Asness, Moskowitz, and Pedersen (2013) showed that value and momentum effects work across asset classes, justifying cross-asset application of regime and valuation signals.
Ang (2014) emphasizes in "Asset Management: A Systematic Approach to Factor Investing" the importance of systematic, rule-based approaches over discretionary decisions. DEAM is fully systematic and eliminates emotional biases that plague individual investors (overconfidence, hindsight bias, loss aversion).
References
Ang, A. (2014) *Asset Management: A Systematic Approach to Factor Investing*. Oxford: Oxford University Press.
Ang, A., Piazzesi, M. and Wei, M. (2006) 'What does the yield curve tell us about GDP growth?', *Journal of Econometrics*, 131(1-2), pp. 359-403.
Asness, C.S. (2003) 'Fight the Fed Model', *The Journal of Portfolio Management*, 30(1), pp. 11-24.
Asness, C.S., Moskowitz, T.J. and Pedersen, L.H. (2013) 'Value and Momentum Everywhere', *The Journal of Finance*, 68(3), pp. 929-985.
Baker, M. and Wurgler, J. (2006) 'Investor Sentiment and the Cross-Section of Stock Returns', *The Journal of Finance*, 61(4), pp. 1645-1680.
Baker, M. and Wurgler, J. (2007) 'Investor Sentiment in the Stock Market', *Journal of Economic Perspectives*, 21(2), pp. 129-152.
Baur, D.G. and Lucey, B.M. (2010) 'Is Gold a Hedge or a Safe Haven? An Analysis of Stocks, Bonds and Gold', *Financial Review*, 45(2), pp. 217-229.
Bollerslev, T. (1986) 'Generalized Autoregressive Conditional Heteroskedasticity', *Journal of Econometrics*, 31(3), pp. 307-327.
Boudoukh, J., Michaely, R., Richardson, M. and Roberts, M.R. (2007) 'On the Importance of Measuring Payout Yield: Implications for Empirical Asset Pricing', *The Journal of Finance*, 62(2), pp. 877-915.
Brinson, G.P., Hood, L.R. and Beebower, G.L. (1986) 'Determinants of Portfolio Performance', *Financial Analysts Journal*, 42(4), pp. 39-44.
Brock, W., Lakonishok, J. and LeBaron, B. (1992) 'Simple Technical Trading Rules and the Stochastic Properties of Stock Returns', *The Journal of Finance*, 47(5), pp. 1731-1764.
Campbell, J.Y. and Shiller, R.J. (1988) 'The Dividend-Price Ratio and Expectations of Future Dividends and Discount Factors', *Review of Financial Studies*, 1(3), pp. 195-228.
Cochrane, J.H. (2011) 'Presidential Address: Discount Rates', *The Journal of Finance*, 66(4), pp. 1047-1108.
Damodaran, A. (2012) *Equity Risk Premiums: Determinants, Estimation and Implications*. Working Paper, Stern School of Business.
Engle, R.F. (1982) 'Autoregressive Conditional Heteroskedasticity with Estimates of the Variance of United Kingdom Inflation', *Econometrica*, 50(4), pp. 987-1007.
Estrella, A. and Hardouvelis, G.A. (1991) 'The Term Structure as a Predictor of Real Economic Activity', *The Journal of Finance*, 46(2), pp. 555-576.
Estrella, A. and Mishkin, F.S. (1998) 'Predicting U.S. Recessions: Financial Variables as Leading Indicators', *Review of Economics and Statistics*, 80(1), pp. 45-61.
Faber, M.T. (2007) 'A Quantitative Approach to Tactical Asset Allocation', *The Journal of Wealth Management*, 9(4), pp. 69-79.
Fama, E.F. and French, K.R. (1989) 'Business Conditions and Expected Returns on Stocks and Bonds', *Journal of Financial Economics*, 25(1), pp. 23-49.
Fama, E.F. and French, K.R. (1992) 'The Cross-Section of Expected Stock Returns', *The Journal of Finance*, 47(2), pp. 427-465.
Garman, M.B. and Klass, M.J. (1980) 'On the Estimation of Security Price Volatilities from Historical Data', *Journal of Business*, 53(1), pp. 67-78.
Gilchrist, S. and Zakrajšek, E. (2012) 'Credit Spreads and Business Cycle Fluctuations', *American Economic Review*, 102(4), pp. 1692-1720.
Gordon, M.J. (1962) *The Investment, Financing, and Valuation of the Corporation*. Homewood: Irwin.
Graham, B. and Dodd, D.L. (1934) *Security Analysis*. New York: McGraw-Hill.
Hamilton, J.D. (1989) 'A New Approach to the Economic Analysis of Nonstationary Time Series and the Business Cycle', *Econometrica*, 57(2), pp. 357-384.
Ilmanen, A. (2011) *Expected Returns: An Investor's Guide to Harvesting Market Rewards*. Chichester: Wiley.
Jaconetti, C.M., Kinniry, F.M. and Zilbering, Y. (2010) 'Best Practices for Portfolio Rebalancing', *Vanguard Research Paper*.
Jegadeesh, N. and Titman, S. (1993) 'Returns to Buying Winners and Selling Losers: Implications for Stock Market Efficiency', *The Journal of Finance*, 48(1), pp. 65-91.
Kahneman, D. and Tversky, A. (1979) 'Prospect Theory: An Analysis of Decision under Risk', *Econometrica*, 47(2), pp. 263-292.
Korteweg, A. (2010) 'The Net Benefits to Leverage', *The Journal of Finance*, 65(6), pp. 2137-2170.
Lo, A.W. and MacKinlay, A.C. (1990) 'Data-Snooping Biases in Tests of Financial Asset Pricing Models', *Review of Financial Studies*, 3(3), pp. 431-467.
Longin, F. and Solnik, B. (2001) 'Extreme Correlation of International Equity Markets', *The Journal of Finance*, 56(2), pp. 649-676.
Mandelbrot, B. (1963) 'The Variation of Certain Speculative Prices', *The Journal of Business*, 36(4), pp. 394-419.
Markowitz, H. (1952) 'Portfolio Selection', *The Journal of Finance*, 7(1), pp. 77-91.
Modigliani, F. and Miller, M.H. (1961) 'Dividend Policy, Growth, and the Valuation of Shares', *The Journal of Business*, 34(4), pp. 411-433.
Moreira, A. and Muir, T. (2017) 'Volatility-Managed Portfolios', *The Journal of Finance*, 72(4), pp. 1611-1644.
Moskowitz, T.J., Ooi, Y.H. and Pedersen, L.H. (2012) 'Time Series Momentum', *Journal of Financial Economics*, 104(2), pp. 228-250.
Parkinson, M. (1980) 'The Extreme Value Method for Estimating the Variance of the Rate of Return', *Journal of Business*, 53(1), pp. 61-65.
Piotroski, J.D. (2000) 'Value Investing: The Use of Historical Financial Statement Information to Separate Winners from Losers', *Journal of Accounting Research*, 38, pp. 1-41.
Reinhart, C.M. and Rogoff, K.S. (2009) *This Time Is Different: Eight Centuries of Financial Folly*. Princeton: Princeton University Press.
Ross, S.A. (1976) 'The Arbitrage Theory of Capital Asset Pricing', *Journal of Economic Theory*, 13(3), pp. 341-360.
Roy, A.D. (1952) 'Safety First and the Holding of Assets', *Econometrica*, 20(3), pp. 431-449.
Schwert, G.W. (1989) 'Why Does Stock Market Volatility Change Over Time?', *The Journal of Finance*, 44(5), pp. 1115-1153.
Sharpe, W.F. (1966) 'Mutual Fund Performance', *The Journal of Business*, 39(1), pp. 119-138.
Sharpe, W.F. (1994) 'The Sharpe Ratio', *The Journal of Portfolio Management*, 21(1), pp. 49-58.
Simon, D.P. and Wiggins, R.A. (2001) 'S&P Futures Returns and Contrary Sentiment Indicators', *Journal of Futures Markets*, 21(5), pp. 447-462.
Taleb, N.N. (2007) *The Black Swan: The Impact of the Highly Improbable*. New York: Random House.
Whaley, R.E. (2000) 'The Investor Fear Gauge', *The Journal of Portfolio Management*, 26(3), pp. 12-17.
Whaley, R.E. (2009) 'Understanding the VIX', *The Journal of Portfolio Management*, 35(3), pp. 98-105.
Yardeni, E. (2003) 'Stock Valuation Models', *Topical Study*, 51, Yardeni Research.
Young, T.W. (1991) 'Calmar Ratio: A Smoother Tool', *Futures*, 20(1), p. 40.
Zweig, M.E. (1973) 'An Investor Expectations Stock Price Predictive Model Using Closed-End Fund Premiums', *The Journal of Finance*, 28(1), pp. 67-78.
SuperTrend Optimizer Remastered [CHE] — Grid-ranked SuperTrend with additive or multiplicative scoring
Summary
This indicator evaluates a fixed grid of one hundred and two SuperTrend parameter pairs and ranks them by a simple flip-to-flip return model. It auto-selects the currently best-scoring combination and renders its SuperTrend in real time, with optional gradient coloring for faster visual parsing. The original concept is by KioseffTrading; thanks a lot for it.
For years I wanted to shorten the roughly two thousand three hundred seventy-one lines; I have now reduced the core to about three hundred eighty lines without triggering script errors. The simplification is generalizable to other indicators. A multiplicative return mode was added alongside the existing additive aggregation, enabling different rankings and often more realistic compounding behavior.
Motivation: Why this design?
SuperTrend is sensitive to its factor and period. Picking a single pair statically can underperform across regimes. This design sweeps a compact parameter grid around user-defined lower bounds, measures flip-to-flip outcomes, and promotes the combination with the strongest cumulative return. The approach keeps the visual footprint familiar while removing manual trial-and-error. The multiplicative mode captures compounding effects; the additive mode remains available for linear aggregation.
Originally (by KioseffTrading)
Very long script (~2,371 lines), monolithic structure.
SuperTrend optimization with additive (cumulative percentage-sum) scoring only.
Heavier use of repetitive code; limited modularity and fewer UI conveniences.
No explicit multiplicative compounding option; rankings did not reflect sequence-sensitive equity growth.
Now (remastered by CHE)
Compact core (~380 lines) with the same functional intent, no compile errors.
Adds multiplicative (compounding) scoring alongside additive, changing rankings to reflect real equity paths and penalize drawdown sequences.
Fixed 34×3 grid sweep, live ranking, gradient-based bar/wick/line visuals, top-table display, and an optional override plot.
Cleaner arrays/state handling, last-bar table updates, and reusable simplification pattern that can be applied to other indicators.
What’s different vs. standard approaches?
Baseline: A single SuperTrend with hand-picked inputs.
Architecture differences:
Fixed grid of thirty-four factor offsets across three ATR offsets.
Per-combination flip-to-flip backtest with additive or multiplicative aggregation.
Live ranking with optional “Best” or “Worst” table output.
Gradient bar, wick, and line coloring driven by consecutive trend counts.
Optional override plot to force a specific SuperTrend independent of ranking.
Practical effect: Charts show the currently best-scoring SuperTrend, not a static choice, plus an on-chart table of top performers for transparency.
How it works (technical)
For each parameter pair, the script computes SuperTrend value and direction. It monitors direction transitions and treats a change from up to down as a long entry and the reverse as an exit, measuring the move between entry and exit using close prices. Results are aggregated per pair either by summing percentage changes or by compounding return factors and then converting to percent for comparison. On the last bar, open trades are included as unrealized contributions to ranking. The best combination’s line is plotted, with separate styling for up and down regimes. Consecutive regime counts are normalized within a rolling window and mapped to gradients for bars, wicks, and lines. A two-column table reports the best or worst performers, with an optional row describing the parameter sweep.
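For a single parameter pair, the flip-to-flip bookkeeping can be sketched as below (the full script repeats this across the 34×3 grid); the long-only interpretation, entries and exits on direction flips, and the two aggregation modes follow the description above, while variable names and structure are illustrative.

```pine
//@version=6
indicator("Flip-to-flip scoring sketch", overlay=true)

// One parameter pair; the full optimizer evaluates a 34x3 grid of these
float factor    = input.float(3.0, "Factor")
int   atrPeriod = input.int(10, "ATR period")

[st, dir] = ta.supertrend(factor, atrPeriod)

var float entryPrice = na
var float addScore   = 0.0  // additive mode: sum of percentage moves
var float multScore  = 1.0  // multiplicative mode: compounded return factors

// A long is opened when the direction flips into an uptrend and closed on the opposite flip
bool flipUp   = ta.change(dir) != 0 and dir < 0
bool flipDown = ta.change(dir) != 0 and dir > 0

if flipUp
    entryPrice := close
if flipDown and not na(entryPrice)
    float tradeReturn = (close - entryPrice) / entryPrice
    addScore  += tradeReturn * 100.0
    multScore *= 1.0 + tradeReturn
    entryPrice := na

// addScore and (multScore - 1) * 100 are the per-pair figures the ranking compares
plot(st, "SuperTrend", dir < 0 ? color.teal : color.red)
```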
Parameter Guide
Factor (Lower Bound) — Starting SuperTrend factor; the grid adds offsets between zero and three point three. Default three point zero. Higher raises distance to price and reduces flips.
ATR Period (Lower Bound) — Starting ATR length; the grid adds zero, one, and two. Default ten. Longer reduces noise at the cost of responsiveness.
Best vs Worst — Ranks by top or bottom cumulative return. Default Best. Use Worst for stress tests.
Calculation Mode — Additive sums percents; Multiplicative compounds returns. Multiplicative is closer to equity growth and can change the leaderboard.
Show in Table — “Top Three” or “All”. Fewer rows keep charts clean.
Show “Parameters Tested” Label — Displays the effective sweep ranges for auditability.
Plot Override SuperTrend — If enabled, the override factor and ATR are plotted instead of the ranked winner.
Override Factor / ATR Period — Values used when override is on.
Light Mode (for Table) — Adjusts table colors for bright charts.
Gradient/Coloring controls — Toggles for gradient bars and wick coloring, window length for normalization, gamma for contrast, and transparency settings. Use these to emphasize or tone down visual intensity.
Table Position and Text Size — Places the table and sets typography.
Reading & Interpretation
The auto SuperTrend plots one line for up regimes and one for down regimes. Color intensity reflects consecutive trend persistence within the chosen window. A small square at the bottom encodes the same gradient as a compact status channel. Optional wick coloring uses the same gradient for maximum contrast. The performance table lists parameter pairs and their cumulative return under the chosen aggregation; positive values are tinted with the up color, negative with the down color. “Long” labels mark flips that open a long in the simplified model.
Practical Workflows & Combinations
Trend following: Use the auto line as your primary bias. Enter on flips aligned with structure such as higher highs and higher lows. Filter with higher-timeframe trend or volatility contraction.
Exits/Stops: Consider conservative exits when color intensity fades or when the opposite line is approached. Aggressive traders can trail near the plotted line.
Override mode: When you want stability across instruments, enable override and standardize factor and ATR; keep the table visible for sanity checks.
Multi-asset/Multi-TF: Defaults travel well on liquid instruments and intraday to daily timeframes. Heavier assets may prefer larger lower bounds or multiplicative mode.
Behavior, Constraints & Performance
Repaint/confirmation: Signals are based on SuperTrend direction; confirmation is best assessed on closed bars to avoid mid-bar oscillation. No higher-timeframe requests are used.
Resources: 102 SuperTrend evaluations per bar, arrays for state, and a last-bar table render. This is efficient for the grid size, but avoid stacking many instances.
Known limits: The flip model ignores costs, slippage, and short exposure. Rapid whipsaws can degrade both aggregation modes. Gradients are cosmetic and do not change logic.
Sensible Defaults & Quick Tuning
Start with the provided lower bounds and “Top Three” table.
Too many flips → raise the lower bound factor or period.
Too sluggish → lower the bounds or switch to additive mode.
Rankings feel unstable → prefer multiplicative mode and extend the normalization window.
Visuals too strong → increase gradient transparency or disable wick coloring.
What this indicator is—and isn’t
This is a parameter-sweep and visualization layer for SuperTrend selection. It is not a complete trading system, not predictive, and does not include position sizing, transaction costs, or risk management. Combine with market structure, higher-timeframe context, and explicit risk controls.
Attribution and refactor note: The original work is by KioseffTrading. The script has been refactored from approximately 2,371 lines to about 380 core lines, retaining the original behavior while compiling without errors. The general simplification pattern is reusable for other indicators.
Metadata
Name/Tag: SuperTrend Optimizer Remastered
Pine version: v6
Overlay or separate pane: true (overlay)
Core idea/principle: Grid-based SuperTrend selection by cumulative flip returns with additive or multiplicative aggregation.
Primary outputs/signals: Auto-selected SuperTrend up and down lines, optional override lines, gradient bar and wick colors, “Long” labels, performance table.
Inputs with defaults: See Parameter Guide above.
Metrics/functions used: SuperTrend, ATR, arrays, barstate checks, windowed normalization, gamma-based contrast adjustment, table API, gradient utilities.
Special techniques: Fixed grid sweep, compounding vs linear aggregation, last-bar UI updates, gradient encoding of persistence.
Performance/constraints: 102 SuperTrend calls, arrays of length 102, label budget, last-bar table updates, no higher-timeframe requests.
Recommended use-cases/workflows: Trend bias selection, quick parameter audits, override standardization across assets.
Compatibility/assets/timeframes: Standard OHLC charts across intraday to daily; liquid instruments recommended.
Limitations/risks: Costs and slippage omitted; mid-bar instability possible; not suitable for synthetic chart types.
Debug/diagnostics: Ranking table, optional tested-range label; internal counters for consecutive trends.
Disclaimer
The content provided, including all code and materials, is strictly for educational and informational purposes only. It is not intended as, and should not be interpreted as, financial advice, a recommendation to buy or sell any financial instrument, or an offer of any financial product or service. All strategies, tools, and examples discussed are provided for illustrative purposes to demonstrate coding techniques and the functionality of Pine Script within a trading context.
Any results from strategies or tools provided are hypothetical, and past performance is not indicative of future results. Trading and investing involve high risk, including the potential loss of principal, and may not be suitable for all individuals. Before making any trading decisions, please consult with a qualified financial professional to understand the risks involved.
By using this script, you acknowledge and agree that any trading decisions are made solely at your discretion and risk.
Do not use this indicator on Heikin-Ashi, Renko, Kagi, Point-and-Figure, or Range charts, as these chart types can produce unrealistic results for signal markers and alerts.
Best regards and happy trading
Chervolino
Screener based on Profitunity strategy for multiple timeframes
Screener based on Profitunity strategy by Bill Williams for multiple timeframes (max 5, including chart timeframe) and customizable symbol list. The screener analyzes the Alligator and Awesome Oscillator indicators, Divergent bars and high volume bars.
The maximum allowed number of requests (symbols and timeframes) is 40, for example, 10 symbols with 4 timeframe requests each. The indicator therefore automatically limits the number of displayed symbols depending on the number of timeframes per symbol. If there are more symbols than fit in the screener table, ordinal numbers are displayed to the left of the symbols; in that case you can display the next group of symbols by increasing the value in the "Show tickers from" field by 1 (if the "Group" field is enabled), or by specifying a symbol number 1 greater than the last symbol shown in the table. 👀 When timeframe filtering is applied, the screener table displays only the columns of the timeframes for which a filtering value is selected, which allows more symbols to be displayed.
For each timeframe, the "TIMEFRAMES > Prev" field enables displaying data for the bar before the last (current) one, provided the market is open for the requested symbol. The "TIMEFRAMES > Y" field enables filtering by the location of the last five bars relative to the Alligator indicator lines, which is indicated by special symbols in the screener table (a simplified code sketch follows the list):
⬆️ — the Alligator is open upwards (Lips > Teeth > Jaw) and none of the bars closed below the Lips line;
↗️ — one of the bars other than the penultimate one closed below the Lips, or two bars other than the last one closed below the Lips, or the Alligator is open upwards beneath only four of the bars while none of the bars closed below the Lips;
⬇️ — the Alligator is open downwards (Lips < Teeth < Jaw) and none of the bars closed above the Lips;
↘️ — one of the bars other than the penultimate one closed above the Lips, or two bars other than the last one closed above the Lips, or the Alligator is open downwards above only four of the bars while none of the bars closed above the Lips;
➡️ — all other cases, including when the Alligator lines intersect and one of the bars closed beyond the Lips line, or when two bars cross one of the Alligator lines.
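A heavily simplified Pine Script™ sketch of the two unambiguous states above (the intermediate ↗️/↘️/➡️ rules are omitted, and standard Bill Williams Alligator settings are assumed rather than the screener's internal code):

//@version=6
indicator("Alligator state sketch", overlay = true)

// Standard Alligator lines: SMMA of hl2, shifted forward by the usual offsets.
jaw   = ta.rma(hl2, 13)[8]
teeth = ta.rma(hl2, 8)[5]
lips  = ta.rma(hl2, 5)[3]

// Did any of the last five bars close beyond the Lips line?
closedBelowLips = false
closedAboveLips = false
for i = 0 to 4
    closedBelowLips := closedBelowLips or close[i] < lips[i]
    closedAboveLips := closedAboveLips or close[i] > lips[i]

openUp   = lips > teeth and teeth > jaw and not closedBelowLips   // ⬆️
openDown = lips < teeth and teeth < jaw and not closedAboveLips   // ⬇️

plotchar(openUp,   "Open up",   "▲", location.top,    color.green)
plotchar(openDown, "Open down", "▼", location.bottom, color.red)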
In the "TIMEFRAMES > Show bar change value for TF" field, you can add a column to the right of the selected timeframe column with the percentage change between the closing price of the last bar (current) and the closing price of the previous bar ((close – previous close) / previous close * 100). Depending on the percentage value, the background color of the screener table cell will change: dark red if <= -3%; red if <= -2%, light red if <= -0.5%; dark green if >= 3%; green if >= 2%; light green if >= 0.5%.
For each timeframe, the screener table displays a symbol for the latest (current) bar, based on the closing price relative to the bar's midpoint ((high + low) / 2) and the bar's location relative to the Alligator indicator lines: ⎾ — the bar's closing price is above its midpoint; ⎿ — the bar's closing price is below its midpoint; ├ — the bar's closing price equals its midpoint; 🟢 — Bullish Divergent bar, i.e., the bar's closing price is above its midpoint, the bar's high is below all Alligator lines, and the bar's low is below the previous bar's low; 🔴 — Bearish Divergent bar, i.e., the bar's closing price is below its midpoint, the bar's low is above all Alligator lines, and the bar's high is above the previous bar's high. When filtering is enabled in the "TIMEFRAMES > Filtering by Divergent bar" field, data appears in the screener table cells only for the timeframes that have a Divergent bar. A high-volume signal is also displayed: 📶/📶² if the bar's volume exceeds by more than 40%/70% the average volume, calculated with a simple moving average (SMA) over the 140 bars preceding the last bar.
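A simplified sketch of the Divergent-bar and high-volume checks, again with standard Alligator settings assumed:

//@version=6
indicator("Divergent bar / volume sketch", overlay = true)

// Standard Alligator lines (SMMA of hl2 with the usual forward offsets).
jaw   = ta.rma(hl2, 13)[8]
teeth = ta.rma(hl2, 8)[5]
lips  = ta.rma(hl2, 5)[3]

mid = (high + low) / 2

// Divergent bars as described above.
bullDivergent = close > mid and high < math.min(jaw, teeth, lips) and low < low[1]
bearDivergent = close < mid and low  > math.max(jaw, teeth, lips) and high > high[1]

// High-volume flags: volume exceeding the 140-bar SMA of volume by more than 40% / 70%.
avgVol   = ta.sma(volume, 140)
highVol  = volume > avgVol * 1.4    // 📶
veryHigh = volume > avgVol * 1.7    // 📶²

plotshape(bullDivergent, "Bullish Divergent", shape.circle, location.belowbar, color.green)
plotshape(bearDivergent, "Bearish Divergent", shape.circle, location.abovebar, color.red)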
In the indicator settings, in the "SYMBOL LIST" field, each ticker (for example: OANDA:SPX500USD) must be on a separate line. If the market is closed, the data for requested symbols is limited to the time of the last (current) bar on the chart. For example, if the current symbol last traded yesterday and the requested symbol is trading today, then when requesting data for an hourly timeframe the last bar will be yesterday's, provided the timeframe of the current chart is not higher than 1 day. Therefore, by default, a warning is displayed on the chart instead of the screener table: if the market is open, wait for the screener to load (after the first price change on the current chart); if the highest timeframe in the screener is 1 day, you will be prompted to change the chart timeframe to 1 week; if the screener requests data for the 1-week timeframe, you will be prompted to change the chart timeframe to 1 month. Alternatively, switch the current chart to a symbol for which the market is open (for example: BINANCE:BTCUSDT), or disable the warning in the "SYMBOL LIST > Do not display screener if market is close" field.
The number of recent columns showing the AO indicator's color that are displayed in the screener table for each timeframe is set in the indicator settings in the "AWESOME OSCILLATOR > Number of columns" field.
For each timeframe, the trend direction between the highest and lowest bars within the specified interval from the last bar is displayed: ↑ if the trend is up (the highest bar is to the right of the lowest), or ↓ if the trend is down (the lowest bar is to the right of the highest). If the AO indicator shows a divergence within the interval, the ∇ symbol is also displayed. The average volume over the same interval is calculated using a simple moving average (SMA). The number of bars is set in the indicator settings in the "INTERVAL FOR HIGHEST AND LOWEST BARS > Bars count" field.
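A minimal sketch of the trend-direction check (the default bar count here is an assumption):

//@version=6
indicator("Highest/lowest bar trend sketch")

barsCount = input.int(20, "Bars count")   // assumed default

// Offsets (0 or negative) of the highest-high and lowest-low bars in the interval.
hiOffset = ta.highestbars(high, barsCount)
loOffset = ta.lowestbars(low, barsCount)

// The more recent extreme (offset closer to 0) defines the trend arrow.
trendUp = hiOffset > loOffset

plotchar(trendUp,     "Trend up",   "↑", location.top, color.green)
plotchar(not trendUp, "Trend down", "↓", location.top, color.red)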
In the indicator settings in the "STYLE" field you can change the position of the screener table relative to the chart window, the background color, the color and size of the text.
Adaptive ATR Limits█ OVERVIEW
This indicator plots adaptive ATR limits for intraday trading. A key feature that sets it apart from other ATR limit indicators is that the top and bottom ATR limit lines are always exactly one ATR apart from each other in "auto" mode. There is also a "basic" mode, which plots the limits in the more traditional way (one ATR above the low and one ATR below the high at all times) and can be used for comparison.
█ FEATURES
Provides an algorithm to plot the most reasonable intraday ATR top/bottom limits based on currently available information
Dynamically adapts limits as the price evolves during the day
Works correctly and consistently on both RTH and ETH charts
Has a user-selected ADR mode to base the limits on ADR instead of ATR
Option to include the current pre-market and previous day's post-market range in the calculation
Configurable ATR/ADR averaging length
Provides a visual smoothing option
Provides an information box showing the current numerical ATR/ADR values
Reasonable defaults that work well if the user changes nothing
Well-documented, high-quality, open-source code for those interested
█ HOW TO USE
At a minimum, there is nothing that needs to be set. The defaults work well. The ATR top line (red, configurable) gives you the most reasonable move given the currently available information. The line will move away from the price as the price approaches it; that is normal—it is reacting to new information. This happens until the ATR bottom limit hits the lower of the daily low and the previous day's close (in ATR mode). The ATR bottom line (green, configurable) works the same way, with reversed logic.
There is an option to use ADR instead of ATR. The ATR includes the previous day's RTH close in the range, whereas the ADR does not. Another option allows the user to add the current day's pre-market range or the previous day's post-market range into the current day's range, which has an effect if either of those went outside of today's RTH range plus yesterday's RTH close (in the default ATR mode). Pre-market and post-market ranges are not typically included in the daily true range, so only change this if you really know you want it.
█ CONCEPTS
Most traditional ATR limit indicators plot the top ATR limit one ATR above the current daily low, and the bottom ATR limit one ATR below the current daily high. This indicator can also do that (in "basic" mode), but its value lies in its default "auto" mode, which uses an algorithm to dynamically adapt the ATR limits throughout the day, keeping them one ATR apart at all times. It tries to plot the most sensible ATR limits based on the current daily ATR, in order to provide a reasonable visual intraday target, given the available information at that point in time.
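For reference, a minimal sketch of the traditional "basic" style only (the daily ATR request, session handling, and defaults are assumptions; the indicator's actual session logic is more involved):

//@version=6
indicator("Basic ATR limits sketch", overlay = true)

atrLen = input.int(14, "ATR averaging length")   // assumed default

// Daily ATR taken from the previous completed daily bar to avoid repainting.
dailyAtr = request.security(syminfo.tickerid, "D", ta.atr(atrLen)[1], lookahead = barmerge.lookahead_on)

// Running intraday high/low.
newDay = nz(ta.change(time("D"))) != 0
var float dayHigh = na
var float dayLow  = na
dayHigh := newDay ? high : math.max(nz(dayHigh, high), high)
dayLow  := newDay ? low  : math.min(nz(dayLow,  low),  low)

plot(dayLow + dailyAtr,  "Basic ATR top",    color.red)
plot(dayHigh - dailyAtr, "Basic ATR bottom", color.green)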
"Auto" mode is actually a weighted average of two methods: midpoint and relative (both of which can also be explicitly selected). The midpoint method places the midpoint of the ATR limit equal to the midpoint of the currently established daily range. The relative method measures the currently established daily range and calculates the position of the current price within it (as a ratio between 0 and 1). It then uses that value as a weight in a weighted average of extreme locations for the ATR limits, which are: the ATR top anchored to one ATR above the daily low, and the ATR bottom anchored to one ATR below the daily high.
The relative method is more advanced and better for most of the day; however, it can cause wild swings in the early market or pre-market before a reasonable range (as a percentage of ATR) has been established. "Auto" mode therefore takes another weighted average between the two methods, with the weight determined by the percentage of the ATR currently established within the day, more strongly weighting the calmer midpoint method before a good range is established. Once the full ATR has been achieved, the algorithm in "auto" mode will have fully switched to the relative method and will remain with that method for the rest of the day.
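One possible reading of the "auto" blend, as a self-contained sketch; the exact weighting and clamping in the published indicator may differ:

//@version=6
indicator("Auto ATR limits sketch", overlay = true)

atrLen   = input.int(14, "ATR averaging length")   // assumed default
dailyAtr = request.security(syminfo.tickerid, "D", ta.atr(atrLen)[1], lookahead = barmerge.lookahead_on)

newDay = nz(ta.change(time("D"))) != 0
var float dayHigh = na
var float dayLow  = na
dayHigh := newDay ? high : math.max(nz(dayHigh, high), high)
dayLow  := newDay ? low  : math.min(nz(dayLow,  low),  low)

rng = dayHigh - dayLow
mid = (dayHigh + dayLow) / 2

// Midpoint method: limits centred on the midpoint of the established daily range.
midTop = mid + dailyAtr / 2

// Relative method: price position inside the range weights the two anchored extremes
// (top anchored one ATR above the daily low vs. top anchored at the daily high).
r      = rng > 0 ? math.max(math.min((close - dayLow) / rng, 1), 0) : 0.5
relTop = r * (dayLow + dailyAtr) + (1 - r) * dayHigh

// Auto mode: blend the two methods by the fraction of the ATR already established.
w       = math.min(rng / dailyAtr, 1)
autoTop = w * relTop + (1 - w) * midTop
autoBot = autoTop - dailyAtr

plot(autoTop, "ATR top",    color.red)
plot(autoBot, "ATR bottom", color.green)

With this reading, once the day's range reaches the full ATR (w = 1), only the relative method remains active, matching the behavior described above.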
To explain the effect further, as an example, imagine that the price is approaching the full ATR range on the high side. At this point, the indicator will have almost fully transitioned to the second (relative) method. The lower ATR limit will now be anchored to the daily low as the price hits the upper ATR limit. If the price goes beyond the upper ATR, the lower ATR limit will stay anchored to the daily low, and the upper limit will stay anchored to one ATR above the lower limit. This allows you to see how far the price is going beyond the upper ATR limit. If the price then returns and backs off the upper ATR limit, the lower ATR limit will un-anchor from the daily low (it will actually rise, since the daily ATR range has been exceeded, so the lower ATR limit needs to come up because the actual daily range can’t fit into the ATR range anymore). The overall effect is to give you the best visual indication of where the price is in relation to a possible upper ATR-based target. Reverse this example for when the price low approaches the ATR range on the low side.
Care was taken so that the code uses no hard-coded time zones, exchanges, or session times. For this reason, it can in principle work globally. However, it very much depends on the information provided by the exchange, which is reflected in built-in Pine Script variables (see Limitations below).
█ LIMITATIONS
The indicator was developed for US/European equities and is tested on them only. It is also known to work on US futures; in this case, the whole 23-hour session is used, and the "Sessions to include in range" setting has no effect. It may or may not work as intended on security types and equities/futures for other countries.