HelperFunctions
Library "HelperFunctions"
A collection of my most used functions
apply_smoothing()
Apply one of Pine Script's built-in smoothing functions to a series
MATH
calc
Library "calc"
Library for math functions. Will expand over time.
split(_sumTotal, _divideBy, _forceMinimum, _haltOnError)
Split a large number into integer sized chunks
Parameters:
_sumTotal : (int) Total number of items
_divideBy : (int) Number of groups to make
_forceMinimum : (bool) Force a minimum of 1 item per group
_haltOnError : (bool) Force an error if there are too few groups
Returns: int array of items per group
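For illustration, here is a minimal Pine Script™ v5 sketch of the splitting idea (my own code, not the library's; it ignores the _forceMinimum and _haltOnError options and assumes _divideBy >= 1):
splitSketch(int _sumTotal, int _divideBy) =>
    array<int> groups = array.new_int()
    int baseCount = math.floor(_sumTotal / _divideBy)  // items every group gets
    int remainder = _sumTotal % _divideBy              // leftover items spread over the first groups
    for i = 0 to _divideBy - 1
        array.push(groups, baseCount + (i < remainder ? 1 : 0))
    groups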
UtilityFunctions
Library "UtilityFunctions"
Utility functions written by me
printLabelOnLastBar_string(string)
Prints string in a label on the last bar
Parameters:
string : value to print
Returns: void
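A rough sketch of the general approach (my own code, not the library's):
printLabelOnLastBarSketch(string txt) =>
    if barstate.islast
        label.new(bar_index, high, txt, style = label.style_label_down)  // draw the text above the last bar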
printLabelOnLastBar_float(float)
Prints float in a label on the last bar
Parameters:
float : value to print
Returns: void
printSeriesInReverseOnLabels(series)
Prints a float series in labels in reverse (the first value is on the last candle, the second value is on the second to last candle, etc.)
Parameters:
series : float values to print
Returns: void
isPeriodDailyBased(string)
Returns true/false if the period is Daily based (1D, 3D, ...)
Parameters:
string : timeframe period
Returns: true/false
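A possible sketch, assuming daily-based period strings always end in "D" ("D", "1D", "3D", ...):
isPeriodDailyBasedSketch(string period) =>
    str.endswith(period, "D")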
get_multiplier(string)
Gets the multiplier of the timeframe passed compared to the current timeframe. If the current TF is 5m and the passed timeframe period is 30m, the result will be 6
Parameters:
string : timeframe param
Returns: simple float of the multiplier
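One way to sketch this with the v5 built-in timeframe.in_seconds() (my own code, not the library's):
getMultiplierSketch(simple string tf) =>
    // e.g. on a 5m chart, getMultiplierSketch("30") returns 6.0
    1.0 * timeframe.in_seconds(tf) / timeframe.in_seconds(timeframe.period)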
percentageLib
Library "percentageLib"
: everything you need about percentages
getPercentage(entry, exit)
: get the percentage change between two values
Parameters:
entry : value of the entry price
exit : value of the exit price
Returns: a negative or positive value
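The formula behind this is simply the percent change from entry to exit (a sketch, not the library's exact code):
getPercentageSketch(float entryPrice, float exitPrice) =>
    (exitPrice - entryPrice) / entryPrice * 100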
applyPercentageNoAddUp(price, percentage)
: apply a percentage change (increase or decrease) to a value
Parameters:
price : value of the price
percentage : percentage change; can be negative or positive
Returns: a positive value only
applyPercentageAddUp(price, percentage)
: apply a percentage change (increase or decrease) to a value
Parameters:
price : value of the price
percentage : percentage change; can be negative or positive
Returns: a positive value only
reversePercentage(percentage)
: takes a percentage change (positive or negative) and returns the percentage change needed to get back to the previous price
Parameters:
percentage : percentage change; can be negative or positive
Returns: a positive or negative value
@example : reversePercentage(-10) => 11.111111111111, reversePercentage(10) => -9.090909090909
getReversePercentage(price, percentage)
: takes two prices and returns the percentage change needed to get back to the previous price
Parameters:
price : value of the price
percentage : percentage change; can be negative or positive
Returns: a positive value only
@example : getReversePercentage(100, 90) => 11.111111111111
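The math implied by these examples can be sketched as follows (parameter names are my own, not the library's):
reversePercentageSketch(float pct) =>
    (100 / (100 + pct) - 1) * 100             // reversePercentageSketch(-10) => 11.11...
getReversePercentageSketch(float fromPrice, float toPrice) =>
    (fromPrice / toPrice - 1) * 100           // getReversePercentageSketch(100, 90) => 11.11...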
multipeBarTotalPercentage()
xor logical operator
Library "xor"
xor(a, b)
xor: Exclusive or, or exclusive disjunction is a logical operation that is true if and only if its arguments differ (one is true, the other is false).
Parameters:
a : first argument
b : second argument
Returns: the xor of a and b (true only if exactly one of a and b is true)
Example:
true xor true = false
true xor false = true
false xor true = true
false xor false = false
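A minimal sketch of the same logic:
xorSketch(bool a, bool b) =>
    (a or b) and not (a and b)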
Moving_Averages
Library "Moving_Averages"
This library contains most of the important moving average functions with series int support, which means they can be used with variable length inputs. For conventional use, please use TradingView's built-in ta functions for moving averages, as they are more precise. I'll use the functions in this library in my other scripts with dynamic length inputs.
ema(src, len)
Exponential Moving Average (EMA)
Parameters:
src : Source
len : Period
Returns: Exponential Moving Average with Series Int Support (EMA)
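For illustration, an EMA that accepts a series int length can recompute its smoothing factor on every bar, roughly like this (a sketch, not the library's exact code):
emaDynamicSketch(float src, int len) =>
    float alpha = 2.0 / (len + 1)             // recomputed each bar, so len may vary
    var float e = na
    e := na(e) ? src : alpha * src + (1 - alpha) * e
    e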
alma(src, len, a_offset, a_sigma)
Arnaud Legoux Moving Average (ALMA)
Parameters:
src : Source
len : Period
a_offset : Arnaud Legoux offset
a_sigma : Arnaud Legoux sigma
Returns: Arnaud Legoux Moving Average (ALMA)
covwema(src, len)
Coefficient of Variation Weighted Exponential Moving Average (COVWEMA)
Parameters:
src : Source
len : Period
Returns: Coefficient of Variation Weighted Exponential Moving Average (COVWEMA)
covwma(src, len)
Coefficient of Variation Weighted Moving Average (COVWMA)
Parameters:
src : Source
len : Period
Returns: Coefficient of Variation Weighted Moving Average (COVWMA)
dema(src, len)
DEMA - Double Exponential Moving Average
Parameters:
src : Source
len : Period
Returns: DEMA - Double Exponential Moving Average
edsma(src, len, ssfLength, ssfPoles)
EDSMA - Ehlers Deviation Scaled Moving Average
Parameters:
src : Source
len : Period
ssfLength : EDSMA - Super Smoother Filter Length
ssfPoles : EDSMA - Super Smoother Filter Poles
Returns: Ehlers Deviation Scaled Moving Average (EDSMA)
eframa(src, len, FC, SC)
Ehlers Modified Fractal Adaptive Moving Average (EFRAMA)
Parameters:
src : Source
len : Period
FC : Lower Shift Limit for Ehlers Modified Fractal Adaptive Moving Average
SC : Upper Shift Limit for Ehlers Modified Fractal Adaptive Moving Average
Returns: Ehlers Modified Fractal Adaptive Moving Average (EFRAMA)
ehma(src, len)
EHMA - Exponential Hull Moving Average
Parameters:
src : Source
len : Period
Returns: Exponential Hull Moving Average (EHMA)
etma(src, len)
Exponential Triangular Moving Average (ETMA)
Parameters:
src : Source
len : Period
Returns: Exponential Triangular Moving Average (ETMA)
frama(src, len)
Fractal Adaptive Moving Average (FRAMA)
Parameters:
src : Source
len : Period
Returns: Fractal Adaptive Moving Average (FRAMA)
hma(src, len)
HMA - Hull Moving Average
Parameters:
src : Source
len : Period
Returns: Hull Moving Average (HMA)
jma(src, len, jurik_phase, jurik_power)
Jurik Moving Average - JMA
Parameters:
src : Source
len : Period
jurik_phase : Jurik (JMA) Only - Phase
jurik_power : Jurik (JMA) Only - Power
Returns: Jurik Moving Average (JMA)
kama(src, len, k_fastLength, k_slowLength)
Kaufman's Adaptive Moving Average (KAMA)
Parameters:
src : Source
len : Period
k_fastLength : Number of periods for the fastest exponential moving average
k_slowLength : Number of periods for the slowest exponential moving average
Returns: Kaufman's Adaptive Moving Average (KAMA)
kijun(_high, _low, len, kidiv)
Kijun v2
Parameters:
_high : High value of bar
_low : Low value of bar
len : Period
kidiv : Kijun MOD Divider
Returns: Kijun v2
lsma(src, len, offset)
LSMA/LRC - Least Squares Moving Average / Linear Regression Curve
Parameters:
src : Source
len : Period
offset : Offset
Returns: Least Squares Moving Average (LSMA)/ Linear Regression Curve (LRC)
mf(src, len, beta, feedback, z)
MF - Modular Filter
Parameters:
src : Source
len : Period
beta : Modular Filter, General Filter Only - Beta
feedback : Modular Filter Only - Feedback
z : Modular Filter Only - Feedback Weighting
Returns: Modular Filter (MF)
rma(src, len)
RMA - RSI Moving average
Parameters:
src : Source
len : Period
Returns: RSI Moving average (RMA)
sma(src, len)
SMA - Simple Moving Average
Parameters:
src : Source
len : Period
Returns: Simple Moving Average (SMA)
smma(src, len)
Smoothed Moving Average (SMMA)
Parameters:
src : Source
len : Period
Returns: Smoothed Moving Average (SMMA)
stma(src, len)
Simple Triangular Moving Average (STMA)
Parameters:
src : Source
len : Period
Returns: Simple Triangular Moving Average (STMA)
tema(src, len)
TEMA - Triple Exponential Moving Average
Parameters:
src : Source
len : Period
Returns: Triple Exponential Moving Average (TEMA)
thma(src, len)
THMA - Triple Hull Moving Average
Parameters:
src : Source
len : Period
Returns: Triple Hull Moving Average (THMA)
vama(src, len, volatility_lookback)
VAMA - Volatility Adjusted Moving Average
Parameters:
src : Source
len : Period
volatility_lookback : Volatility lookback length
Returns: Volatility Adjusted Moving Average (VAMA)
vidya(src, len)
Variable Index Dynamic Average (VIDYA)
Parameters:
src : Source
len : Period
Returns: Variable Index Dynamic Average (VIDYA)
vwma(src, len)
Volume-Weighted Moving Average (VWMA)
Parameters:
src : Source
len : Period
Returns: Volume-Weighted Moving Average (VWMA)
wma(src, len)
WMA - Weighted Moving Average
Parameters:
src : Source
len : Period
Returns: Weighted Moving Average (WMA)
zema(src, len)
Zero-Lag Exponential Moving Average (ZEMA)
Parameters:
src : Source
len : Period
Returns: Zero-Lag Exponential Moving Average (ZEMA)
zsma(src, len)
Zero-Lag Simple Moving Average (ZSMA)
Parameters:
src : Source
len : Period
Returns: Zero-Lag Simple Moving Average (ZSMA)
evwma(src, len)
EVWMA - Elastic Volume Weighted Moving Average
Parameters:
src : Source
len : Period
Returns: Elastic Volume Weighted Moving Average (EVWMA)
tt3(src, len, a1_t3)
Tillson T3
Parameters:
src : Source
len : Period
a1_t3 : Tillson T3 Volume Factor
Returns: Tillson T3
gma(src, len)
GMA - Geometric Moving Average
Parameters:
src : Source
len : Period
Returns: Geometric Moving Average (GMA)
wwma(src, len)
WWMA - Welles Wilder Moving Average
Parameters:
src : Source
len : Period
Returns: Welles Wilder Moving Average (WWMA)
ama(src, _high, _low, len, ama_f_length, ama_s_length)
AMA - Adjusted Moving Average
Parameters:
src : Source
_high : High value of bar
_low : Low value of bar
len : Period
ama_f_length : Fast EMA Length
ama_s_length : Slow EMA Length
Returns: Adjusted Moving Average (AMA)
cma(src, len)
Corrective Moving average (CMA)
Parameters:
src : Source
len : Period
Returns: Corrective Moving average (CMA)
gmma(src, len)
Geometric Mean Moving Average (GMMA)
Parameters:
src : Source
len : Period
Returns: Geometric Mean Moving Average (GMMA)
ealf(src, len, LAPercLen_, FPerc_)
Ehlers' Adaptive Laguerre Filter (EALF)
Parameters:
src : Source
len : Period
LAPercLen_ : Median Length
FPerc_ : Median Percentage
Returns: Ehlers' Adaptive Laguerre Filter (EALF)
elf(src, len, LAPercLen_, FPerc_)
ELF - Ehlers' Laguerre Filter
Parameters:
src : Source
len : Period
LAPercLen_ : Median Length
FPerc_ : Median Percentage
Returns: Ehlers' Laguerre Filter (ELF)
edma(src, len)
Exponentially Deviating Moving Average (MZ EDMA)
Parameters:
src : Source
len : Period
Returns: Exponentially Deviating Moving Average (MZ EDMA)
pnr(src, len, rank_inter_Perc_)
PNR - percentile nearest rank
Parameters:
src : Source
len : Period
rank_inter_Perc_ : Rank and Interpolation Percentage
Returns: Percentile Nearest Rank (PNR)
pli(src, len, rank_inter_Perc_)
PLI - Percentile Linear Interpolation
Parameters:
src : Source
len : Period
rank_inter_Perc_ : Rank and Interpolation Percentage
Returns: Percentile Linear Interpolation (PLI)
rema(src, len)
Range EMA (REMA)
Parameters:
src : Source
len : Period
Returns: Range EMA (REMA)
sw_ma(src, len)
Sine-Weighted Moving Average (SW-MA)
Parameters:
src : Source
len : Period
Returns: Sine-Weighted Moving Average (SW-MA)
vwap(src, len)
Volume Weighted Average Price (VWAP)
Parameters:
src : Source
len : Period
Returns: Volume Weighted Average Price (VWAP)
mama(src, len)
MAMA - MESA Adaptive Moving Average
Parameters:
src : Source
len : Period
Returns: MESA Adaptive Moving Average (MAMA)
fama(src, len)
FAMA - Following Adaptive Moving Average
Parameters:
src : Source
len : Period
Returns: Following Adaptive Moving Average (FAMA)
hkama(src, len)
HKAMA - Hilbert based Kaufman's Adaptive Moving Average
Parameters:
src : Source
len : Period
Returns: Hilbert based Kaufman's Adaptive Moving Average (HKAMA)
Price Displacement - Candlestick (OHLC) Calculations
A Magical little helper friend for Candle Math.
When composing scripts, it is often necessary to manipulate the math around the OHLC. At times you want a scalar (absolute) value; at other times you want a vector (+/-). Sometimes you want the open - close, and sometimes you want just the positive number of the body size. You might want it in ticks, or in points, or in percentages. And every time you try to put it together, you waste precious time and brain power trying to think about how to properly structure what you're looking for. Not to mention it's normally not that aesthetically pleasing to look at in the code.
So, this fixes all of that.
Using this library, a function like 'pd.pt(_exp)' can call any kind of candlestick math you need. The function returns the candlestick math you define using particular expressions.
Candle Math Functions Include:
Points:
pt(_exp) Absolute Point Displacement. Point quantity of given size parameters according to _exp.
vpt(_exp) Vector Point Displacement. Point quantity of given size parameters according to _exp.
Ticks:
tick(_exp) Absolute Tick Displacement. Tick quantity of given size parameters according to _exp.
vtick(_exp) Vector Tick Displacement. Tick quantity of given size parameters according to _exp.
Percentages:
pct(_exp, _prec) Absolute Percent Displacement. (w/rounding overload). Percent quantity of bar range of given size parameters according to _exp.
vpct(_exp, _prec) Vector Percent Displacement (w/rounding overload). Percent quantity of bar range of given size parameters according to _exp.
Expressions You Can Use with Formulas:
The expressions are simple (simple strings, that is) and I did my best to make them sensible, generally using just the ohlc abbreviations. I also included uw, lw, bd, and rg for when you're just trying to pull a candle component out. That way you don't have to think about which of the ohlc you're trying to get; just use pd.tick("uw") and now the variable is assigned the length of the upper wick, absolute value, in ticks. If you want the vector in points, it's pd.vpt("uw"). It also makes changing things easy as I write it out.
Expression List:
Combinations
"oh" = open - high
"ol" = open - low
"oc" = open - close
"ho" = high - open
"hl" = high - low
"hc" = high - close
"lo" = low - open
"lh" = low - high
"lc" = low - close
"co" = close - open
"ch" = close - high
"cl" = close - low
Candle Components
"uw" = Upper Wick
"bd" = Body
"lw" = Lower Wick
"rg" = Range
Pct() Only
"scp" = Scalar Close Position
"sop" = Scalar Open Position
"vcp" = Vector Close Position
"vop" = Vector Open Position
The attributes are going to be available in the pop-up dialogue when you mouse over the function, so you don't really have to remember them. I tried to make that look as efficient as possible. You'll notice it follows the OHLC pattern. Thus, "oh" precedes "ho" (heyo) because "O" comes first in the OHLC. It's a way to help you find the expression you're looking for quickly, like looking through an alphabetized list for traders.
There is a copy/paste console friendly helper list in the script itself.
Additional Notes on the Pct() Only functions:
This is the original reason I started writing this. These concepts place a rating/value on the bar, based on candle attributes, in one number. These formulas put an open or close value in a percentile of the bar relative to another aspect of the bar.
Scalar - Non-directional. Absolute Value.
Scalar Position: The position of the price attribute relative to the scale of the bar range (high - low)
Example: high = 100. low = 0. close = 25.
(A) Measure price distance C-L. How high above the low did the candle close (e.g. close - low = 25)
(B) Divide by bar range (high - low). 25 / (100 - 0) = .25
Explanation: The candle closed at the 25th percentile of the bar range, given the bar range low = 0 and bar range high = 100.
Formula: scp = (close - low) / (high - low)
Vector = Directional.
Vector Position: The position of the price attribute relative to the scale of the bar midpoint (Vector Position at hl2 = 0)
Example: high = 100. low = 0. close = 25.
(A) Measure Price distance C-L: How high above the low did the candle close (e.g. close - low = 25)
(B) Measure Price distance H-C: How far below the high did the candle close (e.g. high - close = 75)
(C) Take Difference: A - B = C = -50
(D) Divide by bar range (high - low). -50 / (100 - 0) = -0.50
Explanation: The candle closed at the midpoint between hl2 and the low.
Formula: vcp = ((close - low) - (high - close)) / (high - low)
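In Pine terms, the two close-position formulas reduce to (a sketch; I assume the "sop"/"vop" variants simply substitute the open for the close):
scp = (close - low) / (high - low)
vcp = (2 * close - high - low) / (high - low)  // algebraically the same as ((close - low) - (high - close)) / (high - low)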
Thank you for checking this out. I hope no one else has already done this (because it took half the day) and I hope you find value in it. Be well. Trade well.
Library "PD"
Price Displacement
pt(_exp) Absolute Point Displacement. Point quantity of given size parameters according to _exp.
Parameters:
_exp : (string) Price Parameter
Returns: Point size of given expression as an absolute value.
vpt(_exp) Vector Point Displacement. Point quantity of given size parameters according to _exp.
Parameters:
_exp : (string) Price Parameter
Returns: Point size of given expression as a vector.
tick(_exp) Absolute Tick Displacement. Tick quantity of given size parameters according to _exp.
Parameters:
_exp : (string) Price Parameter
Returns: Tick size of given expression as an absolute value.
vtick(_exp) Vector Tick Displacement. Tick quantity of given size parameters according to _exp.
Parameters:
_exp : (string) Price Parameter
Returns: Tick size of given expression as a vector.
pct(_exp, _prec) Absolute Percent Displacement (w/rounding overload). Percent quantity of bar range of given size parameters according to _exp.
Parameters:
_exp : (string) Expression
_prec : (int) Overload - Place value precision definition
Returns: Percent size of given expression as decimal.
vpct(_exp, _prec) Vector Percent Displacement (w/rounding overload). Percent quantity of bar range of given size parameters according to _exp.
Parameters:
_exp : (string) Expression
_prec : (int) Overload - Place value precision definition
Returns: Percent size of given expression as decimal.
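For reference, usage looks roughly like this (the import path and version below are hypothetical placeholders; use the actual publisher and version):
import AuthorName/PD/1 as pd
upperWickTicks = pd.tick("uw")   // upper wick length in ticks, absolute value
upperWickPts = pd.vpt("uw")      // upper wick length in points, as a vector
bodyVector = pd.vpt("co")        // close - open in points, signed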
Strategy Table Library
Library "table_library"
With this library, you can add tables to your strategies.
strategy_table()
Returns: Strategy Profit Table
Adds a table to the chart of the strategy from which you call the function. You can see data such as net profit in this table.
No parameters. Just call the function inside the strategy.
Example Code :
import only_fibonacci/table_lib/1 as st
st.strategy_table()
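For context, a table like this is typically drawn with Pine v5's table built-ins, roughly as in the sketch below (my own simplified illustration, not the library's internal code; it must run inside a strategy() script):
var table t = table.new(position.top_right, 1, 2, border_width = 1)
if barstate.islast
    table.cell(t, 0, 0, "Net Profit")
    table.cell(t, 0, 1, str.tostring(strategy.netprofit, format.mintick))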
AutoFiboRetrace
Library "AutoFiboRetrace"
TODO: add library description here
fun(x) TODO: add function description here
Parameters:
x : TODO: add parameter x description here
Returns: TODO: add what function returns
honest personal library
Library "honestpersonallibrary"
thestratnumber() this will return the number 1, 2 or 3 using the logic from Rob Smith's #thestrat, which uses these types of bars for setups
getBodySize() Gets the current candle's body size (in POINTS, divide by 10 to get pips)
Returns: The current candle's body size in POINTS
getTopWickSize() Gets the current candle's top wick size (in POINTS, divide by 10 to get pips)
Returns: The current candle's top wick size in POINTS
getBottomWickSize() Gets the current candle's bottom wick size (in POINTS, divide by 10 to get pips)
Returns: The current candle's bottom wick size in POINTS
getBodyPercent() Gets the current candle's body size as a percentage of its entire size including its wicks
Returns: The current candle's body size percentage
strictBearPinBar(float, float) This is to find bearish pin bars with a very long wick compared to the body
Parameters:
float : minTopMulitplier (default=4) The minimum number of times that the top wick has to be bigger than the candle body size
float : maxBottomMultiplier (default=2) The maximum number of times that the bottom wick can be bigger than the candle body size
Returns: true if the current candle is within the parameters
strictBullPinBar(float, float) This is to find bullish pin bars with a very long wick compared to the body
Parameters:
float : minTopMulitplier (default=4) The minimum number of times that the top wick has to be bigger than the candle body size
float : maxBottomMultiplier (default=2) The maximum number of times that the bottom wick can be bigger than the candle body size
Returns: true if the current candle is within the parameters
FunctionIntrabarCrossValue
Library "FunctionIntrabarCrossValue"
intrabar_cross_value(a, b, step) Find the minimum difference of an intrabar cross and return its median value.
Parameters:
a : float, series a.
b : float, series b.
step : float, step to iterate x axis, default=0.01
Returns: float
OrdinaryLeastSquares
Library "OrdinaryLeastSquares"
One of the most common ways to estimate the coefficients for a linear regression is to use the Ordinary Least Squares (OLS) method.
This library implements OLS in pine. This implementation can be used to fit a linear regression of multiple independent variables onto one dependent variable,
as long as the assumptions behind OLS hold.
solve_xtx_inv(x, y) Solve a linear system of equations using the Ordinary Least Squares method.
This function returns both the estimated OLS solution and a matrix that essentially measures the model stability (linear dependence between the columns of 'x').
NOTE: The latter is an intermediate step when estimating the OLS solution but is useful when calculating the covariance matrix and is returned here to save computation time
so that this step doesn't have to be calculated again when things like standard errors should be calculated.
Parameters:
x : The matrix containing the independent variables. Each column is regarded by the algorithm as one independent variable. The row count of 'x' and 'y' must match.
y : The matrix containing the dependent variable. This matrix can only contain one dependent variable and can therefore only contain one column. The row count of 'x' and 'y' must match.
Returns: Returns both the estimated OLS solution and a matrix that essentially measures the model stability (xtx_inv is equal to (X'X)^-1).
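Conceptually this is the normal-equation solution; with Pine v5 matrix built-ins it could be sketched as follows (an illustration, not the library's exact implementation):
olsSolveSketch(matrix<float> x, matrix<float> y) =>
    matrix<float> xt = matrix.transpose(x)
    matrix<float> xtx_inv = matrix.inv(matrix.mult(xt, x))            // (X'X)^-1
    matrix<float> beta_hat = matrix.mult(xtx_inv, matrix.mult(xt, y))
    [beta_hat, xtx_inv]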
solve(x, y) Solve a linear system of equations using the Ordinary Least Squares method.
Parameters:
x : The matrix containing the independent variables. Each column is regarded by the algorithm as one independent variable. The row count of 'x' and 'y' must match.
y : The matrix containing the dependent variable. This matrix can only contain one dependent variable and can therefore only contain one column. The row count of 'x' and 'y' must match.
Returns: Returns the estimated OLS solution.
standard_errors(x, y, beta_hat, xtx_inv) Calculate the standard errors.
Parameters:
x : The matrix containing the independent variables. Each column is regarded by the algorithm as one independent variable. The row count of 'x' and 'y' must match.
y : The matrix containing the dependent variable. This matrix can only contain one dependent variable and can therefore only contain one column. The row count of 'x' and 'y' must match.
beta_hat : The Ordinary Least Squares (OLS) solution provided by solve_xtx_inv() or solve().
xtx_inv : This is (X'X)^-1, which means we take the transpose of the X matrix, multiply that by the X matrix and then take the inverse of the result.
This essentially measures the linear dependence between the columns of the X matrix.
Returns: The standard errors.
estimate(x, beta_hat) Estimate the next step of a linear model.
Parameters:
x : The matrix containing the independent variables. Each column is regarded by the algorithm as one independent variable.
beta_hat : The Ordinary Least Squares (OLS) solution provided by solve_xtx_inv() or solve().
Returns: Returns the new estimate of Y based on the linear model.
FunctionMatrixSolve
Library "FunctionMatrixSolve"
Matrix Equation solution for Ax = B, finds the value of x.
solve(A, B) Solves Matrix Equation for Ax = B, finds value for x.
Parameters:
A : matrix, Square matrix with data values.
B : matrix, One column matrix with data values.
Returns: matrix with x, where x = A^-1 * B, assuming A is square and has full rank
introcs.cs.princeton.edu
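With Pine v5 matrix built-ins, the same relation can be sketched in one line (illustration only; assumes A is invertible):
matrixSolveSketch(matrix<float> a, matrix<float> b) =>
    matrix.mult(matrix.inv(a), b)   // x = A^-1 * B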
FunctionPolynomialFit
Library "FunctionPolynomialFit"
Performs Polynomial Regression fit to data.
In statistics, polynomial regression is a form of regression analysis in which
the relationship between the independent variable x and the dependent variable
y is modelled as an nth degree polynomial in x.
reference:
en.wikipedia.org
www.bragitoff.com
gauss_elimination(A, m, n) Performs Gauss elimination and returns the upper triangular matrix and the solution of the equations.
Parameters:
A : float matrix, data samples.
m : int, defval=na, number of rows.
n : int, defval=na, number of columns.
Returns: float array with coefficients.
polyfit(X, Y, degree) Fits a polynomial of a degree to (x, y) points.
Parameters:
X : float array, data sample x point.
Y : float array, data sample y point.
degree : int, defval=2, degree of the polynomial.
Returns: float array with coefficients.
note:
p(x) = p[0] * x**deg + ... + p[deg]
interpolate(coeffs, x) interpolate the y position at the provided x.
Parameters:
coeffs : float array, coefficients of the polynomial.
x : float, position x to estimate y.
Returns: float.
DominantCycle
Collection of Dominant Cycle estimators. Length adaptation used in the Adaptive Moving Averages and the Adaptive Oscillators tries to follow price movements and accelerate/decelerate accordingly (usually quite rapidly with a huge range). Cycle estimators, on the other hand, try to measure the cycle period of the current market, which does not reflect price movement or the rate of change (the rate of change may also differ depending on the cycle phase, but the cycle period itself usually changes slowly). This collection may become encyclopaedic, so if you have any working cycle estimator, drop me a line in the comments below. Suggestions are welcome. Currently included estimators are based on the work of John F. Ehlers
mamaPeriod(src, dynLow, dynHigh) MESA Adaptation - MAMA Cycle
Parameters:
src : Series to use
dynLow : Lower bound for the dynamic length
dynHigh : Upper bound for the dynamic length
Returns: Calculated period
Based on MESA Adaptive Moving Average by John F. Ehlers
Performs Hilbert Transform Homodyne Discriminator cycle measurement
Unlike MAMA Alpha function (in LengthAdaptation library), this does not compute phase rate of change
Introduced in the September 2001 issue of Stocks and Commodities
Inspired by the @everget implementation:
Inspired by the @anoojpatel implementation:
paPeriod(src, dynLow, dynHigh, preHP, preSS, preHP) Pearson Autocorrelation
Parameters:
src : Series to use
dynLow : Lower bound for the dynamic length
dynHigh : Upper bound for the dynamic length
preHP : Use High Pass prefilter (default)
preSS : Use Super Smoother prefilter (default)
preHP : Use Hann Windowing prefilter
Returns: Calculated period
Based on Pearson Autocorrelation Periodogram by John F. Ehlers
Introduced in the September 2016 issue of Stocks and Commodities
Inspired by the @blackcat1402 implementation:
Inspired by the @rumpypumpydumpy implementation:
Corrected many errors, and made small speed optimizations, so this could be the best implementation to date (still slow, though, so may revisit in future)
High Pass and Super Smoother prefilters are used in the original implementation
dftPeriod(src, dynLow, dynHigh, preHP, preSS, preHP) Discrete Fourier Transform
Parameters:
src : Series to use
dynLow : Lower bound for the dynamic length
dynHigh : Upper bound for the dynamic length
preHP : Use High Pass prefilter (default)
preSS : Use Super Smoother prefilter (default)
preHP : Use Hann Windowing prefilter
Returns: Calculated period
Based on Spectrum from Discrete Fourier Transform by John F. Ehlers
Inspired by the @blackcat1402 implementation:
High Pass, Super Smoother and Hann Windowing prefilters are used in the original implementation
phasePeriod(src, dynLow, dynHigh, preHP, preSS, preHP) Phase Accumulation
Parameters:
src : Series to use
dynLow : Lower bound for the dynamic length
dynHigh : Upper bound for the dynamic length
preHP : Use High Pass prefilter (default)
preSS : Use Super Smoother prefilter (default)
preHP : Use Hann Windowing prefilter
Returns: Calculated period
Based on Dominant Cycle from Phase Accumulation by John F. Ehlers
High Pass and Super Smoother prefilters are used in the original implementation
doAdapt(type, src, len, dynLow, dynHigh, chandeSDLen, chandeSmooth, chandePower, preHP, preSS, preHP) Execute a particular Length Adaptation or Dominant Cycle Estimator from the list
Parameters:
type : Length Adaptation or Dominant Cycle Estimator type to use
src : Series to use
len : Reference lookback length
dynLow : Lower bound for the dynamic length
dynHigh : Upper bound for the dynamic length
chandeSDLen : Lookback length of Standard deviation for Chande's Dynamic Length
chandeSmooth : Smoothing length of Standard deviation for Chande's Dynamic Length
chandePower : Exponent of the length adaptation for Chande's Dynamic Length (lower is smaller variation)
preHP : Use High Pass prefilter for the Estimators that support it (default)
preSS : Use Super Smoother prefilter for the Estimators that support it (default)
preHP : Use Hann Windowing prefilter for the Estimators that support it
Returns: Calculated period (float, not limited)
doEstimate(type, src, dynLow, dynHigh, preHP, preSS, preHP) Execute a particular Dominant Cycle Estimator from the list
Parameters:
type : Dominant Cycle Estimator type to use
src : Series to use
dynLow : Lower bound for the dynamic length
dynHigh : Upper bound for the dynamic length
preHP : Use High Pass prefilter for the Estimators that support it (default)
preSS : Use Super Smoother prefilter for the Estimators that support it (default)
preHP : Use Hann Windowing prefilter for the Estimators that support it
Returns: Calculated period (float, not limited)
least_squares_regression
Library "least_squares_regression"
least_squares_regression: Least squares regression algorithm to find the optimal price interval for a given time period
basic_lsr(series, series, series) basic_lsr: Basic least squares regression algorithm
Parameters:
series : int t: time scale value array corresponding to price
series : float p: price scale value array corresponding to time
series : int array_size: the length of regression array
Returns: reg_slop, reg_intercept, reg_level, reg_stdev
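For reference, the closed-form fit for slope and intercept can be sketched as below (my own simplified version; it returns only the first two of the documented outputs and assumes the arrays hold more than one element):
basicLsrSketch(array<int> t, array<float> p) =>
    int n = array.size(t)
    float sx = 0.0
    float sy = 0.0
    float sxy = 0.0
    float sxx = 0.0
    for i = 0 to n - 1
        float x = array.get(t, i)
        float y = array.get(p, i)
        sx += x
        sy += y
        sxy += x * y
        sxx += x * x
    float slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    float intercept = (sy - slope * sx) / n
    [slope, intercept]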
trend_line_lsr(series, series, series, string, series, series) top_trend_line_lsr: Trend line fitting based on the least squares algorithm
Parameters:
series : int t: time scale value array corresponding to price
series : float p: price scale value array corresponding to time
series : int array_size: the length of regression array
string : reg_type: regression type in 'top' and 'bottom'
series : int max_iter: maximum fitting iterations
series : int min_points: the threshold of regression point numbers
Returns: reg_slop, reg_intercept, reg_level, reg_stdev, reg_point_num
simple_squares_regression
Library "simple_squares_regression"
simple_squares_regression: simple squares regression algorithm to find the optimal price interval for a given time period
basic_ssr(series, series, series) basic_ssr: Basic simple squares regression algorithm
Parameters:
series : float src: the regression source such as close
series : int region_forward: number of candle lines at the right end of the regression region from the current candle line
series : int region_len: the length of regression region
Returns: left_loc, right_loc, reg_val, reg_std, reg_max_offset
search_ssr(series, series, series, series) search_ssr: simple squares regression region search algorithm
Parameters:
series : float src: the regression source such as close
series : int max_forward: max number of candle lines at the right end of the regression region from the current candle line
series : int region_lower: the lower length of regression region
series : int region_upper: the upper length of regression region
Returns: left_loc, right_loc, reg_val, reg_level, reg_std_err, reg_max_offset
on_balance_volume
Library "on_balance_volume"
on_balance_volume: custom on balance volume
obv_diff(string, simple) obv_diff: custom on balance volume diff version
Parameters:
string : type: the moving average type of on balance volume
simple : int len: the moving average length of on balance volume
Returns: obv_diff: custom on balance volume diff value
obv_diff_norm(string, simple) obv_diff_norm: custom normalized on balance volume diff version
Parameters:
string : type: the moving average type of on balance volume
simple : int len: the moving average length of on balance volume
Returns: obv_diff: custom normalized on balance volume diff value
moving_average
Library "moving_average"
moving_average: moving average variants
variant(string, series, simple) variant: moving average variants
Parameters:
string : type: type in
series : float src: the source series of moving average
simple : int len: the length of moving average
Returns: float: the moving average variant value
NormalizedOscillators
Library "NormalizedOscillators"
Collection of some common Oscillators. All are zero-mean and normalized to fit in the -1..1 range. Some are modified, so that the internal smoothing function could be configurable (for example, to enable Hann Windowing, that John F. Ehlers uses frequently). Some are modified for other reasons (see comments in the code), but never without a reason. This collection is neither encyclopaedic, nor reference, however I try to find the most correct implementation. Suggestions are welcome.
rsi2(upper, lower) RSI - second step
Parameters:
upper : Upwards momentum
lower : Downwards momentum
Returns: Oscillator value
Modified by Ehlers from Wilder's implementation to have a zero mean (oscillator from -1 to +1)
Originally: 100.0 - (100.0 / (1.0 + upper / lower))
Ignoring the 100 scale factor, we get: upper / (upper + lower)
Multiplying by two and subtracting 1, we get: (2 * upper) / (upper + lower) - 1 = (upper - lower) / (upper + lower)
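The final expression, written as a small Pine sketch:
rsi2Sketch(float upper, float lower) =>
    (upper - lower) / (upper + lower)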
rms(src, len) Root mean square (RMS)
Parameters:
src : Source series
len : Lookback period
Based on John F. Ehlers' implementation
ift(src) Inverse Fisher Transform
Parameters:
src : Source series
Returns: Normalized series
Based on John F. Ehlers' implementation
The input values have been multiplied by 2 (was "2*src", now "4*src") to force expansion - not compression
The inputs may be further modified, if needed
stoch(src, len) Stochastic
Parameters:
src : Source series
len : Lookback period
Returns: Oscillator series
ssstoch(src, len) Super Smooth Stochastic (part of MESA Stochastic) by John F. Ehlers
Parameters:
src : Source series
len : Lookback period
Returns: Oscillator series
Introduced in the January 2014 issue of Stocks and Commodities
This is not an implementation of MESA Stochastic, as it is based on a Highpass filter not present in the function (but you can construct it)
This implementation is scaled by 0.95, so that Super Smoother does not exceed 1/-1
I do not know if this is the right way to fix this issue, but it works for now
netKendall(src, len) Noise Elimination Technology by John F. Ehlers
Parameters:
src : Source series
len : Lookback period
Returns: Oscillator series
Introduced in the December 2020 issue of Stocks and Commodities
Uses simplified Kendall correlation algorithm
Implementation by @QuantTherapy:
rsi(src, len, smooth) RSI
Parameters:
src : Source series
len : Lookback period
smooth : Internal smoothing algorithm
Returns: Oscillator series
vrsi(src, len, smooth) Volume-scaled RSI
Parameters:
src : Source series
len : Lookback period
smooth : Internal smoothing algorithm
Returns: Oscillator series
This is my own version of RSI. It scales price movements by the proportion of RMS of volume
mrsi(src, len, smooth) Momentum RSI
Parameters:
src : Source series
len : Lookback period
smooth : Internal smoothing algorithm
Returns: Oscillator series
Inspired by RocketRSI by John F. Ehlers (Stocks and Commodities, May 2018)
rrsi(src, len, smooth) Rocket RSI
Parameters:
src : Source series
len : Lookback period
smooth : Internal smoothing algorithm
Returns: Oscillator series
Inspired by RocketRSI by John F. Ehlers (Stocks and Commodities, May 2018)
Does not include Fisher Transform of the original implementation, as the output must be normalized
Does not include momentum smoothing length configuration, so always assumes half the lookback length
mfi(src, len, smooth) Money Flow Index
Parameters:
src : Source series
len : Lookback period
smooth : Internal smoothing algorithm
Returns: Oscillator series
lrsi(src, in_gamma, len) Laguerre RSI by John F. Ehlers
Parameters:
src : Source series
in_gamma : Damping factor (default is -1 to generate from len)
len : Lookback period (alternatively, if gamma is not set)
Returns: Oscillator series
The original implementation is with gamma. As it is impossible to collect gamma in my system, where the only user input is length,
an alternative calculation is included, where gamma is set by dividing len by 30. Maybe a different calculation would be better?
fe(len) Choppiness Index or Fractal Energy
Parameters:
len : Lookback period
Returns: Oscillator series
The Choppiness Index (CHOP) was created by E. W. Dreiss
This indicator is sometimes called Fractal Energy
er(src, len) Efficiency ratio
Parameters:
src : Source series
len : Lookback period
Returns: Oscillator series
Based on Kaufman Adaptive Moving Average calculation
This is the correct Efficiency ratio calculation, and most other implementations are wrong:
the number of bar differences is 1 less than the length, otherwise we are adding the change outside of the measured range!
For reference, see Stocks and Commodities June 1995
dmi(len, smooth) Directional Movement Index
Parameters:
len : Lookback period
smooth : Internal smoothing algorithm
Returns: Oscillator series
Based on the original Tradingview algorithm
Modified with inspiration from John F. Ehlers DMH (but not implementing the DMH algorithm!)
Only ADX is returned
Rescaled to fit -1 to +1
Unlike most oscillators, there is no src parameter as DMI works directly with high and low values
fdmi(len, smooth) Fast Directional Movement Index
Parameters:
len : Lookback period
smooth : Internal smoothing algorithm
Returns: Oscillator series
Same as DMI, but without secondary smoothing. Can be smoothed later. Instead, +DM and -DM smoothing can be configured
doOsc(type, src, len, smooth) Execute a particular Oscillator from the list
Parameters:
type : Oscillator type to use
src : Source series
len : Lookback period
smooth : Internal smoothing algorithm
Returns: Oscillator series
Chande Momentum Oscillator (CMO) is RSI without smoothing. No idea why some authors use different calculations
LRSI with Fractal Energy is a combo oscillator that uses Fractal Energy to tune LRSI gamma, as seen here: www.prorealcode.com
doPostfilter(type, src, len) Execute a particular Oscillator Postfilter from the list
Parameters:
type : Oscillator type to use
src : Source series
len : Lookback period
Returns: Oscillator series
CommonFilters
Library "CommonFilters"
Collection of some common Filters and Moving Averages. This collection is not encyclopaedic; its purpose is to declutter my other scripts. Suggestions are welcome, though. Many filters here are based on the work of John F. Ehlers
sma(src, len) Simple Moving Average
Parameters:
src : Series to use
len : Filtering length
Returns: Filtered series
ema(src, len) Exponential Moving Average
Parameters:
src : Series to use
len : Filtering length
Returns: Filtered series
rma(src, len) Wilder's Smoothing (Running Moving Average)
Parameters:
src : Series to use
len : Filtering length
Returns: Filtered series
hma(src, len) Hull Moving Average
Parameters:
src : Series to use
len : Filtering length
Returns: Filtered series
vwma(src, len) Volume Weighted Moving Average
Parameters:
src : Series to use
len : Filtering length
Returns: Filtered series
hp2(src) Simple denoiser
Parameters:
src : Series to use
Returns: Filtered series
fir2(src) Zero at 2 bar cycle period by John F. Ehlers
Parameters:
src : Series to use
Returns: Filtered series
fir3(src) Zero at 3 bar cycle period by John F. Ehlers
Parameters:
src : Series to use
Returns: Filtered series
fir23(src) Zero at 2 bar and 3 bar cycle periods by John F. Ehlers
Parameters:
src : Series to use
Returns: Filtered series
fir234(src) Zero at 2, 3 and 4 bar cycle periods by John F. Ehlers
Parameters:
src : Series to use
Returns: Filtered series
hp(src, len) High Pass Filter for cyclic components shorter than length. Part of Roofing Filter by John F. Ehlers
Parameters:
src : Series to use
len : Filtering length
Returns: Filtered series
supers2(src, len) 2-pole Super Smoother by John F. Ehlers
Parameters:
src : Series to use
len : Filtering length
Returns: Filtered series
filt11(src, len) Filt11 is a variant of 2-pole Super Smoother with error averaging for zero-lag response by John F. Ehlers
Parameters:
src : Series to use
len : Filtering length
Returns: Filtered series
supers3(src, len) 3-pole Super Smoother by John F. Ehlers
Parameters:
src : Series to use
len : Filtering length
Returns: Filtered series
hannFIR(src, len) Hann Window Filter by John F. Ehlers
Parameters:
src : Series to use
len : Filtering length
Returns: Filtered series
hammingFIR(src, len) Hamming Window Filter (inspired by John F. Ehlers). Simplified implementation, as the Pedestal input parameter cannot be supplied, so I calculate it from the supplied length
Parameters:
src : Series to use
len : Filtering length
Returns: Filtered series
triangleFIR(src, len) Triangle Window Filter by John F. Ehlers
Parameters:
src : Series to use
len : Filtering length
Returns: Filtered series
doPrefilter(type, src) Execute a particular Prefilter from the list
Parameters:
type : Prefilter type to use
src : Series to use
Returns: Filtered series
doMA(type, src, len) Execute a particular MA from the list
Parameters:
type : MA type to use
src : Series to use
len : Filtering length
Returns: Filtered series
math_utils
Library "math_utils"
Collection of math functions that are not part of the standard math library
num_of_non_decimal_digits(number) num_of_non_decimal_digits - The number of the most significant digits on the left of the dot
Parameters:
number : - The floating point number
Returns: number of non-decimal digits
num_of_decimal_digits(number) num_of_decimal_digits - The number of the most significant digits on the right of the dot
Parameters:
number : - The floating point number
Returns: number of decimal digits
floor(number, precision) floor - floor with precision to the given most significant decimal point
Parameters:
number : - The floating point number
precision : - The number of decimal places a.k.a the most significant decimal digit - e.g precision 2 will produce 0.01 minimum change
Returns: number floored to the decimal digits defined by the precision
ceil(number, precision) ceil - ceil with precision to the given most significant decimal point
Parameters:
number : - The floating point number
precision : - The number of decimal places a.k.a the most significant decimal digit - e.g precision 2 will produce 0.01 minimum change
Returns: number ceiled to the decimal digits defined by the precision
clamp(number, lower, higher, precision) clamp - clamp with precision to the given most significant decimal point
Parameters:
number : - The floating point number
lower : - The lowest number limit to return
higher : - The highest number limit to return
precision : - The number of decimal places a.k.a the most significant decimal digit - e.g precision 2 will produce 0.01 minimum change
Returns: number clamped to the decimal digits defined by the precision
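A sketch of how such precision-based variants are commonly implemented (scale by a power of ten, apply the operation, scale back); whether clamp() floors or rounds internally is my assumption:
floorPrecSketch(float number, int precision) =>
    float f = math.pow(10, precision)
    math.floor(number * f) / f
ceilPrecSketch(float number, int precision) =>
    float f = math.pow(10, precision)
    math.ceil(number * f) / f
clampPrecSketch(float number, float lower, float higher, int precision) =>
    floorPrecSketch(math.max(lower, math.min(higher, number)), precision)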
ConditionalAverages
█ OVERVIEW
This library is a Pine Script™ programmer’s tool containing functions that average values selectively.
█ CONCEPTS
Averaging can be useful to smooth out unstable readings in the data set, provide a benchmark to see the underlying trend of the data, or to provide a general expectancy of values in establishing a central tendency. Conventional averaging techniques tend to apply indiscriminately to all values in a fixed window, but it can sometimes be useful to average values only when a specific condition is met. As conditional averaging works on specific elements of a dataset, it can help us derive more context-specific conclusions. This library offers a collection of averaging methods that not only accomplish these tasks, but also exploit the efficiencies of the Pine Script™ runtime by foregoing unnecessary and resource-intensive for loops.
█ NOTES
To Loop or Not to Loop
Though for and while loops are essential programming tools, they are often unnecessary in Pine Script™. This is because the Pine Script™ runtime already runs your scripts in a loop where it executes your code on each bar of the dataset. Pine Script™ programmers who understand how their code executes on charts can use this to their advantage by designing loop-less code that will run orders of magnitude faster than functionally identical code using loops. Most of this library's functions illustrate how you can achieve loop-less code to process past values. See the User Manual page on loops for more information. If you are looking for ways to measure execution time for your scripts, have a look at our LibraryStopwatch library.
Our `avgForTimeWhen()` and `totalForTimeWhen()` are exceptions in the library, as they use a while structure. Only a few iterations of the loop are executed on each bar, however, as its only job is to remove the few elements in the array that are outside the moving window defined by a time boundary.
Cumulating and Summing Conditionally
The ta.cum() or math.sum() built-in functions can be used with ternaries that select only certain values. In our `avgWhen(src, cond)` function, for example, we use this technique to cumulate only the occurrences of `src` when `cond` is true:
float cumTotal = ta.cum(cond ? src : 0)
We then use:
float cumCount = ta.cum(cond ? 1 : 0)
to calculate the number of occurrences where `cond` is true, which corresponds to the quantity of values cumulated in `cumTotal`.
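Put together, the technique condenses to something like this sketch (the library's actual avgWhen() may differ in details):
avgWhenSketch(float src, bool cond) =>
    float cumTotal = ta.cum(cond ? src : 0)
    float cumCount = ta.cum(cond ? 1 : 0)
    cumTotal / cumCount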
Building Custom Series With Arrays
The advent of arrays in Pine has enabled us to build our custom data series. Many of this library's functions use arrays for this purpose, saving newer values that come in when a condition is met, and discarding the older ones, implementing a queue.
`avgForTimeWhen()` and `totalForTimeWhen()`
These two functions warrant a few explanations. They operate on a number of values included in a moving window defined by a timeframe expressed in milliseconds. We use a 1D timeframe in our example code. The number of bars included in the moving window is unknown to the programmer, who only specifies the period of time defining the moving window. You can thus use `avgForTimeWhen()` to calculate a rolling moving average for the last 24 hours, for example, that will work whether the chart is using a 1min or 1H timeframe. A 24-hour moving window will typically contain many more values on a 1min chart than on a 1H chart, but their calculated average will be very close.
Problems will arise on non-24x7 markets when large time gaps occur between chart bars, as will be the case across holidays or trading sessions. For example, if you were using a 24H timeframe and there is a two-day gap between two bars, then no chart bars would fit in the moving window after the gap. The `minBars` parameter mitigates this by guaranteeing that a minimum number of bars are always included in the calculation, even if including those bars requires reaching outside the prescribed timeframe. We use a minimum value of 10 bars in the example code.
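For illustration, the time-bounded queue described above can be sketched as follows (my own simplified code; it omits the `cond` filtering and the `minBars` guarantee):
avgForTimeSketch(float src, simple int ms) =>
    var array<float> values = array.new_float()
    var array<int> times = array.new_int()
    array.push(values, src)
    array.push(times, time)
    // drop elements older than the moving window; the element just pushed keeps the arrays non-empty
    while array.size(times) > 0 and time - array.get(times, 0) > ms
        array.shift(values)
        array.shift(times)
    array.avg(values)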
Using var in Constant Declarations
In the past, we have been using var when initializing so-called constants in our scripts, which, as per the Style Guide's recommendations, we identify using UPPER_SNAKE_CASE. It turns out that var variables incur slightly higher maintenance overhead in the Pine Script™ runtime, when compared to variables initialized on each bar. We thus no longer use var to declare our "int/float/bool" constants, but still use it when an initialization on each bar would require too much time, such as when initializing a string or with a heavy function call.
Look first. Then leap.
█ FUNCTIONS
avgWhen(src, cond)
Gathers values of the source when a condition is true and averages them over the total number of occurrences of the condition.
Parameters:
src : (series int/float) The source of the values to be averaged.
cond : (series bool) The condition determining when a value will be included in the set of values to be averaged.
Returns: (float) A cumulative average of values when a condition is met.
avgWhenLast(src, cond, cnt)
Gathers values of the source when a condition is true and averages them over a defined number of occurrences of the condition.
Parameters:
src : (series int/float) The source of the values to be averaged.
cond : (series bool) The condition determining when a value will be included in the set of values to be averaged.
cnt : (simple int) The quantity of last occurrences of the condition for which to average values.
Returns: (float) The average of `src` for the last `cnt` occurrences where `cond` is true.
avgWhenInLast(src, cond, cnt)
Gathers values of the source when a condition is true and averages them over the total number of occurrences during a defined number of bars back.
Parameters:
src : (series int/float) The source of the values to be averaged.
cond : (series bool) The condition determining when a value will be included in the set of values to be averaged.
cnt : (simple int) The quantity of bars back to evaluate.
Returns: (float) The average of `src` in last `cnt` bars, but only when `cond` is true.
avgSince(src, cond)
Averages values of the source since a condition was true.
Parameters:
src : (series int/float) The source of the values to be averaged.
cond : (series bool) The condition determining when the average is reset.
Returns: (float) The average of `src` since `cond` was true.
avgForTimeWhen(src, ms, cond, minBars)
Averages values of `src` when `cond` is true, over a moving window of length `ms` milliseconds.
Parameters:
src : (series int/float) The source of the values to be averaged.
ms : (simple int) The time duration in milliseconds defining the size of the moving window.
cond : (series bool) The condition determining which values are included. Optional.
minBars : (simple int) The minimum number of values to keep in the moving window. Optional.
Returns: (float) The average of `src` when `cond` is true in the moving window.
totalForTimeWhen(src, ms, cond, minBars)
Sums values of `src` when `cond` is true, over a moving window of length `ms` milliseconds.
Parameters:
src : (series int/float) The source of the values to be summed.
ms : (simple int) The time duration in milliseconds defining the size of the moving window.
cond : (series bool) The condition determining which values are included. Optional.
minBars : (simple int) The minimum number of values to keep in the moving window. Optional.
Returns: (float) The sum of `src` when `cond` is true in the moving window.