VPQuantLib
Library "VPQuantLib"
Misc math, position size, and consolidation detection functions that can be used across various scripts.
isPercentAboveReference(current, percent, reference, or_equal)
Checks if the current value is bigger than (or equal to) the reference value by the provided percent
Parameters:
current (float) : - what to check against the reference
percent (float) : - what is the percent to check for difference
reference (float) : - what to compare against
or_equal (bool) : - enables checking for bigger or equal
Returns: true if the current is bigger than (or equal to) the reference by the given percent
isPercentBelowReference(current, percent, reference, or_equal)
Checks if the current value is smaller than (or equal to) the reference value by the provided percent
Parameters:
current (float) : - what to check against the reference
percent (float) : - what is the percent to check for difference
reference (float) : - what to compare against
or_equal (bool) : - enables checking for smaller or equal
Returns: true if the current is smaller than (or equal to) the reference by the given percent
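A minimal sketch of the two checks above, using illustrative re-implementations of the documented semantics (the library's actual code may differ; import the published library for real use):
//@version=5
indicator("percent above/below reference - sketch", overlay = true)
f_isPercentAboveReference(current, percent, reference, or_equal) =>
    threshold = reference * (1 + percent / 100)
    or_equal ? current >= threshold : current > threshold
f_isPercentBelowReference(current, percent, reference, or_equal) =>
    threshold = reference * (1 - percent / 100)
    or_equal ? current <= threshold : current < threshold
ref = ta.sma(close, 50)
plotshape(f_isPercentAboveReference(close, 2, ref, true), style = shape.triangleup, location = location.belowbar)
plotshape(f_isPercentBelowReference(close, 2, ref, true), style = shape.triangledown, location = location.abovebar)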
isInRange(current, reference, min_percent, max_percent, below)
Checks if the current value is greater/smaller than the reference value within the provided percent range
Parameters:
current (float) : - what to check for being in range against the reference
reference (float) : - what to compare against
min_percent (float) : - the min percent range border
max_percent (float) : - the max percent range border
below (bool) : - check if below or above the reference
@return true if the current is bigger/smaller than the reference within the percent range provided
GetRiskBasedPositionSize(account_balance, equity_risk_perc, max_loss_per_share)
Calculates and returns the position size based on the risk to the equity
Parameters:
account_balance (float) : - total account balance
equity_risk_perc (int) : - percent of equity to risk in the trade
max_loss_per_share (float) : - maximum loss per share (in currency, not in %) that we're willing to lose (calculated from entry_price - stop_loss_price)
@return number of shares to buy
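The sizing rule implied by these parameters can be sketched as follows (the input values and the flooring to whole shares are illustrative assumptions, not the library's exact code):
//@version=5
indicator("risk-based position size - sketch")
// shares = (account balance * equity risk %) / max loss per share
account_balance    = 10000.0
equity_risk_perc   = 1                      // risk 1% of the equity per trade
entry_price        = close
stop_loss_price    = close * 0.98           // hypothetical stop 2% below entry
max_loss_per_share = entry_price - stop_loss_price
shares = math.floor(account_balance * equity_risk_perc / 100 / max_loss_per_share)
plot(shares, "Shares to buy")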
CheckInRangeConsolidation(consolidation_period, allowed_consolidation_range, ref_high, ref_low, prev_bar_consolidaton, draw_consolidation_lines)
Checks if the current bar is in a consolidation range
Parameters:
consolidation_period (int) : - the number of bars to consider for consolidation range calculation
allowed_consolidation_range (int) : - the percentage range allowed for the current consolidation range to be considered valid
ref_high (float) : - the reference high value to use for consolidation range calculation
ref_low (float) : - the reference low value to use for consolidation range calculation
prev_bar_consolidaton (bool)
draw_consolidation_lines (bool) : - a boolean indicating if consolidation range lines should be drawn on the chart
@return a tuple of three values:
1. _curr_consolidation - a boolean indicating if the current bar is in consolidation range
2. _curr_consolidation_low - the current consolidation low value
3. _curr_consolidation_high - the current consolidation high value
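A minimal illustration of the consolidation test described above (the highest/lowest lookback and the percentage check are assumptions about the general idea, not the library's implementation):
//@version=5
indicator("in-range consolidation - concept sketch", overlay = true)
consolidation_period        = input.int(10, "Consolidation period")
allowed_consolidation_range = input.int(5, "Allowed range (%)")
cons_high = ta.highest(high, consolidation_period)
cons_low  = ta.lowest(low, consolidation_period)
// The last N bars count as consolidation when their high/low range stays within the allowed percentage.
in_consolidation = (cons_high - cons_low) / cons_low * 100 <= allowed_consolidation_range
plot(in_consolidation ? cons_high : na, "Consolidation high", color.green, style = plot.style_linebr)
plot(in_consolidation ? cons_low : na, "Consolidation low", color.red, style = plot.style_linebr)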
FindBasicConsolidation(loopback_period, consolidation_length, ref_high, ref_low, draw_consolidation_lines)
Finds basic consolidation areas by looking back 1000 bars to find the pivot of the trend, then checks if the current bar is in a consolidation area by counting the
number of bars that have not broken the consolidation high/low levels
Parameters:
loopback_period (int) : - the number of bars to look back to determine the high/low watermark
consolidation_length (int) : - minimum number of bars required to establish a consolidation period
ref_high (float) : - user input for high (can be based on the bar or wicks)
ref_low (float) : - user input for low (can be based on the bar or wicks)
draw_consolidation_lines (bool) : - enable/disable drawing of the consolidation lines
Returns: _pivot_point - pivot point
MATH
commonThe "Pineify/common" library presents a specialized toolkit crafted to empower traders and script developers with state-of-the-art time manipulation functions on the TradingView platform. It is instead a foundational utility aimed at enriching your script's ability to process and interpret time-based data with unparalleled precision.
Key Features
String Splitter:
The 'str_split_into_two' function is a universal string handler that separates any given input into two distinct strings based on a specified delimiter. This function is especially useful in parsing time strings or any scenario where a string needs to be divided into logical parts efficiently.
Example:
[a, b] = str_split_into_two("a:b", ":")
// a = "a"
// b = "b"
Time Parser:
With 'time_to_hour_minute', users can effortlessly convert a time string into numerical hours and minutes. This function is pivotal for those who need to extract specific time-series data or wish to schedule their trades down to the minute.
Example:
[time_hour, time_minute] = time_to_hour_minute("02:30")
// time_hour = 2
// time_minute = 30
Unix Time Converter
The 'time_range_to_unix_time' function transcends traditional boundaries by converting a given time range into Unix timestamp format. This integration of date, time, and timezone allows scripts to make timed decisions, perform historical analyses, and account for international markets across different time zones.
Example:
// Support 'hhmm-hhmm' and 'hh:mm-hh:mm'
= time_range_to_unix_time("09:30-12:00")
Summary:
Each function is meticulously designed to minimize complexity and maximize versatility. Whether you are a programmer seeking to streamline your code, or a trader requiring precise timing for your strategies, our library provides the logical framework that aligns with your needs.
The "Pineify/common" library is the bridge between high-level time concepts and actionable trading insights. It serves a multitude of purposes – from crafting elegant time-based triggers to dissecting complex string data. Embrace the power of precision with "Pineify/common" and elevate your TradingView scripting experience to new heights.
Mad_Fibonaccibox
Library "Mad_Fibonaccibox"
This library is designed to create and manage multiple Fibonacci boxes, which are graphical representations based on the inputs.
-----------------
exports:
f_fib_calc(_Fibonacci_box, _itemnumber)
fibonacci calc.
@description This function block uses the levels and parameters set into the type_fibonacci_box(levels) and fills the corresponding array of prices.
Parameters:
_Fibonacci_box (type_Fibonacci_box )
_itemnumber (int)
Returns: returns a type_Fibonacci_box with the filled data
f_fib_draw(_Fibonacci_box, _itemnumber)
fibonacci draw.
@description This function block uses the levels, prices and parameters set into the type_fibonacci_box(levels) and draws the fib on the chart
Parameters:
_Fibonacci_box (type_Fibonacci_box )
_itemnumber (int)
Returns: returns lines labels and fills on the chart, no data returns
type_level
Type for defining the lines and texts of a Fibonacci box
Fields:
level (series float)
price (series float)
drawline (series bool)
linewidth (series int)
linetype (series string)
fiblinecolor (series color)
drawlabel (series string)
labeltext (series string)
textshift (series int)
fibtextcolor (series color)
fibtextsize (series string)
transp (series int)
type_fill
Type for defining the fills of a Fibonacci box
Fields:
partner_A (series int)
partner_B (series int)
fill_color (series color)
transp (series int)
type_Fibonacci_box
Type for defining a Fibonacci box
Fields:
bottom_price (series float)
top_price (series float)
StartBar (series int)
StopBar (series int)
levels (type_level )
fills (type_fill )
ChartisLog (series bool)
fibreverse (series bool)
fibdrawreverse (series bool)
decimals_price (series int)
decimals_percent (series int)
drawlines (series bool)
drawlabels (series bool)
drawfills (series bool)
draw_biginfo (series bool)
biginfo_textshift (series int)
rangeinfo_location (series int)
rangeinfo_color (series color)
rangeinfo_textsize (series string)
line_array (line )
linefill_array (linefill )
label_array (label )
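For orientation, the price of a single level can be derived from bottom_price, top_price and a level ratio in either chart scale (the ChartisLog field above hints at the log variant). This is an illustration of the underlying math, not the library's f_fib_calc code:
//@version=5
indicator("fibonacci level price - sketch", overlay = true)
bottom_price = ta.lowest(low, 100)
top_price    = ta.highest(high, 100)
level        = 0.618
// Linear scale: interpolate directly between bottom and top.
linear_price = bottom_price + (top_price - bottom_price) * level
// Log scale: interpolate in log space, then convert back to price.
log_price = math.exp(math.log(bottom_price) + (math.log(top_price) - math.log(bottom_price)) * level)
plot(linear_price, "0.618 (linear)", color.blue)
plot(log_price, "0.618 (log)", color.purple)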
lib_math
Library "lib_math"
a collection of functions that calculate without the history operator to avoid max_bars_back errors
mean(value, reset)
Parameters:
value (float) : series to track
reset (bool) : flag to reset tracking
@return returns average/mean of value since last reset
vwap(value, reset)
Parameters:
value (float) : series to track
reset (bool) : flag to reset tracking
@return returns vwap of value and volume since last reset
variance(value, reset)
Parameters:
value (float) : series to track
reset (bool) : flag to reset tracking
@return returns variance of value since last reset
trend(value, reset)
Parameters:
value (float) : series to track
reset (bool) : flag to reset tracking
@return where slope is the trend direction, correlation is a measurement of how well the values fit the trendline (positive means ), stddev is how far the values deviate from the trend, x1 would be the time where reset is true and x2 would be the current time
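The "since last reset" pattern these functions share can be sketched with var accumulators, which is what avoids the history operator (illustrative, not the library's exact code):
//@version=5
indicator("running mean without history operator - sketch")
f_mean(value, reset) =>
    var float sum = 0.0
    var int   n   = 0
    if reset
        sum := 0.0
        n   := 0
    sum += value
    n   += 1
    sum / n
newDay = timeframe.change("D")
plot(f_mean(close, newDay), "Mean since daily reset")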
Price - TP/SL
Prices
With this library, you can easily manage prices such as stop loss, take profit, calculate differences, prices from a lower timeframe, and get the order size and commission from the strategy properties tab.
Note that the order size and commission only work with strategies!
Usage
Take Profit & Stop Loss
var bool open_trade = false
open_trade := strategy.position_size != 0
bars_since_opened = strategy.opentrades > 0 ? bar_index - strategy.opentrades.entry_bar_index(strategy.opentrades - 1) + 1 : 0
// ############################################################
// # TAKE PROFIT
// ############################################################
take_profit = input.string(title='Take Profit', defval='OFF', options= , group='TAKE PROFIT')
take_profit_percentage = input.float(title='Take Profit (% or X)', defval=0, minval=0, step=0.1, group='TAKE PROFIT')
take_profit_bars = input.int(title='Take Profit Bars', defval=0, minval=0, step=1, group='TAKE PROFIT')
take_profit_indication = input.string(title='Take Profit Plot', defval='OFF', options= , group='TAKE PROFIT')
take_profit_color = input.color(title='Take Profit Color', defval=#26A69A, group='TAKE PROFIT')
take_profit_price = math.round_to_mintick(strategy.position_avg_price)
take_profit_plot = plot(take_profit == 'ON' and take_profit_indication == 'ON' and open_trade and bars_since_opened >= take_profit_bars and take_profit_percentage > 0 and nz(take_profit_price) ? take_profit_price : na, color=take_profit_color, style=plot.style_linebr, linewidth=1, title='TP', offset=0)
// ############################################################
// # STOP LOSS
// ############################################################
stop_loss = input.string(title='Stop Loss', defval='OFF', options= , group='STOP LOSS')
stop_loss_percentage = input.float(title='Stop Loss (% or X)', defval=0, minval=0, step=0.1, group='STOP LOSS')
stop_loss_bars = input.int(title='Stop Loss Bars', defval=0, minval=0, step=1, group='STOP LOSS')
stop_loss_indication = input.string(title='Stop Loss Plot', defval='OFF', options= , group='STOP LOSS')
stop_loss_color = input.color(title='Stop Loss Color', defval=#FF5252, group='STOP LOSS')
stop_loss_price = math.round_to_mintick(strategy.position_avg_price)
stop_loss_plot = plot(stop_loss == 'ON' and stop_loss_indication == 'ON' and open_trade and bars_since_opened >= stop_loss_bars and stop_loss_percentage > 0 and nz(stop_loss_price) ? stop_loss_price : na, color=stop_loss_color, style=plot.style_linebr, linewidth=1, title='SL', offset=0)
// ############################################################
// # STRATEGY
// ############################################################
var limit_price = 0.0
var stop_price = 0.0
limit_price := take_profit == 'ON' ? price.take_profit_price(take_profit_price, take_profit_percentage, take_profit_bars, bars_since_opened) : na
stop_price := stop_loss == 'ON' ? price.stop_loss_price(stop_loss_price, stop_loss_percentage, stop_loss_bars, bars_since_opened) : na
strategy.exit(id='TP/SL', comment='TP/SL', from_entry='LONG', limit=limit_price, stop=stop_price)
Calculate difference between 2 prices:
price.difference(close, close )
Get last price from lower timeframe:
price.ltf(request.security_lower_tf(ticker, '1', close))
Get the order size from the properties tab:
price.order_size()
Get the commission from the properties tab:
price.commission()
map_custom_value_usefull
Library "map_custom_value_usefull"
makes it possible to create:
1. a map with array values:
for this purpose you need to:
1. create a map with an arrays-type value
2. put your array into this map; the overloaded put method will place the array into the required field based on its type
3. then get the array back with the standard get function, which determines which field to read (because of this, only arrays of the same type can be used in one map)
2. a map with map values:
for this purpose you need to:
1. create a map with a maps-type value
2. put your other map as the value in your base map; it must go into the field corresponding to your map's type
3. then get the map back with the standard get function; a special field name must be specified here, because the get function cannot be overloaded without additional variables
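Pine maps cannot hold collections directly, which is the limitation this library works around. A minimal sketch of the usual wrapper-type approach (the type and field names here are hypothetical, not this library's API):
//@version=5
indicator("map with array values - concept sketch")
type ArrayHolder
    array<float> values
var map<string, ArrayHolder> m = map.new<string, ArrayHolder>()
if barstate.isfirst
    m.put("closes", ArrayHolder.new(array.new<float>()))
// Append to the wrapped array and read it back through the map.
m.get("closes").values.push(close)
plot(m.get("closes").values.size(), "Stored values")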
map_custom_value_full
Library "map_custom_value_full"
makes it possible to create:
1. a map with array values:
for this purpose you need to:
1. create a map with an arrays-type value
2. put your array into this map; the overloaded put method will place the array into the required field based on its type
3. then get the array back with the standard get function by specifying the type field of your array
2. a map with map values:
for this purpose you need to:
1. create a map with a maps-type value
2. put your other map as the value in your base map; it must go into the field corresponding to your map's type
3. then get the map back with the standard get function by specifying the type field of your array
3. a map whose value is an array of maps:
for this purpose you need to:
1. create a map with an arrays-type value
2. put, as the value, a maps_arrays field with an array of the maps_arrays type that already contains a map of the type you need (not every map type has a field here; you can add a map of the required type by adding a corresponding field of the map_arrays type)
3. then get this array from the map with the standard get function by specifying the type field of your array
Polyline Plus
This library introduces the `PolylinePlus` type, which is an enhanced version of the built-in PineScript `polyline`. It enables two features that are absent from the built-in type:
1. Developers can now efficiently add or remove points from the polyline. In contrast, the built-in `polyline` type is immutable, requiring developers to create a new instance of the polyline to make changes, which is cumbersome and incurs a significant performance penalty.
2. Each `PolylinePlus` instance can theoretically hold up to ~1M points, surpassing the built-in `polyline` type's limit of 10K points, as long as it does not exceed the memory limit of the PineScript runtime.
Internally, each `PolylinePlus` instance utilizes an array of `line`s and an array of `polyline`s. The `line`s array serves as a buffer to store lines formed by recently added points. When the buffer reaches its capacity, it flushes the contents and converts the lines into polylines. These polylines are expected to undergo fewer updates. This approach is similar to the concept of "Buffered I/O" in file and network systems. By connecting the underlying lines and polylines, this library achieves an enhanced polyline that is dynamic, efficient, and capable of surpassing the maximum number of points imposed by the built-in polyline.
🔵 API
Step 1: Import this library
import algotraderdev/polylineplus/1 as pp
// remember to check the latest version of this library and replace the 1 above.
Step 2: Initialize the `PolylinePlus` type.
var p = pp.PolylinePlus.new()
There are a few optional params that developers can specify in the constructor to modify the behavior and appearance of the polyline instance.
var p = pp.PolylinePlus.new(
// If true, the drawing will also connect the first point to the last point, resulting in a closed polyline.
closed = false,
// Determines the field of the chart.point objects that the polyline will use for its x coordinates. Either xloc.bar_index (default), or xloc.bar_time.
xloc = xloc.bar_index,
// Color of the polyline. Default is blue.
line_color = color.blue,
// Style of the polyline. Default is line.style_solid.
line_style = line.style_solid,
// Width of the polyline. Default is 1.
line_width = 1,
// The maximum number of points that each built-in `polyline` instance can contain.
// NOTE: this is not to be confused with the maximum of points that each `PolylinePlus` instance can contain.
max_points_per_builtin_polyline = 10000,
// The number of lines to keep in the buffer. If more points are to be added while the buffer is full, then all the lines in the buffer will be flushed into the polylines.
// The higher the number, the less frequently we'll need to flush the buffer, which leads to better performance.
// NOTE: the maximum total number of lines per chart allowed by PineScript is 500. But given there might be other places where the indicator or strategy are drawing lines outside this polyline context, the default value is 50 to be safe.
lines_buffer_size = 50)
Step 3: Push / Pop Points
// Push a single point
p.push_point(chart.point.now())
// Push multiple points
array<chart.point> points = array.from(p1, p2, p3) // Where p1, p2, p3 are all chart.point type.
p.push_points(points)
// Pop point
p.pop_point()
// Resets all the points in the polyline.
p.set_points(points)
// Deletes the polyline.
p.delete()
🔵 Benchmark
Below is a simple benchmark comparing the performance between `PolylinePlus` and the native `polyline` type for incrementally adding 10K points to a polyline.
import algotraderdev/polylineplus/2 as pp

var t1 = 0
var t2 = 0
if bar_index < 10000
    int start = timenow
    var p = pp.PolylinePlus.new(xloc = xloc.bar_time, closed = true)
    p.push_point(chart.point.now())
    t1 += timenow - start

    start := timenow
    var polyline pl = na
    var points = array.new<chart.point>()
    points.push(chart.point.now())
    if not na(pl)
        pl.delete()
    pl := polyline.new(points)
    t2 += timenow - start

if barstate.islast
    log.info('{0} {1}', t1, t2)
For this benchmark, `PolylinePlus` took ~300ms, whereas the native `polyline` type took ~6000ms.
We can also fine-tune the parameters for `PolylinePlus` to have a larger buffer size for `line`s and a smaller buffer for `polyline`s.
var p = pp.PolylinePlus.new(xloc = xloc.bar_time, closed = true, lines_buffer_size = 500, max_points_per_builtin_polyline = 1000)
With the above optimization, it only took `PolylinePlus` ~80ms to process the same 10K points, which is ~75x the performance compared to the native `polyline`.
SPTS_StatsPakLib
Finally getting around to releasing the library component to the SPTS indicator!
This library is packed with a ton of great statistics functions to supplement SPTS, these functions add to the capabilities of SPTS including a forecast function.
The library includes the following functions
1. Linear Regression (single independent and single dependent)
2. Multiple Regression (2 independent variables, 1 dependent)
3. Standard Error of Residual Assessment
4. Z-Score
5. Effect Size
6. Confidence Interval
7. Paired Sample Test
8. Two Tailed T-Test
9. Qualitative assessment of T-Test
10. T-test table and p value assigner
11. Correlation of two arrays
12. Quadratic correlation (curvilinear)
13. R Squared value of 2 arrays
14. R Squared value of 2 floats
15. Test of normality
16. Forecast function which will push the desired forecasted variables into an array.
One of the biggest added functionalities of this library is the forecasting function.
This function provides an autoregressive, trainable model that will export forecasted values to 3 arrays, one contains the autoregressed forecasted results, the other two contain the upper confidence forecast and the lower confidence forecast.
Hope you enjoy and find use for this!
Library "SPTS_StatsPakLib"
f_linear_regression(independent, dependent, len, variable)
TODO: creates a simple linear regression model between two variables.
Parameters:
independent (float)
dependent (float)
len (int)
variable (float)
Returns: TODO: returns 6 float variables
result: The result of the regression model
pear_cor: The pearson correlation of the regression model
rsqrd: the R2 of the regression model
std_err: the error of residuals
slope: the slope of the model (coefficient)
intercept: the intercept of the model (y = mx + b, i.e. y = slope * x + intercept)
f_multiple_regression(y, x1, x2, input1, input2, len)
TODO: creates a multiple regression model between two independent variables and 1 dependent variable.
Parameters:
y (float)
x1 (float)
x2 (float)
input1 (float)
input2 (float)
len (int)
Returns: TODO: returns 7 float variables
result: The result of the regression model
pear_cor: The pearson correlation of the regression model
rsqrd: the R2 of the regression model
std_err: the error of residuals
b1 & b2: the slopes of the model (coefficients)
intercept: the intercept of the model (y = mx + b generalized to y = b1 * x1 + b2 * x2 + intercept)
f_stanard_error(result, dependent, length)
x TODO: performs an assessment on the error of residuals, can be used with any variable in which there are residual values (such as moving averages or more complex models)
param x TODO: result is the output, for example, if you are calculating the residuals of a 200 EMA, the result would be the 200 EMA
dependent: is the dependent variable. In the above example with the 200 EMA, your dependent would be the source for your 200 EMA
Parameters:
result (float)
dependent (float)
length (int)
Returns: x TODO: the standard error of the residual, which can then be multiplied by standard deviations or used as is.
f_zscore(variable, length)
TODO: Calculates the z-score
Parameters:
variable (float)
length (int)
Returns: TODO: returns float z-score
f_effect_size(array1, array2)
TODO: Calculates the effect size between two arrays of equal scale.
Parameters:
array1 (float )
array2 (float )
Returns: TODO: returns the effect size (float)
f_confidence_interval(array1, array2, ci_input)
TODO: Calculates the confidence interval between two arrays
Parameters:
array1 (float )
array2 (float )
ci_input (float)
Returns: TODO: returns the upper_bound and lower_bound cofidence interval as float values
paired_sample_t(src1, src2, len)
TODO: Performs a paired sample t-test
Parameters:
src1 (float)
src2 (float)
len (int)
Returns: TODO: Returns the t-statistic and degrees of freedom of a paired sample t-test
two_tail_t_test(array1, array2)
TODO: Performs a two tailed t-test
Parameters:
array1 (float )
array2 (float )
Returns: TODO: Returns the t-statistic and degrees of freedom of a two-tailed t-test
t_table_analysis(t_stat, df)
TODO: This is to make a qualitative assessment of your paired and single sample t-test
Parameters:
t_stat (float)
df (float)
Returns: TODO: the function will return 2 string variables and 1 colour variable. The 2 string variables indicate whether the results are significant or not and the colour will
output red for insignificant and green for significant
t_table_p_value(df, t_stat)
TODO: This performs a quantitative assessment on your t-tests to determine the statistical significance p value
Parameters:
df (float)
t_stat (float)
Returns: TODO: The function will return 1 float variable, the p value of the t-test.
cor_of_array(array1, array2)
TODO: This performs a pearson correlation assessment of two arrays. They need to be of equal size!
Parameters:
array1 (float )
array2 (float )
Returns: TODO: The function will return the pearson correlation.
quadratic_correlation(src1, src2, len)
TODO: This performs a quadratic (curvilinear) pearson correlation between two values.
Parameters:
src1 (float)
src2 (float)
len (int)
Returns: TODO: The function will return the pearson correlation (quadratic based).
f_r2_array(array1, array2)
TODO: Calculates the r2 of two arrays
Parameters:
array1 (float )
array2 (float )
Returns: TODO: returns the R2 value
f_rsqrd(src1, src2, len)
TODO: Calculates the r2 of two float variables
Parameters:
src1 (float)
src2 (float)
len (int)
Returns: TODO: returns the R2 value
test_of_normality(array, src)
TODO: tests the normal distribution hypothesis
Parameters:
array (float )
src (float)
Returns: TODO: returns 4 variables, 2 float and 2 string
Skew: the skewness of the dataset
Kurt: the kurtosis of the dataset
dist = the distribution type (recognizes 7 different distribution types)
implication = a string assessment of the implication of the distribution (qualitative)
f_forecast(output, input, train_len, forecast_length, output_array, upper_array, lower_array)
TODO: This performs a simple forecast function on a single dependent variable. It will autoregress this based on the train time, to the desired length of output, then it will push the forecasted values to 3 float arrays: one that contains the forecasted result, one that contains the upper confidence result, and one with the lower confidence result.
Parameters:
output (float)
input (float)
train_len (int)
forecast_length (int)
output_array (float )
upper_array (float )
lower_array (float )
Returns: TODO: Will return 3 arrays, one with the forecasted results, one with the upper confidence results, and a final with the lower confidence results. Example is given below.
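The author's example is not reproduced here; as a stand-in, the sketch below shows the general idea of projecting a value forward and pushing point forecasts plus upper/lower confidence bands into three arrays. It is not the library's f_forecast implementation:
//@version=5
indicator("autoregressive forecast - concept sketch")
train_len       = 100
forecast_length = 10
drift = ta.change(close, train_len) / train_len      // average one-bar drift over the training window
se    = ta.stdev(ta.change(close), train_len)        // one-bar standard error
if barstate.islast
    fc    = array.new<float>()
    upper = array.new<float>()
    lower = array.new<float>()
    float level = close
    for i = 1 to forecast_length
        level += drift
        fc.push(level)
        upper.push(level + 1.96 * se * math.sqrt(i))
        lower.push(level - 1.96 * se * math.sqrt(i))
    label.new(bar_index, high, "10-bar forecast: " + str.tostring(fc.last(), format.mintick))
plot(close)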
mathLibrary "math"
TODO: Math custom MA and more
pine_ema(src, length)
Parameters:
src (float)
length (int)
pine_dema(src, length)
Parameters:
src (float)
length (int)
pine_tema(src, length)
Parameters:
src (float)
length (int)
pine_sma(src, length)
Parameters:
src (float)
length (int)
pine_smma(src, length)
Parameters:
src (float)
length (int)
pine_ssma(src, length)
Parameters:
src (float)
length (int)
pine_rma(src, length)
Parameters:
src (float)
length (int)
pine_wma(x, y)
Parameters:
x (float)
y (int)
pine_hma(src, length)
Parameters:
src (float)
length (int)
pine_vwma(x, y)
Parameters:
x (float)
y (int)
pine_swma(x)
Parameters:
x (float)
pine_alma(src, length, offset, sigma)
Parameters:
src (float)
length (int)
offset (float)
sigma (float)
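Custom-MA libraries like this one typically re-implement the built-ins with explicit recursion so the length can be a series value. A representative sketch (not necessarily this library's exact code):
//@version=5
indicator("recursive EMA - sketch", overlay = true)
f_ema(src, length) =>
    alpha = 2.0 / (length + 1)
    var float ema = na
    ema := na(ema) ? src : alpha * src + (1 - alpha) * ema
    ema
plot(f_ema(close, 20), "Manual EMA")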
Ephemeris
Library "Ephemeris"
TODO: add library description here
mercuryElements()
mercuryRates()
venusElements()
venusRates()
earthElements()
earthRates()
marsElements()
marsRates()
jupiterElements()
jupiterRates()
saturnElements()
saturnRates()
uranusElements()
uranusRates()
neptuneElements()
neptuneRates()
rev360(x)
Normalize degrees to within [0, 360)
Parameters:
x (float) : degrees to be normalized
Returns: Normalized degrees
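A presumed equivalent of this normalization, shown for clarity:
//@version=5
indicator("rev360 - sketch")
f_rev360(x) =>
    x - 360.0 * math.floor(x / 360.0)
plot(f_rev360(725.0), "725 -> 5")
plot(f_rev360(-30.0), "-30 -> 330")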
scaleAngle(longitude, magnitude, harmonic)
Scale angle in degrees
Parameters:
longitude (float)
magnitude (float)
harmonic (int)
Returns: Scaled angle in degrees
julianCenturyInJulianDays()
Constant Julian days per century
Returns: 36525
julianEpochJ2000()
Julian date on J2000 epoch start (2000-01-01)
Returns: 2451545.0
meanObliquityForJ2000()
Mean obliquity of the ecliptic on J2000 epoch start (2000-01-01)
Returns: 23.43928
getJulianDate(Year, Month, Day, Hour, Minute)
Convert calendar date to Julian date
Parameters:
Year (int) : calendar year as integer (e.g. 2018)
Month (int) : calendar month (January = 1, December = 12)
Day (int) : calendar day of month (e.g. January valid days are 1-31)
Hour (int) : valid values 0-23
Minute (int) : valid values 0-60
julianCenturies(date, epoch_start)
Centuries since Julian Epoch 2000-01-01
Parameters:
date (float) : Julian date to convert to Julian centuries
epoch_start (float) : Julian date of epoch start (e.g. J2000 epoch = 2451545)
Returns: Julian date converted to Julian centuries
julianCenturiesSinceEpochJ2000(julianDate)
Calculate Julian centuries since epoch J2000 (2000-01-01)
Parameters:
julianDate (float) : Julian Date in days
Returns: Julian centuries since epoch J2000 (2000-01-01)
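This conversion follows directly from the constants documented above (julianEpochJ2000 = 2451545.0, julianCenturyInJulianDays = 36525):
//@version=5
indicator("julian centuries since J2000 - sketch")
f_julianCenturiesSinceEpochJ2000(julianDate) =>
    (julianDate - 2451545.0) / 36525.0
plot(f_julianCenturiesSinceEpochJ2000(2460000.5), "Centuries since J2000")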
atan2(y, x)
Specialized arctan function
Parameters:
y (float) : radians
x (float) : radians
Returns: special arctan of y/x
eccAnom(ec, m_param, dp)
Compute eccentricity of the anomaly
Parameters:
ec (float) : Eccentricity of Orbit
m_param (float) : Mean Anomaly ?
dp (int) : Decimal places to round to
Returns: Eccentricity of the Anomaly
planetEphemerisCalc(TGen, planetElementId, planetRatesId)
Compute planetary ephemeris (longitude relative to Earth or Sun) on a Julian date
Parameters:
TGen (float) : Julian Date
planetElementId (float ) : All planet orbital elements in an array. This index references a specific planet's elements.
planetRatesId (float ) : All planet orbital rates in an array. This index references a specific planet's rates.
Returns: X,Y,Z ecliptic rectangular coordinates and R radius from reference body.
calculateRightAscensionAndDeclination(earthX, earthY, earthZ, planetX, planetY, planetZ)
Calculate right ascension and declination for a planet relative to Earth
Parameters:
earthX (float) : Earth X ecliptic rectangular coordinate relative to Sun
earthY (float) : Earth Y ecliptic rectangular coordinate relative to Sun
earthZ (float) : Earth Z ecliptic rectangular coordinate relative to Sun
planetX (float) : Planet X ecliptic rectangular coordinate relative to Sun
planetY (float) : Planet Y ecliptic rectangular coordinate relative to Sun
planetZ (float) : Planet Z ecliptic rectangular coordinate relative to Sun
Returns: Planet geocentric orbital radius, geocentric right ascension, and geocentric declination
mercuryHelio(T)
Compute Mercury heliocentric longitude on date
Parameters:
T (float)
Returns: Mercury heliocentric longitude on date
venusHelio(T)
Compute Venus heliocentric longitude on date
Parameters:
T (float)
Returns: Venus heliocentric longitude on date
earthHelio(T)
Compute Earth heliocentric longitude on date
Parameters:
T (float)
Returns: Earth heliocentric longitude on date
marsHelio(T)
Compute Mars heliocentric longitude on date
Parameters:
T (float)
Returns: Mars heliocentric longitude on date
jupiterHelio(T)
Compute Jupiter heliocentric longitude on date
Parameters:
T (float)
Returns: Jupiter heliocentric longitude on date
saturnHelio(T)
Compute Saturn heliocentric longitude on date
Parameters:
T (float)
Returns: Saturn heliocentric longitude on date
neptuneHelio(T)
Compute Neptune heliocentric longitude on date
Parameters:
T (float)
Returns: Neptune heliocentric longitude on date
uranusHelio(T)
Compute Uranus heliocentric longitude on date
Parameters:
T (float)
Returns: Uranus heliocentric longitude on date
sunGeo(T)
Parameters:
T (float)
mercuryGeo(T)
Parameters:
T (float)
venusGeo(T)
Parameters:
T (float)
marsGeo(T)
Parameters:
T (float)
jupiterGeo(T)
Parameters:
T (float)
saturnGeo(T)
Parameters:
T (float)
neptuneGeo(T)
Parameters:
T (float)
uranusGeo(T)
Parameters:
T (float)
moonGeo(T_JD)
Parameters:
T_JD (float)
mercuryOrbitalPeriod()
Mercury orbital period in Earth days
Returns: 87.9691
venusOrbitalPeriod()
Venus orbital period in Earth days
Returns: 224.701
earthOrbitalPeriod()
Earth orbital period in Earth days
Returns: 365.256363004
marsOrbitalPeriod()
Mars orbital period in Earth days
Returns: 686.980
jupiterOrbitalPeriod()
Jupiter orbital period in Earth days
Returns: 4332.59
saturnOrbitalPeriod()
Saturn orbital period in Earth days
Returns: 10759.22
uranusOrbitalPeriod()
Uranus orbital period in Earth days
Returns: 30688.5
neptuneOrbitalPeriod()
Neptune orbital period in Earth days
Returns: 60195.0
jupiterSaturnCompositePeriod()
jupiterNeptuneCompositePeriod()
jupiterUranusCompositePeriod()
saturnNeptuneCompositePeriod()
saturnUranusCompositePeriod()
planetSineWave(julianDateInCenturies, planetOrbitalPeriod, planetHelio)
Convert heliocentric longitude of planet into a sine wave
Parameters:
julianDateInCenturies (float)
planetOrbitalPeriod (float) : Orbital period of planet in Earth days
planetHelio (float) : Heliocentric longitude of planet in degrees
Returns: Sine of heliocentric longitude on a Julian date
WIPFunctionLyaponov
Library "WIPFunctionLyaponov"
Lyapunov exponents are mathematical measures used to describe the behavior of a system over
time. They are named after Russian mathematician Alexei Lyapunov, who first introduced the concept in the
late 19th century. The exponent is defined as the rate at which a particular function or variable changes
over time, and can be positive, negative, or zero.
Positive exponents indicate that a system tends to grow or expand over time, while negative exponents
indicate that a system tends to shrink or decay. Zero exponents indicate that the system does not change
significantly over time. Lyapunov exponents are used in various fields of science and engineering, including
physics, economics, and biology, to study the long-term behavior of complex systems.
~ generated description from vicuna13b
---
To calculate the Lyapunov Exponent (LE) of a given Time Series, we need to follow these steps:
1. Firstly, you should have access to your data in some format like CSV or Excel file. If not, then you can collect it manually using tools such as stopwatches and measuring tapes.
2. Once the data is collected, clean it up by removing any outliers that may skew results. This step involves checking for inconsistencies within your dataset (e.g., extremely large or small values) and either discarding them entirely or replacing with more reasonable estimates based on surrounding values.
3. Next, you need to determine the dimension of your time series data. In most cases, this will be equal to the number of variables being measured in each observation period (e.g., temperature, humidity, wind speed).
4. Now that we have a clean dataset with known dimensions, we can calculate the LE for our Time Series using the following formula:
λ = log(||M^T * M - I||)/log(||v||)
where:
λ (Lyapunov Exponent) is the quantity that will be calculated.
||...|| denotes an Euclidean norm of a vector or matrix, which essentially means taking the square root of the sum of squares for each element in the vector/matrix.
M represents our Jacobian Matrix whose elements are given by:
J_ij = (∂fj / ∂xj) where fj is the jth variable and xj is the ith component of the initial condition vector x(t). In other words, each element in this matrix represents how much a small change in one variable affects another.
I denotes an identity matrix whose elements are all equal to 1 (or any constant value if you prefer). This term essentially acts as a baseline for comparison purposes since we want our Jacobian Matrix M^T * M to be close to it when the system is stable and far away from it when the system is unstable.
v represents an arbitrary vector whose Euclidean norm ||v|| will serve as a scaling factor in our calculation. The choice of this particular vector does not matter since we are only interested in its magnitude (i.e., length) for purposes of normalization. However, if you want to ensure that your results are accurate and consistent across different datasets or scenarios, it is recommended to use the same initial condition vector x(t) as used earlier when calculating our Jacobian Matrix M.
5. Finally, once we have calculated λ using the formula above, we can interpret its value in terms of stability/instability for our Time Series data:
- If λ < 0, then this indicates that the system is stable (i.e., nearby trajectories will converge towards each other over time).
- On the other hand, if λ > 0, then this implies that the system is unstable (i.e., nearby trajectories will diverge away from one another over time).
~ generated description from airoboros33b
---
Reference:
en.wikipedia.org
www.collimator.ai
blog.abhranil.net
www.researchgate.net
physics.stackexchange.com
---
This is a work in progress, it may contain errors so use with caution.
If you find flaws or suggest something new, please leave a comment below.
_measure_function(i)
helper function to get the name of distance function by a index (0 -> 13).\
Functions: SSD, Euclidean, Manhattan, Minkowski, Chebyshev, Correlation, Cosine, Camberra, MAE, MSE, Lorentzian, Intersection, Penrose Shape, Meehl.
Parameters:
i (int)
_test(L)
Helper function to test the output exponents state system and outputs description into a string.
Parameters:
L (float )
estimate(X, initial_distance, distance_function)
Estimate the Lyapunov Exponents for multiple series in a row matrix.
Parameters:
X (map)
initial_distance (float) : Initial distance limit.
distance_function (string) : Name of the distance function to be used, default:`ssd`.
Returns: List of Lyapunov exponents.
max(L)
Maximal Lyapunov Exponent.
Parameters:
L (float ) : List of Lyapunov exponents.
Returns: Highest exponent.
Contrast Color Library
This lightweight library provides a utility method that analyzes any provided background color and automatically chooses the optimal black or white foreground color to ensure maximum visual contrast and readability.
🟠 Algorithm
The library utilizes the HSP Color Model to calculate the brightness of the background color. The formula for this calculation is as follows:
brightness = sqrt(0.299 * R^2 + 0.587 * G^2 + 0.114 * B^2)
The library chooses black as the foreground color if the brightness exceeds the threshold (default 0.5), and white otherwise.
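An illustrative implementation of that rule (the function name and the 0.5 threshold below are assumptions, not the library's exported API):
//@version=5
indicator("contrast foreground color - sketch", overlay = true)
f_contrastColor(bg) =>
    r = color.r(bg) / 255.0
    g = color.g(bg) / 255.0
    b = color.b(bg) / 255.0
    brightness = math.sqrt(0.299 * r * r + 0.587 * g * g + 0.114 * b * b)
    brightness > 0.5 ? color.black : color.white
if barstate.islast
    label.new(bar_index, high, "readable", color = color.orange, textcolor = f_contrastColor(color.orange))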
two_ma_logic
Library "two_ma_logic"
The core logic for the two moving average strategy that is used as an example for the internal logic of
the "Template Trailing Strategy" and the "Two MA Signal Indicator"
ma(source, maType, length)
ma - Calculate the moving average of the given source for the given length and type of the average
Parameters:
source (float) : - The source of the values
maType (simple string) : - The type of the moving average
length (simple int) : - The length of the moving average
Returns: - The resulting value of the calculations of the moving average
getDealConditions(drawings, longDealsEnabled, shortDealsEnabled, endDealsEnabled, cnlStartDealsEnabled, cnlEndDealsEnabled, emaFilterEnabled, emaAtrBandEnabled, adxFilterEnabled, adxSmoothing, diLength, adxThreshold)
Parameters:
drawings (TwoMaDrawings)
longDealsEnabled (simple bool)
shortDealsEnabled (simple bool)
endDealsEnabled (simple bool)
cnlStartDealsEnabled (simple bool)
cnlEndDealsEnabled (simple bool)
emaFilterEnabled (simple bool)
emaAtrBandEnabled (simple bool)
adxFilterEnabled (simple bool)
adxSmoothing (simple int)
diLength (simple int)
adxThreshold (simple float)
TwoMaDrawings
Fields:
fastMA (series__float)
slowMA (series__float)
emaLine (series__float)
emaUpperBand (series__float)
emaLowerBand (series__float)
tts_convention
Library "tts_convention"
This library can convert the start, end, cancel start, cancel end deal conditions that are used in the
"Template Trailing Strategy" script into a signal value and vice versa. The "two channels mod div" convention is unsed
internaly and the signal value can be composed/decomposed into two channels that contain the afforementioned actions
for long and short positions separetely.
getDealConditions(signal)
getDealConditions - Get the start, end, cancel start and cancel end deal conditions that are used in the "Template Trailing Strategy" script by decomposing the given signal
Parameters:
signal (int) : - The signal value to decompose
Returns: An object with the start, end, cancel start and cancel end deal conditions for long and short
getSignal(dealConditions)
getSignal - Get the signal value from the composition of the start, end, cancel start and cancel end deal conditions that are used in the "Template Trailing Strategy" script
Parameters:
dealConditions (DealConditions) : - The deal conditions object that contains the start, end, cancel start and cancel end deal conditions for long and short
Returns: The composed signal value
DealConditions
Fields:
startLongDeal (series__bool)
startShortDeal (series__bool)
endLongDeal (series__bool)
endShortDeal (series__bool)
cnlStartLongDeal (series__bool)
cnlStartShortDeal (series__bool)
cnlEndLongDeal (series__bool)
cnlEndShortDeal (series__bool)
signal_datagram
The purpose of this library is to split and merge an integer into useful pieces of information that can easily be handled and plotted.
The basic piece of information is one word. Depending on the underlying numerical system a word can be a bit, octal, digit, nibble, or byte.
The user can define channels. Channels are named groups of words. Multiple words can be combined to increase the value range of a channel.
A datagram is a description of the user-defined channels in an also user-defined numeric system that also contains all runtime information that is necessary to split and merge the integer.
This library simplifies the communication between two scripts by allowing the user to define the same datagram in both scripts.
On the sender's side, the channel values can be merged into one single integer value called signal. This signal can be 'emitted' using the plot function. The other script can use the 'input.source' function to receive that signal.
On the receiver's end based on the same datagram, the signal can be split into several channels. Each channel has the piece of information that the sender script put.
In the example of this library, we use two channels and we have split the integer in half. However, the user can add new channels, change them, and give meaning to them according to the functionality he wants to implement and the type of information he wants to communicate.
Nowadays, many 'input.source' calls are allowed to pass information between scripts. When that information is not a price or a floating-point value, this library is very useful.
The reason is that most of the time, the convention that is used is not clear enough and it is easy to do things the wrong way or break them later on.
With this library validation checks are done during the initialization minimizing the possibility of error due to some misconceptions.
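The channel idea can be pictured with a plain two-channel, two-digit layout (the layout below is only an illustration; the library lets you define your own datagram and numeric system):
//@version=5
indicator("two-channel signal - concept sketch")
// Sender side: pack two 0-99 channel values into one plottable integer.
longCh  = 3
shortCh = 7
signal  = longCh * 100 + shortCh
plot(signal, "signal", display = display.data_window)
// Receiver side (normally via input.source): split the channels back out.
recvLong  = math.floor(signal / 100)
recvShort = signal % 100
plot(recvLong, "long channel")
plot(recvShort, "short channel")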
Library "signal_datagram"
Conversion of a datagram type to a signal that can be "send" as a single value from an indicator to a strategy script
method init(this, positions, maxWords)
init - Initialize the word positions array with an empty array
Namespace types: WordPosArray
Parameters:
this (WordPosArray) : - The word positions array object
positions (int ) : - The array that contains all the positions of the words that shape the channel
maxWords (int) : - The maximum words allowed based on the span
Returns: The initialized object
method init(this)
init - Initialize the channels word positions map with an empty map
Namespace types: ChannelDesc
Parameters:
this (ChannelDesc) : - The channels' descriptor object
Returns: The initialized object
method init(this, numericSystem, channelDesc)
init - Initialize the datagram
Namespace types: Datagram
Parameters:
this (Datagram) : - The datagram object
numericSystem (simple string) : - The numeric system of the words to be used
channelDesc (ChannelDesc) : - The channels descriptor that contains the positions of the words that each channel consists of
Returns: The initialized object
method add_channel(this, name, positions)
add_channel - Add a new channel description with its name and its corresponding word positions to the map
Namespace types: ChannelDesc
Parameters:
this (ChannelDesc) : - The channels' descriptor object to update
name (simple string)
positions (int )
Returns: The initialized object
method set_signal(this, value)
set_signal - Set the signal value
Namespace types: Datagram
Parameters:
this (Datagram) : - The datagram object to update
value (int) : - The signal value to set
method get_signal(this)
get_signal - Get the signal value
Namespace types: Datagram
Parameters:
this (Datagram) : - The datagram object to query
Returns: The value of the signal in digits
method set_signal_sign(this, sign)
set_signal_sign - Set the signal sign
Namespace types: Datagram
Parameters:
this (Datagram) : - The datagram object to update
sign (int) : - The negative -1 or positive 1 sign of the underlying value
method get_signal_sign(this)
get_signal_sign - Get the signal sign
Namespace types: Datagram
Parameters:
this (Datagram) : - The datagram object to query
Returns: The sign of the signal value: -1 if it is negative and 1 if it is positive
method get_channel_names(this)
get_channel_names - Get an array of all channel names
Namespace types: Datagram
Parameters:
this (Datagram)
Returns: An array that has all the channel names that are used by the datagram
method set_channel_value(this, channelName, value)
set_channel_value - Set the value of the channel
Namespace types: Datagram
Parameters:
this (Datagram) : - The datagram object to update
channelName (simple string) : - The name of the channel to set the value to. The name should be as described in the schema's channel descriptor
value (int) : - The channel value to set
method set_all_channels_value(this, value)
set_all_channels_value - Set the value of all the channels
Namespace types: Datagram
Parameters:
this (Datagram) : - The datagram object to update
value (int) : - The channel value to set
method set_all_channels_max_value(this)
set_all_channels_max_value - Set all the channels to their maximum value
Namespace types: Datagram
Parameters:
this (Datagram) : - The datagram object to update
method get_channel_value(this, channelName)
get_channel_value - Get the value of the channel
Namespace types: Datagram
Parameters:
this (Datagram) : - The datagram object to query
channelName (simple string)
Returns: Digit group of words (bits/octals/digits/nibbles/hexes/bytes) found at the channel according to the schema
WordDesc
Fields:
numericSystem (series__string)
span (series__integer)
WordPosArray
Fields:
positions (array__integer)
ChannelDesc
Fields:
map (map__series__string:|WordPosArray|#OBJ)
Schema
Fields:
wordDesc (|WordDesc|#OBJ)
channelDesc (|ChannelDesc|#OBJ)
Signal
Fields:
value (series__integer)
isNegative (series__bool)
words (array__integer)
Datagram
Fields:
schema (|Schema|#OBJ)
signal (|Signal|#OBJ)
SimilarityMeasures
Library "SimilarityMeasures"
Similarity measures are statistical methods used to quantify the distance between different data sets
or strings. There are various types of similarity measures, including those that compare:
- data points (SSD, Euclidean, Manhattan, Minkowski, Chebyshev, Correlation, Cosine, Camberra, MAE, MSE, Lorentzian, Intersection, Penrose Shape, Meehl),
- strings (Edit(Levenshtein), Lee, Hamming, Jaro),
- probability distributions (Mahalanobis, Fidelity, Bhattacharyya, Hellinger),
- sets (Kumar Hassebrook, Jaccard, Sorensen, Chi Square).
---
These measures are used in various fields such as data analysis, machine learning, and pattern recognition. They
help to compare and analyze similarities and differences between different data sets or strings, which
can be useful for making predictions, classifications, and decisions.
---
References:
en.wikipedia.org
cran.r-project.org
numerics.mathdotnet.com
github.com
github.com
github.com
Encyclopedia of Distances, doi.org
ssd(p, q)
Sum of squared difference for N dimensions.
Parameters:
p (float ) : `array` Vector with first numeric distribution.
q (float ) : `array` Vector with second numeric distribution.
Returns: Measure of distance that calculates the squared euclidean distance.
euclidean(p, q)
Euclidean distance for N dimensions.
Parameters:
p (float ) : `array` Vector with first numeric distribution.
q (float ) : `array` Vector with second numeric distribution.
Returns: Measure of distance that calculates the straight-line (or Euclidean).
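For reference, the N-dimensional Euclidean distance over two equally sized arrays can be written as below; the library's euclidean() is expected to behave the same way (this sketch is not its implementation):
//@version=5
indicator("euclidean distance - sketch")
f_euclidean(p, q) =>
    float s = 0.0
    for i = 0 to p.size() - 1
        d = p.get(i) - q.get(i)
        s += d * d
    math.sqrt(s)
a = array.from(1.0, 2.0, 3.0)
b = array.from(4.0, 6.0, 3.0)
plot(f_euclidean(a, b), "distance")  // 5.0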
manhattan(p, q)
Manhattan distance for N dimensions.
Parameters:
p (float ) : `array` Vector with first numeric distribution.
q (float ) : `array` Vector with second numeric distribution.
Returns: Measure of absolute differences between both points.
minkowski(p, q, p_value)
Minkowski distance for N dimensions.
Parameters:
p (float ) : `array` Vector with first numeric distribution.
q (float ) : `array` Vector with second numeric distribution.
p_value (float) : `float` P value, default=1.0 (1: manhattan, 2: euclidean), does not support chebyshev.
Returns: Measure of similarity in the normed vector space.
chebyshev(p, q)
Chebyshev distance for N dimensions.
Parameters:
p (float ) : `array` Vector with first numeric distribution.
q (float ) : `array` Vector with second numeric distribution.
Returns: Measure of maximum absolute difference.
correlation(p, q)
Correlation distance for N dimensions.
Parameters:
p (float ) : `array` Vector with first numeric distribution.
q (float ) : `array` Vector with second numeric distribution.
Returns: Measure of dissimilarity based on the correlation between both distributions.
cosine(p, q)
Cosine distance between provided vectors.
Parameters:
p (float ) : `array` 1D Vector.
q (float ) : `array` 1D Vector.
Returns: The Cosine distance between vectors `p` and `q`.
---
angiogenesis.dkfz.de
camberra(p, q)
Camberra distance for N dimensions.
Parameters:
p (float ) : `array` Vector with first numeric distribution.
q (float ) : `array` Vector with second numeric distribution.
Returns: Weighted measure of absolute differences between both points.
mae(p, q)
Mean absolute error is a normalized version of the sum of absolute difference (manhattan).
Parameters:
p (float ) : `array` Vector with first numeric distribution.
q (float ) : `array` Vector with second numeric distribution.
Returns: Mean absolute error of vectors `p` and `q`.
mse(p, q)
Mean squared error is a normalized version of the sum of squared difference.
Parameters:
p (float ) : `array` Vector with first numeric distribution.
q (float ) : `array` Vector with second numeric distribution.
Returns: Mean squared error of vectors `p` and `q`.
lorentzian(p, q)
Lorentzian distance between provided vectors.
Parameters:
p (float ) : `array` Vector with first numeric distribution.
q (float ) : `array` Vector with second numeric distribution.
Returns: Lorentzian distance of vectors `p` and `q`.
---
angiogenesis.dkfz.de
intersection(p, q)
Intersection distance between provided vectors.
Parameters:
p (float ) : `array` Vector with first numeric distribution.
q (float ) : `array` Vector with second numeric distribution.
Returns: Intersection distance of vectors `p` and `q`.
---
angiogenesis.dkfz.de
penrose(p, q)
Penrose Shape distance between provided vectors.
Parameters:
p (float ) : `array` Vector with first numeric distribution.
q (float ) : `array` Vector with second numeric distribution.
Returns: Penrose shape distance of vectors `p` and `q`.
---
angiogenesis.dkfz.de
meehl(p, q)
Meehl distance between provided vectors.
Parameters:
p (float ) : `array` Vector with first numeric distribution.
q (float ) : `array` Vector with second numeric distribution.
Returns: Meehl distance of vectors `p` and `q`.
---
angiogenesis.dkfz.de
edit(x, y)
Edit (aka Levenshtein) distance for indexed strings.
Parameters:
x (int ) : `array` Indexed array.
y (int ) : `array` Indexed array.
Returns: Number of deletions, insertions, or substitutions required to transform source string into target string.
---
generated description:
The Edit distance is a measure of similarity used to compare two strings. It is defined as the minimum number of
operations (insertions, deletions, or substitutions) required to transform one string into another. The operations
are performed on the characters of the strings, and the cost of each operation depends on the specific algorithm
used.
The Edit distance is widely used in various applications such as spell checking, text similarity, and machine
translation. It can also be used for other purposes like finding the closest match between two strings or
identifying the common prefixes or suffixes between them.
---
github.com
www.red-gate.com
planetcalc.com
lee(x, y, dsize)
Distance between two indexed strings of equal length.
Parameters:
x (int ) : `array` Indexed array.
y (int ) : `array` Indexed array.
dsize (int) : `int` Dictionary size.
Returns: Distance between two strings by accounting for dictionary size.
---
www.johndcook.com
hamming(x, y)
Distance between two indexed strings of equal length.
Parameters:
x (int ) : `array` Indexed array.
y (int ) : `array` Indexed array.
Returns: Length of different components on both sequences.
---
en.wikipedia.org
jaro(x, y)
Distance between two indexed strings.
Parameters:
x (int ) : `array` Indexed array.
y (int ) : `array` Indexed array.
Returns: Measure of two strings' similarity: the higher the value, the more similar the strings are.
The score is normalized such that `0` equates to no similarities and `1` is an exact match.
---
rosettacode.org
mahalanobis(p, q, VI)
Mahalanobis distance between two vectors with population inverse covariance matrix.
Parameters:
p (float ) : `array` 1D Vector.
q (float ) : `array` 1D Vector.
VI (matrix) : `matrix` Inverse of the covariance matrix.
Returns: The mahalanobis distance between vectors `p` and `q`.
---
people.revoledu.com
stat.ethz.ch
docs.scipy.org
fidelity(p, q)
Fidelity distance between provided vectors.
Parameters:
p (float ) : `array` 1D Vector.
q (float ) : `array` 1D Vector.
Returns: The Bhattacharyya Coefficient between vectors `p` and `q`.
---
en.wikipedia.org
bhattacharyya(p, q)
Bhattacharyya distance between provided vectors.
Parameters:
p (float ) : `array` 1D Vector.
q (float ) : `array` 1D Vector.
Returns: The Bhattacharyya distance between vectors `p` and `q`.
---
en.wikipedia.org
hellinger(p, q)
Hellinger distance between provided vectors.
Parameters:
p (float ) : `array` 1D Vector.
q (float ) : `array` 1D Vector.
Returns: The hellinger distance between vectors `p` and `q`.
---
en.wikipedia.org
jamesmccaffrey.wordpress.com
kumar_hassebrook(p, q)
Kumar Hassebrook distance between provided vectors.
Parameters:
p (float ) : `array` 1D Vector.
q (float ) : `array` 1D Vector.
Returns: The Kumar Hassebrook distance between vectors `p` and `q`.
---
github.com
jaccard(p, q)
Jaccard distance between provided vectors.
Parameters:
p (float ) : `array` 1D Vector.
q (float ) : `array` 1D Vector.
Returns: The Jaccard distance between vectors `p` and `q`.
---
github.com
sorensen(p, q)
Sorensen distance between provided vectors.
Parameters:
p (float ) : `array` 1D Vector.
q (float ) : `array` 1D Vector.
Returns: The Sorensen distance between vectors `p` and `q`.
---
people.revoledu.com
chi_square(p, q, eps)
Chi Square distance between provided vectors.
Parameters:
p (float ) : `array` 1D Vector.
q (float ) : `array` 1D Vector.
eps (float)
Returns: The Chi Square distance between vectors `p` and `q`.
---
uw.pressbooks.pub
stats.stackexchange.com
www.itl.nist.gov
kulczynsky(p, q, eps)
Kulczynsky distance between provided vectors.
Parameters:
p (float ) : `array` 1D Vector.
q (float ) : `array` 1D Vector.
eps (float)
Returns: The Kulczynsky distance between vectors `p` and `q`.
---
github.com
FunctionMatrixCovariance
Library "FunctionMatrixCovariance"
In probability theory and statistics, a covariance matrix (also known as auto-covariance matrix, dispersion matrix, variance matrix, or variance–covariance matrix) is a square matrix giving the covariance between each pair of elements of a given random vector.
Intuitively, the covariance matrix generalizes the notion of variance to multiple dimensions. As an example, the variation in a collection of random points in two-dimensional space cannot be characterized fully by a single number, nor would the variances in the `x` and `y` directions contain all of the necessary information; a `2 × 2` matrix would be necessary to fully characterize the two-dimensional variation.
Any covariance matrix is symmetric and positive semi-definite and its main diagonal contains variances (i.e., the covariance of each element with itself).
The covariance matrix of a random vector `X` is typically denoted by `Kxx`, `Σ` or `S`.
~wikipedia.
method cov(M, bias)
Estimate Covariance matrix with provided data.
Namespace types: matrix
Parameters:
M (matrix) : `matrix` Matrix with vectors in column order.
bias (bool)
Returns: Covariance matrix of provided vectors.
---
en.wikipedia.org
numpy.org
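As a point of comparison, a 2×2 covariance matrix for two value series can be assembled by hand with the built-in array.covariance(); the library's cov() does the equivalent for a matrix whose columns are the vectors (this sketch is not its implementation):
//@version=5
indicator("2x2 covariance matrix - sketch")
var xs = array.new<float>()
var ys = array.new<float>()
xs.push(close)
ys.push(high - low)
if barstate.islast and xs.size() > 1
    K = matrix.new<float>(2, 2)
    K.set(0, 0, array.covariance(xs, xs))   // var(x)
    K.set(0, 1, array.covariance(xs, ys))   // cov(x, y)
    K.set(1, 0, array.covariance(ys, xs))   // cov(y, x)
    K.set(1, 1, array.covariance(ys, ys))   // var(y)
    label.new(bar_index, high, "cov(x, y) = " + str.tostring(K.get(0, 1)))
plot(close)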
Extended Moving Average (MA) Library
This Extended Moving Average Library is a sophisticated and comprehensive tool for traders seeking to expand their arsenal of moving averages for more nuanced and detailed technical analysis.
The library contains various types of moving averages, each with two versions - one that accepts a simple constant length parameter and another that accepts a series or changing length parameter.
This makes the library highly versatile and suitable for a wide range of strategies and trading styles.
Moving Averages Included:
Simple Moving Average (SMA): This is the most basic type of moving average. It calculates the average of a selected range of prices, typically closing prices, by the number of periods in that range.
Exponential Moving Average (EMA): This type of moving average gives more weight to the latest data and is thus more responsive to new price information. This can help traders to react faster to recent price changes.
Double Exponential Moving Average (DEMA): This average is calculated as twice the EMA minus the EMA of the EMA. It aims to eliminate lag, which is a key drawback of using moving averages.
Jurik Moving Average (JMA): This is a versatile and responsive moving average that can be adjusted for market speed. It is designed to stay balanced and responsive, regardless of how long or short it is.
Kaufman's Adaptive Moving Average (KAMA): This moving average is designed to account for market noise or volatility. KAMA will closely follow prices when the price swings are relatively small and the noise is low.
Smoothed Moving Average (SMMA): This type of moving average blends each new price into the previous smoothed value, giving older observations a gradually decaying weight and producing a smoother line than a simple average of the same length.
Triangular Moving Average (TMA): This is a double smoothed simple moving average, calculated by averaging the simple moving averages of a dataset.
True Strength Force (TSF): This is a moving average of the linear regression line, a statistical tool used to predict future values from past values.
Volume Moving Average (VMA): This is a simple moving average of volume, which can help to identify trends in volume.
Volume Adjusted Moving Average (VAMA): This moving average adjusts for volume and can be more responsive to volume changes.
Zero Lag Exponential Moving Average (ZLEMA): This type of moving average aims to eliminate the lag in traditional EMAs, making it more responsive to recent price changes.
Selector: The selector function allows users to easily select and apply any of the moving averages included in the library inside their strategy.
This library provides a broad selection of moving averages to choose from, allowing you to experiment with different types and find the one that best suits your trading strategy.
By providing both simple and series versions for each moving average, this library offers great flexibility, enabling users to pass both constant and changing length parameters as needed.
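Since the description does not list the exported function names, here is an illustrative Pine Script™ sketch of the DEMA and ZLEMA formulas described above, together with the switch-based selector pattern the library's Selector refers to. All names here (`f_dema`, `f_zlema`, `f_maSelector`) are made up for the example; the library's own functions may differ.
```pine
//@version=5
indicator("MA selector sketch", overlay = true)

// DEMA = 2 * EMA - EMA(EMA): cancels part of the single EMA's lag.
f_dema(float src, simple int len) =>
    e1 = ta.ema(src, len)
    e2 = ta.ema(e1, len)
    2 * e1 - e2

// ZLEMA: feed a de-lagged input (price plus its change over half the length) into a plain EMA.
f_zlema(float src, simple int len) =>
    lag = math.floor((len - 1) / 2)
    ta.ema(src + (src - src[lag]), len)

// Selector pattern: one switch that routes to the requested average.
f_maSelector(string maType, float src, simple int len) =>
    switch maType
        "SMA"   => ta.sma(src, len)
        "EMA"   => ta.ema(src, len)
        "DEMA"  => f_dema(src, len)
        "ZLEMA" => f_zlema(src, len)
        => ta.sma(src, len)  // fallback

maType = input.string("EMA", "MA type", options = ["SMA", "EMA", "DEMA", "ZLEMA"])
len    = input.int(20, "Length", minval = 1)
plot(f_maSelector(maType, close, len), "MA", color.orange)
```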
ta_mLibrary "ta_m"
This library is a Pine Script™ programmer’s tool containing calcs for my oscillators and some helper functions.
upDnIntrabarVolumesByPolarity()
Determines if the volume for an intrabar is up or down.
Returns: A tuple of two values, only one of which contains the bar's volume: `upVol` is the positive volume of up bars and `dnVol` is the negative volume of down bars.
Note that this function is designed to be called with `request.security_lower_tf()`, which returns a tuple of two float arrays containing the up and down volume of all the intrabars in a chart bar.
upDnIntrabarVolumesByPrice()
Determines if the intrabar volume is up or down.
Returns: A tuple of two values, only one of which contains the bar's volume: `upVol` is the positive volume of up bars and `dnVol` is the negative volume of down bars.
Note that this function is designed to be called with `request.security_lower_tf()`, which returns a tuple of two float arrays containing the up and down volume of all the intrabars in a chart bar.
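The note about `request.security_lower_tf()` is easier to see in code. The sketch below uses a simple stand-in function (up or down decided by the bar's close vs. open polarity, with the volume signed accordingly) rather than the library's export, and sums the returned intrabar arrays into one up and one down value per chart bar. The "1" lower timeframe is only an example.
```pine
//@version=5
indicator("Intrabar up/down volume sketch")

// Stand-in for an upDnIntrabarVolumesByPolarity()-style function: the bar's
// whole volume goes to `up` (positive) or `dn` (negative) depending on polarity.
upDnVol() =>
    up = close >= open ? volume : 0.0
    dn = close <  open ? -volume : 0.0
    [up, dn]

// Through request.security_lower_tf() the tuple becomes a tuple of float arrays,
// one element per intrabar of the chart bar (here: 1-minute intrabars).
[upVols, dnVols] = request.security_lower_tf(syminfo.tickerid, "1", upDnVol())

plot(nz(array.sum(upVols)), "Up volume",   color.new(color.teal, 0), style = plot.style_columns)
plot(nz(array.sum(dnVols)), "Down volume", color.new(color.red, 0),  style = plot.style_columns)
```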
LibrarySupertrendLibrary "LibrarySupertrend"
selective_ma(condition, source, length)
Parameters:
condition (bool)
source (float)
length (int)
trendUp(source)
Parameters:
source (float)
smoothrng(source, sampling_period, range_mult)
Parameters:
source (float)
sampling_period (simple int)
range_mult (float)
rngfilt(source, smoothrng)
Parameters:
source (float)
smoothrng (float)
fusion(overallLength, rsiLength, mfiLength, macdLength, cciLength, tsiLength, rviLength, atrLength, adxLength)
Parameters:
overallLength (simple int)
rsiLength (simple int)
mfiLength (simple int)
macdLength (simple int)
cciLength (simple int)
tsiLength (simple int)
rviLength (simple int)
atrLength (simple int)
adxLength (simple int)
zonestrength(amplitude, wavelength)
Parameters:
amplitude (int)
wavelength (simple int)
atr_anysource(source, atr_length)
Parameters:
source (float)
atr_length (simple int)
supertrend_anysource(source, factor, atr_length)
Parameters:
source (float)
factor (float)
atr_length (simple int)
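To illustrate what a supertrend "on any source" involves, here is a hedged Pine Script™ sketch of the classic supertrend band logic centred on an arbitrary source. Because `ta.atr()` is tied to high/low/close, the sketch uses an RMA of the source's bar-to-bar change as the range measure; the library's `atr_anysource()` and `supertrend_anysource()` may be implemented differently.
```pine
//@version=5
indicator("Supertrend on any source - sketch", overlay = true)

f_supertrendAnySource(float src, float factor, simple int atrLen) =>
    rng = ta.rma(math.abs(src - src[1]), atrLen)   // ATR-like range of the source itself
    upperBand = src + factor * rng
    lowerBand = src - factor * rng
    prevLowerBand = nz(lowerBand[1])
    prevUpperBand = nz(upperBand[1])
    // Standard band "ratcheting": bands only move with the trend unless price crosses them.
    lowerBand := lowerBand > prevLowerBand or src[1] < prevLowerBand ? lowerBand : prevLowerBand
    upperBand := upperBand < prevUpperBand or src[1] > prevUpperBand ? upperBand : prevUpperBand
    int direction = na
    float superTrend = na
    prevSuperTrend = superTrend[1]
    if na(rng[1])
        direction := 1
    else if prevSuperTrend == prevUpperBand
        direction := src > upperBand ? -1 : 1
    else
        direction := src < lowerBand ? 1 : -1
    superTrend := direction == -1 ? lowerBand : upperBand
    [superTrend, direction]

[st, dir] = f_supertrendAnySource(ohlc4, 3.0, 10)
plot(st, "Supertrend", dir < 0 ? color.teal : color.red)
```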
lib_drawing_compositesLibrary "lib_drawing_composites"
Methods to draw and manage composite objects. Based on Trendoscope's DrawingTypes; adds Triangle and Polygon composite objects and fixes the tostring method output to be actual JSON.
method tostring(this, format_date, format, tz, pretty)
Converts lib_drawing_types/Point object to a json string representation
Namespace types: D.Point
Parameters:
this (Point type from HeWhoMustNotBeNamed/DrawingTypes/2) : lib_drawing_types/Point object
format_date (simple bool)
format (simple string)
tz (simple string)
pretty (simple bool) : if true adds a line feed after every property and a space before properties (default: true)
Returns: string representation of lib_drawing_types/Point
method tostring(this, pretty)
Converts lib_drawing_types/LineProperties object to a json string representation
Namespace types: D.LineProperties
Parameters:
this (LineProperties type from HeWhoMustNotBeNamed/DrawingTypes/2) : lib_drawing_types/LineProperties object
pretty (simple bool) : if true adds a line feed after every property and a space before properties (default: true)
Returns: string representation of lib_drawing_types/LineProperties
method tostring(this, format_date, format, tz, pretty)
Converts lib_drawing_types/Line object to a json string representation
Namespace types: D.Line
Parameters:
this (Line type from HeWhoMustNotBeNamed/DrawingTypes/2) : lib_drawing_types/Line object
format_date (simple bool)
format (simple string)
tz (simple string)
pretty (simple bool) : if true adds a line feed after every property and a space before properties (default: true)
Returns: string representation of lib_drawing_types/Line
method tostring(this, pretty)
Converts lib_drawing_types/LabelProperties object to a json string representation
Namespace types: D.LabelProperties
Parameters:
this (LabelProperties type from HeWhoMustNotBeNamed/DrawingTypes/2) : lib_drawing_types/LabelProperties object
pretty (simple bool) : if true adds a line feed after every property and a space before properties (default: true)
Returns: string representation of lib_drawing_types/LabelProperties
method tostring(this, format_date, format, tz, pretty)
Converts lib_drawing_types/Label object to a json string representation
Namespace types: D.Label
Parameters:
this (Label type from HeWhoMustNotBeNamed/DrawingTypes/2) : lib_drawing_types/Label object
format_date (simple bool)
format (simple string)
tz (simple string)
pretty (simple bool) : if true adds a line feed after every property and a space before properties (default: true)
Returns: string representation of lib_drawing_types/Label
method tostring(this, format_date, format, tz, pretty)
Namespace types: D.Linefill
Parameters:
this (Linefill type from HeWhoMustNotBeNamed/DrawingTypes/2)
format_date (simple bool)
format (simple string)
tz (simple string)
pretty (simple bool)
method tostring(this, pretty)
Namespace types: D.BoxProperties
Parameters:
this (BoxProperties type from HeWhoMustNotBeNamed/DrawingTypes/2)
pretty (simple bool)
method tostring(this, pretty)
Namespace types: D.BoxText
Parameters:
this (BoxText type from HeWhoMustNotBeNamed/DrawingTypes/2)
pretty (simple bool)
method tostring(this, format_date, format, tz, pretty)
Namespace types: D.Box
Parameters:
this (Box type from HeWhoMustNotBeNamed/DrawingTypes/2)
format_date (simple bool)
format (simple string)
tz (simple string)
pretty (simple bool)
method tostring(this, pretty)
Namespace types: DC.TriangleProperties
Parameters:
this (TriangleProperties type from robbatt/lib_drawing_composite_types/1)
pretty (simple bool)
method tostring(this, format_date, format, tz, pretty)
Namespace types: DC.Triangle
Parameters:
this (Triangle type from robbatt/lib_drawing_composite_types/1)
format_date (simple bool)
format (simple string)
tz (simple string)
pretty (simple bool)
method tostring(this, format_date, format, tz, pretty)
Namespace types: DC.Trianglefill
Parameters:
this (Trianglefill type from robbatt/lib_drawing_composite_types/1)
format_date (simple bool)
format (simple string)
tz (simple string)
pretty (simple bool)
method tostring(this, format_date, format, tz, pretty)
Namespace types: DC.Polygon
Parameters:
this (Polygon type from robbatt/lib_drawing_composite_types/1)
format_date (simple bool)
format (simple string)
tz (simple string)
pretty (simple bool)
method tostring(this, format_date, format, tz, pretty)
Namespace types: DC.Polygonfill
Parameters:
this (Polygonfill type from robbatt/lib_drawing_composite_types/1)
format_date (simple bool)
format (simple string)
tz (simple string)
pretty (simple bool)
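To make the tostring contract above concrete, here is a standalone Pine Script™ sketch of the pattern these methods follow: serialise a properties object into valid JSON, with `pretty` inserting a line feed and a leading space before every property. `MyLineProps` and its fields are invented for the example and are not the library's types; the real methods also handle date formatting via `format_date`, `format` and `tz`.
```pine
//@version=5
indicator("tostring to JSON sketch")

// Invented properties type, standing in for something like LineProperties.
type MyLineProps
    int    width  = 1
    string style  = "solid"
    bool   extend = false

// pretty = true -> one property per line with a leading space; pretty = false -> single line.
method tostring(MyLineProps this, bool pretty = true) =>
    string nl = pretty ? "\n " : ""
    string s  = "{"
    s := s + nl + "\"width\": "   + str.tostring(this.width)  + ","
    s := s + nl + "\"style\": \"" + this.style                + "\","
    s := s + nl + "\"extend\": "  + str.tostring(this.extend)
    s := s + (pretty ? "\n" : "") + "}"
    s

if barstate.islastconfirmedhistory
    props = MyLineProps.new()
    label.new(bar_index, high, props.tostring(), textcolor = color.white)
```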
method delete(this)
Namespace types: DC.Trianglefill
Parameters:
this (Trianglefill type from robbatt/lib_drawing_composite_types/1)
method delete(this)
Namespace types: DC.Triangle
Parameters:
this (Triangle type from robbatt/lib_drawing_composite_types/1)
method delete(this)
Namespace types: DC.Triangle
Parameters:
this (Triangle type from robbatt/lib_drawing_composite_types/1)
method delete(this)
Namespace types: DC.Trianglefill
Parameters:
this (Trianglefill type from robbatt/lib_drawing_composite_types/1)
method delete(this)
Namespace types: DC.Polygon
Parameters:
this (Polygon type from robbatt/lib_drawing_composite_types/1)
method delete(this)
Namespace types: DC.Polygonfill
Parameters:
this (Polygonfill type from robbatt/lib_drawing_composite_types/1)
method delete(this)
Namespace types: DC.Polygon
Parameters:
this (Polygon type from robbatt/lib_drawing_composite_types/1)
method delete(this)
Namespace types: DC.Polygonfill
Parameters:
this (Polygonfill type from robbatt/lib_drawing_composite_types/1)
method clear(this)
Namespace types: DC.Triangle
Parameters:
this (Triangle type from robbatt/lib_drawing_composite_types/1)
method clear(this)
Namespace types: DC.Trianglefill
Parameters:
this (Trianglefill type from robbatt/lib_drawing_composite_types/1)
method clear(this)
Namespace types: DC.Polygon
Parameters:
this (Polygon type from robbatt/lib_drawing_composite_types/1)
method clear(this)
Namespace types: DC.Polygonfill
Parameters:
this (Polygonfill type from robbatt/lib_drawing_composite_types/1)
method draw(this, is_polygon_section)
Namespace types: DC.Triangle
Parameters:
this (Triangle type from robbatt/lib_drawing_composite_types/1)
is_polygon_section (bool)
method draw(this)
Namespace types: DC.Trianglefill
Parameters:
this (Trianglefill type from robbatt/lib_drawing_composite_types/1)
method draw(this, is_polygon)
Namespace types: DC.Triangle
Parameters:
this (Triangle type from robbatt/lib_drawing_composite_types/1)
is_polygon (bool)
method draw(this)
Namespace types: DC.Polygon
Parameters:
this (Polygon type from robbatt/lib_drawing_composite_types/1)
method draw(this)
Namespace types: DC.Trianglefill
Parameters:
this (Trianglefill type from robbatt/lib_drawing_composite_types/1)
method draw(this)
Namespace types: DC.Polygonfill
Parameters:
this (Polygonfill type from robbatt/lib_drawing_composite_types/1)
method draw(this)
Namespace types: DC.Polygon
Parameters:
this (Polygon type from robbatt/lib_drawing_composite_types/1)
method draw(this)
Namespace types: DC.Polygonfill
Parameters:
this (Polygonfill type from robbatt/lib_drawing_composite_types/1)
method createCenter(this, other)
Namespace types: D.Point
Parameters:
this (Point type from HeWhoMustNotBeNamed/DrawingTypes/2)
other (Point type from HeWhoMustNotBeNamed/DrawingTypes/2)
method createCenter(this)
Namespace types: D.Point
Parameters:
this (Point type from HeWhoMustNotBeNamed/DrawingTypes/2)
method createCenter(this, other1, other2)
Namespace types: D.Point
Parameters:
this (Point type from HeWhoMustNotBeNamed/DrawingTypes/2)
other1 (Point type from HeWhoMustNotBeNamed/DrawingTypes/2)
other2 (Point type from HeWhoMustNotBeNamed/DrawingTypes/2)
method createLabel(this, labeltext, tooltip, properties)
Namespace types: D.Line
Parameters:
this (Line type from HeWhoMustNotBeNamed/DrawingTypes/2)
labeltext (string)
tooltip (string)
properties (LabelProperties type from HeWhoMustNotBeNamed/DrawingTypes/2)
method createLabel(this, labeltext, tooltip, properties)
Namespace types: DC.Triangle
Parameters:
this (Triangle type from robbatt/lib_drawing_composite_types/1)
labeltext (string)
tooltip (string)
properties (LabelProperties type from HeWhoMustNotBeNamed/DrawingTypes/2)
method createTriangle(this, p2, p3, properties)
Namespace types: D.Point
Parameters:
this (Point type from HeWhoMustNotBeNamed/DrawingTypes/2)
p2 (Point type from HeWhoMustNotBeNamed/DrawingTypes/2)
p3 (Point type from HeWhoMustNotBeNamed/DrawingTypes/2)
properties (TriangleProperties type from robbatt/lib_drawing_composite_types/1)
method createTrianglefill(this, fill_color, transparency)
Namespace types: DC.Triangle
Parameters:
this (Triangle type from robbatt/lib_drawing_composite_types/1)
fill_color (color)
transparency (int)
method createPolygonfill(this, fill_color, transparency)
Namespace types: DC.Polygon
Parameters:
this (Polygon type from robbatt/lib_drawing_composite_types/1)
fill_color (color)
transparency (int)
method createPolygon(points, properties)
Namespace types: D.Point
Parameters:
points (Point type from HeWhoMustNotBeNamed/DrawingTypes/2)
properties (TriangleProperties type from robbatt/lib_drawing_composite_types/1)
lib_drawing_composite_typesLibrary "lib_drawing_composite_types"
User Defined Types for basic drawing structure. Other types and methods will be built on these. (Adds the Triangle and Polygon types to Trendoscope's basic drawing types.)
TriangleProperties
TriangleProperties object
Fields:
border_color (series color) : Triangle border color. Default is color.blue
fill_color (series color) : Fill color
fill_transparency (series int)
border_width (series int) : Triangle border width. Default is 1
border_style (series string) : Triangle border style. Default is line.style_solid
xloc (series string) : defines if drawing needs to be done based on bar index or time. Default is xloc.bar_index
Triangle
Triangle object
Fields:
p1 (Point type from HeWhoMustNotBeNamed/DrawingTypes/2) : point one
p2 (Point type from HeWhoMustNotBeNamed/DrawingTypes/2) : point two
p3 (Point type from HeWhoMustNotBeNamed/DrawingTypes/2) : point three
properties (TriangleProperties) : Triangle properties
l12 (series line) : line object created
l23 (series line) : line object created
l31 (series line) : line object created
Trianglefill
Trianglefill object
Fields:
triangle (Triangle) : the Triangle to create a linefill for
fill_color (series color) : Fill color
transparency (series int) : Fill transparency range from 0 to 100
object (series linefill) : linefill object created
Polygon
Polygon object
Fields:
center (Point type from HeWhoMustNotBeNamed/DrawingTypes/2) : the Point that the triangles use as a common center
triangles (Triangle[]) : an array of triangles that form the Polygon
Polygonfill
Polygonfill object
Fields:
_polygon (Polygon) : the Polygon to create a fill for
_fills (Trianglefill[]) : an array of Trianglefill objects that match the array of triangles in _polygon
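Geometrically, the Polygon/Polygonfill pair is a triangle fan: one shared center point, one Triangle per pair of adjacent vertices, and one Trianglefill per Triangle. The sketch below draws that structure with plain line and linefill objects and hard-coded example vertices instead of the library's Point/Triangle types, just to show the layout the fields above describe.
```pine
//@version=5
indicator("Polygon as a triangle fan - sketch", overlay = true, max_lines_count = 500)

if barstate.islastconfirmedhistory
    // Example vertices: bar offsets (back from the last bar) and prices.
    xs = array.from(40, 25, 5, 15, 35)
    ys = array.from(low * 0.99, low * 0.985, close, high * 1.015, high * 1.01)
    int n = array.size(xs)

    // Shared center (the `center` field of Polygon) = average of the vertices.
    int   cx = bar_index - math.round(array.avg(xs))
    float cy = array.avg(ys)

    for i = 0 to n - 1
        x1 = bar_index - array.get(xs, i)
        y1 = array.get(ys, i)
        x2 = bar_index - array.get(xs, (i + 1) % n)
        y2 = array.get(ys, (i + 1) % n)
        // One fan triangle: the outer edge plus two spokes to the center
        // (the role of the l12/l23/l31 lines in the Triangle type).
        edge   = line.new(x1, y1, x2, y2, color = color.blue)
        spoke1 = line.new(cx, cy, x1, y1, color = color.new(color.blue, 70))
        spoke2 = line.new(cx, cy, x2, y2, color = color.new(color.blue, 70))
        // One linefill per triangle (the Trianglefill/Polygonfill role).
        linefill.new(edge, spoke1, color.new(color.blue, 85))
```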