Ehlers Instantaneous Trend [LazyBear]
One more to add to the Ehlers collection.
Ehlers Instantaneous Trendline, by John Ehlers, identifies the market trend by removing the cycle component from price. I think this simplicity is what makes it attractive :) To understand Ehlers's thought process behind this, refer to the PDF linked below.
There are at least 6 variations of this ITrend. This version is from his early presentations.
Is this better than a simple HMA? Maybe, maybe not. I will leave it to you to decide :)
I have added options to show this as a ribbon, and to color bars based on ITrend. Check out the options page.
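For reference, here is a minimal Pine v5 sketch of the early-presentation ITrend recursion with its 2-bar trigger line (the alpha default and plot styling are placeholders, not necessarily this script's settings):
//@version=5
indicator("ITrend sketch", overlay=true)
alpha = input.float(0.07, "Alpha")
src   = hl2
float it = na
// Early-presentation Instantaneous Trendline recursion (first few bars stay na while it warms up)
it := (alpha - alpha * alpha / 4) * src + 0.5 * alpha * alpha * src[1] -
  (alpha - 0.75 * alpha * alpha) * src[2] +
  2 * (1 - alpha) * nz(it[1]) - (1 - alpha) * (1 - alpha) * nz(it[2])
// Trigger line leads the trendline by two bars
trigger = 2 * it - nz(it[2])
plot(it, "ITrend", color=color.teal)
plot(trigger, "Trigger", color=color.orange)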
More info:
- ITrend: www.mesasoftware.com
List of my public indicators: bit.ly
List of my app-store indicators: blog.tradingview.com
Bear Market Highlighter — Peak → Final Trough
📌 Summary
Purpose:
This TradingView indicator automatically identifies and shades bear markets. A bear market is defined as a drawdown greater than 20% (customizable) from the last peak to the eventual trough.
🔑 Key Features
Automatic Detection: Detects market peaks and measures drawdowns until they exceed the user-defined threshold.
Shading: Highlights the entire bear phase (peak → trough) with a colored box plus live background shading.
Labels: Marks Bear Start at the peak and Bear End at the trough.
Configurable End Condition:
End on New High = true: Bear ends only when price fully recovers above the prior peak (keeps long declines like 2008 as one continuous bear).
End on New High = false: Bear ends sooner when drawdown shrinks below the threshold.
Customizable Inputs:
Threshold % (default: 20%)
Use Low or Close prices for drawdown
Shade color and opacity
⚙️ Inputs Explained
Bear threshold (%) → Minimum drop from peak to trigger a bear (default 20%).
Use low for drawdown → If true, calculation uses low prices (intraday depth), else close.
End on new high → Keeps long multi-leg declines as one bear until market exceeds old peak.
Shade color & opacity → Customize how the bear phase appears visually.
📈 How It Works
Track Peak: While not in a bear, the indicator keeps updating the highest point.
Trigger Bear: When drawdown > threshold, it marks the bear start.
Update Trough: While in bear, it tracks the lowest price reached.
End Bear:
Either when price recovers above old peak (End on new high = true),
Or when drawdown shrinks below threshold (End on new high = false).
Visualize: Shades the full peak-to-trough area and places start/end labels.
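For reference, a compact Pine v5 sketch of the state machine described above (labels and the peak-to-trough box are omitted; the variable names and reset logic are illustrative, not the indicator's exact code):
//@version=5
indicator("Bear phase sketch", overlay=true)
thresholdPct = input.float(20.0, "Bear threshold (%)")
useLow       = input.bool(true, "Use low for drawdown")
endOnNewHigh = input.bool(true, "End on new high")
var bool  inBear = false
var float peak   = na
var float trough = na
price = useLow ? low : close
// Track the running peak while not in a bear phase
if not inBear
    peak := math.max(nz(peak, high), high)
drawdownPct = (peak - price) / peak * 100
if not inBear and drawdownPct > thresholdPct
    inBear := true                        // bear starts once the drop exceeds the threshold
    trough := price
else if inBear
    trough := math.min(trough, price)     // keep updating the lowest point reached
    bearOver = endOnNewHigh ? close > peak : drawdownPct < thresholdPct
    if bearOver
        inBear := false
        peak := high                      // restart peak tracking after the bear ends
bgcolor(inBear ? color.new(color.red, 85) : na)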
👉 Best Use Case: Long-term charts (weekly/monthly) for visualizing historic bear markets — helps compare severity & duration of downturns at a glance.
### Ichimoku Fractal Flow (IFF)
By Gurjit Singh
Ichimoku Fractal Flow (IFF) distills the Ichimoku system into a single oscillator by merging fractal echoes of price and cloud dynamics into one flow signal. Instead of static Ichimoku lines, it measures the "flow" between Conversion/Base, Span A/B, price echoes, and cloud echoes. The result is a multidimensional oscillator that reveals hidden rhythm, momentum shifts, and trend bias.
#### 📌 Key Features
1. Fourfold Fusion – The oscillator blends the following components (see the sketch after this feature list):
* Phase: Tenkan vs. Kijun spread (short vs. medium trend).
* Kumo Phase: Span A vs. Span B spread (cloud thickness).
* Echo: Price vs. lagged reflection.
* Cloud Echo: Price vs. projected cloud center.
2. Oscillator Output – A unified flow line oscillating around zero.
3. Dual Calculation Modes – Oscillator can be built using:
* High-Low Midpoint (classic Ichimoku-style averaging).
* Wilder’s RMA (smoother, less noisy averaging).
4. Optional Smoothing – EMA or Wilder’s RMA creates a trend line, enabling MACD-style crossovers.
5. Dynamic Coloring – Bullish/Bearish color shifts for quick bias recognition.
6. Fill Styling – Highlighted regions between oscillator & smoothing line.
7. Zero Line Reference – Acts as a structural pivot (bull vs. bear).
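A simplified Pine v5 sketch of how such a four-way blend could be assembled, assuming classic Ichimoku midpoints and an equal-weight average of the four spreads (IFF's actual weighting, normalization, RMA mode, and smoothing are not reproduced here):
//@version=5
indicator("IFF-style flow sketch")
convLen  = input.int(9,  "Conversion (Tenkan)")
baseLen  = input.int(26, "Base (Kijun)")
spanBLen = input.int(52, "Leading Span B")
shift    = input.int(26, "Lag/Lead Shift")
donchian(len) => math.avg(ta.highest(len), ta.lowest(len))
tenkan = donchian(convLen)
kijun  = donchian(baseLen)
spanA  = math.avg(tenkan, kijun)
spanB  = donchian(spanBLen)
phase     = tenkan - kijun                  // short vs. medium trend
kumoPhase = spanA - spanB                   // cloud thickness
echo      = close - close[shift]            // price vs. lagged reflection
cloudEcho = close - math.avg(spanA, spanB)  // price vs. cloud center
// Equal-weight fusion into one flow line (assumed weighting)
flow = (phase + kumoPhase + echo + cloudEcho) / 4
plot(flow, "Flow", color = flow >= 0 ? color.teal : color.red)
hline(0, "Zero")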
#### 🔑 How to Use
1. Add to Chart: Works across all assets and timeframes.
2. Flow Bias (Zero Line):
* Above 0 → Bullish flow 🐂
* Below 0 → Bearish flow 🐻
3. With Signal Line:
* Oscillator above smoothing line → Possible upward trend shift.
* Oscillator below smoothing line → Possible downward trend shift.
4. Strength:
* Wide separation from smoothing = strong trend.
* Flat, tight clustering = indecision/range.
5. Contextual Edge: Combine signals with Ichimoku Cloud analysis for stronger confluence.
#### ⚙️ Inputs & Options
* Conversion Line (Tenkan, default 9)
* Base Line (Kijun, default 26)
* Leading Span B (default 52)
* Lag/Lead Shift (default 26)
* Oscillator Mode: High-Low Midpoint vs Wilder’s RMA
* Use Smoothing (toggle on/off)
* Signal Smoothing: Wilder/EMA option
* Smoothing Length (default 9)
* Bullish/Bearish Colors + Transparency
#### 💡 Tips
* Wilder’s RMA (both oscillator & smoothing) is gentler, reducing whipsaws in sideways markets.
* High-Low Mid captures pure Ichimoku-style ranges, good for structure-based traders.
* EMA reacts faster than RMA; use if you want early momentum signals.
* Zero-line flips act like momentum pivots—watch them near cloud boundaries.
* Signal line crossovers behave like MACD-style triggers.
* Strongest signals appear when oscillator, signal line, and Ichimoku Cloud all align.
👉 In short: Ichimoku Fractal Flow compresses the multi-layered Ichimoku system into a single fractal oscillator that detects flow, pivotal shifts, and momentum with clarity, bridging price, cloud, and echoes into one signal. Where the cloud shows structure, IFF reveals the underlying flow. Together, they offer a fractal lens into market rhythm.
ValueAtTime
█ OVERVIEW
This library is a Pine Script® programming tool for accessing historical values in a time series using UNIX timestamps. Its data structure and functions index values by time, allowing scripts to retrieve past values based on absolute timestamps or relative time offsets instead of relying on bar index offsets.
█ CONCEPTS
UNIX timestamps
In Pine Script®, a UNIX timestamp is an integer representing the number of milliseconds elapsed since January 1, 1970, at 00:00:00 UTC (the UNIX Epoch). The timestamp is a unique, absolute representation of a specific point in time. Unlike a calendar date and time, a UNIX timestamp's meaning does not change relative to any time zone.
This library's functions process series values and corresponding UNIX timestamps in pairs, offering a simplified way to identify values that occur at or near distinct points in time instead of on specific bars.
Storing and retrieving time-value pairs
This library's `Data` type defines the structure for collecting time and value information in pairs. Objects of the `Data` type contain the following two fields:
• `times` – An array of "int" UNIX timestamps for each recorded value.
• `values` – An array of "float" values for each saved timestamp.
Each index in both arrays refers to a specific time-value pair. For instance, the `times` and `values` elements at index 0 represent the first saved timestamp and corresponding value. The library functions that maintain `Data` objects queue up to one time-value pair per bar into the object's arrays, where the saved timestamp represents the bar's opening time.
Because the `times` array contains a distinct UNIX timestamp for each item in the `values` array, it serves as a custom mapping for retrieving saved values. All the library functions that return information from a `Data` object use this simple two-step process to identify a value based on time:
1. Perform a binary search on the `times` array to find the earliest saved timestamp closest to the specified time or offset and get the element's index.
2. Access the element from the `values` array at the retrieved index, returning the stored value corresponding to the found timestamp.
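To make the mechanism concrete, here is a self-contained sketch of that two-step lookup using a minimal stand-in for the `Data` type (the library's real functions add collection limits, per-bar checks, and more):
//@version=5
indicator("Time-value lookup sketch", overlay=true)
// Minimal stand-in mirroring the described fields
type Data
    array<int>   times
    array<float> values
var Data d = Data.new(array.new<int>(), array.new<float>())
// Queue one time-value pair per bar: the bar's opening time and its close
array.push(d.times, time)
array.push(d.values, close)
// Two-step lookup: binary-search the timestamps, then read the value at that index
f_valueAt(Data data, int targetTime) =>
    int index = array.binary_search_leftmost(data.times, targetTime)
    index := math.min(index, array.size(data.times) - 1)   // clamp if targetTime is beyond the last bar
    [array.get(data.values, index), array.get(data.times, index)]
// Example: the stored value closest to seven days before the current bar's open
[pastValue, pastTime] = f_valueAt(d, time - 7 * 86400000)
plot(pastValue, "Close ~7 days back")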
Value search methods
There are several techniques programmers can use to identify historical values from corresponding timestamps. This library's functions include three different search methods to locate and retrieve values based on absolute times or relative time offsets:
Timestamp search
Find the value with the earliest saved timestamp closest to a specified timestamp.
Millisecond offset search
Find the value with the earliest saved timestamp closest to a specified number of milliseconds behind the current bar's opening time. This search method provides a time-based alternative to retrieving historical values at specific bar offsets.
Period offset search
Locate the value with the earliest saved timestamp closest to a defined period offset behind the current bar's opening time. The function calculates the span of the offset based on a period string. The "string" must contain one of the following unit tokens:
• "D" for days
• "W" for weeks
• "M" for months
• "Y" for years
• "YTD" for year-to-date, meaning the time elapsed since the beginning of the bar's opening year in the exchange time zone.
The period string can include a multiplier prefix for all supported units except "YTD" (e.g., "2W" for two weeks).
Note that the precise span covered by the "M", "Y", and "YTD" units varies across time. The "1M" period can cover 28, 29, 30, or 31 days, depending on the bar's opening month and year in the exchange time zone. The "1Y" period covers 365 or 366 days, depending on leap years. The "YTD" period's span changes with each new bar, because it always measures the time from the start of the current bar's opening year.
█ CALCULATIONS AND USE
This library's functions offer a flexible, structured approach to retrieving historical values at or near specific timestamps, millisecond offsets, or period offsets for different analytical needs.
See below for explanations of the exported functions and how to use them.
Retrieving single values
The library includes three functions that retrieve a single stored value using timestamp, millisecond offset, or period offset search methods:
• `valueAtTime()` – Locates the saved value with the earliest timestamp closest to a specified timestamp.
• `valueAtTimeOffset()` – Finds the saved value with the earliest timestamp closest to the specified number of milliseconds behind the current bar's opening time.
• `valueAtPeriodOffset()` – Finds the saved value with the earliest timestamp closest to the period-based offset behind the current bar's opening time.
Each function has two overloads for advanced and simple use cases. The first overload searches for a value in a user-specified `Data` object created by the `collectData()` function (see below). It returns a tuple containing the found value and the corresponding timestamp.
The second overload maintains a `Data` object internally to store and retrieve values for a specified `source` series. This overload returns a tuple containing the historical `source` value, the corresponding timestamp, and the current bar's `source` value, making it helpful for comparing past and present values from requested contexts.
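For example, a minimal usage sketch of the second `valueAtTime()` overload (the import path and version below are placeholders, and the timestamp is arbitrary):
//@version=5
indicator("valueAtTime usage sketch", overlay=true)
// Hypothetical import path and version; substitute the library's actual publisher/version
import TradingView/ValueAtTime/1 as vat
// Second overload: pass a source series plus a timestamp; data collection happens internally
[pastClose, foundTime, currentClose] = vat.valueAtTime(close, timestamp("2024-01-02T00:00:00"))
plot(pastClose, "Close near 2024-01-02")
plot(currentClose, "Current close")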
Retrieving multiple values
The library includes the following functions to retrieve values from multiple historical points in time, facilitating calculations and comparisons with values retrieved across several intervals:
• `getDataAtTimes()` – Locates a past `source` value for each item in a `timestamps` array. Each retrieved value's timestamp represents the earliest time closest to one of the specified timestamps.
• `getDataAtTimeOffsets()` – Finds a past `source` value for each item in a `timeOffsets` array. Each retrieved value's timestamp represents the earliest time closest to one of the specified millisecond offsets behind the current bar's opening time.
• `getDataAtPeriodOffsets()` – Finds a past value for each item in a `periods` array. Each retrieved value's timestamp represents the earliest time closest to one of the specified period offsets behind the current bar's opening time.
Each function returns a tuple with arrays containing the found `source` values and their corresponding timestamps. In addition, the tuple includes the current `source` value and the symbol's description, which also makes these functions helpful for multi-interval comparisons using data from requested contexts.
Processing period inputs
When writing scripts that retrieve historical values based on several user-specified period offsets, the most concise approach is to create a single text input that allows users to list each period, then process the "string" list into an array for use in the `getDataAtPeriodOffsets()` function.
This library includes a `getArrayFromString()` function to provide a simple way to process strings containing comma-separated lists of periods. The function splits the specified `str` by its commas and returns an array containing every non-empty item in the list with surrounding whitespace removed. View the example code to see how we use this function to process the value of a text area input.
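A short sketch of that input-processing step (again, the import path is a placeholder; the default list mirrors the example described below):
//@version=5
indicator("Period list input sketch")
// Hypothetical import path; substitute the library's actual publisher/version
import TradingView/ValueAtTime/1 as vat
string periodList = input.text_area("1W, 1M, 3M, 1Y, YTD", "Period list")
// Split the comma-separated list once, on the first bar, into a trimmed array of period strings
var array<string> periods = vat.getArrayFromString(periodList)
plot(array.size(periods), "Number of periods", display = display.data_window)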
Calculating period offset times
Because the exact amount of time covered by a specified period offset can vary, it is often helpful to verify the resulting times when using the `valueAtPeriodOffset()` or `getDataAtPeriodOffsets()` functions to ensure the calculations work as intended for your use case.
The library's `periodToTimestamp()` function calculates an offset timestamp from a given period and reference time. With this function, programmers can verify the time offsets in a period-based data search and use the calculated offset times in additional operations.
For periods with "D" or "W" units, the function calculates the time offset based on the absolute number of milliseconds the period covers (e.g., `86400000` for "1D"). For periods with "M", "Y", or "YTD" units, the function calculates an offset time based on the reference time's calendar date in the exchange time zone.
Collecting data
All the `getDataAt*()` functions, and the second overloads of the `valueAt*()` functions, collect and maintain data internally, meaning scripts do not require a separate `Data` object when using them. However, the first overloads of the `valueAt*()` functions do not collect data, because they retrieve values from a user-specified `Data` object.
For cases where a script requires a separate `Data` object for use with these overloads or other custom routines, this library exports the `collectData()` function. This function queues each bar's `source` value and opening timestamp into a `Data` object and returns the object's ID.
This function is particularly useful when searching for values from a specific series more than once. For instance, instead of using multiple calls to the second overloads of `valueAt*()` functions with the same `source` argument, programmers can call `collectData()` to store each bar's `source` and opening timestamp, then use the returned `Data` object's ID in calls to the first `valueAt*()` overloads to reduce memory usage.
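A hedged sketch of that reuse pattern (import path is a placeholder):
//@version=5
indicator("collectData reuse sketch", overlay=true)
// Hypothetical import path; substitute the library's actual publisher/version
import TradingView/ValueAtTime/1 as vat
// Collect each bar's close and opening timestamp once...
closeData = vat.collectData(close)
// ...then reuse the same Data object across several first-overload searches
[monthAgoClose, monthAgoTime] = vat.valueAtPeriodOffset(closeData, "1M")
[weekAgoClose, weekAgoTime]   = vat.valueAtTimeOffset(closeData, 7 * 86400000)
plot(monthAgoClose, "Close ~1M ago")
plot(weekAgoClose, "Close ~1W ago")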
The `collectData()` function and all the functions that collect data internally include two optional parameters for limiting the saved time-value pairs to a sliding window: `timeOffsetLimit` and `timeframeLimit`. When either has a non-na argument, the function restricts the collected data to the maximum number of recent bars covered by the specified millisecond- and timeframe-based intervals.
NOTE: All calls to the functions that collect data for a `source` series can execute up to once per bar or realtime tick, because each stored value requires a unique corresponding timestamp. Therefore, scripts cannot call these functions iteratively within a loop. If a call to these functions executes more than once inside a loop's scope, it causes a runtime error.
█ EXAMPLE CODE
The example code at the end of the script demonstrates one possible use case for this library's functions. The code retrieves historical price data at user-specified period offsets, calculates price returns for each period from the retrieved data, and then populates a table with the results.
The example code's process is as follows:
1. Input a list of periods – The user specifies a comma-separated list of period strings in the script's "Period list" input (e.g., "1W, 1M, 3M, 1Y, YTD"). Each item in the input list represents a period offset from the latest bar's opening time.
2. Process the period list – The example calls `getArrayFromString()` on the first bar to split the input list by its commas and construct an array of period strings.
3. Request historical data – The code uses a call to `getDataAtPeriodOffsets()` as the `expression` argument in a request.security() call to retrieve the closing prices of "1D" bars for each period included in the processed `periods` array.
4. Display information in a table – On the latest bar, the code uses the retrieved data to calculate price returns over each specified period, then populates a two-row table with the results. The cells for each return percentage are color-coded based on the magnitude and direction of the price change. The cells also include tooltips showing the compared daily bar's opening date in the exchange time zone.
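Condensed into a hedged sketch, the core of that flow looks roughly like this (import path is a placeholder; the full example additionally builds the color-coded table and tooltips):
//@version=5
indicator("Period returns sketch")
// Hypothetical import path; substitute the library's actual publisher/version
import TradingView/ValueAtTime/1 as vat
var array<string> periods = vat.getArrayFromString(input.text_area("1W, 1M, 1Y", "Period list"))
// Daily closes at each period offset; lookahead is acceptable here because results are
// only read on the last bar (see the NOTES section)
[pastCloses, pastTimes, dailyClose, description] = request.security(syminfo.tickerid, "1D",
  vat.getDataAtPeriodOffsets(periods, close), lookahead = barmerge.lookahead_on)
// Percent return over the first listed period
float firstReturn = na
if barstate.islast and not na(pastCloses) and array.size(pastCloses) > 0
    float firstPast = array.get(pastCloses, 0)
    firstReturn := (dailyClose - firstPast) / firstPast * 100
plot(firstReturn, "Return over first period (%)", display = display.data_window)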
█ NOTES
• This library's architecture relies on a user-defined type (UDT) for its data storage format. UDTs are blueprints from which scripts create objects, i.e., composite structures with fields containing independent values or references of any supported type.
• The library functions search through a `Data` object's `times` array using the array.binary_search_leftmost() function, which is more efficient than looping through collected data to identify matching timestamps. Note that this built-in works only for arrays with elements sorted in ascending order.
• Each function that collects data from a `source` series updates the values and times stored in a local `Data` object's arrays. If a single call to these functions were to execute in a loop, it would store multiple values with an identical timestamp, which can cause erroneous search behavior. To prevent looped calls to these functions, the library uses the `checkCall()` helper function in their scopes. This function maintains a counter that increases by one each time it executes on a confirmed bar. If the count exceeds the total number of bars, indicating the call executes more than once in a loop, it raises a runtime error.
• Typically, when requesting higher-timeframe data with request.security() while using barmerge.lookahead_on as the `lookahead` argument, the `expression` argument should be offset with the history-referencing operator to prevent lookahead bias on historical bars. However, the call in this script's example code enables lookahead without offsetting the `expression` because the script displays results only on the last historical bar and all realtime bars, where there is no future data to leak into the past. This call ensures the displayed results use the latest data available from the context on realtime bars.
Look first. Then leap.
█ EXPORTED TYPES
Data
A structure for storing successive timestamps and corresponding values from a dataset.
Fields:
times (array<int>) : An "int" array containing a UNIX timestamp for each value in the `values` array.
values (array<float>) : A "float" array containing values corresponding to the timestamps in the `times` array.
█ EXPORTED FUNCTIONS
getArrayFromString(str)
Splits a "string" into an array of substrings using the comma (`,`) as the delimiter. The function trims surrounding whitespace characters from each substring, and it excludes empty substrings from the result.
Parameters:
str (series string) : The "string" to split into an array based on its commas.
Returns: (array<string>) An array of trimmed substrings from the specified `str`.
periodToTimestamp(period, referenceTime)
Calculates a UNIX timestamp representing the point offset behind a reference time by the amount of time within the specified `period`.
Parameters:
period (series string) : The period string, which determines the time offset of the returned timestamp. The specified argument must contain a unit and an optional multiplier (e.g., "1Y", "3M", "2W", "YTD"). Supported units are:
- "Y" for years.
- "M" for months.
- "W" for weeks.
- "D" for days.
- "YTD" (Year-to-date) for the span from the start of the `referenceTime` value's year in the exchange time zone. An argument with this unit cannot contain a multiplier.
referenceTime (series int) : The millisecond UNIX timestamp from which to calculate the offset time.
Returns: (int) A millisecond UNIX timestamp representing the offset time point behind the `referenceTime`.
collectData(source, timeOffsetLimit, timeframeLimit)
Collects `source` and `time` data successively across bars. The function stores the information within a `Data` object for use in other exported functions/methods, such as `valueAtTimeOffset()` and `valueAtPeriodOffset()`. Any call to this function cannot execute more than once per bar or realtime tick.
Parameters:
source (series float) : The source series to collect. The function stores each value in the series with an associated timestamp representing its corresponding bar's opening time.
timeOffsetLimit (simple int) : Optional. A time offset (range) in milliseconds. If specified, the function limits the collected data to the maximum number of bars covered by the range, with a minimum of one bar. If the call includes a non-empty `timeframeLimit` value, the function limits the data using the largest number of bars covered by the two ranges. The default is `na`.
timeframeLimit (simple string) : Optional. A valid timeframe string. If specified and not empty, the function limits the collected data to the maximum number of bars covered by the timeframe, with a minimum of one bar. If the call includes a non-na `timeOffsetLimit` value, the function limits the data using the largest number of bars covered by the two ranges. The default is `na`.
Returns: (Data) A `Data` object containing collected `source` values and corresponding timestamps over the allowed time range.
method valueAtTime(data, timestamp)
(Overload 1 of 2) Retrieves value and time data from a `Data` object's fields at the index of the earliest timestamp closest to the specified `timestamp`. Callable as a method or a function.
Parameters:
data (series Data) : The `Data` object containing the collected time and value data.
timestamp (series int) : The millisecond UNIX timestamp to search. The function returns data for the earliest saved timestamp that is closest to the value.
Returns: ( ) A tuple containing the following data from the `Data` object:
- The stored value corresponding to the identified timestamp ("float").
- The earliest saved timestamp that is closest to the specified `timestamp` ("int").
valueAtTime(source, timestamp, timeOffsetLimit, timeframeLimit)
(Overload 2 of 2) Retrieves `source` and time information for the earliest bar whose opening timestamp is closest to the specified `timestamp`. Any call to this function cannot execute more than once per bar or realtime tick.
Parameters:
source (series float) : The source series to analyze. The function stores each value in the series with an associated timestamp representing its corresponding bar's opening time.
timestamp (series int) : The millisecond UNIX timestamp to search. The function returns data for the earliest bar whose timestamp is closest to the value.
timeOffsetLimit (simple int) : Optional. A time offset (range) in milliseconds. If specified, the function limits the collected data to the maximum number of bars covered by the range, with a minimum of one bar. If the call includes a non-empty `timeframeLimit` value, the function limits the data using the largest number of bars covered by the two ranges. The default is `na`.
timeframeLimit (simple string) : Optional. A valid timeframe string. If specified and not empty, the function limits the collected data to the maximum number of bars covered by the timeframe, with a minimum of one bar. If the call includes a non-na `timeOffsetLimit` value, the function limits the data using the largest number of bars covered by the two ranges. The default is `na`.
Returns: ( ) A tuple containing the following data:
- The `source` value corresponding to the identified timestamp ("float").
- The earliest bar's timestamp that is closest to the specified `timestamp` ("int").
- The current bar's `source` value ("float").
method valueAtTimeOffset(data, timeOffset)
(Overload 1 of 2) Retrieves value and time data from a `Data` object's fields at the index of the earliest saved timestamp closest to `timeOffset` milliseconds behind the current bar's opening time. Callable as a method or a function.
Parameters:
data (series Data) : The `Data` object containing the collected time and value data.
timeOffset (series int) : The millisecond offset behind the bar's opening time. The function returns data for the earliest saved timestamp that is closest to the calculated offset time.
Returns: ( ) A tuple containing the following data from the `Data` object:
- The stored value corresponding to the identified timestamp ("float").
- The earliest saved timestamp that is closest to `timeOffset` milliseconds before the current bar's opening time ("int").
valueAtTimeOffset(source, timeOffset, timeOffsetLimit, timeframeLimit)
(Overload 2 of 2) Retrieves `source` and time information for the earliest bar whose opening timestamp is closest to `timeOffset` milliseconds behind the current bar's opening time. Any call to this function cannot execute more than once per bar or realtime tick.
Parameters:
source (series float) : The source series to analyze. The function stores each value in the series with an associated timestamp representing its corresponding bar's opening time.
timeOffset (series int) : The millisecond offset behind the bar's opening time. The function returns data for the earliest bar's timestamp that is closest to the calculated offset time.
timeOffsetLimit (simple int) : Optional. A time offset (range) in milliseconds. If specified, the function limits the collected data to the maximum number of bars covered by the range, with a minimum of one bar. If the call includes a non-empty `timeframeLimit` value, the function limits the data using the largest number of bars covered by the two ranges. The default is `na`.
timeframeLimit (simple string) : Optional. A valid timeframe string. If specified and not empty, the function limits the collected data to the maximum number of bars covered by the timeframe, with a minimum of one bar. If the call includes a non-na `timeOffsetLimit` value, the function limits the data using the largest number of bars covered by the two ranges. The default is `na`.
Returns: ( ) A tuple containing the following data:
- The `source` value corresponding to the identified timestamp ("float").
- The earliest bar's timestamp that is closest to `timeOffset` milliseconds before the current bar's opening time ("int").
- The current bar's `source` value ("float").
method valueAtPeriodOffset(data, period)
(Overload 1 of 2) Retrieves value and time data from a `Data` object's fields at the index of the earliest timestamp closest to a calculated offset behind the current bar's opening time. The calculated offset represents the amount of time covered by the specified `period`. Callable as a method or a function.
Parameters:
data (series Data) : The `Data` object containing the collected time and value data.
period (series string) : The period string, which determines the calculated time offset. The specified argument must contain a unit and an optional multiplier (e.g., "1Y", "3M", "2W", "YTD"). Supported units are:
- "Y" for years.
- "M" for months.
- "W" for weeks.
- "D" for days.
- "YTD" (Year-to-date) for the span from the start of the current bar's year in the exchange time zone. An argument with this unit cannot contain a multiplier.
Returns: ( ) A tuple containing the following data from the `Data` object:
- The stored value corresponding to the identified timestamp ("float").
- The earliest saved timestamp that is closest to the calculated offset behind the bar's opening time ("int").
valueAtPeriodOffset(source, period, timeOffsetLimit, timeframeLimit)
(Overload 2 of 2) Retrieves `source` and time information for the earliest bar whose opening timestamp is closest to a calculated offset behind the current bar's opening time. The calculated offset represents the amount of time covered by the specified `period`. Any call to this function cannot execute more than once per bar or realtime tick.
Parameters:
source (series float) : The source series to analyze. The function stores each value in the series with an associated timestamp representing its corresponding bar's opening time.
period (series string) : The period string, which determines the calculated time offset. The specified argument must contain a unit and an optional multiplier (e.g., "1Y", "3M", "2W", "YTD"). Supported units are:
- "Y" for years.
- "M" for months.
- "W" for weeks.
- "D" for days.
- "YTD" (Year-to-date) for the span from the start of the current bar's year in the exchange time zone. An argument with this unit cannot contain a multiplier.
timeOffsetLimit (simple int) : Optional. A time offset (range) in milliseconds. If specified, the function limits the collected data to the maximum number of bars covered by the range, with a minimum of one bar. If the call includes a non-empty `timeframeLimit` value, the function limits the data using the largest number of bars covered by the two ranges. The default is `na`.
timeframeLimit (simple string) : Optional. A valid timeframe string. If specified and not empty, the function limits the collected data to the maximum number of bars covered by the timeframe, with a minimum of one bar. If the call includes a non-na `timeOffsetLimit` value, the function limits the data using the largest number of bars covered by the two ranges. The default is `na`.
Returns: ( ) A tuple containing the following data:
- The `source` value corresponding to the identified timestamp ("float").
- The earliest bar's timestamp that is closest to the calculated offset behind the current bar's opening time ("int").
- The current bar's `source` value ("float").
getDataAtTimes(timestamps, source, timeOffsetLimit, timeframeLimit)
Retrieves `source` and time information for each bar whose opening timestamp is the earliest one closest to one of the UNIX timestamps specified in the `timestamps` array. Any call to this function cannot execute more than once per bar or realtime tick.
Parameters:
timestamps (array<int>) : An array of "int" values representing UNIX timestamps. The function retrieves `source` and time data for each element in this array.
source (series float) : The source series to analyze. The function stores each value in the series with an associated timestamp representing its corresponding bar's opening time.
timeOffsetLimit (simple int) : Optional. A time offset (range) in milliseconds. If specified, the function limits the collected data to the maximum number of bars covered by the range, with a minimum of one bar. If the call includes a non-empty `timeframeLimit` value, the function limits the data using the largest number of bars covered by the two ranges. The default is `na`.
timeframeLimit (simple string) : Optional. A valid timeframe string. If specified and not empty, the function limits the collected data to the maximum number of bars covered by the timeframe, with a minimum of one bar. If the call includes a non-na `timeOffsetLimit` value, the function limits the data using the largest number of bars covered by the two ranges. The default is `na`.
Returns: ( ) A tuple of the following data:
- An array containing a `source` value for each identified timestamp (array).
- An array containing an identified timestamp for each item in the `timestamps` array (array).
- The current bar's `source` value ("float").
- The symbol's description from `syminfo.description` ("string").
getDataAtTimeOffsets(timeOffsets, source, timeOffsetLimit, timeframeLimit)
Retrieves `source` and time information for each bar whose opening timestamp is the earliest one closest to one of the time offsets specified in the `timeOffsets` array. Each offset in the array represents the absolute number of milliseconds behind the current bar's opening time. Any call to this function cannot execute more than once per bar or realtime tick.
Parameters:
timeOffsets (array<int>) : An array of "int" values representing the millisecond time offsets used in the search. The function retrieves `source` and time data for each element in this array. For example, the array `[86400000, 604800000]` specifies that the function returns data for the timestamps closest to one day and one week behind the current bar's opening time.
source (series float) : The source series to analyze. The function stores each value in the series with an associated timestamp representing its corresponding bar's opening time.
timeOffsetLimit (simple int) : Optional. A time offset (range) in milliseconds. If specified, the function limits the collected data to the maximum number of bars covered by the range, with a minimum of one bar. If the call includes a non-empty `timeframeLimit` value, the function limits the data using the largest number of bars covered by the two ranges. The default is `na`.
timeframeLimit (simple string) : Optional. A valid timeframe string. If specified and not empty, the function limits the collected data to the maximum number of bars covered by the timeframe, with a minimum of one bar. If the call includes a non-na `timeOffsetLimit` value, the function limits the data using the largest number of bars covered by the two ranges. The default is `na`.
Returns: ( ) A tuple of the following data:
- An array containing a `source` value for each identified timestamp (array).
- An array containing an identified timestamp for each offset specified in the `timeOffsets` array (array).
- The current bar's `source` value ("float").
- The symbol's description from `syminfo.description` ("string").
getDataAtPeriodOffsets(periods, source, timeOffsetLimit, timeframeLimit)
Retrieves `source` and time information for each bar whose opening timestamp is the earliest one closest to a calculated offset behind the current bar's opening time. Each calculated offset represents the amount of time covered by a period specified in the `periods` array. Any call to this function cannot execute more than once per bar or realtime tick.
Parameters:
periods (array<string>) : An array of period strings, which determines the time offsets used in the search. The function retrieves `source` and time data for each element in this array. For example, the array `["1D", "1W", "1M"]` specifies that the function returns data for the timestamps closest to one day, week, and month behind the current bar's opening time. Each "string" in the array must contain a unit and an optional multiplier. Supported units are:
- "Y" for years.
- "M" for months.
- "W" for weeks.
- "D" for days.
- "YTD" (Year-to-date) for the span from the start of the current bar's year in the exchange time zone. An argument with this unit cannot contain a multiplier.
source (series float) : The source series to analyze. The function stores each value in the series with an associated timestamp representing its corresponding bar's opening time.
timeOffsetLimit (simple int) : Optional. A time offset (range) in milliseconds. If specified, the function limits the collected data to the maximum number of bars covered by the range, with a minimum of one bar. If the call includes a non-empty `timeframeLimit` value, the function limits the data using the largest number of bars covered by the two ranges. The default is `na`.
timeframeLimit (simple string) : Optional. A valid timeframe string. If specified and not empty, the function limits the collected data to the maximum number of bars covered by the timeframe, with a minimum of one bar. If the call includes a non-na `timeOffsetLimit` value, the function limits the data using the largest number of bars covered by the two ranges. The default is `na`.
Returns: ( ) A tuple of the following data:
- An array containing a `source` value for each identified timestamp (array).
- An array containing an identified timestamp for each period specified in the `periods` array (array).
- The current bar's `source` value ("float").
- The symbol's description from `syminfo.description` ("string").
Adaptive Candlestick Pattern Recognition System
█ INTRODUCTION
Nearly three years in the making, intermittently worked on in the few spare hours of weekends and time off, this is a passion project I undertook to flesh out my skills as a computer programmer. This script currently recognizes 85 different candlestick patterns ranging from one to five candles in length. It also performs statistical analysis on those patterns to determine prior performance and changes the coloration of those patterns based on that performance. In searching TradingView's script library for scripts similar to this one, I found a handful. However, when I reviewed the ones which were open source, I did not see many that truly captured the power of Pine Script or leveraged the way it works to create efficient and reliable code; this was one of the main driving factors for releasing this 5,000+ line behemoth as open source.
Please take the time to review this description and source code to utilize this script to its fullest potential.
█ CONCEPTS
This script covers the following topics: Candlestick Theory, Trend Direction, Higher Timeframes, Price Analysis, Statistic Analysis, and Code Design.
Candlestick Theory - This script focuses solely on the concept of Candlestick Theory: arrangements of candlesticks may form certain patterns that can potentially influence the future price action of assets which experience those patterns. A full list of patterns (grouped by pattern length) will be in its own section of this description. This script contains two modes of operation for identifying candlestick patterns, 'CLASSIC' and 'BREAKOUT'.
CLASSIC: In this mode, candlestick patterns will be identified whenever they appear. The user has a wide variety of inputs to manipulate that can change how certain patterns are identified and even enable alerts to notify themselves when these patterns appear. Each pattern selected to appear will have its Profit or Loss (P/L) calculated starting from the first candle open succeeding the pattern to a candle close specified some number of candles ahead. These P/L calculations are then collected for each pattern, and split among partitions of prior price action of the asset the script is currently applied to (more on that in Higher Timeframes).
BREAKOUT: In this mode, P/L calculations are held off until a breakout direction has been confirmed. The user may specify the number of candles ahead of a pattern's appearance (from one to five) that a pattern has to confirm a breakout in either an upward or downward direction. A breakout is constituted when there is a candle following the appearance of the pattern that closes above/at the highest high of the pattern, or below/at its lowest low. Only then will percent return calculations be performed for the pattern that's been identified, and these percent returns are broken up not only by the partition they had appeared in but also by the breakout direction itself. Patterns which do not breakout in either direction will be ignored, along with having their labels deleted.
In both of these modes, patterns may be overridden. Overrides occur when a smaller pattern has been detected and ends up becoming one (or more) of the candles of a larger pattern. A key example of this would be the Bearish Engulfing and the Three Outside Down patterns. A Three Outside Down necessitates a Bearish Engulfing as the first two candles in it, while the third candle closes lower. When a pattern is overridden, the return for that pattern will no longer be tracked. Overrides will not occur if the tail end of a larger pattern occurs at the beginning of a smaller pattern (Ex: if a Bullish Engulfing is formed by the third candle of a Three Outside Down plus the candle immediately following that pattern, the Three Outside Down will not be overridden).
Important Functionality Note: These patterns are only searched for at the most recently closed candle, not on the currently closing candle, which creates an offset of one for this script's execution. (SEE LIMITATIONS)
Trend Direction - Many of the patterns require a trend direction prior to their appearance. Noting TradingView's own publication of candlestick patterns, I utilize a similar method for determining trend direction. Moving Averages are used to determine which trend is currently taking place for candlestick patterns to be sought out. The user has access to two Moving Averages which they may individually modify the following for each: Moving Average type (list of 9), their length, width, source values, and all variables associated with two special Moving Averages (Least Squares and Arnaud Legoux).
There are 3 settings for these Moving Averages, the first two switch between the two Moving Averages, and the third uses both. When using individual Moving Averages, the user may select a 'price point' to compare against the Moving Average (default is close). This price point is compared to the Moving Average at the candles prior to the appearance of candle patterns. Meaning: The close compared to the Moving Average two candles behind determines the trend direction used for Candlestick Analysis of one candle patterns; three candles behind for two candle patterns and so on. If the selected price point is above the Moving Average, then the current trend is an 'uptrend', 'downtrend' otherwise.
The third setting using both Moving Averages will compare the lengths of each, and trend direction is determined by the shorter Moving Average compared to the longer one. If the shorter Moving Average is above the longer, then the current trend is an 'uptrend', 'downtrend' otherwise. If the lengths of the Moving Averages are the same, or both Moving Averages are Symmetrical, then MA1 will be used by default. (SEE LIMITATIONS)
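As a minimal illustration of that comparison, assuming a simple SMA and the default 'close' price point (the script itself offers nine MA types, two configurable MAs, and a dual-MA mode):
//@version=5
indicator("Trend direction sketch", overlay=true)
maLen = input.int(50, "MA length")
ma = ta.sma(close, maLen)
// For a two-candle pattern, trend is judged three candles back: price point vs. the MA
offset = 3   // pattern length (2) + 1
uptrend   = close[offset] > ma[offset]
downtrend = close[offset] < ma[offset]
plotchar(uptrend, "Uptrend before pattern", "▲", location.bottom, color.green, size = size.tiny)
plotchar(downtrend, "Downtrend before pattern", "▼", location.top, color.red, size = size.tiny)
plot(ma, "MA", color = color.gray)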
Higher Timeframes - This script employs the use of Higher Timeframes with a few request.security calls. The purpose of these calls is strictly for the partitioning of an asset's chart, splitting the returns of patterns into three separate groups. The four inputs in control of this partitioning split the chart based on: A given resolution to grab values from, the length of time in that resolution, and 'Upper' and 'Lower Limits' which split the trading range provided by that length of time in that resolution that forms three separate groups. The default values for these four inputs will partition the current chart by the yearly high-low range where: the 'Upper' partition is the top 20% of that trading range, the 'Middle' partition is 80% to 33% of the trading range, and the 'Lower' partition covers the trading range within 33% of the yearly low.
Patterns which are identified by this script will have their returns grouped together based on which partition they had appeared in. For example, a Bullish Engulfing which occurs within a third of the yearly low will have its return placed separately from a Bullish Engulfing that occurred within 20% of the yearly high. The idea is that certain patterns may perform better or worse depending on when they had occurred during an asset's trading range.
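Under the default inputs, the partitioning is roughly equivalent to the following sketch (the actual script exposes the resolution, length, and limits as inputs and differs in detail):
//@version=5
indicator("Partition sketch")
upperLimit = input.float(80.0, "Upper limit (% of range)")
lowerLimit = input.float(33.0, "Lower limit (% of range)")
// Yearly trading range pulled from a higher timeframe
[yearHigh, yearLow] = request.security(syminfo.tickerid, "12M", [high, low])
positionPct = (close - yearLow) / (yearHigh - yearLow) * 100
// 0 = Upper (top 20%), 1 = Middle, 2 = Lower (within 33% of the low) under the defaults
partition = positionPct >= upperLimit ? 0 : positionPct >= lowerLimit ? 1 : 2
plot(partition, "Partition", display = display.data_window)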
Price Analysis - Price Analysis is a major part of this script's functionality as it can fundamentally change how patterns are shown to the user. The settings related to Price Analysis include setting the number of candles ahead of a pattern's appearance to determine the return of that pattern. In 'BREAKOUT' mode, an additional setting allows the user to specify where the P/L calculation will begin for a pattern that had appeared and confirmed. (SEE LIMITATIONS)
The calculation for percent returns of patterns is illustrated with the following pseudo-code (CLASSIC mode, this is a simplified version of the actual code):
type patternObj
    int ID
    int partition
type returnsArray
    array<float> returns
// No pattern found = na returned
patternObj TEST_VAL = f_FindPattern()
priorTestVal = TEST_VAL
if not na(priorTestVal)
    pnlMatrixRow  = priorTestVal.ID
    pnlMatrixCol  = priorTestVal.partition
    matrixReturn  = matrix.get(PERCENT_RETURNS, pnlMatrixRow, pnlMatrixCol)
    percentReturn = ((close - open) / open) * 100   // percent
    array.push(matrixReturn.returns, percentReturn)
Statistic Analysis - This script uses Pine's built-in array functions to conduct the Statistic Analysis for patterns. When a pattern is found and its P/L calculation is complete, its return is added to a 'Return Array' User-Defined-Type that contains numerous fields which retain information on a pattern's prior performance. The actual UDT is as follows:
type returnArray
    array<float> returns    = na
    int          size       = 0
    float        avg        = 0
    float        median     = 0
    float        stdDev     = 0
    array<int>   polarities = na
All values within this UDT will be updated when a return is added to it (some based on user input). The array.avg, array.median, and array.stdev functions will be run and their results saved into the respective fields after a return is placed in the 'returns' array. The 'polarities' integer array is what will be changed based on user input. The user specifies two different percentages that declare 'Positive' and 'Negative' returns for patterns. When a pattern returns above, below, or in between these two values, different indices of this array will be incremented to reflect the kind of return that pattern had just experienced.
These values (plus the full name, partition the pattern occurred in, and a 95% confidence interval of expected returns) will be displayed to the user on the tooltip of the labels that identify patterns. Simply scroll over the pattern label to view each of these values.
Code Design - Overall this script is as much of an art piece as it is functional. Its design features numerous depictions of ASCII Art that illustrate what is being attempted by the functions that identify patterns, and an incalculable amount of time was spent rewriting portions of code to improve its efficiency. Admittedly, this final version is nearly 1,000 lines shorter than a previous version (one which took nearly 30 seconds after compilation to run, and didn't do nearly half of what this version does). The use of UDTs, especially the 'patternObj' one crafted and redesigned from the Hikkake Hunter 2.0 I published last month, played a significant role in making this script run efficiently. There is a slight rigidity in some of this code mainly around pattern IDs which are responsible for displaying the abbreviation for patterns (as well as the full names under the tooltips, and the matrix row position for holding returns), as each is hard-coded to correspond to that pattern.
However, one thing I would like to mention is the extensive use of global variables for pattern detection. Many scripts I had looked over for ideas on how to identify candlestick patterns had the same idea; break the pattern into a set of logical 'true/false' statements derived from historically referencing candle OHLC values. Some scripts which identified upwards of 20 to 30 patterns would reference Pine's built-in OHLC values for each pattern individually, potentially requesting information from TradingView's servers numerous times that could easily be saved into a variable for re-use and only requested once per candle (what this script does).
█ FEATURES
This script features a massive amount of switches, options, floating point values, detection settings, and methods for identifying/tailoring pattern appearances. All modifiable inputs for patterns are grouped together based on the number of candles they contain. Other inputs (like those for statistics settings and coloration) are grouped separately and presented in a way I believe makes the most sense.
Not mentioned above is the coloration settings. One of the aims of this script was to make patterns visually signify their behavior to the user when they are identified. Each pattern has its own collection of returns which are analyzed and compared to the inputs of the user. The user may choose the colors for bullish, neutral, and bearish patterns. They may also choose the minimum number of patterns needed to occur before assigning a color to that pattern based on its behavior; a color for patterns that have not met this minimum number of occurrences yet, and a color for patterns that are still processing in BREAKOUT mode.
There are also an additional three settings which alter the color scheme for patterns: Statistic Point-of-Reference, Adaptive coloring, and Hard Limiting. The Statistic Point-of-Reference decides which value (average or median) will be compared against the 'Negative' and 'Positive Return Tolerance'(s) to guide the coloration of the patterns (or for Adaptive Coloring, the generation of a color gradient).
Adaptive Coloring will have this script produce a gradient that patterns will be colored along. The more bullish or bearish a pattern is, the further along the gradient those patterns will be colored starting from the 'Neutral' color (hard lined at the value of 0%: values above this will be colored bullish, bearish otherwise). When Adaptive Coloring is enabled, this script will request the highest and lowest values (these being the Statistic Point-of-Reference) from the matrix containing all returns and rewrite global variables tied to the negative and positive return tolerances. This means that all patterns identified will be compared with each other to determine bullish/bearishness in Adaptive Coloring.
Hard Limiting will prevent these global variables from being rewritten, so patterns whose Statistic Point-of-Reference exceed the return tolerances will be fully colored the bullish or bearish colors instead of a generated gradient color. (SEE LIMITATIONS)
Apart from the Candle Detection Modes (CLASSIC and BREAKOUT), there's an additional two inputs which modify how this script behaves grouped under a "MASTER DETECTION SETTINGS" tab. These two "Pattern Detection Settings" are 'SWITCHBOARD' and 'TARGET MODE'.
SWITCHBOARD: Every single pattern has a switch that is associated with its detection. When a switch is enabled, the code which searches for that pattern will be run. With the Pattern Detection Setting set to this, all patterns that have their switches enabled will be sought out and shown.
TARGET MODE: There is an additional setting which operates on top of 'SWITCHBOARD' that singles out an individual pattern the user specifies through a drop down list. The names of every pattern recognized by this script will be present along with an identifier that shows the number of candles in that pattern (Ex: " (# candles)"). All patterns enabled in the switchboard will still have their returns measured, but only the pattern selected from the "Target Pattern" list will be shown. (SEE LIMITATIONS)
The vast majority of other features are held in the one, two, and three candle pattern sections.
For one-candle patterns, there are:
3 — Settings related to defining 'Tall' candles:
The number of candles to sample for previous candle-size averages.
The type of comparison done for 'Tall' Candles: Settings are 'RANGE' and 'BODY'.
The 'Tolerance' for tall candles, specifying what percent of the 'average' size candles must exceed to be considered 'Tall'.
When 'Tall Candle Setting' is set to RANGE, the high-low ranges are what the current candle range will be compared against to determine if a candle is 'Tall'. Otherwise the candle bodies (absolute value of the close - open) will be compared instead. (SEE LIMITATIONS) A sketch of this check appears after the one-candle settings below.
Hammer Tolerance - How large a 'discarded wick' may be before it disqualifies a candle from being a 'Hammer'.
Discarded wicks are compared to the size of the Hammer's candle body and are dependent upon the body's center position. Hammer bodies closer to the high of the candle will have the upper wick used as its 'discarded wick', otherwise the lower wick is used.
9 — Doji Settings, some pulled from an old Doji Hunter I made a while back:
Doji Tolerance - How large the body of a candle may be compared to the range to be considered a 'Doji'.
Ignore N/S Dojis - Turns off Trend Direction for non-special Dojis.
GS/DF Doji Settings - 2 Inputs that enable and specify how large wicks that typically disqualify Dojis from being 'Gravestone' or 'Dragonfly' Dojis may be.
4 Settings related to 'Long Wick Doji' candles detailed below.
A Tolerance for 'Rickshaw Man' Dojis specifying how close the center of the body must be to the range to be valid.
The 4 settings the user may modify for 'Long Legged' Dojis are: A Sample Base for determining the previous average of wicks, a Sample Length specifying how far back to look for these averages, a Behavior Setting to define how 'Long Legged' Dojis are recognized, and a tolerance to specify how large in comparison to the prior wicks a Doji's wicks must be to be considered 'Long Legged'.
The 'Sample Base' list has two settings:
RANGE: The wicks of prior candles are compared to their candle ranges and the 'wick averages' will be what the average percent of ranges were in the sample.
WICKS: The size of the wicks themselves are averaged and returned for comparing against the current wicks of a Doji.
The 'Behavior' list has three settings:
ONE: Only one wick length needs to exceed the average by the tolerance for a Doji to be considered 'Long Legged'.
BOTH: Both wick lengths need to exceed the average of the tolerance of their respective wicks (upper wicks are compared to upper wicks, lower wicks compared to lower) to be considered 'Long Legged'.
AVG: Both wicks and the averages of the previous wicks are added together, divided by two, and compared. If the 'average' of the current wicks exceeds this combined average of prior wicks by the tolerance, then this would constitute a valid 'Long Legged' Doji. (For Dojis in general - SEE LIMITATIONS)
The final input is one related to candle patterns which require a Marubozu candle in them. The two settings for this input are 'INCLUSIVE' and 'EXCLUSIVE'. If INCLUSIVE is selected, any opening/closing variant of Marubozu candles will be allowed in the patterns that require them.
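Here is the 'Tall' candle check from the start of this list expressed as a hedged sketch (the default values, and whether the current candle is excluded from the sampled average, are assumptions):
//@version=5
indicator("Tall candle sketch", overlay=true)
sampleLen = input.int(14, "Candle sample length")
useRange  = input.bool(true, "Compare RANGE (true) or BODY (false)")
tolerance = input.float(125.0, "Tall tolerance (% of average)")
candleSize = useRange ? high - low : math.abs(close - open)
avgSize    = ta.sma(candleSize, sampleLen)
// 'Tall' when the current candle exceeds the prior sample's average by the tolerance
isTall = candleSize > avgSize[1] * tolerance / 100
plotchar(isTall, "Tall candle", "T", location.abovebar, color.blue, size = size.tiny)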
For two-candle patterns, there are:
2 — Settings which define 'Engulfing' parameters:
Engulfing Setting - Two options, RANGE or BODY which sets up how one candle may 'engulf' the previous.
Inclusive Engulfing - Boolean which enables if 'engulfing' candles can be equal to the values needed to 'engulf' the prior candle.
For the 'Engulfing Setting':
RANGE: If the second candle's high-low range completely covers the high-low range of the prior candle, this is recognized as 'engulfing'.
BODY: If the second candle's open-close completely covers the open-close of the previous candle, this is recognized as 'engulfing'. (SEE LIMITATIONS) A sketch of both settings appears after the two-candle settings below.
4 — Booleans specifying different settings for a few patterns:
One which allows for 'opens within body' patterns to let the second candle's open/close values match the prior candles' open/close.
One which forces 'Kicking' patterns to have a gap if the Marubozu setting is set to 'INCLUSIVE'.
And Two which dictate if the individual candles in 'Stomach' patterns need to be 'Tall'.
8 — Floating point values which affect 11 different patterns:
One which determines the distance the close of the first candle in a 'Hammer Inverted' pattern must be to the low to be considered valid.
One which affects how close the opens/closes need to be for all 'Lines' patterns (Bull/Bear Meeting/Separating Lines).
One that allows some leeway with the 'Matching Low' pattern (gives a small range the second candle close may be within instead of needing to match the previous close).
Three tolerances for On Neck/In Neck patterns (2 and 1 respectively).
A tolerance for the Thrusting pattern which give a range the close the second candle may be between the midpoint and close of the first to be considered 'valid'.
A tolerance for the two Tweezers patterns that specifies how close the highs and lows of the patterns need to be to each other to be 'valid'.
The first On Neck tolerance specifies how large the lower wick of the first candle may be (as a % of that candle's range) before the pattern is invalidated. The second tolerance specifies how far up the lower wick to the close the second candle's close may be for this pattern. The third tolerance for the In Neck pattern determines how far into the body of the first candle the second may close to be 'valid'.
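And a hedged sketch of the RANGE/BODY engulfing checks referenced above (shown for a Bearish Engulfing; the script's actual candle-color and trend requirements are omitted):
//@version=5
indicator("Engulfing sketch", overlay=true)
useBody   = input.bool(false, "BODY setting (false = RANGE)")
inclusive = input.bool(true, "Inclusive engulfing")
top2 = useBody ? math.max(open, close) : high
bot2 = useBody ? math.min(open, close) : low
top1 = useBody ? math.max(open[1], close[1]) : high[1]
bot1 = useBody ? math.min(open[1], close[1]) : low[1]
// Second candle 'engulfs' the first when its extremes cover the first candle's extremes
engulfs = inclusive ? top2 >= top1 and bot2 <= bot1 : top2 > top1 and bot2 < bot1
bearishEngulf = engulfs and close < open and close[1] > open[1]
plotshape(bearishEngulf, "Bearish Engulfing", shape.triangledown, location.abovebar, color.red, size = size.tiny)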
For the remaining patterns (3, 4, and 5 candles), there are:
3 — Settings for the Deliberation pattern:
A boolean which forces the open of the third candle to gap above the close of the second.
A tolerance which changes the proximity of the third candle's open to the second candle's close in this pattern.
A tolerance that sets the maximum size the third candle may be compared to the average of the first two candles.
One boolean value for the Two Crows patterns (standard and Upside Gapping) that forces the first two candles in the patterns to completely gap if disabled (candle 1's close < candle 2's low).
10 — Floating point values for the remaining patterns:
One tolerance defining how much the size of each candle in the Identical Black Crows pattern may deviate from their average size to be considered valid.
One tolerance for setting how close the opens/closes of certain three candle patterns may be to each other's opens/closes.*
Three floating point values that affect the Three Stars in the South pattern.
One tolerance for the Side-by-Side patterns - looks at the second and third candle closes.
One tolerance for the Stick Sandwich pattern - looks at the first and third candle closes.
A floating value that sizes the Concealing Baby Swallow pattern's 3rd candle wick.
Two values for the Ladder Bottom pattern which define a range that the third candle's wick size may be.
* This affects the Three Black Crows (non-identical) and Three White Soldiers patterns, each of which requires the opens and closes of every candle to be near each other.
The first tolerance of the Three Stars in the South pattern affects the first candle body's center position, and defines the level it must be above to be considered valid. The second tolerance specifies how close the second candle must be to this same position, as well as how much the ratio of the candle's body to its range may deviate from that of the first candle. The third restricts how large the second candle's range may be in comparison to the first (preventing the pattern from being recognized if the second candle is similar to the first but larger).
The last two floating point values define upper and lower limits to the wick size of a Ladder Bottom's fourth candle to be considered valid.
█ HOW TO USE
While there are many moving parts to this script, I attempted to set the default values with what I believed may help identify the most patterns within reasonable definitions. When this script is applied to a chart, the Candle Detection Mode (along with the BREAKOUT settings) and all candle switches must be confirmed before patterns are displayed. All switches are on by default, so this gives the user an opportunity to pick which patterns to identify first before playing around in the settings.
All of the settings/inputs described above are meant for experimentation. I encourage the user to tweak these values at will to find which set ups work best for whichever charts they decide to apply these patterns to.
Refer to the patterns themselves during experimentation. The statistical information provided in the tooltips of the patterns is meant to help guide input decisions. The breadth of candlestick theory is vast, and this was an attempt at capturing what I could from its sea of information.
█ LIMITATIONS
DISCLAIMER: While it may seem a bit paradoxical that this script aims to use past performance to potentially measure future results, past performance is not indicative of future results . Markets are highly adaptive and often unpredictable. This script is meant as an informational tool to show how patterns may behave. There is no guarantee that confidence intervals (or any other metric measured with this script) are accurate to the performance of patterns; caution must be exercised with all patterns identified regardless of how much information regarding prior performance is available.
Candlestick Theory - In the name, Candlestick Theory is a theory , and all theories come with their own limits. Some patterns identified by this script may be completely useless/unprofitable/unpredictable regardless of whatever combination of settings are used to identify them. However, if I truly believed this theory had no merit, this script would not exist. It is important to understand that this is a tool meant to be utilized with an array of others to procure positive (or negative, looking at you, short sellers ) results when navigating the complex world of finance.
To address the functionality note however, this script has an offset of 1 by default. Patterns will not be identified on the currently closing candle, only on the candle which has most recently closed. Attempting to have this script do both (offset by one and identify on close) led to more trouble than it was worth. I personally just want users to be aware that patterns will not be identified immediately when they appear.
Trend Direction - Moving Averages - There is a small quirk with how MA settings will be adjusted if the user inputs two moving averages of the same length when the "MA Setting" is set to 'BOTH'. If the Moving Averages have the same length, this script will default to only using MA 1, regardless of whether the types of Moving Averages are different. I will experiment in the future to alleviate/reduce this restriction.
Price Analysis - BREAKOUT mode - With how identifying patterns with a look-ahead confirmation works, the percent returns for patterns that break out in either direction will be calculated on the same candle regardless of whether P/L Offset is set to 'FROM CONFIRMATION' or 'FROM APPEARANCE'. This same issue is present in the Hikkake Hunter script mentioned earlier. This does not mean the P/L calculations are incorrect; the offset for the calculation is set by the number of candles required to confirm the pattern if 'FROM APPEARANCE' is selected. It just means that these two different P/L calculations will complete at the same time, independent of the setting that has been selected.
Adaptive Coloring/Hard Limiting - Hard Limiting is only used with Adaptive Coloring and has no effect outside of it. If Hard Limiting is used, it is recommended to increase the 'Positive' and 'Negative' return tolerance values as a pattern's bullish/bearishness may be disproportionately represented with the gradient generated under a hard limit.
TARGET MODE - This mode will break rules regarding patterns that are overridden on purpose. If a pattern selected in TARGET mode would have otherwise been absorbed by a larger pattern, it will have that pattern's percent return calculated; potentially leading to duplicate returns being included in the matrix of all returns recognized by this script.
'Tall' Candle Setting - This is a wide-reaching setting, as approximately 30 different patterns or so rely on defining 'Tall' candles. Changing how 'Tall' candles are defined whether by the tolerance value those candles need to exceed or by the values of the candle used for the baseline comparison (RANGE/BODY) can wildly affect how this script functions under certain conditions. Refer to the tooltip of these settings for more information on which specific patterns are affected by this.
Doji Settings - There are roughly 10 or so two to three candle patterns which have Dojis as a part of them. If all Dojis are disabled, it will prevent some of these larger patterns from being recognized. This is a dependency issue that I may address in the future.
'Engulfing' Setting - Functionally, the two 'Engulfing' settings are quite different. Because of this, the 'RANGE' setting may cause certain patterns that would otherwise be valid under textbook and online references/definitions to not be recognized as such (like the Upside Gap Two Crows or Three Outside Down).
█ PATTERN LIST
This script recognizes 85 patterns upon initial release. I am open to adding additional patterns to it in the future and any comments/suggestions are appreciated. It recognizes:
15 — 1 Candle Patterns
4 Hammer type patterns: Regular Hammer, Takuri Line, Shooting Star, and Hanging Man
9 Doji Candles: Regular Dojis, Northern/Southern Dojis, Gravestone/Dragonfly Dojis, Gapping Up/Down Dojis, and Long-Legged/Rickshaw Man Dojis
White/Black Long Days
32 — 2 Candle Patterns
4 Engulfing type patterns: Bullish/Bearish Engulfing and Last Engulfing Top/Bottom
Dark Cloud Cover
Bullish/Bearish Doji Star patterns
Hammer Inverted
Bullish/Bearish Haramis + Cross variants
Homing Pigeon
Bullish/Bearish Kicking
4 Lines type patterns: Bullish/Bearish Meeting/Separating Lines
Matching Low
On/In Neck patterns
Piercing pattern
Shooting Star (2 Lines)
Above/Below Stomach patterns
Thrusting
Tweezers Top/Bottom patterns
Two Black Gapping
Rising/Falling Window patterns
29 — 3 Candle Patterns
Bullish/Bearish Abandoned Baby patterns
Advance Block
Collapsing Doji Star
Deliberation
Upside/Downside Gap Three Methods patterns
Three Inside/Outside Up/Down patterns (4 total)
Bullish/Bearish Side-by-Side patterns
Morning/Evening Star patterns + Doji variants
Stick Sandwich
Downside/Upside Tasuki Gap patterns
Three Black Crows + Identical variation
Three White Soldiers
Three Stars in the South
Bullish/Bearish Tri-Star patterns
Two Crows + Upside Gap variant
Unique Three River Bottom
3 — 4 Candle Patterns
Concealing Baby Swallow
Bullish/Bearish Three Line Strike patterns
6 — 5 Candle Patterns
Bullish/Bearish Breakaway patterns
Ladder Bottom
Mat Hold
Rising/Falling Three Methods patterns
█ WORKS CITED
Because of the amount of time needed to complete this script, I am unable to provide exact dates for when some of these references were used. I will also not provide every single reference, as citing a reference for each individual pattern and the place it was reviewed would lead to a bibliography larger than this script and its description combined. There were five major resources I used when building this script: one book, two websites (for various different reasons including patterns, moving averages, and various other articles of information), various scripts from TradingView's public library (including TradingView's own source code for *all* candle patterns), and the Pine Script reference manual.
Bulkowski, Thomas N. Encyclopedia of Candlestick Patterns. Hoboken, New Jersey: John Wiley & Sons Inc., 2008. E-book (Google Books).
Various. Numerous webpages. CandleScanner. 2023. Online. Accessed 2020-2023.
Various. Numerous webpages. Investopedia. 2023. Online. Accessed 2020-2023.
█ ACKNOWLEDGEMENTS
I want to take the time here to thank all of my friends and family, both online and in real life, for the support they've given me over the last few years in this endeavor. My pets who tried their hardest to keep me from completing it. And work for the grit to continue pushing through until this script's completion.
This belongs to me just as much as it does anyone else. Whether you are an institutional trader, gold bug hedging against the dollar, retail ape who got in on a squeeze, or just parents trying to grow their retirement/save for the kids. This belongs to everyone.
Private Beta for new features to be tested can be found here .
Vires In Numeris
`security()` revisited [PineCoders]NOTE
The non-repainting technique in this publication that relies on bar states is now deprecated, as we have identified inconsistencies that undermine its credibility as a universal solution. The outputs that use the technique are still available for reference in this publication. However, we do not endorse its usage. See this publication for more information about the current best practices for requesting HTF data and why they work.
█ OVERVIEW
This script presents a new function to help coders use security() in both repainting and non-repainting modes. We revisit this often misunderstood and misused function, and explain its behavior in different contexts, in the hope of dispelling some of the coder lore surrounding it. The function is incredibly powerful, yet misused, it can become a dangerous WMD and an instrument of deception, for both coders and traders.
We will discuss:
• How to use our new `f_security()` function.
• The behavior of Pine code and security() on the three very different types of bars that make up any chart.
• Why what you see on a chart is a simulation, and should be taken with a grain of salt.
• Why we are presenting a new version of a function handling security() calls.
• Other topics of interest to coders using higher timeframe (HTF) data.
█ WARNING
We have tried to deliver a function that is simple to use and will, in non-repainting mode, produce reliable results for both experienced and novice coders. If you are a novice coder, stick to our recommendations to avoid getting into trouble, and DO NOT change our `f_security()` function when using it. Use `false` as the function's last argument and refrain from requesting timeframes lower than the chart's. To call our function to fetch a non-repainting value of close from the 1D timeframe, use:
f_security(_sym, _res, _src, _rep) => security(_sym, _res, _src[_rep ? 0 : barstate.isrealtime ? 1 : 0])[_rep ? 0 : barstate.isrealtime ? 0 : 1]
previousDayClose = f_security(syminfo.tickerid, "D", close, false)
If that's all you're interested in, you are done.
If you choose to ignore our recommendation and use the function in repainting mode by changing the `false` in there for `true`, we sincerely hope you read the rest of our ramblings before you do so, to understand the consequences of your choice.
Let's now have a look at what security() is showing you. There is a lot to cover, so buckle up! But before we dig in, one last thing.
What is a chart?
A chart is a graphic representation of events that occur in markets. As any representation, it is not reality, but rather a model of reality. As Scott Page eloquently states in The Model Thinker : "All models are wrong; many are useful". Having in mind that both chart bars and plots on our charts are imperfect and incomplete renderings of what actually occurred in realtime markets puts us coders in a place from where we can better understand the nature of, and the causes underlying the inevitable compromises necessary to build the data series our code uses, and print chart bars.
Traders or coders complaining that charts do not reflect reality act like someone who would complain that the word "dog" is not a real dog. Let's recognize that we are dealing with models here, and try to understand them the best we can. Sure, models can be improved; TradingView is constantly improving the quality of the information displayed on charts, but charts nevertheless remain mere translations. Plots of data fetched through security() being modelized renderings of what occurs at higher timeframes, coders will build more useful and reliable tools for both themselves and traders if they endeavor to perfect their understanding of the abstractions they are working with. We hope this publication helps you in this pursuit.
█ FEATURES
This script's "Inputs" tab has four settings:
• Repaint : Determines whether the functions will use their repainting or non-repainting mode.
Note that the setting will not affect the behavior of the yellow plot, as it always repaints.
• Source : The source fetched by the security() calls.
• Timeframe : The timeframe used for the security() calls. If it is lower than the chart's timeframe, a warning appears.
• Show timeframe reminder : Displays a reminder of the timeframe after the last bar.
█ THE CHART
The chart shows two different pieces of information and we want to discuss other topics in this section, so we will be covering:
A — The type of chart bars we are looking at, indicated by the colored band at the top.
B — The plots resulting from calling security() with the close price in different ways.
C — Points of interest on the chart.
A — Chart bars
The colored band at the top shows the three types of bars that any chart on a live market will print. It is critical for coders to understand the important distinctions between each type of bar:
1 — Gray : Historical bars, which are bars that were already closed when the script was run on them.
2 — Red : Elapsed realtime bars, i.e., realtime bars that have run their course and closed.
The state of script calculations showing on those bars is that of the last time they were made, when the realtime bar closed.
3 — Green : The realtime bar. Only the rightmost bar on the chart can be the realtime bar at any given time, and only when the chart's market is active.
Refer to the Pine User Manual's Execution model page for a more detailed explanation of these types of bars.
B — Plots
The chart shows the result of letting our 5sec chart run for a few minutes with the following settings: "Repaint" = "On" (the default is "Off"), "Source" = `close` and "Timeframe" = 1min. The five lines plotted are the following. They have progressively thinner widths:
1 — Yellow : A normal, repainting security() call.
2 — Silver : Our recommended security() function.
3 — Fuchsia : Our recommended way of achieving the same result as our security() function, for cases when the source used is a function returning a tuple.
4 — White : The method we previously recommended in our MTF Selection Framework , which uses two distinct security() calls.
5 — Black : A lame attempt at fooling traders that MUST be avoided.
All lines except the first one in yellow will vary depending on the "Repaint" setting in the script's inputs. The first plot does not change because, contrary to all other plots, it contains no conditional code to adapt to repainting/no-repainting modes; it is a simple security() call showing its default behavior.
C — Points of interest on the chart
Historical bars do not show actual repainting behavior
To appreciate what a repainting security() call will plot in realtime, one must look at the realtime bar and at elapsed realtime bars, the bars where the top line is green or red on the chart at the top of this page. There you can see how the plots go up and down, following the close value of each successive chart bar making up a single bar of the higher timeframe. You would see the same behavior in "Replay" mode. In the realtime bar, the movement of repainting plots will vary with the source you are fetching: open will not move after a new timeframe opens, low and high will change when a new low or high are found, close will follow the last feed update. If you are fetching a value calculated by a function, it may also change on each update.
Now notice how different the plots are on historical bars. There, the plot shows the close of the previously completed timeframe for the whole duration of the current timeframe, until on its last bar the price updates to the current timeframe's close when it is confirmed (if the timeframe's last bar is missing, the plot will only update on the next timeframe's first bar). That last bar is the only one showing where the plot would end if that timeframe's bars had elapsed in realtime. If one doesn't understand this, one cannot properly visualize how his script will calculate in realtime when using repainting. Additionally, as published scripts typically show charts where the script has only run on historical bars, they are, in fact, misleading traders who will naturally assume the script will behave the same way on realtime bars.
Non-repainting plots are more accurate on historical bars
Now consider this chart, where we are using the same settings as on the chart used to publish this script, except that we have turned "Repainting" off this time:
The yellow line here is our reference, repainting line, so although repainting is turned off, it is still repainting, as expected. Because repainting is now off, however, plots on historical bars show the previous timeframe's close until the first bar of a new timeframe, at which point the plot updates. This correctly reflects the behavior of the script in the realtime bar, where because we are offsetting the series by one, we are always showing the previously calculated—and thus confirmed—higher timeframe value. This means that in realtime, we will only get the previous timeframe's values one bar after the timeframe's last bar has elapsed, at the open of the first bar of a new timeframe. Historical and elapsed realtime bars will not actually show this nuance because they reflect the state of calculations made on their close , but we can see the plot update on that bar nonetheless.
► This more accurate representation on historical bars of what will happen in the realtime bar is one of the two key reasons why using non-repainting data is preferable.
The other is that in realtime, your script will be using more reliable data and behave more consistently.
Misleading plots
Valiant attempts by coders to show non-repainting, higher timeframe data updating earlier than on our chart are futile. If updates occur one bar earlier because coders use the repainting version of the function, then so be it, but they must then also accept that their historical bars are not displaying information that is as accurate. Not informing script users of this is to mislead them. Coders should also be aware that if they choose to use repainting data in realtime, they are sacrificing reliability to speed and may be running a strategy that behaves very differently from the one they backtested, thus invalidating their tests.
When, however, coders make what are supposed to be non-repainting plots plot artificially early on historical bars, as in examples "c4" and "c5" of our script, they would want us to believe they have achieved the miracle of time travel. Our understanding of the current state of science dictates that for now, this is impossible. Using such techniques in scripts is plainly misleading, and public scripts using them will be moderated. We are coding trading tools here—not video games. Elementary ethics prescribe that we should not mislead traders, even if it means not being able to show sexy plots. As the great Feynman said: You should not fool the layman when you're talking as a scientist.
You can readily appreciate the fantasy plot of "c4", the thinnest line in black, by comparing its supposedly non-repainting behavior between historical bars and realtime bars. After updating—by miracle—as early as the wide yellow line that is repainting, it suddenly moves in a more realistic place when the script is running in realtime, in synch with our non-repainting lines. The "c5" version does not plot on the chart, but it displays in the Data Window. It is even worse than "c4" in that it also updates magically early on historical bars, but goes on to evaluate like the repainting yellow line in realtime, except one bar late.
Data Window
The Data Window shows the values of the chart's plots, then the values of both the inside and outside offsets used in our calculations, so you can see them change bar by bar. Notice their differences between historical and elapsed realtime bars, and the realtime bar itself. If you do not know about the Data Window, have a look at this essential tool for Pine coders in the Pine User Manual's page on Debugging . The conditional expressions used to calculate the offsets may seem tortuous but their objective is quite simple. When repainting is on, we use this form, so with no offset on all bars:
security(ticker, i_timeframe, i_source[0])[0]
// which is equivalent to:
security(ticker, i_timeframe, i_source)
When repainting is off, we use two different and inverted offsets on historical bars and the realtime bar:
// Historical bars:
security(ticker, i_timeframe, i_source[0])[1]
// Realtime bar (and thus, elapsed realtime bars):
security(ticker, i_timeframe, i_source[1])[0]
The offsets in the first line show how we prevent repainting on historical bars without the need for the `lookahead` parameter. We use the value of the function call on the chart's previous bar. Since values between the repainting and non-repainting versions only differ on the timeframe's last bar, we can use the previous value so that the update only occurs on the timeframe's first bar, as it will in realtime when not repainting.
In the realtime bar, we use the second call, where the offsets are inverted. This is because if we used the first call in realtime, we would be fetching the value of the repainting function on the previous bar, so the close of the last bar. What we want, instead, is the data from the previous, higher timeframe bar , which has elapsed and is confirmed, and thus will not change throughout realtime bars, except on the first constituent chart bar belonging to a new higher timeframe.
After the offsets, the Data Window shows values for the `barstate.*` variables we use in our calculations.
█ NOTES
Why are we revisiting security() ?
For four reasons:
1 — We were seeing coders misuse our `f_secureSecurity()` function presented in How to avoid repainting when using security() .
Some novice coders were modifying the offset used with the history-referencing operator in the function, making it zero instead of one,
which to our horror, caused look-ahead bias when used with `lookahead = barmerge.lookahead_on`.
We wanted to present a safer function which avoids introducing the dreaded "lookahead" in the scripts of unsuspecting coders.
2 — The popularity of security() in screener-type scripts where coders need to use the full 40 calls allowed per script made us want to propose
a solid method of allowing coders to offer a repainting/no-repainting choice to their script users with only one security() call.
3 — We wanted to explain why some alternatives we see circulating are inadequate and produce misleading behavior.
4 — Our previous publication on security() focused on how to avoid repainting, yet many other considerations worthy of attention are not related to repainting.
Handling tuples
When sending function calls that return tuples with security(), our `f_security()` function will not work because Pine does not allow us to use the history-referencing operator with tuple return values. The solution is to integrate the inside offset into your function's arguments, use it to offset the results the function is returning, and then add the outside offset in a reassignment of the tuple variables, after security() returns its values to the script, as we do in our "c2" example.
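A minimal sketch of that approach, assuming a hypothetical `f_maPair()` function that returns two moving averages (here the outside offset is applied to separate variables after the call rather than in a reassignment, but the inside and outside offsets are the same ones used in `f_security()`):
//@version=4
study("Tuple offsets sketch", overlay=true)
i_repaint   = input(false, "Repaint HTF data")
i_timeframe = input("D", "Higher timeframe", type=input.resolution)
// Hypothetical function returning a tuple; the inside offset is applied to its source argument.
f_maPair(_src, _len, _rep) =>
    _offset = _rep ? 0 : barstate.isrealtime ? 1 : 0
    [sma(_src[_offset], _len), ema(_src[_offset], _len)]
[htfSmaRaw, htfEmaRaw] = security(syminfo.tickerid, i_timeframe, f_maPair(close, 20, i_repaint))
// The outside offset is applied after security() has returned its values to the script.
outsideOffset = i_repaint ? 0 : barstate.isrealtime ? 0 : 1
htfSma = htfSmaRaw[outsideOffset]
htfEma = htfEmaRaw[outsideOffset]
plot(htfSma, "HTF SMA", color.orange)
plot(htfEma, "HTF EMA", color.teal)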
Does it repaint?
We're pretty sure Wilder was not asked very often if RSI repainted. Why? Because it wasn't in fashion—and largely unnecessary—to ask that sort of question in the 80's. Many traders back then used daily charts only, and indicator values were calculated at the day's close, so everybody knew what they were getting. Additionally, indicator values were calculated by generally reputable outfits or traders themselves, so data was pretty reliable. Today, almost anybody can write a simple indicator, and the programming languages used to write them are complex enough for some coders lacking the caution, know-how or ethics of the best professional coders, to get in over their heads and produce code that does not work the way they think it does.
As we hope to have clearly demonstrated, traders do have legitimate cause to ask if MTF scripts repaint or not when authors do not specify it in their script's description.
► We recommend that authors always use our `f_security()` with `false` as the last argument to avoid repainting when fetching data dependent on OHLCV information. This is the only way to obtain reliable HTF data. If you want to offer users a choice, make non-repainting mode the default, so that if users choose repainting, it will be their responsibility. Non-repainting security() calls are also the only way for scripts to show historical behavior that matches the script's realtime behavior, so you are not misleading traders. Additionally, non-repainting HTF data is the only way that non-repainting alerts can be configured on MTF scripts, as users of MTF scripts cannot prevent their alerts from repainting by simply configuring them to trigger on the bar's close.
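For example, a script offering that choice could look like the following sketch, which reuses the `f_security()` function shown above with a repaint input that defaults to off:
//@version=4
study("Non-repainting HTF close", overlay=true)
i_repaint   = input(false, "Repaint HTF data (not recommended)")
i_timeframe = input("D", "Higher timeframe", type=input.resolution)
f_security(_sym, _res, _src, _rep) => security(_sym, _res, _src[_rep ? 0 : barstate.isrealtime ? 1 : 0])[_rep ? 0 : barstate.isrealtime ? 0 : 1]
htfClose = f_security(syminfo.tickerid, i_timeframe, close, i_repaint)
plot(htfClose, "HTF close", color.orange, 2)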
Data feeds
A chart at one timeframe is made up of multiple feeds that mesh seamlessly to form one chart. Historical bars can use one feed, and the realtime bar another, which brokers/exchanges can sometimes update retroactively so that elapsed realtime bars will reappear with very slight modifications when the browser's tab is refreshed. Intraday and daily chart prices also very often originate from different feeds supplied by brokers/exchanges. That is why security() calls at higher timeframes may be using a completely different feed than the chart, and explains why the daily high value, for example, can vary between timeframes. Volume information can also vary considerably between intraday and daily feeds in markets like stocks, because more volume information becomes available at the end of day. It is thus expected behavior—and not a bug—to see data variations between timeframes.
Another point to keep in mind concerning feeds is that when you are using a repainting security() plot in realtime, you will sometimes see discrepancies between its plot and the realtime bars. An artefact revealing these inconsistencies can be seen when security() plots sometimes skip a realtime chart bar during periods of high market activity. This occurs because of races between the chart and the security() feeds, which are being monitored by independent, concurrent processes. A blue arrow on the chart indicates such an occurrence. This is another cause of repainting, where realtime bar-building logic can produce different outcomes on one closing price. It is also another argument supporting our recommendation to use non-repainting data.
Alternatives
There is an alternative to using security() in some conditions. If all you need are OHLC prices of a higher timeframe, you can use a technique like the one Duyck demonstrates in his security free MTF example - JD script. It has the great advantage of displaying actual repainting values on historical bars, which mimic the code's behavior in the realtime bar—or at least on elapsed realtime bars, contrary to a repainting security() plot. It has the disadvantage of using the current chart's TF data feed prices, whereas higher timeframe data feeds may contain different and more reliable prices when they are compiled at the end of the day. In its current state, it also does not allow for a repainting/no-repainting choice.
When `lookahead` is useful
When retrieving non-price data, or in special cases, for experiments, it can be useful to use `lookahead`. One example is our Backtesting on Non-Standard Charts: Caution! script where we are fetching prices of standard chart bars from non-standard charts.
Warning users
Normal use of security() dictates that it only be used at timeframes equal to or higher than the chart's. To prevent users from inadvertently using your script in contexts where it will not produce expected behavior, it is good practice to warn them when their chart is on a higher timeframe than the one in the script's "Timeframe" field. Our `f_tfReminderAndErrorCheck()` function in this script does that. It can also print a reminder of the higher timeframe. It uses one security() call.
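One possible way to implement such a check is sketched below. This is not the publication's `f_tfReminderAndErrorCheck()`; the function and input names are assumptions, but it shows the idea of converting both timeframes to minutes with a single security() call:
//@version=4
study("Timeframe warning sketch", overlay=true)
i_timeframe = input("60", "Timeframe", type=input.resolution)
// Converts the timeframe of the context it runs in to minutes.
f_tfInMinutes() =>
    timeframe.multiplier * (
      timeframe.isseconds ? 1. / 60 :
      timeframe.isminutes ? 1. :
      timeframe.isdaily   ? 1440. :
      timeframe.isweekly  ? 10080. :
      timeframe.ismonthly ? 43800. : na)
chartTfMinutes  = f_tfInMinutes()
targetTfMinutes = security(syminfo.tickerid, i_timeframe, f_tfInMinutes())  // the one security() call
// Highlight the background when the chart's timeframe is higher than the script's timeframe.
bgcolor(chartTfMinutes > targetTfMinutes ? color.new(color.red, 85) : na, title="Timeframe warning")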
Intrabar timeframes
security() is not supported by TradingView when used with timeframes lower than the chart's. While it is still possible to use security() at intrabar timeframes, it then behaves differently. If no care is taken to send a function specifically written to handle the successive intrabars, security() will return the value of the last intrabar in the chart's timeframe, so the last 1H bar in the current 1D bar, if called at "60" from a "D" chart timeframe. If you are an advanced coder, see our FAQ entry on the techniques involved in processing intrabar timeframes. Using intrabar timeframes comes with important limitations, which you must understand and explain to traders if you choose to make scripts using the technique available to others. Special care should also be taken to thoroughly test this type of script. Novice coders should refrain from getting involved in this.
█ TERMINOLOGY
Timeframe
Timeframe , interval and resolution are all being used to name the concept of timeframe. We have, in the past, used "timeframe" and "resolution" more or less interchangeably. Recently, members from the Pine and PineCoders team have decided to settle on "timeframe", so from hereon we will be sticking to that term.
Multi-timeframe (MTF)
Some coders use "multi-timeframe" or "MTF" to name what are in fact "multi-period" calculations, as when they use MAs of progressively longer periods. We consider that a misleading use of "multi-timeframe", which should be reserved for code using calculations actually made from another timeframe's context and using security(), save for scripts like Duyck's one mentioned earlier, or TradingView's Relative Volume at Time, which use a user-selected timeframe as an anchor to reset calculations. Calculations made at the chart's timeframe by varying the period of MAs or other rolling window calculations should be called "multi-period", and "MTF-anchored" could be used for scripts that reset calculations on timeframe boundaries.
Colophon
Our script was written using the PineCoders Coding Conventions for Pine .
The description was formatted using the techniques explained in the How We Write and Format Script Descriptions PineCoders publication.
Snippets were lifted from our MTF Selection Framework , then massaged to create the `f_tfReminderAndErrorCheck()` function.
█ THANKS
Thanks to apozdnyakov for his help with the innards of security() .
Thanks to bmistiaen for proofreading our description.
Look first. Then leap.
HTFMAs█ OVERVIEW
Contains a type HTFMA used to return data on six moving averages from a higher timeframe.
Several types of MA's are supported.
█ HOW TO USE
Please see the instructions in the code (in the library description). Important: first fold all sections of the script by pressing Cmd + K, then Cmd + - (on Windows: Ctrl + K, then Ctrl + -).
█ FULL LIST OF FUNCTIONS AND PARAMETERS
method getMaType(this)
Enumerator function, given a key returns `enum MaTypes` value
Namespace types: series string, simple string, input string, const string
Parameters:
this (string)
method init(this, enableAll, ma1Enabled, ma1MaType, ma1Src, ma1Prd, ma2Enabled, ma2MaType, ma2Src, ma2Prd, ma3Enabled, ma3MaType, ma3Src, ma3Prd, ma4Enabled, ma4MaType, ma4Src, ma4Prd, ma5Enabled, ma5MaType, ma5Src, ma5Prd, ma6Enabled, ma6MaType, ma6Src, ma6Prd)
Namespace types: RsParamsMAs
Parameters:
this (RsParamsMAs)
enableAll (simple MaEnable)
ma1Enabled (bool)
ma1MaType (series MaTypes)
ma1Src (string)
ma1Prd (int)
ma2Enabled (bool)
ma2MaType (series MaTypes)
ma2Src (string)
ma2Prd (int)
ma3Enabled (bool)
ma3MaType (series MaTypes)
ma3Src (string)
ma3Prd (int)
ma4Enabled (bool)
ma4MaType (series MaTypes)
ma4Src (string)
ma4Prd (int)
ma5Enabled (bool)
ma5MaType (series MaTypes)
ma5Src (string)
ma5Prd (int)
ma6Enabled (bool)
ma6MaType (series MaTypes)
ma6Src (string)
ma6Prd (int)
method init(this, enableAll, tf, rngAtrQ, showRecentBars, lblsOffset, lblsShow, lnOffset, lblSize, lblStyle, smoothen, ma1lnClr, ma1lnWidth, ma1lnStyle, ma2lnClr, ma2lnWidth, ma2lnStyle, ma3lnClr, ma3lnWidth, ma3lnStyle, ma4lnClr, ma4lnWidth, ma4lnStyle, ma5lnClr, ma5lnWidth, ma5lnStyle, ma6lnClr, ma6lnWidth, ma6lnStyle, ma1ShowHistory, ma2ShowHistory, ma3ShowHistory, ma4ShowHistory, ma5ShowHistory, ma6ShowHistory, ma1ShowLabel, ma2ShowLabel, ma3ShowLabel, ma4ShowLabel, ma5ShowLabel, ma6ShowLabel)
Namespace types: HTFMAs
Parameters:
this (HTFMAs)
enableAll (series MaEnable)
tf (string)
rngAtrQ (int)
showRecentBars (int)
lblsOffset (int)
lblsShow (bool)
lnOffset (int)
lblSize (string)
lblStyle (string)
smoothen (bool)
ma1lnClr (color)
ma1lnWidth (int)
ma1lnStyle (string)
ma2lnClr (color)
ma2lnWidth (int)
ma2lnStyle (string)
ma3lnClr (color)
ma3lnWidth (int)
ma3lnStyle (string)
ma4lnClr (color)
ma4lnWidth (int)
ma4lnStyle (string)
ma5lnClr (color)
ma5lnWidth (int)
ma5lnStyle (string)
ma6lnClr (color)
ma6lnWidth (int)
ma6lnStyle (string)
ma1ShowHistory (bool)
ma2ShowHistory (bool)
ma3ShowHistory (bool)
ma4ShowHistory (bool)
ma5ShowHistory (bool)
ma6ShowHistory (bool)
ma1ShowLabel (bool)
ma2ShowLabel (bool)
ma3ShowLabel (bool)
ma4ShowLabel (bool)
ma5ShowLabel (bool)
ma6ShowLabel (bool)
method get(this, id)
Namespace types: RsParamsMAs
Parameters:
this (RsParamsMAs)
id (int)
method set(this, id, prop, val)
Namespace types: RsParamsMAs
Parameters:
this (RsParamsMAs)
id (int)
prop (string)
val (string)
method set(this, id, prop, val)
Namespace types: HTFMAs
Parameters:
this (HTFMAs)
id (int)
prop (string)
val (string)
method htfUpdateTuple(rsParams, repaint)
Namespace types: RsParamsMAs
Parameters:
rsParams (RsParamsMAs)
repaint (bool)
method clear(this)
Namespace types: MaDrawing
Parameters:
this (MaDrawing)
method importRsRetTuple(this, htfBi, ma1, ma2, ma3, ma4, ma5, ma6)
Namespace types: HTFMAs
Parameters:
this (HTFMAs)
htfBi (int)
ma1 (float)
ma2 (float)
ma3 (float)
ma4 (float)
ma5 (float)
ma6 (float)
method getDrw(this, id)
Namespace types: HTFMAs
Parameters:
this (HTFMAs)
id (int)
method setDrwProp(this, id, prop, val)
Namespace types: HTFMAs
Parameters:
this (HTFMAs)
id (int)
prop (string)
val (string)
method initDrawings(this, rsPrms, dispBandWidth)
Namespace types: HTFMAs
Parameters:
this (HTFMAs)
rsPrms (RsParamsMAs)
dispBandWidth (float)
method updateDrawings(this, rsPrms, dispBandWidth)
Namespace types: HTFMAs
Parameters:
this (HTFMAs)
rsPrms (RsParamsMAs)
dispBandWidth (float)
method update(this)
Namespace types: HTFMAs
Parameters:
this (HTFMAs)
method importConfig(this, oCfg, maCount)
Imports HTF MAs settings from objProps (of any level) into `RsParamsMAs` child `RsMaCalcParams` objects (into the first `maCount` of them)
Namespace types: RsParamsMAs
Parameters:
this (RsParamsMAs) : (RsParamsMAs) Target object to import prop values to.
oCfg (objProps type from moebius1977/CSVParser/1) : (CSVP.objProps) One of the objProps ... objProps8 types containing property values in child objProps objects
maCount (int) : (int) Number of the target object's RsMaCalcParams children to set (1 to 6, starting from 1)
Returns: this
method importConfig(this, oCfg, maCount)
Imports HTF MAs settings from objProps (of any level) into `RsParamsMAs` child `RsMaCalcParams` objects (into the first `maCount` of them)
Namespace types: RsParamsMAs
Parameters:
this (RsParamsMAs) : (RsParamsMAs) Target object to import prop values to.
oCfg (objProps0 type from moebius1977/CSVParser/1) : (CSVP.objProps) One of the objProps ... objProps8 types containing property values in child objProps objects
maCount (int) : (int) Number of the target object's RsMaCalcParams children to set (1 to 6, starting from 1)
Returns: this
method importConfig(this, oCfg, maCount)
Imports HTF MAs settings from objProps (of any level) into `RsParamsMAs` child `RsMaCalcParams` objects (into the first `maCount` of them)
Namespace types: RsParamsMAs
Parameters:
this (RsParamsMAs) : (RsParamsMAs) Target object to import prop values to.
oCfg (objProps1 type from moebius1977/CSVParser/1) : (CSVP.objProps) One of the objProps ... objProps8 types containing property values in child objProps objects
maCount (int) : (int) Number of the target object's RsMaCalcParams children to set (1 to 6, starting from 1)
Returns: this
method importConfig(this, oCfg, maCount)
Imports HTF MAs settings from objProps (of any level) into `RsParamsMAs` child `RsMaCalcParams` objects (into the first `maCount` of them)
Namespace types: RsParamsMAs
Parameters:
this (RsParamsMAs) : (RsParamsMAs) Target object to import prop values to.
oCfg (objProps2 type from moebius1977/CSVParser/1) : (CSVP.objProps) One of the objProps ... objProps8 types containing property values in child objProps objects
maCount (int) : (int) Number of the target object's RsMaCalcParams children to set (1 to 6, starting from 1)
Returns: this
method importConfig(this, oCfg, maCount)
Imports HTF MAs settings from objProps (of any level) into `RsParamsMAs` child `RsMaCalcParams` objects (into the first `maCount` of them)
Namespace types: RsParamsMAs
Parameters:
this (RsParamsMAs) : (RsParamsMAs) Target object to import prop values to.
oCfg (objProps3 type from moebius1977/CSVParser/1) : (CSVP.objProps) One of the objProps ... objProps8 types containing property values in child objProps objects
maCount (int) : (int) Number of the target object's RsMaCalcParams children to set (1 to 6, starting from 1)
Returns: this
method importConfig(this, oCfg, maCount)
Imports HTF MAs settings from objProps (of any level) into `RsParamsMAs` child `RsMaCalcParams` objects (into the first `maCount` of them)
Namespace types: RsParamsMAs
Parameters:
this (RsParamsMAs) : (RsParamsMAs) Target object to import prop values to.
oCfg (objProps4 type from moebius1977/CSVParser/1) : (CSVP.objProps) One of the objProps ... objProps8 types containing property values in child objProps objects
maCount (int) : (int) Number of the target object's RsMaCalcParams children to set (1 to 6, starting from 1)
Returns: this
method importConfig(this, oCfg, maCount)
Imports HTF MAs settings from objProps (of any level) into `RsParamsMAs` child `RsMaCalcParams` objects (into the first `maCount` of them)
Namespace types: RsParamsMAs
Parameters:
this (RsParamsMAs) : (RsParamsMAs) Target object to import prop values to.
oCfg (objProps5 type from moebius1977/CSVParser/1) : (CSVP.objProps) One of the objProps ... objProps8 types containing property values in child objProps objects
maCount (int) : (int) Number of the target object's RsMaCalcParams children to set (1 to 6, starting from 1)
Returns: this
method importConfig(this, oCfg, maCount)
Imports HTF MAs settings from objProps (of any level) into `RsParamsMAs` child `RsMaCalcParams` objects (into the first `maCount` of them)
Namespace types: RsParamsMAs
Parameters:
this (RsParamsMAs) : (RsParamsMAs) Target object to import prop values to.
oCfg (objProps6 type from moebius1977/CSVParser/1) : (CSVP.objProps) One of the objProps ... objProps8 types containing property values in child objProps objects
maCount (int) : (int) Number of the target object's RsMaCalcParams children to set (1 to 6, starting from 1)
Returns: this
method importConfig(this, oCfg, maCount)
Namespace types: RsParamsMAs
Parameters:
this (RsParamsMAs)
oCfg (objProps7 type from moebius1977/CSVParser/1)
maCount (int)
method importConfig(this, oCfg, maCount)
Imports HTF MAs settings from objProps (of any level) into `HTFMAs` child `MaDrawing` objects (into the first `maCount` of them)
Namespace types: RsParamsMAs
Parameters:
this (RsParamsMAs) : (HTFMAs) Target object to import prop values to.
oCfg (objProps8 type from moebius1977/CSVParser/1) : (CSVP.objProps) One of the objProps ... objProps8 types containing property values in child objProps objects
maCount (int) : (int) Number of the target object's RsMaCalcParams children to set (1 to 6, starting from 1)
Returns: this
method importConfig(this, oCfg, maCount)
Imports HTF MAs settings from objProps (of any level) into `HTFMAs` child `MaDrawing` objects (into the first `maCount` of them)
Namespace types: HTFMAs
Parameters:
this (HTFMAs) : (HTFMAs) Target object to import prop values to.
oCfg (objProps type from moebius1977/CSVParser/1) : (CSVP.objProps) One of the objProps ... objProps8 types containing property values in child objProps objects
maCount (int) : (int) Number of the target object's RsMaCalcParams children to set (1 to 6, starting from 1)
Returns: this
method importConfig(this, oCfg, maCount)
Imports HTF MAs settings from objProps (of any level) into `HTFMAs` child `MaDrawing` objects (into the first `maCount` of them)
Namespace types: HTFMAs
Parameters:
this (HTFMAs) : (HTFMAs) Target object to import prop values to.
oCfg (objProps0 type from moebius1977/CSVParser/1) : (CSVP.objProps) One of the objProps ... objProps8 types containing property values in child objProps objects
maCount (int) : (int) Number of the target object's RsMaCalcParams children to set (1 to 6, starting from 1)
Returns: this
method importConfig(this, oCfg, maCount)
Imports HTF MAs settings from objProps (of any level) into `HTFMAs` child `MaDrawing` objects (into the first `maCount` of them)
Namespace types: HTFMAs
Parameters:
this (HTFMAs) : (HTFMAs) Target object to import prop values to.
oCfg (objProps1 type from moebius1977/CSVParser/1) : (CSVP.objProps) One of the objProps ... objProps8 types containing property values in child objProps objects
maCount (int) : (int) Number of the target object's RsMaCalcParams children to set (1 to 6, starting from 1)
Returns: this
method importConfig(this, oCfg, maCount)
Imports HTF MAs settings from objProps (of any level) into `HTFMAs` child `MaDrawing` objects (into the first `maCount` of them)
Namespace types: HTFMAs
Parameters:
this (HTFMAs) : (HTFMAs) Target object to import prop values to.
oCfg (objProps2 type from moebius1977/CSVParser/1) : (CSVP.objProps) One of the objProps ... objProps8 types containing property values in child objProps objects
maCount (int) : (int) Number of the target object's RsMaCalcParams children to set (1 to 6, starting from 1)
Returns: this
method importConfig(this, oCfg, maCount)
Imports HTF MAs settings from objProps (of any level) into `HTFMAs` child `MaDrawing` objects (into the first `maCount` of them)
Namespace types: HTFMAs
Parameters:
this (HTFMAs) : (HTFMAs) Target object to import prop values to.
oCfg (objProps3 type from moebius1977/CSVParser/1) : (CSVP.objProps) One of the objProps ... objProps8 types containing property values in child objProps objects
maCount (int) : (int) Number of the target object's RsMaCalcParams children to set (1 to 6, starting from 1)
Returns: this
method importConfig(this, oCfg, maCount)
Imports HTF MAs settings from objProps (of any level) into `HTFMAs` child `MaDrawing` objects (into the first `maCount` of them)
Namespace types: HTFMAs
Parameters:
this (HTFMAs) : (HTFMAs) Target object to import prop values to.
oCfg (objProps4 type from moebius1977/CSVParser/1) : (CSVP.objProps) One of the objProps ... objProps8 types containing property values in child objProps objects
maCount (int) : (int) Number of the target object's RsMaCalcParams children to set (1 to 6, starting from 1)
Returns: this
method importConfig(this, oCfg, maCount)
Imports HTF MAs settings from objProps (of any level) into `HTFMAs` child `MaDrawing` objects (into the first `maCount` of them)
Namespace types: HTFMAs
Parameters:
this (HTFMAs) : (HTFMAs) Target object to import prop values to.
oCfg (objProps5 type from moebius1977/CSVParser/1) : (CSVP.objProps) One of the objProps ... objProps8 types containing property values in child objProps objects
maCount (int) : (int) Number of the target object's RsMaCalcParams children to set (1 to 6, starting from 1)
Returns: this
method importConfig(this, oCfg, maCount)
Imports HTF MAs settings from objProps (of any level) into `HTFMAs` child `MaDrawing` objects (into the first `maCount` of them)
Namespace types: HTFMAs
Parameters:
this (HTFMAs) : (HTFMAs) Target object to import prop values to.
oCfg (objProps6 type from moebius1977/CSVParser/1) : (CSVP.objProps) One of the objProps ... objProps8 types containing property values in child objProps objects
maCount (int) : (int) Number of the target object's RsMaCalcParams children to set (1 to 6, starting from 1)
Returns: this
method importConfig(this, oCfg, maCount)
Namespace types: HTFMAs
Parameters:
this (HTFMAs)
oCfg (objProps7 type from moebius1977/CSVParser/1)
maCount (int)
method importConfig(this, oCfg, maCount)
Namespace types: HTFMAs
Parameters:
this (HTFMAs)
oCfg (objProps8 type from moebius1977/CSVParser/1)
maCount (int)
method newRsParamsMAs(this)
Namespace types: LO
Parameters:
this (LO)
method newHTFMAs(this)
Namespace types: LO
Parameters:
this (LO)
RsMaCalcParams
Parameters of one MA (only calculation params needed within req.sec(), visual parameters are within htfMAs type)
Fields:
enabled (series bool)
maType (series MaTypes) : MA type options: SMA / EMA / WMA / ...
src (series string)
prd (series int) : MA period
RsParamsMAs
Collection of parameters of 6 MAs. Used to pass params to req.sec()
Fields:
ma1CalcParams (RsMaCalcParams)
ma2CalcParams (RsMaCalcParams)
ma3CalcParams (RsMaCalcParams)
ma4CalcParams (RsMaCalcParams)
ma5CalcParams (RsMaCalcParams)
ma6CalcParams (RsMaCalcParams)
RsReturnMAs
Used to return data from req.sec().
Fields:
htfBi (series int)
ma1 (series float)
ma2 (series float)
ma3 (series float)
ma4 (series float)
ma5 (series float)
ma6 (series float)
MaDrawing
MA's plot parameters plus drawing objects for MA's current level (line and label).
Fields:
lnClr (series color) : (color) MA plot line color (like in plot())
lnWidth (series int) : (int) MA plot line width (like in plot())
lnStyle (series string) : (string) MA plot line style (like in plot())
showHistory (series bool) : (bool) Whether to plot the MA on historical bars or only show current level to the right of the latest bar.
showLabel (series bool) : (bool) Whether to show the name of the MA to the right of the MA's level
ln (series line) : (line) line to show MA's current level
lbl (series label) : (label) label showing MA's name
HTFMAs
Contains data and drawing parameters for MA's of one timeframe (MA calculation parameters for MA's of one timeframe are in a separate object RsParamsMAs)
Fields:
rsRet (RsReturnMAs) : (RsReturnMAs) Contains data returned from req.sec(). Is set to na in between HTF bar changes if smoothing is enabled.
rsRetLast (RsReturnMAs) : (RsReturnMAs) Contains a copy of data returned from req.sec() in case rsRet is set to na for smoothing.
rsRetNaObj (RsReturnMAs) : (RsReturnMAs) An empty object as `na` placeholder
ma1Drawing (MaDrawing) : (MaDrawing) MA drawing properties
ma2Drawing (MaDrawing) : (MaDrawing) MA drawing properties
ma3Drawing (MaDrawing) : (MaDrawing) MA drawing properties
ma4Drawing (MaDrawing) : (MaDrawing) MA drawing properties
ma5Drawing (MaDrawing) : (MaDrawing) MA drawing properties
ma6Drawing (MaDrawing) : (MaDrawing) MA drawing properties
enabled (series bool) : (bool ) Enables/disables all of the MAs of one timeframe.
tf (series string) : (string) Timeframe
showHistory (series bool) : (bool ) Plot MA line on historical bars
rngAtrQ (series int) : (int ) A multiplier for atr(14). Determines a range within which the MA's will be plotted. MA's too far away will not be plotted.
showRecentBars (series int) : (int ) Only plot MA on these recent bars
smoothen (series bool) : (bool ) Smoothen MA plot. If false the same HTF value is returned on all chart bars within a HTF bar (intrabars), so the plot looks like steps.
lblsOffset (series int) : (int ) Show MA name this number of bars to the right of the last bar.
lblsShow (series bool) : (bool ) Show MA name
lnOffset (series int) : (int ) Start the line showing the current level of the MA this number of bars to the right of the last bar.
lblSize (series string) : (string) Label size
lblStyle (series string) : (string) Label style
lblTxtAlign (series string) : (string) Label text align
bPopupLabel (series bool) : (bool ) Show current MA value as a tooltip to MA's name.
LO
LO Library object, whose only purpose is to serve as a shorthand for library name in script code.
Fields:
dummy (series string)
Gann Swing Strategy [1 Bar - Multi Layer]Use this strategy to fine-tune inputs for your Gann swing strategy.
The strategy allows you to fine-tune the indicator for one timeframe at a time; cross-timeframe input fine-tuning is done manually after exporting the chart data.
MEANINGFUL DESCRIPTION:
The Gann Swing Chart using the One-Bar type, also known as the Minor Trend Chart, is designed to follow single-bar movements in the market. It helps identify trends by tracking price movements. When the market makes a higher high than the previous bar from a low price, the One-Bar trend line moves up, indicating a new high and establishing the previous low as a One-Bar bottom. Conversely, when the market makes a lower low than the previous bar from a high price, the One-Bar swing line moves down, marking a new low and setting the previous high as a One-Bar top. The crossing of these swing tops and bottoms indicates a change in trend direction.
HOW TO USE THE INDICATOR / Gann-swing Strategy:
The indicator shows 1, 2, and 3-bar swings. The strategy triggers a buy when the price crosses the previously determined high.
HOW TO USE THE STRATEGY:
The strategy initiates a buy if the price breaks 1, 2, or 3-bar highs, or any combination thereof. Use the inputs to determine which highs or lows need to be crossed for the strategy to go long or short.
ORIGINALITY & USEFULNESS:
The One-Bar Swing Chart stands out for its simplicity and effectiveness in capturing minor market trends. The Gann high/low algorithm developed by meomeo105 forms the basis of this strategy; I applied my own approach to turn the Gann swing indicator into a strategy.
DETAILED DESCRIPTION:
What is a Swing Chart?
Swing charts help traders visualize price movements and identify trends by focusing on price highs and lows. They are instrumental in spotting trend reversals and continuations.
What is the One-Bar Swing Chart?
The One-Bar Swing Chart, also known as the Minor Trend Chart, follows single-bar price movements. It plots upward swings from a low price when a higher high is made, and downward swings from a high price when a lower low is made.
Key Features:
Trend Identification : Highlights minor trends by plotting swing highs and lows based on one-bar movements.
Simple Interpretation : Crossing a swing top indicates an uptrend, while crossing a swing bottom signals a downtrend.
Customizable Periods : Users can adjust the period to fine-tune the sensitivity of the swing chart to market movements.
Practical Application:
Bullish Trend : When the One-Bar Swing line moves above a previous swing top, it indicates a bullish trend.
Bearish Trend : When the One-Bar Swing line moves below a previous swing bottom, it signals a bearish trend.
Trend Reversal : Watch for crossings of swing tops and bottoms to detect potential trend reversals.
The One-Bar Swing Chart is a powerful tool for traders looking to capture and understand market trends. By following the simple rules of swing highs and lows, it provides clear and actionable insights into market direction.
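The sketch below illustrates the one-bar swing rule and the breakout entry in Pine. It is only an approximation of the idea described above (inside and outside bars are ignored, and all names are assumptions), not the published script:
//@version=4
study("One-bar swing sketch", overlay=true)
var int   swingDir    = 0    // +1 = up swing, -1 = down swing
var float swingTop    = na   // most recent one-bar top
var float swingBottom = na   // most recent one-bar bottom
var float runHigh     = high // running extreme of the current up swing
var float runLow      = low  // running extreme of the current down swing
higherHigh = high > high[1]
lowerLow   = low < low[1]
if swingDir <= 0 and higherHigh and not lowerLow
    // Swing turns up: the low of the finished down swing becomes a one-bar bottom.
    swingBottom := runLow
    swingDir    := 1
    runHigh     := high
else if swingDir >= 0 and lowerLow and not higherHigh
    // Swing turns down: the high of the finished up swing becomes a one-bar top.
    swingTop := runHigh
    swingDir := -1
    runLow   := low
runHigh := swingDir == 1  ? max(runHigh, high) : runHigh
runLow  := swingDir == -1 ? min(runLow, low)   : runLow
// A long signal in the spirit of the strategy: price crossing above the last one-bar top.
longSignal = not na(swingTop) and crossover(close, swingTop)
plotshape(longSignal, "Breaks one-bar top", shape.triangleup, location.belowbar, color.green)
plot(swingTop,    "One-bar top",    color.red,   2, plot.style_linebr)
plot(swingBottom, "One-bar bottom", color.green, 2, plot.style_linebr)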
Why the Strategy Uses 100% Allocation of a Portfolio:
This strategy allocates 100% of the portfolio to trading this specific pair, which does not mean 100% of all capital but 100% of the allocated trading capital for this pair. The strategy is swing-based and does not use take profit (TP) or stop losses.
Physics CandlesPhysics Candles embed volume and motion physics directly onto price candles or market internals according to the cyclic pattern of financial securities. The indicator works on both real-time “ticks” and historical data using statistical modeling to highlight when these values, like volume or momentum, is unusual or relatively high for some periodic window in time. Each candle is made out of one or more sub-candles that each contain their own information of motion, which converts to the color and transparency, or brightness, of that particular candle segment. The segments extend throughout the entire candle, both body and wicks, and Thick Wicks can be implemented to see the color coding better. This candle segmentation allows you to see if all the volume or energy is evenly distributed throughout the candle or highly contained in one small portion of it, and how intense these values are compared to similar time periods without going to lower time frames. Candle segmentation can also change a trader’s perspective on how valuable the information is. A “low” volume candle, for instance, could signify high value short-term stopping volume if the volume is all concentrated in one segment.
The Candles are flexible. The physics information embedded on the candles need not be from the same price security or market internal as the chart when using the Physics Source option, and multiple Candles can be overlaid together. You could embed stock price Candles with market volume, market price Candles with stock momentum, market structure with internal acceleration, stock price with stock force, etc. My particular use case is scalping the SPX futures market (ES), whose price action is also dictated by the volume action in the associated cash market, or SPY, as well as a host of other securities. Physics allows you to embed the ES volume on the SPY price action, or the SPY volume on the ES price action, or you can combine them both by overlaying two Candle streams and increasing the Number of Overlays option to two. That option decreases the transparency levels of your coloring scheme so that overlaying multiple Candles converges toward the same visual color intensity as if you had one. The Candle and Physics Sources allow for both Symbols and Spreads to visualize Candle physics from a single ticker or some mathematical transformation of tickers.
Due to certain TradingView programming restrictions, each Candle can only be made out of a maximum of 8 candle segments, or an “8-bit” resolution. Since limits are just an opportunity to go beyond, the user has the option to stack multiple Candle indicators together to further increase the candle resolution. If you don’t want to see the Candles for some particular period of the day, you can hide them, or use the hiding feature to have multiple Candles calibrated to show multiple parts of the trading day. Securities tend to have low volume after hours with sharp spikes at the open or close. Multiple Candles can be used for multiple parts of the trading day to accommodate these different cycles in volume.
The Candles do not need to be associated with the nominal security listed on the TV chart. The Candle Source allows the user to look at AAPL Candles, for instance, while on a TSLA or SPY chart, each with their respective volume actions integrated into the candles, allowing the user to see multiple security price and volume correlations on a single chart.
The physics values currently embeddable on Candles are volume or time, velocity, momentum, acceleration, force, and kinetic energy. In order to apply equations of motion containing a mass variable to financial securities, some analogous value for mass must be assumed. Traders often regard volume or time as inextricable variables of a security’s price that can indicate the direction and strength of a move. Since mass is the inextricable variable in calculating the momentum, force, or kinetic energy of motion, the user has the option to assume either time or volume is analogous to mass. Volume may be a better option for mass as it is not strictly dependent on the speed of a security, whereas time is.
Data transformations and outlier statistics are used to color code the intensity of the physics for each candle segment relative to past periodic behavior. A million shares during pre-market or a million shares during noontime may be more intense signals than a typical million shares traded at the open, and should have more intense color signals. To account for a specific cyclic behavior in the market, the user can specify the Window and Cycle Time Frames. The Window Time Frame splits up a Cycle into windows, samples and aggregates the statistics for each window, then compares the current physics values against past values in the same window. Intraday traders may benefit from using a Daily Cycle with a 30-minute Window Time Frame and 1-minute Sample Time Frame. These settings sample and compare the physics of 1-minute candles within the current 30-minute window to the same 30-minute window statistics for all past trading days, up until the data limit imposed by TradingView, or until the Data Collection Start Date specified in the settings. Longer-term traders may benefit from using a Monthly Cycle with a Weekly Time Frame, or a Yearly Cycle with a Quarterly Time Frame.
Multiple statistics and data transformation methods are available to convey relative intensity in different ways for different trading signals. Physics Candles allows for both Normal and Log-Normal assumptions in the physics distribution. The data can then be transformed by Linear, Logarithmic, Z-Score, or Power-Law scoring, where scoring simply assigns an intensity to the relative physics value of each candle segment based on some mathematical transformation. Z-scoring often renders adequate detection by scoring the segment value, such as volume or momentum, according to the mean and standard deviation of the data set in each window of the cycle. Logarithmic or power-law transformation with a gamma below 1 decreases the disparity between intensities so that more of the less-important signals show up, whereas power-law transformation with gamma values above 1 increases the disparity between intensities, so that fewer, more-important signals show up. These scores are then converted to color and transparency between the Min Score and the Max Score Cutoffs. The Auto-Normalization feature can automatically pick these cutoffs specific to each window based on the mean and standard deviation of the data set, or the user can manually set them. Physics was developed with novices in mind so that most users could calibrate their own settings by plotting the candle segment distributions directly on the chart and fiddling with the settings to see how different cutoffs capture different portions of the distribution and affect the relative color intensities differently. Security distributions are often skewed with fat tails, known as kurtosis, where high-volume segments, for example, have higher probabilities than expected for a normal distribution. These distributions are really log-normal, so that taking the logarithm leads to a standard bell-shaped distribution. Taking the Z-score of the Log-Normal distribution could make the most statistical sense, but color sensitivity is a discretionary preference.
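As a rough illustration of the scoring step, the sketch below z-scores the logarithm of volume over a rolling sample and clamps the result between a min and max cutoff. It is only a simplified stand-in with hypothetical input names: the indicator itself buckets its statistics per window of the cycle rather than using a single rolling window.
//@version=5
// Log-normal z-scoring sketch (illustrative only; not the indicator's per-window statistics).
indicator("Log-volume z-score sketch")
sampleSize = input.int(390, "Sample size")
minCutoff  = input.float(-1.0, "Min score cutoff")
maxCutoff  = input.float(3.0, "Max score cutoff")
logVol  = math.log(math.max(volume, 1))        // log transform tames the fat right tail
zScore  = (logVol - ta.sma(logVol, sampleSize)) / ta.stdev(logVol, sampleSize)
// Clamp and rescale the score to 0..1; this is the kind of value that would drive color/transparency.
intensity = math.min(math.max((zScore - minCutoff) / (maxCutoff - minCutoff), 0.0), 1.0)
plot(zScore, "Z-score")
plot(intensity, "Intensity (0..1)", color.orange)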
Background Philosophy
This indicator was developed to study and trade the physics of motion in financial securities from a visually intuitive perspective. Newton’s laws of motion are loosely applied to financial motion:
“A body remains at rest, or in motion at a constant speed in a straight line, unless acted upon by a force”.
Financial securities remain at rest, or in motion at constant speed up or down, unless acted upon by the force of traders exchanging securities.
“When a body is acted upon by a force, the time rate of change of its momentum equals the force”.
Momentum is the product of mass and velocity, and force is the product of mass and acceleration. Traders render force on the security through the mass of their trading activity and the acceleration of price movement.
“If two bodies exert forces on each other, these forces have the same magnitude but opposite directions.”
Force arises from the interaction of traders, buyers and sellers. One body of motion, traders’ capitalization, exerts an equal and opposite force on another body of motion, the financial security. A security’s movement arises at the expense of a buyer or seller’s capitalization.
Volume
The premise of this indicator assumes that volume, v, is an analogous means of measuring physical mass, m. This premise allows the application of the equations of motion to the movement of financial securities. We know from E=mc^2 that mass has energy. Energy can be used to create motion as kinetic energy. In a simple hypothetical example, one short seller looking to cover lower and one buyer looking to sell higher exchange shares in a security at an agreed-upon price, creating volume or mass, and therefore potential energy. Eventually the short seller will actively cover and buy the security from the previous buyer, moving the security higher, or the buyer will actively sell to the short seller, moving the security lower. The potential energy inherent in the initial consolidation or trading activity between buyer and seller is now converted to kinetic energy on the subsequent trading activity that moves the security’s price. The more potential energy that is created in the consolidation, the more kinetic energy there is to move price. This is why point and figure traders are said to give price targets based on the level of volatility or size of a consolidation range, or why Gann traders square price and time, as time is roughly proportional to mass and trading activity. The build-up of potential energy between short sellers and buyers in GME or TSLA led to their explosive moves beyond their standard fundamental valuations.
Position
Position, p, is simply the price or value of a financial security or market internal.
Time
Time, t, is another means of measuring mass to discover price behavior beyond the time snapshots that simple candle charts provide. We know from E=mc^2 that time is related to rest mass and energy given the speed of light, c, where time ≈ distance * sqrt(mass/E). This relation can also be derived from F=ma. The more mass there is, the longer it takes to compute the physics of a system. The more energy there is, the shorter it takes to compute the physics of a system. Similarly, more time is required to build a “resting” low-volatility trading consolidation with more mass. More energy added to that trading consolidation by competing buyers and sellers decreases the time it takes to build that same mass. Time is also related to price through velocity.
Velocity = (p(t1) – p(t0)) / p(t0)
Velocity, v, is the relative percent change of a security’s price, p, over a period of time, t0 to t1. The period of time is between subsequent candles, and since time is constant between candles within the same timeframe, it is not used to calculate velocity or acceleration. Price moves faster with higher velocity, and slower with lower velocity, over the same fixed period of time. The product of velocity and mass gives momentum.
Momentum = mv
This indicator uses the physics definition of momentum, not finance’s. In finance, momentum is defined as the amount of change in a security’s price, either relative or absolute. This definition is unfortunate, pun intended, since a one-dollar move in a security from a thousand shares traded between a few traders has the exact same “momentum” as a one-dollar move from millions of shares traded between hundreds of traders with everything else equal. If momentum is related to the energy of the move, momentum should consider both the level of activity in a price move, and the amount of that price move. If we equate mass to volume to account for the level of trading activity and use the physics definition of momentum as the product of mass and velocity, this revised definition now gives a thousand-times more momentum to a one-dollar price move that has a thousand-times more volume behind it. If you want to use finance’s volume-less definition of momentum, use velocity in this indicator.
Acceleration = v(t1) – v(t0)
Acceleration, a, is the difference between velocities over some period of time, t0 to t1. Positive acceleration is necessary to increase a security’s speed in the positive direction, while negative acceleration is necessary to decrease it. Acceleration is related to force by mass.
Force = ma
Force is required to change the speed of a security’s valuation. Price movements with considerable force have considerably more impact on future direction. A change in direction requires force.
Kinetic Energy = 0.5mv^2
Kinetic energy is the energy that a financial security gains from the change in its velocity by force. The build-up of potential energy in trading consolidations can be converted to kinetic energy on a breakout from the consolidation.
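Putting the definitions above together, here is a minimal sketch with volume standing in for mass. It only computes the raw values; the indicator itself goes on to normalize them per window and cycle and per candle segment. Following the note in the Notes section further below, velocity is taken from open to close within the candle.
//@version=5
// Raw physics values with volume as the mass analog (sketch only; no per-window normalization here).
indicator("Physics values sketch")
mass          = volume
velocity      = (close - open) / open          // relative change within the candle (see Notes on open/close)
momentum      = mass * velocity
acceleration  = velocity - velocity[1]
force         = mass * acceleration
kineticEnergy = 0.5 * mass * velocity * velocity
plot(momentum, "Momentum")
plot(force, "Force", color.red)
plot(kineticEnergy, "Kinetic energy", color.purple)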
Cycle Theory and Relativity
Just as the physics of motion is relative to a point of reference, so too should the physics of financial securities be relative to a point of reference. An object moving at 100 mph alongside another object moving in the same direction at 100 mph will not appear to be moving relative to the other, nor will they collide, but to an outside observer the objects are going 100 mph and will collide with significant impact if they run into an object that is stationary relative to the observer. Similarly, trading with a hundred thousand shares at the open when the average volume is a couple million may have a much smaller impact on the price compared to trading a hundred thousand shares pre-market when the average volume is ten thousand shares. The point of reference used in this indicator is the average statistics collected for a given Window Time Frame for every Cycle Time Frame. The physics values are normalized relative to these statistics.
Examples
The main chart of this publication shows the Force Candles for the SPY. An intense force candle is observed pre-market that signals the directional tone of the day. The assumption that direction should follow force arises from physical observation. If a large object is accelerating intensely in a particular direction, it may be fair to assume that the object continues its direction for the time being unless acted upon by another force.
The second example shows a similar Force Candle for the SPY that counters the assumption made in the first example and emphasizes the importance of both motion and context. While it’s fair to assume that a heavy, highly accelerating object should continue its course, if that object runs into an obstacle, say a brick wall, its course may deviate. This example shows SPY running into the 50% retracement wall from the low of Mar 2020, a significant support level noted in literature. The example also conveys Gann’s idea of “lost motion”, where the SPY penetrated the 50% price but did not break through it. A brick wall is not one atom thick and price support is not one tick thick. An object can penetrate only one layer of a wall and not go through it.
The third example shows how Volume Candles can be used to identify scalping opportunities on the SPY and conveys why price behavior is as important as motion and context. It doesn’t take a brick wall to impede direction if you know that the person driving the car tends to forget to feed the cats before they leave. In the chart below, the SPY breaks down to a confluence of the 5-day SMA, 20-day SMA, and an important daily trendline (not shown) after the bullish bounce from the 50% retracement days earlier. High-volume candles on the SMAs signify stopping volume that reverses price direction. The character of the day changes. Bulls become more aggressive than bears with higher volume on upswings and resistance, while bears take on a defensive position with lower volume on downswings and support. High-volume stopping candles are seen after rallies, and can tell you when to take profit, get out of a position, or go short. The character change can indicate that it’s relatively safe to re-enter bullish positions on many major supports, especially given the overarching bullish theme from the large reaction off the 50% retracement level.
The last example emphasizes the importance of relativity. The Volume Candles in the chart below are brightest pre-market even though the open has much higher volume, because pre-market activity is much higher relative to past pre-markets than the open is relative to past opens. Pre-market behavior is a good indicator for the character of the day. These bullish Volume Candles are some of the brightest seen since the bounce off the 50% retracement and indicate that bulls are making a relatively greater attempt to bring the SPY higher at the start of the day.
Infrequently Asked Questions
Where do I start?
The default settings are what I use to scalp the SPY throughout most of the extended trading day, on a one-minute chart using SPY volume. I also overlay another Candle set containing ES future volume on the SPY price structure by setting the Physics Source to ES1! and the Number of Overlays setting to 2 for each Candle stream in order to account for pre- and post-market trading activity better. Since the closing volume is exponential-like up until the end of the regular trading day, adding additional Candle streams with a tighter Window Time Frame (e.g., 2-5 minute) in the last 15 minutes of trading can be beneficial. The Hide feature can allow you to set certain intraday timeframes to hide one Candle set in order to show another Candle set during that time.
How crazy can you get with this indicator?
I hope you can answer this question better. One interesting use case is embedding the velocity of market volume onto an internal market structure. The PCTABOVEVWAP.US is a market statistic that indicates the percent of securities above their VWAP among US stocks and is helpful for determining short term trends in the US market. When securities are rising above their VWAP, the average long is up on the day and a rising PCTABOVEVWAP.US can be viewed as more bullish. When securities are falling below their VWAP, the average short is up on the day and a falling PCTABOVEVWAP.US can be viewed as more bearish. (UPVOL.US - DNVOL.US) / TVOL.US is a “spread” symbol, in TV parlance, that indicates the decimal percent difference between advancing volume and declining volume in the US market, showing the relative flow of volume between stocks that are up on the day, and stocks that are down on the day. Setting PCTABOVEVWAP.US in the Candle Source, (UPVOL.US - DNVOL.US) / TVOL.US in the Physics Source, and selecting the Physics to Velocity will embed the relative velocity of the spread symbol onto the PCTABOVEVWAP.US candles. This can be helpful in seeing short term trends in the US market that have an increasing amount of volume behind them compared to other trends. The chart below shows Volume Candles (top) and these Spread Candles (bottom). The first top at 9:30 and second top at 10:30, the high of the day, break down when the spread candles light up, showing a high velocity volume transfer from up stocks to down stocks.
How do I plot the indicator distribution and why should I even care?
The distribution is visually helpful in seeing how different normalization settings affect the distribution of candle segments. It is also helpful in seeing what physics intensities you want to ignore or show by segmenting part of the distribution within the Min and Max Cutoff values. The intensity of color is proportional to the physics value between the Min and Max Cutoff values, which correspond to the Min and Max Colors in your color scheme. Any physics value outside these Min and Max Cutoffs will be shown in the Min or Max Color, respectively.
Select the Print Windows feature to show the window numbers according to the Cycle Time Frame and Window Time Frame settings. The window numbers are labeled at the start of each window and are candle width in size, so you may need to zoom in to see them. Select the Plot Window feature and input the window number of interest to show the distribution of physics values for that particular window along with some statistics.
A log-normal volume distribution of segmented z-scores is shown below for the 30-minute opening window of the SPY. The Min and Max Cutoff at the top of the graph contain the part of the distribution whose intensities will be linearly color-coded between the Min and Max Colors of the color scheme. The part of the distribution below the Min Cutoff will be treated as the lowest-quality signals and set to the Min Color, while the few segments above the Max Cutoff will be treated as the highest-quality signals and set to the Max Color.
What do I do if I don’t see anything?
Troubleshooting issues with this indicator can involve checking for error messages shown near the indicator name on the chart or using the Data Validation section to evaluate the statistics and normalization cutoffs. For example, if the Plot Window number is set to a window number that doesn’t exist, an error message will tell you and you won’t see any candles. You can use the Print Windows option to show windows that do exist for your current settings. The auto-normalization cutoff values may be inappropriate for your particular use case and literally cut the candles out of the chart. Try changing the chart time frame to see if they are appropriate for your cycle, sample and window time frames. If you get a “Timeframe passed to the request.security_lower_tf() function must be lower than the timeframe of the main chart” error, this means that the chart timeframe should be increased above the sample time frame. If you get a “Symbol resolve error”, ensure that you have a correct symbol or spread in the Candle or Physics Source.
How do I see a relative physics values without cycles?
Set the Window Time Frame to be equal to the Cycle Time Frame. This will aggregate all the statistics into one bucket and show the physics values, such as volume, relative to all the past volumes that TV will allow.
How do I see candles without segmentation?
Segmentation can be very helpful in one context or annoying in another. Segmentation can be removed by setting the candle resolution value to 1.
Notes
I have yet to find a trading platform that consistently provides accurate real-time volume and pricing information, or adequate end-user data validation and quality control. I can provide plenty of examples of real-time volume counts or prices provided by TradingView and other platforms that were significantly off from what they should have been when compared against the exchange’s own data, and were later retroactively corrected or not corrected at all. Since no indicator can work accurately with inaccurate data, please use at your own discretion.
The first version is a beta version. Debugging and validating code in Pine script is difficult without proper unit testing. Please report any bugs with enough information to reproduce them and indicate why they are important. I also encourage you to export the data from TradingView and verify the calculations for your particular use case.
The indicator works on real-time updates that occur at a higher frequency than the candle time frame, which TV incorrectly refers to as ticks. They use this terminology inaccurately as updates are really aggregated tick data that can take place at different prices and may not accurately reflect the real tick price action. Consequently, this inaccuracy also impacts the real-time segmentation accuracy to some degree. TV does not provide a means of retaining “tick” information, so the higher granularity of information seen real-time will be lost on a disconnect.
TV does not provide time and sales information. The volume and price information collected using the Sample Time Frame is intraday, which provides only part of the picture. Intraday volume is generally 50 to 80% of the end-of-day volume. Consequently, the daily+ OHLC prices are intraday, and may differ significantly from exchange-settled OHLC prices.
The Cycle and Window Time Frames refer to calendar days and time, not trading days or time. For example, the first window week of a monthly cycle is the first seven days of the month, not the first Monday through Friday of trading for the month.
Chart Time Frames that are higher than the Window Time Frames average the normalized physics for price action that occurred within a given Candle segment. They do not average price action that did not occur.
One of the main performance bottlenecks in TradingView’s Pine Script is client-side drawing and plotting. The performance of this indicator can be increased by lowering the resolution (the number of sub-candles this indicator plots), getting a faster computer, or improving your computer’s performance, for example by plugging your laptop in and eliminating unnecessary processes.
The statistical integrity of this indicator relies on the number of samples collected per sample window in a given cycle. Higher sample counts can be obtained by increasing the chart time frame or upgrading the TradingView plan for a higher bar count. While increasing the chart time frame doesn’t increase the visual number of bars plotted on the chart, it does increase the number of bars that can be pulled at a lower time frame, up to 100,000.
Due to a limitation in Pine Script’s request.security_lower_tf() function, using a spread symbol will only work for regular trading hours, not extended trading hours.
Ideally, velocity or momentum should be calculated between candle closes. To eliminate the need to deal with price gaps that would lead to incorrect statistical distributions, momentum is calculated between the candle open and close as a percent change of the price or value, which should not be an issue for most liquid securities.
Realtime 5D Profile [LucF]
█ OVERVIEW
This indicator displays a realtime profile that can be configured to visualize five dimensions: volume, price, time, activity and age. For each price level in a bar or timeframe, you can display total or delta volume or ticks. The tick count measures activity on a level. The thickness of each level's line indicates its age, which helps you identify the most recent levels.
█ WARNING
The indicator only works in real time. Contrary to TradingView's line of volume profile indicators , it does not show anything on historical bars or closed markets, and it cannot display volume information if none exists for the data feed the chart is using. A realtime indicator such as this one only displays information accumulated while it is running on a chart. The information it calculates cannot be saved on charts, nor can it be recalculated from historical bars. If you refresh the chart, or the script must re-execute for some reason, as when you change inputs, the accumulated information will be lost.
Because "Realtime 5D Profile" requires time to accumulate information on the chart, it will be most useful to traders working on small timeframes who trade only one instrument and do not frequently change their chart's symbol or timeframe. Traders working on higher timeframes or constantly changing charts will be better served by TradingView's volume profiles. Before using this indicator, please see the "Limitations" section further down for other important information.
█ HOW TO USE IT
Load the indicator on an active chart (see here if you don't know how).
The default configuration displays:
• A double-sided volume profile showing at what price levels activity has occurred.
• The left side shows "down" volume, the right side shows "up" volume.
• The value corresponding to each level is displayed.
• The width of lines reflects their relative value.
• The thickness of lines reflects their age. Four thicknesses are used, with the thicker lines being the most recent.
• The total value of down/up values for the profile appears at the top.
To understand how to use profiles in your trading, please research the subject. Searches on "volume profile" or "market profile" will yield many useful results. I provide you with tools — I do not teach trading. To understand more about this indicator, read on. If you choose not to do so, please don't ask me to answer questions that are already answered here, nor to make videos; I don't.
█ CONCEPTS
Delta calculations
Volume is slotted in up or down slots depending on whether the price of each new chart update is higher or lower than the previous update's price. When price does not move between chart updates, the last known direction is used. In a perfect world, Pine scripts would have access to bid and ask levels, as this would allow us to know for sure if market orders are being filled on upticks (at the ask) or downticks (at the bid). Comparing the price of successive chart updates provides the most precise way to calculate volume delta on TradingView, but it is still a compromise. Order books are in constant movement; in some cases, order cancellations can cause sudden movements of both the bid and ask levels such that the next chart update can occur on an uptick at a lower price than the previous one (or vice versa). While this update's volume should be slotted in the up slot because a buy market order was filled, it will erroneously be slotted in the down slot because the price of the chart's update is lower than that of the previous one. Luckily, these conditions are relatively rare, so they should not adversely affect calculations.
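A bare-bones sketch of that slotting logic is shown below: each realtime update's incremental volume is added to an up or down bucket depending on how its price compares to the previous update's, with the last known direction reused when price is unchanged. This is only the accumulation idea, not this script's code, and it leaves out the per-level distribution the profile performs.
//@version=5
// Realtime up/down volume slotting sketch (illustrative only). varip values persist across realtime
// updates and are lost on refresh, which is why this kind of logic cannot work on historical bars.
indicator("Realtime delta sketch")
varip float lastPrice = close
varip int   lastDir   = 1          // last known direction, reused when price does not move
varip float prevVol   = 0.0        // the bar's cumulative volume at the previous update
varip float upVol     = 0.0
varip float dnVol     = 0.0
if barstate.isrealtime
    if barstate.isnew
        prevVol := 0.0                                        // a new bar starts a fresh cumulative volume
    dir  = close > lastPrice ? 1 : close < lastPrice ? -1 : lastDir
    dVol = volume - prevVol                                   // volume contributed by this update
    if dir > 0
        upVol += dVol
    else
        dnVol += dVol
    prevVol   := volume
    lastDir   := dir
    lastPrice := close
plot(upVol, "Up volume", color.teal)
plot(dnVol, "Down volume", color.maroon)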
Levels
A profile is a tool that displays information organized by price levels. You can select the maximum quantity of levels this indicator displays by using the script's "Levels" input. If the profile's height is small enough for level increments to be less than the symbol's tick size, a smaller quantity of levels is used until the profile's height grows sufficiently to allow your specified quantity of levels to be displayed. The exact position of levels is not tethered to the symbol's tick increments. Activity for one level is that which happens on either side of the level, halfway between its higher or lower levels. The lowest/highest levels in the profile thus appear higher/lower than the profile's low/high limits, which are determined by the lowest/highest points reached by price during the profile's life.
Level Values and Length
The profile's vertical structure is dynamic. As the profile's height changes with the price range, it is rebalanced and the price points of its levels may be recalculated. When this happens, past updates will be redistributed among the new profile's levels, and the level values may thus change. The new levels where updates are slotted will of course always be near past ones, but keep this fluidity in mind when watching level values evolve.
The profile's horizontal structure is also dynamic. The maximum length of level lines is controlled by the "Maximum line length" input value. This maximum length is always used for the largest level value in the profile, and the length of other levels is determined by their value relative to that maximum.
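The proportional sizing described above boils down to something like the following sketch (hypothetical names and placeholder values, not the script's code; the script itself renders levels with labels of varying character counts, as noted in the "For Pine Coders" section).
//@version=5
// Sketch of proportional level-line length (hypothetical names, placeholder values).
indicator("Level length sketch")
maxLineLength = input.int(30, "Maximum line length")
levelValue    = 1200.0      // placeholder: value accumulated at one level
maxLevelValue = 5000.0      // placeholder: largest level value in the profile
levelLength   = math.max(1, math.round(maxLineLength * levelValue / maxLevelValue))
plot(levelLength, "Level length (characters)")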
Updates vs Ticks
Strictly speaking, a tick is the record of a transaction between two parties. On TradingView, these are detected on seconds charts. On other charts, ticks are aggregated to form a chart update . I use the broader "update" term when it names both events. Note that, confusingly, tick is also used to name an instrument's minimal price increment.
Volume Quality
If you use volume, it's important to understand its nature and quality, as it varies with sectors and instruments. My Volume X-ray indicator is one way you can appraise the quality of an instrument's intraday volume.
█ FEATURES
Double-Sided Profiles
When you choose one of the first two configuration selections in the "Configuration" field's dropdown menu, you are asking the indicator to display a double-sided profile, i.e., where the down values appear on the left and the up ones on the right. In this mode, the formatting options in the top section of inputs apply to both sides of the profile.
Single-Sided Profiles
The six other selections down the "Configuration" field's dropdown menu select single-sided profiles, where one side aggregates the up/down values for either volume or ticks. In this mode, the formatting options in the top section of inputs apply to the left profile. The ones in the following "Right format" section apply to the right profile.
Calculation Mode
The "Calculation" input field allows the selection of one of two modes which applies to single-sided profiles only. Values can represent the simple total of volume or ticks at each level, or their delta. The mode has no effect when a double-sided profile is used because then, the total is represented by the sum of the left and right sides. Note that when totals are selected, all levels appear in the up color.
Age
The age of each level is always displayed as one of four line thicknesses. Thicker lines are used for the youngest levels. The age of levels is determined by averaging the times of the updates composing that level. When viewing double-sided profiles, the age of each side is calculated independently, which entails you can have a down level on the left side of the profile appear thinner than its corresponding up side level line on the right side because the updates composing the up side are more recent. When calculating the age of single-sided profiles, the age of the up/down values aggregated to calculate the side are averaged. Since they may be different, the averaged level ages will not be as responsive as when using a double-sided profile configuration, where the age of levels on each side is calculated independently and follows price action more closely. Moreover, when displaying two single-sided profiles (volume on one side and ticks on the other), the age of both sides will match because they are calculated from the same realtime updates.
Profile Resets
The profile can reset on timeframes or trend changes. The usual timeframe selections are available, including the chart's, in which case the profile will reset on each new chart bar. One of two trend detection logics can be used: Supertrend or the one used by LazyBear in his Weis Wave indicator . Settings for the trend logics are in the bottommost section of the inputs, where you can also control the display of trend changes and states. Note that the "Timeframe" field's setting also applies to the trend detection mechanism. Whatever the timeframe used for trend detection, its logic will not repaint.
Format
Formatting a profile for charts is often a challenge for traders, and this one is no exception. Varying zoom factors on your chart and the frequency of profile resets will require different profile formats. You can achieve a reasonable variety of effects by playing with the following input fields:
• "Resets on" controls how frequently new profiles are drawn. Spacing out profiles between bars can help make them more usable.
• "Levels" determines the maximum quantity of levels displayed.
• "Offset" allows you to shift the profile horizontally.
• "Profile size" affects the global size of the profile.
• Another "Size" field provides control over the size of the totals displayed above the profile.
• "Maximum line length" controls how far away from the center of the bar the lines will stretch left and right.
Colors
The color and brightness of levels and totals always allow you to determine the winning side between up and down values. On double-sided profiles, each side is always of one color, since the left side is down values and the right side, up values. However, the losing side is colored with half its brightness, so the emphasis is put on the winning side. When there is no winner, the toned-down version of each color is used for both sides. Single-sided profiles use the up and down colors in full brightness on the same side. Which one is used reflects the winning side.
Candles
The indicator can color candle bodies and borders independently. If you choose to do so, you may want to disable the chart's bars by using the eye icon near the symbol's name.
Tooltips
A tooltip showing the value of each level is available. If they do not appear when hovering over levels, select the indicator by clicking on its chart name. This should get the tooltips working.
Data Window
As usual, I provide key values in the Data Window, so you can track them. If you compare total realtime volumes for the profile and the built-in "Volume" indicator, you may see variations at some points. They are due to the different mechanisms running each program. In my experience, the values from the built-in don't always update as often as those of the profile, but they eventually catch up.
█ LIMITATIONS
• The levels do not appear exactly at the position they are calculated. They are positioned slightly lower than their actual price levels.
• Drawing a 20-level double-sided profile with totals requires 42 labels. The script will only display the last 500 labels,
so the number of levels you choose affects how many past profiles will remain visible.
• The script is quite taxing, which will sometimes make the chart's tab less responsive.
• When you first load the indicator on a chart, it will begin calculating from that moment; it will not take into account prior chart activity.
• If you let the script run long enough when using profile reset criteria that make profiles last for a long time, the script will eventually run out of memory,
as it will be tracking unmanageable amounts of chart updates. I don't know the exact quantity of updates that will cause this,
but the script can handle upwards of 60K updates per profile, which should last 1D except on the most active markets. You can follow the number of updates in the Data Window.
• The indicator's nature makes it more useful at very small timeframes, typically in the sub 15min realm.
• The Weis Wave trend detection used here has nothing to do with how David Weis detects trend changes.
LazyBear's version was a port of a port, so we are a few generations removed from the Weis technique, which uses reversals by a price unit.
I believe the version used here is useful nonetheless because it complements Supertrend rather well.
█ NOTES
The aggregated view that volume and tick profiles calculate for traders is a good example of one of the most useful things software can do for traders: look at things from a methodical, mathematical perspective, and present results in a meaningful way. Profiles are powerful because, if the volume data they use is of good enough quality, they tell us what levels are important for traders, regardless of the nature or rationality of the methods traders have used to determine those levels. Profiles don't care whether traders use the news, fundamentals, Fib numbers, pivots, or the phases of the moon to find "their" levels. They don't attempt to forecast or explain markets. They show us real stuff containing zero uncertainty, i.e., what HAS happened. I like this.
The indicator's "VPAA" chart name represents four of the five dimensions the indicator displays: volume, price, activity and age. The time dimension is implied by the fact it's a profile — and I couldn't find a proper place for a "T" in there )
I have not included alerts in the script. I may do so in the future.
For the moment, I have no plans to write a profile indicator that works on historical bars. TradingView's volume profiles already do that, and they run much faster than Pine versions could, so I don't see the point in spending efforts on a poor ersatz.
For Pine Coders
• The script uses labels that draw varying quantities of characters to break the limitation constraining other Pine plots/lines to bar boundaries.
• The code's structure was optimized for performance. When it was feasible, global arrays, "input" and other variables were used from functions,
sacrificing function readability and portability for speed. Code was also repeated in some places, to avoid the overhead of frequent function calls in high-traffic areas.
• I wrote my script using the revised recommendations in the Style Guide from the Pine v5 User Manual.
█ THANKS
• To Duyck for his function that sorts an array while keeping it in synch with another array.
The `sortTwoArrays()` function in my script is derived from the Pine Wizard's code.
• To the one and only Maestro, RicardoSantos , the creative volcano who worked hard to write a function to produce fixed-width, figure space-padded numeric values.
A change in design made the function unnecessary in this script, but I am grateful to you nonetheless.
• To midtownskr8guy , another Pine Wizard who is also a wizard with colors. I use the colors from his Pine Color Magic and Chart Theme Simulator constantly.
• Finally, thanks to users of my earlier "Delta Volume" scripts. Comments and discussions with them encouraged me to persist in figuring out how to achieve what this indicator does.
Backtesting & Trading Engine [PineCoders]
The PineCoders Backtesting and Trading Engine is a sophisticated framework with hybrid code that can run as a study to generate alerts for automated or discretionary trading while simultaneously providing backtest results. It can also easily be converted to a TradingView strategy in order to run TV backtesting. The Engine comes with many built-in strats for entries, filters, stops and exits, but you can also add your own.
If, like any self-respecting strategy modeler should, you spend a reasonable amount of time constantly researching new strategies and tinkering, our hope is that the Engine will become your inseparable go-to tool to test the validity of your creations, as once your tests are conclusive, you will be able to run this code as a study to generate the alerts required to put it in real-world use, whether for discretionary trading or to interface with an execution bot/app. You may also find the backtesting results the Engine produces in study mode enough for your needs and spend most of your time there, only occasionally converting to strategy mode in order to backtest using TV backtesting.
As you will quickly grasp when you bring up this script’s Settings, this is a complex tool. While you will be able to see results very quickly by just putting it on a chart and using its built-in strategies, in order to reap the full benefits of the PineCoders Engine, you will need to invest the time required to understand the subtleties involved in putting all its potential into play.
Disclaimer: use the Engine at your own risk.
Before we delve in more detail, here’s a bird’s eye view of the Engine’s features:
More than 40 built-in strategies,
Customizable components,
Coupling with your own external indicator,
Simple conversion from Study to Strategy modes,
Post-Exit analysis to search for alternate trade outcomes,
Use of the Data Window to show detailed bar by bar trade information and global statistics, including some not provided by TV backtesting,
Plotting of reminders and generation of alerts on in-trade events.
By combining your own strats to the built-in strats supplied with the Engine, and then tuning the numerous options and parameters in the Inputs dialog box, you will be able to play what-if scenarios from an infinite number of permutations.
USE CASES
You have written an indicator that provides an entry strat but it’s missing other components like a filter and a stop strategy. You add a plot in your indicator that respects the Engine’s External Signal Protocol, connect it to the Engine by simply selecting your indicator’s plot name in the Engine’s Settings/Inputs and then run tests on different combinations of entry stops, in-trade stops and profit taking strats to find out which one produces the best results with your entry strat.
You are building a complex strategy that you will want to run as an indicator generating alerts to be sent to a third-party execution bot. You insert your code in the Engine’s modules and leverage its trade management code to quickly move your strategy into production.
You have many different filters and want to explore results using them separately or in combination. Integrate the filter code in the Engine and run through different permutations or hook up your filtering through the external input and control your filter combos from your indicator.
You are tweaking the parameters of your entry, filter or stop strat. You integrate it in the Engine and evaluate its performance using the Engine’s statistics.
You always wondered what results a random entry strat would yield on your markets. You use the Engine’s built-in random entry strat and test it using different combinations of filters, stop and exit strats.
You want to evaluate the impact of fees and slippage on your strategy. You use the Engine’s inputs to play with different values and get immediate feedback in the detailed numbers provided in the Data Window.
You just want to inspect the individual trades your strategy generates. You include it in the Engine and then inspect trades visually on your charts, looking at the numbers in the Data Window as you move your cursor around.
You have never written a production-grade strategy and you want to learn how. Inspect the code in the Engine; you will find essential components typical of what is being used in actual trading systems.
You have run your system for a while and have compiled actual slippage information and your broker/exchange has updated his fees schedule. You enter the information in the Engine and run it on your markets to see the impact this has on your results.
FEATURES
Before going into the detail of the Inputs and the Data Window numbers, here’s a more detailed overview of the Engine’s features.
Built-in strats
The engine comes with more than 40 pre-coded strategies for the following standard system components:
Entries,
Filters,
Entry stops,
2 stage in-trade stops with kick-in rules,
Pyramiding rules,
Hard exits.
While some of the filter and stop strats provided may be useful in production-quality systems, you will not devise crazy profit-generating systems using only the entry strats supplied; that part is still up to you, as will be finding the elusive combination of components that makes winning systems. The Engine will, however, provide you with a solid foundation where all the trade management nitty-gritty is handled for you. By binding your custom strats to the Engine, you will be able to build reliable systems of the best quality currently allowed on the TV platform.
On-chart trade information
As you move over the bars in a trade, you will see trade numbers in the Data Window change at each bar. The engine calculates the P&L at every bar, including slippage and fees that would be incurred were the trade exited at that bar’s close. If the trade includes pyramided entries, those will be taken into account as well, although for those, final fees and slippage are only calculated at the trade’s exit.
You can also see on-chart markers for the entry level, stop positions, in-trade special events and entries/exits (you will want to disable these when using the Engine in strategy mode to see TV backtesting results).
Customization
You can couple your own strats to the Engine in two ways:
1. By inserting your own code in the Engine’s different modules. The modular design should enable you to do so with minimal effort by following the instructions in the code.
2. By linking an external indicator to the engine. After making the proper selections in the engine’s Settings and providing values respecting the engine’s protocol, your external indicator can, when the Engine is used in Indicator mode only:
Tell the engine when to enter long or short trades, but let the engine’s in-trade stop and exit strats manage the exits,
Signal both entries and exits,
Provide an entry stop along with your entry signal,
Filter other entry signals generated by any of the engine’s entry strats.
Conversion from strategy to study
TradingView strategies are required to backtest using the TradingView backtesting feature, but if you want to generate alerts with your script, whether for automated trading or just to trigger alerts that you will use in discretionary trading, your code has to run as a study since, for the time being, strategies can’t generate alerts. From hereon we will use indicator as a synonym for study.
Unless you want to maintain two code bases, you will need hybrid code that easily flips between strategy and indicator modes, and your code will need to restrict its use of strategy() calls and their arguments if it’s going to be able to run both as an indicator and a strategy using the same trade logic. That’s one of the benefits of using this Engine. Once you will have entered your own strats in the Engine, it will be a matter of commenting/uncommenting only four lines of code to flip between indicator and strategy modes in a matter of seconds.
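The general shape of such a hybrid pattern looks something like the sketch below, where flipping a few commented lines decides which mode is compiled. This is only a generic illustration of the idea with a placeholder entry condition; the Engine's own module layout and the exact lines it asks you to flip are different.
//@version=5
// Generic hybrid indicator/strategy pattern (illustration only; not the Engine's code).
// To flip to strategy mode: comment the indicator() line and the alertcondition() call,
// then uncomment the strategy() line and the strategy.entry() block.
indicator("Hybrid sketch", overlay = true)
// strategy("Hybrid sketch", overlay = true)
longSignal = ta.crossover(ta.sma(close, 10), ta.sma(close, 30))    // placeholder entry condition
// if longSignal
//     strategy.entry("Long", strategy.long)
alertcondition(longSignal, "Long entry", "Long entry signal")
plotshape(longSignal, "Long entry", shape.triangleup, location.belowbar, color.teal)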
Additionally, even when running in Indicator mode, the Engine will still provide you with precious numbers on your individual trades and global results, some of which are not available with normal TradingView backtesting.
Post-Exit Analysis for alternate outcomes (PEA)
While typical backtesting shows results of trade outcomes, PEA focuses on what could have happened after the exit. The intention is to help traders get an idea of the opportunity/risk in the bars following the trade in order to evaluate if their exit strategies are too aggressive or conservative.
After a trade is exited, the Engine’s PEA module continues analyzing outcomes for a user-defined quantity of bars. It identifies the maximum opportunity and risk available in that space, and calculates the drawdown required to reach the highest opportunity level post-exit, while recording the number of bars to that point.
Typically, if you can’t find opportunity greater than 1X past your trade using a few different reasonable lengths of PEA, your strategy is doing pretty good at capturing opportunity. Remember that 100% of opportunity is never capturable. If, however, PEA was finding post-trade maximum opportunity of 3 or 4X with average drawdowns of 0.3 to those areas, this could be a clue revealing your system is exiting trades prematurely. To analyze PEA numbers, you can uncomment complete sets of plots in the Plot module to reveal detailed global and individual PEA numbers.
Statistics
The Engine provides stats on your trades that TV backtesting does not provide, such as:
Average Profitability Per Trade (APPT), aka statistical expectancy, a crucial value.
APPT per bar,
Average stop size,
Traded volume .
It also shows you on a trade-by-trade basis, on-going individual trade results and data.
In-trade events
In-trade events can plot reminders and trigger alerts when they occur. The built-in events are:
Price approaching stop,
Possible tops/bottoms,
Large stop movement (for discretionary trading where stop is moved manually),
Large price movements.
Slippage and Fees
Even when running in indicator mode, the Engine allows for slippage and fees to be included in the logic and test results.
Alerts
The alert creation mechanism allows you to configure alerts on any combination of the normal or pyramided entries, exits and in-trade events.
Backtesting results
A few words on the numbers calculated in the Engine. Priority is given to numbers not shown in TV backtesting, as you can readily convert the script to a strategy if you need them.
We have chosen to focus on numbers expressing results relative to X (the trade’s risk) rather than in absolute currency numbers or in other more conventional but less useful ways. For example, most of the individual trade results are not shown in percentages, as this unit of measure is often less meaningful than those expressed in units of risk (X). A trade that closes with a +25% result, for example, is a poor outcome if it was entered with a -50% stop. Expressed in X, this trade’s P&L becomes 0.5, which provides much better insight into the trade’s outcome. A trade that closes with a P&L of +2X has earned twice the risk incurred upon entry, which would represent a pre-trade risk:reward ratio of 2.
The way to go about it, when you think in X’s and adopt the sound risk management policy of risking a fixed percentage of your account on each trade, is to equate a currency value to a unit of X. E.g., your account is 10K USD and you decide you will risk a maximum of 1% of it on each trade. That means your unit of X for each trade is worth 100 USD. If your APPT is 2X, this means every time you risk 100 USD in a trade, you can expect to make, on average, 200 USD.
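In code, converting between percentages, units of X and currency is just a couple of multiplications and divisions. Here is a hedged sketch using the numbers from the two paragraphs above (hypothetical variable names, not Engine code):
//@version=5
// Expressing a trade's result in units of X and in currency (worked example only).
indicator("Units of X sketch")
stopPct        = 0.50       // entry stop was 50% away from entry
tradeReturnPct = 0.25       // trade closed at +25%
pnlInX         = tradeReturnPct / stopPct         // = 0.5X, a poor outcome despite the +25%
riskPerTrade   = 10000 * 0.01                     // 10K USD account risking 1% -> 1X = 100 USD
pnlInCurrency  = pnlInX * riskPerTrade            // = 50 USD
plot(pnlInX, "P&L in X")
plot(pnlInCurrency, "P&L in USD", color.green)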
By presenting results this way, we hope that the Engine’s statistics will appeal to those cognisant of sound risk management strategies, while gently leading traders who aren’t, towards them.
We trade to turn in tangible profits of course, so at some point currency must come into play. Accordingly, some values such as equity, P&L, slippage and fees are expressed in currency.
Many of the usual numbers shown in TV backtests are nonetheless available, but they have been commented out in the Engine’s Plot module.
Position sizing and risk management
All good system designers understand that optimal risk management is at the very heart of all winning strategies. The risk in a trade is defined by the fraction of current equity represented by the amplitude of the stop, so in order to manage risk optimally on each trade, position size should adjust to the stop’s amplitude. Systems that enter trades with a fixed stop amplitude can get away with calculating position size as a fixed percentage of current equity. In the context of a test run where equity varies, what represents a fixed amount of risk translates into different currency values.
Dynamically adjusting position size throughout a system’s life is optimal in many ways. First, as position sizing will vary with current equity, it reproduces a behavioral pattern common to experienced traders, who will dial down risk when confronted to poor performance and increase it when performance improves. Second, limiting risk confers more predictability to statistical test results. Third, position sizing isn’t just about managing risk, it’s also about maximizing opportunity. By using the maximum leverage (no reference to trading on margin here) into the trade that your risk management strategy allows, a dynamic position size allows you to capture maximal opportunity.
To calculate position sizes using the fixed risk method, we use the following formula: Position = Account * MaxRisk% / Stop%, which calculates a position size taking into account the trade’s entry stop so that if the trade is stopped out, only the planned risk amount (100 USD in the earlier example) is lost. For someone who manages risk this way, common instructions to invest a certain percentage of your account in a position are simply worthless, as they do not take into account the risk incurred in the trade.
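Here is a quick numeric sketch of that formula, reusing the earlier 10K USD / 1% example with a hypothetical 2% entry stop (placeholder values, not the Engine's actual inputs):
//@version=5
// Fixed-risk position sizing sketch (worked example only; not the Engine's code).
indicator("Fixed-risk sizing sketch")
account       = 10000.0     // current equity, USD
maxRiskPct    = 0.01        // risk 1% of equity per trade -> 100 USD of risk
stopPct       = 0.02        // entry stop sits 2% away from the entry price
positionSize  = account * maxRiskPct / stopPct    // = 5,000 USD of exposure
lossIfStopped = positionSize * stopPct            // = 100 USD, i.e. exactly 1% of equity
plot(positionSize, "Position size (USD)")
plot(lossIfStopped, "Loss if stopped (USD)", color.red)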
The Engine lets you select either the fixed risk or fixed percentage of equity position sizing methods. The closest thing to dynamic position sizing that can currently be done with alerts is to use a bot that allows syntax to specify position size as a percentage of equity which, while being dynamic in the sense that it will adapt to current equity when the trade is entered, does not allow us to modulate position size using the stop’s amplitude. Changes to alerts are on the way which should solve this problem.
In order for you to simulate performance with the constraint of fixed position sizing, the Engine also offers a third, less preferable option, where position size is defined as a fixed percentage of initial capital so that it is constant throughout the test and will thus represent a varying proportion of current equity.
Let’s recap. The three position sizing methods the Engine offers are:
1. By specifying the maximum percentage of risk to incur on your remaining equity, so the Engine will dynamically adjust position size for each trade so that combining the stop’s amplitude with position size yields a fixed percentage of risk incurred on current equity,
2. By specifying a fixed percentage of remaining equity. Note that unless your system has a fixed stop at entry, this method will not provide maximal risk control, as risk will vary with the amplitude of the stop for every trade. This method, as the first, does however have the advantage of automatically adjusting position size to equity. It is the Engine’s default method because it has an equivalent in TV backtesting, so when flipping between indicator and strategy mode, test results will more or less correspond.
3. By specifying a fixed percentage of the Initial Capital. While this is the least preferable method, it nonetheless reflects the reality confronted by most system designers on TradingView today. In this case, risk varies both because the fixed position size in initial capital currency represents a varying percentage of remaining equity, and because the trade’s stop amplitude may vary, adding another variability vector to risk.
Note that the Engine cannot display equity results for strategies entering trades for a fixed amount of shares/contracts at a variable price.
SETTINGS/INPUTS
Because the initial text first published with a script cannot be edited later and because there are just too many options, the Engine’s Inputs will not be covered in minute detail, as they will most certainly evolve. We will go over them with broad strokes; you should be able to figure the rest out. If you have questions, just ask them here or in the PineCoders Telegram group.
Display
The display header’s checkbox does nothing.
For the moment, only one exit strategy uses a take profit level, so only that one will show information when checking “Show Take Profit Level”.
Entries
You can activate two simultaneous entry strats, each selected from the same set of strats contained in the Engine. If you select two and they fire simultaneously, the main strat’s signal will be used.
The random strat in each list uses a different seed, so you will get different results from each.
The “Filter transitions” and “Filter states” strats delegate signal generation to the selected filter(s). “Filter transitions” signals will only fire when the filter transitions into bull/bear state, so after a trade is stopped out, the next entry may take some time to trigger if the filter’s state does not change quickly. When you choose “Filter states”, then a new trade will be entered immediately after an exit in the direction the filter allows.
If you select “External Indicator”, your indicator will need to generate a +2/-2 (or a positive/negative stop value) to enter a long/short position, providing the selected filters allow for it. If you wish to use the Engine’s capacity to also derive the entry stop level from your indicator’s signal, then you must explicitly choose this option in the Entry Stops section.
Filters
You can activate as many filters as you wish; they are additive. The “Maximum stop allowed on entry” is an important component of proper risk management. If your system has an average 3% stop size and you need to trade using fixed position sizes because of alert/execution bot limitations, you must use this filter, because if your system were to enter a trade with a 15% stop, that trade would incur 5 times the normal risk, and its result would account for an abnormally high proportion of your system’s performance.
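To see why, note that with a fixed position size the loss at the stop scales linearly with the stop’s amplitude. A quick sketch using the numbers from the paragraph above:
//@version=5
indicator("Stop amplitude vs. risk (sketch)")
float positionSize = 2000.0                  // fixed position size in currency
float normalLoss   = positionSize * 0.03     // 3% stop  -> 60 in currency at risk
float outlierLoss  = positionSize * 0.15     // 15% stop -> 300 in currency at risk
plot(outlierLoss / normalLoss)               // = 5, i.e. five times the normal risk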
Remember that any filter can also be used as an entry signal, either when it changes states, or whenever no trade is active and the filter is in a bull or bear mode.
Entry Stops
An entry stop must be selected in the Engine, as it requires a stop level before the in-trade stop is calculated. Until the selected in-trade stop strat generates a stop that comes closer to price than the entry stop (or satisfies another one of the in-trade stop kick-in strats), the entry stop level is used.
It is here that you must select “External Indicator” if your indicator supplies a +price/-price value to be used as the entry stop. A +price is expected for a long entry and a -price value will enter a short with a stop at price. Note that the price is the absolute price, not an offset to the current price level.
In-Trade Stops
The Engine comes with many built-in in-trade stop strats. Note that some of them share the “Length” and “Multiple” field, so when you swap between them, be sure that the length and multiple in use correspond to what you want for that stop strat. Suggested defaults appear with the name of each strat in the dropdown.
In addition to the strat you wish to use, you must also determine when it kicks in to replace the initial entry’s stop, which is determined using different strats. For strats where you can define a positive or negative multiple of X, percentage or fixed value for a kick-in strat, a positive value is above the trade’s entry fill and a negative one below. A value of zero represents breakeven.
Pyramiding
What you specify in this section are the rules that allow pyramiding to happen. By themselves, these rules will not generate pyramiding entries. For those to happen, entry signals must be issued by one of the active entry strats, and conform to the pyramiding rules which act as a filter for them. The “Filter must allow entry” selection must be chosen if you want the usual system’s filters to act as additional filtering criteria for your pyramided entries.
Hard Exits
You can choose from a variety of hard exit strats. Hard exits are exit strategies which signal trade exits on specific events, as opposed to price breaching a stop level as in the In-Trade Stops strategies. They are self-explanatory. The last one, labelled When Take Profit Level (multiple of X) is reached, is the only one that uses a level; contrary to stops, that level is above price and, while it is relative because it is expressed as a multiple of X, it does not move during the trade. This is the level called Take Profit that is shown when the “Show Take Profit Level” checkbox is checked in the Display section.
While stops focus on managing risk, hard exit strategies try to put the emphasis on capturing opportunity.
Slippage
You can define it as a percentage or a fixed value, with different settings for entries and exits. The entry and exit markers on the chart show the impact of slippage on the entry price (the fill).
Fees
Fees, whether expressed as a percentage of position size in and out of the trade or as a fixed value per in and out, are in the same units of currency as the capital defined in the Position Sizing section. Because fees are deducted from your capital, they have no impact on the chart marker positions.
In-Trade Events
These events will only trigger during trades. They can be helpful to act as reminders for traders using the Engine as assistance to discretionary trading.
Post-Exit Analysis
It is normally on. Some of its results will show in the Global Numbers section of the Data Window. Only a few of the statistics generated are shown; many more are available, but commented out in the Plot module.
Date Range Filtering
Note that you don’t have to change the dates to enable/disable filtering. When you are done with a specific date range, just uncheck “Date Range Filtering” to disable date filtering.
Alert Triggers
Each selection corresponds to one condition. Conditions can be combined into a single alert as you please. Just be sure you have selected the ones you want to trigger the alert before you create the alert. For example, if you trade in both directions and you want a single alert to trigger on both types of exits, you must select both “Long Exit” and “Short Exit” before creating your alert.
Once the alert is triggered, these settings no longer have relevance as they have been saved with the alert.
When viewing charts where an alert has just triggered, if your alert triggers on more than one condition, you will need the appropriate markers active on your chart to figure out which condition triggered the alert, since plotting of markers is independent of alert management.
Position sizing
You have 3 options to determine position size:
1. Proportional to Stop -> Variable, with a cap on size.
2. Percentage of equity -> Variable.
3. Percentage of Initial Capital -> Fixed.
External Indicator
This is where you connect your indicator’s plot that will generate the signals the Engine will act upon. Remember this only works in Indicator mode.
DATA WINDOW INFORMATION
The top part of the window contains global numbers while the individual trade information appears in the bottom part. The different types of units used to express values are:
curr: denotes the currency used in the Position Sizing section of Inputs for the Initial Capital value.
quote: denotes quote currency, i.e. the value the instrument is expressed in, or the right side of the market pair (USD in EURUSD).
X: the stop’s amplitude, itself expressed in quote currency, which we use to express a trade’s P&L, so that a trade with P&L=2X has made twice the stop’s amplitude in profit. This is sometimes referred to as R, since it represents one unit of risk. It is also the unit of measure used in the APPT, which denotes expected reward per unit of risk.
X%: is also the stop’s amplitude, but expressed as a percentage of the Entry Fill.
The numbers appearing in the Data Window are all prefixed:
“ALL:” the number is the average for all first entries and pyramided entries.
“1ST:” the number is for first entries only.
“PYR:” the number is for pyramided entries only.
“PEA:” the number is for Post-Exit Analyses.
Global Numbers
Numbers in this section represent the results of all trades up to the cursor on the chart.
Average Profitability Per Trade (X): This value is the most important gauge of your strat’s worthiness. It represents the return that can be expected from your strat for each unit of risk incurred. E.g., if your APPT is 2.0, then for every unit of currency you risk in a trade, you can on average expect to make two in return. APPT is also referred to as “statistical expectancy”. If it is negative, your strategy is losing, even if your win rate is very good (it means your winning trades aren’t winning enough, or your losing trades lose too much, or both). Its counterpart in currency is also shown, as is the APPT/bar, which can be a useful gauge in deciding between rivalling systems.
Profit Factor: Gross profits of winning trades divided by gross losses of losing trades. The strategy is profitable when this value is greater than 1. Not as useful as the APPT because it doesn’t take into account the win rate and the average win/loss per trade. It is calculated from the total winning/losing results of this particular backtest and has less predictive value than the APPT. A good profit factor together with a poor APPT means you just found a chart where your system outperformed. Relying too much on the profit factor is a bit like a poker player thinking that going all in with twos against aces is optimal because he just won a hand that way.
Win Rate: Percentage of winning trades out of all trades. Taken alone, it doesn’t have much to do with strategy profitability. You can have a win rate of 99% but if that one trade in 100 ruins you because of poor risk management, 99% doesn’t look so good anymore. This number speaks more of the system’s profile than its worthiness. Still, it can be useful to gauge if the system fits your personality. It can also be useful to traders intending to sell their systems, as low win rate systems are more difficult to sell and require more handholding of worried customers.
Equity (curr): This is the sum of your initial capital and the P&L of your system’s trades, including fees and slippage.
Return on Capital is the equivalent of TV’s Net Profit figure, i.e., the variation of your initial capital.
Maximum drawdown is the largest drop in equity from the highest equity point reached. There is also a close-to-close maximum drawdown value (meaning it doesn’t take into account in-trade variations) commented out in the code.
The next values are self-explanatory, until:
PYR: Avg Profitability Per Entry (X): this is the APPT for all pyramided entries.
PEA: Avg Max Opp. Available (X): the average maximal opportunity found in the Post-Exit Analyses.
PEA: Avg Drawdown to Max Opp. (X): this represents the maximum drawdown (incurred from the close at the beginning of the PEA analysis) required to reach the maximal opportunity point.
Trade Information
Numbers in this section concern only the current trade under the cursor. Most of them are self-explanatory. Use each description’s prefix to determine what the value applies to.
PYR: Avg Profitability Per Entry (X): While this value includes the impact of all current pyramided entries (and only those) and updates when you move your cursor around, P&L only reflects fees at the trade’s last bar.
PEA: Max Opp. Available (X): It’s the most profitable close reached post-trade, measured from the trade’s Exit Fill, expressed in the X value of the trade the PEA follows.
PEA: Drawdown to Max Opp. (X): This is the maximum drawdown from the trade’s Exit Fill that needs to be sustained in order to reach the maximum opportunity point, also expressed in X. Note that PEA numbers do not include slippage and fees.
EXTERNAL SIGNAL PROTOCOL
Only one external indicator can be connected to a script; in order to leverage its use to the fullest, the Engine provides options to use it as an entry signal, an entry/exit signal, or a filter. When used as an entry signal, you can also use the signal to provide the entry’s stop. Here’s how this works:
For filter state: supply +1 for bull (long entries allowed), -1 for bear (short entries allowed).
For entry signals: supply +2 for long, -2 for short.
For exit signals: supply +3 for exit from long, -3 for exit from short.
To send an entry stop level with an entry signal: Send positive stop level for long entry (e.g. 103.33 to enter a long with a stop at 103.33), negative stop level for short entry (e.g. -103.33 to enter a short with a stop at 103.33). If you use this feature, your indicator will have to check for exact stop levels of 1.0, 2.0 or 3.0 and their negative counterparts, and fudge them with a tick in order to avoid confusion with other signals in the protocol.
Remember that mere generation of the values by your indicator will have no effect until you explicitly allow their use in the appropriate sections of the Engine’s Settings/Inputs.
An example of a script issuing a signal for the Engine is published by PineCoders.
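That published example is the reference; purely as a rough, hypothetical sketch of the protocol’s encoding (the MA crossover logic below is just a stand-in, not the Engine’s or PineCoders’ actual signal logic), an external indicator could plot its signal like this:
//@version=5
indicator("Engine signal sketch", overlay = true)
// Stand-in entry logic: a simple moving average crossover.
float fastMa = ta.sma(close, 10)
float slowMa = ta.sma(close, 30)
bool  longEntry  = ta.crossover(fastMa, slowMa)
bool  shortEntry = ta.crossunder(fastMa, slowMa)
// Protocol values: +2 = enter long, -2 = enter short, 0 = no signal.
// To also supply an entry stop, plot +stopPrice/-stopPrice instead of +2/-2,
// fudging exact values of 1.0, 2.0 or 3.0 by a tick so they don't clash with the other codes.
float signal = longEntry ? 2.0 : shortEntry ? -2.0 : 0.0
plot(signal, "Signal for the Engine", display = display.data_window)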
RECOMMENDATIONS TO ASPIRING SYSTEM DESIGNERS
Stick to higher timeframes. On progressively lower timeframes, margins decrease and fees and slippage take a proportionally larger portion of profits, to the point where they can very easily turn a profitable strategy into a losing one. Additionally, your margin for error shrinks as the equilibrium of your system’s profitability becomes more fragile with the tight numbers involved in the shorter time frames. Avoid <1H time frames.
Know and calculate fees and slippage. To avoid market shock, backtest using conservative fees and slippage parameters. Systems rarely show unexpectedly good returns when they are confronted with the markets, so put all chances on your side by being outrageously conservative, or at the very least realistic. Test results that do not include fees and slippage are worthless. Slippage is there for a reason, and that’s because our interventions in the market change the market. It is easier to find alpha in illiquid markets such as cryptos because not many large players participate in them. If your backtesting results are based on moving large positions and you don’t also add the inevitable slippage that will occur when you enter/exit thin markets, your backtesting will produce unrealistic results. Even if you do include large slippage in your settings, the Engine can only do so much, as it will not let slippage push fills past the high or low of the entry bar, but the gap may be much larger in illiquid markets.
Never test and optimize your system on the same dataset , as that is the perfect recipe for overfitting or data dredging, which is trying to find one precise set of rules/parameters that works only on one dataset. These setups are the most fragile and often get destroyed when they meet the real world.
Try to find datasets yielding more than 100 trades. Less than that and results are not as reliable.
Consider all backtesting results with suspicion. If you have never entertained sceptical tendencies, now is the time to begin. If your backtest results look really good, assume they are flawed, either because of your methodology, the data you’re using or the software doing the testing. Always assume the worst and learn proper backtesting techniques such as Monte Carlo simulations and walk-forward analysis to avoid the traps and biases that unchecked greed will set for you. If you are not familiar with concepts such as survivor bias, lookahead bias and confirmation bias, learn about them.
Stick to simple bars or candles when designing systems. Other types of bars often do not yield reliable results, whether by design (Heikin Ashi) or because of the way they are implemented on TV (Renko bars).
Know that you don’t know and use that knowledge to learn more about systems and how to properly test them, about your biases, and about yourself.
Manage risk first , then capture opportunity.
Respect the inherent uncertainty of the future. Cleanse yourself of the sad arrogance and unchecked greed common to newcomers to trading. Strive for rationality. Respect the fact that while backtest results may look promising, there is no guarantee they will repeat in the future (there is actually a high probability they won’t!), because the future is fundamentally unknowable. If you develop a system that looks promising, don’t oversell it to others whose greed may lead them to entertain unreasonable expectations.
Have a plan. Understand what kind of trading system you are trying to build. Have a clear picture of where entries, exits and other important levels will be in the sort of trade you are trying to create with your system. This stated direction will help you discard more efficiently many of the inevitably useless ideas that will pop up during system design.
Be wary of complexity. Experienced systems engineers understand how rapidly complexity builds when you assemble components together—however simple each one may be. The more complex your system, the more difficult it will be to manage.
Play! Allow yourself time to play around when you design your systems. While much comes about from working with a purpose, great ideas sometimes come out of just trying things with no set goal, when you are stuck and don’t know how to move ahead. Have fun!
@LucF
NOTES
While the engine’s code can supply multiple consecutive entries of longs or shorts in order to scale positions (pyramid), all exits currently assume the execution bot will exit the totality of the position. No partial exits are currently possible with the Engine.
Because the Engine is literally crippled by the limitations on the number of plots a script can output on TV, it can only show a fraction of all the information it calculates in the Data Window. You will find in the Plot Module vast amounts of commented-out lines that you can activate if you also disable an equivalent number of other plots. This may be useful to explore certain characteristics of your system in more detail.
When backtesting using the TV backtesting feature, you will need to provide the strategy parameters you wish to use through either Settings/Properties or by changing the default values in the code’s header. These values are defined in variables and used not only in the strategy() statement, but also as defaults in the Engine’s relevant Inputs.
If you want to test using pyramiding, then both the strategy’s Setting/Properties and the Engine’s Settings/Inputs need to allow pyramiding.
If you find any bugs in the Engine, please let us know.
THANKS
To @glaz for allowing the use of his unpublished MA Squize in the filters.
To @everget for his Chandelier stop code, which is also used as a filter in the Engine.
To @RicardoSantos for his pseudo-random generator, and because it’s from him that I first read in the Pine chat about the idea of using an external indicator as input into another. In the PineCoders group, @theheirophant then mentioned the idea of using it as a buy/sell signal and @simpelyfe showed a piece of code implementing the idea. That’s the tortuous story behind the use of the external indicator in the Engine.
To @admin for the Volatility stop’s original code and for the donchian function lifted from Ichimoku.
To @BobHoward21 for the v3 version of Volatility Stop.
To @scarf and @midtownsk8rguy for the color tuning.
To many other scripters who provided encouragement and suggestions for improvement during the long process of writing and testing this piece of code.
To J. Welles Wilder Jr. for ATR, used extensively throughout the Engine.
To TradingView for graciously making an account available to PineCoders.
And finally, to all fellow PineCoders for the constant intellectual stimulation; it is a privilege to share ideas with you all. The Engine is for all TradingView PineCoders, of course—but especially for you.
Look first. Then leap.
Vector2
Library "Vector2"
Representation of two dimensional vectors or points.
This structure is used to represent positions in two dimensional space or vectors,
for example in spatial coordinates in 2D space.
~~~
references:
docs.unity3d.com
gist.github.com
github.com
gist.github.com
gist.github.com
gist.github.com
~~~
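Before the function reference, here is a minimal usage sketch; the import path below is hypothetical, so substitute the library’s actual publisher name and version:
//@version=5
indicator("Vector2 usage (sketch)")
import RicardoSantos/Vector2/1 as v2    // hypothetical import path and version
// Build two vectors and combine them.
a = v2.new(3.0, 1.5)
b = v2.from(2.0)
c = v2.add(a, b)          // (5.0, 3.5)
plot(v2.length(c))        // magnitude of the resulting vector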
new(x, y)
Create a new Vector2 object.
Parameters:
x : float . The x value of the vector, default=0.
y : float . The y value of the vector, default=0.
Returns: Vector2. Vector2 object.
-> usage:
`unitx = Vector2.new(1.0) , plot(unitx.x)`
from(value)
Assigns `value` to the `x` and `y` elements of a new vector.
Parameters:
value : float, x and y value of the vector.
Returns: Vector2. Vector2 object.
-> usage:
`one = Vector2.from(1.0), plot(one.x)`
from(value, element_sep, open_par, close_par)
Assigns `value` to the `x` and `y` elements of a new vector.
Parameters:
value : string . The `x` and `y` value of the vector in a `x,y` or `(x,y)` format; spaces and parentheses will be removed automatically.
element_sep : string . Element separator character, default=`,`.
open_par : string . Open parenthesis character, default=`(`.
close_par : string . Close parenthesis character, default=`)`.
Returns: Vector2. Vector2 object.
-> usage:
`one = Vector2.from("1.0,2"), plot(one.x)`
copy(this)
Creates a deep copy of a vector.
Parameters:
this : Vector2 . Vector2 object.
Returns: Vector2. Vector2 object.
-> usage:
`a = Vector2.new(1.0) , b = a.copy() , plot(b.x)`
down()
Vector in the form `(0, -1)`.
Returns: Vector2. Vector2 object.
left()
Vector in the form `(-1, 0)`.
Returns: Vector2. Vector2 object.
right()
Vector in the form `(1, 0)`.
Returns: Vector2. Vector2 object.
up()
Vector in the form `(0, 1)`.
Returns: Vector2. Vector2 object.
one()
Vector in the form `(1, 1)`.
Returns: Vector2. Vector2 object.
zero()
Vector in the form `(0, 0)`.
Returns: Vector2. Vector2 object.
minus_one()
Vector in the form `(-1, -1)`.
Returns: Vector2. Vector2 object.
unit_x()
Vector in the form `(1, 0)`.
Returns: Vector2. Vector2 object.
unit_y()
Vector in the form `(0, 1)`.
Returns: Vector2. Vector2 object.
nan()
Vector in the form `(float(na), float(na))`.
Returns: Vector2. Vector2 object.
xy(this)
Return the values of `x` and `y` as a tuple.
Parameters:
this : Vector2 . Vector2 object.
Returns: [float, float]. Tuple containing the `x` and `y` values.
-> usage:
`a = Vector2.new(1.0, 1.0) , [ax, ay] = a.xy() , plot(ax)`
length_squared(this)
Squared length of vector `a`, in the form `a.x^2 + a.y^2`; for comparing vectors this is computationally lighter.
Parameters:
this : Vector2 . Vector2 object.
Returns: float. Squared length of vector.
-> usage:
`a = Vector2.new(1.0, 1.0) , plot(a.length_squared())`
length(this)
Magnitude of vector `a`, in the form `sqrt(a.x^2 + a.y^2)`.
Parameters:
this : Vector2 . Vector2 object.
Returns: float. Length of vector.
-> usage:
`a = Vector2.new(1.0, 1.0) , plot(a.length())`
normalize(a)
Vector normalized with a magnitude of 1, in the form `a / length(a)`.
Parameters:
a : Vector2 . Vector2 object.
Returns: Vector2. Vector2 object.
-> usage:
`a = normalize(Vector2.new(3.0, 2.0)) , plot(a.y)`
isNA(this)
Checks if any of the components is `na`.
Parameters:
this : Vector2 . Vector2 object.
Returns: bool.
-> usage:
`p = Vector2.new(1.0, na) , plot(isNA(p) ? 1 : 0)`
add(a, b)
Adds vector `b` to `a`, in the form `(a.x + b.x, a.y + b.y)`.
Parameters:
a : Vector2 . Vector2 object.
b : Vector2 . Vector2 object.
Returns: Vector2. Vector2 object.
-> usage:
`a = one() , b = one() , c = add(a, b) , plot(c.x)`
add(a, b)
Adds vector `b` to `a`, in the form `(a.x + b, a.y + b)`.
Parameters:
a : Vector2 . Vector2 object.
b : float . Value.
Returns: Vector2. Vector2 object.
-> usage:
`a = one() , b = 1.0 , c = add(a, b) , plot(c.x)`
add(a, b)
Adds vector `b` to `a`, in the form `(a + b.x, a + b.y)`.
Parameters:
a : float . Value.
b : Vector2 . Vector2 object.
Returns: Vector2. Vector2 object.
-> usage:
`a = 1.0 , b = one() , c = add(a, b) , plot(c.x)`
subtract(a, b)
Subtract vector `b` from `a`, in the form `(a.x - b.x, a.y - b.y)`.
Parameters:
a : Vector2 . Vector2 object.
b : Vector2 . Vector2 object.
Returns: Vector2. Vector2 object.
-> usage:
`a = one() , b = one() , c = subtract(a, b) , plot(c.x)`
subtract(a, b)
Subtract vector `b` from `a`, in the form `(a.x - b, a.y - b)`.
Parameters:
a : Vector2 . vector2 object.
b : float . Value.
Returns: Vector2. Vector2 object.
-> usage:
`a = one() , b = 1.0 , c = subtract(a, b) , plot(c.x)`
subtract(a, b)
Subtract vector `b` from `a`, in the form `(a - b.x, a - b.y)`.
Parameters:
a : float . value.
b : Vector2 . Vector2 object.
Returns: Vector2. Vector2 object.
-> usage:
`a = 1.0 , b = one() , c = subtract(a, b) , plot(c.x)`
multiply(a, b)
Multiply vector `a` with `b`, in the form `(a.x * b.x, a.y * b.y)`.
Parameters:
a : Vector2 . Vector2 object.
b : Vector2 . Vector2 object.
Returns: Vector2. Vector2 object.
-> usage:
`a = one() , b = one() , c = multiply(a, b) , plot(c.x)`
multiply(a, b)
Multiply vector `a` with `b`, in the form `(a.x * b, a.y * b)`.
Parameters:
a : Vector2 . Vector2 object.
b : float . Value.
Returns: Vector2. Vector2 object.
-> usage:
`a = one() , b = 1.0 , c = multiply(a, b) , plot(c.x)`
multiply(a, b)
Multiply vector `a` with `b`, in the form `(a * b.x, a * b.y)`.
Parameters:
a : float . Value.
b : Vector2 . Vector2 object.
Returns: Vector2. Vector2 object.
-> usage:
`a = 1.0 , b = one() , c = multiply(a, b) , plot(c.x)`
divide(a, b)
Divide vector `a` with `b`, in the form `(a.x / b.x, a.y / b.y)`.
Parameters:
a : Vector2 . Vector2 object.
b : Vector2 . Vector2 object.
Returns: Vector2. Vector2 object.
-> usage:
`a = from(3.0) , b = from(2.0) , c = divide(a, b) , plot(c.x)`
divide(a, b)
Divide vector `a` with value `b`, in the form `(a.x / b, a.y / b)`.
Parameters:
a : Vector2 . Vector2 object.
b : float . Value.
Returns: Vector2. Vector2 object.
-> usage:
`a = from(3.0) , b = 2.0 , c = divide(a, b) , plot(c.x)`
divide(a, b)
Divide value `a` with vector `b`, in the form `(a / b.x, a / b.y)`.
Parameters:
a : float . Value.
b : Vector2 . Vector2 object.
Returns: Vector2. Vector2 object.
-> usage:
`a = 3.0 , b = from(2.0) , c = divide(a, b) , plot(c.x)`
negate(a)
Negative of vector `a`, in the form `(-a.x, -a.y)`.
Parameters:
a : Vector2 . Vector2 object.
Returns: Vector2. Vector2 object.
-> usage:
`a = from(3.0) , b = a.negate() , plot(b.x)`
pow(a, b)
Raise vector `a` with exponent vector `b`, in the form `(a.x ^ b.x, a.y ^ b.y)`.
Parameters:
a : Vector2 . Vector2 object.
b : Vector2 . Vector2 object.
Returns: Vector2. Vector2 object.
-> usage:
`a = from(3.0) , b = from(2.0) , c = pow(a, b) , plot(c.x)`
pow(a, b)
Raise vector `a` with value `b`, in the form `(a.x ^ b, a.y ^ b)`.
Parameters:
a : Vector2 . Vector2 object.
b : float . Value.
Returns: Vector2. Vector2 object.
-> usage:
`a = from(3.0) , b = 2.0 , c = pow(a, b) , plot(c.x)`
pow(a, b)
Raise value `a` with vector `b`, in the form `(a ^ b.x, a ^ b.y)`.
Parameters:
a : float . Value.
b : Vector2 . Vector2 object.
Returns: Vector2. Vector2 object.
-> usage:
`a = 3.0 , b = from(2.0) , c = pow(a, b) , plot(c.x)`
sqrt(a)
Square root of the elements in a vector.
Parameters:
a : Vector2 . Vector2 object.
Returns: Vector2. Vector2 object.
-> usage:
`a = from(3.0) , b = sqrt(a) , plot(b.x)`
abs(a)
Absolute value of the vector's elements.
Parameters:
a : Vector2 . Vector2 object.
Returns: Vector2. Vector2 object.
-> usage:
`a = from(-3.0) , b = abs(a) , plot(b.x)`
min(a)
Lowest element of a vector.
Parameters:
a : Vector2 . Vector2 object.
Returns: float.
-> usage:
`a = new(3.0, 1.5) , b = min(a) , plot(b)`
max(a)
Highest element of a vector.
Parameters:
a : Vector2 . Vector2 object.
Returns: float.
-> usage:
`a = new(3.0, 1.5) , b = max(a) , plot(b)`
vmax(a, b)
Highest elements of two vectors.
Parameters:
a : Vector2 . Vector2 object.
b : Vector2 . Vector2 object.
Returns: Vector2. Vector2 object.
-> usage:
`a = new(3.0, 2.0) , b = new(2.0, 3.0) , c = vmax(a, b) , plot(c.x)`
vmax(a, b, c)
Highest elements of three vectors.
Parameters:
a : Vector2 . Vector2 object.
b : Vector2 . Vector2 object.
c : Vector2 . Vector2 object.
Returns: Vector2. Vector2 object.
-> usage:
`a = new(3.0, 2.0) , b = new(2.0, 3.0) , c = new(1.5, 4.5) , d = vmax(a, b, c) , plot(d.x)`
vmin(a, b)
Lowest elements of two vectors.
Parameters:
a : Vector2 . Vector2 object.
b : Vector2 . Vector2 object.
Returns: Vector2. Vector2 object.
-> usage:
`a = new(3.0, 2.0) , b = new(2.0, 3.0) , c = vmin(a, b) , plot(c.x)`
vmin(a, b, c)
Lowest elements of three vectors.
Parameters:
a : Vector2 . Vector2 object.
b : Vector2 . Vector2 object.
c : Vector2 . Vector2 object.
Returns: Vector2. Vector2 object.
-> usage:
`a = new(3.0, 2.0) , b = new(2.0, 3.0) , c = new(1.5, 4.5) , d = vmin(a, b, c) , plot(d.x)`
perp(a)
Perpendicular Vector of `a`, in the form `(a.y, -a.x)`.
Parameters:
a : Vector2 . Vector2 object.
Returns: Vector2. Vector2 object.
-> usage:
`a = new(3.0, 1.5) , b = perp(a) , plot(b.x)`
floor(a)
Compute the floor of vector `a`.
Parameters:
a : Vector2 . Vector2 object.
Returns: Vector2. Vector2 object.
-> usage:
`a = new(3.0, 1.5) , b = floor(a) , plot(b.x)`
ceil(a)
Ceils vector `a`.
Parameters:
a : Vector2 . Vector2 object.
Returns: Vector2. Vector2 object.
-> usage:
`a = new(3.0, 1.5) , b = ceil(a) , plot(b.x)`
ceil(a, digits)
Ceils vector `a`.
Parameters:
a : Vector2 . Vector2 object.
digits : int . Digits to use as ceiling.
Returns: Vector2. Vector2 object.
round(a)
Round of vector elements.
Parameters:
a : Vector2 . Vector2 object.
Returns: Vector2. Vector2 object.
-> usage:
`a = new(3.0, 1.5) , b = round(a) , plot(b.x)`
round(a, precision)
Round of vector elements.
Parameters:
a : Vector2 . Vector2 object.
precision : int . Number of digits to round vector "a" elements.
Returns: Vector2. Vector2 object.
-> usage:
`a = new(0.123456, 1.234567) , b = round(a, 2) , plot(b.x)`
fractional(a)
Compute the fractional part of the elements from vector `a`.
Parameters:
a : Vector2 . Vector2 object.
Returns: Vector2. Vector2 object.
-> usage:
`a = new(3.123456, 1.23456) , b = fractional(a) , plot(b.x)`
dot_product(a, b)
Dot product of 2 vectors, in the form `a.x * b.x + a.y * b.y`.
Parameters:
a : Vector2 . Vector2 object.
b : Vector2 . Vector2 object.
Returns: float.
-> usage:
`a = new(3.0, 1.5) , b = from(2.0) , c = dot_product(a, b) , plot(c)`
cross_product(a, b)
cross product of 2 vectors, in the form `a.x * b.y - a.y * b.x`.
Parameters:
a : Vector2 . Vector2 object.
b : Vector2 . Vector2 object.
Returns: float.
-> usage:
`a = new(3.0, 1.5) , b = from(2.0) , c = cross_product(a, b) , plot(c)`
equals(a, b)
Compares two vectors
Parameters:
a : Vector2 . Vector2 object.
b : Vector2 . Vector2 object.
Returns: bool. Representing the equality.
-> usage:
`a = new(3.0, 1.5) , b = from(2.0) , c = equals(a, b) ? 1 : 0 , plot(c)`
sin(a)
Compute the sine of argument vector `a`.
Parameters:
a : Vector2 . Vector2 object.
Returns: Vector2. Vector2 object.
-> usage:
`a = new(3.0, 1.5) , b = sin(a) , plot(b.x)`
cos(a)
Compute the cosine of argument vector `a`.
Parameters:
a : Vector2 . Vector2 object.
Returns: Vector2. Vector2 object.
-> usage:
`a = new(3.0, 1.5) , b = cos(a) , plot(b.x)`
tan(a)
Compute the tangent of argument vector `a`.
Parameters:
a : Vector2 . Vector2 object.
Returns: Vector2. Vector2 object.
-> usage:
`a = new(3.0, 1.5) , b = tan(a) , plot(b.x)`
atan2(x, y)
Approximation to atan2 calculation, arc tangent of `y/x` in the range (-pi,pi) radians.
Parameters:
x : float . The x value of the vector.
y : float . The y value of the vector.
Returns: float. Value with the angle in radians (negative if in quadrant 3 or 4).
-> usage:
`a = new(3.0, 1.5) , b = atan2(a.x, a.y) , plot(b)`
atan2(a)
Approximation to atan2 calculation, arc tangent of `y/x` in the range (-pi,pi) radians.
Parameters:
a : Vector2 . Vector2 object.
Returns: float. Value with the angle in radians (negative if in quadrant 3 or 4).
-> usage:
`a = new(3.0, 1.5) , b = atan2(a) , plot(b)`
distance(a, b)
Distance between vector `a` and `b`.
Parameters:
a : Vector2 . Vector2 object.
b : Vector2 . Vector2 object.
Returns: float.
-> usage:
`a = new(3.0, 1.5) , b = from(2.0) , c = distance(a, b) , plot(c)`
rescale(a, length)
Rescale a vector to a new magnitude.
Parameters:
a : Vector2 . Vector2 object.
length : float . Magnitude.
Returns: Vector2. Vector2 object.
-> usage:
`a = new(3.0, 1.5) , b = 2.0 , c = rescale(a, b) , plot(c.x)`
rotate(a, radians)
Rotates a vector by an angle.
Parameters:
a : Vector2 . Vector2 object.
radians : float . Angle value in radians.
Returns: Vector2. Vector2 object.
-> usage:
`a = new(3.0, 1.5) , b = 2.0 , c = rotate(a, b) , plot(c.x)`
rotate_degree(a, degree)
Rotates a vector by an angle.
Parameters:
a : Vector2 . Vector2 object.
degree : float . Angle value in degrees.
Returns: Vector2. Vector2 object.
-> usage:
`a = new(3.0, 1.5) , b = 45.0 , c = rotate_degree(a, b) , plot(c.x)`
rotate_around(this, center, angle)
Rotates vector `this` around `center` by an angle value.
Parameters:
this : Vector2 . Vector2 object to rotate.
center : Vector2 . Vector2 object.
angle : float . Angle value in degrees.
Returns: Vector2. Vector2 object.
-> usage:
`a = new(3.0, 1.5) , b = from(2.0) , c = rotate_around(a, b, 45.0) , plot(c.x)`
perpendicular_distance(a, b, c)
Distance from point `a` to line between `b` and `c`.
Parameters:
a : Vector2 . Vector2 object.
b : Vector2 . Vector2 object.
c : Vector2 . Vector2 object.
Returns: float.
-> usage:
`a = new(1.5, 2.6) , b = from(1.0) , c = from(3.0) , d = perpendicular_distance(a, b, c) , plot(d)`
project(a, axis)
Project a vector onto another.
Parameters:
a : Vector2 . Vector2 object.
axis : Vector2 . Vector2 object.
Returns: Vector2. Vector2 object.
-> usage:
`a = new(3.0, 1.5) , b = from(2.0) , c = project(a, b) , plot(c.x)`
projectN(a, axis)
Project a vector onto a vector of unit length.
Parameters:
a : Vector2 . Vector2 object.
axis : Vector2 . Vector2 object.
Returns: Vector2. Vector2 object.
-> usage:
`a = new(3.0, 1.5) , b = from(2.0) , c = projectN(a, b) , plot(c.x)`
reflect(a, axis)
Reflect a vector on another.
Parameters:
a : Vector2 . Vector2 object.
axis : Vector2 . Vector2 object.
Returns: Vector2. Vector2 object.
-> usage:
`a = new(3.0, 1.5) , b = from(2.0) , c = reflect(a, b) , plot(c.x)`
reflectN(a, axis)
Reflect a vector on an arbitrary axis.
Parameters:
a : Vector2 . Vector2 object.
axis : Vector2 . Vector2 object.
Returns: Vector2. Vector2 object.
-> usage:
`a = new(3.0, 1.5) , b = from(2.0) , c = reflectN(a, b) , plot(c.x)`
angle(a)
Angle in radians of a vector.
Parameters:
a : Vector2 . Vector2 object.
Returns: float.
-> usage:
`a = new(3.0, 1.5) , b = angle(a) , plot(b)`
angle_unsigned(a, b)
Unsigned degree angle between 0 and +180, given two vectors.
Parameters:
a : Vector2 . Vector2 object.
b : Vector2 . Vector2 object.
Returns: float.
-> usage:
`a = new(3.0, 1.5) , b = from(2.0) , c = angle_unsigned(a, b) , plot(c)`
angle_signed(a, b)
Signed degree angle between -180 and +180 by given two vectors.
Parameters:
a : Vector2 . Vector2 object.
b : Vector2 . Vector2 object.
Returns: float.
-> usage:
`a = new(3.0, 1.5) , b = from(2.0) , c = angle_signed(a, b) , plot(c)`
angle_360(a, b)
Degree angle between 0 and 360, given two vectors.
Parameters:
a : Vector2 . Vector2 object.
b : Vector2 . Vector2 object.
Returns: float.
-> usage:
`a = new(3.0, 1.5) , b = from(2.0) , c = angle_360(a, b) , plot(c)`
clamp(a, min, max)
Restricts a vector between a min and max value.
Parameters:
a : Vector2 . Vector2 object.
min : Vector2 . Vector2 object. Lower boundary vector.
max : Vector2 . Vector2 object. Higher boundary vector.
Returns: Vector2. Vector2 object.
-> usage:
`a = new(3.0, 1.5) , b = from(2.0) , c = from(2.5) , d = clamp(a, b, c) , plot(d.x)`
clamp(a, min, max)
Restricts a vector between a min and max value.
Parameters:
a : Vector2 . Vector2 object.
min : float . Lower boundary value.
max : float . Higher boundary value.
Returns: Vector2. Vector2 object.
-> usage:
`a = new(3.0, 1.5) , b = clamp(a, 2.0, 2.5) , plot(b.x)`
lerp(a, b, rate)
Linearly interpolates between vectors a and b by rate.
Parameters:
a : Vector2 . Vector2 object.
b : Vector2 . Vector2 object.
rate : float . Value between (a:-infinity -> b:1.0), negative values will move away from b.
Returns: Vector2. Vector2 object.
-> usage:
`a = new(3.0, 1.5) , b = from(2.0) , c = lerp(a, b, 0.5) , plot(c.x)`
herp(a, b, rate)
Hermite curve interpolation between vectors a and b by rate.
Parameters:
a : Vector2 . Vector2 object.
b : Vector2 . Vector2 object.
rate : Vector2 . Vector2 object. Value between (a:0 > 1:b).
Returns: Vector2. Vector2 object.
-> usage:
`a = new(3.0, 1.5) , b = from(2.0) , c = from(2.5) , d = herp(a, b, c) , plot(d.x)`
transform(position, mat)
Transform a vector by the given matrix.
Parameters:
position : Vector2 . Source vector.
mat : M32 . Transformation matrix
Returns: Vector2. Transformed vector.
transform(position, mat)
Transform a vector by the given matrix.
Parameters:
position : Vector2 . Source vector.
mat : M44 . Transformation matrix
Returns: Vector2. Transformed vector.
transform(position, mat)
Transform a vector by the given matrix.
Parameters:
position : Vector2 . Source vector.
mat : matrix . Transformation matrix, requires a 3x2 or a 4x4 matrix.
Returns: Vector2. Transformed vector.
transform(this, rotation)
Transform a vector by the given quaternion rotation value.
Parameters:
this : Vector2 . Source vector.
rotation : Quaternion . Rotation to apply.
Returns: Vector2. Transformed vector.
area_triangle(a, b, c)
Find the area of a triangle formed by three vectors.
Parameters:
a : Vector2 . Vector2 object.
b : Vector2 . Vector2 object.
c : Vector2 . Vector2 object.
Returns: float.
-> usage:
`a = new(1.0, 2.0) , b = from(2.0) , c = from(1.0) , d = area_triangle(a, b, c) , plot(d)`
random(max)
2D random value.
Parameters:
max : Vector2 . Vector2 object. Vector upper boundary.
Returns: Vector2. Vector2 object.
-> usage:
`a = from(2.0) , b = random(a) , plot(b.x)`
random(max)
2D random value.
Parameters:
max : float . Vector upper boundary.
Returns: Vector2. Vector2 object.
-> usage:
`a = random(2.0) , plot(a.x)`
random(min, max)
2D random value.
Parameters:
min : Vector2 . Vector2 object. Vector lower boundary.
max : Vector2 . Vector2 object. Vector upper boundary.
Returns: Vector2. Vector2 object.
-> usage:
`a = from(1.0) , b = from(2.0) , c = random(a, b) , plot(c.x)`
random(min, max)
2D random value.
Parameters:
min : Vector2 . Vector2 object. Vector lower boundary.
max : Vector2 . Vector2 object. Vector upper boundary.
Returns: Vector2. Vector2 object.
-> usage:
`a = random(1.0, 2.0) , plot(a.x)`
noise(a)
2D Noise based on Morgan McGuire @morgan3d.
Parameters:
a : Vector2 . Vector2 object.
Returns: Vector2. Vector2 object.
-> usage:
`a = from(2.0) , b = noise(a) , plot(b.x)`
to_string(a)
Converts vector `a` to a string format, in the form `"(x, y)"`.
Parameters:
a : Vector2 . Vector2 object.
Returns: string. In `"(x, y)"` format.
-> usage:
`a = from(2.0) , l = barstate.islast ? label.new(bar_index, 0.0, to_string(a)) : label(na)`
to_string(a, format)
Converts vector `a` to a string format, in the form `"(x, y)"`.
Parameters:
a : Vector2 . Vector2 object.
format : string . Format to apply transformation.
Returns: string. In `"(x, y)"` format.
-> usage:
`a = from(2.123456) , l = barstate.islast ? label.new(bar_index, 0.0, to_string(a, "#.##")) : label(na)`
to_array(a)
Converts a vector to an array format.
Parameters:
a : Vector2 . Vector2 object.
Returns: array.
-> usage:
`a = from(2.0) , b = to_array(a) , plot(array.get(b, 0))`
to_barycentric(this, a, b, c)
Captures the barycentric coordinate of a cartesian position in the triangle plane.
Parameters:
this : Vector2 . Source cartesian coordinate position.
a : Vector2 . Triangle corner `a` vertex.
b : Vector2 . Triangle corner `b` vertex.
c : Vector2 . Triangle corner `c` vertex.
Returns: bool.
from_barycentric(this, a, b, c)
Captures the cartesian coordinate of a barycentric position in the triangle plane.
Parameters:
this : Vector2 . Source barycentric coordinate position.
a : Vector2 . Triangle corner `a` vertex.
b : Vector2 . Triangle corner `b` vertex.
c : Vector2 . Triangle corner `c` vertex.
Returns: bool.
to_complex(this)
Translate a Vector2 structure to complex.
Parameters:
this : Vector2 . Source vector.
Returns: Complex.
to_polar(this)
Translate a Vector2 cartesian coordinate into polar coordinates.
Parameters:
this : Vector2 . Source vector.
Returns: Pole. The returned angle is in radians.
Alerts
█ OVERVIEW
This library is a Pine Script™ programmer's tool that provides functions to simplify the creation of compound conditions and alert messages. With these functions, scripts can use comma-separated "string" lists to specify condition groups from arbitrarily large "bool" arrays, offering a convenient way to provide highly flexible alert creation to script users without requiring numerous inputs in the "Settings/Inputs" menu.
█ CONCEPTS
Compound conditions
Compound conditions are essentially groups of two or more conditions, where each required condition must occur to produce a `true` result. Traders often combine conditions, including signals from various indicators, to drive and reinforce trade decisions. Similarly, programmers use compound conditions in logical operations to create scripts that respond dynamically to groups of events.
Condition conundrum
Providing flexible condition combinations to script users for signals and alerts often poses a significant challenge: input complexity. Conventionally, such flexibility comes at the cost of an extensive list of separate inputs for toggling individual conditions and customizing their properties, often resulting in complicated input menus that are difficult for users to navigate effectively. Furthermore, managing all those inputs usually entails tediously handling many extra variables and logical expressions, making such projects more complex for programmers.
Condensing complexity
This library introduces a technique using parsed strings to reference groups of elements from "bool" arrays, helping to simplify and streamline the construction of compound conditions and alert messages. With this approach, programmers can provide one or more "string" inputs in their scripts where users can list numbers corresponding to the conditions they want to combine.
For example, suppose you have a script that creates alert triggers based on a combination of up to 20 individual conditions, and you want to make inputs for users to choose which conditions to combine. Instead of creating 20 separate checkboxes in the "Settings/Inputs" tab and manually adding associated logic for each one, you can store the conditional values in arrays, make one or more "string" inputs that accept values listing the array item locations (e.g., "1,4,8,11"), and then pass the inputs to these functions to determine the compound conditions formed by the specified groups.
This approach condenses the input space, improving navigability and utility. Additionally, it helps provide high-level simplicity to complex conditional code, making it easier to maintain and expand over time.
█ CALCULATIONS AND USE
This library contains three functions for evaluating compound conditions: `getCompoundCondition()`, `getCompoundConditionsArray()`, and `compoundAlertMessage()`. Each function has two overloads that evaluate compound conditions based on groups of items from one or two "bool" arrays. The sections below explain the functions' calculations and how to use them.
Referencing conditions using "string" index lists
Each function processes "string" values containing comma-separated lists of numerals representing the indices of the "bool" array items to use in its calculations (e.g., "4, 8, 12"). The functions split each supplied "string" list by its commas, then iterate over those specified indices in the "bool" arrays to determine each group's combined `true` or `false` state.
For convenience, the numbers in the "string" lists can represent zero-based indices (where the first item is at index 0) or one-based indices (where the first item is at index 1), depending on the function's `zeroIndex` parameter. For example, an index list of "0, 2, 4" with a `zeroIndex` value of `true` specifies that the condition group uses the first, third, and fifth "bool" values in the array, ignoring all others. If the `zeroIndex` value is `false`, the list "1, 3, 5" also refers to those same elements.
Zero-based indexing is convenient for programmers because Pine arrays always use this index format. However, one-based indexing is often more convenient and familiar for script users, especially non-programmers.
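Under the hood, this boils down to splitting the list and checking the referenced array elements. Here is a minimal, hypothetical sketch of the idea (not the library's actual implementation; names are illustrative and the list is treated as zero-based):
//@version=5
indicator("Index list parsing (sketch)")
// Hypothetical helper: `true` only if every `conditions` element referenced by the
// comma-separated, zero-based `indexList` is `true`.
allTrue(conditions, indexList) =>
    bool result = str.length(indexList) > 0
    if result
        array<string> parts = str.split(str.replace_all(indexList, " ", ""), ",")
        for numStr in parts
            result := result and array.get(conditions, int(str.tonumber(numStr)))
    result
conds = array.from(close > open, close > ta.ema(close, 21), volume > volume[1])
plot(allTrue(conds, "0, 2") ? 1 : 0)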
Evaluating one or many condition groups
The `getCompoundCondition()` function evaluates singular condition groups determined by its `indexList` parameter, returning `true` values whenever the specified array elements are `true`. This function is helpful when a script has to evaluate specific groups of conditions and does not require many combinations.
In contrast, the `getCompoundConditionsArray()` function can evaluate numerous condition groups, one for each "string" included in its `indexLists` argument. It returns arrays containing `true` or `false` states for each listed group. This function is helpful when a script requires multiple condition combinations in additional calculations or logic.
The `compoundAlertMessage()` function is similar to the `getCompoundConditionsArray()` function. It also evaluates a separate compound condition group for each "string" in its `indexLists` array, but it returns "string" values containing the marker (name) of each group with a `true` result. You can use these returned values as the `message` argument in alert() calls, display them in labels and other drawing objects, or even use them in additional calculations and logic.
Directional condition pairs
The first overload of each function operates on a single `conditions` array, returning values representing one or more compound conditions from groups in that array. These functions are ideal for general-purpose condition groups that may or may not represent direction information.
The second overloads accept two arrays representing upward and downward conditions separately: `upConditions` and `downConditions`. These overloads evaluate opposing directional conditions in pairs (e.g., RSI is above/below a level) and return upward and downward condition information separately in a tuple.
When using the directional overloads, ensure the `upConditions` and `downConditions` arrays are the same size, with the intended condition pairs at the same indices. For instance, if you have a specific upward RSI condition's value at the first index in the `upConditions` array, include the opposing downward RSI condition's value at that same index in the `downConditions` array. If a condition can apply to both directions (e.g., rising volume), include its value at the same index in both arrays.
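For instance, the aligned pair arrays could be built like this (a small sketch with arbitrary conditions):
//@version=5
indicator("Directional condition pairs (sketch)")
float rsiValue = ta.rsi(close, 14)
// Opposing conditions sit at the same index; the non-directional "volume rising"
// value appears at the same index in both arrays.
upConditions   = array.from(close > open, rsiValue > 50, volume > volume[1])
downConditions = array.from(close < open, rsiValue < 50, volume > volume[1])
plot(array.size(upConditions))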
Group markers
To simplify the generation of informative alert messages, the `compoundAlertMessage()` function assigns "string" markers to each condition group, where "marker" refers to the group's name. The `groupMarkers` parameter allows you to assign custom markers to each listed group. If not specified, the function generates default group markers consisting of "M" followed by the group number, where "M" is short for "Marker" and the number represents the group's position starting from 1. For example, the default marker for the first group specified in the `indexLists` array is "M1".
The function's returned "string" values contain a comma-separated list with markers for each activated condition group (e.g., "M1, M4"). The function's second overload, which processes directional pairs of conditions, also appends extra characters to the markers to signify the direction. The default for upward groups is "▲" (e.g., "M1▲") and the default for downward ones is "▼" (e.g., "M1▼"). You can customize these appended characters with the `upChar` and `downChar` parameters.
Designing customizable alerts
We recommend following these primary steps when using this library to design flexible alerts for script users:
1. Create text inputs for users to specify comma-separated lists of conditions with the input.string() or input.text_area() functions, and then collect all the input values in a "string" array. Note that each separate "string" in the array will represent a distinct condition group.
2. Create arrays of "bool" values representing the possible conditions to choose from. If your script will process pairs of upward and downward conditions, ensure the related elements in the arrays align at the same indices.
3. Call `compoundAlertMessage()` using the arrays from steps 1 and 2 as arguments to get the alert message text. If your script will use the text for alerts only, not historical display or calculation purposes, the call is necessary only on realtime bars.
4. Pass the calculated "string" values as the `message` argument in alert() calls. We recommend calling the function only when the "string" is not empty (i.e., `messageText != ""`). To avoid repainting alerts on open bars, use barstate.isconfirmed in the condition to allow alert triggers only on each bar's close.
5. Test the alerts. Open the "Create Alert" dialog box and select "Any alert() function call" in the "Condition" field. It is also helpful to inspect the strings with Pine Logs.
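Here is a compressed, hypothetical sketch of those five steps with a single condition group; the import path is an assumption, and the conditions are arbitrary placeholders:
//@version=5
indicator("Compound alert sketch")
import PineCoders/Alerts/1 as pca    // hypothetical import path and version
// Step 1: one "string" input per condition group (one-based numbers for users).
string group1Input = input.string("1, 2", "M1", tooltip = "1 = Bar up\n2 = Fast EMA above slow EMA\n3 = Volume rising")
array<string> conditionGroups = array.from(group1Input)
// Step 2: the "bool" conditions users can combine.
array<bool> conditions = array.from(close > open, ta.ema(close, 9) > ta.ema(close, 21), volume > volume[1])
// Step 3: build the alert text (zeroIndex = false because the tooltip lists one-based numbers).
string messageText = pca.compoundAlertMessage(conditions, conditionGroups, false)
// Step 4: fire the alert on confirmed bars only.
if barstate.isconfirmed and messageText != ""
    alert(messageText, alert.freq_once_per_bar_close)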
NOTE: Because the techniques in this library use lists of numbers to specify conditions, we recommend including a tooltip for the "string" inputs that lists the available numbers and the conditions they represent. This tooltip provides a legend for script users, making it simple to understand and utilize. To create the tooltip, declare a "const string" listing the options and pass it to the `input.*()` call's `tooltip` parameter. See the library's example code for a simple demonstration.
█ EXAMPLE CODE
This library's example code demonstrates one possible way to offer a selection of compound conditions with "string" inputs and these functions. It uses three input.string() calls, each accepting a comma-separated list representing a distinct condition group. The title of each input represents the default group marker that appears in the label and alert text. The code collects these three input values in a `conditionGroups` array for use with the `compoundAlertMessage()` function.
In this code, we created two "bool" arrays to store six arbitrary condition pairs for demonstration:
1. Bar up/down: The bar's close price must be above the open price for upward conditions, and vice versa for downward conditions.
2. Fast EMA above/below slow EMA: The 9-period Exponential Moving Average of close prices must be above the 21-period EMA for upward conditions, and vice versa for downward conditions.
3. Volume above average: The bar's volume must exceed its 20-bar average to activate an upward or downward condition.
4. Volume rising: The volume must exceed that of the previous bar to activate an upward or downward condition.
5. RSI trending up/down: The 14-period Relative Strength Index of close prices must be between 50 and 70 for upward conditions, and between 30 and 50 for downward conditions.
6. High volatility: The 7-period Average True Range (ATR) must be above the 40-period ATR to activate an upward or downward condition.
We included a `tooltip` argument for the third input.string() call that displays the condition numbers and titles, where 1 is the first condition number.
The `bullConditions` array contains the `true` or `false` states of all individual upward conditions, and the `bearConditions` array contains all downward condition states. For the conditions that filter either direction because they are non-directional, such as "High volatility", both arrays contain the condition's `true` or `false` value at the same index. If you use these conditions alone, they activate upward and downward alert conditions simultaneously.
The example code calls `compoundAlertMessage()` using the `bullConditions`, `bearConditions`, and `conditionGroups` arrays to create a tuple of strings containing the directional markers for each activated group. On confirmed bars, it displays non-empty strings in labels and uses them in alert() calls. For the text shown in the labels, we used str.replace_all() to replace commas with newline characters, aligning the markers vertically in the display.
Look first. Then leap.
█ FUNCTIONS
This library exports the following functions:
getCompoundCondition(conditions, indexList, minRequired, zeroIndex)
(Overload 1 of 2) Determines a compound condition based on selected elements from a `conditions` array.
Parameters:
conditions (array) : (array) An array containing the possible "bool" values to use in the compound condition.
indexList (string) : (series string) A "string" containing a comma-separated list of whole numbers representing the group of `conditions` elements to use in the compound condition. For example, if the value is `"0, 2, 4"`, and `minRequired` is `na`, the function returns `true` only if the `conditions` elements at index 0, 2, and 4 are all `true`. If the value is an empty "string", the function returns `false`.
minRequired (int) : (series int) Optional. Determines the minimum number of selected conditions required to activate the compound condition. For example, if the value is 2, the function returns `true` if at least two of the specified `conditions` elements are `true`. If the value is `na`, the function returns `true` only if all specified elements are `true`. The default is `na`.
zeroIndex (bool) : (series bool) Optional. Specifies whether the `indexList` represents zero-based array indices. If `true`, a value of "0" in the list represents the first array index. If `false`, a `value` of "1" represents the first index. The default is `true`.
Returns: (bool) `true` if `conditions` elements in the group specified by the `indexList` are `true`, `false` otherwise.
getCompoundCondition(upConditions, downConditions, indexList, minRequired, allowUp, allowDown, zeroIndex)
(Overload 2 of 2) Determines upward and downward compound conditions based on selected elements from `upConditions` and `downConditions` arrays.
Parameters:
upConditions (array) : (array) An array containing the possible "bool" values to use in the upward compound condition.
downConditions (array) : (array) An array containing the possible "bool" values to use in the downward compound condition.
indexList (string) : (series string) A "string" containing a comma-separated list of whole numbers representing the `upConditions` and `downConditions` elements to use in the compound conditions. For example, if the value is `"0, 2, 4"` and `minRequired` is `na`, the function returns `true` for the first value only if the `upConditions` elements at index 0, 2, and 4 are all `true`. If the value is an empty "string", the function returns `[false, false]`.
minRequired (int) : (series int) Optional. Determines the minimum number of selected conditions required to activate either compound condition. For example, if the value is 2, the function returns `true` for its first value if at least two of the specified `upConditions` elements are `true`. If the value is `na`, the function returns `true` only if all specified elements are `true`. The default is `na`.
allowUp (bool) : (series bool) Optional. Controls whether the function considers upward compound conditions. If `false`, the function ignores the `upConditions` array, and the first item in the returned tuple is `false`. The default is `true`.
allowDown (bool) : (series bool) Optional. Controls whether the function considers downward compound conditions. If `false`, the function ignores the `downConditions` array, and the second item in the returned tuple is `false`. The default is `true`.
zeroIndex (bool) : (series bool) Optional. Specifies whether the `indexList` represents zero-based array indices. If `true`, a value of "0" in the list represents the first array index. If `false`, a value of "1" represents the first index. The default is `true`.
Returns: ([bool, bool]) A tuple containing two "bool" values representing the upward and downward compound condition states, respectively.
getCompoundConditionsArray(conditions, indexLists, zeroIndex)
(Overload 1 of 2) Creates an array of "bool" values representing compound conditions formed by selected elements from a `conditions` array.
Parameters:
conditions (array) : (array) An array containing the possible "bool" values to use in each compound condition.
indexLists (array) : (array) An array of strings containing comma-separated lists of whole numbers representing the `conditions` elements to use in each compound condition. For example, if an item is `"0, 2, 4"`, the corresponding item in the returned array is `true` only if the `conditions` elements at index 0, 2, and 4 are all `true`. If an item is an empty "string", the item in the returned array is `false`.
zeroIndex (bool) : (series bool) Optional. Specifies whether the "string" lists in the `indexLists` represent zero-based array indices. If `true`, a value of "0" in a list represents the first array index. If `false`, a value of "1" represents the first index. The default is `true`.
Returns: (array) An array of "bool" values representing compound condition states for each condition group. An item in the array is `true` only if all the `conditions` elements specified by the corresponding `indexLists` item are `true`. Otherwise, the item is `false`.
getCompoundConditionsArray(upConditions, downConditions, indexLists, allowUp, allowDown, zeroIndex)
(Overload 2 of 2) Creates two arrays of "bool" values representing compound upward and
downward conditions formed by selected elements from `upConditions` and `downConditions` arrays.
Parameters:
upConditions (array) : (array) An array containing the possible "bool" values to use in each upward compound condition.
downConditions (array) : (array) An array containing the possible "bool" values to use in each downward compound condition.
indexLists (array) : (array) An array of strings containing comma-separated lists of whole numbers representing the `upConditions` and `downConditions` elements to use in each compound condition. For example, if an item is `"0, 2, 4"`, the corresponding item in the first returned array is `true` only if the `upConditions` elements at index 0, 2, and 4 are all `true`. If an item is an empty "string", the items in both returned arrays are `false`.
allowUp (bool) : (series bool) Optional. Controls whether the function considers upward compound conditions. If `false`, the function ignores the `upConditions` array, and all elements in the first returned array are `false`. The default is `true`.
allowDown (bool) : (series bool) Optional. Controls whether the function considers downward compound conditions. If `false`, the function ignores the `downConditions` array, and all elements in the second returned array are `false`. The default is `true`.
zeroIndex (bool) : (series bool) Optional. Specifies whether the "string" lists in the `indexLists` represent zero-based array indices. If `true`, a value of "0" in a list represents the first array index. If `false`, a value of "1" represents the first index. The default is `true`.
Returns: A tuple containing two "bool" arrays:
- The first array contains values representing upward compound condition states determined using the `upConditions`.
- The second array contains values representing downward compound condition states determined using the `downConditions`.
compoundAlertMessage(conditions, indexLists, zeroIndex, groupMarkers)
(Overload 1 of 2) Creates a "string" message containing a comma-separated list of markers representing active compound conditions formed by specified element groups from a `conditions` array.
Parameters:
conditions (array) : (array) An array containing the possible "bool" values to use in each compound condition.
indexLists (array) : (array) An array of strings containing comma-separated lists of whole numbers representing the `conditions` elements to use in each compound condition. For example, if an item is `"0, 2, 4"`, the corresponding marker for that item appears in the returned "string" only if the `conditions` elements at index 0, 2, and 4 are all `true`.
zeroIndex (bool) : (series bool) Optional. Specifies whether the "string" lists in the `indexLists` represent zero-based array indices. If `true`, a value of "0" in a list represents the first array index. If `false`, a value of "1" represents the first index. The default is `true`.
groupMarkers (array) : (array) Optional. If specified, sets the marker (name) for each condition group specified in the `indexLists` array. If `na`, the function uses "M" (short for "Marker") followed by the group's one-based index as the marker for each group (e.g., the marker for the first listed group is "M1"). The default is `na`.
Returns: (string) A "string" containing a list of markers corresponding to each active compound condition.
compoundAlertMessage(upConditions, downConditions, indexLists, allowUp, allowDown, zeroIndex, groupMarkers, upChar, downChar)
(Overload 2 of 2) Creates two "string" messages containing comma-separated lists of markers representing active upward and downward compound conditions formed by specified element groups from `upConditions` and `downConditions` arrays.
Parameters:
upConditions (array) An array containing the possible "bool" values to use in each upward compound condition.
downConditions (array) An array containing the possible "bool" values to use in each downward compound condition.
indexLists (array) An array of strings containing comma-separated lists of whole numbers representing the `upConditions` and `downConditions` element groups to use in each compound condition. For example, if an item is `"0, 2, 4"`, the corresponding group marker for that item appears in the first returned "string" only if the `upConditions` elements at index 0, 2, and 4 are all `true`.
allowUp (bool) Optional. Controls whether the function considers upward compound conditions. If `false`, the function ignores the `upConditions` array and returns an empty "string" for the first tuple element. The default is `true`.
allowDown (bool) Optional. Controls whether the function considers downward compound conditions. If `false`, the function ignores the `downConditions` array and returns an empty "string" for the second tuple element. The default is `true`.
zeroIndex (bool) Optional. Specifies whether the "string" lists in the `indexLists` represent zero-based array indices. If `true`, a value of "0" in a list represents the first array index. If `false`, a value of "1" represents the first index. The default is `true`.
groupMarkers (array) Optional. If specified, sets the name (marker) of each condition group specified in the `indexLists` array. If `na`, the function uses "M" (short for "Marker") followed by the group's one-based index as the marker for each group (e.g., the marker for the first listed group is "M1"). The default is `na`.
upChar (string) Optional. A "string" appended to all group markers for upward conditions to signify direction. The default is "▲".
downChar (string) Optional. A "string" appended to all group markers for downward conditions to signify direction. The default is "▼".
Returns: A tuple of "string" values containing lists of markers corresponding to active upward and downward compound conditions, respectively.
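A comparable, hypothetical sketch for this alert overload (placeholder import path, optional parameters left at their documented defaults):
//@version=5
indicator("Compound alert demo", overlay = true)
import PublisherName/ConditionLibrary/1 as cc   // hypothetical import path
array<bool> upConditions   = array.from(ta.crossover(close, ta.sma(close, 20)), close > ta.sma(close, 200), ta.rsi(close, 14) > 55)
array<bool> downConditions = array.from(ta.crossunder(close, ta.sma(close, 20)), close < ta.sma(close, 200), ta.rsi(close, 14) < 45)
// Two groups: "M1" uses elements 0 and 1, "M2" uses element 2.
[upMsg, downMsg] = cc.compoundAlertMessage(upConditions, downConditions, array.from("0, 1", "2"))
if barstate.isconfirmed and (upMsg != "" or downMsg != "")
    alert(upMsg + " " + downMsg, alert.freq_once_per_bar_close)
Gating the alert() call on barstate.isconfirmed keeps the message based on confirmed values only.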
MTF_Drawings
Library 'MTF_Drawings'
This library helps with drawing indicators and candle charts on all timeframes.
FEATURES
CHART DRAWING : Library provides functions for drawing High Time Frame (HTF) and Low Time Frame (LTF) candles.
INDICATOR DRAWING : Library provides functions for drawing various types of HTF and LTF indicators.
CUSTOM COLOR DRAWING : Library allows coloring candles and indicators based on specific conditions.
LINEFILLS : Library provides functions for drawing linefills.
CATEGORIES
The functions are named in a way that indicates their purpose:
{Ind} : Function is meant only for indicators.
{Hist} : Function is meant only for histograms.
{Candle} : Function is meant only for candles.
{Draw} : Function draws indicators, histograms and candle charts.
{Populate} : Function generates necessary arrays required by drawing functions.
{LTF} : Function is meant only for lower timeframes.
{HTF} : Function is meant only for higher timeframes.
{D} : Function draws indicators that are composed of two lines.
{CC} : Function draws custom colored indicators.
USAGE
Import the library into your script.
Before using any {Draw} function it is necessary to use a {Populate} function.
Choose the appropriate one based on the category, provide the necessary arguments, and then use the {Draw} function, forwarding the arrays generated by the {Populate} function.
This doesn't apply to {Draw_Lines}, {LineFill}, or {Barcolor} functions.
EXAMPLE
import Spacex_trader/MTF_Drawings/1 as tf
//Request lower timeframe data.
Security(simple string Ticker, simple string New_LTF, float Ind) =>
    float[] Value = request.security_lower_tf(Ticker, New_LTF, Ind)
    Value
Timeframe = input.timeframe('1', 'Timeframe: ')
tf.Draw_Ind(tf.Populate_LTF_Ind(Security(syminfo.tickerid, Timeframe, ta.rsi(close, 14)), 498, color.purple), 1, true)
FUNCTION LIST
HTF_Candle(BarsBack, BodyBear, BodyBull, BordersBear, BordersBull, WickBear, WickBull, LineStyle, BoxStyle, LineWidth, HTF_Open, HTF_High, HTF_Low, HTF_Close, HTF_Bar_Index)
Populates two arrays with drawing data of the HTF candles.
Parameters:
BarsBack (int) : Number of bars to display.
BodyBear (color) : Candle body bear color.
BodyBull (color) : Candle body bull color.
BordersBear (color) : Candle border bear color.
BordersBull (color) : Candle border bull color.
WickBear (color) : Candle wick bear color.
WickBull (color) : Candle wick bull color.
LineStyle (string) : Wick style (Solid-Dotted-Dashed).
BoxStyle (string) : Border style (Solid-Dotted-Dashed).
LineWidth (int) : Wick width.
HTF_Open (float) : HTF open price.
HTF_High (float) : HTF high price.
HTF_Low (float) : HTF low price.
HTF_Close (float) : HTF close price.
HTF_Bar_Index (int) : HTF bar_index.
Returns: Two arrays with drawing data of the HTF candles.
LTF_Candle(BarsBack, BodyBear, BodyBull, BordersBear, BordersBull, WickBear, WickBull, LineStyle, BoxStyle, LineWidth, LTF_Open, LTF_High, LTF_Low, LTF_Close)
Populates two arrays with drawing data of the LTF candles.
Parameters:
BarsBack (int) : Number of bars to display.
BodyBear (color) : Candle body bear color.
BodyBull (color) : Candle body bull color.
BordersBear (color) : Candle border bear color.
BordersBull (color) : Candle border bull color.
WickBear (color) : Candle wick bear color.
WickBull (color) : Candle wick bull color.
LineStyle (string) : Wick style (Solid-Dotted-Dashed).
BoxStyle (string) : Border style (Solid-Dotted-Dashed).
LineWidth (int) : Wick width.
LTF_Open (float[]) : LTF open price.
LTF_High (float[]) : LTF high price.
LTF_Low (float[]) : LTF low price.
LTF_Close (float[]) : LTF close price.
Returns: Two arrays with drawing data of the LTF candles.
Draw_Candle(Box, Line, Offset)
Draws HTF or LTF candles.
Parameters:
Box (box[]) : Box array with drawing data.
Line (line[]) : Line array with drawing data.
Offset (int) : Offset of the candles.
Returns: Drawing of the candles.
Populate_HTF_Ind(IndValue, BarsBack, IndColor, HTF_Bar_Index)
Populates one array with drawing data of the HTF indicator.
Parameters:
IndValue (float) : Indicator value.
BarsBack (int) : Indicator lines to display.
IndColor (color) : Indicator color.
HTF_Bar_Index (int) : HTF bar_index.
Returns: An array with drawing data of the HTF indicator.
Populate_LTF_Ind(IndValue, BarsBack, IndColor)
Populates one array with drawing data of the LTF indicator.
Parameters:
IndValue (float[]) : Indicator value.
BarsBack (int) : Indicator lines to display.
IndColor (color) : Indicator color.
Returns: An array with drawing data of the LTF indicator.
Draw_Ind(Line, Mult, Exe)
Draws one HTF or LTF indicator.
Parameters:
Line (line[]) : Line array with drawing data.
Mult (int) : Coordinates multiplier.
Exe (bool) : Display the indicator.
Returns: Drawing of the indicator.
Populate_HTF_Ind_D(IndValue_1, IndValue_2, BarsBack, IndColor_1, IndColor_2, HTF_Bar_Index)
Populates two arrays with drawing data of the HTF indicators.
Parameters:
IndValue_1 (float) : First indicator value.
IndValue_2 (float) : Second indicator value.
BarsBack (int) : Indicator lines to display.
IndColor_1 (color) : First indicator color.
IndColor_2 (color) : Second indicator color.
HTF_Bar_Index (int) : HTF bar_index.
Returns: Two arrays with drawing data of the HTF indicators.
Populate_LTF_Ind_D(IndValue_1, IndValue_2, BarsBack, IndColor_1, IndColor_2)
Populates two arrays with drawing data of the LTF indicators.
Parameters:
IndValue_1 (float[]) : First indicator value.
IndValue_2 (float[]) : Second indicator value.
BarsBack (int) : Indicator lines to display.
IndColor_1 (color) : First indicator color.
IndColor_2 (color) : Second indicator color.
Returns: Two arrays with drawing data of the LTF indicators.
Draw_Ind_D(Line_1, Line_2, Mult, Exe_1, Exe_2)
Draws two LTF or HTF indicators.
Parameters:
Line_1 (line[]) : First line array with drawing data.
Line_2 (line[]) : Second line array with drawing data.
Mult (int) : Coordinates multiplier.
Exe_1 (bool) : Display the first indicator.
Exe_2 (bool) : Display the second indicator.
Returns: Drawings of the indicators.
Barcolor(Box, Line, BarColor)
Colors the candles based on the indicator's output.
Parameters:
Box (box[]) : Candle box array.
Line (line[]) : Candle line array.
BarColor (color[]) : Indicator color array.
Returns: Colored candles.
Populate_HTF_Ind_D_CC(IndValue_1, IndValue_2, BarsBack, BullColor, BearColor, IndColor_1, HTF_Bar_Index)
Populates two arrays with drawing data of the HTF indicators, with color based on: IndValue_1 >= IndValue_2 ? BullColor : BearColor.
Parameters:
IndValue_1 (float) : First indicator value.
IndValue_2 (float) : Second indicator value.
BarsBack (int) : Indicator lines to display.
BullColor (color) : Bull color.
BearColor (color) : Bear color.
IndColor_1 (color) : First indicator color.
HTF_Bar_Index (int) : HTF bar_index.
Returns: Three arrays with drawing and color data of the HTF indicators.
Populate_LTF_Ind_D_CC(IndValue_1, IndValue_2, BarsBack, BullColor, BearColor, IndColor_1)
Populates two arrays with drawing data of the LTF indicators with color based on: IndValue_1 >= IndValue_2 ? BullColor : BearColor.
Parameters:
IndValue_1 (float[]) : First indicator value.
IndValue_2 (float[]) : Second indicator value.
BarsBack (int) : Indicator lines to display.
BullColor (color) : Bull color.
BearColor (color) : Bear color.
IndColor_1 (color) : First indicator color.
Returns: Three arrays with drawing and color data of the LTF indicators.
Populate_HTF_Hist_CC(HistValue, IndValue_1, IndValue_2, BarsBack, BullColor, BearColor, HTF_Bar_Index)
Populates one array with drawing data of the HTF histogram with color based on: IndValue_1 >= IndValue_2 ? BullColor : BearColor.
Parameters:
HistValue (float) : Indicator value.
IndValue_1 (float) : First indicator value.
IndValue_2 (float) : Second indicator value.
BarsBack (int) : Indicator lines to display.
BullColor (color) : Bull color.
BearColor (color) : Bear color.
HTF_Bar_Index (int) : HTF bar_index.
Returns: Two arrays with drawing and color data of the HTF histogram.
Populate_LTF_Hist_CC(HistValue, IndValue_1, IndValue_2, BarsBack, BullColor, BearColor)
Populates one array with drawing data of the LTF histogram with color based on: IndValue_1 >= IndValue_2 ? BullColor : BearColor.
Parameters:
HistValue (float[]) : Indicator value.
IndValue_1 (float[]) : First indicator value.
IndValue_2 (float[]) : Second indicator value.
BarsBack (int) : Indicator lines to display.
BullColor (color) : Bull color.
BearColor (color) : Bear color.
Returns: Two arrays with drawing and color data of the LTF histogram.
Populate_LTF_Hist_CC_VA(HistValue, Value, BarsBack, BullColor, BearColor)
Populates one array with drawing data of the LTF histogram with color based on: HistValue >= Value ? BullColor : BearColor.
Parameters:
HistValue (float[]) : Indicator value.
Value (float) : Value to compare against.
BarsBack (int) : Indicator lines to display.
BullColor (color) : Bull color.
BearColor (color) : Bear color.
Returns: Two arrays with drawing and color data of the LTF histogram.
Populate_HTF_Ind_CC(IndValue, IndValue_1, BarsBack, BullColor, BearColor, HTF_Bar_Index)
Populates one array with drawing data of the HTF indicator with color based on: IndValue >= IndValue_1 ? BullColor : BearColor.
Parameters:
IndValue (float) : Indicator value.
IndValue_1 (float) : Second indicator value.
BarsBack (int) : Indicator lines to display.
BullColor (color) : Bull color.
BearColor (color) : Bear color.
HTF_Bar_Index (int) : HTF bar_index.
Returns: Two arrays with drawing and color data of the HTF indicator.
Populate_LTF_Ind_CC(IndValue, IndValue_1, BarsBack, BullColor, BearColor)
Populates one array with drawing data of the LTF indicator with color based on: IndValue >= IndValue_1 ? BullColor : BearColor.
Parameters:
IndValue (float[]) : Indicator value.
IndValue_1 (float[]) : Second indicator value.
BarsBack (int) : Indicator lines to display.
BullColor (color) : Bull color.
BearColor (color) : Bear color.
Returns: Two arrays with drawing and color data of the LTF indicator.
Draw_Lines(BarsBack, y1, y2, LineType, Fill)
Draws price lines on indicators.
Parameters:
BarsBack (int) : Indicator lines to display.
y1 (float) : Coordinates of the first line.
y2 (float) : Coordinates of the second line.
LineType (string) : Line type.
Fill (color) : Fill color.
Returns: Drawing of the lines.
LineFill(Upper, Lower, BarsBack, FillColor)
Fills the space between two HTF or LTF lines with a linefill.
Parameters:
Upper (line[]) : Upper line.
Lower (line[]) : Lower line.
BarsBack (int) : Indicator lines to display.
FillColor (color) : Fill color.
Returns: Linefill of the lines.
Populate_LTF_Hist(HistValue, BarsBack, HistColor)
Populates one array with drawing data of the LTF histogram.
Parameters:
HistValue (float[]) : Indicator value.
BarsBack (int) : Indicator lines to display.
HistColor (color) : Indicator color.
Returns: One array with drawing data of the LTF histogram.
Populate_HTF_Hist(HistValue, BarsBack, HistColor, HTF_Bar_Index)
Populates one array with drawing data of the HTF histogram.
Parameters:
HistValue (float) : Indicator value.
BarsBack (int) : Indicator lines to display.
HistColor (color) : Indicator color.
HTF_Bar_Index (int) : HTF bar_index.
Returns: One array with drawing data of the HTF histogram.
Draw_Hist(Box, Mult, Exe)
Draws HTF or LTF histogram.
Parameters:
Box (box[]) : Box Array.
Mult (int) : Coordinates multiplier.
Exe (bool) : Display the histogram.
Returns: Drawing of the histogram.
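Building on the LTF example above, a hypothetical HTF counterpart might be wired together as in the untested sketch below; the tuple request and the argument values (BarsBack, Mult, color) are illustrative assumptions, not recommended settings:
//@version=5
indicator("MTF_Drawings HTF sketch")
import Spacex_trader/MTF_Drawings/1 as tf
HTF = input.timeframe('60', 'HTF: ')
// Request both the indicator value and bar_index from the higher timeframe.
[HTF_Rsi, HTF_Index] = request.security(syminfo.tickerid, HTF, [ta.rsi(close, 14), bar_index])
tf.Draw_Ind(tf.Populate_HTF_Ind(HTF_Rsi, 100, color.teal, HTF_Index), 1, true)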
Statistical Package for the Trading Sciences [SS]
This is SPTS.
It stands for Statistical Package for the Trading Sciences.
It's a play on SPSS (Statistical Package for the Social Sciences) by IBM (software that, prior to Pinescript, I would use on a daily basis for trading).
Let's preface this indicator first:
This isn't so much an indicator as it is a project. A passion project really.
This has been in the works for months and I still feel like its incomplete. But the plan here is to continue to add functionality to it and actually have the Pinecoding and Tradingview community contribute to it.
As a math based trader, I relied on Excel, SPSS and R constantly to plan my trades. Since learning a functional amount of Pinescript and coding a lot of what I do and what I relied on SPSS, Excel and R for, I now use that software perhaps a few times a week.
This indicator, or package, has some of the key things I used Excel and SPSS for on a daily and weekly basis. This also adds a lot of, I would say, fairly complex math functionality to Pinescript. Because this is adding functionality not necessarily native to Pinescript, I have placed most, if not all, of the functionality into actual exportable functions. I have also set it up as a kind of library, with explanations and tips on how other coders can take these functions and implement them into other scripts.
The hope here is that other coders will take it, build upon it, improve it and hopefully share additional functionality that can be added into this package. Hence why I call it a project. Okay, let's get into an overview:
Current Functions of SPTS:
SPTS currently has the following functionality (further explanations will be offered below):
Ability to Perform a One-Tailed, Two-Tailed and Paired Sample T-Test, with corresponding P value.
Standard Pearson Correlation (with functionality to be able to calculate the Pearson Correlation between 2 arrays).
Quadratic (or Curvilinear) correlation assessments.
R squared Assessments.
Standard Linear Regression.
Multiple Regression of 2 independent variables.
Tests of Normality (with Kurtosis and Skewness) and recognition of up to 7 Different Distributions.
ARIMA Modeller (Sort of, more details below)
Okay, so let's go over each of them!
T-Tests
So traditionally, most correlation assessments on Pinescript are done with a generic Pearson Correlation using the "ta.correlation" function. However, this is not always the best test for assessing correlations and determining effects. One approach to correlation assessment used frequently in economics is the T-Test.
The t-test is a statistical hypothesis test used to determine if there is a significant difference between the means of two groups. It assesses whether the sample means are likely to have come from populations with the same mean. The test produces a t-statistic, which is then compared to a critical value from the t-distribution to determine statistical significance. Lower p-values indicate stronger evidence against the null hypothesis of equal means.
A significant t-test result, indicating the rejection of the null hypothesis, suggests that there is statistical evidence to support that there is a significant difference between the means of the two groups being compared. In practical terms, it means that the observed difference in sample means is unlikely to have occurred by random chance alone. Researchers typically interpret this as evidence that there is a real, meaningful difference between the groups being studied.
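For readers who want the mechanics spelled out, the sketch below computes a generic rolling two-sample (Welch-style) t-statistic in Pine. It is a stand-alone illustration with assumed inputs (SPY changes as the comparison series, a 50-bar window), not the code used inside SPTS:
//@version=5
indicator("Rolling t-statistic sketch")
// t = (mean1 - mean2) / sqrt(var1/n + var2/n), computed over a rolling window of `len` bars.
tStat(float x, float y, simple int len) =>
    float meanX = ta.sma(x, len)
    float meanY = ta.sma(y, len)
    float varX  = math.pow(ta.stdev(x, len), 2)
    float varY  = math.pow(ta.stdev(y, len), 2)
    (meanX - meanY) / math.sqrt(varX / len + varY / len)
float spyClose = request.security("AMEX:SPY", timeframe.period, close)
plot(tStat(ta.change(close), ta.change(spyClose), 50), "t-statistic")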
Some uses of the T-Test in finance include:
Risk Assessment: The t-test can be used to compare the risk profiles of different financial assets or portfolios. It helps investors assess whether the differences in returns or volatility are statistically significant.
Pairs Trading: Traders often apply the t-test when engaging in pairs trading, a strategy that involves trading two correlated securities. It helps determine when the price spread between the two assets is statistically significant and may revert to the mean.
Volatility Analysis: Traders and risk managers use t-tests to compare the volatility of different assets or portfolios, assessing whether one is significantly more or less volatile than another.
Market Efficiency Tests: Financial researchers use t-tests to test the Efficient Market Hypothesis by assessing whether stock price movements follow a random walk or if there are statistically significant deviations from it.
Value at Risk (VaR) Calculation: Risk managers use t-tests to calculate VaR, a measure of potential losses in a portfolio. It helps assess whether a portfolio's value is likely to fall below a certain threshold.
There are many other applications, but these are a few of the highlights. SPTS permits 3 different types of T-Test analyses, these being the One Tailed T-Test (if you want to test a single direction), two tailed T-Test (if you are unsure of which direction is significant) and a paired sample t-test.
Which T is the Right T?
Generally, a one-tailed t-test is used to determine if a sample mean is significantly greater than or less than a specified population mean, whereas a two-tailed t-test assesses if the sample mean is significantly different (either greater or less) from the population mean. In contrast, a paired sample t-test compares two sets of paired observations (e.g., before and after treatment) to assess if there's a significant difference in their means, typically used when the data points in each pair are related or dependent.
So which do you use? Well, it depends on what you want to know. As a general rule, a one-tailed t-test is sufficient and will help you pinpoint the directionality of the relationship (that one ticker or economic indicator has a significant effect on another in a linear way).
A two tailed is more broad and looks for significance in either direction.
A paired sample t-test usually looks at identical groups to see if one group has a statistically different outcome. This is usually used in clinical trials to compare treatment interventions in identical groups. Its use in finance is somewhat limited, but it is invaluable when you want to compare equities that track the same thing (for example SPX vs SPY vs ES1!) or you want to test a hypothesis about an index and a leveraged share (for example, the relationship between FNGU and, say, MSFT or NVDA).
Statistical Significance
In general, with a t-test you would need to reference a T-Table to determine the statistical significance from the degrees of freedom and the T-Statistic.
However, because I wanted Pinescript to fully replace SPSS and Excel, I went ahead and threw the T-Table into an array, so that Pinescript can determine the actual P value for a t-test itself, no cross referencing required :-).
Left tail (Significant):
Both tails (Significant):
Distributed throughout (insignificant):
As you can see in the images above, the t-test will also display a bell-curve analysis of where the significance falls (left tail, both tails or insignificant, distributed throughout).
That said, I have not included this function for the paired sample t-test because that is a bit more nuanced. But for the one and two tailed assessments, the indicator will provide you the P value.
Pearson Correlation Assessment
I don't think I need to go into too much detail on this one.
I have put in functionality to quickly calculate the Pearson Correlation of two arrays, which is not currently possible with the "ta.correlation" function.
Quadratic (Curvilinear) Correlation
Not everything in life is linear, sometimes things are curved!
The Pearson Correlation is great for linear assessments, but tends to under-estimate the degree of the relationship in curved relationships. There currently is no native function to test for quadratic/curvilinear relationships, so I went ahead and created one.
You can see an example of how Quadratic and Pearson Correlations vary when you look at CME_MINI:ES1! against AMEX:DIA for the past 10 ish months:
Pearson Correlation:
Quadratic Correlation:
One or the other is not always the best, so it is important to check both!
R-Squared Assessments:
The R-squared value, or the square of the Pearson correlation coefficient (r), is used to measure the proportion of variance in one variable that can be explained by the linear relationship with another variable. It represents the goodness-of-fit of a linear regression model with a single predictor variable.
R-Squared is offered in 3 separate forms within this indicator. First, there is the generic R-Squared, which is obtained by squaring a Pearson Correlation assessment to assess the variance.
The next is the R-Squared which is calculated from an actual linear regression model done within the indicator.
The third is the R-Squared which is calculated from a multiple regression model done within the indicator.
Regardless of which R-Squared value you are using, the meaning is the same. R-Square assesses the variance between the variables under assessment and can offer an insight into the goodness of fit and the ability of the model to account for the degree of variance.
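As a minimal illustration of the generic form, squaring a rolling Pearson correlation yields the shared-variance estimate; the ticker and length below are arbitrary placeholders rather than the indicator's internals:
//@version=5
indicator("R-squared sketch")
float esClose = request.security("CME_MINI:ES1!", timeframe.period, close)
float r = ta.correlation(close, esClose, 100)
plot(r * r, "R-squared")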
Here is the R Squared assessment of the SPX against the US Money Supply:
Standard Linear Regression
The indicator contains the ability to do a standard linear regression model. You can regress one stock, ticker or economic indicator onto another. The indicator will provide you with all of the expected information from a linear regression model, including the coefficients, intercept, error assessments, correlation and R2 value.
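For reference, the closed-form coefficients of a simple OLS fit (slope = r * stdev(y) / stdev(x), intercept = mean(y) - slope * mean(x)) can be sketched in a few lines; the tickers and length are placeholders, and this is not the indicator's exact code:
//@version=5
indicator("Simple linear regression sketch", overlay = true)
float x = request.security("NASDAQ:MSFT", timeframe.period, close)
float y = close   // e.g., with NASDAQ:AAPL on the chart
float slope     = ta.correlation(y, x, 100) * ta.stdev(y, 100) / ta.stdev(x, 100)
float intercept = ta.sma(y, 100) - slope * ta.sma(x, 100)
plot(intercept + slope * x, "Fitted value")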
Here is AAPL and MSFT as an example:
Multiple Regression
Oh man, this was something I really wanted in Pinescript, and now we have it!
I have created a function for multiple regression, which, if you export the function, will permit you to perform multiple regression on any variables available in Pinescript!
Using this functionality in the indicator, you will need to select 2 independent variables and a single dependent variable.
Here is an example of multiple regression for NASDAQ:AAPL using NASDAQ:MSFT and NASDAQ:NVDA:
And an example of SPX using the US Money Supply (M2) and AMEX:GLD:
Tests of Normality:
Many indicators perform a lot of functions on the assumption of normality, yet there are no indicators that actually test that assumption!
So, I have inputted a function to assess for normality. It uses the Kurtosis and Skewness to determine up to 7 different distribution types and it will explain the implication of the distribution. Here is an example of SP:SPX on the Monthly Perspective since 2010:
And NYSE:BA since the 60s:
And NVDA since 2015:
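For context, a simplified rolling skewness and excess-kurtosis calculation looks like the sketch below. These are biased estimators over an assumed 200-bar window of close-to-close changes; the indicator's own distribution classification is more involved than this:
//@version=5
indicator("Normality sketch", max_bars_back = 200)
moments(float src, simple int len) =>
    float m  = ta.sma(src, len)
    float sd = ta.stdev(src, len)
    float s3 = 0.0
    float s4 = 0.0
    for i = 0 to len - 1
        float z = (src[i] - m) / sd
        s3 += math.pow(z, 3)
        s4 += math.pow(z, 4)
    [s3 / len, s4 / len - 3.0]   // skewness, excess kurtosis
[skew, exKurt] = moments(ta.change(close), 200)
plot(skew, "Skewness")
plot(exKurt, "Excess kurtosis")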
ARIMA Modeller
Okay, so let me disclose, this isn't a full-fledged ARIMA modeller. I took some shortcuts.
True ARIMA modelling would involve decomposing the seasonality from the trend. I omitted this step for simplicity's sake. Instead, you can select between using an EMA or SMA based approach, and it will perform an autoregressive-type analysis on the EMA or SMA.
I have tested it on lookback with results provided by SPSS and this actually works better than SPSS' ARIMA function. So I am actually kind of impressed.
You will need to input your parameters for the ARIMA model, I usually would do a 14, 21 and 50 day EMA of the close price, and it will forecast out that range over the length of the EMA.
So for example, if you select the EMA 50 on the daily, it will plot out the forecast for the next 50 days based on an autoregressive model created on the EMA 50. Here is how it looks on AMEX:SPY:
You can also elect to plot the upper and lower confidence bands:
Closing Remarks
So that is the indicator/package.
I do hope to continue expanding its functionality, but as of now, it does already have quite a lot of functionality.
I really hope you enjoy it and find it helpful. This. Has. Taken. AGES! No joke. Between referencing my old statistics textbooks, trying to remember how to calculate some of these things, and wanting to throw my computer against the wall because of errors in the code, this was a task, that's for sure. So I really hope you find some usefulness in it all and enjoy the ability to be able to do functions that previously could really only be done in external software.
As always, leave your comments, suggestions and feedback below!
Take care!
[Excalibur] Ehlers AutoCorrelation Periodogram Modified
Keep your coins folks, I don't need them, don't want them. If you wish to be generous, I do hope that charitable peoples worldwide with surplus food stocks may consider stocking local food banks before stuffing monetary bank vaults, for the crusade of remedying the needs of less than fortunate children, parents, elderly, homeless veterans, and everyone else who deserves nutritional sustenance for the soul.
DEDICATION:
This script is dedicated to the memory of Nikolai Dmitriyevich Kondratiev (Никола́й Дми́триевич Кондра́тьев) as tribute for being a pioneering economist and statistician, paving the way for modern econometrics by advocation of rigorous and empirical methodologies. One of his most substantial contributions to the study of business cycle theory include a revolutionary hypothesis recognizing the existence of dynamic cycle-like phenomenon inherent to economies that are characterized by distinct phases of expansion, stagnation, recession and recovery, what we now know as "Kondratiev Waves" (K-waves). Kondratiev was one of the first economists to recognize the vital significance of applying quantitative analysis on empirical data to evaluate economic dynamics by means of statistical methods. His understanding was that conceptual models alone were insufficient to adequately interpret real-world economic conditions, and that sophisticated analysis was necessary to better comprehend the nature of trending/cycling economic behaviors. Additionally, he recognized prosperous economic cycles were predominantly driven by a combination of technological innovations and infrastructure investments that resulted in profound implications for economic growth and development.
I will mention this... nation's economies MUST be supported and defended to continuously evolve incrementally in order to flourish in perpetuity OR suffer through eras with lasting ramifications of societal stagnation and implosion.
Analogous to the realm of economics, aperiodic cycles/frequencies, both enduring and ephemeral, do exist in all facets of life, every second of every day. To name a few that any blind man can naturally see are: heartbeat (cardiac cycles), respiration rates, circadian rhythms of sleep, powerful magnetic solar cycles, seasonal cycles, lunar cycles, weather patterns, vegetative growth cycles, and ocean waves. Do not pretend for one second that these basic aforementioned examples do not affect business cycle fluctuations in minuscule and monumental ways hour to hour, day to day, season to season, year to year, and decade to decade in every nation on the planet. Kondratiev's original seminal theories in macroeconomics from nearly a century ago have proven remarkably prescient with many of his antiquated elementary observations/notions/hypotheses in macroeconomics being scholastically studied and topically researched further. Therefore, I am compelled to honor and recognize his statistical insight and foresight.
If only.. Kondratiev could hold a pocket sized computer in the cup of both hands bearing the TradingView logo and platform services, I truly believe he would be amazed in marvelous delight with a GARGANTUAN smile on his face.
INTRODUCTION:
Firstly, this is NOT technically speaking an indicator like most others. I would describe it as an advanced cycle period detector to obtain market data spectral estimates with low latency and moderate frequency resolution. Developers can take advantage of this detector by creating scripts that utilize a "Dominant Cycle Source" input to adaptively govern algorithms. Be forewarned, I would only recommend this for advanced developers, not novice code dabbling. Although, there is some Pine wizardry introduced here for novice Pine enthusiasts to witness and learn from. AI did describe the code into one super-crunched sentence as, "a rare feat of exceptionally formatted code masterfully balancing visual clarity, precision, and complexity to provide immense educational value for both programming newcomers and expert Pine coders alike."
Understand all of the above aforementioned? Buckle up and proceed for a lengthy read of verbose complexity...
This is my enhanced and heavily modified version of autocorrelation periodogram (ACP) for Pine Script v5.0. It was originally devised by the mathemagician John Ehlers for detecting dominant cycles (frequencies) in an asset's price action. I have been sitting on code similar to this for a long time, but I decided to unleash the advanced code with my fashion. Originally Ehlers released this with multiple versions, one in a 2016 TASC article and the other in his last published 2013 book "Cycle Analytics for Traders", chapter 8. He wasn't joking about "concepts of advanced technical trading" and ACP is nowhere near to his most intimidating and ingenious calculations in code. I will say the book goes into many finer details about the original periodogram, so if you wish to delve into even more elaborate info regarding Ehlers' original ACP form AND how you may adapt algorithms, you'll have to obtain one. Note to reader, comparing Ehlers' original code to my chimeric code embracing the "Power of Pine", you will notice they have little resemblance.
What you see is a new species of autocorrelation periodogram combining Ehlers' innovation with my fascinations of what ACP could be in a Pine package. One other intention of this script's code is to pay homage to Ehlers' lifelong works. Like Kondratiev, Ehlers is also a hardcore cycle enthusiast. I intend to carry on the fire Ehlers envisioned and I believe that is literally displayed here as a pleasant "fiery" example endowed with Pine. With that said, I tried to make the code as computationally efficient as possible, without going into dozens of more crazy lines of code to speed things up even more. There's also a few creative modifications I made by making alterations to the originating formulas that I felt were improvements, one of them being lag reduction. By recently questioning every single thing I thought I knew about ACP, combined with the accumulation of my current knowledge base, this is the innovative revision I came up with. I could have improved it more but decided not to mind thrash too many TV members, maybe later...
I am now confident Pine should have adequate overhead left over to attach various indicators to the dominant cycle via input.source(). TV, I apologize in advance if in the future a server cluster combusts into a raging inferno... Coders, be fully prepared to build entire algorithms from pure raw code, because not all of the built-in Pine functions fully support dynamic periods (e.g. length=ANYTHING). Many of them do, as this was requested and granted a while ago, but some functions are just inherently finicky due to implementation combinations and MUST be emulated via raw code. I would imagine some comprehensive library or numerous authored scripts have portions of raw code for Pine built-ins some where on TV if you look diligently enough.
Notice: Unfortunately, I will not provide any integration support into member's projects at all. I have my own projects that require way too much of my day already. While I was refactoring my life (forgoing many other "important" endeavors) in the early half of 2023, I primarily focused on this code over and over in my surplus time. During that same time I was working on other innovations that are far above and beyond what this code is. I hope you understand.
The best way programmatically may be to incorporate this code into your private Pine project directly, after brutal testing of course, but that may be too challenging for many in early development. Being able to see the periodogram is also beneficial, so input sourcing may be the "better" avenue to tether portions of the dominant cycle to algorithms. Unique indication being able to utilize the dominantCycle may be advantageous when tethering this script to those algorithms. The easiest way is to manually set your indicators to what ACP recognizes as the dominant cycle, but that's actually not considered dynamic real-time adaptation of an indicator. Different indicators may need a proportion of the dominantCycle, say half its value, while others may need the full value of it. That's up to you to figure that out in practice. Sourcing one or more custom indicators dynamically to one detector's dominantCycle may require code like this: `int sourceDC = int(math.max(6, math.min(49, input.source(close, "Dominant Cycle Source"))))`. Keep in mind, some algos can use a float, while algos with a for loop require an integer.
I have witnessed a few attempts by talented TV members for a Pine based autocorrelation periodogram, but not in this caliber. Trust me, coding ACP is no ordinary task to accomplish in Pine and modifying it blessed with applicable improvements is even more challenging. For over 4 years, I have been slowly improving this code here and there randomly. It is beautiful just like a real flame, but... this one can still burn you! My mind was fried to charcoal black a few times wrestling with it in the distant past. My very first attempt at translating ACP was a month long endeavor because PSv3 simply didn't have arrays back then. Anyways, this is ACP with a newer engine, I hope you enjoy it. Any TV subscriber can utilize this code as they please. If you are capable of sufficiently using it properly, please use it wisely with intended good will. That is all I beg of you.
Lastly, you now see how I have rasterized my Pine with Ehlers' swami-like tech. Yep, this whole time I have been using hline() since PSv3, not plot(). Evidently, plot() still has a deficiency limited to only 32 plots when it comes to creating intense eye candy indicators, the last I checked. The use of hline() is the optimal choice for rasterizing Ehlers styled heatmaps. This does only contain two color schemes of the many I have formerly created, but that's all that is essentially needed for this gizmo. Anything else is generally for a spectacle or seeing how brutal Pine can be color treated. The real hurdle is being able to manipulate colors dynamically with Merlin like capabilities from multiple algo results. That's the true challenging part of these heatmap contraptions to obtain multi-colored "predator vision" level indication. You now have basic hline() food for thought empowerment to wield as you can imaginatively dream in Pine projects.
PERIODOGRAM UTILITY IN REAL WORLD SCENARIOS:
This code is a testament to the abilities that have yet to be fully realized with indication advancements. Periodograms, spectrograms, and heatmaps are a powerful tool with real-world applications in various fields such as financial markets, electrical engineering, astronomy, seismology, and neuro/medical applications. For instance, among these diverse fields, it may help traders and investors identify market cycles/periodicities in financial markets, support engineers in optimizing electrical or acoustic systems, aid astronomers in understanding celestial object attributes, assist seismologists with predicting earthquake risks, help medical researchers with neurological disorder identification, and detection of asymptomatic cardiovascular clotting in the vaxxed via full body thermography. In either field of study, technologies in likeness to periodograms may very well provide us with a better sliver of analysis beyond what was ever formerly invented. Periodograms can identify dominant cycles and frequency components in data, which may provide valuable insights and possibly provide better-informed decisions. By utilizing periodograms within aspects of market analytics, individuals and organizations can potentially refrain from making blinded decisions and leverage data-driven insights instead.
PERIODOGRAM INTERPRETATION:
The periodogram renders the power spectrum of a signal, with the y-axis representing the periodicity (frequencies/wavelengths) and the x-axis representing time. The y-axis is divided into periods, with each elevation representing a period. In this periodogram, the y-axis ranges from 6 at the very bottom to 49 at the top, with intermediate values in between, all indicating the power of the corresponding frequency component by color. The higher the position occurs on the y-axis, the longer the period or lower the frequency. The x-axis of the periodogram represents time and is divided into equal intervals, with each vertical column on the axis corresponding to the time interval when the signal was measured. The most recent values/colors are on the right side.
The intensity of the colors on the periodogram indicate the power level of the corresponding frequency or period. The fire color scheme is distinctly like the heat intensity from any casual flame witnessed in a small fire from a lighter, match, or camp fire. The most intense power would be indicated by the brightest of yellow, while the lowest power would be indicated by the darkest shade of red or just black. By analyzing the pattern of colors across different periods, one may gain insights into the dominant frequency components of the signal and visually identify recurring cycles/patterns of periodicity.
SETTINGS CONFIGURATIONS BRIEFLY EXPLAINED:
Source Options: These settings allow you to choose the data source for the analysis. Using the `Source` selection, you may tether to additional data streams (e.g. close, hlcc4, hl2), which also may include samples from any other indicator. For example, this could be my "Chirped Sine Wave Generator" script found in my member profile. By using the `SineWave` selection, you may analyze a theoretical sinusoidal wave with a user-defined period, something already incorporated into the code. The `SineWave` will be displayed over top of the periodogram.
Roofing Filter Options: These inputs control the range of the passband for ACP to analyze. Ehlers had two versions of his highpass filters for his releases, so I included an option for you to see the obvious difference when performing a comparison of both. You may choose between 1st and 2nd order high-pass filters.
Spectral Controls: These settings control the core functionality of the spectral analysis results. You can adjust the autocorrelation lag, adjust the level of smoothing for Fourier coefficients, and control the contrast/behavior of the heatmap displaying the power spectra. I provided two color schemes by checking or unchecking a checkbox.
Dominant Cycle Options: These settings allow you to customize the various types of dominant cycle values. You can choose between floating-point and integer values, and select the rounding method used to derive the final dominantCycle values. Also, you may control the level of smoothing applied to the dominant cycle values.
DOMINANT CYCLE VALUE SELECTIONS:
External to the acs() function, the code takes a dominant cycle value returned from acs() and changes its numeric form based on a specified type and form chosen within the indicator settings. The dominant cycle value can be represented as an integer or a decimal number, depending on the attached algorithm's requirements. For example, FIR filters will require an integer while many IIR filters can use a float. The float forms can be either rounded, smoothed, or floored. If the resulting value is desired to be an integer, it can be rounded up/down or just be in an integer form, depending on how your algorithm may utilize it.
AUTOCORRELATION SPECTRUM FUNCTION BASICALLY EXPLAINED:
In the beginning of the acs() code, the population of caches for precalculated angular frequency factors and smoothing coefficients occur. By precalculating these factors/coefs only once and then storing them in an array, the indicator can save time and computational resources when performing subsequent calculations that require them later.
In the following code block, the "Calculate AutoCorrelations" is calculated for each period within the passband width. The calculation involves numerous summations of values extracted from the roofing filter. Finally, a correlation values array is populated with the resulting values, which are normalized correlation coefficients.
Moving on to the next block of code, labeled "Decompose Fourier Components", Fourier decomposition is performed on the autocorrelation coefficients. It iterates this time through the applicable period range of 6 to 49, calculating the real and imaginary parts of the Fourier components. Frequencies 6 to 49 are the primary focus of interest for this periodogram. Using the precalculated angular frequency factors, the resulting real and imaginary parts are then utilized to calculate the spectral Fourier components, which are stored in an array for later use.
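For reference only, the classic textbook form of this step sums cosine and sine projections of the autocorrelation coefficients for each candidate period and squares their magnitudes into a power value. The self-contained sketch below shows that generic form with assumed lag and period ranges and a crude momentum proxy in place of the roofing filter; it is not the modified code described here:
//@version=5
indicator("Autocorrelation power sketch", max_bars_back = 100)
float src = ta.change(close, 1)   // stand-in for a roofing-filter output
// Pearson autocorrelation of `src` with itself `lag` bars back, over `len` bars, by raw sums.
autoCorr(int lag, int len) =>
    float sx  = 0.0
    float sy  = 0.0
    float sxx = 0.0
    float syy = 0.0
    float sxy = 0.0
    for i = 0 to len - 1
        float xV = src[i]
        float yV = src[i + lag]
        sx  += xV
        sy  += yV
        sxx += xV * xV
        syy += yV * yV
        sxy += xV * yV
    float den = math.sqrt((len * sxx - sx * sx) * (len * syy - sy * sy))
    den == 0.0 ? 0.0 : (len * sxy - sx * sy) / den
// Squared magnitude of the Fourier component at `period`, summed over the autocorrelation lags.
powerAtPeriod(int period) =>
    float re = 0.0
    float im = 0.0
    for lag = 3 to 48
        float c = autoCorr(lag, 48)
        re += c * math.cos(2.0 * math.pi * lag / period)
        im += c * math.sin(2.0 * math.pi * lag / period)
    re * re + im * im
plot(powerAtPeriod(20), "Power at period 20")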
The next section of code smooths the noise ridden Fourier components between the periods of 6 and 49 with a selected filter. This species also employs numerous SuperSmoothers to condition noisy Fourier components. One of the big differences is Ehlers' versions used basic EMAs in this section of code. I decided to add SuperSmoothers.
The final sections of the acs() code determines the peak power component for normalization and then computes the dominant cycle period from the smoothed Fourier components. It first identifies a single spectral component with the highest power value and then assigns it as the peak power. Next, it normalizes the spectral components using the peak power value as a denominator. It then calculates the average dominant cycle period from the normalized spectral components using Ehlers' "Center of Gravity" calculation. Finally, the function returns the dominant cycle period along with the normalized spectral components for later external use to plot the periodogram.
POST SCRIPT:
Concluding, I have to acknowledge a newly found analyst for assistance that I couldn't receive from anywhere else. For one, Claude doesn't know much about Pine, is unfortunately color blind, and can't even see the Pine reference, but it was able to intuitively shred my code with laser precise realizations. Not only that, formulating and reformulating my description needed crucial finesse applied to it, and I couldn't have provided what you have read here without that artificial insight. Finding the right order of words to convey the complexity of ACP and the elaborate accompanying content was a daunting task. No code in my life has ever absorbed so much time and hard fricking work, than what you witness here, an ACP gem cut pristinely. I'm unveiling my version of ACP for an empowering cause, in the hopes a future global army of code wielders will tether it to highly functional computational contraptions they might possess. Here is ACP fully blessed poetically with the "Power of Pine" in sublime code. ENJOY!
[ChasinAlts]Top-Wicked Good S/R Lines
Hello Tradeurs, as per usual, I hope everyone is having a FAN-FRIGGIN-TASTIC day. With the soon incoming bull market approaching fast (Nov 7, 2022), there are a few ideas that I've really been trying to push out to help nail a few coins as they are near their bottom peak of this closing Bear Market. This one may seem very similar to the last one I posted but I think this one takes the cake...esp when you see the next script from my 'Market Overview' series that I will be publishing shortly after this one, as it is utilizing this new script for a market scanner that will be SUPER legit and profitable. Though it is always nice to be noticed, I'm glad that I'm relatively unpopular so the few people that are now following me can have some time to make some money with some of these scripts I'm trying to pump out for the benefit of the community. I will rarely give my full analysis of how I take in and utilize these scripts but I can tell you, QUITE A FEW of them are money in the bank. Esp these last few I've done/am doing and even more-so the ones that are soon to come (I'm speaking about the next 3-4 that I will be attempting to pump out in this next VERY IMPORTANT week).
One more thing I'll add before going to the script is a little alpha (I'm pretty certain this is the way it is going but NOTHING is EVER 100% in life). What I believe should be realized is the bottoming out of MANY of the crypto coins at the VERY bottom of a LONG TERM Cup and Handle (so it seems, but that can still change in the blink of an eye). Thus there are quite a few coins that I believe have already bottomed and won't be returning to said bottom for a few years or so, but there are also quite a few still at the brink of the bottomest part before the real market breakout occurs. My goal with these scripts coming out this week is to help you all find those coins that have yet to hit their very bottom (thus the ATH/ATL script recently published). Going back in history looking for the lowest points of long term Cup & Handles, I will point out 2 key things. Near the center/bottomest part of these historical CnH you will see either Double Bottoms OR a huge dump and then its V-shaped recovery. After these print, the point of no return has occurred where only a few coins will be going lower than these Double Bottoms/V-shaped recoveries. So the time is at hand.
Now that many coins are seemingly pumping after this long consolidation, I believe we need to keep a keen eye out for THE FINAL RUG PULL (as soon as enough degenerates are leveraging Long their entire savings). What I'm saying is be ready for this final rug pull to finally be seeing these Double Bottoms/V-shaped recoveries VERY soon. DO NOT waste all your capital yet and MAKE SURE to use stop losses, or else rather than stop losses you will be burdened with MUCH WORSE losses. I'm currently not even in the market bc I am waiting on said rug pull. Ok, on to the script now.
This script is similar to the last one, but with the previous one, one general set of settings could produce VASTLY different results (might have 2 S/R lines on one coin and 80 on another). I wanted to fix that with this script, turn it into a "Market Overview" Scanner, and create alerts for the MO Scanner so you can get alerted any time a coin is passing its largest-wick S/R levels, bc DULY NOTE...it is VERY rare that a coin will blow past one if it hasn't approached it recently. That means that a small retrace of 3-5% (or more) is EASY to acquire (with leverage that can really add up, given how many coins are in the Kucoin Margin Coin list that I have in my scanners). Now, once price does shoot through a level, you best be sure to be looking down the line for a retest of the S/R level it blew past, as they are MANY times the retest level and price will be coming back to it before continuing in the direction it was going.
Depending on the TF you're using, this could be a few hours to a few days to a few weeks...you get it. With this script you can choose to draw S/R lines 2 ways: 1) by having it plot S/R lines on the end of the largest 2 (3, 4, 5...however many you choose) wicks that the chart has access to. For the scanner I'll just be putting the largest 2-3 wicks and set alerts when coming up to them/crossing them; & 2) having it draw S/R lines on the ends of the largest X% of wicks. It will be erasing the lines and drawing new ones on each new candle occurrence, so the same general settings will no longer be producing VASTLY diff amounts of S/R lines and will be way more consistent amongst the coins for better utilization with the scanner (when I publish it). There is also a Wick Max Cutoff % so for those coins that had their first few hours printing 100% sized wicks...you can choose to ignore them so they are not taking up one of your top spots for the S/R lines. There is similarly a Wick % Min Size that can be selected, so if you're using the top % setting, it will help decrease those coins that can still be plotting 30 lines even though the top 3% of the largest wicks are set in the settings. Hope I'm being clear but it's easy enough. I believe in you and your capabilities of comprehending it all and getting it all figured out. So this script is for a visualization for the scanner that I will be uploading soon-after. It's always nice to get a few comments if my ideas/scripts have been helpful to you and please don't hold back if you have something to tell me that I screwed up on (I am still rather new to this coding thing but I like to think I at least have some fresh ideas that aren't out there in the public library). Talk to you soon and may the force be with your trades. Peace and love people...peace and love. -ChasinAlts out.
OrdinaryLeastSquaresLibrary "OrdinaryLeastSquares"
One of the most common ways to estimate the coefficients for a linear regression is to use the Ordinary Least Squares (OLS) method.
This library implements OLS in pine. This implementation can be used to fit a linear regression of multiple independent variables onto one dependent variable,
as long as the assumptions behind OLS hold.
solve_xtx_inv(x, y) Solve a linear system of equations using the Ordinary Least Squares method.
This function returns both the estimated OLS solution and a matrix that essentially measures the model stability (linear dependence between the columns of 'x').
NOTE: The latter is an intermediate step when estimating the OLS solution but is useful when calculating the covariance matrix and is returned here to save computation time
so that this step doesn't have to be calculated again when things like standard errors should be calculated.
Parameters:
x : The matrix containing the independent variables. Each column is regarded by the algorithm as one independent variable. The row count of 'x' and 'y' must match.
y : The matrix containing the dependent variable. This matrix can only contain one dependent variable and can therefore only contain one column. The row count of 'x' and 'y' must match.
Returns: Returns both the estimated OLS solution and a matrix that essentially measures the model stability (xtx_inv is equal to (X'X)^-1).
solve(x, y) Solve a linear system of equations using the Ordinary Least Squares method.
Parameters:
x : The matrix containing the independent variables. Each column is regarded by the algorithm as one independent variable. The row count of 'x' and 'y' must match.
y : The matrix containing the dependent variable. This matrix can only contain one dependent variable and can therefore only contain one column. The row count of 'x' and 'y' must match.
Returns: Returns the estimated OLS solution.
standard_errors(x, y, beta_hat, xtx_inv) Calculate the standard errors.
Parameters:
x : The matrix containing the independent variables. Each column is regarded by the algorithm as one independent variable. The row count of 'x' and 'y' must match.
y : The matrix containing the dependent variable. This matrix can only contain one dependent variable and can therefore only contain one column. The row count of 'x' and 'y' must match.
beta_hat : The Ordinary Least Squares (OLS) solution provided by solve_xtx_inv() or solve().
xtx_inv : This is (X'X)^-1, which means we take the transpose of the X matrix, multiply that the X matrix and then take the inverse of the result.
This essentially measures the linear dependence between the columns of the X matrix.
Returns: The standard errors.
estimate(x, beta_hat) Estimate the next step of a linear model.
Parameters:
x : The matrix containing the independent variables. Each column is regarded by the algorithm as one independent variable.
beta_hat : The Ordinary Least Squares (OLS) solution provided by solve_xtx_inv() or solve().
Returns: Returns the new estimate of Y based on the linear model.
Higher-timeframe requests█ OVERVIEW
This publication focuses on enhancing awareness of the best practices for accessing higher-timeframe (HTF) data via the request.security() function. Some "traditional" approaches, such as what we explored in our previous `security()` revisited publication, have shown limitations in their ability to retrieve non-repainting HTF data. The fundamental technique outlined in this script is currently the most effective in preventing repainting when requesting data from a higher timeframe. For detailed information about why it works, see this section in the Pine Script™ User Manual .
█ CONCEPTS
Understanding repainting
Repainting is a behavior that occurs when a script's calculations or outputs behave differently after restarting it. There are several types of repainting behavior, not all of which are inherently useless or misleading. The most prevalent form of repainting occurs when a script's calculations or outputs exhibit different behaviors on historical and realtime bars.
When a script calculates across historical data, it only needs to execute once per bar, as those values are confirmed and not subject to change. After each historical execution, the script commits the states of its calculations for later access.
On a realtime, unconfirmed bar, values are fluid . They are subject to change on each new tick from the data provider until the bar closes. A script's code can execute on each tick in a realtime bar, meaning its calculations and outputs are subject to realtime fluctuations, just like the underlying data it uses. Each time a script executes on an unconfirmed bar, it first reverts applicable values to their last committed states, a process referred to as rollback . It only commits the new values from a realtime bar after the bar closes. See the User Manual's Execution model page to learn more.
In essence, a script can repaint when it calculates on realtime bars due to fluctuations before a bar's confirmation, which it cannot reproduce on historical data. A common strategy to avoid repainting when necessary involves forcing only confirmed values on realtime bars, which remain unchanged until each bar's conclusion.
Repainting in higher-timeframe (HTF) requests
When working with a script that retrieves data from higher timeframes with request.security() , it's crucial to understand the differences in how such requests behave on historical and realtime bars .
The request.security() function executes all code required by its `expression` argument using data from the specified context (symbol, timeframe, or modifiers) rather than on the chart's data. As when executing code in the chart's context, request.security() only returns new historical values when a bar closes in the requested context. However, the values it returns on realtime HTF bars can also update before confirmation, akin to the rollback and recalculation process that scripts perform in the chart's context on the open bar. Similar to how scripts operate in the chart's context, request.security() only confirms new values after a realtime bar closes in its specified context.
Once a script's execution cycle restarts, what were previously realtime bars become historical bars, meaning the request.security() call will only return confirmed values from the HTF on those bars. Therefore, if the requested data fluctuates across an open HTF bar, the script will repaint those values after it restarts.
This behavior is not a bug; it's simply the default behavior of request.security() . In some cases, having the latest information from an unconfirmed HTF bar is precisely what a script needs. However, in many other cases, traders will require confirmed, stable values that do not fluctuate across an open HTF bar. Below, we explain the most reliable approach to achieve such a result.
Achieving consistent timing on all bars
One can retrieve non-fluctuating values with consistent timing across historical and realtime feeds by exclusively using request.security() to fetch the data from confirmed HTF bars. The best way to achieve this result is offsetting the `expression` argument by at least one bar (e.g., `close[1]`) and using barmerge.lookahead_on as the `lookahead` argument.
We discourage the use of barmerge.lookahead_on alone since it prompts the function to look toward future values of HTF bars across historical data, which is heavily misleading. However, when paired with a requested `expression` that includes a one-bar historical offset, the "future" data the function retrieves is not from the future. Instead, it represents the last confirmed bar's values at the start of each HTF bar, thus preventing the results on realtime bars from fluctuating before confirmation from the timeframe.
For example, this line of code uses a request.security() call with barmerge.lookahead_on to request the close price from the "1D" timeframe, offset by one bar with the history-referencing operator [ ] . This line will return the daily price with consistent timing across all bars:
float htfClose = request.security(syminfo.tickerid, "1D", close[1], lookahead = barmerge.lookahead_on)
Note that:
• This technique only works as intended for higher-timeframe requests .
• When designing a script to work specifically with HTFs, we recommend including conditions to prevent request.security() from accessing timeframes equal to or lower than the chart's timeframe, especially if you intend to publish it. In this script, we included an if structure that raises a runtime error when the requested timeframe is too small (a minimal sketch of such a guard follows these notes).
• A necessary trade-off with this approach is that the script must wait for an HTF bar's confirmation to retrieve new data on realtime bars, thus delaying its availability until the open of the subsequent HTF bar. The time elapsed during such a delay varies with each market, but it's typically relatively small.
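As a rough illustration of that guard (not the publication's exact code), with `requestedTf` standing in for whatever timeframe string the script is about to request:
string requestedTf = "1D"  // stand-in for the script's requested timeframe
if timeframe.in_seconds(requestedTf) <= timeframe.in_seconds(timeframe.period)
    runtime.error("The requested timeframe must be higher than the chart's timeframe.")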
👉 Failing to offset the function's `expression` argument while using barmerge.lookahead_on will produce historical results with lookahead bias , as it will look to the future states of historical HTF bars, retrieving values before the times at which they're available in the feed. See the `lookahead` and Future leak with `request.security()` sections in the Pine Script™ User Manual for more information.
Evolving practices
The fundamental technique outlined in this publication is currently the only reliable approach to requesting non-repainting HTF data with request.security() . It is the superior approach because it avoids the pitfalls of other methods, such as the one introduced in the `security()` revisited publication. That publication proposed using a custom `f_security()` function, which applied offsets to the `expression` and the requested result based on historical and realtime bar states. At that time, we explored techniques that didn't carry the risk of lookahead bias if misused (i.e., removing the historical offset on the `expression` while using lookahead), as requests that look ahead to the future on historical bars exhibit dangerously misleading behavior.
Despite these efforts, we've unfortunately found that the bar state method employed by `f_security()` can produce inaccurate results with inconsistent timing in some scenarios, undermining its credibility as a universal non-repainting technique. As such, we've deprecated that approach, and the Pine Script™ User Manual no longer recommends it.
█ METHOD VARIANTS
In this script, all non-repainting requests employ the same underlying technique to avoid repainting. However, we've applied variants to cater to specific use cases, as outlined below:
Variant 1
Variant 1, which the script displays using a lime plot, demonstrates a non-repainting HTF request in its simplest form, aligning with the concept explained in the "Achieving consistent timing" section above. It uses barmerge.lookahead_on and offsets the `expression` argument in request.security() by one bar to retrieve the value from the last confirmed HTF bar. For detailed information about why this works, see the Avoiding Repainting section of the User Manual's Other timeframes and data page.
Variant 2
Variant 2 ( fuchsia ) introduces a custom function, `htfSecurity()`, which wraps the request.security() function to facilitate convenient repainting control. By specifying a value for its `repaint` parameter, users can determine whether to allow repainting HTF data. When the `repaint` value is `false`, the function applies lookahead and a one-bar offset to request the last confirmed value from the specified `timeframe`. When the value is `true`, the function requests the `expression` using the default behavior of request.security() , meaning the results can fluctuate across chart bars within realtime HTF bars and repaint when the script restarts.
Note that:
• This function exclusively handles HTF requests. If the requested timeframe is not higher than the chart's, it will raise a runtime error .
• We prefer this approach since it provides optional repainting control. Sometimes, a script's calculations need to respond immediately to realtime HTF changes, which `repaint = true` allows. In other cases, such as when issuing alerts, triggering strategy commands, and more, one will typically need stable values that do not repaint, in which case `repaint = false` will produce the desired behavior.
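The published function's exact implementation lives in the script itself; purely as a hedged sketch of the behavior described above (the signature and type qualifiers here are assumptions), such a wrapper could look like:
htfSecurity(simple string symbol, simple string tf, series float expression, simple bool repaint) =>
    // repaint = false: last confirmed HTF value (one-bar offset + lookahead); repaint = true: default fluctuating behavior
    request.security(symbol, tf, repaint ? expression : expression[1], lookahead = repaint ? barmerge.lookahead_off : barmerge.lookahead_on)
float dailyClose = htfSecurity(syminfo.tickerid, "1D", close, false)  // example: non-repainting daily close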
Variant 3
Variant 3 ( white ) builds upon the same fundamental non-repainting approach used by the first two. The difference in this variant is that it applies repainting control to tuples , which one cannot pass as the `expression` argument in our `htfSecurity()` function. Tuples are handy for consolidating `request.*()` calls when a script requires several values from the same context, as one can request a single tuple from the context rather than executing multiple separate request.security() calls.
This variant applies the internal logic of our `htfSecurity()` function in the script's global scope to request a tuple containing open and `srcInput` values from a higher timeframe with repainting control. Historically, Pine Script™ did not allow the history-referencing operator [ ] when requesting tuples unless the tuple came from a function call, which limited this technique. However, updates to Pine over time have lifted this restriction, allowing us to pass tuples with historical offsets directly as the `expression` in request.security() . By offsetting all items in a tuple `expression` by one bar and using barmerge.lookahead_on , we effectively retrieve a tuple of stable, non-repainting HTF values.
Since we cannot encapsulate this method within the `htfSecurity()` function and must execute the calculations in the global scope, the script's "Repainting" input directly controls the global `offset` and `lookahead` values to ensure it behaves as intended.
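Purely as a sketch of that global-scope logic (using the input variable names described in the Inputs section below, with "1D" standing in for the script's computed HTF string):
int reqOffset = repaintInput ? 0 : 1
reqLookahead  = repaintInput ? barmerge.lookahead_off : barmerge.lookahead_on
[htfOpen, htfSrc] = request.security(syminfo.tickerid, "1D", [open[reqOffset], srcInput[reqOffset]], lookahead = reqLookahead)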
Variant 4 (Control)
Variant 4, which the script displays as a translucent orange plot, uses a default request.security() call, providing a reference point to compare the difference between a repainting request and the non-repainting variants outlined above. Whenever the script restarts its execution cycle, realtime bars become historical bars, and the request.security() call here will repaint the results on those bars.
█ Inputs
Repainting
The "Repainting" input (`repaintInput` variable) controls whether Variant 2 and Variant 3 are allowed to use fluctuating values from an unconfirmed HTF bar. If its value is `false` (default), these requests will only retrieve stable values from the last confirmed HTF bar.
Source
The "Source" input (`srcInput` variable) determines the series the script will use in the `expression` for all HTF data requests. Its default value is close .
HTF Selection
This script features two ways to specify the higher timeframe for all its data requests, which users can control with the "HTF Selection" input (`tfTypeInput` variable):
1) If its value is "Fixed TF", the script uses the timeframe value specified by the "Fixed Higher Timeframe" input (`fixedTfInput` variable). The script will raise a runtime error if the selected timeframe is not larger than the chart's.
2) If the input's value is "Multiple of chart TF", the script multiplies the value of the "Timeframe Multiple" input (`tfMultInput` variable) by the chart's timeframe.in_seconds() value, then converts the result to a valid timeframe string via timeframe.from_seconds() .
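Expressed as a minimal sketch using those input variables (not necessarily the script's exact code):
string htfString = tfTypeInput == "Fixed TF" ? fixedTfInput : timeframe.from_seconds(timeframe.in_seconds() * tfMultInput)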
Timeframe Display
This script features the option to display an "information box", i.e., a single-cell table that shows the higher timeframe the script is currently using. Users can toggle the display and determine the table's size, location, and color scheme via the inputs in the "Timeframe Display" group.
█ Outputs
This script produces the following outputs:
• It plots the results from all four of the above variants for visual comparison.
• It highlights the chart's background gray whenever a new bar starts on the higher timeframe, signifying when confirmations occur in the requested context.
• To demarcate which bars the script considers historical or realtime bars, it plots squares with contrasting colors corresponding to bar states at the bottom of the chart pane.
• It displays the higher timeframe string in a single-cell table with a user-specified size, location, and color scheme.
Look first. Then leap.
CandlestickPatternsLibrary "CandlestickPatterns"
This library provides a wide range of candlestick patterns and makes each pattern available for users to call individually. It's a comprehensive, common tool designed for traders seeking to sharpen their technical analysis, and it may help users identify key turning points in price action in financial instruments. Credit to the public technical “*All Candlestick Patterns*” indicator.
abandonedBaby(order, d1)
The "Abandoned Baby" candlestick pattern is a bullish/bearish pattern consists of three candles.
Parameters:
order (simple string) : (simple string) Pattern order type "bull" or "bear".
d1 (simple float) : (simple float) Previous candle's body percentage out of candle range. Optional argument, default is 5.
darkCloudCover(c1, n)
The "Dark Cloud Cover" is a bearish pattern consists of two candles.
Parameters:
c1 (simple bool) : (simple bool) Previous candle's body must be higher than average. Optional argument, default is true.
n (simple int) : (simple int) Length of average candle's body. Optional argument, default is 14.
doji(d0)
The "Doji" is neither bullish or bearish consists of one candles.
Parameters:
d0 (simple float) : (simple float) Current candle's body percentage out of candle range. Optional argument, default is 5.
dojiStar(order, c1, n, d0)
The "Doji Star" is a bullish/bearish pattern consists of two candles.
Parameters:
order (simple string) : (simple string) Pattern order type "bull" or "bear" .
c1 (simple bool) : (simple bool) Previous candle's body must be higher than average. Optional argument, default is true.
n (simple int) : (simple int) Length of average candle's body. Optional argument, default is 14.
d0 (simple float) : (simple float) Current candle's body percentage out of candle range. Optional argument, default is 5.
downsideTasukiGap(c2, c1, n)
The "Downside Tasuki Gap" is a bearish pattern consists of three candles.
Parameters:
c2 (simple bool) : (simple bool) Before previous candle's body must be higher than average. Optional argument, default is true.
c1 (simple bool) : (simple bool) Previous candle's body must be lower than average. Optional argument, default is true.
n (simple int) : (simple int) Length of average candle's body. Optional argument, default is 14.
dragonflyDoji(d0)
The "Dragon Fly Doji" is a bullish pattern consists of one candle.
Parameters:
d0 (simple float) : (simple float) Current candle's body percentage out of candle range. Optional argument, default is 5.
engulfing(order, c1, c0, n)
The "Engulfing" is a bullish/bearish pattern consists of two candles.
Parameters:
order (simple string) : (simple string) Pattern order type "bull" or "bear".
c1 (simple bool) : (simple bool) Previous candle's body must be lower than average. Optional argument, default is true.
c0 (simple bool) : (simple bool) Current candle's body must be higher than average. Optional argument, default is true.
n (simple int) : (simple int) Length of average candle's body. Optional argument, default is 14.
eveningDojiStar(c2, c0, d1, n)
The "Evening Doji Star" is a bearish pattern consists of three candles.
Parameters:
c2 (simple bool) : (simple bool) Before previous candle's body must be higher than average, default is true.
c0 (simple bool) : (simple bool) Current candle's body must be higher than average. Optional argument, default is true.
d1 (simple float) : (simple float) Previous candle's body percentage out of candle range. Optional argument, default is 5.
n (simple int) : (simple int) Length of average candle's body. Optional argument, default is 14.
eveningStar(c2, c1, c0, n)
The "Evening Star" is a bearish pattern consists of three candles.
Parameters:
c2 (simple bool) : (simple bool) Before previous candle's body must be higher than average. Optional argument, default is true.
c1 (simple bool) : (simple bool) Previous candle's body must be lower than average. Optional argument, default is true.
c0 (simple bool) : (simple bool) Current candle's body must be higher than average. Optional argument, default is true.
n (simple int) : (simple int) Length of average candle's body. Optional argument, default is 14.
fallingThreeMethods(c4, c3, c2, c1, c0, n)
The "Falling Three Methods" is a bearish pattern consists of five candles.
Parameters:
c4 (simple bool) : (simple bool) 5th candle ago body must be higher than average. Optional argument, default is true.
c3 (simple bool) : (simple bool) 4th candle ago body must be lower than average. Optional argument, default is true.
c2 (simple bool) : (simple bool) 3rd candle ago body must be lower than average. Optional argument, default is true.
c1 (simple bool) : (simple bool) 2nd candle ago body must be lower than average. Optional argument, default is true.
c0 (simple bool) : (simple bool) Current candle's body must be higher than average. Optional argument, default is true.
n (simple int) : (simple int) Length of average candle's body. Optional argument, default is 14.
Returns: (bool)
fallingWindow()
The "Falling Window" is a bearish pattern consists of two candles.
gravestoneDoji(d0)
The "Gravestone Doji" is a bearish pattern consists of one candle.
Parameters:
d0 (simple float) : (simple float) Current candle's body percentage out of candle range. Optional argument, default is 5.
hammer(c0, n)
The "Hammer" is a bullish pattern consists of one candle.
Parameters:
c0 (simple bool) : (simple bool) Current candle's body must be lower than average. Optional argument, default is true.
n (simple int) : (simple int) Length of average candle's body. Optional argument, default is 14.
hangingMan(c0, n)
The "Hanging Man" is a bearish pattern consists of one candle.
Parameters:
c0 (simple bool) : (simple bool) Current candle's body must be lower than average. Optional argument, default is true.
n (simple int) : (simple int) Length of average candle's body. Optional argument, default is 14.
haramiCross(order, c1, n)
The "Harami Cross" candlestick pattern is a bullish/bearish pattern consists of two candles.
Parameters:
order (string) : (simple string) Pattern order type "bull" or "bear".
c1 (simple bool) : (simple bool) Previous candle's body must be higher than average. Optional argument, default is true.
n (simple int) : (simple int) Length of average candle's body. Optional argument, default is 14.
harami(order, c1, c0, n)
The "Harami" candlestick pattern is a bullish/bearish pattern consists of two candles.
Parameters:
order (string) : (simple string) Pattern order type "bull" or "bear"
c1 (simple bool) : (simple bool) Previous candle's body must be higher than average. Optional argument, default is true.
c0 (simple bool) : (simple bool) Current candle's body must be lower than average. Optional argument, default is true.
n (simple int) : (simple int) Length of average candle's body. Optional argument, default is 14.
invertedHammer(c0, n)
The "Inverted Hammer" is a bullish pattern consists of one candle.
Parameters:
c0 (simple bool) : (simple bool) Current candle's body must be lower than average. Optional argument, default is true.
n (simple int) : (simple int) Length of average candle's body. Optional argument, default is 14.
kicking(order, c1, c0, n)
The "Kicking" candlestick pattern is a bullish/bearish pattern consists of two candles.
Parameters:
order (string) : (simple string) Pattern order type "bull" or "bear"
c1 (simple bool) : (simple bool) Previous candle's body must be higher than average. Optional argument, default is true.
c0 (simple bool) : (simple bool) Current candle's body must be higher than average. Optional argument, default is true.
n (simple int) : (simple int) Length of average candle's body. Optional argument, default is 14.
longLowerShadow(l0)
The "Long Lower Shadow" candlestick pattern is a bullish pattern consists of one candles.
Parameters:
l0 (simple float) : (simple float) Current candle's lower wick min percentage out of candle range. Optional argument, default is 75.
longUpperShadow(u0)
The "Long Upper Shadow" candlestick pattern is a bearish pattern consists of one candles.
Parameters:
u0 (simple float) : (simple float) Current candle's upper wick min percentage out of candle range. Optional argument, default is 75.
marubozuBlack(c0, n)
The "Marubozu Black" candlestick pattern is a bearish pattern consists of one candles.
Parameters:
c0 (simple bool) : (simple bool) Current candle's body must be higher than average. Optional argument, default is true.
n (simple int) : (simple int) Length of average candle's body. Optional argument, default is 14.
marubozuWhite(c0, n)
The "Marubozu White" candlestick pattern is a bullish pattern consists of one candles.
Parameters:
c0 (simple bool) : (simple bool) Current candle's body must be higher than average. Optional argument, default is true.
n (simple int) : (simple int) Length of average candle's body. Optional argument, default is 14.
morningDojiStar(c2, d1, c0, n)
The "Morning Doji Star" candlestick pattern is a bullish pattern consists of three candles.
Parameters:
c2 (simple bool) : (simple bool) Before previous candle's body must be higher than average. Optional argument, default is true.
d1 (simple float) : (simple float) Previous candle's body percentage out of candle range. Optional argument, default is 5.
c0 (simple bool) : (simple bool) Current candle's body must be higher than average. Optional argument, default is true.
n (simple int) : (simple int) Length of average candle's body. Optional argument, default is 14.
morningStar(c2, c1, c0, n)
The "Morning Star" candlestick pattern is a bullish pattern consists of three candles.
Parameters:
c2 (simple bool) : (simple bool) Before previous candle's body must be higher than average. Optional argument, default is true.
c1 (simple bool) : (simple bool) Previous candle's body must be lower than average. Optional argument, default is true.
c0 (simple bool) : (simple bool) Current candle's body must be higher than average. Optional argument, default is true.
n (simple int) : (simple int) Length of average candle's body. Optional argument, default is 14.
onNeck(c1, c0, n)
The "On Neck" candlestick pattern is a bearish pattern consists of two candles.
Parameters:
c1 (simple bool) : (simple bool) Previous candle's body must be higher than average. Optional argument, default is true.
c0 (simple bool) : (simple bool) Current candle's body must be lower than average. Optional argument, default is true.
n (simple int) : (simple int) Length of average candle's body. Optional argument, default is 14.
piercing(c1, n)
The "Piercing" candlestick pattern is a bullish pattern consists of two candles.
Parameters:
c1 (simple bool) : (simple bool) Previous candle's body must be higher than average. Optional argument, default is true.
n (simple int) : (simple int) Length of average candle's body. Optional argument, default is 14.
risingThreeMethods(c4, c3, c2, c1, c0, n)
The "Rising Three Methods" candlestick pattern is a bullish pattern consists of five candles.
Parameters:
c4 (simple bool) : (simple bool) 5th candle ago body must be higher than average. Optional argument, default is true.
c3 (simple bool) : (simple bool) 4th candle ago body must be lower than average. Optional argument, default is true.
c2 (simple bool) : (simple bool) 3rd candle ago body must be lower than average. Optional argument, default is true.
c1 (simple bool) : (simple bool) 2nd candle ago body must be lower than average. Optional argument, default is true.
c0 (simple bool) : (simple bool) Current candle's body must be higher than average. Optional argument, default is true.
n (simple int) : (simple int) Length of average candle's body. Optional argument, default is 14.
risingWindow()
The "Rising Window" candlestick pattern is a bullish pattern consists of two candle.
shootingStar(c0, n)
The "Shooting Star" candlestick pattern is a bearish pattern consists of one candle.
Parameters:
c0 (simple bool) : (simple bool) Current candle's body must be higher than average. Optional argument, default is true.
n (simple int) : (simple int) Length of average candle's body. Optional argument, default is 14.
spinningTopBlack(l0, u0)
The "Spinning Top Black" is neither bullish or bearish.
Parameters:
l0 (simple float) : (simple float) Current candle's lower wick min percentage out of candle range. Optional argument, default is 34.
u0 (simple float) : (simple float) Current candle's upper wick min percentage out of candle range. Optional argument, default is 34.
spinningTopWhite(l0, u0)
The "Spinning Top White" is neither bullish or bearish.
Parameters:
l0 (simple float) : (simple float) Current candle's lower wick min percentage out of candle range. Optional argument, default is 34.
u0 (simple float) : (simple float) Current candle's upper wick min percentage out of candle range. Optional argument, default is 34.
threeBlackCrows(c2, c1, c0, n)
The "Three Black Crows" candlestick pattern is a bearish pattern consists of three candles.
Parameters:
c2 (simple bool) : (simple bool) Before previous candle's body must be higher than average. Optional argument, default is true.
c1 (simple bool) : (simple bool) Previous candle's body must be higher than average. Optional argument, default is true.
c0 (simple bool) : (simple bool) Current candle's body must be higher than average. Optional argument, default is true.
n (simple int) : (simple int) Length of average candle's body. Optional argument, default is 14.
threeWhiteSoldiers(c2, c1, c0, n)
The "Three White Soldiers" candlestick pattern is a bullish pattern consists of three candles.
Parameters:
c2 (simple bool) : (simple bool) Before previous candle's body must be higher than average. Optional argument, default is true.
c1 (simple bool) : (simple bool) Previous candle's body must be higher than average. Optional argument, default is true.
c0 (simple bool) : (simple bool) Current candle's body must be higher than average. Optional argument, default is true.
n (simple int) : (simple int) Length of average candle's body. Optional argument, default is 14.
triStar(order, d2, d1, d0)
The "Tri Star" candlestick pattern is a bullish/bearish pattern consists of three candles.
Parameters:
order (simple string) : (simple string) Pattern order type "bull" or "bear".
d2 (simple float) : (simple float) Before previous candle's body percentage out of candle range. Optional argument, default is 5.
d1 (simple float) : (simple float) Previous candle's body percentage out of candle range. Optional argument, default is 5.
d0 (simple float) : (simple float) Current candle's body percentage out of candle range. Optional argument, default is 5.
tweezerBottom(c1, n)
The "Tweezer Bottom" candlestick pattern is a bullish pattern consists of two candles.
Parameters:
c1 (simple bool) : (simple bool) Previous candle's body must be higher than average. Optional argument, default is true.
n (simple int) : (simple int) Length of average candle's body. Optional argument, default is 14.
tweezerTop(c1, n)
The "Tweezer Top" candlestick pattern is a bearish pattern consists of two candles.
Parameters:
c1 (simple bool) : (simple bool) Previous candle's body must be higher than average. Optional argument, default is true.
n (simple int) : (simple int) Length of average candle's body. Optional argument, default is 14.
upsideTasukiGap(c2, c1, n)
The "Tri Star" candlestick pattern is a bullish pattern consists of three candles.
Parameters:
c2 (simple bool) : (simple bool) Before Previous candle's body must be higher than average. Optional argument, default is true.
c1 (simple bool) : (simple bool) Previous candle's body must be lower than average. Optional argument, default is true.
n (simple int) : (simple int) Length of average candle's body. Optional argument, default is 14.
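As a usage sketch only (the import path is a placeholder rather than the library's actual publication path, and it assumes the pattern functions return booleans, as the three-methods entries above indicate):
//@version=5
indicator("CandlestickPatterns usage sketch", overlay = true)
import AuthorName/CandlestickPatterns/1 as cp  // placeholder path; use the library's real author/version
// Flag two patterns using the documented defaults for all optional arguments.
bool bullishEngulfing = cp.engulfing("bull")
bool hammerCandle     = cp.hammer()
plotshape(bullishEngulfing, title = "Bullish engulfing", style = shape.labelup, location = location.belowbar, color = color.green, text = "BE")
plotshape(hammerCandle, title = "Hammer", style = shape.triangleup, location = location.belowbar, color = color.teal)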
Realtime FootprintThe purpose of this script is to gain a better understanding of the order flow by the footprint. To that end, i have added unusual features in addition to the standard features.
I use "Real Time 5D Profile by LucF" main engine to create basic footprint(profile type) and added some popular features and my favorites.
This script can only be used in realtime, because tradingview doesn't provide historical Bid/Ask date.
Bid/Ask date used this script are up/down ticks.
This script can only be used by time based chart (1m, 5m , 60m and daily etc)
This script use many labels and these are limited max 500, so you can't display many bars.
If you want to display foot print bars longer, turn off the unused sub-display function.
Default setting is footprint is 25 labels, IB count is 1, COT high and Ratio high is 1, COT low and Ratio low is 1 and Delta Box Ratio Volume is 1 , total 29.
plus UA , IB stripes , ladder fading mark use several labels.
///////// General Setting ///////////
Resets on Volume / Range bar
: If you want simple time-based resets, set Total Volume to 0.
Your timeframe is always the first condition. So if you set Total Volume to 1000, both conditions (volume >= 1000 and your timeframe starting a new bar) must be met (that is, a new footprint bar doesn't start at the moment total volume reaches exactly 1000).
Ticks per row and Maximum row of Bar
: 1 is the minimum size (tick). "Maximum row of Bar" decides the number of rows used in one footprint. 1 row is created from 1 label, so you need to reduce this number to display many footprints (the label maximum is 500).
Volume Filter and For Calculation and Display
: "Volume Filter" decide minimum size of using volume for this script.
"For Calculation and Display" is used to convert volume to an integer.
This script only use integer to make profile look better (I contained Bid number and Ask number in one row( one label) to saving labels. This require to make no difference in width by the number of digits and this script corresponds integers from 0 to 3 digits).
ex) Symbol average volume size is from 0.0001 to 0.001. You decide only use Volume >= 0.0005 by "Volume Filter".
Next, you convert volume to integer, by setting "For Calculation and Display" is 1000 (0.0005 * 1000 = 5).
If 0.00052 → 5.2 → 5, 0.00058 → 5.8 → 6 (Decimal numbers are rounded off)
This integer is used to all calculation in this script.
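A minimal sketch of that filter-and-scale step (the input names here are stand-ins, not the script's actual identifiers):
float minVolumeInput = 0.0005   // stand-in for "Volume Filter"
float scaleInput     = 10000    // stand-in for "For Calculation and Display"
float usableVolume   = volume >= minVolumeInput ? volume : 0.0
int   scaledVolume   = int(math.round(usableVolume * scaleInput))   // e.g. 0.00058 * 10000 = 5.8, rounded to 6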
//////// Main Display ///////
Footprint, Total, Row Delta, Diagonal Delta and Profile
: "Footprint" display Ask and Bid per row. "Total" display Ask + Bid per row.
"Row Delta" display Ask - Bid per row. "Diagonal Delta" display Ask(row N) - Bid(row N -1) per row.
Profile display Total Volume(Ask + Bid) per row by using Block. Profile Block coloring are decided by Row Delta value(default: positive Row Delta (Ask > Bid) is greenish colors and negative Row Delta (Ask < Bid) is reddish colors.)
Volume per Profile Block, Row Imbalance Ratio and Delta Bull/Bear/Neutral Colors
: "Volume per Profile Block" decide one block contain how many total volume.
ex) When you set 20, Total volume 70 display 3 block.
The maximum number of blocks that can be used per low is 20.
So if you set 20, Total volume 400 is 20 blocks. total volume 800 is 20 blocks too.
"Row Imbalance Ratio" decide block coloring. The row imbalance is that the difference between Ask and Bid (row delta) is large.
default is x3, x2 and x1. The larger the difference, the brighter the color.
ex) Ask 30 Bid 10 is light green. Ask 20 Bid 10 is green. Ask 11 Bid 10 is dark green.
Ask 0 Bid 1 is light red. Ask 1 Bid 2 is red. ask 30 Bid 59 is dark green.
Ask 10 Bid 10 is neutral color(gray)
The profile coloring is reflected in the same row's other elements (Ask, Bid, Total and Delta) too.
This is because one label can only use one text color.
/////// Sub Display ///////
Delta, total and Commitment of Traders
: "Delta" is total Ask - total Bid in one footprint bar. Total is total Ask + total Bid in one footprint bar.
"Commitment of traders" is variation of "Delta". COT High is reset to 0 when current highest is touched. COT Low is opposite.
Basic concept of Delta is to compare price with Delta. Ordinary, when price move up, delta is positive. Price move down is negative delta.
This is because market orders move price and market orders are counted by Delta (although this description is not exactly correct).
But, sometimes prices do not move even though many market orders are putting pressure on price , or conversely, price move strongly without many market orders.
This is key point. Big player absorb market orders by iceberg order(Subdivide large orders and pretend to be small limit orders.
Small limit orders look weak in the order book, but they are added each time you fill, so they are more powerful than they look.), so price don't move.
On the other hand, when the price is moving easily, smart players may be aiming to attract and counterattack to a better price for them.
It's more of a sport than science, and there's always no right response. Pay attention to the relationship between price, volume and delta.
ex) If COT Low is large negative value, it means many sell market orders is coming, but iceberg order is absorbing their attack at limit order.
you should not do buy entry, only this clue. but this is one of the hints.
"Delta, Box Ratio and Total texts is contained same label and its color are "Delta" coloring. Positive Delta is Delta Bull color(green),Negative Delta is Delta Bear Color
and Delta = 0 is Neutral Color(gray). When Delta direction and price direction are opposite is Delta Divergence Color(yellow).
I didn't add the cumulative volume delta because I prefer to display the CVD line on the price chart rather than the number.
Box Ratio , Box Ratio Divisor and Heavy Box Ratio Ratio
: This is not an ordinary footprint feature, but I like the concept, so I added it.
The Box Ratio, by Richard W. Arms, is a simple but useful tool. The calculation is total volume (one bar) divided by the bar range (highest - lowest).
When bulls and bears are fighting fiercely, this number becomes large, and an important price move often follows.
I build an average BR from something like a 5-period SMA, and if the current BR exceeds the average BR x (Heavy Box Ratio Ratio), the BR box mark is filled.
Box Ratio Divisor is used to make the display look better (BR multiplied by the Box Ratio Divisor is rounded and displayed as an integer).
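A rough sketch of the Box Ratio idea, computed here on the chart's bars for simplicity (the input names are stand-ins, not the script's actual identifiers):
float heavyRatioInput = 2.0                                   // stand-in for "Heavy Box Ratio Ratio"
float brDivisorInput  = 1.0                                   // stand-in for "Box Ratio Divisor"
float barRange        = math.max(high - low, syminfo.mintick)
float boxRatio        = volume / barRange
float avgBoxRatio     = ta.sma(boxRatio, 5)
bool  heavyBoxRatio   = boxRatio > avgBoxRatio * heavyRatioInput   // condition for filling the BR box mark
int   displayedBR     = int(math.round(boxRatio * brDivisorInput))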
Diagonal Imbalance Count , D IB Mark and D IB Stripes
: Diagonal Imbalance is defined by "Diagonal Imbalance Ratio".
ex) You set 2. When Ask(row N) 30 Bid(row N -1)10, it's 30 > 10*2, so positive Diagonal Imbalance.
When Ask(row N) 4 Bid(row N -1)9, it's 4*2 < 9, so negative Diagonal Imbalance.
The comparison is strict (it does not use equals) so that Ask(row N) 0 vs. Bid(row N - 1) 0 does not become a Diagonal Imbalance.
Ask(row N) 0 Bid(row N - 1) 0: 0 = 0*2, not a Diagonal Imbalance. Ask(row N) 10 Bid(row N - 1) 5: 10 = 5*2, not a Diagonal Imbalance.
"D IB Mark" underlines the Ask or Bid number on the dominant side (the winner of the Diagonal Imbalance comparison).
"Diagonal Imbalance Count" compares Ask-side D IB Marks to Bid-side D IB Marks within one footprint.
Its coloring depends on which side is more aggressive (has more IB Marks), and when the aggressive direction and the price direction are opposite, the Delta Divergence color (yellow) is used.
"D IB Stripes" adds further emphasis with an arrow mark when a D IB Mark appears on the same side for three consecutive rows; the arrow is added at the third row.
Unfinished Auction, Ratio Bounds and Ladder fading Mark
: "Unfinished Auction" emphasize highest or lowest row which has both Ask and Bid, by Delta Divergence Color(yellow) XXXXXX mark.
Unfinished Auction sometimes has magnet effect, price may touch and breakout at UA side in the future.
This concept is famous as profit taking target than entry decision.
But, I'm interested in the case that Big player make fake breakout at UA side and trapped retail traders, and then do reversal with retail traders stop-loss hunt.
Anyway, it's not stand alone signal.
"Ratio Bounds" gauge decrease of pressure at extreme price. Ratio Bounds High is number which second highest ask is divided by highest ask.
Ratio Bounds Low is number which second lowest bid is divided by lowest bid. The larger the number, the less momentum the price has.
ex)first footprint bar has Ratio Bounds Low 2, second footprint bar has RBL 4, third footprint bar has RBL 20.
This indicates that the bear's power is gradually diminishing.
"Ladder fading mark" emphasizes the decrease of the value in 3 consecutive row at extreme price. I added two type Marks.
Ask/Bid type(triangle Mark) is Ask/Bid values are decreasing of three consecutive row at extreme price.
Row Imbalance type(Diamond Mark) are row Imbalance values are decreasing of three consecutive row at extreme price.
ex)Third lowest Bid 40, second lowest Bid 10 and lowest Bid 5 have triangle up Mark. That is bear's power is gradually diminishing.
(This Mark only check Bid value at lowest price and Ask value at highest price).
Third highest row delta + 60, second highest row delta + 5, highest delta - 20 have diamond Mark. That is Bull's power is gradually diminishing.
Sub display use Delta colors at bottom of Sub display section.
////// Candle & POC /////////
candle and POC
: Ordinary, "POC" Point of Control is row of largest total volume, but this script'POC is volume weighted average.
This is because the regular POC was visually displayed by the profile ,and I was influenced LucF's ideas.
POC coloring is decided in relation to the previous POC. When current POC is higher than previous POC, color is UP Bar Color(green).
In the opposite case, Down Bar color is used.
POC Divergence Color is used when Current POC is up but current bar close is lower than open (Down price Bar),or in the opposite case.
POC coloring has option also highlight background by Delta Divergence Color(yellow). but bg color is displayed at your time frame current price bar not current footprint bar.
That covers the basic explanation.
I've added some images to help convey the basic ideas.
Delta Volume Candles [LucF]█ OVERVIEW
This indicator plots on-chart volume delta information using candles that can replace your normal candles, tops and bottoms appended to normal candles, optional MAs of those tops and bottoms levels, a divergence channel and a chart background. The indicator calculates volume delta using intrabar analysis, meaning that it uses the lower timeframe bars constituting each chart bar.
█ CONCEPTS
Volume Delta
The volume delta concept divides a bar's volume in "up" and "down" volumes. The delta is calculated by subtracting down volume from up volume. Many calculation techniques exist to isolate up and down volume within a bar. The simplest use the polarity of interbar price changes to assign their volume to up or down slots, e.g., On Balance Volume or the Klinger Oscillator . Others such as Chaikin Money Flow use assumptions based on a bar's OHLC values. The most precise calculation method uses tick data and assigns the volume of each tick to the up or down slot depending on whether the transaction occurs at the bid or ask price. While this technique is ideal, it requires huge amounts of data on historical bars, which considerably limits the historical depth of charts and the number of symbols for which tick data is available. Furthermore, historical tick data is not yet available on TradingView.
This indicator uses intrabar analysis to achieve a compromise between the simplest and most precise methods of calculating volume delta. It is currently the most precise method usable on TradingView charts. TradingView's Volume Profile built-in indicators use it, as do the CVD - Cumulative Volume Delta Candles and CVD - Cumulative Volume Delta (Chart) indicators published from the TradingView account . My Delta Volume Channels and Volume Delta Columns Pro indicators also use intrabar analysis. Other volume delta indicators such as my Realtime 5D Profile use realtime chart updates to calculate volume delta without intrabar analysis, but that type of indicator only works in real time; they cannot calculate on historical bars.
This is the logic I use to determine the polarity of intrabars, which determines the up or down slot where its volume is added:
• If the intrabar's open and close values are different, their relative position is used.
• If the intrabar's open and close values are the same, the difference between the intrabar's close and the previous intrabar's close is used.
• As a last resort, when there is no movement during an intrabar, and it closes at the same price as the previous intrabar, the last known polarity is used.
Once all intrabars making up a chart bar have been analyzed and the up or down property of each intrabar's volume determined, the up volumes are added, and the down volumes subtracted. The resulting value is volume delta for that chart bar, which can be used as an estimate of the buying/selling pressure on an instrument. Not all markets have volume information. Without it, this indicator is useless.
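As a rough sketch of those polarity rules (not the indicator's actual code), a function meant to be evaluated on the lower-timeframe feed could look like this; each intrabar's volume would then be added to the up or down slot according to the sign it returns, for example through request.security_lower_tf():
intrabarPolarity() =>
    var int lastPolarity = 1
    int p = close > open ? 1 : close < open ? -1 : close > close[1] ? 1 : close < close[1] ? -1 : lastPolarity
    lastPolarity := p
    p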
Intrabar analysis
Intrabars are chart bars at a lower timeframe than the chart's. The timeframe used to access intrabars determines the number of intrabars accessible for each chart bar. On a 1H chart, each chart bar of an active market will, for example, usually contain 60 bars at the lower timeframe of 1min, provided there was market activity during each minute of the hour.
This indicator automatically calculates an appropriate lower timeframe using the chart's timeframe and the settings you use in the script's "Intrabars" section of the inputs. As it can access lower timeframes as small as seconds when available, the indicator can be used on charts at relatively small timeframes such as 1min, provided the market is active enough to produce bars at second timeframes.
The quantity of intrabars analyzed in each chart bar determines:
• The precision of calculations (more intrabars yield more precise results).
• The chart coverage of calculations (there is a 100K limit to the quantity of intrabars that can be analyzed on any chart,
so the more intrabars you analyze per chart bar, the fewer chart bars the indicator can calculate).
The information box displayed at the bottom right of the chart shows the lower timeframe used for intrabars, as well as the average number of intrabars detected for chart bars and statistics on chart coverage.
Balances
This indicator calculates five balances from volume delta values. The balances are oscillators with a zero centerline; positive values are bullish, and negative values are bearish. It is important to understand the balances as they can be used to:
• Color candle bodies.
• Calculate body and top and bottom divergences.
• Color an EMA channel.
• Color the chart's background.
• Configure markers and alerts.
The five balances are:
1 — Bar Balance : This is the only balance using instant values; it is simply the subtraction of the down volume from the up volume on the bar, so the instant volume delta for that bar.
2 — Average Balance : Calculates a distinct EMA for both the up and down volumes, and subtracts the down EMA from the up EMA.
The result is akin to MACD's histogram because it is the subtraction of two moving averages.
3 — Momentum Balance : Starts by calculating, separately for both up and down volumes, the difference between the same EMAs used in "Average Balance" and
an SMA of twice the period used for the "Average Balance" EMAs. The difference for the up side is subtracted from the difference for the down side,
and an RSI of that value is calculated and brought over the −50/+50 scale.
4 — Relative Balance : The reference values used in the calculation are the up and down EMAs used in the "Average Balance".
From those, we calculate two intermediate values using how much the instant up and down volumes on the bar exceed their respective EMA — but with a twist.
If the bar's up volume does not exceed the EMA of up volume, a zero value is used. The same goes for the down volume with the EMA of down volume.
Once we have our two intermediate values for the up and down volumes exceeding their respective MA, we subtract them. The final value is an ALMA of that subtraction.
The rationale behind using zero values when the bar's up/down volume does not exceed its EMA is to only take into account the more significant volume.
If both instant volume values exceed their MA, then the difference between the two is the signal's value.
The signal is called "relative" because the intermediate values are the difference between the instant up/down volumes and their respective MA.
This balance flatlines when the bar's up/down volumes do not exceed their EMAs, which makes it useful to spot areas where trader interest dwindles, such as consolidations.
The smaller the period of the final value's ALMA, the more easily it will flatline. These flat zones should be considered no-trade zones.
5 — Percent Balance : This balance is the ALMA of the ratio of the "Bar Balance" over the total volume for that bar.
From the balances and marker conditions, two more values are calculated:
1 — Marker Bias : This sums the up/down (+1/‒1) occurrences of the markers 1 to 4 over a period you define, so it ranges from −4 to +4, times the period.
Its calculation will depend on the modes used to calculate markers 3 and 4.
2 — Combined Balances : This is the sum of the bull/bear (+1/−1) states of each of the five balances, so it ranges from −5 to +5.
The periods for all of these balances can be configured in the "Periods" section at the bottom of the script's inputs. As you cannot see the balances on the chart, you can use my Volume Delta Columns Pro indicator in a pane; it can plot the same balances, so you will be able to analyze them.
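As a partial sketch of the simpler balances (`upVolume`, `dnVolume` and `lenInput` are stand-ins; the indicator derives the real up/down volumes from intrabar analysis):
float upVolume   = 0.6 * volume                                             // placeholder split, for illustration only
float dnVolume   = volume - upVolume
int   lenInput   = 20
float barBalance = upVolume - dnVolume                                      // balance 1: instant volume delta
float avgBalance = ta.ema(upVolume, lenInput) - ta.ema(dnVolume, lenInput)  // balance 2: difference of two EMAs
float pctBalance = ta.alma(volume > 0 ? barBalance / volume : 0.0, lenInput, 0.85, 6)  // balance 5: ALMA of delta over total volume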
Divergences
In the context of this indicator, a divergence is any bar where the bear/bull state of a balance (above/below its zero centerline) diverges from the polarity of a chart bar. No directional bias is assigned to divergences when they occur. Candle bodies and tops/bottoms can each be colored differently on divergences detected from distinct balances.
Divergence Channel
The divergence channel is the space between two levels (by default, the bar's open and close ) saved when divergences occur. When price (by default the close ) has breached a channel and a new divergence occurs, a new channel is created. Until that new channel is breached, bars where additional divergences occur will expand the channel's levels if the bar's price points are outside the channel.
Prices breaches of the divergence channel will change its state. Divergence channels can be in one of three different states:
• Bull (green): Price has breached the channel to the upside.
• Bear (red): Price has breached the channel to the downside.
• Neutral (gray): The channel has not yet been breached.
█ HOW TO USE THE INDICATOR
I do not make videos to explain how to use my indicators. I do, however, try hard to include in their description everything one needs to understand what they do. From there, it's up to you to explore and figure out if they can be useful in your trading practice. Communicating in videos what this description and the script's tooltips contain would make for very long videos that would likely exceed the attention span of most people who find this description too long. There is no quick way to understand an indicator such as this one because it uses many different concepts and has quite a bit of settings one can use to modify its visuals and behavior — thus how one uses it. I will happily answer questions on the inner workings of the indicator, but I do not answer questions like "How do I trade using this indicator?" A useful answer to that question would require an in-depth analysis of who you are, your trading methodology and objectives, which I do not have time for. I do not teach trading.
Start by loading the indicator on an active chart containing volume information. See here if you need help.
The default configuration displays:
• Normal candles where the bodies are only colored if the bar's volume has increased since the last bar.
If you want to use this indicator's candles, you may want to disable your chart's candles by clicking the eye icon to the right of the symbol's name in the top left of the chart.
• A top or bottom appended to the normal candles. It represents the difference between up and down volume for that bar
and is positioned at the top or bottom, depending on its polarity. If up volume is greater than down volume, a top is displayed. If down volume is greater, a bottom is plotted.
The size of tops and bottoms is determined by calculating a factor which is the proportion of volume delta over the bar's total volume.
That factor is then used to calculate the top or bottom size relative to a baseline of the average candle body size of the last 100 bars.
• An information box in the bottom right displaying intrabar and chart coverage information.
• A light red background when the intrabar volume differs from the chart's volume by more than 1%.
The script's inputs contain tooltips explaining most of the fields. I will not repeat them here. Following is a brief description of each section of the indicator's inputs which will give you an idea of what the indicator can do:
Normal Candles is where you configure the replacement candles plotted by the script. You can choose from different coloring schemes for their bodies and specify a unique color for bodies where a divergence calculated using the method you choose occurs.
Volume Tops & Bottoms is where you configure the display of tops and bottoms, and their EMAs. The EMAs are calculated from the high point of tops and the low point of bottoms. They can act as a channel to evaluate price, and you can choose to color the channel using a gradient reflecting the advances/declines in the balance of your choice.
Divergence Channel is where you set up the appearance and behavior of the divergence channel. These areas represent levels where price and volume delta information do not converge. They can be interpreted as regions with no clear direction from where one will look for breaches. You can configure the channel to take into account one or both types of divergences you have configured for candle bodies and tops/bottoms.
Background allows you to configure a gradient background color that reflects the advances/declines in the balance of your choice. You can use this to provide context to the volume delta values from bars. You can also control the background color displayed on volume discrepancies between the intrabar and the chart's timeframe.
Intrabars is where you choose the calculation mode determining the lower timeframe used to access intrabars. The indicator uses the chart's timeframe and the type of market you are on to calculate the lower timeframe. Your setting there should reflect which compromise you prefer between the precision of calculations and chart coverage. This is also where you control the display of the information box in the lower right corner of the chart.
Markers allows you to control the plotting of chart markers on different conditions. Their configuration determines when alerts generated from the indicator will fire. Note that in order to generate alerts from this script, they must be created from your chart. See this Help Center page to learn how. Only the last 500 markers will be visible on the chart, but this will not affect the generation of alerts.
Periods is where you configure the periods for the balances and the EMAs used in the indicator.
The raw values calculated by this script can be inspected using the Data Window.
█ INTERPRETATION
Rightly or wrongly, volume delta is considered by many a useful complement to the interpretation of price action. I use it extensively in an attempt to find convergence between my read of volume delta and price movement — not so much as a predictor of future price movement. No system or person can predict the future. Accordingly, I consider people who speak or act as if they know the future with certainty to be dangerous to themselves and others; they are charlatans, imprudent or blissfully ignorant.
I try to avoid elaborate volume delta interpretation schemes involving too many variables and prefer to keep things simple:
• Trends that have more chances of continuing should be accompanied by VD of the same polarity.
In trends, I am looking for "slow and steady". I work from the assumption that traders and systems often overreact, which translates into unproductive volatility.
Wild trends are more susceptible to overreactions.
• I prefer steady VD values over wildly increasing ones, as large VD increases often come with increased price volatility, which can backfire.
Large VD values caused by stopping volume will also often occur on trend reversals with abnormally high candles.
• Prices escaping divergence channels may be leading a trend in that direction, although there is no telling how long that trend will last; could be just a few bars or hundreds.
When price is in a channel, shifts in VD balances can sometimes give us an idea of the direction where price has the most chance of breaking.
• Dwindling VD will often indicate trend exhaustion and predate reversals by many bars, but the problem is that mere pauses in a trend will often produce the same behavior in VD.
I think it is too perilous to infer rigidly from VD decreases.
Divergence Channel
Here I have configured the divergence channels to be visible. First, I set the bodies to display divergences on the default Bar Balance. They are indicated by yellow bodies. Then I activated the divergence channels by choosing to draw levels on body divergences and checked the "Fill" checkbox to fill the channel with the same color as the levels. The divergence channel is best understood as a direction-less area from where a breach can be acted on if other variables converge with the breach's direction:
Tops and Bottoms EMAs
I find these EMAs rather interesting. They have no equivalent elsewhere, as they are calculated from the top and bottom values this indicator plots. The only similarity they have with volume-weighted MAs, including VWAP, is that they use price and volume. This indicator's Tops and Bottoms EMAs, however, use the price and volume delta. While the channel differs from other channels in how it is calculated, it can be used like others, as a baseline from which to evaluate price movement or, alternatively, as stop levels. Remember that you can change the period used for the EMAs in the "Periods" section of the inputs.
This chart shows the EMAs in action, filled with a gradient representing the advances/decline from the Momentum balance. Notice the anomaly in the chart's latest bars where the Momentum balance gradient has been indicating a bullish bias for some time, during which price was mostly below the EMAs. Price has just broken above the channel on positive VD. My interpretation of this situation would be that it is a risky opportunity for a long trade in the larger context where the market has been in a downtrend since the 5th. Intrepid traders choosing to enter here could do so with a "make or break" tight stop that will minimize their losses should the market continue its downtrend while hopefully preserving the potential upside of price continuing on the longer-term uptrend prevalent since the 28th:
█ NOTES
Volume
If you use indicators such as this one, which depend on volume information, it is important to realize that the volume data they consume comes from data feeds, and that all data feeds are NOT created equal. Those who build the feeds we use must decide which transactions they tally and how they tally them in each feed, and those decisions shape the nature of our volume data. My Volume X-ray publication discusses some of the reasons why volume information from different timeframes, brokers/exchanges or sectors may vary considerably; I encourage you to read it. When this indicator detects volume discrepancies between the timeframe used to access intrabars and the chart's timeframe, it warns you with a background color, precisely to make these feed variations visible. Don't take things for granted, and understand that the quality of a feed's volume information determines the quality of the results this indicator calculates.
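To illustrate the kind of check involved, this is a minimal Pine Script™ v5 sketch, not this script's actual implementation. It compares a chart bar's volume to the sum of its intrabar volumes and colors the background when they diverge by more than a tolerance; the fixed intrabar timeframe and the tolerance are assumptions made for the example.

```pine
//@version=5
indicator("Intrabar vs. chart volume check (sketch)")

string lowerTf      = input.timeframe("1", "Intrabar timeframe")       // hypothetical fixed intrabar TF
float  tolerancePct = input.float(1.0, "Tolerance (%)", minval = 0.0)  // acceptable discrepancy

// Sum the volume reported by the intrabar feed for the current chart bar.
array<float> ltfVolumes = request.security_lower_tf(syminfo.tickerid, lowerTf, volume)
float intrabarVolume = nz(array.sum(ltfVolumes))

// Compare to the chart feed's volume for the same bar.
float discrepancyPct = volume > 0 ? math.abs(volume - intrabarVolume) / volume * 100 : na

// Flag discrepancies beyond the tolerance with an orange background.
bgcolor(discrepancyPct > tolerancePct ? color.new(color.orange, 80) : na, title = "Volume discrepancy")
plot(volume, "Chart volume", color.gray, 1, plot.style_columns)
plot(intrabarVolume, "Intrabar volume", color.teal)
```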
Markets as ecosystems
I believe it is perilous to think that behavioral patterns you discover in one market through the lens of this or any other indicator will necessarily port to other markets. While this may sometimes be the case, it will often not. Why is that? Because each market is its own ecosystem. As cities do, all markets share some common characteristics, but they also all have their idiosyncrasies. A proportion of a city's inhabitants is always composed of outsiders who come and go, but a core population of regulars and systems is usually the force that actually defines most of the city's observable characteristics. I believe markets work somewhat the same way; they may look the same, but if you live there for a while and pay attention, you will notice the idiosyncrasies. Some things that work in some markets will, accordingly, not work in others. Please keep that in mind when you draw conclusions.
On Up/Down or Buy/Sell Volume
Buying or selling volume are misnomers, as every unit of volume transacted is both bought and sold by two different traders. While this does not keep me from using the terms, there is no such thing as “buy only” or “sell only” volume. Trader lingo is riddled with peculiarities. Without access to order book information, traders work with the assumption that when price moves up during a bar, there was more buying pressure than selling pressure, just as when buy market orders take out limit ask orders in the order book at successively higher levels. The built-in volume indicator available on TradingView uses this logic to color the volume columns green or red. While this script’s calculations are more precise because it analyses intrabars to calculate its information, it uses pretty much the same imperfect logic. Until Pine scripts can have access to how much volume was transacted at the bid/ask prices, our volume delta calculations will remain a mere proxy.
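As an illustration of that proxy logic, here is a minimal Pine Script™ v5 sketch, not this script's actual code, that tallies "up" and "down" volume from intrabars using the same polarity assumption. The intrabar timeframe is a fixed input here for simplicity, whereas a production script would typically select it from the chart's timeframe.

```pine
//@version=5
indicator("Intrabar volume delta proxy (sketch)")

// Hypothetical fixed intrabar timeframe.
string lowerTf = input.timeframe("1", "Intrabar timeframe")

// One array element per intrabar of the current chart bar.
array<float> ltfOpens   = request.security_lower_tf(syminfo.tickerid, lowerTf, open)
array<float> ltfCloses  = request.security_lower_tf(syminfo.tickerid, lowerTf, close)
array<float> ltfVolumes = request.security_lower_tf(syminfo.tickerid, lowerTf, volume)

// Attribute each intrabar's volume to the "up" or "down" side based on its direction.
float upVolume = 0.0
float dnVolume = 0.0
if array.size(ltfVolumes) > 0
    for i = 0 to array.size(ltfVolumes) - 1
        float chg = array.get(ltfCloses, i) - array.get(ltfOpens, i)
        upVolume += chg > 0 ? array.get(ltfVolumes, i) : 0.0
        dnVolume += chg < 0 ? array.get(ltfVolumes, i) : 0.0

// The bar's volume delta: positive when "up" volume dominates.
plot(upVolume - dnVolume, "Volume delta", upVolume >= dnVolume ? color.teal : color.maroon, 1, plot.style_columns)
```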
Repainting
• The values calculated on the realtime bar will update as new information comes from the feed.
• Historical values may recalculate if the historical feed is updated or when calculations start from a new point in history.
• Markers and alerts will not repaint, as they only occur on a bar's close. Keep this in mind when viewing markers on historical bars, where one could understandably but incorrectly assume they appear at the bar's open (a minimal sketch of this close-gating follows these notes).
To learn more about repainting, see the Pine Script™ User Manual's page on the subject.
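As a generic illustration of why markers gated on the bar's close do not repaint, here is a minimal Pine Script™ v5 sketch using a hypothetical condition; it is unrelated to this script's actual marker logic.

```pine
//@version=5
indicator("Confirmed-bar marker (sketch)", overlay = true)

// Hypothetical marker condition; substitute whatever condition you track.
bool condition = ta.crossover(close, ta.ema(close, 20))

// Gate on the bar's closing update so the marker cannot appear and then vanish intrabar.
bool confirmed = condition and barstate.isconfirmed
plotshape(confirmed, "Marker", shape.triangleup, location.belowbar, color.teal)

// For alerts, the same effect is usually achieved by selecting the "Once Per Bar Close"
// trigger when creating the alert, rather than gating the condition in code.
```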
Superfluity
In "The Bed of Procrustes", Nassim Nicholas Taleb writes: To bankrupt a fool, give him information . This indicator can display a lot of information. The inevitable adaptation period you will need to figure out how to use it should help you eliminate all the visuals you do not need. The more you eliminate, the easier it will be to focus on those that are the most useful to your trading practice. Don't be a fool.
█ THANKS
Thanks to alexgrover for his Dekidaka-Ashi indicator. His volume plots on candles were the inspiration for my top/bottom plots.
Kudos to PineCoders for their libraries. I use two of them in this script: Time and lower_tf.
The first versions of this script used functionality that I would not have known about were it not for these two guys:
— A guy called Kuan who commented on a Backtest Rookies presentation of their Volume Profile indicator.
— theheirophant, my partner in the exploration of the sometimes weird abysses of request.security()’s behavior at lower timeframes.