KRXNameMapper
Library "KRXNameMapper"
Maps Korea Exchange (KRX) ticker codes to company names.
getCompanyName(code)
Looks up the company name for the given KRX ticker code.
Parameters:
code (string)
Returns: the company name for the code (string).
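A minimal usage sketch; the import path is a placeholder, and the returned type is assumed to be a string ("005930" is Samsung Electronics' KRX code):
//@version=6
indicator("KRX Name Demo", overlay=true)
import YourUsername/KRXNameMapper/1 as krx
// Look up the company name for a KRX code ("005930" = Samsung Electronics)
name = krx.getCompanyName("005930")
if barstate.islast
    label.new(bar_index, high, name)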
ma_library
Title: Library: Advanced Moving Average Collection
Description:
This library provides a comprehensive set of Moving Average algorithms, ranging from standard filters (SMA, EMA) to adaptive trendlines (KAMA, FRAMA) and experimental smoothers (ALMA, JMA).
It has been fully optimized for Pine Script v6, ensuring efficient execution and strict robustness against na (missing) values. Unlike standard implementations that propagate na values, these functions dynamically recalculate weights to maintain continuity in disjointed datasets.
🧩 Library Features
Robustness: Non-recursive filters ignore na values within the lookback window; recursive filters maintain state to prevent calculation breaks (see the sketch after this list).
Optimization: Logic updated to v6 standards, utilizing efficient loops and var persistence.
Standardization: All functions utilize a consistent f_ prefix and standardized parameters for easy integration.
Scope: Contains over 35 different smoothing algorithms.
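For illustration, a minimal sketch of the na-skipping approach described above (not the library's exact code): the mean is taken over valid bars only, so the divisor shrinks with the number of na values in the window.
//@version=6
indicator("na-robust SMA sketch")
// Illustrative only: average the non-na values in the window,
// dividing by the count of valid bars instead of the full length.
f_na_robust_sma(float src, simple int length) =>
    float sum = 0.0
    int   cnt = 0
    for i = 0 to length - 1
        if not na(src[i])
            sum += src[i]
            cnt += 1
    cnt > 0 ? sum / cnt : na
plot(f_na_robust_sma(close, 20), "na-robust SMA")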
📊 Input Requirements
Source (src): The data series to smooth (usually close, hl2, etc.).
Length (length): The lookback period (must be a simple int).
Specifics: Some adaptive MAs (like f_evwma) require volume data, while others (like f_alma) require offset/sigma settings.
🛠️ Integration Example
You can import the library and call functions directly, or use the built-in f_selector to create dynamic inputs for your users.
//@version=6
indicator("MA Library Demo", overlay=true)
// Import the library
import YourUsername/ma_/1 as ma
// --- Example 1: Direct Function Call ---
// Calculate the Jurik Moving Average (JMA)
float jma_val = ma.f_jma(close, 14)
plot(jma_val, "JMA", color=color.yellow, linewidth=2)
// --- Example 2: User Selector ---
// Allowing the user to choose the MA type via settings
string selected_type = input.string("ALMA", "MA Type", options=["SMA", "EMA", "ALMA", "JMA", "HMA"])  // options list restored for illustration; include the names f_selector supports
int length = input.int(20, "Length")
// Using the generic selector function
float dynamic_ma = ma.f_selector(close, length, selected_type)
plot(dynamic_ma, "Dynamic MA", color=color.aqua)
📋 Included Algorithms
The following methods are available (prefixed with f_):
Standard: SMA, EMA, WMA, VWMA, RMA
Adaptive: KAMA (Kaufman), FRAMA (Fractal), VIDYA (Chande/VARMA), VAMA (Vol. Adjusted)
Low Lag: ZLEMA (Zero Lag), HMA (Hull), JMA (Jurik), DEMA, TEMA
Statistical/Math: LSMA (Least Squares), GMMA (Geometric Mean), FLSMA (Fisher Least Squares)
Advanced/Exotic:
ALMA (Arnaud Legoux)
EIT (Ehlers Instantaneous Trend)
ESD (Ehlers Simple Decycler)
AHMA (Ahrens)
BMF (Blackman Filter)
CMA (Corrective)
DSWF (Damped Sine Wave)
EVWMA (Elastic Vol. Weighted)
HCF (Hybrid Convolution)
LMA (Leo)
MD (McGinley Dynamic)
MF (Modular Filter)
MM (Moving Median)
QMA (Quick)
RPMA (Repulsion)
RSRMA (Right Sided Ricker)
SMMA (Smoothed)
SSMA (Shapeshifting)
SWMA (Sine Weighted)
TMA (Triangular)
TSF (True Strength Force)
VBMA (Variable Band)
Jags_Lib
Library "Jags_Lib"
Candle-analysis utilities: complex rejection detection, HCS formation, and IB liquidity-take checks.
ComplexRejBear(_can)
Returns whether the candle is a rejection based on complex rules within the library
Parameters:
_can (int)
Returns: true/false
ComplexRejBull(_can)
Returns whether the candle is a rejection based on complex rules within the library
Parameters:
_can (int)
Returns: true/false
bullHCS(_can, _lookback)
Returns whether the current candle is creating a bull HCS
Parameters:
_can (int)
_lookback (int)
Returns: integer value representing the first candle in the HCS. 0 = no HCS
bearHCS(_can, _lookback)
Returns whether the current candle is creating a bear HCS
Parameters:
_can (int)
_lookback (int)
Returns: integer value representing the first candle in the HCS. 0 = no HCS
bullIBLQtake(_can, _lookback)
Returns whether the current candle is a BullRej that has taken IB liquidity in the last _lookback candles
Parameters:
_can (int)
_lookback (int)
Returns: integer value representing the first candle in the HCS. 0 = no HCS
bearIBLQtake(_can, _lookback)
Returns whether the current candle is a BearRej that has taken IB liquidity in the last _lookback candles
Parameters:
_can (int)
_lookback (int)
Returns: integer value representing the first candle in the HCS. 0 = no HCS
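A minimal usage sketch; the import path is a placeholder, and treating `_can = 0` as the current candle is an assumption:
//@version=6
indicator("Jags_Lib Demo", overlay=true)
import YourUsername/Jags_Lib/1 as jags
// _can = 0 assumed to index the current candle; a 0 return means no HCS
int bullStart = jags.bullHCS(0, 20)
plotshape(bullStart > 0, "Bull HCS", shape.triangleup, location.belowbar, color.green)
plotshape(jags.ComplexRejBull(0), "Bull Rejection", shape.circle, location.belowbar, color.lime)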
Vantage_News
Vantage News is a Pine Script library that provides pre-market economic event filtering defaults intended for strategies that trade YM futures. It determines a default for whether trading should be blocked, delayed, or allowed on any given day.
Core Concept
News events are pre-compiled into Pine Script data libraries organized by half-year (LO1_News2025H1, LO1_News2025H2, etc.), updated weekly on Sundays. There are no API calls — events are baked into arrays of dates, times, type IDs, and severities.
Severity System
Can be configured to define or override three default severity tiers:
- Sev 3 (CPI, NFP, FOMC) — defaults to blocking the entire day or delaying, depending on policy
- Sev 2 (ISM PMI, claims) — defaults to delay trading until the event time + a configurable post-delay window
- Sev 1 (secondary indicators) — defaults to no delays
Blocking vs Delaying
- Block: No trading for the full session. WillTradeToday() returns false.
- Delay: Trading allowed after eventTime + delayMinutes. IsDelayed(currentTimeMs) returns true until the release time passes.
Provides a per-event-type policy mechanism, so overrides can force any event to block, to delay, or to be ignored, regardless of its base severity.
Next Trading Window Calculation
FindNextTradingWindow() scans forward up to 14 days, skipping weekends and blocked days based on the provided configuration. If the next tradeable day has a delay, it returns the delayed start time, so an info panel can show e.g. "Mon 7:35 AM" as the next trading opening.
Exception Mappings
Each half-year library can ship per-event-type overrides (different severity, custom delay minutes, tags). When the applyLibExceptionMappings configuration is enabled, these override the base severity — allowing the data to carry date-specific adjustments.
Special Handling
CME early close days are encoded as a special event type. CheckCmeEarlyClose() returns a halt timestamp so a strategy can truncate the session.
Caching
Evaluation is lazy and memoized by date string — EvaluateForDate() only recomputes when the date changes. The event cache is built once at initialization via a day index for fast date lookups.
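A minimal integration sketch; the import path, version, and the argument lists shown are assumptions, not confirmed by this description:
//@version=6
strategy("YM News Gate Demo", overlay=true)
import YourUsername/Vantage_News/1 as news
// Gate entries on the library's block/delay verdicts (signatures assumed)
tradeable = news.WillTradeToday() and not news.IsDelayed(time)
if tradeable and strategy.position_size == 0
    strategy.entry("Long", strategy.long)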
Vantage_NewsTypes
Library "Vantage_NewsTypes"
News Type definitions and default severity ratings with respect to YM futures. Updated weekly as news changes.
Severity 1 - Watch but not considered very impactful
Severity 2 - Default to don't trade until after event
Severity 3 - Default to don't trade on a day with this event
HawkDoveScoreLib
Library "HawkDoveScoreLib"
hds_score(sym2y, sym10y, symBE10, symBS, symFCI, tfMacro, lenTrend, lenNorm, smoothScore, useRatePath, useCurve, useRealYield, useBalanceSh, useStressFCI, wRatePath, wCurve, wRealYield, wBS, wStress)
Parameters:
sym2y (string)
sym10y (string)
symBE10 (string)
symBS (string)
symFCI (string)
tfMacro (string)
lenTrend (int)
lenNorm (int)
smoothScore (simple int)
useRatePath (bool)
useCurve (bool)
useRealYield (bool)
useBalanceSh (bool)
useStressFCI (bool)
wRatePath (float)
wCurve (float)
wRealYield (float)
wBS (float)
wStress (float)
hds_regime(score, thrDove, thrHawk)
Parameters:
score (float)
thrDove (int)
thrHawk (int)
hds_regime_label(score, thrDove, thrHawk)
Parameters:
score (float)
thrDove (int)
thrHawk (int)
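A minimal call sketch; the import path, symbols, lengths, weights, and thresholds below are placeholders, not recommendations:
//@version=6
indicator("HawkDoveScore Demo")
import YourUsername/HawkDoveScoreLib/1 as hds
// Placeholder feeds: 2y yield, 10y yield, 10y breakeven, balance sheet, financial conditions index
score = hds.hds_score("US02Y", "US10Y", "T10YIE", "WALCL", "NFCI", "D",
     63, 252, 5, true, true, true, true, true, 1.0, 1.0, 1.0, 1.0, 1.0)
plot(score, "HDS score")
// Classify the score with symmetric placeholder thresholds
regime = hds.hds_regime_label(score, -25, 25)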
pubLibCandlestickPatterns
Library "pubLibCandlestickPatterns"
Candlestick pattern conditions for indicator and strategy development (see the usage sketch after the function list).
doji()
bull_marubozu()
bear_marubozu()
spinning_top()
bull_belt_hold_line()
bear_belt_hold_line()
bull_breakaway()
bear_breakaway()
concealing_baby_swallow()
bull_counterattack()
bear_counterattack()
dark_cloud_cover()
long_legged_doji()
southern_doji()
northern_doji()
dumpling_top()
bull_engulfing()
bear_engulfing()
frypan_bottom()
hammer()
hanging_man()
bull_harami()
bear_harami()
bull_harami_cross()
bear_harami_cross()
high_wave()
bull_hikkake()
bear_hikkake()
homing_pigeon()
in_neck()
bull_kicking()
bear_kicking()
matching_low()
on_neck()
piercing()
bull_separating_lines()
bear_separating_lines()
upgap_side_by_side_white_lines()
downgap_side_by_side_white_lines()
stalled()
bull_star()
bear_star()
bull_doji_star()
bear_doji_star()
morning_star()
evening_star()
morning_doji_star()
evening_doji_star()
abandoned_baby_bottom()
abandoned_baby_top()
inverted_hammer()
shooting_star()
dragonfly_doji()
gravestone_doji()
stick_sandwich()
upward_gapping_tasuki()
downward_gapping_tasuki()
three_black_crows()
advance_block()
three_advancing_white_soldiers()
bull_three_line_strike()
bear_three_line_strike()
rising_three_methods()
falling_three_methods()
three_stars_in_the_south()
thrusting()
tower_bottom()
tower_top()
tri_star_bottom()
tri_star_top()
tweezer_bottom()
tweezer_top()
upside_gap_two_crows()
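A minimal usage sketch (the import path is a placeholder); each exported function returns a condition suitable for plots, alerts, or strategy logic:
//@version=6
indicator("Candlestick Pattern Demo", overlay=true)
import YourUsername/pubLibCandlestickPatterns/1 as cp
plotshape(cp.bull_engulfing(), "Bull Engulfing", shape.triangleup, location.belowbar, color.green)
plotshape(cp.bear_engulfing(), "Bear Engulfing", shape.triangledown, location.abovebar, color.red)
plotshape(cp.doji(), "Doji", shape.diamond, location.abovebar, color.gray)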
NodialTreesLows2: ML Random Forest / Pivot Lows (Part 2 of 2)
Title: `Library: ML Random Forest / Pivot Lows (Part 2 of 2)`
Description:
This library contains the second half (Trees 6-11) of the Random Forest Classifier designed to validate Pivot Lows (Long setups).
It is a direct extension of NodialTreesL1 and cannot be used alone. Due to Pine Script's compilation limits on complexity and file size, the 12-tree ensemble model has been split into two separate libraries.
### 🧩 Library Contents
This module exports the following methods representing the specific decision paths of the trained AI model:
- `tree_6(array<float> f)`
- `tree_7(array<float> f)`
- `tree_8(array<float> f)`
- `tree_9(array<float> f)`
- `tree_10(array<float> f)`
- `tree_11(array<float> f)`
### ⚠️ Implementation Guide
To use this library, you must combine it with Part 1.
Please refer to the NodialTreesLows1 library description for:
1. The full Integration Code Example (how to average the votes).
2. The exact Input Feature List (the 27 required metrics).
3. Detailed explanation of the Machine Learning logic.
How to finish the integration:
Import this library alongside Part 1 and add the results of `tree_6` through `tree_11` to your voting sum, as shown in the Part 1 documentation.
NodialTreesLows1: ML Random Forest / Pivot Lows (Part 1 of 2)
Title: `Library: ML Random Forest / Pivot Lows (Part 1 of 2)`
Description:
This library contains the first half (Trees 0-5) of a Random Forest Classifier designed to validate Pivot Lows (Long setups).
Due to Pine Script size constraints, the model is split into two libraries. You must use this library in conjunction with NodialTreesL2 to run the full ensemble.
### 🧩 System Architecture
- Model: Random Forest (12 Trees total).
- This Library: Contains `tree_0` to `tree_5`.
- Logic: Each tree analyzes a feature array and outputs a probability score.
- Target: Validating Swing Lows / Support Bounces.
### 📊 Input Requirements
The methods expect an `array<float>` of size 27 containing market features (Price Action, Momentum, Volatility, Volume, Structure). The exact order of features is critical for the model's accuracy.
### 🛠️ Integration Example
Since this is a modular library, you need to import both parts and average their results to get the final prediction, as shown in the code below.
### 📋 Feature Mapping (Array Indexing)
To get accurate predictions, the input array must contain exactly 27 floats in this specific order:
0. Timeframe (in seconds)
1. RSI (Raw Value)
2. MACD Histogram
3. Relative Volume
4. EMA Distance (%)
5. EMA Slope
6. ATR Ratio
7. ADX
8. Buying/Selling Pressure
9. Wick Ratio
10-16. Divergences & Pattern Flags (Boolean 0.0/1.0)
17-22. Proprietary Momentum Metrics ("Onion" Structure)
23-26. Derived Volatility/Volume Features
*Note: For the advanced proprietary metrics (Indices 17-26), users must implement their own calculations or use compatible indicators.*
//@version=6
indicator("My ML Long Strategy", overlay=true)
// Import BOTH libraries
import YourUsername/NodialTreesL1/1 as rf_part1
import YourUsername/NodialTreesL2/1 as rf_part2
// ... (Calculate your 27 features and fill the array in the documented order) ...
// e.g. var features = array.from(timeframe, rsi, macd, ...)
var float[] features = array.new_float(27, 0.0)  // placeholder so the snippet compiles; populate with your features
// Calculate Ensemble Probability (Average of 12 Trees)
float vote_sum = 0.0
// Trees from Part 1
vote_sum += rf_part1.tree_0(features)
vote_sum += rf_part1.tree_1(features)
vote_sum += rf_part1.tree_2(features)
vote_sum += rf_part1.tree_3(features)
vote_sum += rf_part1.tree_4(features)
vote_sum += rf_part1.tree_5(features)
// Trees from Part 2 (Trees 6-11)
vote_sum += rf_part2.tree_6(features)
vote_sum += rf_part2.tree_7(features)
vote_sum += rf_part2.tree_8(features)
vote_sum += rf_part2.tree_9(features)
vote_sum += rf_part2.tree_10(features)
vote_sum += rf_part2.tree_11(features)
// Final Probability (0.0 to 1.0)
float final_prob = vote_sum / 12.0
if final_prob > 0.60
    label.new(bar_index, low, "Valid Low", color=color.green)
NodialTreesHighs2: ML Random Forest / Pivot Highs (Part 2 of 2)
Title: `Library: ML Random Forest / Pivot Highs (Part 2 of 2)`
Description:
This library contains the second half (Trees 6-11) of the Random Forest Classifier designed to validate Pivot Highs (Short setups).
It is a direct extension of NodialTreesH1 and cannot be used alone. Due to Pine Script's compilation limits on complexity and file size, the 12-tree ensemble model has been split into two separate libraries.
### 🧩 Library Contents
This module exports the following methods representing the specific decision paths of the trained AI model:
- `tree_6(array<float> f)`
- `tree_7(array<float> f)`
- `tree_8(array<float> f)`
- `tree_9(array<float> f)`
- `tree_10(array<float> f)`
- `tree_11(array<float> f)`
### ⚠️ Implementation Guide
To use this library, you must combine it with Part 1.
Please refer to the NodialTreesH1 library description for:
1. The full Integration Code Example (how to average the votes).
2. The exact Input Feature List (the 27 required metrics).
3. Detailed explanation of the Machine Learning logic.
How to finish the integration:
Import this library alongside Part 1 and add the results of `tree_6` through `tree_11` to your voting sum, as shown in the Part 1 documentation.
NodialTreesHighs1: ML Random Forest / Pivot Highs (Part 1 of 2)
Title: `Library: ML Random Forest / Pivot Highs (Part 1 of 2)`
Description:
This library contains the first half (Trees 0-5) of a Random Forest Classifier designed to validate Pivot Highs (Short setups).
Due to Pine Script size constraints, the model is split into two libraries. You must use this library in conjunction with NodialTreesH2 to run the full ensemble.
### 🧩 System Architecture
- Model: Random Forest (12 Trees total).
- This Library: Contains `tree_0` to `tree_5`.
- Logic: Each tree analyzes a feature array and outputs a probability score.
- Target: Validating Swing Highs / Resistance Rejections.
### 📊 Input Requirements
The methods expect an `array<float>` of size 27 containing market features (Price Action, Momentum, Volatility, Volume, Structure). The exact order of features is critical for the model's accuracy.
### 🛠️ Integration Example
Since this is a modular library, you need to import both parts and average their results to get the final prediction, as shown in the code below.
### 📋 Feature Mapping (Array Indexing)
To get accurate predictions, the input array must contain exactly 27 floats in this specific order:
0. Timeframe (in seconds)
1. RSI (Raw Value)
2. MACD Histogram
3. Relative Volume
4. EMA Distance (%)
5. EMA Slope
6. ATR Ratio
7. ADX
8. Buying/Selling Pressure
9. Wick Ratio
10-16. Divergences & Pattern Flags (Boolean 0.0/1.0)
17-22. Proprietary Momentum Metrics ("Onion" Structure)
23-26. Derived Volatility/Volume Features
*Note: For the advanced proprietary metrics (Indices 17-26), users must implement their own calculations or use compatible indicators.*
//@version=6
indicator("My ML Short Strategy", overlay=true)
// Import BOTH libraries
import YourUsername/NodialTreesH1/1 as rf_part1
import YourUsername/NodialTreesH2/1 as rf_part2
// ... (Calculate your 27 features and fill the array in the documented order) ...
// e.g. var features = array.from(timeframe, rsi, macd, ...)
var float[] features = array.new_float(27, 0.0)  // placeholder so the snippet compiles; populate with your features
// Calculate Ensemble Probability (Average of 12 Trees)
float vote_sum = 0.0
// Trees from Part 1
vote_sum += rf_part1.tree_0(features)
vote_sum += rf_part1.tree_1(features)
vote_sum += rf_part1.tree_2(features)
vote_sum += rf_part1.tree_3(features)
vote_sum += rf_part1.tree_4(features)
vote_sum += rf_part1.tree_5(features)
// Trees from Part 2 (Trees 6-11)
vote_sum += rf_part2.tree_6(features)
vote_sum += rf_part2.tree_7(features)
vote_sum += rf_part2.tree_8(features)
vote_sum += rf_part2.tree_9(features)
vote_sum += rf_part2.tree_10(features)
vote_sum += rf_part2.tree_11(features)
// Final Probability (0.0 to 1.0)
float final_prob = vote_sum / 12.0
if final_prob > 0.60
    label.new(bar_index, high, "Valid Short", color=color.red)
NormalizedVolume_HHHL
Normalized volume + HH/HL/LH/LL structure logic for confirming moves and spotting traps. Library only, intended for use within indicators; it does not plot or draw anything by itself.
lib_w2c_INDI
Library "lib_w2c_INDI"
f_getChunk18()
f_getChunk19()
f_getChunk20()
f_getChunkNames()
f_getExpectedLength()
f_getKeyword()
f_dummyRegister()
f_loadChunk(name)
Parameters:
name (string)
lib_w2b_INDI
Library "lib_w2b_INDI"
f_getChunk9()
f_getChunk10()
f_getChunk11()
f_getChunk12()
f_getChunk13()
f_getChunk14()
f_getChunk15()
f_getChunk16()
f_getChunk17()
f_getChunkNames()
f_getExpectedLength()
f_getKeyword()
f_dummyRegister()
f_loadChunk(name)
Parameters:
name (string)
lib_w2a_INDI
Library "lib_w2a_INDI"
f_getChunk0()
f_getChunk1()
f_getChunk2()
f_getChunk3()
f_getChunk4()
f_getChunk5()
f_getChunk6()
f_getChunk7()
f_getChunk8()
f_getChunkNames()
f_getExpectedLength()
f_getKeyword()
f_dummyRegister()
f_loadChunk(name)
Parameters:
name (string)
lib_w1c_INDI
Library "lib_w1c_INDI"
f_getChunk18()
f_getChunkNames()
f_getExpectedLength()
f_getKeyword()
f_dummyRegister()
f_loadChunk(name)
Parameters:
name (string)
lib_w1b_INDI
Library "lib_w1b_INDI"
f_getChunk9()
f_getChunk10()
f_getChunk11()
f_getChunk12()
f_getChunk13()
f_getChunk14()
f_getChunk15()
f_getChunk16()
f_getChunk17()
f_getChunkNames()
f_getExpectedLength()
f_getKeyword()
f_dummyRegister()
f_loadChunk(name)
Parameters:
name (string)
lib_w1a_INDI
Library "lib_w1a_INDI"
f_getChunk0()
f_getChunk1()
f_getChunk2()
f_getChunk3()
f_getChunk4()
f_getChunk5()
f_getChunk6()
f_getChunk7()
f_getChunk8()
f_getChunkNames()
f_getExpectedLength()
f_getKeyword()
f_dummyRegister()
f_loadChunk(name)
Parameters:
name (string)
lib_b2_INDI
Library "lib_b2_INDI"
f_getChunk0()
f_getChunkNames()
f_getExpectedLength()
f_getKeyword()
f_dummyRegister()
f_loadChunk(name)
Parameters:
name (string)
lib_b1_INDI
Library "lib_b1_INDI"
f_getChunk0()
f_getChunkNames()
f_getExpectedLength()
f_getKeyword()
f_dummyRegister()
f_loadChunk(name)
Parameters:
name (string)
TPOSmartMoneyLib
Library "TPOSmartMoneyLib"
Library for TPO (Time Price Opportunity) and Smart Money concepts including session management, PDH/PDL detection, sweeping logic, and volume profile utilities
f_price_to_tick(p)
Convert price to tick
Parameters:
p (float) : Price value
Returns: Tick value
f_tick_to_row(t, row_ticks_in)
Convert tick to row
Parameters:
t (int) : Tick value
row_ticks_in (int) : Number of ticks per row
Returns: Row index
f_row_to_price(row, row_ticks_in)
Convert row to price (midpoint)
Parameters:
row (int) : Row index
row_ticks_in (int) : Number of ticks per row
Returns: Price at row midpoint
f_calc_row_ticks(natr_ref, row_gran_mult)
Calculate dynamic row size based on normalized ATR
Parameters:
natr_ref (float) : Daily normalized ATR reference value
row_gran_mult (float) : Row granularity multiplier
Returns: Number of ticks per row
f_more_transp_pct(c, pct)
Increase color transparency by percentage
Parameters:
c (color) : Input color
pct (float) : Percentage to increase transparency (0.0 to 1.0)
Returns: Color with increased transparency
f_dom_color(dom, buy_col, sell_col, gamma, transp_weak, transp_strong)
Calculate dominance color based on buy/sell ratio
Parameters:
dom (float) : Dominance ratio (-1 to 1, negative = sell, positive = buy)
buy_col (color) : Buy dominant color
sell_col (color) : Sell dominant color
gamma (float) : Gamma correction for color intensity
transp_weak (int) : Transparency for weak dominance
transp_strong (int) : Transparency for strong dominance
Returns: Blended color
f_sess_part(sess_str, get_start)
Parse session string to get start or end time
Parameters:
sess_str (string) : Session string in format "HHMM-HHMM"
get_start (bool) : True to get start time, false to get end time
Returns: Time string in HHMM format
f_hhmm_to_h(hhmm)
Convert HHMM string to hours
Parameters:
hhmm (string) : Time string in HHMM format
Returns: Hours (0-23)
f_hhmm_to_m(hhmm)
Convert HHMM string to minutes
Parameters:
hhmm (string) : Time string in HHMM format
Returns: Minutes (0-59)
f_prev_day_window_bounds(today_day_rth, win_start, win_end, session_tz)
Calculate previous day window bounds
Parameters:
today_day_rth (int) : Today's RTH start timestamp
win_start (string) : Window start time in HHMM format
win_end (string) : Window end time in HHMM format
session_tz (string) : Session timezone
Returns: Tuple of the previous-day window start and end timestamps
f_default_session_colors()
Get default session colors
Returns: Array of 4 colors
f_session_names()
Get session names
Returns: Array of 4 session names
f_process_hl(arr, rng, keep_bars, lock_to_live)
Process high/low lines with sweeping detection
Parameters:
arr (array<HLLine>) : Array of HLLine objects
rng (float) : Price range for visibility filtering
keep_bars (int) : Maximum bars to keep lines
lock_to_live (bool) : Whether to lock line end to current bar
Returns: 0 (for chaining)
f_process_naked_lines(arr, calc_bars, bars_per_day, keep_to_day_end)
Process naked lines (POC/VAH/VAL) with sweeping detection
Parameters:
arr (array<NakedLine>) : Array of NakedLine objects
calc_bars (int) : Maximum calculation bars
bars_per_day (int) : Bars per day for scope calculation
keep_to_day_end (bool) : Whether to extend to day end
Returns: 0 (for chaining)
f_update_pdhl_lines(pd_hl, pdh, pdl, new_day, pd_rng, bars_per_day, pdh_color, pdl_color)
Detect and create PDH/PDL lines
Parameters:
pd_hl (array<HLLine>) : Array to store HLLine objects
pdh (float) : Previous day high
pdl (float) : Previous day low
new_day (bool) : Whether it's a new day
pd_rng (float) : Price range for visibility
bars_per_day (int) : Bars per day
pdh_color (color) : PDH line color
pdl_color (color) : PDL line color
Returns: 0 (for chaining)
f_poc_from_vals(keys, vals)
Calculate POC from sorted keys and values
Parameters:
keys (array) : Sorted array of row keys
vals (array) : Array of volume values
Returns: POC row key
f_value_area(keys, vals, poc_key, va_pct)
Calculate Value Area from volume distribution
Parameters:
keys (array) : Sorted array of row keys
vals (array) : Array of volume values
poc_key (int) : POC row key
va_pct (float) : Value Area percentage (typically 0.70)
Returns: Tuple of the value area high and low row keys
f_find_key_sorted(keys, target)
Find key in sorted array using binary search
Parameters:
keys (array) : Sorted array of keys
target (int) : Target key to find
Returns: Index of key, or -1 if not found
f_zscore_safe(x, len)
Safe z-score calculation using built-in functions
Parameters:
x (float) : Input series
len (int) : Lookback length
Returns: Z-score
HLLine
Represents a high/low line with sweeping detection
Fields:
ln (series line) : Line object
lb (series label) : Label object
lvl (series float) : Price level
startBar (series int) : Bar index where line starts
swept (series bool) : Whether the level has been swept
isHigh (series bool) : True if this is a high, false if low
col (series color) : Line color
NakedLine
Represents a naked POC/VAH/VAL line
Fields:
ln (series line) : Line object
lb (series label) : Label object
lvl (series float) : Price level
startBar (series int) : Bar index where line starts
swept (series bool) : Whether the level has been swept
sweptBar (series int) : Bar index where swept occurred
endBar (series int) : Bar index where line should end
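A minimal usage sketch of the tick/row helpers; the import path and row granularity are placeholders, and return types are assumed:
//@version=6
indicator("TPO Row Demo", overlay=true)
import YourUsername/TPOSmartMoneyLib/1 as tpo
int row_ticks = 4  // placeholder: ticks per row
// Round-trip a price through tick and row space back to the row midpoint
t = tpo.f_price_to_tick(close)
row = tpo.f_tick_to_row(t, row_ticks)
row_mid = tpo.f_row_to_price(row, row_ticks)
plot(row_mid, "Row midpoint", color.orange, 2)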