ORB + Session VWAP Pro (London & NY) — Listing copy (EN)
What it is
A clean, non-repainting intraday tool that fuses the classic Opening Range Breakout (ORB) with a session-anchored VWAP filter for London and New York. It highlights only the higher-quality breakouts (above/below session VWAP), adds an optional retest confirmation, and scores each signal with an intuitive Confidence metric (0–100).
Why it works
• ORB provides the day’s first actionable structure (range high/low).
• Session VWAP filters “cheap” breaks and favors flows aligned with session value.
• Optional retest reduces first-tick whipsaws.
• Confidence blends breakout depth (vs ATR), VWAP slope and band distance.
Key visuals
• LDN/NY OR High/Low (line break style) + optional OR boxes.
• Active Session VWAP (resets per signal window; falls back to daily VWAP outside).
• Optional VWAP bands (stdev or %).
• Session shading (London/NY windows).
• Signal markers (LDN BUY/SELL, NY BUY/SELL) fired with cooldown.
Signals
• London Long / Short: Break of LDN OR High/Low ± ATR buffer, aligned with VWAP side.
• NY Long / Short: Same logic during NY window.
• Retest (optional): Requires a tag back to the OR level ± tolerance before confirmation.
• Confidence: 0–100; gate via Min Confidence (default 55).
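For illustration only, a minimal Pine v5 sketch of how such a blend could be normalized and combined; the weights, the 14-bar ATR, and the orHigh placeholder are assumptions, not the published script's values:
//@version=5
indicator('Confidence blend sketch')
atr = ta.atr(14)
vwapVal = ta.vwap(hlc3)
orHigh = ta.highest(high, 3)                                      // placeholder for the OR High level
depthScore = math.min(math.abs(close - orHigh) / atr, 1.0)        // breakout depth vs ATR
slopeScore = math.min(math.abs(vwapVal - vwapVal[5]) / atr, 1.0)  // VWAP slope over 5 bars
distScore  = math.min(math.abs(close - vwapVal) / atr, 1.0)       // distance from VWAP as a band proxy
confidence = 100 * (0.4 * depthScore + 0.3 * slopeScore + 0.3 * distScore)
plot(confidence, 'Confidence')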
Inputs that matter
• Open Range Length (min): Default 15.
• London/NY times & timezones.
• ATR buffer & retest tolerance.
• Bands mode: Stdev (with lookback) or % (e.g., 1%).
• Signal cooldown: Avoids clutter on fast moves.
Non-repaint policy
• OR lines build within fixed time windows using the current bar’s timestamp.
• VWAP is cumulative within the session window; no lookahead.
• All ta.crossover/ta.crossunder are precomputed every bar (no conditional execution).
• Signals are based on live bar values, not future bars.
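As a minimal illustration of the precompute rule, a generic Pine v5 sketch (not the script's actual code):
//@version=5
indicator('Repaint-safe crossover sketch', overlay = true)
vwapVal = ta.vwap(hlc3)
// ta.* calls run unconditionally on every bar; gating them inside an `if`
// branch would break their internal history and invite repaint-like artifacts
crossUp   = ta.crossover(close, vwapVal)
crossDown = ta.crossunder(close, vwapVal)
inLondon  = not na(time(timeframe.period, '0800-1700', 'Europe/London'))
plotshape(inLondon and crossUp, style = shape.triangleup, location = location.belowbar)
plotshape(inLondon and crossDown, style = shape.triangledown, location = location.abovebar)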
⸻
Quick start (examples)
1) EURUSD, London momentum
• Chart: 5m or 15m.
• OR: 15 min starting 08:00 Europe/London.
• Signals: Use defaults; keep ATR buffer = 0.2 and Retest = ON, Min Confidence ≥ 55.
• Play:
• BUY when price breaks LDN OR High + buffer and stays above VWAP; retest confirms.
• Trail behind VWAP or band #1; partials into band #2.
2) NAS100, New York breakout & run
• Chart: 5m.
• NY window: 09:30 America/New_York, OR = 15 min.
• Retest OFF on high momentum days; Min Confidence ≥ 60.
• Use band mode Stdev, bandLen=50, show ±1/±2.
• Momentum continuation: add on pullbacks that hold above VWAP after the breakout.
3) XAUUSD, London fake & VWAP fade
• Chart: 5m.
• Keep Retest ON; accept only shorts where price breaks the OR Low and the retest fails back under VWAP.
• Confidence gate ≥ 50 to allow more mean-reversion setups.
⸻
Pro tips
• Adjust ATR buffer to the instrument: FX 0.15–0.25, indices 0.20–0.35, metals 0.20–0.30.
• Retest ON for choppy conditions; OFF for news momentum.
• Use VWAP bands: take partials at ±1; stretch targets at ±2/±3.
• Session timezones are explicit (London/New York). Ensure they match your instrument’s behavior.
• Pair with a higher-TF bias (e.g., 1H/4H trend) for directional filtering.
⸻
Alerts (ready to use)
• ORB+SVWAP — LDN Long, LDN Short, NY Long, NY Short
(Respect your cooldown; alerts fire only after confirmation and confidence gate.)
⸻
Known limits & notes
• Designed for intraday. On 1D+ charts, session windows compress.
• If your broker session differs from London/NY clocks on a holiday, adjust input times.
• Session-anchored VWAP uses the script’s signal window, not exchange sessions, by design.
chrono_utils
Library "chrono_utils"
📝 Description
Collection of objects and common functions related to datetime windows, session days, and time ranges. The main purpose of this library is to handle time-related functionality and make it easy to reason about a future bar, checking whether it will be part of a predefined session and/or inside a datetime window. All existing session functionality I found in the documentation, e.g. "not na(time(timeframe, session, timezone))", is not suitable for strategy scripts, since the execution of orders is delayed by one bar because the script executes at bar close. Moreover, a history operator with a negative value that looks forward is not allowed in any Pine Script expression. So a prediction for the next bar using the bars_back argument of "time()" and "time_close()" was necessary, and I created this library to overcome this small but very important limitation. In the meantime, I added useful functionality to handle session-based behavior. An interesting utility that emerged from this development is data anomaly detection, where the prediction is compared with the actual value. If those two values differ, a data inconsistency exists between the prediction bar and the actual bar (probably due to a holiday, a half session day, a timezone change, etc.).
🤔 How to Guide
To use the functionality this library provides in your script you have to import it first!
Copy the import statement of the latest release by pressing the copy button below and then paste it into your script. Give a short name to this library so you can refer to it later on. The import statement should look like this:
import jason5480/chrono_utils/2 as chr
To check if a future bar will be inside a window first of all you have to initialize a DateTimeWindow object.
A code example is the following:
var dateTimeWindow = chr.DateTimeWindow.new().init(fromDateTime = timestamp('01 Jan 2023 00:00'), toDateTime = timestamp('01 Jan 2024 00:00'))
Then you have to "ask" the dateTimeWindow whether the future bar, defined by an offset (the default is 1, which corresponds to the next bar), will be inside that window:
// Filter bars outside of the datetime window
bool dateFilterApproval = dateTimeWindow.is_bar_included()
You can visualize the result by drawing the background of the bars that are outside the given window:
bgcolor(color = dateFilterApproval ? na : color.new(color.fuchsia, 90), offset = 1, title = 'Datetime Window Filter')
In the same way, you can "ask" the Session whether the future bar defined by an offset will be inside that session.
First of all, you should initialize a Session object.
A code example is the following:
var sess = chr.Session.new().from_sess_string(sess = '0800-1700:23456', refTimezone = 'UTC')
Then check whether the given bar, defined by the offset (the default is 1, which corresponds to the next bar), will be inside the session like that:
// Filter bars outside the sessions
bool sessionFilterApproval = sess.is_bar_included()
You can visualize the result by drawing the background of the bars that are outside the given session:
bgcolor(color = sessionFilterApproval ? na : color.new(color.red, 90), offset = 1, title = 'Session Filter')
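In a strategy script, both approvals can gate order placement together. A minimal sketch, assuming the dateFilterApproval and sessionFilterApproval booleans from the examples above and a hypothetical longCondition:
// Orders fill on the next bar, so the offset-1 prediction matches the bar that actually executes
if dateFilterApproval and sessionFilterApproval and longCondition
    strategy.entry('Long', strategy.long)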
In case you want to visualize multiple session ranges you can create a SessionView object like that:
var view = chr.SessionView.new().init(chr.SessionDays.new().from_sess_string('2345'), array.from(chr.SessionTimeRange.new().from_sess_string('0800-1600'), chr.SessionTimeRange.new().from_sess_string('1300-2200')), array.from('London', 'New York'), array.from(color.blue, color.orange))
and then call the draw method of the SessionView object like that:
view.draw()
🏋️♂️ Please refer to the "EXAMPLE DATETIME WINDOW FILTER" and "EXAMPLE SESSION FILTER" regions of the script for more advanced code examples of how to utilize the full potential of this library, including user input settings and advanced visualization!
⚠️ Caveats
As mentioned in the description, there are some cases where the prediction of the next bar is not accurate. A wrong prediction will affect the outcome of the filtering. The main reasons this can happen are the following:
Public holidays when the market is closed
Half trading days usually before public holidays
Change in the daylight saving time (DST)
A data anomaly of the chart, where there are missing and/or inconsistent data.
A bug in this library (Please report by PM sending the symbol, timeframe, and settings)
Special thanks to @robbatt and @skinra for the constructive feedback 🏆. Without them, the exposed API of this library would be very lengthy and complicated to use. Thanks to them, now the user of this library will be able to get the most, with only a few lines of code!
STD/C-Filtered, N-Order Power-of-Cosine FIR Filter [Loxx]
STD/C-Filtered, N-Order Power-of-Cosine FIR Filter is a discrete-time FIR digital filter that uses the Power-of-Cosine family of FIR filters. This N-order algorithm extends the following indicator from a static maximum of 16 orders to N orders, limited to 50 in code. You can raise the top-end value if you wish to use orders higher than 50, but the signal is likely too noisy at that level. This indicator also includes a clutter filter and a standard deviation filter.
See the static order version of this indicator here:
STD/C-Filtered, Power-of-Cosine FIR Filter
Amplitudes for STD/C-Filtered, N-Order Power-of-Cosine FIR Filter:
What are FIR Filters?
In discrete-time signal processing, windowing is a preliminary signal shaping technique, usually applied to improve the appearance and usefulness of a subsequent Discrete Fourier Transform. Several window functions can be defined, based on a constant (rectangular window), B-splines, other polynomials, sinusoids, cosine-sums, adjustable, hybrid, and other types. The windowing operation consists of multiplying the given sampled signal by the window function. For trading purposes, these FIR filters act as advanced weighted moving averages.
What is a Power-of-Cosine Digital FIR Filter?
Also called Cos^alpha Window Family. In this family of windows, changing the value of the parameter alpha generates different windows.
f(n) = math.pow(math.cos(math.pi * n / N), alpha) , 0 ≤ |n| ≤ N/2
where alpha takes on integer values and N is an even number
General expanded form:
alpha0 - alpha1 * math.cos(2 * math.pi * n / N)
+ alpha2 * math.cos(4 * math.pi * n / N)
- alpha3 * math.cos(6 * math.pi * n / N)
+ alpha4 * math.cos(8 * math.pi * n / N)
- ...
Special Cases for alpha:
alpha = 0: Rectangular window, this is also just the SMA (not included here)
alpha = 1: MLT sine window (not included here)
alpha = 2: Hann window (raised cosine = cos^2)
alpha = 4: Alternative Blackman (maximized roll-off rate)
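In Pine terms, the window weight for a given alpha can be written as follows (a minimal sketch; powCos is a hypothetical helper, not part of the published code):
powCos(float n, float N, float alpha) =>
    math.pow(math.cos(math.pi * n / N), alpha)
// e.g. powCos(n, N, 2) reproduces the Hann (cos^2) weights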
This indicator contains a binomial expansion algorithm to handle N orders of a cosine power series. You can read about how this is done here: The Binomial Theorem
What is Pascal's Triangle and how was it used here?
In mathematics, Pascal's triangle is a triangular array of the binomial coefficients that arises in probability theory, combinatorics, and algebra. In much of the Western world, it is named after the French mathematician Blaise Pascal, although other mathematicians studied it centuries before him in India, Persia, China, Germany, and Italy.
The rows of Pascal's triangle are conventionally enumerated starting with row n = 0 at the top (the 0th row). The entries in each row are numbered from the left beginning with k=0 and are usually staggered relative to the numbers in the adjacent rows. The triangle may be constructed in the following manner: In row 0 (the topmost row), there is a unique nonzero entry 1. Each entry of each subsequent row is constructed by adding the number above and to the left with the number above and to the right, treating blank entries as 0. For example, the initial number in the first (or any other) row is 1 (the sum of 0 and 1), whereas the numbers 1 and 3 in the third row are added to produce the number 4 in the fourth row.
Rows of Pascal's Triangle
0 Order: 1
1 Order: 1 1
2 Order: 1 2 1
3 Order: 1 3 3 1
4 Order: 1 4 6 4 1
5 Order: 1 5 10 10 5 1
6 Order: 1 6 15 20 15 6 1
7 Order: 1 7 21 35 35 21 7 1
8 Order: 1 8 28 56 70 56 28 8 1
9 Order: 1 9 36 84 126 126 84 36 9 1
10 Order: 1 10 45 120 210 252 210 120 45 10 1
11 Order: 1 11 55 165 330 462 462 330 165 55 11 1
12 Order: 1 12 66 220 495 792 924 792 495 220 66 12 1
13 Order: 1 13 78 286 715 1287 1716 1716 1287 715 286 78 13 1
For a 12th order Power-of-Cosine FIR Filter
1. We take the coefficients from the left side of the 13th row
1 13 78 286 715 1287 1716 1716 1287 715 286 78 13 1
2. We slice those in half to
1 13 78 286 715 1287 1716
3. We reverse the array
1716 1287 715 286 78 13 1
This is our array of alphas: alpha1, alpha2, ... alphaN
4. We then pull alpha0 from the previous order, order 11, taking the middle value
11 Order: 1 11 55 165 330 462 462 330 165 55 11 1
The middle value is 462, this value becomes our alpha0 in the calculation
5. We apply these alphas to the cosine calculations
example: + alpha4 * math.cos(8 * math.pi * n / N)
6. We then divide by the sum of the alphas to derive our final coefficient weighting kernel
This is only useful for orders that are EVEN. If you use odd ordering, the coefficients cancel each other out and result in a value of zero, so the outputs aren't useful. See below for an odd-numbered order and compare it with the even-order amplitude graphic posted above:
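A schematic Pine v5 sketch of the coefficient construction from steps 1–3 (hypothetical helper names; the published script's internals may differ):
binomialRow(int ord) =>
    row = array.new_float()
    c = 1.0
    for k = 0 to ord
        array.push(row, c)
        c := c * (ord - k) / (k + 1)   // next binomial coefficient in the row
    row

alphasFromRow(int ord) =>
    full = binomialRow(ord)            // e.g. ord = 13 gives 1 13 78 ... 78 13 1
    alphas = array.new_float()
    for k = 0 to (ord + 1) / 2 - 1     // keep the left half of the row
        array.push(alphas, array.get(full, k))
    array.reverse(alphas)              // largest coefficient first: alpha1 .. alphaN
    alphas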
What is a Standard Deviation Filter?
If price, the filter output, or both don't move more than the standard deviation times the multiplier, the trend stays the same as the previous bar's trend. This appears on the chart as "stepping" of the moving average line. This works similarly to Supertrend or Parabolic SAR but is a more naive filtering technique.
What is a Clutter Filter?
For our purposes here, this is a filter that compares the slope of the trading filter output to a threshold to determine whether to shift trends. If the slope, whether up or down, doesn't exceed the threshold, the color is gray, indicating a chop zone. If either the up or down slope exceeds the threshold, the trend turns green for up and red for down. For demonstration purposes, an EMA is used as the moving average. This acts to reduce the noise in the signal.
Included
Bar coloring
Loxx's Expanded Source Types
Signals
Alerts
STD/C-Filtered, Power-of-Cosine FIR Filter [Loxx]
STD/C-Filtered, Power-of-Cosine FIR Filter is a discrete-time FIR digital filter that uses the Power-of-Cosine family of FIR filters. This indicator also includes a clutter filter and a standard deviation filter.
Amplitudes
What are FIR Filters?
In discrete-time signal processing, windowing is a preliminary signal shaping technique, usually applied to improve the appearance and usefulness of a subsequent Discrete Fourier Transform. Several window functions can be defined, based on a constant (rectangular window), B-splines, other polynomials, sinusoids, cosine-sums, adjustable, hybrid, and other types. The windowing operation consists of multiplying the given sampled signal by the window function. For trading purposes, these FIR filters act as advanced weighted moving averages.
What is a Power-of-Cosine Digital FIR Filter?
Also called Cos^alpha Window Family. In this family of windows, changing the value of the parameter alpha generates different windows.
f(n) = math.pow(math.cos(math.pi * n / N), alpha) , 0 ≤ |n| ≤ N/2
where alpha takes on integer values and N is an even number
General expanded form:
alpha0 - alpha1 * math.cos(2 * math.pi * n / N)
+ alpha2 * math.cos(4 * math.pi * n / N)
- alpha3 * math.cos(6 * math.pi * n / N)
+ alpha4 * math.cos(8 * math.pi * n / N)
- ...
Special Cases for alpha:
alpha = 0: Rectangular window, this is also just the SMA (not included here)
alpha = 1: MLT sine window (not included here)
alpha = 2: Hann window (raised cosine = cos^2)
alpha = 4: Alternative Blackman (maximized roll-off rate)
For this indicator, I've included alpha values from 2 to 16
What is a Standard Deviation Filter?
If price, the filter output, or both don't move more than the standard deviation times the multiplier, the trend stays the same as the previous bar's trend. This appears on the chart as "stepping" of the moving average line. This works similarly to Supertrend or Parabolic SAR but is a more naive filtering technique.
What is a Clutter Filter?
For our purposes here, this is a filter that compares the slope of the trading filter output to a threshold to determine whether to shift trends. If the slope, whether up or down, doesn't exceed the threshold, the color is gray, indicating a chop zone. If either the up or down slope exceeds the threshold, the trend turns green for up and red for down. For demonstration purposes, an EMA is used as the moving average. This acts to reduce the noise in the signal.
Included
Bar coloring
Loxx's Expanded Source Types
Signals
Alerts
HorizonSigma Pro [CHE]
HorizonSigma Pro
Disclaimer
Not every timeframe will yield good results. Very short charts are dominated by microstructure noise, spreads, and slippage; signals can flip and the tradable edge shrinks after costs. Very high timeframes adapt more slowly, provide fewer samples, and can lag regime shifts. When you change timeframe, you also change the ratios between horizon, lookbacks, and correlation windows—what works on M5 won’t automatically hold on H1 or D1. Liquidity, session effects (overnight gaps, news bursts), and volatility do not scale linearly with time. Always validate per symbol and timeframe, then retune horizon, z-length, correlation window, and either the neutral band or the z-threshold. On fast charts, “components” mode adapts quicker; on slower charts, “super” reduces noise. Keep prior-shift and calibration enabled, monitor Hit Rate with its confidence interval and the Brier score, and execute only on confirmed (closed-bar) values.
For example, what do “UP 61%” and “DOWN 21%” mean?
“UP 61%” is the model’s estimated probability that the close will be higher after your selected horizon—directional probability, not a price target or profit guarantee. “DOWN 21%” still reports the probability of up; here it’s 21%, which implies 79% for down (a short bias). The label switches to “DOWN” because the probability falls below your short threshold. With a neutral-band policy, for example ±7%, signals are: Long above 57%, Short below 43%, Neutral in between. In z-score mode, fixed z-cutoffs drive the call instead of percentages. The arrow length on the chart is an ATR-scaled projection to visualize reach; treat it as guidance, not a promise.
Part 1 — Scientific description
Objective.
The indicator estimates the probability that price will be higher after a user-defined horizon (a chosen number of bars) and emits long, short, or neutral decisions under explicit thresholds. It combines multi‑feature, z‑normalized inputs, adaptive correlation‑based weighting, a prior‑shifted sigmoid mapping, optional rolling probability calibration, and repaint‑safe confirmation. It also visualizes an ATR‑scaled forward projection and prints a compact statistics panel.
Data and labeling.
For each bar, the target label is whether price increased over the past chosen horizon. Learning is deliberately backward‑looking to avoid look‑ahead: features are associated with outcomes that are only known after that horizon has elapsed.
Feature engineering.
The feature set includes momentum, RSI, stochastic %K, MACD histogram slope, a normalized EMA(20/50) trend spread, ATR as a share of price, Bollinger Band width, and volume normalized by its moving average. All features are standardized over rolling windows. A compressed “super‑feature” is available that aggregates core trend and momentum components while penalizing excessive width (volatility). Users can switch between a “components” mode (weighted sum of individual features) and a “super” mode (single compressed driver).
Weighting and learning.
Weights are the rolling correlations between features (evaluated one horizon ago) and realized directional outcomes, smoothed by an EMA and optionally clamped to a bounded range to stabilize outliers. This produces an adaptive, regime‑aware weighting without explicit machine‑learning libraries.
Scoring and probability mapping.
The raw score is either the weighted component sum or the weighted super‑feature. The score is standardized again and passed through a sigmoid whose steepness is user‑controlled. A “prior shift” moves the sigmoid’s midpoint to the current base rate of up moves, estimated over the evaluation window, so that probabilities remain well‑calibrated when markets drift bullish or bearish. Probabilities and standardized scores are EMA‑smoothed for stability.
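A minimal sketch of one way such a prior shift can be implemented (an assumed functional form; the script's exact constants and smoothing are not reproduced here):
priorShiftedProb(float z, float steepness, float baseRate) =>
    // the logit of the rolling base rate moves the sigmoid midpoint, so a
    // neutral score (z = 0) maps to the base rate instead of to 50%
    mid = math.log(baseRate / (1.0 - baseRate))
    1.0 / (1.0 + math.exp(-(steepness * z + mid)))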
Decision policy.
Two modes are supported:
- Neutral band: go long if the probability is above one half plus a user‑set band; go short if it is below one half minus that band; otherwise stay neutral.
- Z‑score thresholds: use symmetric positive/negative cutoffs on the standardized score to trigger long/short.
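Both policies reduce to a three-way signal. A minimal sketch, assuming prob and zScore series and hypothetical user inputs band, zThresh, and useBand:
signalBand = prob > 0.5 + band ? 1 : prob < 0.5 - band ? -1 : 0
signalZ    = zScore > zThresh ? 1 : zScore < -zThresh ? -1 : 0
signal     = useBand ? signalBand : signalZ   // 1 = long, -1 = short, 0 = neutral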
Repaint protection.
All values used for decisions can be locked to confirmed (closed) bars. Intrabar updates are available as a preview, but confirmed values drive evaluation and stats.
Calibration.
An optional rolling linear calibration maps past confirmed probabilities to realized outcomes over the evaluation window. The mapping is clipped to the unit interval and can be injected back into the decision logic if desired. This improves reliability (probabilities that “mean what they say”) without necessarily improving raw separability.
Evaluation metrics.
The table reports: hit rate on signaled bars; a Wilson confidence interval for that hit rate at a chosen confidence level; Brier score as a measure of probability accuracy; counts of long/short trades; average realized return by side; profit factor; net return; and exposure (signal density). All are computed on rolling windows consistent with the learning scheme.
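The Wilson interval quoted in the table can be computed as follows (a minimal sketch; z is about 1.96 at the 95% level):
wilson(float phat, float n, float z) =>
    denom  = 1.0 + z * z / n
    center = (phat + z * z / (2.0 * n)) / denom
    half   = z * math.sqrt(phat * (1.0 - phat) / n + z * z / (4.0 * n * n)) / denom
    [center - half, center + half]   // lower and upper bounds of the hit-rate CI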
Visualization.
On the chart, an arrowed projection shows the predicted direction from the current bar to the chosen horizon, with magnitude scaled by ATR (optionally scaled by the square‑root of the horizon). Labels display either the decision probability or the standardized score. Neutral states can display a configurable icon for immediate recognition.
Computational properties.
The design relies on rolling means, standard deviations, correlations, and EMAs. Per‑bar cost is constant with respect to history length, and memory is constant per tracked series. Graphical objects are updated in place to obey platform limits.
Assumptions and limitations.
The method is correlation‑based and will adapt after regime changes, not before them. Calibration improves probability reliability but not necessarily ranking power. Intrabar previews are non‑binding and should not be evaluated as historical performance.
Part 2 — Trader‑facing description
What it does.
This tool tells you how likely price is to be higher after your chosen number of bars and converts that into Long / Short / Neutral calls. It learns, in real time, which components—momentum, trend, volatility, breadth, and volume—matter now, adjusts their weights, and shows you a probability line plus a forward arrow scaled by volatility.
How to set it up.
1) Choose your horizon. Intraday scalps: 5–10 bars. Swings: 10–30 bars. The default of 14 bars is a balanced starting point.
2) Pick a feature mode.
- components: granular and fast to adapt when leadership rotates between signals.
- super: cleaner single driver; less noise, slightly slower to react.
3) Decide how signals are triggered.
- Neutral band (probability based): intuitive and easy to tune. Widen the band for fewer, higher‑quality trades; tighten to catch more moves.
- Z‑score thresholds: consistent numeric cutoffs that ignore base‑rate drift.
4) Keep reliability helpers on. Leave prior shift and calibration enabled to stabilize probabilities across bullish/bearish regimes.
5) Smoothing. A short EMA on the probability or score reduces whipsaws while preserving turns.
6) Overlay. The arrow shows the call and a volatility‑scaled reach for the next horizon. Treat it as guidance, not a promise.
Reading the stats table.
- Hit Rate with a confidence interval: your recent accuracy with an uncertainty range; trust the range, not only the point.
- Brier Score: lower is better; it checks whether a stated “70%” really behaves like 70% over time.
- Profit Factor, Net Return, Exposure: quick triage of tradability and signal density.
- Average Return by Side: sanity‑check that the long and short calls each pull their weight.
Typical adjustments.
- Too many trades? Increase the neutral band or raise the z‑threshold.
- Missing the move? Tighten the band, or switch to components mode to react faster.
- Choppy timeframe? Lengthen the z‑score and correlation windows; keep calibration on.
- Volatility regime change? Revisit the ATR multiplier and enable square‑root scaling of horizon.
Execution and risk.
- Size positions by volatility (ATR‑based sizing works well).
- Enter on confirmed values; use intrabar previews only as early signals.
- Combine with your market structure (levels, liquidity zones). This model is statistical, not clairvoyant.
What it is not.
Not a black‑box machine‑learning model. It is transparent, correlation‑weighted technical analysis with strong attention to probability reliability and repaint safety.
Suggested defaults (robust starting point).
- Horizon 14; components mode; weight EMA 10; correlation window 500; z‑length 200.
- Neutral band around seven percentage points, or z‑threshold around one‑third of a standard deviation.
- Prior shift ON, Calibration ON, Use calibrated for decisions OFF to start.
- ATR multiplier 1.0; square‑root horizon scaling ON; EMA smoothing 3.
- Confidence setting equivalent to about 95%.
Disclaimer
No indicator guarantees profits. HorizonSigma Pro is a decision aid; always combine with solid risk management and your own judgment. Backtest, forward test, and size responsibly.
The content provided, including all code and materials, is strictly for educational and informational purposes only. It is not intended as, and should not be interpreted as, financial advice, a recommendation to buy or sell any financial instrument, or an offer of any financial product or service. All strategies, tools, and examples discussed are provided for illustrative purposes to demonstrate coding techniques and the functionality of Pine Script within a trading context.
Any results from strategies or tools provided are hypothetical, and past performance is not indicative of future results. Trading and investing involve high risk, including the potential loss of principal, and may not be suitable for all individuals. Before making any trading decisions, please consult with a qualified financial professional to understand the risks involved.
By using this script, you acknowledge and agree that any trading decisions are made solely at your discretion and risk.
Enhance your trading precision and confidence 🚀
Best regards
Chervolino
Relational Quadratic Kernel Channel [Vin]
The Relational Quadratic Kernel Channel (RQK-Channel-V) is designed to highlight potential price extremes and continuation points in the price trend.
Usage:
Lookback Window: Adjust the "Lookback Window" parameter to control the number of previous bars considered when calculating the Rational Quadratic Estimate. Longer windows capture longer-term trends, while shorter windows respond more quickly to price changes.
Relative Weight: The "Relative Weight" parameter allows you to control the importance of each data point in the calculation. Higher values emphasize recent data, while lower values give more weight to historical data.
Source: Choose the data source (e.g., close price) that you want to use for the kernel estimate.
ATR Length: Set the length of the Average True Range (ATR) used for channel width calculation. A longer ATR length results in wider channels, while a shorter length leads to narrower channels.
Channel Multipliers: Adjust the "Channel Multiplier" parameters to control the width of the channels. Higher multipliers result in wider channels, while lower multipliers produce narrower channels. The indicator provides three sets of channels, each with its own multiplier for flexibility.
Details:
Rational Quadratic Kernel Function:
The Rational Quadratic Kernel Function is a type of smoothing function used to estimate a continuous curve or line from discrete data points. It is often used in time series analysis to reduce noise and emphasize trends or patterns in the data.
The formula for the Rational Quadratic Kernel Function is generally defined as:
K(x) = (1 + (x^2) / (2 * α * β))^(-α)
Where:
x represents the distance or difference between data points.
α and β are parameters that control the shape of the kernel. These parameters can be adjusted to control the smoothness or flexibility of the kernel function.
In the context of this indicator, the Rational Quadratic Kernel Function is applied to a specified source (e.g., close prices) over a defined lookback window. It calculates a smoothed estimate of the source data, which is then used to determine the central value of the channels. The kernel function allows the indicator to adapt to different market conditions and reduce noise in the data.
The specific parameters (length and relativeWeight) in this indicator allow you to fine-tune how the Rational Quadratic Kernel Function is applied, providing flexibility in capturing both short-term and long-term trends in the data.
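A minimal sketch of how such a kernel-weighted estimate can be computed over the lookback window (an assumed parameterization that ties β to the square of the lookback; not the published code):
rqEstimate(float src, int length, float relativeWeight) =>
    num = 0.0
    den = 0.0
    for i = 0 to length - 1
        w = math.pow(1.0 + i * i / (2.0 * relativeWeight * length * length), -relativeWeight)
        num += src[i] * w   // closer bars receive larger weights
        den += w
    num / den
// e.g. smoothed = rqEstimate(close, 20, 2.0)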
To learn more about unsupervised ML implementations, I highly recommend following the users @jdehorty and @LuxAlgo.
Optimizing the parameters:
Lookback Window (length): The lookback window determines how many previous bars are considered when calculating the kernel estimate.
For shorter-term trading strategies, you may want to use a shorter lookback window (e.g., 5-10).
For longer-term trading or investing, consider a longer lookback window (e.g., 20-50).
Relative Weight (relativeWeight): This parameter controls the importance of each data point in the calculation.
A higher relative weight (e.g., 2 or 3) emphasizes recent data, which can be suitable for trend-following strategies.
A lower relative weight (e.g., 1) gives more equal importance to historical and recent data, which may be useful for strategies that aim to capture both short-term and long-term trends.
ATR Length (atrLength): The length of the Average True Range (ATR) affects the width of the channels.
Longer ATR lengths result in wider channels, which may be suitable for capturing broader price movements.
Shorter ATR lengths result in narrower channels, which can be helpful for identifying smaller price swings.
Channel Multipliers (channelMultiplier1, channelMultiplier2, channelMultiplier3): These parameters determine the width of the channels relative to the ATR.
Adjust these multipliers based on your risk tolerance and desired channel width.
Higher multipliers result in wider channels, which may lead to fewer signals but potentially larger price movements.
Lower multipliers create narrower channels, which can result in more frequent signals but potentially smaller price movements.
SP500 Session Gap Fade Strategy
Summary in one paragraph
SPX Session Gap Fade is an intraday gap fade strategy for index futures, designed around regular cash sessions on five minute charts. It helps you participate only when there is a full overnight or pre-session gap and a valid intraday session window, instead of trading every open. The original part is the gap distance engine, which anchors both stop and optional target to the previous session reference close at a configurable flat time, so every trade’s risk scales with the actual gap size rather than a fixed tick stop.
Scope and intent
• Markets. Primarily index futures such as ES, NQ, YM, and liquid index CFDs that exhibit overnight gaps and regular cash hours.
• Timeframes. Intraday timeframes from one minute to fifteen minutes. Default usage is five minute bars.
• Default demo used in the publication. Symbol CME:ES1! on a five minute chart.
• Purpose. Provide a simple, transparent way to trade opening gaps with a session anchored risk model and forced flat exit so you are not holding into the last part of the session.
• Limits. This is a strategy. Orders are simulated on standard candles only.
Originality and usefulness
• Unique concept or fusion. The core novelty is the combination of a strict “full gap” entry condition with a session anchored reference close and a gap distance based TP and SL engine. The stop and optional target are symmetric multiples of the actual gap distance from the previous session’s flat close, rather than fixed ticks.
• Failure mode it addresses. Fixed sized stops do not scale when gaps are unusually small or unusually large, which can either under risk or over risk the account. The session flat logic also reduces the chance of holding residual positions into late session liquidity and news.
• Testability. All key pieces are explicit in the Inputs: session window, minutes before session end, whether to use gap exits, whether TP or SL are active, and whether to allow candle based closes and forced flat. You can toggle each component and see how it changes entries and exits.
• Portable yardstick. The main unit is the absolute price gap between the entry bar open and the previous session reference close. tp_mult and sl_mult are multiples of that gap, which makes the risk model portable across contracts and volatility regimes.
Method overview in plain language
The strategy first defines a trading session using exchange time, for example 08:30 to 15:30 for ES day hours. It also defines a “flat” time a fixed number of minutes before session end. At the flat bar, any open position is closed and the bar’s close price is stored as the reference close for the next session. Inside the session, the strategy looks for a full gap bar relative to the prior bar: a gap down where today’s high is below yesterday’s low, or a gap up where today’s low is above yesterday’s high. A full gap down generates a long entry; a full gap up generates a short entry. If the gap risk engine is enabled and a valid reference close exists, the strategy measures the distance between the entry bar open and that reference close. It then sets a stop and optional target as configurable multiples of that gap distance and manages them with strategy.exit. Additional exits can be triggered by a candle color flip or by the forced flat time.
Base measures
• Range basis. The main unit is the absolute difference between the current entry bar open and the stored reference close from the previous session flat bar. That value is used as a “gap unit” and scaled by tp_mult and sl_mult to build the target and stop.
Components
• Component one: Gap Direction. Detects full gap up or full gap down by comparing the current high and low to the previous bar’s high and low. Gap down signals a long fade, gap up signals a short fade. There is no smoothing; it is a strict structural condition.
• Component two: Session Window. Only allows entries when the current time is within the configured session window. It also defines a flat time before the session end where positions are forced flat and the reference close is updated.
• Component three: Gap Distance Risk Engine. Computes the absolute distance between the entry open and the stored reference close. The stop and optional target are placed as entry ± gap_distance × multiplier so that risk scales with gap size.
• Optional component: Candle Exit. If enabled, a bullish bar closes short positions and a bearish bar closes long positions, which can shorten holding time when price reverses quickly inside the session.
• Session windows. Session logic uses the exchange time of the chart symbol. When changing symbols or venues, verify that the session time string still matches the new instrument’s cash hours.
Fusion rule
All gates are hard conditions rather than weighted scores. A trade can only open if the session window is active and the full gap condition is true. The gap distance engine only activates if a valid reference close exists and use_gap_risk is on. TP and SL are controlled by separate booleans so you can use SL only, TP only, or both. Long and short are symmetric by construction: long trades fade full gap downs, short trades fade full gap ups with mirrored TP and SL logic.
Signal rule
• Long entry. Inside the active session, when the current bar shows a full gap down relative to the previous bar (current high below prior low), the strategy opens a long position. If the gap risk engine is active, it places a gap based stop below the entry and an optional target above it.
• Short entry. Inside the active session, when the current bar shows a full gap up relative to the previous bar (current low above prior high), the strategy opens a short position. If the gap risk engine is active, it places a gap based stop above the entry and an optional target below it.
• Forced flat. At the configured flat time before session end, any open position is closed and the close price of that bar becomes the new reference close for the following session.
• Candle based exit. If enabled, a bearish bar closes longs, and a bullish bar closes shorts, regardless of where TP or SL sit, as long as a position is open.
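A condensed Pine v5 sketch of these rules (simplified: the flat logic here triggers at the session boundary rather than a configurable number of minutes before it, and variable names mirror the inputs described below):
//@version=5
strategy('Gap fade sketch', overlay = true)
sess    = input.session('0830-1530', 'Trading session')
tp_mult = input.float(1.0, 'TP multiplier of gap distance')
sl_mult = input.float(1.0, 'SL multiplier of gap distance')
inSession = not na(time(timeframe.period, sess))
var float refClose = na
if inSession[1] and not inSession   // forced flat at the boundary; store the reference close
    refClose := close[1]
    strategy.close_all()
gapDist = math.abs(open - refClose)
if inSession and high < low[1] and not na(refClose)   // full gap down: fade long
    strategy.entry('long', strategy.long)
    strategy.exit('long_gap', 'long', stop = open - gapDist * sl_mult, limit = open + gapDist * tp_mult)
if inSession and low > high[1] and not na(refClose)   // full gap up: fade short
    strategy.entry('short', strategy.short)
    strategy.exit('short_gap', 'short', stop = open + gapDist * sl_mult, limit = open - gapDist * tp_mult)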
What you will see on the chart
• Markers on entry bars. Standard strategy entry markers labeled “long” and “short” on the gap bars where trades open.
• Exit markers. Standard exit markers on bars where either the gap stop or target are hit, or where a candle exit or forced flat close occurs. Exit IDs “long_gap” and “short_gap” label gap based exits.
• Reference levels. Horizontal lines for the current long TP, long SL, short TP, and short SL while a position is open and the gap engine is enabled. They update when a new trade opens and disappear when flat.
• Session background. This version does not add background shading for the session; session logic runs internally based on time.
• No on chart table. All decisions are visible through orders and exit levels. Use the Strategy Tester for performance metrics.
Inputs with guidance
Session Settings
• Trading session (sess). Session window in exchange time. Typical value uses the regular cash session for each contract, for example “0830-1530” for ES. Adjust if your broker or symbol uses different hours.
• Minutes before session end to force exit (flat_before_min). Minutes before the session end where positions are forced flat and the reference close is stored. Typical range is 15 to 120. Raising it closes trades earlier in the day; lowering it allows trades later in the session.
Gap Risk
• Enable gap based TP/SL (use_gap_risk). Master switch for the gap distance exit engine. Turning it off keeps entries and forced flat logic but removes automatic TP and SL placement.
• Use TP limit from gap (use_gap_tp). Enables gap based profit targets. Typical values are true for structured exits or false if you want to manage exits manually and only keep a stop.
• Use SL stop from gap (use_gap_sl). Enables gap based stop losses. This should normally remain true so that each trade has a defined initial risk in ticks.
• TP multiplier of gap distance (tp_mult). Multiplier applied to the gap distance for the target. Typical range is 0.5 to 2.0. Raising it places the target further away and reduces hit frequency.
• SL multiplier of gap distance (sl_mult). Multiplier applied to the gap distance for the stop. Typical range is 0.5 to 2.0. Raising it widens the stop and increases risk per trade; lowering it tightens the stop and may increase the number of small losses.
Exit Controls
• Exit with candle logic (use_candle_exit). If true, closes shorts on bullish candles and longs on bearish candles. Useful when you want to react to intraday reversal bars even if TP or SL have not been reached.
• Force flat before session end (use_forced_flat). If true, guarantees you are flat by the configured flat time and updates the reference close. Turn this off only if you understand the impact on overnight risk.
Filters
There is no separate trend or volatility filter in this version. All trades depend on the presence of a full gap bar inside the session. If you need extra filtering such as ATR, volume, or higher timeframe bias, they should be added explicitly and documented in your own fork.
Usage recipes
Intraday conservative gap fade
• Timeframe. Five minute chart on ES regular session.
• Gap risk. use_gap_risk = true, use_gap_tp = true, use_gap_sl = true.
• Multipliers. tp_mult around 0.7 to 1.0 and sl_mult around 1.0.
• Exits. use_candle_exit = false, use_forced_flat = true. Focus on the structured TP and SL around the gap.
Intraday aggressive gap fade
• Timeframe. Five minute chart.
• Gap risk. use_gap_risk = true, use_gap_tp = false, use_gap_sl = true.
• Multipliers. sl_mult around 0.7 to 1.0.
• Exits. use_candle_exit = true, use_forced_flat = true. Entries fade full gaps, stops are tight, and candle color flips flatten trades early.
Higher timeframe gap tests
• Timeframe. Fifteen minute or sixty minute charts on instruments with regular gaps.
• Gap risk. Keep use_gap_risk = true. Consider slightly higher sl_mult if gaps are structurally wider on the higher timeframe.
• Note. Expect fewer trades and be careful with sample size; multi year data is recommended.
Properties visible in this publication
• On average, risk per position over the last 200 trades is 0.4%, with a max intraday loss of 1.5% of total equity, in this case on a $100k account trading 1 ES contract. For other assets, recalculation and customization have to be applied.
• Initial capital. 100 000.
• Base currency. USD.
• Default order size method. Fixed with size 1 contract.
• Pyramiding. 0.
• Commission. Flat 2 USD per order in the Strategy Tester Properties ($2 buying + $2 selling).
• Slippage. One tick in the Strategy Tester Properties.
• Process orders on close. ON.
Realism and responsible publication
• No performance claims are made. Past results do not guarantee future outcomes.
• Costs use a realistic flat commission and one tick of slippage per trade for ES class futures.
• Default sizing with one contract on a 100 000 reference account targets modest per trade risk. In practice, extreme slippage or gap through events can exceed this, so treat the one and a half percent risk target as a design goal, not a guarantee.
• All orders are simulated on standard candles. Shapes can move while a bar is forming and settle on bar close.
Honest limitations and failure modes
• Economic releases, thin liquidity, and limit conditions can break the assumptions behind the simple gap model and lead to slippage or skipped fills.
• Symbols with very frequent or very large gaps may require adjusted multipliers or alternative risk handling, especially in high volatility regimes.
• Very quiet periods without clean gaps will produce few or no trades. This is expected behavior, not a bug.
• Session windows follow the exchange time of the chart. Always confirm that the configured session matches the symbol.
• When both the stop and target lie inside the same bar’s range, the TradingView engine decides which is hit first based on its internal intrabar assumptions. Without bar magnifier, tie handling is approximate.
Legal
Education and research only. This strategy is not investment advice. You remain responsible for all trading decisions. Always test on historical data and in simulation with realistic costs before considering any live use.
Multi-Mode Seasonality Map [BackQuant]
Multi-Mode Seasonality Map
A fast, visual way to expose repeatable calendar patterns in returns, volatility, volume, and range across multiple granularities (Day of Week, Day of Month, Hour of Day, Week of Month). Built for idea generation, regime context, and execution timing.
What is “seasonality” in markets?
Seasonality refers to statistically repeatable patterns tied to the calendar or clock, rather than to price levels. Examples include specific weekdays tending to be stronger, certain hours showing higher realized volatility, or month-end flow boosting volumes. This tool measures those effects directly on your charted symbol.
Why seasonality matters
It’s orthogonal alpha: timing edges independent of price structure that can complement trend, mean reversion, or flow-based setups.
It frames expectations: when a session typically runs hot or cold, you size and pace risk accordingly.
It improves execution: entering during historically favorable windows, avoiding historically noisy windows.
It clarifies context: separating normal “calendar noise” from true anomaly helps avoid overreacting to routine moves.
How traders use seasonality in practice
Timing entries/exits : If Tuesday morning is historically weak for this asset, a mean-reversion buyer may wait for that drift to complete before entering.
Sizing & stops : If 13:00–15:00 shows elevated volatility, widen stops or reduce size to maintain constant risk.
Session playbooks : Build repeatable routines around the hours/days that consistently drive PnL.
Portfolio rotation : Compare seasonal edges across assets to schedule focus and deploy attention where the calendar favors you.
Why Day-of-Week (DOW) can be especially helpful
Flows cluster by weekday (ETF creations/redemptions, options hedging cadence, futures roll patterns, macro data releases), so DOW often encodes a stable micro-structure signal.
Desk behavior and liquidity provision differ by weekday, impacting realized range and slippage.
DOW is simple to operationalize: easy rules like “fade Monday afternoon chop” or “press Thursday trend extension” can be tested and enforced.
What this indicator does
Multi-mode heatmaps : Switch between Day of Week, Day of Month, Hour of Day, Week of Month .
Metric selection : Analyze Returns , Volatility ((high-low)/open), Volume (vs 20-bar average), or Range (vs 20-bar average).
Confidence intervals : Per cell, compute mean, standard deviation, and a z-based CI at your chosen confidence level.
Sample guards : Enforce a minimum sample size so thin data doesn’t mislead.
Readable map : Color palettes, value labels, sample size, and an optional legend for fast interpretation.
Scoreboard : Optional table highlights best/worst DOW and today’s seasonality with CI and a simple “edge” tag.
How it’s calculated (under the hood)
Per bar, compute the chosen metric (return, vol, volume %, or range %) over your lookback window.
Bucket that metric into the active calendar bin (e.g., Tuesday, the 15th, 10:00 hour, or Week-2 of month).
For each bin, accumulate sum , sum of squares , and count , then at render compute mean , std dev , and confidence interval .
Color scale normalizes to the observed min/max of eligible bins (those meeting the minimum sample size).
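A minimal sketch of the per-bin accumulation for the Day-of-Week case (hypothetical names; the script generalizes this across all four views):
//@version=5
indicator('DOW bin accumulation sketch')
var sums   = array.new_float(7, 0.0)
var sumsqs = array.new_float(7, 0.0)
var counts = array.new_int(7, 0)
ret = nz((close - close[1]) / close[1] * 100)   // bar return in percent
d = dayofweek - 1                               // 0..6 bin index, Sunday = 0
array.set(sums, d, array.get(sums, d) + ret)
array.set(sumsqs, d, array.get(sumsqs, d) + ret * ret)
array.set(counts, d, array.get(counts, d) + 1)
n    = array.get(counts, d)
mean = array.get(sums, d) / n
sd   = math.sqrt(math.max(array.get(sumsqs, d) / n - mean * mean, 0))
ci   = 1.96 * sd / math.sqrt(n)                 // z-based 95% CI half-width
plot(mean, 'Mean of current bin')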
How to read the heatmap
Color : Greener/warmer typically implies higher mean value for the chosen metric; cooler implies lower.
Value label : The center number is the bin’s mean (e.g., average % return for Tuesdays).
Confidence bracket : An optional bracket shows the CI for the mean, helping you gauge stability.
n = sample size : More samples = more reliability. Treat small-n bins with skepticism.
Suggested workflows
Pick the lens : Start with Analysis Type = Returns , Heatmap View = Day of Week , lookback ≈ 252 trading days . Note the best/worst weekdays and their CI width.
Sanity-check volatility : Switch to Volatility to see which bins carry the most realized range. Use that to plan stop width and trade pacing.
Check liquidity proxy : Flip to Volume , identify thin vs thick windows. Execute risk in thicker windows to reduce slippage.
Drill to intraday : Use Hour of Day to reveal opening bursts, lunchtime lulls, and closing ramps. Combine with your main strategy to schedule entries.
Calendar nuance : Inspect Week of Month and Day of Month for end-of-month, options-cycle, or data-release effects.
Codify rules : Translate stable edges into rules like “no fresh risk during bottom-quartile hours” or “scale entries during top-quartile hours.”
Parameter guidance
Analysis Period (Days) : 252 for a one-year view. Shorten (100–150) to emphasize the current regime; lengthen (500+) for long-memory effects.
Heatmap View : Start with DOW for robustness, then refine with Hour-of-Day for your execution window.
Confidence Level : 95% is standard; use 90% if you want wider coverage with fewer false “insufficient data” bins.
Min Sample Size : 10–20 helps filter noise. For Hour-of-Day on higher timeframes, consider lowering if your dataset is small.
Color Scheme : Choose a palette with good mid-tone contrast (e.g., Red-Green or Viridis) for quick thresholding.
Interpreting common patterns
Return-positive but low-vol bins : Favorable drift windows for passive adds or tight-stop trend continuation.
Return-flat but high-vol bins : Opportunity for mean reversion or breakout scalping, but manage risk accordingly.
High-volume bins : Better expected execution quality; schedule size here if slippage matters.
Wide CI : Edge is unstable or sample is thin; treat as exploratory until more data accumulates.
Best practices
Revalidate after regime shifts (new macro cycle, liquidity regime change, major exchange microstructure updates).
Use multiple lenses: DOW to find the day, then Hour-of-Day to refine the entry window.
Combine with your core setup signals; treat seasonality as a filter or weight, not a standalone trigger.
Test across assets/timeframes—edges are instrument-specific and may not transfer 1:1.
Limitations & notes
History-dependent: short histories or sparse intraday data reduce reliability.
Not causal: a hot Tuesday doesn’t guarantee future Tuesday strength; treat as probabilistic bias.
Aggregation bias: changing session hours or symbol migrations can distort older samples.
CI is z-approximate: good for fast triage, not a substitute for full hypothesis testing.
Quick setup
Use Returns + Day of Week + 252d to get a clean yearly map of weekday edge.
Flip to Hour of Day on intraday charts to schedule precise entries/exits.
Keep Show Values and Confidence Intervals on while you calibrate; hide later for a clean visual.
The Multi-Mode Seasonality Map helps you convert the calendar from an afterthought into a quantitative edge, surfacing when an asset tends to move, expand, or stay quiet—so you can plan, size, and execute with intent.
Dynamic Equity Allocation Model
"Cash is Trash"? Not Always. Here's Why Science Beats Guesswork.
Every retail trader knows the frustration: you draw support and resistance lines, you spot patterns, you follow market gurus on social media—and still, when the next bear market hits, your portfolio bleeds red. Meanwhile, institutional investors seem to navigate market turbulence with ease, preserving capital when markets crash and participating when they rally. What's their secret?
The answer isn't insider information or access to exotic derivatives. It's systematic, scientifically validated decision-making. While most retail traders rely on subjective chart analysis and emotional reactions, professional portfolio managers use quantitative models that remove emotion from the equation and process multiple streams of market information simultaneously.
This document presents exactly such a system—not a proprietary black box available only to hedge funds, but a fully transparent, academically grounded framework that any serious investor can understand and apply. The Dynamic Equity Allocation Model (DEAM) synthesizes decades of financial research from Nobel laureates and leading academics into a practical tool for tactical asset allocation.
Stop drawing colorful lines on your chart and start thinking like a quant. This isn't about predicting where the market goes next week—it's about systematically adjusting your risk exposure based on what the data actually tells you. When valuations scream danger, when volatility spikes, when credit markets freeze, when multiple warning signals align—that's when cash isn't trash. That's when cash saves your portfolio.
The irony of "cash is trash" rhetoric is that it ignores timing. Yes, being 100% cash for decades would be disastrous. But being 100% equities through every crisis is equally foolish. The sophisticated approach is dynamic: aggressive when conditions favor risk-taking, defensive when they don't. This model shows you how to make that decision systematically, not emotionally.
Whether you're managing your own retirement portfolio or seeking to understand how institutional allocation strategies work, this comprehensive analysis provides the theoretical foundation, mathematical implementation, and practical guidance to elevate your investment approach from amateur to professional.
The choice is yours: keep hoping your chart patterns work out, or start using the same quantitative methods that professionals rely on. The tools are here. The research is cited. The methodology is explained. All you need to do is read, understand, and apply.
The Dynamic Equity Allocation Model (DEAM) is a quantitative framework for systematic allocation between equities and cash, grounded in modern portfolio theory and empirical market research. The model integrates five scientifically validated dimensions of market analysis—market regime, risk metrics, valuation, sentiment, and macroeconomic conditions—to generate dynamic allocation recommendations ranging from 0% to 100% equity exposure. This work documents the theoretical foundations, mathematical implementation, and practical application of this multi-factor approach.
1. Introduction and Theoretical Background
1.1 The Limitations of Static Portfolio Allocation
Traditional portfolio theory, as formulated by Markowitz (1952) in his seminal work "Portfolio Selection," assumes an optimal static allocation where investors distribute their wealth across asset classes according to their risk aversion. This approach rests on the assumption that returns and risks remain constant over time. However, empirical research demonstrates that this assumption does not hold in reality. Fama and French (1989) showed that expected returns vary over time and correlate with macroeconomic variables such as the spread between long-term and short-term interest rates. Campbell and Shiller (1988) demonstrated that the price-earnings ratio possesses predictive power for future stock returns, providing a foundation for dynamic allocation strategies.
The academic literature on tactical asset allocation has evolved considerably over recent decades. Ilmanen (2011) argues in "Expected Returns" that investors can improve their risk-adjusted returns by considering valuation levels, business cycles, and market sentiment. The Dynamic Equity Allocation Model presented here builds on this research tradition and operationalizes these insights into a practically applicable allocation framework.
1.2 Multi-Factor Approaches in Asset Allocation
Modern financial research has shown that different factors capture distinct aspects of market dynamics and together provide a more robust picture of market conditions than individual indicators. Ross (1976) developed the Arbitrage Pricing Theory, a model that employs multiple factors to explain security returns. Following this multi-factor philosophy, DEAM integrates five complementary analytical dimensions, each tapping different information sources and collectively enabling comprehensive market understanding.
2. Data Foundation and Data Quality
2.1 Data Sources Used
The model draws its data exclusively from publicly available market data via the TradingView platform. This transparency and accessibility is a significant advantage over proprietary models that rely on non-public data. The data foundation encompasses several categories of market information, each capturing specific aspects of market dynamics.
First, price data for the S&P 500 Index is obtained through the SPDR S&P 500 ETF (ticker: SPY). The use of a highly liquid ETF instead of the index itself has practical reasons, as ETF data is available in real-time and reflects actual tradability. In addition to closing prices, high, low, and volume data are captured, which are required for calculating advanced volatility measures.
Fundamental corporate metrics are retrieved via TradingView's Financial Data API. These include earnings per share, price-to-earnings ratio, return on equity, debt-to-equity ratio, dividend yield, and share buyback yield. Cochrane (2011) emphasizes in "Presidential Address: Discount Rates" the central importance of valuation metrics for forecasting future returns, making these fundamental data a cornerstone of the model.
Volatility indicators are represented by the CBOE Volatility Index (VIX) and related metrics. The VIX, often referred to as the market's "fear gauge," measures the implied volatility of S&P 500 index options and serves as a proxy for market participants' risk perception. Whaley (2000) describes in "The Investor Fear Gauge" the construction and interpretation of the VIX and its use as a sentiment indicator.
Macroeconomic data includes yield curve information through US Treasury bonds of various maturities and credit risk premiums through the spread between high-yield bonds and risk-free government bonds. These variables capture the macroeconomic conditions and financing conditions relevant for equity valuation. Estrella and Hardouvelis (1991) showed that the shape of the yield curve has predictive power for future economic activity, justifying the inclusion of these data.
2.2 Handling Missing Data
A practical problem when working with financial data is dealing with missing or unavailable values. The model implements a fallback system where a plausible historical average value is stored for each fundamental metric. When current data is unavailable for a specific point in time, this fallback value is used. This approach ensures that the model remains functional even during temporary data outages and avoids systematic biases from missing data. The use of average values as fallback is conservative, as it generates neither overly optimistic nor pessimistic signals.
3. Component 1: Market Regime Detection
3.1 The Concept of Market Regimes
The idea that financial markets exist in different "regimes" or states that differ in their statistical properties has a long tradition in financial science. Hamilton (1989) developed regime-switching models that allow distinguishing between different market states with different return and volatility characteristics. The practical application of this theory consists of identifying the current market state and adjusting portfolio allocation accordingly.
DEAM classifies market regimes using a scoring system that considers three main dimensions: trend strength, volatility level, and drawdown depth. This multidimensional view is more robust than focusing on individual indicators, as it captures various facets of market dynamics. Classification occurs into six distinct regimes: Strong Bull, Bull Market, Neutral, Correction, Bear Market, and Crisis.
3.2 Trend Analysis Through Moving Averages
Moving averages are among the oldest and most widely used technical indicators and have also received attention in academic literature. Brock, Lakonishok, and LeBaron (1992) examined in "Simple Technical Trading Rules and the Stochastic Properties of Stock Returns" the profitability of trading rules based on moving averages and found evidence for their predictive power, although later studies questioned the robustness of these results when considering transaction costs.
The model calculates three moving averages with different time windows: a 20-day average (approximately one trading month), a 50-day average (approximately one quarter), and a 200-day average (approximately one trading year). The relationship of the current price to these averages and the relationship of the averages to each other provide information about trend strength and direction. When the price trades above all three averages and the short-term average is above the long-term, this indicates an established uptrend. The model assigns points based on these constellations, with longer-term trends weighted more heavily as they are considered more persistent.
3.3 Volatility Regimes
Volatility, understood as the standard deviation of returns, is a central concept of financial theory and serves as the primary risk measure. However, research has shown that volatility is not constant but changes over time and occurs in clusters—a phenomenon first documented by Mandelbrot (1963) and later formalized through ARCH and GARCH models (Engle, 1982; Bollerslev, 1986).
DEAM calculates volatility not only through the classic method of return standard deviation but also uses more advanced estimators such as the Parkinson estimator and the Garman-Klass estimator. These methods utilize intraday information (high and low prices) and are more efficient than simple close-to-close volatility estimators. The Parkinson estimator (Parkinson, 1980) uses the range between high and low of a trading day and is based on the recognition that this information reveals more about true volatility than just the closing price difference. The Garman-Klass estimator (Garman and Klass, 1980) extends this approach by additionally considering opening and closing prices.
The calculated volatility is annualized by multiplying it by the square root of 252 (the average number of trading days per year), enabling standardized comparability. The model compares current volatility with the VIX, the implied volatility from option prices. A low VIX (below 15) signals market comfort and increases the regime score, while a high VIX (above 35) indicates market stress and reduces the score. This interpretation follows the empirical observation that elevated volatility is typically associated with falling markets (Schwert, 1989).
3.4 Drawdown Analysis
A drawdown refers to the percentage decline from the highest point (peak) to the lowest point (trough) during a specific period. This metric is psychologically significant for investors as it represents the maximum loss experienced. Young (1991) introduced the Calmar Ratio, which relates return to maximum drawdown, underscoring the practical relevance of this metric.
The model calculates current drawdown as the percentage distance from the highest price of the last 252 trading days (one year). A drawdown below 3% is considered negligible and maximally increases the regime score. As drawdown increases, the score decreases progressively, with drawdowns above 20% classified as severe and indicating a crisis or bear market regime. These thresholds are empirically motivated by historical market cycles, in which corrections typically encompassed 5-10% drawdowns, bear markets 20-30%, and crises over 30%.
3.5 Regime Classification
Final regime classification occurs through aggregation of scores from trend (40% weight), volatility (30%), and drawdown (30%). The higher weighting of trend reflects the empirical observation that trend-following strategies have historically delivered robust results (Moskowitz, Ooi, and Pedersen, 2012). A total score above 80 signals a strong bull market with established uptrend, low volatility, and minimal losses. At a score below 10, a crisis situation exists requiring defensive positioning. The six regime categories enable a differentiated allocation strategy that not only distinguishes binarily between bullish and bearish but allows gradual gradations.
4. Component 2: Risk-Based Allocation
4.1 Volatility Targeting as Risk Management Approach
The concept of volatility targeting is based on the idea that investors should maximize not returns but risk-adjusted returns. Sharpe (1966, 1994) defined with the Sharpe Ratio the fundamental concept of return per unit of risk, measured as volatility. Volatility targeting goes a step further and adjusts portfolio allocation to achieve constant target volatility. This means that in times of low market volatility, equity allocation is increased, and in times of high volatility, it is reduced.
Moreira and Muir (2017) showed in "Volatility-Managed Portfolios" that strategies that adjust their exposure based on volatility forecasts achieve higher Sharpe Ratios than passive buy-and-hold strategies. DEAM implements this principle by defining a target portfolio volatility (default 12% annualized) and adjusting equity allocation to achieve it. The mathematical foundation is simple: if market volatility is 20% and target volatility is 12%, equity allocation should be 60% (12/20 = 0.6), with the remaining 40% held in cash with zero volatility.
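To make the arithmetic concrete, here is a minimal Python sketch of the volatility-targeting rule (the indicator itself runs as a TradingView script, so the function and parameter names below are illustrative rather than the script's actual identifiers):

```python
def vol_target_allocation(market_vol: float, target_vol: float = 0.12,
                          cap: float = 1.0) -> float:
    """Equity weight that scales exposure so portfolio vol ~= target vol.

    Cash is assumed to contribute zero volatility, so portfolio volatility
    is proportional to the equity weight.
    """
    if market_vol <= 0:
        return cap
    return min(cap, target_vol / market_vol)

# Example from the text: 20% market vol with a 12% target -> 60% equities.
print(vol_target_allocation(0.20))  # 0.6
```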
4.2 Market Volatility Calculation
Estimating current market volatility is central to the risk-based allocation approach. The model uses several volatility estimators in parallel and selects the higher value between traditional close-to-close volatility and the Parkinson estimator. This conservative choice ensures the model does not underestimate true volatility, which could lead to excessive risk exposure.
Traditional volatility calculation uses logarithmic returns, as these have mathematically advantageous properties (they are additive over multiple periods). The logarithmic return is calculated as ln(P_t / P_{t-1}), where P_t is the price at time t. The standard deviation of these returns over a rolling 20-trading-day window is then multiplied by √252 to obtain annualized volatility. This annualization is based on the assumption of independent and identically distributed returns, which is an idealization but widely accepted in practice.
The Parkinson estimator uses additional information from the trading range (High minus Low) of each day. The formula is: σ_P = (1/√(4ln2)) × √(1/n × Σln²(H_i/L_i)) × √252, where H_i and L_i are high and low prices. Under ideal conditions, this estimator is approximately five times more efficient than the close-to-close estimator (Parkinson, 1980), as it uses more information per observation.
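Both estimators can be sketched in a few lines of Python, assuming NumPy arrays holding one rolling window of data (names are illustrative; the script computes these over 20-day windows):

```python
import numpy as np

def close_to_close_vol(closes: np.ndarray, trading_days: int = 252) -> float:
    """Annualized standard deviation of log returns over the window."""
    log_ret = np.diff(np.log(closes))
    return log_ret.std(ddof=1) * np.sqrt(trading_days)

def parkinson_vol(highs: np.ndarray, lows: np.ndarray,
                  trading_days: int = 252) -> float:
    """Parkinson (1980) range-based estimator, annualized."""
    hl_sq = np.log(highs / lows) ** 2
    return np.sqrt(hl_sq.mean() / (4.0 * np.log(2.0))) * np.sqrt(trading_days)

# Per Section 4.2, the model conservatively uses the higher of the two, e.g.
# market_vol = max(close_to_close_vol(c), parkinson_vol(h, l))
```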
4.3 Drawdown-Based Position Size Adjustment
In addition to volatility targeting, the model implements drawdown-based risk control. The logic is that deep market declines often signal further losses and therefore justify exposure reduction. This behavior corresponds with the concept of path-dependent risk tolerance: investors who have already suffered losses are typically less willing to take additional risk (Kahneman and Tversky, 1979).
The model defines a maximum portfolio drawdown as a target parameter (default 15%). Since portfolio volatility and portfolio drawdown are proportional to equity allocation (assuming cash has neither volatility nor drawdown), allocation-based control is possible. For example, if the market exhibits a 25% drawdown and target portfolio drawdown is 15%, equity allocation should be at most 60% (15/25).
4.4 Dynamic Risk Adjustment
An advanced feature of DEAM is dynamic adjustment of risk-based allocation through a feedback mechanism. The model continuously estimates what actual portfolio volatility and portfolio drawdown would result at the current allocation. If risk utilization (ratio of actual to target risk) exceeds 1.0, allocation is reduced by an adjustment factor that grows exponentially with overutilization. This implements a form of dynamic feedback that avoids overexposure.
Mathematically, a risk adjustment factor r_adjust is calculated: if risk utilization u > 1, then r_adjust = exp(-0.5 × (u - 1)). This exponential function ensures that moderate overutilization is gently corrected, while strong overutilization triggers drastic reductions. The factor 0.5 in the exponent was empirically calibrated to achieve a balanced ratio between sensitivity and stability.
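The drawdown cap from Section 4.3 and this feedback factor fit in a short sketch (illustrative Python; the script's internal names may differ):

```python
import math

def drawdown_cap(market_dd: float, target_dd: float = 0.15) -> float:
    """Maximum equity weight consistent with the drawdown target; cash is
    assumed to contribute no drawdown."""
    return 1.0 if market_dd <= target_dd else target_dd / market_dd

def risk_adjustment(utilization: float) -> float:
    """Exponential de-risking: gentle near u = 1, drastic far above it."""
    return 1.0 if utilization <= 1.0 else math.exp(-0.5 * (utilization - 1.0))

# Example from the text: a 25% market drawdown caps equities at 60%.
print(drawdown_cap(0.25))              # 0.6
# 20% risk overutilization trims the allocation by roughly 10%.
print(round(risk_adjustment(1.2), 3))  # 0.905
```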
5. Component 3: Valuation Analysis
5.1 Theoretical Foundations of Fundamental Valuation
DEAM's valuation component is based on the fundamental premise that the intrinsic value of a security is determined by its future cash flows and that deviations between market price and intrinsic value are eventually corrected. Graham and Dodd (1934) established in "Security Analysis" the basic principles of fundamental analysis that remain relevant today. Translated into modern portfolio context, this means that markets with high valuation metrics (high price-earnings ratios) should have lower expected returns than cheaply valued markets.
Campbell and Shiller (1988) developed the Cyclically Adjusted P/E Ratio (CAPE), which smooths earnings over a full business cycle. Their empirical analysis showed that this ratio has significant predictive power for 10-year returns. Asness, Moskowitz, and Pedersen (2013) demonstrated in "Value and Momentum Everywhere" that value effects exist not only in individual stocks but also in asset classes and markets.
5.2 Equity Risk Premium as Central Valuation Metric
The Equity Risk Premium (ERP) is defined as the expected excess return of stocks over risk-free government bonds. It is the theoretical heart of valuation analysis, as it represents the compensation investors demand for bearing equity risk. Damodaran (2012) discusses in "Equity Risk Premiums: Determinants, Estimation and Implications" various methods for ERP estimation.
DEAM calculates ERP not through a single method but combines four complementary approaches with different weights. This multi-method strategy increases estimation robustness and avoids dependence on single, potentially erroneous inputs.
The first method (35% weight) uses the earnings yield, calculated as the reciprocal of the price-earnings ratio or directly from operating earnings data, and subtracts the 10-year Treasury yield. This method follows Fed Model logic (Yardeni, 2003), although this model has theoretical weaknesses as it does not treat inflation consistently (Asness, 2003).
The second method (30% weight) extends earnings yield by share buyback yield. Share buybacks are a form of capital return to shareholders and increase value per share. Boudoukh et al. (2007) showed in "The Total Shareholder Yield" that the sum of dividend yield and buyback yield is a better predictor of future returns than dividend yield alone.
The third method (20% weight) implements the Gordon Growth Model (Gordon, 1962), which models stock value as the sum of discounted future dividends. Under constant growth g assumption: Expected Return = Dividend Yield + g. The model estimates sustainable growth as g = ROE × (1 - Payout Ratio), where ROE is return on equity and payout ratio is the ratio of dividends to earnings. This formula follows from equity theory: retained earnings are reinvested at the ROE and generate additional earnings growth.
The fourth method (15% weight) combines total shareholder yield (Dividend + Buybacks) with implied growth derived from revenue growth. This method considers that companies with strong revenue growth should generate higher future earnings, even if current valuations do not yet fully reflect this.
The final ERP is the weighted average of these four methods. A high ERP (above 4%) signals attractive valuations and increases the valuation score to 95 out of 100 possible points. A negative ERP, where stocks have lower expected returns than bonds, results in a minimal score of 10.
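A compact sketch of the four-method blend follows. All inputs are decimal fractions; expressing every method in excess of the 10-year Treasury yield, and proxying method four's implied growth directly by revenue growth, are simplifying assumptions made here:

```python
def blended_erp(earnings_yield: float, buyback_yield: float,
                dividend_yield: float, roe: float, payout_ratio: float,
                revenue_growth: float, treasury_10y: float) -> float:
    """Weighted average of the four ERP estimates described above."""
    m1 = earnings_yield - treasury_10y                    # earnings yield vs. bonds
    m2 = (earnings_yield + buyback_yield) - treasury_10y  # plus buyback yield
    g = roe * (1.0 - payout_ratio)                        # sustainable growth
    m3 = (dividend_yield + g) - treasury_10y              # Gordon growth model
    m4 = (dividend_yield + buyback_yield + revenue_growth) - treasury_10y
    return 0.35 * m1 + 0.30 * m2 + 0.20 * m3 + 0.15 * m4
```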
5.3 Quality Adjustments to Valuation
Valuation metrics alone can be misleading if not interpreted in the context of company quality. A company with a low P/E may be cheap or fundamentally problematic. The model therefore implements quality adjustments based on growth, profitability, and capital structure.
Revenue growth above 10% annually adds 10 points to the valuation score, moderate growth above 5% adds 5 points. This adjustment reflects that growth has independent value (Modigliani and Miller, 1961, extended by later growth theory). Net margin above 15% signals pricing power and operational efficiency and increases the score by 5 points, while low margins below 8% indicate competitive pressure and subtract 5 points.
Return on equity (ROE) above 20% characterizes outstanding capital efficiency and increases the score by 5 points. Piotroski (2000) showed in "Value Investing: The Use of Historical Financial Statement Information" that fundamental quality signals such as high ROE can improve the performance of value strategies.
Capital structure is evaluated through the debt-to-equity ratio. A conservative ratio below 1.0 multiplies the valuation score by 1.2, while high leverage above 2.0 applies a multiplier of 0.8. This adjustment reflects that high debt constrains financial flexibility and can become problematic in crisis times (Korteweg, 2010).
6. Component 4: Sentiment Analysis
6.1 The Role of Sentiment in Financial Markets
Investor sentiment, defined as the collective psychological attitude of market participants, influences asset prices independently of fundamental data. Baker and Wurgler (2006, 2007) developed a sentiment index and showed that periods of high sentiment are followed by overvaluations that later correct. This insight justifies integrating a sentiment component into allocation decisions.
Sentiment is difficult to measure directly but can be proxied through market indicators. The VIX is the most widely used sentiment indicator, as it aggregates implied volatility from option prices. High VIX values reflect elevated uncertainty and risk aversion, while low values signal market comfort. Whaley (2009) refers to the VIX as the "Investor Fear Gauge" and documents its role as a contrarian indicator: extremely high values typically occur at market bottoms, while low values occur at tops.
6.2 VIX-Based Sentiment Assessment
DEAM uses statistical normalization of the VIX by calculating the Z-score: z = (VIX_current - VIX_average) / VIX_standard_deviation. The Z-score indicates how many standard deviations the current VIX is from the historical average. This approach is more robust than absolute thresholds, as it adapts to the average volatility level, which can vary over longer periods.
A Z-score below -1.5 (the VIX is 1.5 standard deviations below its average) signals exceptionally low risk perception and adds 40 points to the sentiment score. The model thus treats low fear as constructive for the near term, although the contrarian principle provides a caveat: when no one is afraid, everyone is already invested, and further upside potential is limited (Zweig, 1973). Conversely, a Z-score above 1.5 (extreme fear) subtracts 40 points, reflecting market panic, even though such extremes have historically also coincided with potential buying opportunities.
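The documented gate can be sketched as follows; only the two named extremes are reproduced, and any graduated scoring between them in the actual script is omitted here:

```python
def vix_sentiment_points(vix: float, vix_mean: float, vix_std: float) -> int:
    """Sentiment-score contribution of the VIX Z-score extremes."""
    z = (vix - vix_mean) / vix_std
    if z < -1.5:
        return 40    # exceptionally low risk perception
    if z > 1.5:
        return -40   # extreme fear / market panic
    return 0         # intermediate readings handled elsewhere
```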
6.3 VIX Term Structure as Sentiment Signal
The VIX term structure provides additional sentiment information. Normally, the VIX trades in contango, meaning longer-term VIX futures have higher prices than short-term. This reflects that short-term volatility is currently known, while long-term volatility is more uncertain and carries a risk premium. The model compares the VIX with VIX9D (9-day volatility) and identifies backwardation (VIX > 1.05 × VIX9D) and steep backwardation (VIX > 1.15 × VIX9D).
Backwardation occurs when short-term implied volatility is higher than longer-term, which typically happens during market stress. Investors anticipate immediate turbulence but expect calming. Psychologically, this reflects acute fear. The model subtracts 15 points for backwardation and 30 for steep backwardation, as these constellations signal elevated risk. Simon and Wiggins (2001) analyzed the VIX futures curve and showed that backwardation is associated with market declines.
6.4 Safe-Haven Flows
During crisis times, investors flee from risky assets into safe havens: gold, US dollar, and Japanese yen. This "flight to quality" is a sentiment signal. The model calculates the performance of these assets relative to stocks over the last 20 trading days. When gold or the dollar strongly rise while stocks fall, this indicates elevated risk aversion.
The safe-haven component is calculated as the difference between safe-haven performance and stock performance. Positive values (safe havens outperform) subtract up to 20 points from the sentiment score, negative values (stocks outperform) add up to 10 points. The asymmetric treatment (larger deduction for risk-off than bonus for risk-on) reflects that risk-off movements are typically sharper and more informative than risk-on phases.
Baur and Lucey (2010) examined safe-haven properties of gold and showed that gold indeed exhibits negative correlation with stocks during extreme market movements, confirming its role as crisis protection.
7. Component 5: Macroeconomic Analysis
7.1 The Yield Curve as Economic Indicator
The yield curve, represented as yields of government bonds of various maturities, contains aggregated expectations about future interest rates, inflation, and economic growth. The slope of the yield curve has remarkable predictive power for recessions. Estrella and Mishkin (1998) showed that an inverted yield curve (short-term rates higher than long-term) predicts recessions with high reliability. This is because inverted curves reflect restrictive monetary policy: the central bank raises short-term rates to combat inflation, dampening economic activity.
DEAM calculates two spread measures: the 2-year-minus-10-year spread and the 3-month-minus-10-year spread. A steep, positive curve (spreads above 1.5% and 2% respectively) signals healthy growth expectations and generates the maximum yield curve score of 40 points. A flat curve (spreads near zero) reduces the score to 20 points. An inverted curve (negative spreads) is particularly alarming and results in only 10 points.
The choice of two different spreads increases analysis robustness. The 2-10 spread is most established in academic literature, while the 3M-10Y spread is often considered more sensitive, as the 3-month rate directly reflects current monetary policy (Ang, Piazzesi, and Wei, 2006).
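A sketch of the documented tiers (spreads in percentage points; how the script grades the space between the named levels is not specified above, so the mapping here is an assumption):

```python
def yield_curve_score(spread_2y10y: float, spread_3m10y: float) -> int:
    """Yield-curve component per the thresholds described above."""
    if spread_2y10y > 1.5 and spread_3m10y > 2.0:
        return 40   # steep, positive curve: healthy growth expectations
    if spread_2y10y < 0 or spread_3m10y < 0:
        return 10   # inversion: recession warning
    return 20       # flat curve
```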
7.2 Credit Conditions and Spreads
Credit spreads—the yield difference between risky corporate bonds and safe government bonds—reflect risk perception in the credit market. Gilchrist and Zakrajšek (2012) constructed an "Excess Bond Premium" that measures the component of credit spreads not explained by fundamentals and showed this is a predictor of future economic activity and stock returns.
The model approximates credit spread by comparing the yield of high-yield bond ETFs (HYG) with investment-grade bond ETFs (LQD). A narrow spread below 200 basis points signals healthy credit conditions and risk appetite, contributing 30 points to the macro score. Very wide spreads above 1000 basis points (as during the 2008 financial crisis) signal credit crunch and generate zero points.
Additionally, the model evaluates whether "flight to quality" is occurring, identified through strong performance of Treasury bonds (TLT) with simultaneous weakness in high-yield bonds. This constellation indicates elevated risk aversion and reduces the credit conditions score.
7.3 Financial Stability at Corporate Level
While the yield curve and credit spreads reflect macroeconomic conditions, financial stability evaluates the health of companies themselves. The model uses the aggregated debt-to-equity ratio and return on equity of the S&P 500 as proxies for corporate health.
A low leverage level below 0.5 combined with high ROE above 15% signals robust corporate balance sheets and generates 20 points. This combination is particularly valuable as it represents both defensive strength (low debt means crisis resistance) and offensive strength (high ROE means earnings power). High leverage above 1.5 generates only 5 points, as it implies vulnerability to interest rate increases and recessions.
Korteweg (2010) showed in "The Net Benefits to Leverage" that optimal debt maximizes firm value, but excessive debt increases distress costs. At the aggregated market level, high debt indicates fragilities that can become problematic during stress phases.
8. Component 6: Crisis Detection
8.1 The Need for Systematic Crisis Detection
Financial crises are rare but extremely impactful events that suspend normal statistical relationships. During normal market volatility, diversified portfolios and traditional risk management approaches function, but during systemic crises, seemingly independent assets suddenly correlate strongly, and losses exceed historical expectations (Longin and Solnik, 2001). This justifies a separate crisis detection mechanism that operates independently of regular allocation components.
Reinhart and Rogoff (2009) documented in "This Time Is Different: Eight Centuries of Financial Folly" recurring patterns in financial crises: extreme volatility, massive drawdowns, credit market dysfunction, and asset price collapse. DEAM operationalizes these patterns into quantifiable crisis indicators.
8.2 Multi-Signal Crisis Identification
The model uses a counter-based approach where various stress signals are identified and aggregated. This methodology is more robust than relying on a single indicator, as true crises typically occur simultaneously across multiple dimensions. A single signal may be a false alarm, but the simultaneous presence of multiple signals increases confidence.
The first indicator is a VIX above the crisis threshold (default 40), adding one point. A VIX above 60 (as in 2008 and March 2020) adds two additional points, as such extreme values are historically very rare. This tiered approach captures the intensity of volatility.
The second indicator is market drawdown. A drawdown above 15% adds one point, as corrections of this magnitude can be potential harbingers of larger crises. A drawdown above 25% adds another point, as historical bear markets typically encompass 25-40% drawdowns.
The third indicator is credit market spreads above 500 basis points, adding one point. Such wide spreads occur only during significant credit market disruptions, as in 2008 during the Lehman crisis.
The fourth indicator identifies simultaneous losses in stocks and bonds. Normally, Treasury bonds act as a hedge against equity risk (negative correlation), but when both fall simultaneously, this indicates systemic liquidity problems or inflation/stagflation fears. The model checks whether both SPY and TLT have fallen more than 10% and 5% respectively over 5 trading days, adding two points.
The fifth indicator is a volume spike combined with negative returns. Extreme trading volumes (above twice the 20-day average) with falling prices signal panic selling. This adds one point.
A crisis situation is diagnosed when at least 3 indicators trigger, a severe crisis at 5 or more indicators. These thresholds were calibrated through historical backtesting to identify true crises (2008, 2020) without generating excessive false alarms.
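Put together, the counter reads roughly as follows (illustrative Python using the default thresholds named above; points rather than distinct indicators are accumulated, mirroring the tiered scoring):

```python
def crisis_points(vix: float, drawdown: float, credit_spread_bps: float,
                  spy_5d_ret: float, tlt_5d_ret: float,
                  volume_ratio: float, ret_today: float) -> int:
    """Sum of stress points; >= 3 flags a crisis, >= 5 a severe crisis."""
    n = 0
    if vix > 40: n += 1
    if vix > 60: n += 2                                   # extreme-VIX tier
    if drawdown > 0.15: n += 1
    if drawdown > 0.25: n += 1
    if credit_spread_bps > 500: n += 1
    if spy_5d_ret < -0.10 and tlt_5d_ret < -0.05: n += 2  # stocks and bonds fall
    if volume_ratio > 2.0 and ret_today < 0: n += 1       # panic volume
    return n
```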
8.3 Crisis-Based Allocation Override
When a crisis is detected, the system overrides the normal allocation recommendation and caps equity allocation at maximum 25%. In a severe crisis, the cap is set at 10%. This drastic defensive posture follows the empirical observation that crises typically require time to develop and that early reduction can avoid substantial losses (Faber, 2007).
This override logic implements a "safety first" principle: in situations of existential danger to the portfolio, capital preservation becomes the top priority. Roy (1952) formalized this approach in "Safety First and the Holding of Assets," arguing that investors should primarily minimize ruin probability.
9. Integration and Final Allocation Calculation
9.1 Component Weighting
The final allocation recommendation emerges through weighted aggregation of the five components. The standard weighting is: Market Regime 35%, Risk Management 25%, Valuation 20%, Sentiment 15%, Macro 5%. These weights reflect both theoretical considerations and empirical backtesting results.
The highest weighting of market regime is based on evidence that trend-following and momentum strategies have delivered robust results across various asset classes and time periods (Moskowitz, Ooi, and Pedersen, 2012). Current market momentum is highly informative for the near future, although it provides no information about long-term expectations.
The substantial weighting of risk management (25%) follows from the central importance of risk control. Wealth preservation is the foundation of long-term wealth creation, and systematic risk management is demonstrably value-creating (Moreira and Muir, 2017).
The valuation component receives 20% weight, based on the long-term mean reversion of valuation metrics. While valuation has limited short-term predictive power (bull and bear markets can begin at any valuation), the long-term relationship between valuation and returns is robustly documented (Campbell and Shiller, 1988).
Sentiment (15%) and Macro (5%) receive lower weights, as these factors are subtler and harder to measure. Sentiment is valuable as a contrarian indicator at extremes but less informative in normal ranges. Macro variables such as the yield curve have strong predictive power for recessions, but the transmission from recessions to stock market performance is complex and temporally variable.
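Numerically, the aggregation is a plain weighted average of the component scores; a sketch (scores on the 0-100 scale, the result read directly as an equity percentage — any further scaling in the script is not reproduced here):

```python
def aggregate_allocation(scores: dict) -> float:
    """Blend the five component scores (each 0-100) into an allocation."""
    weights = {"regime": 0.35, "risk": 0.25, "valuation": 0.20,
               "sentiment": 0.15, "macro": 0.05}
    return sum(weights[k] * scores[k] for k in weights)

print(aggregate_allocation({"regime": 75, "risk": 60, "valuation": 55,
                            "sentiment": 50, "macro": 40}))  # 61.75
```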
9.2 Model Type Adjustments
DEAM allows users to choose between four model types: Conservative, Balanced, Aggressive, and Adaptive. This choice modifies the final allocation through additive adjustments.
Conservative mode subtracts 10 percentage points from allocation, resulting in consistently more cautious positioning. This is suitable for risk-averse investors or those with limited investment horizons. Aggressive mode adds 10 percentage points, suitable for risk-tolerant investors with long horizons.
Adaptive mode implements procyclical adjustment based on short-term momentum: if the market has risen more than 5% in the last 20 days, 5 percentage points are added; if it has declined more than 5%, 5 points are subtracted. This logic follows the observation that short-term momentum persists (Jegadeesh and Titman, 1993), but the moderate size of adjustment avoids excessive timing bets.
Balanced mode makes no adjustment and uses raw model output. This neutral setting is suitable for investors who wish to trust model recommendations unchanged.
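The four modes reduce to simple additive shifts; a sketch (clamping the result to the 0-100 range is an assumption):

```python
def apply_model_type(alloc: float, mode: str, ret_20d: float = 0.0) -> float:
    """Additive model-type adjustment; ret_20d is the trailing 20-day return."""
    if mode == "Conservative":
        alloc -= 10
    elif mode == "Aggressive":
        alloc += 10
    elif mode == "Adaptive":                   # procyclical momentum tilt
        alloc += 5 if ret_20d > 0.05 else (-5 if ret_20d < -0.05 else 0)
    return max(0.0, min(100.0, alloc))         # "Balanced" passes through
```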
9.3 Smoothing and Stability
The allocation resulting from aggregation undergoes final smoothing through a simple moving average over 3 periods. This smoothing is crucial for model practicality, as it reduces frequent trading and thus transaction costs. Without smoothing, the model could fluctuate between adjacent allocations with every small input change.
The choice of 3 periods as smoothing window is a compromise between responsiveness and stability. Longer smoothing would excessively delay signals and impede response to true regime changes. Shorter or no smoothing would allow too much noise. Empirical tests showed that 3-period smoothing offers an optimal ratio between these goals.
10. Visualization and Interpretation
10.1 Main Output: Equity Allocation
DEAM's primary output is a time series from 0 to 100 representing the recommended percentage allocation to equities. This representation is intuitive: 100% means full investment in stocks (specifically: an S&P 500 ETF), 0% means complete cash position, and intermediate values correspond to mixed portfolios. A value of 60% means, for example: invest 60% of wealth in SPY, hold 40% in money market instruments or cash.
The time series is color-coded to enable quick visual interpretation. Green shades represent high allocations (above 80%, bullish), red shades low allocations (below 20%, bearish), and neutral colors middle allocations. The chart background is dynamically colored based on the signal, enhancing readability in different market phases.
10.2 Dashboard Metrics
A tabular dashboard presents key metrics compactly. This includes current allocation, cash allocation (complement), an aggregated signal (BULLISH/NEUTRAL/BEARISH), current market regime, VIX level, market drawdown, and crisis status.
Additionally, fundamental metrics are displayed: P/E Ratio, Equity Risk Premium, Return on Equity, Debt-to-Equity Ratio, and Total Shareholder Yield. This transparency allows users to understand model decisions and form their own assessments.
Component scores (Regime, Risk, Valuation, Sentiment, Macro) are also displayed, each normalized on a 0-100 scale. This shows which factors primarily drive the current recommendation. If, for example, the Risk score is very low (20) while other scores are moderate (50-60), this indicates that risk management considerations are pulling allocation down.
10.3 Component Breakdown (Optional)
Advanced users can display individual components as separate lines in the chart. This enables analysis of component dynamics: do all components move synchronously, or are there divergences? Divergences can be particularly informative. If, for example, the market regime is bullish (high score) but the valuation component is very negative, this signals an overbought market not fundamentally supported—a classic "bubble warning."
This feature is disabled by default to keep the chart clean but can be activated for deeper analysis.
10.4 Confidence Bands
The model optionally displays uncertainty bands around the main allocation line. These are calculated as ±1 standard deviation of allocation over a rolling 20-period window. Wide bands indicate high volatility of model recommendations, suggesting uncertain market conditions. Narrow bands indicate stable recommendations.
This visualization implements a concept of epistemic uncertainty—uncertainty about the model estimate itself, not just market volatility. In phases where various indicators send conflicting signals, the allocation recommendation becomes more volatile, manifesting in wider bands. Users can understand this as a warning to act more cautiously or consult alternative information sources.
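The bands are a plain rolling statistic; a sketch assuming a NumPy array of the smoothed allocation series:

```python
import numpy as np

def confidence_bands(alloc: np.ndarray, window: int = 20):
    """+/- 1 rolling standard deviation around the allocation series."""
    sd = np.array([alloc[max(0, i - window + 1): i + 1].std(ddof=0)
                   for i in range(len(alloc))])
    return alloc - sd, alloc + sd
```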
11. Alert System
11.1 Allocation Alerts
DEAM implements an alert system that notifies users of significant events. Allocation alerts trigger when smoothed allocation crosses certain thresholds. An alert is generated when allocation reaches 80% (from below), signaling strong bullish conditions. Another alert triggers when allocation falls to 20%, indicating defensive positioning.
These thresholds are not arbitrary but correspond with boundaries between model regimes. An allocation of 80% roughly corresponds to a clear bull market regime, while 20% corresponds to a bear market regime. Alerts at these points are therefore informative about fundamental regime shifts.
11.2 Crisis Alerts
Separate alerts trigger upon detection of crisis and severe crisis. These alerts have highest priority as they signal large risks. A crisis alert should prompt investors to review their portfolio and potentially take defensive measures beyond the automatic model recommendation (e.g., hedging through put options, rebalancing to more defensive sectors).
11.3 Regime Change Alerts
An alert triggers upon change of market regime (e.g., from Neutral to Correction, or from Bull Market to Strong Bull). Regime changes are highly informative events that typically entail substantial allocation changes. These alerts enable investors to proactively respond to changes in market dynamics.
11.4 Risk Breach Alerts
A specialized alert triggers when actual portfolio risk utilization exceeds target parameters by 20%. This is a warning signal that the risk management system is reaching its limits, possibly because market volatility is rising faster than allocation can be reduced. In such situations, investors should consider manual interventions.
12. Practical Application and Limitations
12.1 Portfolio Implementation
DEAM generates a recommendation for allocation between equities (S&P 500) and cash. Implementation by an investor can take various forms. The most direct method is using an S&P 500 ETF (e.g., SPY, VOO) for equity allocation and a money market fund or savings account for cash allocation.
A rebalancing strategy is required to synchronize actual allocation with model recommendation. Two approaches are possible: (1) rule-based rebalancing at every 10% deviation between actual and target, or (2) time-based monthly rebalancing. Both have trade-offs between responsiveness and transaction costs. Empirical evidence (Jaconetti, Kinniry, and Zilbering, 2010) suggests rebalancing frequency has moderate impact on performance, and investors should optimize based on their transaction costs.
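A sketch of the rule-based trigger; the text's "10% deviation" is read here as ten absolute percentage points, which is an assumption:

```python
def needs_rebalance(actual_weight: float, target_weight: float,
                    threshold: float = 0.10) -> bool:
    """True when the actual equity weight has drifted beyond the band."""
    return abs(actual_weight - target_weight) >= threshold

print(needs_rebalance(0.75, 0.60))  # True -> trim equities back to target
```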
12.2 Adaptation to Individual Preferences
The model offers numerous adjustment parameters. Component weights can be modified if investors place more or less belief in certain factors. A fundamentally-oriented investor might increase valuation weight, while a technical trader might increase regime weight.
Risk target parameters (target volatility, max drawdown) should be adapted to individual risk tolerance. Younger investors with long investment horizons can choose higher target volatility (15-18%), while retirees may prefer lower volatility (8-10%). This adjustment systematically shifts average equity allocation.
Crisis thresholds can be adjusted based on preference for sensitivity versus specificity of crisis detection. Lower thresholds (e.g., VIX > 35 instead of 40) increase sensitivity (more crises are detected) but reduce specificity (more false alarms). Higher thresholds have the reverse effect.
12.3 Limitations and Disclaimers
DEAM is based on historical relationships between indicators and market performance. There is no guarantee these relationships will persist in the future. Structural changes in markets (e.g., through regulation, technology, or central bank policy) can break established patterns. This is the fundamental problem of induction in financial science (Taleb, 2007).
The model is optimized for US equities (S&P 500). Application to other markets (international stocks, bonds, commodities) would require recalibration. The indicators and thresholds are specific to the statistical properties of the US equity market.
The model cannot eliminate losses. Even with perfect crisis prediction, an investor following the model would lose money in bear markets—just less than a buy-and-hold investor. The goal is risk-adjusted performance improvement, not risk elimination.
Transaction costs are not modeled. In practice, spreads, commissions, and taxes reduce net returns. Frequent trading can cause substantial costs. Model smoothing helps minimize this, but users should consider their specific cost situation.
The model reacts to information; it does not anticipate it. During sudden shocks (e.g., 9/11, COVID-19 lockdowns), the model can only react after price movements, not before. This limitation is inherent to all reactive systems.
12.4 Relationship to Other Strategies
DEAM is a tactical asset allocation approach and should be viewed as a complement, not replacement, for strategic asset allocation. Brinson, Hood, and Beebower (1986) showed in their influential study "Determinants of Portfolio Performance" that strategic asset allocation (long-term policy allocation) explains the majority of portfolio performance, but this leaves room for tactical adjustments based on market timing.
The model can be combined with value and momentum strategies at the individual stock level. While DEAM controls overall market exposure, within-equity decisions can be optimized through stock-picking models. This separation between strategic (market exposure) and tactical (stock selection) levels follows classical portfolio theory.
The model does not replace diversification across asset classes. A complete portfolio should also include bonds, international stocks, real estate, and alternative investments. DEAM addresses only the US equity allocation decision within a broader portfolio.
13. Scientific Foundation and Evaluation
13.1 Theoretical Consistency
DEAM's components are based on established financial theory and empirical evidence. The market regime component follows from regime-switching models (Hamilton, 1989) and trend-following literature. The risk management component implements volatility targeting (Moreira and Muir, 2017) and modern portfolio theory (Markowitz, 1952). The valuation component is based on discounted cash flow theory and empirical value research (Campbell and Shiller, 1988; Fama and French, 1992). The sentiment component integrates behavioral finance (Baker and Wurgler, 2006). The macro component uses established business cycle indicators (Estrella and Mishkin, 1998).
This theoretical grounding distinguishes DEAM from purely data-mining-based approaches that identify patterns without causal theory. Theory-guided models have greater probability of functioning out-of-sample, as they are based on fundamental mechanisms, not random correlations (Lo and MacKinlay, 1990).
13.2 Empirical Validation
While this document does not present detailed backtest analysis, it should be noted that rigorous validation of a tactical asset allocation model should include several elements:
In-sample testing establishes whether the model functions at all in the data on which it was calibrated. Out-of-sample testing is crucial: the model should be tested in time periods not used for development. Walk-forward analysis, where the model is successively trained on rolling windows and tested in the next window, approximates real implementation.
Performance metrics should be risk-adjusted. Pure return consideration is misleading, as higher returns often only compensate for higher risk. Sharpe Ratio, Sortino Ratio, Calmar Ratio, and Maximum Drawdown are relevant metrics. Comparison with benchmarks (Buy-and-Hold S&P 500, 60/40 Stock/Bond portfolio) contextualizes performance.
Robustness checks test sensitivity to parameter variation. If the model only functions at specific parameter settings, this indicates overfitting. Robust models show consistent performance over a range of plausible parameters.
13.3 Comparison with Existing Literature
DEAM fits into the broader literature on tactical asset allocation. Faber (2007) presented a simple momentum-based timing system that goes long when the market is above its 10-month average, otherwise cash. This simple system avoided large drawdowns in bear markets. DEAM can be understood as a sophistication of this approach that integrates multiple information sources.
Ilmanen (2011) discusses various timing factors in "Expected Returns" and argues for multi-factor approaches. DEAM operationalizes this philosophy. Asness, Moskowitz, and Pedersen (2013) showed that value and momentum effects work across asset classes, justifying cross-asset application of regime and valuation signals.
Ang (2014) emphasizes in "Asset Management: A Systematic Approach to Factor Investing" the importance of systematic, rule-based approaches over discretionary decisions. DEAM is fully systematic and eliminates emotional biases that plague individual investors (overconfidence, hindsight bias, loss aversion).
References
Ang, A. (2014) *Asset Management: A Systematic Approach to Factor Investing*. Oxford: Oxford University Press.
Ang, A., Piazzesi, M. and Wei, M. (2006) 'What does the yield curve tell us about GDP growth?', *Journal of Econometrics*, 131(1-2), pp. 359-403.
Asness, C.S. (2003) 'Fight the Fed Model', *The Journal of Portfolio Management*, 30(1), pp. 11-24.
Asness, C.S., Moskowitz, T.J. and Pedersen, L.H. (2013) 'Value and Momentum Everywhere', *The Journal of Finance*, 68(3), pp. 929-985.
Baker, M. and Wurgler, J. (2006) 'Investor Sentiment and the Cross-Section of Stock Returns', *The Journal of Finance*, 61(4), pp. 1645-1680.
Baker, M. and Wurgler, J. (2007) 'Investor Sentiment in the Stock Market', *Journal of Economic Perspectives*, 21(2), pp. 129-152.
Baur, D.G. and Lucey, B.M. (2010) 'Is Gold a Hedge or a Safe Haven? An Analysis of Stocks, Bonds and Gold', *Financial Review*, 45(2), pp. 217-229.
Bollerslev, T. (1986) 'Generalized Autoregressive Conditional Heteroskedasticity', *Journal of Econometrics*, 31(3), pp. 307-327.
Boudoukh, J., Michaely, R., Richardson, M. and Roberts, M.R. (2007) 'On the Importance of Measuring Payout Yield: Implications for Empirical Asset Pricing', *The Journal of Finance*, 62(2), pp. 877-915.
Brinson, G.P., Hood, L.R. and Beebower, G.L. (1986) 'Determinants of Portfolio Performance', *Financial Analysts Journal*, 42(4), pp. 39-44.
Brock, W., Lakonishok, J. and LeBaron, B. (1992) 'Simple Technical Trading Rules and the Stochastic Properties of Stock Returns', *The Journal of Finance*, 47(5), pp. 1731-1764.
Campbell, J.Y. and Shiller, R.J. (1988) 'The Dividend-Price Ratio and Expectations of Future Dividends and Discount Factors', *Review of Financial Studies*, 1(3), pp. 195-228.
Cochrane, J.H. (2011) 'Presidential Address: Discount Rates', *The Journal of Finance*, 66(4), pp. 1047-1108.
Damodaran, A. (2012) *Equity Risk Premiums: Determinants, Estimation and Implications*. Working Paper, Stern School of Business.
Engle, R.F. (1982) 'Autoregressive Conditional Heteroskedasticity with Estimates of the Variance of United Kingdom Inflation', *Econometrica*, 50(4), pp. 987-1007.
Estrella, A. and Hardouvelis, G.A. (1991) 'The Term Structure as a Predictor of Real Economic Activity', *The Journal of Finance*, 46(2), pp. 555-576.
Estrella, A. and Mishkin, F.S. (1998) 'Predicting U.S. Recessions: Financial Variables as Leading Indicators', *Review of Economics and Statistics*, 80(1), pp. 45-61.
Faber, M.T. (2007) 'A Quantitative Approach to Tactical Asset Allocation', *The Journal of Wealth Management*, 9(4), pp. 69-79.
Fama, E.F. and French, K.R. (1989) 'Business Conditions and Expected Returns on Stocks and Bonds', *Journal of Financial Economics*, 25(1), pp. 23-49.
Fama, E.F. and French, K.R. (1992) 'The Cross-Section of Expected Stock Returns', *The Journal of Finance*, 47(2), pp. 427-465.
Garman, M.B. and Klass, M.J. (1980) 'On the Estimation of Security Price Volatilities from Historical Data', *Journal of Business*, 53(1), pp. 67-78.
Gilchrist, S. and Zakrajšek, E. (2012) 'Credit Spreads and Business Cycle Fluctuations', *American Economic Review*, 102(4), pp. 1692-1720.
Gordon, M.J. (1962) *The Investment, Financing, and Valuation of the Corporation*. Homewood: Irwin.
Graham, B. and Dodd, D.L. (1934) *Security Analysis*. New York: McGraw-Hill.
Hamilton, J.D. (1989) 'A New Approach to the Economic Analysis of Nonstationary Time Series and the Business Cycle', *Econometrica*, 57(2), pp. 357-384.
Ilmanen, A. (2011) *Expected Returns: An Investor's Guide to Harvesting Market Rewards*. Chichester: Wiley.
Jaconetti, C.M., Kinniry, F.M. and Zilbering, Y. (2010) 'Best Practices for Portfolio Rebalancing', *Vanguard Research Paper*.
Jegadeesh, N. and Titman, S. (1993) 'Returns to Buying Winners and Selling Losers: Implications for Stock Market Efficiency', *The Journal of Finance*, 48(1), pp. 65-91.
Kahneman, D. and Tversky, A. (1979) 'Prospect Theory: An Analysis of Decision under Risk', *Econometrica*, 47(2), pp. 263-292.
Korteweg, A. (2010) 'The Net Benefits to Leverage', *The Journal of Finance*, 65(6), pp. 2137-2170.
Lo, A.W. and MacKinlay, A.C. (1990) 'Data-Snooping Biases in Tests of Financial Asset Pricing Models', *Review of Financial Studies*, 3(3), pp. 431-467.
Longin, F. and Solnik, B. (2001) 'Extreme Correlation of International Equity Markets', *The Journal of Finance*, 56(2), pp. 649-676.
Mandelbrot, B. (1963) 'The Variation of Certain Speculative Prices', *The Journal of Business*, 36(4), pp. 394-419.
Markowitz, H. (1952) 'Portfolio Selection', *The Journal of Finance*, 7(1), pp. 77-91.
Modigliani, F. and Miller, M.H. (1961) 'Dividend Policy, Growth, and the Valuation of Shares', *The Journal of Business*, 34(4), pp. 411-433.
Moreira, A. and Muir, T. (2017) 'Volatility-Managed Portfolios', *The Journal of Finance*, 72(4), pp. 1611-1644.
Moskowitz, T.J., Ooi, Y.H. and Pedersen, L.H. (2012) 'Time Series Momentum', *Journal of Financial Economics*, 104(2), pp. 228-250.
Parkinson, M. (1980) 'The Extreme Value Method for Estimating the Variance of the Rate of Return', *Journal of Business*, 53(1), pp. 61-65.
Piotroski, J.D. (2000) 'Value Investing: The Use of Historical Financial Statement Information to Separate Winners from Losers', *Journal of Accounting Research*, 38, pp. 1-41.
Reinhart, C.M. and Rogoff, K.S. (2009) *This Time Is Different: Eight Centuries of Financial Folly*. Princeton: Princeton University Press.
Ross, S.A. (1976) 'The Arbitrage Theory of Capital Asset Pricing', *Journal of Economic Theory*, 13(3), pp. 341-360.
Roy, A.D. (1952) 'Safety First and the Holding of Assets', *Econometrica*, 20(3), pp. 431-449.
Schwert, G.W. (1989) 'Why Does Stock Market Volatility Change Over Time?', *The Journal of Finance*, 44(5), pp. 1115-1153.
Sharpe, W.F. (1966) 'Mutual Fund Performance', *The Journal of Business*, 39(1), pp. 119-138.
Sharpe, W.F. (1994) 'The Sharpe Ratio', *The Journal of Portfolio Management*, 21(1), pp. 49-58.
Simon, D.P. and Wiggins, R.A. (2001) 'S&P Futures Returns and Contrary Sentiment Indicators', *Journal of Futures Markets*, 21(5), pp. 447-462.
Taleb, N.N. (2007) *The Black Swan: The Impact of the Highly Improbable*. New York: Random House.
Whaley, R.E. (2000) 'The Investor Fear Gauge', *The Journal of Portfolio Management*, 26(3), pp. 12-17.
Whaley, R.E. (2009) 'Understanding the VIX', *The Journal of Portfolio Management*, 35(3), pp. 98-105.
Yardeni, E. (2003) 'Stock Valuation Models', *Topical Study*, 51, Yardeni Research.
Young, T.W. (1991) 'Calmar Ratio: A Smoother Tool', *Futures*, October issue.
Zweig, M.E. (1973) 'An Investor Expectations Stock Price Predictive Model Using Closed-End Fund Premiums', *The Journal of Finance*, 28(1), pp. 67-78.
Daily Range Seq
Time Window: 04:00 - 10:25 EST
Eval. Window: 10:30 - 15:55 EST
Time Window sets the target for price during the Eval. Window.
If the high of the Time Window forms first, the high becomes the target during the Eval. Window.
If the low of the Time Window forms first, the low becomes the target during the Eval. Window.
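As a literal reading of the rule, a Python sketch (the script itself is a TradingView study; the bar tuples and names here are hypothetical):

```python
def eval_window_target(bars):
    """bars: chronologically ordered (minute_index, high, low) tuples from
    the 04:00-10:25 EST Time Window. Whichever extreme printed first
    becomes the target for the Eval. Window, per the rule above."""
    hi = max(bars, key=lambda b: b[1])  # bar holding the window high
    lo = min(bars, key=lambda b: b[2])  # bar holding the window low
    return ("high", hi[1]) if hi[0] < lo[0] else ("low", lo[2])
```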
Machine Learning BBPct [BackQuant]
What this is (in one line)
A Bollinger Band %B oscillator enhanced with a simplified K-Nearest Neighbors (KNN) pattern matcher. The model compares today’s context (volatility, momentum, volume, and position inside the bands) to similar situations in recent history and blends that historical consensus back into the raw %B to reduce noise and improve context awareness. It is informational and diagnostic—designed to describe market state, not to sell a trading system.
Background: %B in plain terms
Bollinger %B measures where price sits inside its dynamic envelope: 0 at the lower band, 1 at the upper band, ~ 0.5 near the basis (the moving average). Readings toward 1 indicate pressure near the envelope’s upper edge (often strength or stretch), while readings toward 0 indicate pressure near the lower edge (often weakness or stretch). Because bands adapt to volatility, %B is naturally comparable across regimes.
Why add (simplified) KNN?
Classic %B is reactive and can be whippy in fast regimes. The simplified KNN layer builds a “nearest-neighbor memory” of recent market states and asks: “When the market looked like this before, where did %B tend to be next bar?” It then blends that estimate with the current %B. Key ideas:
• Feature vector. Each bar is summarized by up to five normalized features:
– %B itself (normalized)
– Band width (volatility proxy)
– Price momentum (ROC)
– Volume momentum (ROC of volume)
– Price position within the bands
• Distance metric. Euclidean distance ranks the most similar recent bars.
• Prediction. Average the neighbors’ prior %B (lagged to avoid lookahead), inverse-weighted by distance.
• Blend. Linearly combine raw %B and the KNN-predicted %B with a configurable weight; optional filtering then adapts to confidence.
This remains “simplified” KNN: no training/validation split, no KD-trees, no scaling beyond windowed min-max, and no probabilistic calibration.
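The core of the simplified KNN can be sketched in Python (the indicator itself is written in Pine Script; this translation, including names such as knn_bb_estimate, is illustrative only):

```python
import numpy as np

def knn_bb_estimate(features: np.ndarray, bb: np.ndarray,
                    k: int = 8, ml_weight: float = 0.5) -> float:
    """Blend raw %B with a nearest-neighbor estimate.

    features: (n, f) matrix of min-max-normalized features, one row per
    bar, with the current bar last. bb: raw %B series aligned to the rows.
    """
    current = features[-1]
    # Euclidean distance from the current bar to every earlier bar.
    dists = np.linalg.norm(features[:-1] - current, axis=1)
    idx = np.argsort(dists)[:k]                 # k nearest neighbors
    w = 1.0 / np.maximum(dists[idx], 1e-6)      # inverse-distance, capped
    prior_bb = bb[np.maximum(idx - 1, 0)]       # neighbors' prior %B (no lookahead)
    knn_pred = float(np.average(prior_bb, weights=w))
    return (1.0 - ml_weight) * bb[-1] + ml_weight * knn_pred
```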
How the script is organized (by input groups)
1) BBPct Settings
• Price Source – Which price to evaluate (%B is computed from this).
• Calculation Period – Lookback for SMA basis and standard deviation.
• Multiplier – Standard deviation width (e.g., 2.0).
• Apply Smoothing / Type / Length – Optional smoothing of the %B stream before ML (EMA, RMA, DEMA, TEMA, LINREG, HMA, etc.). Turning this off gives you the raw %B.
2) Thresholds
• Overbought/Oversold – Default 0.8 / 0.2 (inside the 0–1 range).
• Extreme OB/OS – Stricter zones (e.g., 0.95 / 0.05) to flag stretch conditions.
3) KNN Machine Learning
• Enable KNN – Switch between pure %B and hybrid.
• K (neighbors) – How many historical analogs to blend (default 8).
• Historical Period – Size of the search window for neighbors.
• ML Weight – Blend between raw %B and KNN estimate.
• Number of Features – Use 2–5 features; higher counts add context but raise the risk of overfitting in short windows.
4) Filtering
• Method – None, Adaptive, Kalman-style (first-order), or Hull smoothing.
• Strength – How aggressively to smooth. “Adaptive” uses model confidence to modulate its alpha: higher confidence → stronger reliance on the ML estimate.
5) Performance Tracking
• Win-rate Period – Simple running score of past signal outcomes based on target/stop/time-out logic (informational, not a robust backtest).
• Early Entry Lookback – Horizon for forecasting a potential threshold cross.
• Profit Target / Stop Loss – Used only by the internal win-rate heuristic.
6) Self-Optimization
• Enable Self-Optimization – Lightweight, rolling comparison of a few canned settings (K = 8/14/21 via simple rules on %B extremes).
• Optimization Window & Stability Threshold – Governs how quickly preferred K changes and how sensitive the overfitting alarm is.
• Adaptive Thresholds – Adjust the OB/OS lines with volatility regime (ATR ratio), widening in calm markets and tightening in turbulent ones (bounded 0.7–0.9 and 0.1–0.3).
7) UI Settings
• Show Table / Zones / ML Prediction / Early Signals – Toggle informational overlays.
• Signal Line Width, Candle Painting, Colors – Visual preferences.
Step-by-step logic
A) Compute %B
Basis = SMA(source, len); dev = stdev(source, len) × multiplier; Upper/Lower = Basis ± dev.
%B = (price − Lower) / (Upper − Lower). Optional smoothing yields standardBB.
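Step A in isolation, as a Python sketch (approximating the stdev with a population standard deviation is an assumption):

```python
import numpy as np

def percent_b(src: np.ndarray, length: int = 20, mult: float = 2.0) -> float:
    """%B of the last bar: where price sits inside the Bollinger envelope."""
    window = src[-length:]
    basis = window.mean()                 # SMA basis
    dev = mult * window.std(ddof=0)       # banded deviation
    upper, lower = basis + dev, basis - dev
    return (src[-1] - lower) / (upper - lower) if dev else 0.5
```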
B) Build the feature vector
All features are min-max normalized over the KNN window so distances are in comparable units. Features include normalized %B, normalized band width, normalized price ROC, normalized volume ROC, and normalized position within bands. You can limit to the first N features (2–5).
C) Find nearest neighbors
For each bar inside the lookback window, compute the Euclidean distance between current features and that bar’s features. Sort by distance, keep the top K.
D) Predict and blend
Use inverse-distance weights (with a strong cap for near-zero distances) to average the neighbors’ prior %B (lagged by one bar). This becomes the KNN estimate. Blend it with raw %B via the ML weight. The variance of neighbor %B around the prediction becomes an uncertainty proxy; combined with a stability score (how long parameters remain unchanged), it forms mlConfidence ∈ [0, 1]. The Adaptive filter optionally transforms that confidence into a smoothing coefficient.
E) Adaptive thresholds
Volatility regime (ATR(14) divided by its 50-bar SMA) nudges OB/OS thresholds wider or narrower within fixed bounds. The aim: comparable extremeness across regimes.
F) Early entry heuristic
A tiny two-step slope/acceleration probe extrapolates finalBB forward a few bars. If it is on track to cross OB/OS soon (and slope/acceleration agree), it flags an EARLY_BUY/SELL candidate with an internal confidence score. This is explicitly a heuristic—use as an attention cue, not a signal by itself.
G) Informational win-rate
The script keeps a rolling array of trade outcomes derived from signal transitions + rudimentary exits (target/stop/time). The percentage shown is a rough diagnostic, not a validated backtest.
Outputs and visual language
• ML Bollinger %B (finalBB) – The main line after KNN blending and optional filtering.
• Gradient fill – Greenish tones above 0.5, reddish below, with intensity following distance from the midline.
• Adaptive zones – Overbought/oversold and extreme bands; shaded backgrounds appear at extremes.
• ML Prediction (dots) – The KNN estimate plotted as faint circles; becomes bright white when confidence > 0.7.
• Early arrows – Optional small triangles for approaching OB/OS.
• Candle painting – Light green above the midline, light red below (optional).
• Info panel – Current value, signal classification, ML confidence, optimized K, stability, volatility regime, adaptive thresholds, overfitting flag, early-entry status, and total signals processed.
Signal classification (informational)
The indicator does not fire trade commands; it labels state:
• STRONG_BUY / STRONG_SELL – finalBB beyond extreme OS/OB thresholds.
• BUY / SELL – finalBB beyond adaptive OS/OB.
• EARLY_BUY / EARLY_SELL – forecast suggests a near-term cross with decent internal confidence.
• NEUTRAL – between adaptive bands.
Alerts (what you can automate)
• Entering adaptive OB/OS and extreme OB/OS.
• Midline cross (0.5).
• Overfitting detected (frequent parameter flipping).
• Early signals when early confidence > 0.7.
These are purely descriptive triggers around the indicator’s state.
Practical interpretation
• Mean-reversion context – In range markets, adaptive OS/OB with ML smoothing can reduce whipsaws relative to raw %B.
• Trend context – In persistent trends, the KNN blend can keep finalBB nearer the mid/upper region during healthy pullbacks if history supports similar contexts.
• Regime awareness – Watch the volatility regime and adaptive thresholds. If thresholds compress (high vol), “OB/OS” comes sooner; if thresholds widen (calm), it takes more stretch to flag.
• Confidence as a weight – High mlConfidence implies neighbors agree; you may rely more on the ML curve. Low confidence argues for de-emphasizing ML and leaning on raw %B or other tools.
• Stability score – Rising stability indicates consistent parameter selection and fewer flips; dropping stability hints at a shifting backdrop.
Methodological notes
• Normalization uses rolling min-max over the KNN window. This is simple and scale-agnostic but sensitive to outliers; the distance metric will reflect that.
• Distance is unweighted Euclidean. If you raise featureCount, you increase dimensionality; consider keeping K larger and lookback ample to avoid sparse-neighbor artifacts.
• Lag handling intentionally uses neighbors’ previous %B for prediction to avoid lookahead bias.
• Self-optimization is deliberately modest: it only compares a few canned K/threshold choices using simple “did an extreme anticipate movement?” scoring, then enforces a stability regime and an overfitting guard. It is not a grid search or GA.
• Kalman option is a first-order recursive filter (fixed gain), not a full state-space estimator.
• Hull option derives a dynamic length from 1/strength; it is a convenience smoothing alternative.
Limitations and cautions
• Non-stationarity – Nearest neighbors from the recent window may not represent the future under structural breaks (policy shifts, liquidity shocks).
• Curse of dimensionality – Adding features without sufficient lookback can make genuine neighbors rare.
• Overfitting risk – The script includes a crude overfitting detector (frequent parameter flips) and will fall back to defaults when triggered, but this is only a guardrail.
• Win-rate display – The internal score is illustrative; it does not constitute a tradable backtest.
• Latency vs. smoothness – Smoothing and ML blending reduce noise but add lag; tune to your timeframe and objectives.
Tuning guide
• Short-term scalping – Lower len (10–14), slightly lower multiplier (1.8–2.0), small K (5–8), featureCount 3–4, Adaptive filter ON, moderate strength.
• Swing trading – len (20–30), multiplier ~2.0, K (8–14), featureCount 4–5, Adaptive thresholds ON, filter modest.
• Strong trends – Consider higher adaptive_upper/lower bounds (or let volatility regime do it), keep ML weight moderate so raw %B still reflects surges.
• Chop – Higher ML weight and stronger Adaptive filtering; accept lag in exchange for fewer false extremes.
How to use it responsibly
Treat this as a state descriptor and context filter. Pair it with your execution signals (structure breaks, volume footprints, higher-timeframe bias) and risk management. If mlConfidence is low or stability is falling, lean less on the ML line and more on raw %B or external confirmation.
Summary
Machine Learning BBPct augments a familiar oscillator with a transparent, simplified KNN memory of recent conditions. By blending neighbors’ behavior into %B and adapting thresholds to volatility regime—while exposing confidence, stability, and a plain early-entry heuristic—it provides an informational, probability-minded view of stretch and reversion that you can interpret alongside your own process.
BayesStack RSI [CHE]
BayesStack RSI — Stacked RSI with Bayesian outcome stats and gradient visualization
Summary
BayesStack RSI builds a four-length RSI stack and evaluates it with a simple Bayesian success model over a rolling window. It highlights bull and bear stack regimes, colors price with magnitude-based gradients, and reports per-regime counts, wins, and estimated win rate in a compact table. Signals are made more robust through explicit ordering tolerance, optional midline gating, and outcome evaluation that waits for events to mature by a fixed horizon. The design focuses on readable structure, conservative confirmation, and actionable context rather than raw oscillator flips.
Motivation: Why this design?
Classical RSI signals flip frequently in volatile phases and drift in calm regimes. Pure threshold rules often misclassify shallow pullbacks and stacked momentum phases. The core idea here is ordered, spaced RSI layers combined with outcome tracking. By requiring a consistent order with a tolerance and optionally gating by the midline, regime identification becomes clearer. A horizon-based maturation check and smoothed win-rate estimate provide pragmatic feedback about how often a given stack has recently worked.
What’s different vs. standard approaches?
Reference baseline: Traditional single-length RSI with overbought and oversold rules or simple crossovers.
Architecture differences:
Four fixed RSI lengths with strict ordering and a spacing tolerance.
Optional requirement that all RSI values stay above or below the midline for bull or bear regimes.
Outcome evaluation after a fixed horizon, then rolling counts and a prior-smoothed win rate.
Dispersion measurement across the four RSIs with a percent-rank diagnostic.
Gradient coloring of candles and wicks driven by stack magnitude.
A last-bar statistics table with counts, wins, win rate, dispersion, and priors.
Practical effect: Charts emphasize sustained momentum alignment instead of single-length crosses. Users see when regimes start, how strong alignment is, and how that regime has recently performed for the chosen horizon.
How it works (technical)
The script computes RSI on four lengths and forms a “stack” when they are strictly ordered with at least the chosen tolerance between adjacent lengths. A bull stack requires a descending set from long to short with positive spacing. A bear stack requires the opposite. Optional gating further requires all RSI values to sit above or below the midline.
For evaluation, each detected stack is checked again after the horizon has fully elapsed. A bull event is a success if price is higher than it was at event time after the horizon has passed. A bear event succeeds if price is lower under the same rule. Rolling sums over the training window track counts and successes; a pair of priors stabilizes the win-rate estimate when sample sizes are small.
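In formula terms the stabilized estimate is P(win) = (wins + alpha) / (count + alpha + beta), which shrinks toward alpha / (alpha + beta) when samples are scarce. A minimal Pine v5 sketch follows, with an RSI-crossover stand-in for the real stack event; all names and defaults are illustrative:

```pine
//@version=5
indicator("Prior-smoothed win rate (sketch)")
window     = input.int(500, "Rolling window")
alphaPrior = input.float(1.0, "Alpha prior")
betaPrior  = input.float(1.0, "Beta prior")
horizon    = input.int(3,   "Horizon H")

// stand-in event: the real script uses the 4-RSI stack ordering instead
eventNow = nz(ta.crossover(ta.rsi(close, 8), ta.rsi(close, 34)) ? 1.0 : 0.0)

matured = nz(eventNow[horizon])                    // judge only fully matured events
success = matured == 1.0 and close > close[horizon] ? 1.0 : 0.0
counted = matured

wins = math.sum(success, window)
cnt  = math.sum(counted, window)
pWin = (wins + alphaPrior) / (cnt + alphaPrior + betaPrior)  // prior-stabilized estimate
plot(pWin, "P(win)")
```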
Dispersion across the four RSIs is measured and converted to a percent rank over a configurable window. Gradients for bars and wicks are normalized over a lookback, then shaped by gamma controls to emphasize strong regimes. A statistics table is created once and updated on the last bar to minimize overhead. Overlay markers and wick coloring are rendered to the price chart even though the indicator runs in a separate pane.
Parameter Guide
Source — Input series for RSI. Default: close. Tips: Use typical price or hlc3 for smoother behavior.
Overbought / Oversold — Guide levels for context. Defaults: seventy and thirty. Bounds: fifty to one hundred, zero to fifty. Tips: Narrow the band for faster feedback.
Stacking tolerance (epsilon) — Minimum spacing between adjacent RSIs to qualify as a stack. Default: zero point twenty-five RSI points. Trade-off: Higher values reduce false stacks but delay entries.
Horizon H — Bars ahead for outcome evaluation. Default: three. Trade-off: Longer horizons reduce noise but delay success attribution.
Rolling window — Lookback for counts and wins. Default: five hundred. Trade-off: Longer windows stabilize the win rate but adapt more slowly.
Alpha prior / Beta prior — Priors used to stabilize the win-rate estimate. Defaults: one and one. Trade-off: Larger priors reduce variance with sparse samples.
Show RSI 8/13/21/34 — Toggle raw RSI lines. Default: on.
Show consensus RSI — Weighted combination of the four RSIs. Default: on.
Show OB/OS zones — Draw overbought, oversold, and midline. Default: on.
Background regime — Pane background tint during bull or bear stacks. Default: on.
Overlay regime markers — Entry markers on price when a stack forms. Default: on.
Show statistics table — Last-bar table with counts, wins, win rate, dispersion, priors, and window. Default: on.
Bull requires all above fifty / Bear requires all below fifty — Midline gate. Defaults: both on. Trade-off: Stricter regimes, fewer but cleaner signals.
Enable gradient barcolor / wick coloring — Gradient visuals mapped to stack magnitude. Defaults: on. Trade-off: Clearer regime strength vs. extra rendering cost.
Collection period — Normalization window for gradients. Default: one hundred. Trade-off: Shorter values react faster but fluctuate more.
Gamma bars and shapes / Gamma plots — Curve shaping for gradients. Defaults: zero point seven and zero point eight. Trade-off: Higher values compress weak signals and emphasize strong ones.
Gradient and wick transparency — Visual opacity controls. Defaults: zero.
Up/Down colors (dark and neon) — Gradient endpoints. Defaults: green and red pairs.
Fallback neutral candles — Directional coloring when gradients are off. Default: off.
Show last candles — Limit for gradient squares rendering. Default: three hundred thirty-three.
Dispersion percent-rank length / High and Low thresholds — Window and cutoffs for dispersion diagnostics. Defaults: two hundred fifty, eighty, and twenty.
Table X/Y, Dark theme, Text size — Table anchor, theme, and typography. Defaults: right, top, dark, small.
Reading & Interpretation
RSI stack lines: Alignment and spacing convey regime quality. Wider spacing suggests stronger alignment.
Consensus RSI: A single line that summarizes the four lengths; use as a smoother reference.
Zones: Overbought, oversold, and midline provide context rather than standalone triggers.
Background tint: Indicates active bull or bear stack.
Markers: “Bull Stack Enter” or “Bear Stack Enter” appears when the stack first forms.
Gradients: Brighter tones suggest stronger stack magnitude; dull tones suggest weak alignment.
Table: Count and Wins show sample size and successes over the window. P(win) is a prior-stabilized estimate. Dispersion percent rank near the high threshold flags stretched alignment; near the low threshold flags tight clustering.
Practical Workflows & Combinations
Trend following: Enter only on new stack markers aligned with structure such as higher highs and higher lows for bull, or lower lows and lower highs for bear. Use the consensus RSI to avoid chasing into overbought or oversold extremes.
Exits and stops: Consider reducing exposure when dispersion percent rank reaches the high threshold or when the stack loses ordering. Use the table’s P(win) as a context check rather than a direct signal.
Multi-asset and multi-timeframe: Defaults travel well on liquid assets from intraday to daily. Combine with higher-timeframe structure or moving averages for regime confirmation. The script itself does not fetch higher-timeframe data.
Behavior, Constraints & Performance
Repaint and confirmation: Stack markers evaluate on the live bar and can flip until close. Alert behavior follows TradingView settings. Outcome evaluation uses matured events and does not look into the future.
HTF and security: Not used. Repaint paths from higher-timeframe aggregation are avoided by design.
Resources: max bars back is two thousand. The script uses rolling sums, percent rank, gradient rendering, and a last-bar table update. Shapes and colored wicks add draw overhead.
Known limits: Lag can appear after sharp turns. Very small windows can overfit recent noise. P(win) is sensitive to sample size and priors. Dispersion normalization depends on the collection period.
Sensible Defaults & Quick Tuning
Start with the shipped defaults.
Too many flips: Increase stacking tolerance, enable midline gates, or lengthen the collection period.
Too sluggish: Reduce stacking tolerance, shorten the collection period, or relax midline gates.
Sparse samples: Extend the rolling window or increase priors to stabilize P(win).
Visual overload: Disable gradient squares or wick coloring, or raise transparency.
What this indicator is—and isn’t
This is a visualization and context layer for RSI stack regimes with simple outcome statistics. It is not a complete trading system, not predictive, and not a signal generator on its own. Use it with market structure, risk controls, and position management that fit your process.
Metadata
- Pine version: v6
- Overlay: false (price overlays are drawn via forced overlay where applicable)
- Primary outputs: Four RSI lines, consensus line, OB/OS guides, background tint, entry markers, gradient bars and wicks, statistics table
- Inputs with defaults: See Parameter Guide
- Metrics and functions used: RSI, rolling sums, percent rank, dispersion across RSI set, gradient color mapping, table rendering, alerts
- Special techniques: Ordered RSI stacking with tolerance, optional midline gating, horizon-based outcome maturation, prior-stabilized win rate, gradient normalization with gamma shaping
- Performance and constraints: max bars back two thousand, rendering of shapes and table on last bar, no higher-timeframe data, no security calls
- Recommended use-cases: Regime confirmation, momentum alignment, post-entry management with dispersion and recent outcome context
- Compatibility: Works across assets and timeframes that support RSI
- Limitations and risks: Sensitive to parameter choices and market regime changes; not a standalone strategy
- Diagnostics: Statistics table, dispersion percent rank, gradient intensity
Disclaimer
The content provided, including all code and materials, is strictly for educational and informational purposes only. It is not intended as, and should not be interpreted as, financial advice, a recommendation to buy or sell any financial instrument, or an offer of any financial product or service. All strategies, tools, and examples discussed are provided for illustrative purposes to demonstrate coding techniques and the functionality of Pine Script within a trading context.
Any results from strategies or tools provided are hypothetical, and past performance is not indicative of future results. Trading and investing involve high risk, including the potential loss of principal, and may not be suitable for all individuals. Before making any trading decisions, please consult with a qualified financial professional to understand the risks involved.
By using this script, you acknowledge and agree that any trading decisions are made solely at your discretion and risk.
Best regards and happy trading
Chervolino.
Do not use this indicator on Heikin-Ashi, Renko, Kagi, Point-and-Figure, or Range charts, as these chart types can produce unrealistic results for signal markers and alerts.
Overnight Time Box
Overnight Time Box (22:59 → 09:59, minutes & TZ)
Automatically draws a time-based box for a customizable window that can cross midnight. Perfect for marking the overnight range up to London open (e.g., 22:59–09:59 in Europe/Bucharest), but works with any minute-level window.
What it does
Builds a daily box covering all price action between two user-defined times (e.g., 22:59 → 09:59).
Tracks session High/Low in real time and can plot extended HL lines for reference.
Keeps historical boxes on the chart for backtesting and review (no flicker, no errors).
How to use
Add the script to an intraday chart.
Configure:
Time zone (default: Europe/Bucharest).
Interval (HHMM-HHMM) — e.g., 2259-0959 (minutes supported).
Optional: High/Low lines, fill color, border color, line width.
Use on intraday timeframes (M1–H4).
Note: On Daily/Weekly/Monthly, a heads-up label reminds you it’s designed for intraday use.
Inputs
Time zone: correct DST handling.
Interval (HHMM-HHMM): supports windows that span midnight.
Draw High/Low lines: extended HL guides for the session.
Colors & widths: full visual customization.
Use cases
Mark the overnight range into London open (10:00 RO).
Delimit Killzones / ICT Silver Bullet windows.
Study range, liquidity raids, FVGs before major sessions.
Tech notes
Built on Pine Script v5 using input.session → stable, DST-safe.
Increased max_boxes_count / max_lines_count to preserve history.
Boxes are “frozen” at session end and remain on chart.
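A minimal Pine v5 sketch of those mechanics; inputs and colors are illustrative, and the published script adds HL lines and further styling:

```pine
//@version=5
indicator("Overnight box (sketch)", overlay = true, max_boxes_count = 500)
sess = input.session("2259-0959", "Interval (HHMM-HHMM)")
tz   = input.string("Europe/Bucharest", "Time zone")

inSess  = not na(time(timeframe.period, sess, tz))   // DST-safe session test
newSess = inSess and not inSess[1]

var box b = na
if newSess                                            // open a fresh box at session start
    b := box.new(bar_index, high, bar_index, low, border_color = color.gray, bgcolor = color.new(color.gray, 85))
else if inSess and not na(b)                          // stretch it while the window is live
    box.set_right(b, bar_index)
    box.set_top(b, math.max(box.get_top(b), high))
    box.set_bottom(b, math.min(box.get_bottom(b), low))
// once the session ends, the box is simply left as-is ("frozen")
```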
Limitations
Intended for intraday only.
One interval per script instance; attach multiple instances for multiple windows.
Fair Value Lead-Lag Model [BackQuant]
Fair Value Lead-Lag Model
A cross-asset model that estimates where price "should" be relative to a chosen reference series, then tracks the deviation as a normalized oscillator. It helps you answer two questions: 1) is the asset rich or cheap vs its driver, and 2) is the driver leading or lagging price over the next N bars.
Concept in one paragraph
Many assets co-move with a macro or sector driver. Think BTC vs DXY, gold vs real yields, a stock vs its sector ETF. This tool builds a rolling fair value of the charted asset from a reference series and shows how far price is above or below that fair value in standard deviation units. You can shift the reference forward or backward to test who leads whom, then use the deviation and its bands to structure mean-reversion or trend-following ideas.
What the model does
Reference mapping : Pulls a reference symbol at a chosen timeframe, with an optional lead or lag in bars to test causality.
Fair value engine : Converts the reference into a synthetic fair value of the chart using one of four methods:
Ratio : price/ref with a rolling average ratio. Good when the relationship is proportional.
Spread : price minus ref with a rolling average spread. Good when the relationship is additive.
Z-Score : normalizes both series, aligns on standardized units, then re-projects to price space. Good when scale drifts.
Beta-Adjusted : rolling regression style. Uses covariance and variance to compute beta, then builds a fair value = mean(price) + beta * (ref − mean(ref)); see the sketch after this list.
Deviation and bands : Computes a z-scored deviation of price vs fair value and plots sigma bands (±1, ±2, ±3) around the fair value line on the chart.
Correlation context : Shows rolling correlation so you can judge if deviations are meaningful or just noise when co-movement is weak.
Visuals :
Fair value line on price chart with sigma envelopes.
Deviation as a column oscillator and optional line.
Threshold shading beyond user-set upper and lower levels.
Summary table with reference, deviation, status, correlation, and method.
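A hedged Pine v5 sketch of the Beta-Adjusted mapping and sigma bands described above; the DXY default, window sizes, and variable names are illustrative, not the indicator's actual code:

```pine
//@version=5
indicator("Beta-adjusted fair value (sketch)", overlay = true)
refSym = input.symbol("TVC:DXY", "Reference symbol")   // illustrative default
win    = input.int(25,  "Rolling window")
normP  = input.int(100, "Normalization period")

ref = request.security(refSym, timeframe.period, close)

meanP = ta.sma(close, win)
meanR = ta.sma(ref,   win)
cov   = ta.sma(close * ref, win) - meanP * meanR       // rolling covariance
beta  = cov / ta.variance(ref, win)                    // rolling beta

fair  = meanP + beta * (ref - meanR)                   // beta-adjusted fair value
resid = close - fair
sigma = ta.stdev(resid, normP)
devZ  = (resid - ta.sma(resid, normP)) / sigma         // deviation in sigma units

plot(fair,             "Fair value", color.orange)
plot(fair + 2 * sigma, "+2 sigma",   color.gray)
plot(fair - 2 * sigma, "-2 sigma",   color.gray)
plotchar(devZ, "Deviation (sigma)", "", location.top)
```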
Why this is useful
Mean reversion framework : When correlation is healthy and deviation stretches beyond your sigma threshold, probability favors reversion toward fair value. This is classic pairs logic adapted to a driver and a target.
Trend confirmation : If price rides the fair value line and deviation stays modest while correlation is positive, it supports trend persistence. Pullbacks to negative deviation in an uptrend can be buyable.
Lead-lag discovery : Shift the reference forward by +N bars. If correlation improves, the reference tends to lead. Shift backward for the reverse. Use the best setting for planning early entries or hedges.
Regime detection : Large persistent deviations with falling correlation hint at regime change. The relationship you relied on may be breaking down, so reduce confidence or switch methods.
How to use it step by step
Pick a sensible reference : Choose a macro, index, currency, or sector driver that logically explains the asset’s moves. Example: gold with DXY, a semiconductor stock with SOXX.
Test lead-lag : Nudge Lead/Lag Periods to small positive values like +1 to +5 to see if the reference leads. If correlation improves, keep that offset. If correlation worsens, try a small negative value or zero.
Select a method :
Start with Beta-Adjusted when the relationship is approximately linear with drift.
Use Ratio if the assets usually move in proportional terms.
Use Spread when they trade around a level difference.
Use Z-Score when scales wander or volatility regimes shift.
Tune windows :
Rolling Window controls how quickly fair value adapts. Shorter equals faster but noisier.
Normalization Period controls how deviations are standardized. Longer equals stabler sigma sizing.
Correlation Length controls how co-movement is measured. Keep it near the fair value window.
Trade the edges :
Mean reversion idea : Wait for deviation beyond your Upper or Lower Threshold with positive correlation. Fade back toward fair value. Exit at the fair value line or the next inner sigma band.
Trend idea : In an uptrend, buy pullbacks when deviation dips negative but correlation remains healthy. In a downtrend, sell bounces when deviation spikes positive.
Read the table : Deviation shows how many sigmas you are from fair value. Status tells you overvalued or undervalued. Correlation color hints confidence. Method tells you the projection style used.
Reading the display
Fair value line on price chart: the model’s estimate of where price should trade given the reference, updated each bar.
Sigma bands around fair value: a quick sense of residual volatility. Reversions often target inner bands first.
Deviation oscillator : above zero means rich vs fair value, below zero means cheap. Color bins intensify with distance.
Correlation line (optional): scale is folded to match thresholds. Higher values increase trust in deviations.
Parameter tips
Start with Rolling Window 20 to 30, Normalization Period 100, Correlation Length 50.
Upper and Lower Threshold at ±2.0 are classic. Tighten to ±1.5 for more signals or widen to ±2.5 to focus on outliers.
When correlation drifts below about 0.3, treat deviations with caution. Consider switching method or reference.
If the fair value line whipsaws, increase Rolling Window or move to Beta-Adjusted which tends to be smoother.
Playbook examples
Pairs-style reversion : Asset is +2.3 sigma rich vs reference, correlation 0.65, trend flat. Short the deviation back toward fair value. Cover near the fair value line or +1 sigma.
Pro-trend pullback : Uptrend with correlation 0.7. Deviation dips to −1.2 sigma while price sits near the −1 sigma band. Buy the dip, target the fair value line, trail if the line is rising.
Lead-lag timing : Reference leads by +3 bars with improved correlation. Use reference swings as early cues to anticipate deviation turns on the target.
Caveats
The model assumes a stable relationship over the chosen windows. Structural breaks, policy shocks, and index rebalances can invalidate recent history.
Correlation is descriptive, not causal. A strong correlation does not guarantee future convergence.
Do not force trades when the reference has low liquidity or mismatched hours. Use a reference timeframe that captures real overlap.
Bottom line
This tool turns a loose cross-asset intuition into a quantified, visual fair value map. It gives you a consistent way to find rich or cheap conditions, time mean-reversion toward a statistically grounded target, and confirm or fade trends when the driver agrees.
Quantum Flux Universal Strategy
Summary in one paragraph
Quantum Flux Universal is a regime switching strategy for stocks, ETFs, index futures, major FX pairs, and liquid crypto on intraday and swing timeframes. It helps you act only when the normalized core signal and its guide agree on direction. It is original because the engine fuses three adaptive drivers into the smoothing gains itself. Directional intensity is measured with binary entropy, path efficiency shapes trend quality, and a volatility squash preserves contrast. Add it to a clean chart, watch the polarity lane and background, and trade from positive or negative alignment. For conservative workflows use on bar close in the alert settings when you add alerts in a later version.
Scope and intent
• Markets. Large cap equities and ETFs. Index futures. Major FX pairs. Liquid crypto
• Timeframes. One minute to daily
• Default demo used in the publication. QQQ on one hour
• Purpose. Provide a robust and portable way to detect when momentum and confirmation align, while dampening chop and preserving turns
• Limits. This is a strategy. Orders are simulated on standard candles only
Originality and usefulness
• Unique concept or fusion. The novelty sits in the gain map. Instead of gating separate indicators, the model mixes three drivers into the adaptive gains that power two one pole filters. Directional entropy measures how one sided recent movement has been. Kaufman style path efficiency scores how direct the path has been. A volatility squash stabilizes step size. The drivers are blended into the gains with visible inputs for strength, windows, and clamps.
• What failure mode it addresses. False starts in chop and whipsaw after fast spikes. Efficiency and the squash reduce over reaction in noise.
• Testability. Every component has an input. You can lengthen or shorten each window and change the normalization mode. The polarity plot and background provide a direct readout of state.
• Portable yardstick. The core is normalized with three options. Z score, percent rank mapped to a symmetric range, and MAD based Z score. Clamp bounds define the effective unit so context transfers across symbols.
Method overview in plain language
The strategy computes two smoothed tracks from the chart price source. The fast track and the slow track use gains that are not fixed. Each gain is modulated by three drivers. A driver for directional intensity, a driver for path efficiency, and a driver for volatility. The difference between the fast and the slow tracks forms the raw flux. A small phase assist reduces lag by subtracting a portion of the delayed value. The flux is then normalized. A guide line is an EMA of a small lead on the flux. When the flux and its guide are both above zero, the polarity is positive. When both are below zero, the polarity is negative. Polarity changes create the trade direction.
Base measures
• Return basis. The step is the change in the chosen price source. Its absolute value feeds the volatility estimate. Mean absolute step over the window gives a stable scale.
• Efficiency basis. The ratio of net move to the sum of absolute step over the window gives a value between zero and one. High values mean trend quality. Low values mean chop.
• Intensity basis. The fraction of up moves over the window plugs into binary entropy. Intensity is one minus entropy, which maps to zero in uncertainty and one in very one sided moves.
Components
• Directional Intensity. Measures how one sided recent bars have been. Smoothed with RMA. More intensity increases the gain and makes the fast and slow tracks react sooner.
• Path Efficiency. Measures the straightness of the price path. A gamma input shapes the curve so you can make trend quality count more or less. Higher efficiency lifts the gain in clean trends.
• Volatility Squash. Normalizes the absolute step with Z score then pushes it through an arctangent squash. This caps the effect of spikes so they do not dominate the response.
• Normalizer. Three modes. Z score for familiar units, percent rank for a robust monotone map to a symmetric range, and MAD based Z for outlier resistance.
• Guide Line. EMA of the flux with a small lead term that counteracts lag without heavy overshoot.
Fusion rule
• Weighted sum of the three drivers with fixed weights visible in the code comments. Intensity has fifty percent weight. Efficiency thirty percent. Volatility twenty percent.
• The blend power input scales the driver mix. Zero means fixed spans. One means full driver control.
• Minimum and maximum gain clamps bound the adaptive gain. This protects stability in quiet or violent regimes.
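A sketch tying the base measures and the fusion rule together in Pine v5. The 0.5/0.3/0.2 weights and the gain clamps follow the description above; the exact way the driver mix scales the gain (the mult line) is an assumption for illustration:

```pine
//@version=5
indicator("Adaptive-gain smoothing (sketch)", overlay = true)
n        = input.int(30,  "Driver window")
fastSpan = input.int(12,  "Fast span")
blend    = input.float(0.6, "Blend power", minval = 0, maxval = 1)
minMult  = input.float(0.5, "Min alpha multiplier")
maxMult  = input.float(2.0, "Max alpha multiplier")

step    = nz(ta.change(ohlc4))
absStep = math.abs(step)

// driver 1: path efficiency (net move vs total movement), in [0,1]
eff = math.abs(math.sum(step, n)) / math.max(math.sum(absStep, n), 1e-10)
// driver 2: directional intensity = 1 - binary entropy of the up-move fraction
p = math.sum(step > 0 ? 1.0 : 0.0, n) / n
h = p <= 0 or p >= 1 ? 0.0 : -(p * math.log(p) + (1 - p) * math.log(1 - p)) / math.log(2)
intensity = ta.rma(1 - h, n)
// driver 3: volatility squash, z-scored |step| through an arctangent, mapped to ~[0,1]
z = (absStep - ta.sma(absStep, n)) / math.max(ta.stdev(absStep, n), 1e-10)
squash = 0.5 + math.atan(z) / math.pi

mix  = 0.5 * intensity + 0.3 * eff + 0.2 * squash  // fixed weights per the description
base = 2.0 / (fastSpan + 1)                        // EMA-style gain for the fast track
mult = 1 + blend * (2 * mix - 1)                   // assumed form of driver modulation
gain = math.min(math.max(base * mult, base * minMult), base * maxMult)

var float fast = na
fast := na(fast) or na(gain) ? ohlc4 : fast + gain * (ohlc4 - fast)
plot(fast, "Adaptive fast track", color.aqua)
```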
Signal rule
• Long suggestion appears when flux and guide are both above zero. That sets polarity to plus one.
• Short suggestion appears when flux and guide are both below zero. That sets polarity to minus one.
• When polarity flips from plus to minus, the strategy closes any long and enters a short.
• When flux crosses above the guide, the strategy closes any short.
What you will see on the chart
• White polarity plot around the zero line
• A dotted reference line at zero named Zen
• Green background tint for positive polarity and red background tint for negative polarity
• Strategy long and short markers placed by the TradingView engine at entry and at close conditions
• No table in this version to keep the visual clean and portable
Inputs with guidance
Setup
• Price source. Default ohlc4. Stable for noisy symbols.
• Fast span. Typical range 6 to 24. Raising it slows the fast track and can reduce churn. Lowering it makes entries more reactive.
• Slow span. Typical range 20 to 60. Raising it lengthens the baseline horizon. Lowering it brings the slow track closer to price.
Logic
• Guide span. Typical range 4 to 12. A small guide smooths without eating turns.
• Blend power. Typical range 0.25 to 0.85. Raising it lets the drivers modulate gains more. Lowering it pushes behavior toward fixed EMA style smoothing.
• Vol window. Typical range 20 to 80. Larger values calm the volatility driver. Smaller values adapt faster in intraday work.
• Efficiency window. Typical range 10 to 60. Larger values focus on smoother trends. Smaller values react faster but accept more noise.
• Efficiency gamma. Typical range 0.8 to 2.0. Above one increases contrast between clean trends and chop. Below one flattens the curve.
• Min alpha multiplier. Typical range 0.30 to 0.80. Lower values increase smoothing when the mix is weak.
• Max alpha multiplier. Typical range 1.2 to 3.0. Higher values shorten smoothing when the mix is strong.
• Normalization window. Typical range 100 to 300. Larger values reduce drift in the baseline.
• Normalization mode. Z score, percent rank, or MAD Z. Use MAD Z for outlier heavy symbols.
• Clamp level. Typical range 2.0 to 4.0. Lower clamps reduce the influence of extreme runs.
Filters
• Efficiency filter is implicit in the gain map. Raising efficiency gamma and the efficiency window increases the preference for clean trends.
• Micro versus macro relation is handled by the fast and slow spans. Increase separation for swing, reduce for scalping.
• Location filter is not included in v1.0. If you need distance gates from a reference such as VWAP or a moving mean, add them before publication of a new version.
Alerts
• This version does not include alertcondition lines to keep the core minimal. If you prefer alerts, add names Long Polarity Up, Short Polarity Down, Exit Short on Flux Cross Up in a later version and select on bar close for conservative workflows.
The strategy is currently tuned for the QQQ asset on the 30 and 60 minute timeframes.
Other assets may require fresh optimization.
Properties visible in this publication
• Initial capital 25000
• Base currency Default
• Default order size method percent of equity with value 5
• Pyramiding 1
• Commission 0.05 percent
• Slippage 10 ticks
• Process orders on close ON
• Bar magnifier ON
• Recalculate after order is filled OFF
• Calc on every tick OFF
Honest limitations and failure modes
• Past results do not guarantee future outcomes
• Economic releases, circuit breakers, and thin books can break the assumptions behind intensity and efficiency
• Gap heavy symbols may benefit from the MAD Z normalization
• Very quiet regimes can reduce signal contrast. Use longer windows or higher guide span to stabilize context
• Session time is the exchange time of the chart
• If both stop and target can be hit in one bar, tie handling would matter. This strategy has no fixed stops or targets. It uses polarity flips for exits. If you add stops later, declare the preference
Open source reuse and credits
• None beyond public domain building blocks and Pine built ins such as EMA, SMA, standard deviation, RMA, and percent rank
• Method and fusion are original in construction and disclosure
Legal
Education and research only. Not investment advice. You are responsible for your decisions. Test on historical data and in simulation before any live use. Use realistic costs.
Strategy add on block
Strategy notice
Orders are simulated by the TradingView engine on standard candles. No request.security() calls are used.
Entries and exits
• Entry logic. Enter long when both the normalized flux and its guide line are above zero. Enter short when both are below zero
• Exit logic. When polarity flips from plus to minus, close any long and open a short. When the flux crosses above the guide line, close any short
• Risk model. No initial stop or target in v1.0. The model is a regime flipper. You can add a stop or trail in later versions if needed
• Tie handling. Not applicable in this version because there are no fixed stops or targets
Position sizing
• Percent of equity in the Properties panel. Five percent is the default for examples. Risk per trade should not exceed five to ten percent of equity. One to two percent is a common choice
Properties used on the published chart
• Initial capital 25000
• Base currency Default
• Default order size percent of equity with value 5
• Pyramiding 1
• Commission 0.05 percent
• Slippage 10 ticks
• Process orders on close ON
• Bar magnifier ON
• Recalculate after order is filled OFF
• Calc on every tick OFF
Dataset and sample size
• Test window Jan 2, 2014 to Oct 16, 2025 on QQQ one hour
• Trade count in sample 324 on the example chart
Release notes template for future updates
Version 1.1.
• Add alertcondition lines for long, short, and exit short
• Add optional table with component readouts
• Add optional stop model with a distance unit expressed as ATR or a percent of price
Notes. Backward compatibility Yes. Inputs migrated Yes.
Volume Profile + VAH, VAL, and POC
What it is
A clean, on-chart volume profile that approximates your visible range using a configurable Bars Back window. It builds a horizontal histogram of volume by price, splits each price bin into Buy vs Sell volume, draws POC, and computes Value Area High/Low (VAH/VAL). A Stealth Mode toggle switches to a subtle grayscale palette for low-key charts.
Why this instead of the built-in VPVR?
Buy/Sell split per bin: See which prices were defended by buyers vs sellers, not just total volume.
Value Area from POC outward: Classic expansion method until the selected % of total volume (default 70%).
Sleek borders & Stealth Mode: Crisp bin outlines and a one-click professional colorway.
Deterministic & fast: No sessions or anchors needed—set your Bars Back and go.
How it works (under the hood)
Window selection – Pine can’t read your viewport, so we approximate it with Bars Back (user input).
Binning – The window’s price range is divided into N bins.
Volume allocation – For each bar in the window:
Distribute Across Hi–Lo (optional): Spread volume across all bins the bar overlaps, weighted by overlap; or
Single-price mode: Assign all volume to one bin using a representative price (hlc3).
Buy/Sell split (two methods):
Body Proportional (recommended): Split by relative up/down body size (|close−open|).
Up/Down Candle: 100% buy if close ≥ open, else 100% sell.
POC & VA: Point of Control is the bin with max total volume. VAH/VAL expands from POC toward the higher-volume neighbor until the selected % of total volume is included.
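A Pine v5 sketch of the POC and value-area expansion from step 4, run on a tiny hard-coded profile; building binVol from real bars is omitted and all names are illustrative:

```pine
//@version=5
indicator("Value Area from POC (sketch)")
vaPct = input.float(0.70, "Value Area %", minval = 0.1, maxval = 1.0)

// expand from POC toward the higher-volume neighbor until vaPct of volume is covered
valueArea(binVol, pct) =>
    total = array.sum(binVol)
    poc   = array.indexof(binVol, array.max(binVol))
    int lo = poc
    int hi = poc
    float acc = array.get(binVol, poc)
    while acc < pct * total and (lo > 0 or hi < array.size(binVol) - 1)
        below = lo > 0 ? array.get(binVol, lo - 1) : -1.0
        above = hi < array.size(binVol) - 1 ? array.get(binVol, hi + 1) : -1.0
        if above >= below
            hi += 1
            acc += above
        else
            lo -= 1
            acc += below
    [lo, hi, poc]

// tiny demo profile: bin 3 holds the most volume, so it is the POC
var demo = array.from(2.0, 5.0, 9.0, 14.0, 8.0, 6.0, 3.0)
[valLo, valHi, pocBin] = valueArea(demo, vaPct)
plot(pocBin, "POC bin")
plot(valLo,  "VAL bin")
plot(valHi,  "VAH bin")
```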
Reading the visuals
Horizontal bars (right side): Total volume per price bin.
Left sub-segment = Sell volume
Right sub-segment = Buy volume
POC line: Price level with peak total volume.
VAH / VAL (dashed): Upper and lower bounds of the selected Value Area.
Borders: Each bin has a clean outer outline so the profile looks tight and organized.
Stealth Mode: Grayscale palette that preserves contrast without loud colors.
Key inputs (organized for clarity)
Theme
Stealth Mode: Toggles the grayscale look.
Core
Price Bins: Vertical resolution of the profile.
Lookback (Bars): Approximates your visible range.
Style
Profile Width (bars): How far the histogram extends to the right.
Bin Border Width: Outline thickness.
Markers & Lines
Show POC, Show VAH/VAL, Value Area %, VA line width.
Advanced
Distribute Volume Across Hi–Lo: More accurate, heavier compute.
Buy/Sell Split Method: Body Proportional (realistic) or Up/Down (simple).
Tips & best practices
Start with Body Proportional + Distribute Across ON for intraday accuracy.
If the chart lags, reduce Price Bins or Bars Back, or switch off distribution.
For small windows, fewer bins often looks cleaner (e.g., 30–60).
Stealth Mode plays nicely with both dark and light chart themes.
Limitations & notes
Viewport: Pine can’t access the actual visible bars; Bars Back is a practical stand-in.
Buy/Sell split: This is an approximation from candle bodies, not true bid/ask delta.
Designed for overlay; profile renders to the right of the latest bar.
Meta-LR Forecast
This indicator builds a forward-looking projection from the current bar by combining twelve time-compressed “mini forecasts.” Each forecast is a linear-regression-based outlook whose contribution is adaptively scaled by trend strength (via ADX) and normalized to each timeframe’s own volatility (via that timeframe’s ATR). The result is a 12-segment polyline that starts at the current price and extends one bar at a time into the future (1× through 12× the chart’s timeframe). Alongside the plotted path, the script computes two summary measures:
* Per-TF Bias% — a directional efficiency × R² score for each micro-forecast, expressed as a percent.
* Meta Bias% — the same score, but applied to the final, accumulated 12-step path. It summarizes how coherent and directional the combined projection is.
This tool is an indicator, not a strategy. It does not place orders. Nothing here is trade advice; it is a visual, quantitative framework to help you assess directional bias and trend context across a ladder of timeframe multiples.
The core engine fits a simple least-squares line on a normalized price series for each small forecast horizon and extrapolates one bar forward. That “trend” forecast is paired with its mirror, an “anti-trend” forecast, constructed around the current normalized price. The model then blends between these two wings according to current trend strength as measured by ADX.
ADX is transformed into a weight (w) in [−1, +1] using an adaptive band centered on the rolling mean (μ) with width derived from the standard deviation (σ) of ADX over a configurable lookback. When ADX is deeply below the lower band, the weight approaches -1, favoring anti-trend behavior. Inside the flat band, the weight is near zero, producing neutral behavior. Clearly above the upper band, the weight approaches +1, favoring a trend-following stance. The transitions between these regions are linear so the regime shift is smooth rather than abrupt.
You can shape how quickly the model commits to either wing using two exponents. One exponent controls how aggressively positive weights lean into the trend forecast; the other controls how aggressively negative weights lean into the anti-trend forecast. Raising these exponents makes the response more gradual; lowering them makes the shift more decisive. An optional switch can force full anti-trend behavior when ADX registers a deep-low condition far below the lower tail, if you prefer a categorical stance in very flat markets.
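A minimal Pine v5 sketch of the band-and-ramp weighting with the wing exponents; all defaults are placeholders rather than the indicator's values:

```pine
//@version=5
indicator("ADX regime weight (sketch)")
adxLen   = input.int(14,  "ADX length")
lookback = input.int(100, "Band lookback")
flatW    = input.float(0.5, "Flat half-width (sigma)")
tailW    = input.float(1.5, "Tail width beyond flat (sigma)")
posExp   = input.float(1.0, "Positive wing aggression")
negExp   = input.float(1.0, "Negative wing aggression")

[diP, diM, adxV] = ta.dmi(adxLen, adxLen)
mu = ta.sma(adxV, lookback)
sd = ta.stdev(adxV, lookback)
ts = math.max(tailW * sd, 1e-10)

float raw = 0.0
if adxV > mu + flatW * sd                       // above the flat band: trend wing
    raw := math.min((adxV - (mu + flatW * sd)) / ts, 1.0)
else if adxV < mu - flatW * sd                  // below the flat band: anti-trend wing
    raw := -math.min(((mu - flatW * sd) - adxV) / ts, 1.0)

// exponents shape how quickly each wing commits
w = raw >= 0 ? math.pow(raw, posExp) : -math.pow(-raw, negExp)
plot(w, "Regime weight w in [-1, +1]")
```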
A key design choice is volatility normalization. Every micro-forecast is computed in ATR units of its own timeframe. The script fetches that timeframe’s ATR inside each security call and converts normalized outputs back to price with that exact ATR. This avoids scaling higher-timeframe effects by the chart ATR or by square-root time approximations. Using “ATR-true” for each timeframe keeps the cross-timeframe accumulation consistent and dimensionally correct.
Bias% is defined as directional efficiency multiplied by R², expressed as a percent. Directional efficiency captures how much net progress occurred relative to the total path length; R² captures how well the path aligns with a straight line. If price meanders without net progress, efficiency drops; if the variation is well-explained by a line, R² rises. Multiplying the two penalizes choppy, low-signal paths and rewards sustained, coherent motion.
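A sketch of that score in Pine v5, keeping efficiency signed so Bias% spans roughly -100 to +100; the window and source are illustrative:

```pine
//@version=5
indicator("Bias% = efficiency x R-squared (sketch)")
n = input.int(20, "Window")

netMove = close - close[n]                             // net progress over the window
pathLen = math.sum(math.abs(nz(ta.change(close))), n)  // total path length
effDir  = nz(netMove / pathLen)                        // signed efficiency in [-1, +1]

r  = ta.correlation(close, bar_index, n)               // straight-line fit quality
r2 = r * r

plot(effDir * r2 * 100, "Bias %")
```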
The forward path is built by converting each per-timeframe Bias% into a small ATR-sized delta, then cumulatively adding those deltas to form a 12-step projection. This produces a polyline anchored at the current close and stepping forward one bar per timeframe multiple. Segment color flips by slope, allowing a quick read of the path’s direction and inflection.
Inputs you can tune include:
* Max Regression Length. Upper bound for each micro-forecast’s regression window. Larger values smooth the trend estimate at the cost of responsiveness; smaller values react faster but can add noise.
* Price Source. The price series analyzed (for example, close or typical price).
* ADX Length. Period used for the DMI/ADX calculation.
* ATR Length (normalization). Window used for ATR; this is applied per timeframe inside each security call.
* Band Lookback (for μ, σ). Lookback used to compute the adaptive ADX band statistics. Larger values stabilize the band; smaller values react more quickly.
* Flat half-width (σ). Width of the neutral band on both sides of μ. Wider flats spend more time neutral; narrower flats switch regimes more readily.
* Tail width beyond flat (σ). Distance from the flat band edge to the extreme trend/anti-trend zone. Larger tails create a longer ramp; smaller tails reach extremes sooner.
* Polyline Width. Visual thickness of the plotted segments.
* Negative Wing Aggression (anti-trend). Exponent shaping for negative weights; higher values soften the tilt into mean reversion.
* Positive Wing Aggression (trend). Exponent shaping for positive weights; lower values make trend commitment stronger and sooner.
* Force FULL Anti-Trend at Deep-Low ADX. Optional hard switch for extremely low ADX conditions.
On the chart you will see:
* A 12-segment forward polyline starting from the current close to bar_index + 1 … +12, with green segments for up-steps and red for down-steps.
* A small label at the latest bar showing Meta Bias% when available, or “n/a” when insufficient data exists.
Interpreting the readouts:
* Trend-following contexts are characterized by ADX above the adaptive upper band, pushing w toward +1. The blended forecast leans toward the regression extrapolation. A strongly positive Meta Bias% in this environment suggests directional alignment across the ladder of timeframes.
* Mean-reversion contexts occur when ADX is well below the lower tail, pushing w toward -1 (or forcing anti-trend if enabled). After a sharp advance, a negative Meta Bias% may indicate the model projects pullback tendencies.
* Neutral contexts occur when ADX sits inside the flat band; w is near zero, the blended forecast remains close to current price, and Meta Bias% tends to hover near zero.
These are analytical cues, not rules. Always corroborate with your broader process, including market structure, time-of-day behavior, liquidity conditions, and risk limits.
Practical usage patterns include:
* Momentum confirmation. Combine a rising Meta Bias% with higher-timeframe structure (such as higher highs and higher lows) to validate continuation setups. Treat the 12th step’s distance as a coarse sense of potential room rather than as a target.
* Fade filtering. If you prefer fading extremes, require ADX to be near or below the lower ramp before acting on counter-moves, and avoid fades when ADX is decisively above the upper band.
* Position planning. Because per-step deltas are ATR-scaled, the path’s vertical extent can be mentally mapped to typical noise for the instrument, informing stop distance choices. The script itself does not compute orders or size.
* Multi-timeframe alignment. Each step corresponds to a clean multiple of your chart timeframe, so the polyline visualizes how successively larger windows bias price, all referenced to the current bar.
House-rules and repainting disclosures:
* Indicator, not strategy. The script does not execute, manage, or suggest orders. It displays computed paths and bias scores for analysis only.
* No performance claims. Past behavior of any measure, including Meta Bias%, does not guarantee future results. There are no assurances of profitability.
* Higher-timeframe updates. Values obtained via security for higher-timeframe series can update intrabar until the higher-timeframe bar closes. The forward path and Meta Bias% may change during formation of a higher-timeframe candle. If you need confirmed higher-timeframe inputs, consider reading the prior higher-timeframe value or acting only after the higher-timeframe close.
* Data sufficiency. The model requires enough history to compute ATR, ADX statistics, and regression windows. On very young charts or illiquid symbols, parts of the readout can be unavailable until sufficient data accumulates.
* Volatility regimes. ATR normalization helps compare across timeframes, but unusual volatility regimes can make the path look deceptively flat or exaggerated. Judge the vertical scale relative to your instrument’s typical ATR.
Tuning tips:
* Stability versus responsiveness. Increase Max Regression Length to steady the micro-forecasts but accept slower response. If you lower it, consider slightly increasing Band Lookback so regime boundaries are not too jumpy.
* Regime bands. Widen the flat half-width to spend more time neutral, which can reduce over-trading tendencies in chop. Shrink the tail width if you want the model to commit to extremes sooner, at the cost of more false swings.
* Wing shaping. If anti-trend behavior feels too abrupt at low ADX, raise the negative wing exponent. If you want trend bias to kick in more decisively at high ADX, lower the positive wing exponent. Small changes have large effects.
* Forced anti-trend. Enable the deep-low option only if you explicitly want a categorical “markets are flat, fade moves” policy. Many users prefer leaving it off to keep regime decisions continuous.
Troubleshooting:
* Nothing plots or the label shows “n/a.” Ensure the chart has enough history for the ADX band statistics, ATR, and the regression windows. Exotic or illiquid symbols with missing data may starve the higher-timeframe computations. Try a more liquid market or a higher timeframe.
* Path flickers or shifts during the bar. This is expected when any higher-timeframe input is still forming. Wait for the higher-timeframe close for fully confirmed behavior, or modify the code to read prior values from the higher timeframe.
* Polyline looks too flat or too steep. Check the chart’s vertical scale and recent ATR regime. Adjust Max Regression Length, the wing exponents, or the band widths to suit the instrument.
Integration ideas for manual workflows:
* Confluence checklist. Use Meta Bias% as one of several independent checks, alongside structure, session context, and event risk. Act only when multiple cues align.
* Stop and target thinking. Because deltas are ATR-scaled at each timeframe, benchmark your proposed stops and targets against the forward steps’ magnitude. Stops that are much tighter than the prevailing ATR often sit inside normal noise.
* Session context. Consider session hours and microstructure. The same ADX value can imply different tradeability in different sessions, particularly in index futures and FX.
This indicator deliberately avoids:
* Fixed thresholds for buy or sell decisions. Markets vary and fixed numbers invite overfitting. Decide what constitutes “high enough” Meta Bias% for your market and timeframe.
* Automatic risk sizing. Proper sizing depends on account parameters, instrument specifications, and personal risk tolerance. Keep that decision in your risk plan, not in a visual bias tool.
* Claims of edge. These measures summarize path geometry and trend context; they do not ensure a tradable edge on their own.
Summary of how to think about the output:
* The script builds a 12-step forward path by stacking linear-regression micro-forecasts across increasing multiples of the chart timeframe.
* Each micro-forecast is blended between trend and anti-trend using an adaptive ADX band with separate aggression controls for positive and negative regimes.
* All computations are done in ATR-true units for each timeframe before reconversion to price, ensuring dimensional consistency when accumulating steps.
* Bias% (per-timeframe and Meta) condenses directional efficiency and trend fidelity into a compact score.
* The output is designed to serve as an analytical overlay that helps assess whether conditions look trend-friendly, fade-friendly, or neutral, while acknowledging higher-timeframe update behavior and avoiding prescriptive trade rules.
Use this tool as one component within a disciplined process that includes independent confirmation, event awareness, and robust risk management.
ABS NR — Fail-Safe Confirm (v4.2.2)
# ABS NR — Fail-Safe Confirm (v4.2.2)
## What it is (quick take)
**ABS NR FS** is a **non-repainting “arm → confirm” entry framework** for intraday and swing execution. It blends:
* **Regime** (EMA stack + 60-min slope),
* **Location** (Keltner basis/edges),
* **Stretch** (session-anchored **VWAP Z-score**),
* **Momentum gating** (TSI cross/slope),
* **Guards** (session window, minimum ATR%, gap filter, optional market alignment).
You’ll see a **small dot** when a setup is **armed** (candidate) and a **triangle** when that setup **confirms** within a user-defined number of bars. A **gray “X”** marks a timeout (candidate canceled).
> Tip: This entry tool works best when paired with a trend context filter and a dedicated exit tool.
---
## How to use it (operational workflow)
1. **Read the regime**
* **Bull trend**: fast > slow > long EMA **and** 60-min slope up.
* **Bear trend**: fast < slow < long EMA **and** 60-min slope down.
* **Range**: neither bull nor bear.
2. **Wait for a candidate (dot)**
Two families:
* **Reclaim (trend-following):** price crosses the **KC basis** with acceptable |Z| (not overstretched) and passes the TSI gate.
* **Fade (range-revert):** price **pokes a KC band**, prints a **reversal wick**, |Z| is stretched, and TSI gate agrees.
3. **Trade the confirmation (triangle)**
The confirm must occur **within N bars** and follow your chosen **Confirm mode** logic (see Inputs). If confirmation doesn’t arrive in time, an **X** cancels the candidate.
4. **Use guards to avoid junk**
Session windows (US focus), minimum ATR%, gap guard, and optional **market alignment** (e.g., SPY above EMA20 for longs).
5. **Manage the position**
* Entries: take **triangles** in the direction of your playbook (reclaims with trend; fades in clean ranges).
* Filters and exits: use your own process or pair with a trend/exit companion.
---
## Visual semantics & alerts
* **Candidate L / S (dot)** → a setup armed on this bar.
* **CONFIRM L / S (triangle)** → actionable signal that met confirm rules within your time window.
* **Cancel L / S (X)** → candidate expired without confirmation; ignore the dot.
**Alerts (stable names for automation):**
* **ABS FS — Confirmed** → fires on confirmed long or short.
* **ABS FS — Candidate Armed** → fires as a candidate arms.
---
## Non-repainting behavior (why signals don’t repaint)
* All HTF requests use **lookahead\_off**.
* With **Strict NR = true**, the 60-min slope uses the **prior completed** 60-min bar and arming/confirming only occurs on confirmed bars.
* Confirmation triangles finalize on bar close.
* If you disable strictness, signals may appear slightly earlier but with more intrabar sensitivity.
---
## Inputs reference (what each control does and the trade-offs)
### A) Behavior / Modes
**Mode** (`Turbo / Aggressive / Balanced / Conservative`)
Changes multiple internal thresholds:
* **Turbo** → most signals; relaxes prior-bar break & VWAP-side checks and time/vol/gap guards. Highest frequency, highest noise.
* **Aggressive** → more signals than Balanced, fewer than Turbo.
* **Balanced** → default; steady trade-off of frequency vs. quality.
* **Conservative** → tightens |Z| and other checks; fewest but cleanest signals.
**Strict NR (bar close + prior HTF 60m)**
* **true** = safer: uses prior 60-min slope; arms/confirms on confirmed bars → **fewer/cleaner** signals.
* **false** = earlier and more reactive; slightly noisier.
---
### B) Keltner Channel (location engine)
* **KC EMA Length (`kcLen`)**
Higher → smoother basis (fewer basis crosses). Lower → snappier basis (more crosses).
* **ATR Length (`atrLen`)**
Higher → steadier band width; Lower → more reactive band width.
* **KC ATR Mult (`kcMult`)**
Higher → wider bands (fewer edge pokes → fewer fades). Lower → narrower (more fades).
---
### C) Trend & HTF slope
* **Trend EMA Fast/Slow/Long (`emaFastLen / emaSlowLen / emaLongLen`)**
Larger = slower regime flips (fewer reclaims); smaller = faster flips (more reclaims).
* **HTF EMA Len (60m) (`htfLen`)**
Larger = steadier HTF slope (fewer signals); smaller = more sensitive (more signals).
---
### D) VWAP Z-Score (stretch / mean-revert logic)
* **VWAP Z-Length (`zLen`)**
Window for Z over session-anchored VWAP distance. Larger = smoother |Z| (fewer fades/re-entries). Smaller = more reactive (more).
* **Range Fade |Z| (base) (`zFadeBase`)**
Minimum |Z| to allow **fades** in ranges. Raise to demand more stretch (fewer fades). Lower to take more fades.
* **Max |Z| Trend Re-entry (base) (`maxZTrendBase`)**
Caps how stretched price can be and still permit **reclaims** with trend. Lower = stricter (avoid chases). Higher = will chase further.
---
### E) TSI Momentum Gate
* **TSI Long/Short/Signal (`tsiLong / tsiShort / tsiSig`)**
Larger = smoother/laggier momentum; smaller = snappier.
* **TSI gate (`CrossOnly / CrossOrSlope / Off`)**
* **CrossOnly**: require TSI cross of its signal (strict).
* **CrossOrSlope**: cross *or* favorable slope (balanced default).
* **Off**: no momentum gate (most signals, most noise).
---
### F) Guards (filters to avoid low-quality tape)
* **US focus 09:35–10:30 & 14:00–15:45 (base) (`useTimeBase`)**
`true` limits to high-quality windows. `false` trades all session.
* **Skip N bars after 09:30 ET (`skipFirst`)**
Skips the open scramble. Larger = skip longer.
* **Min volatility ATR% (base)** = `useVolMinBase` + `atrPctMinBase`
Requires `ATR(10)/Close*100 ≥ atrPctMinBase`. Raise threshold to avoid dead tape; lower to accept quieter sessions.
* **Gap guard (base)** = `gapGuardBase` + `gapMul`
Blocks signals when the opening gap exceeds `gapMul * ATR`. Increase `gapMul` to allow more gapped opens; decrease to be stricter.
---
### G) Visuals & Sides
* **Plot Keltner (`plotKC`)** → show/hide basis & bands.
* **Show Longs / Show Shorts** → enable/disable each side.
---
### H) Fail-Safe Confirmation
* **Confirm mode (`BreakHighOnly / BreakHigh+Hold / TwoBarImpulse`)**
* **BreakHighOnly**: confirm by taking out the armed bar’s extreme. Fastest, most frequent.
* **BreakHigh+Hold**: must **break**, have **body ≥ X·ATR**, **and** hold above/below the basis → higher quality, fewer signals (sketched at the end of this section).
* **TwoBarImpulse**: decisive follow-through vs. prior bar with **body ≥ X·ATR** → momentum-biased confirmations.
* **Confirm within N bars (`confirmBars`)**
Confirmation window size. Smaller = faster validation; larger = more patience (can be later).
* **Impulse body ≥ X·ATR (`impulseBodyATR`)**
Raise for stronger confirmations (fewer weak triangles). Lower to accept lighter pushes.
* **Require market alignment (`needMarket`) + `marketTicker`**
When enabled: Longs require **market > EMA20 (5m)**; Shorts require **market < EMA20 (5m)**.
* **Diagnostics: Show debug letters (`debug`)**
Tiny “B/C” audit marks for base/confirm while tuning.
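A stripped-down Pine v5 sketch of the arm-then-confirm window using the BreakHigh+Hold idea; the EMA-crossover candidate trigger and all defaults are stand-ins for the indicator's real logic:

```pine
//@version=5
indicator("Arm-confirm window (sketch)", overlay = true)
confirmBars    = input.int(2, "Confirm within N bars")
impulseBodyATR = input.float(0.35, "Impulse body >= X*ATR")

atr   = ta.atr(14)
basis = ta.ema(close, 20)                  // stand-in for the Keltner basis
armed = ta.crossover(close, basis)         // stand-in "candidate armed" trigger

var int   armBar  = na
var float armHigh = na
if armed
    armBar  := bar_index
    armHigh := high

live = not na(armBar) and bar_index - armBar <= confirmBars
body = math.abs(close - open)
// BreakHigh+Hold: break the armed bar's high, with body >= X*ATR, holding above basis
confirmL = live and high > armHigh and body >= impulseBodyATR * atr and close > basis
if confirmL or (not na(armBar) and bar_index - armBar > confirmBars)
    armBar := na                           // consume the candidate or time it out

plotshape(armed,    "Candidate L", shape.circle,     location.belowbar, color.gray)
plotshape(confirmL, "Confirm L",   shape.triangleup, location.belowbar, color.green)
```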
---
## Tuning recipes (quick, practical)
* **If you’re getting chopped:**
* Set **Mode = Conservative**
* **Confirm mode = BreakHigh+Hold**
* Raise **impulseBodyATR** (e.g., 0.45)
* Keep **needMarket = true**
* Keep **Strict NR = true**
* **If you need more signals:**
* **Mode = Aggressive** (or Turbo if you accept more noise)
* **Confirm mode = BreakHighOnly**
* Lower **impulseBodyATR** (0.25–0.30)
* Increase **confirmBars** to 3
* **Range-day focus (fades):**
* Keep session guard on
* Raise **zFadeBase** to demand real stretch
* Keep **maxZTrendBase** moderate (don’t chase)
* **Trend-day focus (reclaims):**
* Slightly **lower `maxZTrendBase`** (avoid chasing excessive stretch)
* Use **CrossOrSlope** TSI gating
* Consider turning **needMarket** on
---
## Best practices & notes
* **Instrument specificity:** Tune Z, TSI, and guards per symbol and timeframe.
* **Session awareness:** Session filter uses **exchange-local** time; adjust for non-US markets.
* **Automation:** Use the two provided alert names; they’re stable.
* **Risk management:** Confirmation improves quality but doesn’t remove risk. Always pre-define stop/size logic.
---
## Suggested starting point (balanced profile)
* **Mode = balanced**
* **Strict NR = true**
* **Confirm mode = BreakHigh+Hold**
* **confirmBars = 2**
* **impulseBodyATR ≈ 0.35**
* **needMarket = off** (turn on for extra confluence)
* Leave Keltner/TSI defaults; then nudge `zFadeBase` and `maxZTrendBase` to match your symbol.
---
*This tool is a signal generator, not a broker or strategy. Validate on your markets/timeframes and integrate with your risk plan.*
Price Imbalance as Consecutive Levels of Averages
Overview
The Price Imbalance as Consecutive Levels of Averages indicator is an advanced technical analysis tool designed to identify and visualize price imbalances in financial markets. Unlike traditional moving average (MA) indicators that update continuously with each new price bar, this indicator employs moving averages calculated over consecutive, non-overlapping historical windows. This unique approach leverages comparative historical data to provide deeper insights into trend strength and potential reversals, offering traders a more nuanced understanding of market dynamics and reducing the likelihood of false signals or fakeouts.
Key Features
Consecutive Rolling Moving Averages: Utilizes three distinct simple moving averages (SMAs) calculated over consecutive, non-overlapping windows to capture different historical segments of price data.
Dynamic Color-Coded Visualization: SMA lines change color and style based on the relationship between the averages, highlighting both extreme and normal market conditions.
Median and Secondary Median Lines: Provides additional layers of price distribution insight during normal trend conditions through the plotting of primary and secondary median lines.
Fakeout Prevention: Filters out short-term volatility and sharp price movements by requiring consistent historical alignment of multiple moving averages.
Customizable Parameters: Offers flexibility to adjust SMA window lengths and line extensions to align with various trading strategies and timeframes.
Real-Time Updates with Historical Context: Continuously recalculates and updates SMA lines based on comparative historical windows, ensuring that the indicator reflects both current and past market conditions.
Inputs & Settings
Rolling Window Lengths:
Window 1 Length (Most Recent) Bars: Number of bars used to calculate the most recent SMA. (Default: 5, Range: 2–300)
Window 2 Length (Preceding) Bars: Number of bars for the second SMA, shifted by Window 1. (Default: 8, Range: 2–300)
Window 3 Length (Third Rolling) Bars: Number of bars for the third SMA, shifted by the combined lengths of Window 1 and Window 2. (Default: 13, Range: 2–300)
Horizontal Line Extension:
Horizontal Line Extension (Bars): Determines how far each SMA line extends horizontally on the chart. (Default: 10 bars, Range: 1–100)
Functionality and Theory
1. Calculating Consecutive Simple Moving Averages (SMAs):
The indicator calculates three SMAs, each based on distinct and consecutive historical windows of price data. This approach contrasts with traditional MAs that continuously update with each new price bar, offering a static view of past trends rather than an ongoing one.
Mean1 (SMA1): Calculated over the most recent Window 1 Length bars. Represents the short-term trend.
\text{Mean1} = \frac{\sum_{i=1}^{N_1} \text{Close}_i}{N_1}
Where N_1 is the length of Window 1.
Mean2 (SMA2): Calculated over the preceding Window 2 Length bars, shifted back by Window 1 Length bars. Represents the medium-term trend.
\text{Mean2} = \frac{\sum_{i=1}^{N_2} \text{Close}_{i + N_1}}{N_2}
Where N_2 is the length of Window 2.
Mean3 (SMA3): Calculated over the third rolling Window 3 Length bars, shifted back by the combined lengths of Window 1 and Window 2 bars. Represents the long-term trend.
\text{Mean3} = \frac{\sum_{i=1}^{N_3} \text{Close}_{i + N_1 + N_2}}{N_3}
Where N_3 is the length of Window 3.
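A minimal Pine sketch of these consecutive, non-overlapping windows (my own illustration of the formulas above, not the published source; shifting the series by the preceding window lengths reproduces an SMA over each earlier segment):

```pine
//@version=6
indicator("Consecutive-window SMAs sketch", overlay = true)

n1 = input.int(5,  "Window 1 Length (Most Recent)",   minval = 2, maxval = 300)
n2 = input.int(8,  "Window 2 Length (Preceding)",     minval = 2, maxval = 300)
n3 = input.int(13, "Window 3 Length (Third Rolling)", minval = 2, maxval = 300)

mean1 = ta.sma(close, n1)            // most recent n1 bars
mean2 = ta.sma(close[n1], n2)        // the n2 bars before Window 1
mean3 = ta.sma(close[n1 + n2], n3)   // the n3 bars before Windows 1 and 2

plot(mean1, "Mean1", color.green)
plot(mean2, "Mean2", color.orange)
plot(mean3, "Mean3", color.gray)
```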
2. Determining Market Conditions:
The relationship between the three SMAs categorizes the market condition into either extreme or normal states, enabling traders to quickly assess trend strength and potential reversals.
Extreme Bullish:
Mean1 > Mean2 > Mean3
Indicates a strong and sustained upward trend. SMA lines are colored gold and styled as dashed lines.
Normal Bullish:
Mean1 > Mean2, and not in the extreme bullish condition
Indicates a standard upward trend. SMA lines are colored green and styled as solid lines.
Extreme Bearish:
Mean3 > Mean2 > Mean1
Indicates a strong and sustained downward trend. SMA lines are colored purple and styled as dashed lines.
Normal Bearish:
Mean1 < Mean2, and not in the extreme bearish condition
Indicates a standard downward trend. SMA lines are colored red and styled as solid lines.
(A compact Pine sketch of this classification follows the detailed interpretations below.)
3. Market Condition Interpretations:
1. Extreme Bullish Condition:
SMAs Alignment: Mean1 > Mean2 > Mean3
Visualization: All three SMAs are displayed as gold dashed lines.
Median Lines: Not displayed to maintain chart clarity.
Interpretation: Indicates a strong and sustained upward trend. Traders may consider entering long positions, confident in the trend's strength without the distraction of additional lines.
2. Normal Bullish Condition:
SMAs Alignment: Mean1 > Mean2 (not in extreme condition)
Visualization: Mean1 and Mean2 are green solid lines; Mean3 is gray.
Median Lines: A thin blue dotted median line is plotted between Mean1 and Mean2, with two additional thin blue dashed lines as secondary medians.
Interpretation: Confirms an upward trend while providing deeper insights into price distribution. Traders can use the median and secondary median lines to identify optimal entry points and manage risk more effectively.
3. Extreme Bearish Condition:
SMAs Alignment: Mean3 > Mean2 > Mean1
Visualization: All three SMAs are displayed as purple dashed lines.
Median Lines: Not displayed to maintain chart clarity.
Interpretation: Indicates a strong and sustained downward trend. Traders may consider entering short positions, confident in the trend's strength without the distraction of additional lines.
4. Normal Bearish Condition:
SMAs Alignment: Mean1 < Mean2 (not in extreme condition)
Visualization: Mean1 and Mean2 are red solid lines; Mean3 is gray.
Median Lines: A thin blue dotted median line is plotted between Mean1 and Mean2, with two additional thin blue dashed lines as secondary medians.
Interpretation: Confirms a downward trend while providing deeper insights into price distribution. Traders can use the median and secondary median lines to identify optimal entry points and manage risk more effectively.
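The regime logic above reduces to a few comparisons. A compact sketch, reusing the shifted SMAs from the earlier example; the colors approximate the description and are not the published script's exact palette:

```pine
//@version=6
indicator("Regime classification sketch", overlay = true)

n1 = 5
n2 = 8
n3 = 13
mean1 = ta.sma(close, n1)
mean2 = ta.sma(close[n1], n2)
mean3 = ta.sma(close[n1 + n2], n3)

extremeBullish = mean1 > mean2 and mean2 > mean3
extremeBearish = mean3 > mean2 and mean2 > mean1
normalBullish  = mean1 > mean2 and not extremeBullish
normalBearish  = mean1 < mean2 and not extremeBearish

lineColor = extremeBullish ? color.rgb(212, 175, 55) : extremeBearish ? color.purple : normalBullish ? color.green : normalBearish ? color.red : color.gray
plot(mean1, "Mean1", lineColor)
```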
Customization and Flexibility
The Price Imbalance as Consecutive Levels of Averages indicator is highly adaptable, allowing traders to tailor it to their specific trading styles and market conditions through adjustable parameters:
SMA Window Lengths: Modify the lengths of Window 1, Window 2, and Window 3 to capture different historical trend segments, whether focusing on short-term fluctuations or long-term movements.
Line Extension: Adjust the horizontal extension of SMA and median lines to align with different trading horizons and chart preferences.
Color and Style Preferences: While default colors and styles are optimized for clarity, traders can customize these elements to match their personal chart aesthetics and enhance visual differentiation.
This flexibility ensures that the indicator remains versatile and applicable across various markets, asset classes, and trading strategies, providing valuable insights tailored to individual trading needs.
Conclusion
The Price Imbalance as Consecutive Levels of Averages indicator offers a comprehensive and innovative approach to analyzing price trends and imbalances within financial markets. By utilizing three consecutive, non-overlapping SMAs and incorporating median lines during normal trend conditions, the indicator provides clear and actionable insights into trend strength and price distribution. Its unique design leverages comparative historical data, distinguishing it from traditional moving averages and enhancing its utility in identifying genuine market movements while minimizing false signals. This dynamic and customizable tool empowers traders to refine their technical analysis, optimize their trading strategies, and navigate the markets with greater confidence and precision.
FluxGate Daily Swing Strategy
Summary in one paragraph
FluxGate treats long and short as different ecosystems. It runs two independent engines so the long side can be bold when the tape rewards upside persistence while the short side can stay selective when downside is messy. The core reads three directional drivers from price geometry, then removes overlap before gating with clean path checks. The complementary risk module anchors stop distance to a higher timeframe ATR so a unit means the same thing on SPY and BTC. It can add take profit, breakeven, and an ATR trail that only activates after the trade earns it. If a stop is hit, the strategy can re-enter in the same direction on the next bar, with a daily retry cap that you control. Add it to a clean chart. Use defaults to see the intended behavior. For conservative workflows, evaluate on bar close.
Scope and intent
• Markets. Large cap equities and liquid ETFs major FX pairs US index futures and liquid crypto pairs
• Timeframes. From one minute to daily
• Default demo in this publication. SPY on one day timeframe
• Purpose. Reduce false starts without missing sustained trends by fusing independent drivers and suppressing activity when the path is noisy
• Limits. This is a strategy. Orders are simulated on standard candles. Non standard chart types are not supported for execution
Originality and usefulness
• Unique fusion. FluxGate extracts three drivers that look at price from different angles. Direction measures slope of a smoothed guide and scales by realized volatility so a point of slope does not mean a different thing on different symbols. Persistence looks at short sign agreement to reward series of closes that keep direction. Curvature measures the second difference of a local fit to wake up during convex pushes. These three are then orthonormalized so a strong reading in one does not double count through another.
• Gates that matter. Efficiency ratio prefers direct paths over treadmills. Entropy turns up versus down frequency into an information read. Light fractal cohesion punishes wrinkly paths. Together they slow the system in chop and allow it to open up when the path is clean.
• Separate long and short engines. Threshold tilts adapt to the skew of score excursions. That lets long engage earlier when upside distribution supports it and keeps short cautious where downside surprise and venue frictions are common.
• Practical risk behavior. Stops are ATR anchored on a higher timeframe so the unit is portable. Take profit is expressed in R so two R means the same concept across symbols. Breakeven and trailing only activate after a chosen R so early noise does not squeeze a good entry. Re entry after stop lets the system try again without you babysitting the chart.
• Testability. Every major window and the aggression controls live in Inputs. There is no hidden magic number.
Method overview in plain language
Base measures
• Return basis. Natural log of close over prior close for stability and easy aggregation through time. Realized volatility is the standard deviation of returns over a moving window.
• Range basis for risk. ATR computed on a higher timeframe anchor such as day week or month. That anchor is steady across venues and avoids chasing chart specific quirks.
Components
• Directional intensity. Use an EMA of typical price as a guide. Take the day to day slope as raw direction. Divide by realized volatility to get a unit free measure. Soft clip to keep outliers from dominating.
• Persistence. Encode whether each bar closed up or down. Measure short sign agreement so a string of higher closes scores better than a jittery sequence. This favors push continuity without guessing tops or bottoms.
• Curvature. Fit a short linear regression and compute the second difference of the fitted series. Strong curvature flags acceleration that slope alone may miss.
• Efficiency gate. Compare net move to path length over a gate window. Values near one indicate direct paths. Values near zero indicate treadmill behavior. A sketch of the gates follows this list.
• Entropy gate. Convert up versus down frequency into a probability of direction. High entropy means coin toss. The gate narrows there.
• Fractal cohesion. A light read of path wrinkliness relative to span. Lower cohesion reduces the urge to act.
• Phase assist. Map price inside a recent channel to a small signed bias that grows with confidence. This helps entries lean toward the right half of the channel without becoming a breakout rule.
• Shock control. Compare short volatility to long volatility. When short term volatility spikes the shock gate temporarily damps activity so the system waits for pressure to normalize.
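A minimal Pine sketch of the return basis and the two path-quality gates described above. These are standard constructions under my reading of the description; the window names are hypothetical:

```pine
//@version=6
indicator("Path-quality gates sketch")

volLen  = input.int(20, "Realized vol window")
gateLen = input.int(10, "Gate window")

// Return basis: log returns; realized vol is their rolling stdev
r  = math.log(close / close[1])
rv = ta.stdev(r, volLen)

// Efficiency gate: net move vs. path length (near 1 = direct path)
netMove = math.abs(close - close[gateLen])
pathLen = math.sum(math.abs(close - close[1]), gateLen)
er      = pathLen > 0 ? netMove / pathLen : 0.0

// Entropy gate: up/down frequency as an information read (1 = coin toss)
pUp     = math.sum(close > close[1] ? 1.0 : 0.0, gateLen) / gateLen
entropy = pUp <= 0 or pUp >= 1 ? 0.0 : -(pUp * math.log(pUp) + (1 - pUp) * math.log(1 - pUp)) / math.log(2)

plot(er, "Efficiency", color.teal)
plot(entropy, "Entropy", color.orange)
```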
Fusion rule
• Normalize the three drivers after removing overlap
• Blend with weights that adapt to your aggression input
• Multiply by the gates to respect path quality
• Smooth just enough to avoid jitter while keeping timing responsive
• Compute an adaptive mean and deviation of the score and set separate long and short thresholds with a small tilt informed by skew sign
• The result is one long score and one short score that can cross their thresholds at different times for the same tape which is a feature not a bug
Signal rule
• A long suggestion appears when the long score crosses above its long threshold while all gates are active
• A short suggestion appears when the short score crosses below its short threshold while all gates are active
• If any required gate is missing the state is wait
• When a position is open the status is in long or in short until the complementary risk engine exits or your entry mode closes and flips
Inputs with guidance
Setup Long
• Base length Long. Master window for the long engine. Typical range twenty four to eighty. Raising it improves selectivity and reduces trade count. Lowering it reacts faster but can increase noise
• Aggression Long. Zero to one. Higher values make thresholds more permissive and shorten smoothing
Setup Short
• Base length Short. Master window for the short engine. Typical range twenty eight to ninety six
• Aggression Short. Zero to one. Lower values keep shorts conservative which is often useful on upward drifting symbols
Entries and UI
• Entry mode. Both or Long only or Short only
Complementary risk engine
• Enable risk engine. Turns on bracket exits while keeping your signal logic untouched
• ATR anchor timeframe. Day Week or Month. This sets the structural unit of stop distance. A sketch of the bracket math follows this list.
• ATR length. Default fourteen
• Stop multiple. Default one point five times the anchor ATR
• Use take profit. On by default
• Take profit in R. Default two R
• Breakeven trigger in R. Default one R
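A minimal sketch of the bracket logic: one R is the anchored stop distance and the target is expressed in R. The EMA-cross entry is a placeholder of my own, since the real engines gate entries on their fused scores:

```pine
//@version=6
strategy("FluxGate risk sketch", overlay = true)

anchorTf = input.timeframe("W", "ATR anchor timeframe")
atrLen   = input.int(14, "ATR length")
stopMult = input.float(1.5, "Stop multiple")
tpR      = input.float(2.0, "Take profit in R")

// Structural stop unit: ATR from the anchor timeframe, no lookahead
htfAtr   = request.security(syminfo.tickerid, anchorTf, ta.atr(atrLen), lookahead = barmerge.lookahead_off)
stopDist = stopMult * htfAtr

// Placeholder entry, not the strategy's actual signal
if ta.crossover(ta.ema(close, 9), ta.ema(close, 21))
    strategy.entry("long", strategy.long)

if strategy.position_size > 0
    strategy.exit("long exit", from_entry = "long", stop = strategy.position_avg_price - stopDist, limit = strategy.position_avg_price + tpR * stopDist)
```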
Usage recipes
Intraday trend focus
• Entry mode Both
• ATR anchor Week
• Aggression Long zero point five Aggression Short zero point three
• Stop multiple one point five Take profit two R
• Expect fewer trades that stick to directional pushes and skip treadmill noise
Intraday mean reversion focus
• Session windows optional if you add them in your copy
• ATR anchor Day
• Lower aggression both sides
• Breakeven later and trailing later so the first bounce has room
• This favors fade entries that still convert into trends when the path stays clean
Swing continuation
• Signal timeframe four hours or one day
• Confirm timeframe one day if you choose to include bias
• ATR anchor Week or Month
• Larger base windows and a steady two R target
• This accepts fewer entries and aims for larger holds
Properties visible in this publication
• Initial capital 25,000
• Base currency USD
• Default order size three percent of equity (3%)
• Pyramiding zero
• Commission zero point zero three percent (0.03%) per order
• Slippage five ticks
• Process orders on close off
• Recalculate after order is filled off
• Calc on every tick off
• Bar magnifier off
• All request.security calls use lookahead off everywhere
Realism and responsible publication
• No performance promises. Past results never guarantee future outcomes
• Fills and slippage vary by venue and feed
• Strategies run on standard candles only
• Shapes can update while a bar is forming and settle on close
• Keep risk per trade sensible. Around one percent is typical for study. Above five to ten percent is rarely sustainable
Honest limitations and failure modes
• Sudden news and thin liquidity can break assumptions behind entropy and cohesion reads
• Gap heavy symbols often behave better with a True Range basis for risk than a simple range
• Very quiet regimes can reduce score contrast. Consider longer windows or higher thresholds when markets sleep
• Session windows follow the exchange time of the chart if you add them
• If stop and target can both be inside a single bar this strategy prefers stop first to keep accounting conservative
Open source reuse and credits
• No reused open source beyond public domain building blocks such as ATR EMA and linear regression concepts
Legal
Education and research only. Not investment advice. You are responsible for your decisions. Test on history and in simulation with realistic costs
EMP Probabilistic [CHE]
Part 1 — For Traders (Practical Overview, no formulas)
What this tool does
EMP Probabilistic [CHE] turns raw price action into a clean, probability-aware map. It builds two adaptive bands around the session open of a higher timeframe you choose (called the S-timeframe) and highlights a robust median threshold. At a glance you know:
Where price has recently tended to stay,
Whether current momentum sits above or below the median, and
A live Long vs. Short probability based on recent outcomes.
Why it improves decisions
Objective context in any regime: The nonparametric band comes straight from recent market behavior, without assuming a particular distribution.
Volatility-aware risk lens: The parametric band adapts to current volatility, helping you judge stretch and room for continuation or snap-back.
No lookahead: All stats update only after an S-bar is finished. That means the panel reflects information you truly had at that time.
How to read the chart
Orange band = empirical, distribution-free range derived from recent session returns (nonparametric).
Teal band = volatility-scaled range around the session open (parametric).
Median dots: green when close is above the median threshold, red when below.
Info panel: shows the active S-timeframe, window sizes, live coverage for both bands, the internal width parameter and volatility estimate, plus a one-line summary.
Probability label: “Long XX% • Short YY%” — a simple read on the recent balance of up vs. down S-bars.
How to use it (quick start)
1. Choose S-timeframe with Auto, Multiplier, or Manual. “Auto” scales your chart TF up to a sensible higher step.
2. Set alpha to control how tight the inner band should be. A typical value gives you a comfortable center zone without cutting off healthy trends.
3. Trade the context:
Trend-following: Prefer longs when price holds above the median; prefer shorts when it stays below.
Mean-reversion: Fade moves near the outer edges during ranges; look for reversion back toward the median.
Breakout filter: Require closes that push and hold beyond the volatility band for momentum plays; avoid noise when price chops inside the middle of the orange band.
Risk management made practical
Size positions relative to the teal band width to keep risk consistent across instruments and regimes.
For stops, many traders set them just beyond the opposite orange bound or use a fraction of the teal band.
Watch the panel’s coverage readouts and Brier score; when they deteriorate, the market may be shifting — reduce size or demand stronger confirmation.
Suggested presets
Scalping (Crypto/FX): Auto S-TF, alpha around a fifth, calibration window near two hundred, RS volatility, metrics window near two hundred.
Intraday Futures: Multiplier 3–5× your chart TF; similar alpha and window sizes; RS volatility is a solid default.
Swing/Equities: S-TF at least daily; test both RS and GK volatility modes; keep windows on the larger side for stability.
What makes it different
Two complementary lenses: a distribution-free read of recent behavior and a volatility-scaled read for risk and stretch.
Self-calibrating width: the parametric band quietly nudges its internal multiplier so actual coverage tracks your target.
Clean UX: grouped inputs, tooltips, an info panel that tells you what’s going on, and a simple median bias you can act on.
Repainting & timing
The logic updates only when the S-bar closes. On lower-timeframe charts you’ll see intrabar flips of the dot color — that’s just live price moving around. For strict signals, confirm on S-bar close.
Friendly note (not financial advice)
Use this as a context engine. It won’t predict the future, but it will keep you on the right side of probability and volatility more often, which is exactly where consistency starts.
Part 2 — Under the Hood (Conceptual, no formulas)
Data and timeframe design
The script works on a higher S-timeframe you select. It fetches the open, high, low, close, and time of that S-bar. Internally, it only updates its rolling windows after an S-bar has finished. It then pushes the previous S-bar’s statistics into its arrays. That design removes lookahead and keeps the metrics out-of-sample relative to the current S-bar.
Nonparametric band (distribution-free)
The orange band comes from the empirical distribution of recent session-level close-minus-open moves. The script keeps a rolling window, sorts a safe copy, and reads three key points: a lower bound, a median, and an upper bound. Because it’s based purely on observed outcomes, it adapts naturally to skew, fat tails, and regime shifts without assuming any particular shape. The orange range shows “where price has tended to live” lately on the chosen S-timeframe.
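A minimal sketch of that empirical read, collected on confirmed bars. The full tool works per closed S-bar; this illustration uses chart bars, and the window and alpha names are hypothetical:

```pine
//@version=6
indicator("Empirical band sketch", overlay = true)

win   = input.int(200, "Calibration window")
alpha = input.float(0.20, "Alpha")

// Rolling window of completed close-minus-open moves
var moves = array.new_float()
if barstate.isconfirmed
    array.push(moves, close - open)
    if array.size(moves) > win
        array.shift(moves)

// Sort a safe copy and read lower bound, median, upper bound
q(p) =>
    s = array.copy(moves)
    array.sort(s)
    n = array.size(s)
    n > 0 ? array.get(s, int(math.floor(p * (n - 1)))) : na

plot(open + q(1 - alpha / 2), "Upper",  color.orange)
plot(open + q(0.5),           "Median", color.white)
plot(open + q(alpha / 2),     "Lower",  color.orange)
```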
Parametric band (volatility-scaled)
The teal band models log-space variability around the session open using one of two well-known OHLC volatility estimators: Rogers–Satchell or Garman–Klass. Each estimator contributes a per-bar variance figure; the script averages these across the rolling window to form a current volatility scale. It then builds a symmetric band around the session open in price space. This gives you a volatility-aware notion of stretch that complements the distribution-free orange band.
Self-calibration of band width
The teal band has an internal width multiplier. After each completed S-bar the script checks whether the realized move stayed inside that band. If the band was too tight, the multiplier is nudged upward; if it was too loose, it’s eased downward. A simple learning rate governs how quickly it adapts. Over time this keeps the realized inside-coverage close to the target implied by your alpha setting, without you having to hand-tune anything.
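One way such a nudge can be written so coverage drifts toward the target. This is my sketch, not the script's actual update rule; `lr` and `target` are hypothetical names, and the volatility scale is a rough stand-in for the RS/GK estimators:

```pine
//@version=6
indicator("Band self-calibration sketch", overlay = true)

lr     = input.float(0.05, "Learning rate")
target = input.float(0.80, "Target inside-coverage")
volLen = input.int(20, "Vol window")

// Rough price-space volatility scale
sigma = ta.stdev(math.log(close / close[1]), volLen) * close

var float k = 1.0
if barstate.isconfirmed and not na(sigma)
    inside = math.abs(close - open) <= k * sigma
    // Hit -> tighten slightly; miss -> widen. The asymmetric steps balance
    // exactly when realized coverage equals the target.
    k := inside ? k * (1 - lr * (1 - target)) : k * (1 + lr * target)

plot(open + k * sigma, "Upper", color.teal)
plot(open - k * sigma, "Lower", color.teal)
```

With this update rule the expected step is zero precisely when the inside-frequency equals `target`, which is what keeps realized coverage hovering near the alpha-implied level.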
Long/Short probability and calibration quality
The Long vs. Short probability is a transparent statistic: it’s just the recent fraction of up sessions in the rolling window. It is not a complex model — and that’s the point. You get an honest, intuitive read on directional tendency.
To monitor how well this simple probability lines up with reality, the script tracks a Brier-style score over a separate metrics window. Lower is better: it means your recent probability read has matched outcomes more closely.
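Both statistics are short expressions in Pine. A sketch on chart bars (the script tracks them per closed S-bar); note the forecast is lagged one bar so it is scored out-of-sample:

```pine
//@version=6
indicator("Probability and Brier sketch")

probLen    = input.int(200, "Probability window")
metricsLen = input.int(200, "Metrics window")

up    = close > open ? 1.0 : 0.0
pLong = math.sum(up, probLen) / probLen               // recent fraction of up bars
// Score the prior bar's forecast against this bar's outcome
brier = ta.sma(math.pow(pLong[1] - up, 2), metricsLen)

plot(pLong, "P(long)", color.green)
plot(brier, "Brier",   color.gray)
```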
Coverage tracking for both bands
The panel reports coverage for the orange band (nonparametric) and the teal band (parametric). These are rolling averages of how often recent S-bar moves landed inside each band. Watching these two numbers tells you whether market behavior still aligns with the recent distribution and with the current volatility model.
Why it doesn’t repaint
Because the arrays update only when an S-bar closes and only push the previous bar’s stats, the panel and metrics reflect information you had at the time. Intrabar visuals can change while a bar is forming — that’s expected — but the decision framework itself is anchored to completed S-bars.
Performance and practicality
The heaviest step is sorting a copy of the window for the nonparametric band. With typical window sizes this stays responsive on TradingView. The volatility estimators and rolling averages are lightweight. Inputs are grouped with clear tooltips so you can tune without hunting.
Limitations and good practice
In thin or gappy markets the bands can jump; consider a larger window or a higher S-timeframe.
During violent regime shifts, shorten the window and increase the learning rate slightly so the teal band catches up faster — but don’t overdo it, or you’ll chase noise.
The Long/Short probability is intentionally simple; it’s a context indicator, not a standalone signal factory. Combine it with structure, volume, or your execution rules.
Takeaway
Under the hood, the script blends empirical behavior and volatility scaling, then self-calibrates so the teal band’s real-world coverage stays near your target. You get clarity, consistency, and a dashboard that tells you when its own assumptions are holding up — exactly what you need to trade with confidence.
Disclaimer
The content provided, including all code and materials, is strictly for educational and informational purposes only. It is not intended as, and should not be interpreted as, financial advice, a recommendation to buy or sell any financial instrument, or an offer of any financial product or service. All strategies, tools, and examples discussed are provided for illustrative purposes to demonstrate coding techniques and the functionality of Pine Script within a trading context.
Any results from strategies or tools provided are hypothetical, and past performance is not indicative of future results. Trading and investing involve high risk, including the potential loss of principal, and may not be suitable for all individuals. Before making any trading decisions, please consult with a qualified financial professional to understand the risks involved.
By using this script, you acknowledge and agree that any trading decisions are made solely at your discretion and risk.
Best regards and happy trading
Chervolino
Physics Candles
Physics Candles embed volume and motion physics directly onto price candles or market internals according to the cyclic pattern of financial securities. The indicator works on both real-time “ticks” and historical data using statistical modeling to highlight when these values, like volume or momentum, are unusual or relatively high for some periodic window in time. Each candle is made out of one or more sub-candles that each contain their own information of motion, which converts to the color and transparency, or brightness, of that particular candle segment. The segments extend throughout the entire candle, both body and wicks, and Thick Wicks can be implemented to see the color coding better. This candle segmentation allows you to see if all the volume or energy is evenly distributed throughout the candle or highly contained in one small portion of it, and how intense these values are compared to similar time periods without going to lower time frames. Candle segmentation can also change a trader’s perspective on how valuable the information is. A “low” volume candle, for instance, could signify high value short-term stopping volume if the volume is all concentrated in one segment.
The Candles are flexible. The physics information embedded on the candles need not be from the same price security or market internal as the chart when using the Physics Source option, and multiple Candles can be overlayed together. You could embed stock price Candles with market volume, market price Candles with stock momentum, market structure with internal acceleration, stock price with stock force, etc. My particular use case is scalping the SPX futures market (ES), whose price action is also dictated by the volume action in the associated cash market, or SPY, as well as a host of other securities. Physics allows you to embed the ES volume on the SPY price action, or the SPY volume on the ES price action, or you can combine them both by overlaying two Candle streams and increasing the Number of Overlays option to two. That option decreases the transparency levels of your coloring scheme so that overlaying multiple Candles converges toward the same visual color intensity as if you had one. The Candle and Physics Sources allows for both Symbols and Spreads to visualize Candle physics from a single ticker or some mathematical transformation of tickers.
Due to certain TradingView programming restrictions, each Candle can only be made out of a maximum of 8 candle segments, or an “8-bit” resolution. Since limits are just an opportunity to go beyond, the user has the option to stack multiple Candle indicators together to further increase the candle resolution. If you don’t want to see the Candles for some particular period of the day, you can hide them, or use the hiding feature to have multiple Candles calibrated to show multiple parts of the trading day. Securities tend to have low volume after hours with sharp spikes at the open or close. Multiple Candles can be used for multiple parts of the trading day to accommodate these different cycles in volume.
The Candles do not need be associated with the nominal security listed on the TV chart. The Candle Source allows the user to look at AAPL Candles, for instance, while on a TSLA or SPY chart, each with their respective volume actions integrated into the candles, for instance, to allow the user to see multiple security price and volume correlation on a single chart.
The physics values currently embeddable on Candles are volume or time, velocity, momentum, acceleration, force, and kinetic energy. In order to apply equations of motion containing a mass variable to financial securities, some analogous value for mass must be assumed. Traders often regard volume or time as inextricable variables to a security’s price that can indicate the direction and strength of a move. Since mass is the inextricable variable to calculating the momentum, force, or kinetic energy of motion, the user has the option to assume either time or volume is analogous to mass. Volume may be a better option for mass as it is not strictly dependent on the speed of a security, whereas time is.
Data transformations and outlier statistics are used to color code the intensity of the physics for each candle segment relative to past periodic behavior. A million shares during pre-market or a million shares during noontime may be more intense signals than a typical million shares traded at the open, and should have more intense color signals. To account for a specific cyclic behavior in the market, the user can specify the Window and Cycle Time Frames. The Window Time Frame splits up a Cycle into windows, samples and aggregates the statistics for each window, then compares the current physics values against past values in the same window. Intraday traders may benefit from using a Daily Cycle with a 30-minute Window Time Frame and 1-minute Sample Time Frame. These settings sample and compare the physics of 1-minute candles within the current 30-minute window to the same 30-minute window statistics for all past trading days, up until the data limit imposed by TradingView, or until the Data Collection Start Date specified in the settings. Longer-term traders may benefit from using a Monthly Cycle with a Weekly Time Frame, or a Yearly Cycle with a Quarterly Time Frame.
Multiple statistics and data transformation methods are available to convey relative intensity in different ways for different trading signals. Physics Candles allows for both Normal and Log-Normal assumptions in the physics distribution. The data can then be transformed by Linear, Logarithmic, Z-Score, or Power-Law scoring, where scoring simply assigns an intensity to the relative physics value of each candle segment based on some mathematical transformation. Z-scoring often renders adequate detection by scoring the segment value, such as volume or momentum, according to the mean and standard deviation of the data set in each window of the cycle. Logarithmic or power-law transformation with a gamma below 1 decreases the disparity between intensities so more less-important signals will show up, whereas the power-law transformation with gamma values above 1 increases the disparity between intensities, so less more-important signals will show up. These scores are then converted to color and transparency between the Min Score and the Max Score Cutoffs. The Auto-Normalization feature can automatically pick these cutoffs specific to each window based on the mean and standard deviation of the data set, or the user can manually set them. Physics was developed with novices in mind so that most users could calibrate their own settings by plotting the candle segment distributions directly on the chart and fiddling with the settings to see how different cutoffs capture different portions of the distribution and affect the relative color intensities differently. Security distributions are often skewed with fat-tails, known as kurtosis, where high-volume segments for example, have a higher-probabilities than expected for a normal distribution. These distribution are really log-normal, so that taking the logarithm leads to a standard bell-shaped distribution. Taking the Z-score of the Log-Normal distribution could make the most statistical sense, but color sensitivity is a discretionary preference.
Background Philosophy
This indicator was developed to study and trade the physics of motion in financial securities from a visually intuitive perspective. Newton’s laws of motion are loosely applied to financial motion:
“A body remains at rest, or in motion at a constant speed in a straight line, unless acted upon by a force”.
Financial securities remain at rest, or in motion at constant speed up or down, unless acted upon by the force of traders exchanging securities.
“When a body is acted upon by a force, the time rate of change of its momentum equals the force”.
Momentum is the product of mass and velocity, and force is the product of mass and acceleration. Traders render force on the security through the mass of their trading activity and the acceleration of price movement.
“If two bodies exert forces on each other, these forces have the same magnitude but opposite directions.”
Force arises from the interaction of traders, buyers and sellers. One body of motion, traders’ capitalization, exerts an equal and opposite force on another body of motion, the financial security. A security’s movement arises at the expense of a buyer or seller’s capitalization.
Volume
The premise of this indicator assumes that volume, v, is an analogous means of measuring physical mass, m. This premise allows the application of the equations of motion to the movement of financial securities. We know from E=mc^2 that mass has energy. Energy can be used to create motion as kinetic energy. Taking a simple hypothetical example, the interaction of one short seller looking to cover lower and one buyer looking to sell higher exchange shares in a security at an agreed upon price to create volume or mass, and therefore, potential energy. Eventually the short seller will actively cover and buy the security from the previous buyer, moving the security higher, or the buyer will actively sell to the short seller, moving the security lower. The potential energy inherent in the initial consolidation or trading activity between buy and seller is now converted to kinetic energy on the subsequent trading activity that moves the securities price. The more potential energy that is created in the consolidation, the more kinetic energy there is to move price. This is why point and figure traders are said to give price targets based on the level of volatility or size of a consolidation range, or why Gann traders square price and time, as time is roughly proportional to mass and trading activity. The build-up of potential energy between short sellers and buyers in GME or TSLA led to their explosive moves beyond their standard fundamental valuations.
Position
Position, p, is simply the price or value of a financial security or market internal.
Time
Time, t, is another means of measuring mass to discover price behavior beyond the time snapshots that simple candle charts provide. We know from E=mc^2 that time is related to rest mass and energy given the speed of light, c, where time ≈ distance * sqrt(mass/E). This relation can also be derived from F=ma. The more mass there is, the longer it takes to compute the physics of a system. The more energy there is, the shorter it takes to compute the physics of a system. Similarly, more time is required to build a “resting” low-volatility trading consolidation with more mass. More energy added to that trading consolidation by competing buyers and sellers decreases the time it takes to build that same mass. Time is also related to price through velocity.
Velocity = (p(t1) – p(t0)) / p(t0)
Velocity, v, is the relative percent change of a security’s price, p, over a period of time, t0 to t1. The period of time is between subsequent candles, and since time is constant between candles within the same timeframe, it is not used to calculate velocity or acceleration. Price moves faster with higher velocity, and slower with lower velocity, over the same fixed period of time. The product of velocity and mass gives momentum.
Momentum = mv
This indicator uses physics’ definition of momentum, not finance’s. In finance, momentum is defined as the amount of change in a security’s price, either relative or absolute. This definition is unfortunate, pun intended, since a one dollar move in a security from a thousand shares traded between a few traders has the exact same “momentum” as a one dollar move from millions of shares traded between hundreds of traders with everything else equal. If momentum is related to the energy of the move, momentum should consider both the level of activity in a price move and the amount of that price move. If we equate mass to volume to account for the level of trading activity and use physics’ definition of momentum as the product of mass and velocity, this revised definition now gives a thousand-times more momentum to a one-dollar price move that has a thousand-times more volume behind it. If you want to use finance’s volume-less definition of momentum, use velocity in this indicator.
Acceleration = v(t1) – v(t0)
Acceleration, a, is the difference between velocities over some period of time, t0 to t1. Positive acceleration is necessary to increase a securities speed in the positive direction, while negative acceleration is necessary to decrease it. Acceleration is related to force by mass.
Force = ma
Force is required to change the speed of a security’s valuation. Price movements with considerable force have considerably more impact on future direction. A change in direction requires force.
Kinetic Energy = 0.5mv^2
Kinetic energy is the energy that a financial security gains from the change in its velocity by force. The built-up of potential energy in trading consolidations can be converted to kinetic energy on a breakout from the consolidation.
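Taken together, with volume standing in for mass, the per-bar quantities reduce to a few lines. This is a sketch of the definitions above, using the open-to-close velocity described in the Notes section:

```pine
//@version=6
indicator("Physics quantities sketch")

m  = volume                          // mass analog (time is the alternative)
v  = (close - open) / open           // velocity: relative change within the bar
a  = v - v[1]                        // acceleration: change in velocity
mo = m * v                           // momentum
f  = m * a                           // force
ke = 0.5 * m * v * v                 // kinetic energy

plot(mo, "Momentum", color.teal)
```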
Cycle Theory and Relativity
Just as the physics of motion is relative to a point of reference, so too should the physics of financial securities be relative to a point of reference. An object moving at a 100 mph towards another object moving in the same direction at 100 mph will not appear to be moving relative to each other, nor will they collide, but from an outsider observer, the objects are going 100 mph and will collide with significant impact if they run into a stationary object relative to the observer. Similarly, trading with a hundred thousand shares at the open when the average volume is a couple million may have a much smaller impact on the price compared to trading a hundred thousand shares pre-market when the average volume is ten thousand shares. The point of reference used in this indicator is the average statistics collected for a given Window Time Frame for every Cycle Time Frame. The physics values are normalized relative to these statistics.
Examples
The main chart of this publication shows the Force Candles for the SPY. An intense force candle is observed pre-market that implicates the directional overtone of the day. The assumption that direction should follow force arises from physical observation. If a large object is accelerating intensely in a particular direction, it may be fair to assume that the object continues its direction for the time being unless acted upon by another force.
The second example shows a similar Force Candle for the SPY that counters the assumption made in the first example and emphasizes the importance of both motion and context. While it’s fair to assume that a heavy, highly accelerating object should continue its course, if that object runs into an obstacle, say a brick wall, its course may deviate. This example shows SPY running into the 50% retracement wall from the low of Mar 2020, a significant support level noted in literature. The example also conveys Gann’s idea of “lost motion”, where the SPY penetrated the 50% price but did not break through it. A brick wall is not one atom thick and price support is not one tick thick. An object can penetrate only one layer of a wall and not go through it.
The third example shows how Volume Candles can be used to identify scalping opportunities on the SPY and conveys why price behavior is as important as motion and context. It doesn’t take a brick wall to impede direction if you know that the person driving the car tends to forget to feed the cats before they leave. In the chart below, the SPY breaks down to a confluence of the 5-day SMA, 20-day SMA, and an important daily trendline (not shown) after the bullish bounce from the 50% retracement days earlier. High volume candles on the SMA signify stopping volume that reverses price direction. The character of the day changes. Bulls become more aggressive than bears with higher volume on upswings and resistance, while bears take on a defensive position with lower volume on downswings and support. High volume stopping candles are seen after rallies, and can tell you when to take profit, get out of a position, or go short. The character change can indicate that it’s relatively safe to re-enter bullish positions on many major supports, especially given the overarching bullish theme from the large reaction off the 50% retracement level.
The last example emphasizes the importance of relativity. The Volume Candles in the chart below are brightest pre-market even though the open has much higher volume since the pre-market activity is much higher compared to past pre-markets than the open is compared to past opens. Pre-market behavior is a good indicator for the character of the day. These bullish Volume Candles are some of the brightest seen since the bounce off the 50% retracement and indicates that bulls are making a relatively greater attempt to bring the SPY higher at the start of the day.
Infrequently Asked Questions
Where do I start?
The default settings are what I use to scalp the SPY throughout most of the extended trading day, on a one-minute chart using SPY volume. I also overlay another Candle set containing ES future volume on the SPY price structure by setting the Physics Source to ES1! and the Number of Overlays setting to 2 for each Candle stream in order to account for pre- and post-market trading activity better. Since the closing volume is exponential-like up until the end of the regular trading day, adding additional Candle streams with a tighter Window Time Frame (e.g., 2-5 minute) in the last 15 minutes of trading can be beneficial. The Hide feature can allow you to set certain intraday timeframes to hide one Candle set in order to show another Candle set during that time.
How crazy can you get with this indicator?
I hope you can answer this question better. One interesting use case is embedding the velocity of market volume onto an internal market structure. The PCTABOVEVWAP.US is a market statistic that indicates the percent of securities above their VWAP among US stocks and is helpful for determining short term trends in the US market. When securities are rising above their VWAP, the average long is up on the day and a rising PCTABOVEVWAP.US can be viewed as more bullish. When securities are falling below their VWAP, the average short is up on the day and a falling PCTABOVEVWAP.US can be viewed as more bearish. (UPVOL.US - DNVOL.US) / TVOL.US is a “spread” symbol, in TV parlance, that indicates the decimal percent difference between advancing volume and declining volume in the US market, showing the relative flow of volume between stocks that are up on the day, and stocks that are down on the day. Setting PCTABOVEVWAP.US in the Candle Source, (UPVOL.US - DNVOL.US) / TVOL.US in the Physics Source, and selecting the Physics to Velocity will embed the relative velocity of the spread symbol onto the PCTABOVEVWAP.US candles. This can be helpful in seeing short term trends in the US market that have an increasing amount of volume behind them compared to other trends. The chart below shows Volume Candles (top) and these Spread Candles (bottom). The first top at 9:30 and second top at 10:30, the high of the day, break down when the spread candles light up, showing a high velocity volume transfer from up stocks to down stocks.
How do I plot the indicator distribution and why should I even care?
The distribution is visually helpful in seeing how different normalization settings effect the distribution of candle segments. It is also helpful in seeing what physics intensities you want to ignore or show by segmenting part of the distribution within the Min and Max Cutoff values. The intensity of color is proportional to the physics value between the Min and Max Cutoff values, which correspond to the Min and Max Colors in your color scheme. Any physics value outside these Min and Max Cutoffs will be the same as the Min and Max Colors.
Select the Print Windows feature to show the window numbers according to the Cycle Time Frame and Window Time Frame settings. The window numbers are labeled at the start of each window and are candle width in size, so you may need to zoom in to see them. Selecting the Plot Window feature and inputting the window number of interest shows the distribution of physics values for that particular window along with some statistics.
A log-normal volume distribution of segmented z-scores is shown below for 30-minute opening of the SPY. The Min and Max Cutoff at the top of the graph contain the part of the distribution whose intensities will be linearly color-coded between the Min and Max Colors of the color scheme. The part of the distribution below the Min Cutoff will be treated as lowest quality signals and set to the Min Color, while the few segments above the Max Cutoff will be treated as the highest quality signals and set to the Max Color.
What do I do if I don’t see anything?
Troubleshooting issues with this indicator can involve checking for error messages shown near the indicator name on the chart or using the Data Validation section to evaluate the statistics and normalization cutoffs. For example, if the Plot Window number is set to a window number that doesn’t exist, an error message will tell you and you won’t see any candles. You can use the Print Windows option to show windows that do exist for your current settings. The auto-normalization cutoff values may be inappropriate for your particular use case and literally cut the candles out of the chart. Try changing the chart time frame to see if they are appropriate for your cycle, sample and window time frames. If you get a “Timeframe passed to the request.security_lower_tf() function must be lower than the timeframe of the main chart” error, this means that the chart timeframe should be increased above the sample time frame. If you get a “Symbol resolve error”, ensure that you have the correct symbol or spread in the Candle or Physics Source.
How do I see a relative physics values without cycles?
Set the Window Time Frame to be equal to the Cycle Time Frame. This will aggregate all the statistics into one bucket and show the physics values, such as volume, relative to all the past volumes that TV will allow.
How do I see candles without segmentation?
Segmentation can be very helpful in one context or annoying in another. Segmentation can be removed by setting the candle resolution value to 1.
Notes
I have yet to find a trading platform that consistently provides accurate real-time volume and pricing information, lacking adequate end-user data validation or quality control. I can provide plenty of examples of real-time volume counts or prices provided by TradingView and other platforms that were significantly off from what they should have been when comparing against the exchanges own data, and later retroactively corrected or not corrected at all. Since no indicator can work accurately with inaccurate data, please use at your own discretion.
The first version is a beta version. Debugging and validating code in Pine script is difficult without proper unit testing. Please report any bugs with enough information to reproduce them and indicate why they are important. I also encourage you to export the data from TradingView and verify the calculations for your particular use case.
The indicator works on real-time updates that occur at a higher frequency than the candle time frame, which TV incorrectly refers to as ticks. They use this terminology inaccurately as updates are really aggregated tick data that can take place at different prices and may not accurately reflect the real tick price action. Consequently, this inaccuracy also impacts the real-time segmentation accuracy to some degree. TV does not provide a means of retaining “tick” information, so the higher granularity of information seen real-time will be lost on a disconnect.
TV does not provide time and sales information. The volume and price information collected using the Sample Time Frame is intraday, which provides only part of the picture. Intraday volume is generally 50 to 80% of the end of day volume. Consequently, the daily+ OHLC prices are intraday, and may differ significantly from exchanged settled OHLC prices.
The Cycle and Window Time Frames refer to calendar days and time, not trading days or time. For example, the first window week of a monthly cycle is the first seven days of the month, not the first Monday through Friday of trading for the month.
Chart Time Frames that are higher than the Window Time Frames average the normalized physics for price action that occurred within a given Candle segment. It does not average price action that did not occur.
One of the main performance bottleneck in TradingView’s Pine Script is client-side drawing and plotting. The performance of this indicator can be increased by lowering the resolution (the number of sub-candles this indicator plots), getting a faster computer, or increasing the performance of your computer like plugging your laptop in and eliminating unnecessary processes.
The statistical integrity of this indicator relies on the number of samples collected per sample window in a given cycle. Higher sample counts can be obtained by increasing the chart time frame or upgrading the TradingView plan for a higher bar count. While increasing the chart time frame doesn’t increase the visual number of bars plotted on the chart, it does increase the number of bars that can be pulled at a lower time frame, up to 100,000.
Due to a limitation in Pine Scripts request_lower_tf() function, using a spread symbol will only work for regular trading hours, not extended trading hours.
Ideally, velocity or momentum should be calculated between candle closes. To eliminate the need to deal with price gaps that would lead to an incorrect statistical distributions, momentum is calculated between candle open and closes as a percent change of the price or value, which should not be an issue for most liquid securities.
ATR Regime Study [CHE]
ATR Regime Study — ATR percentile regimes with clear bands, table and live label
Summary
This study classifies volatility into five regimes by converting ATR into a percentile rank over a rolling window, plotted on a standardized scale between zero and one hundred. Colored bands mark regime thresholds, while a compact table and an optional label report the current percentile and regime. The standardized scale makes symbols and timeframes easier to compare than raw ATR values. Implemented in Pine v6 as a separate pane (overlay set to false), it is a context tool to adapt tactics and risk handling to the prevailing volatility environment.
Motivation: Why this design?
Raw ATR varies with price scale and asset characteristics, which makes regime comparison inconsistent and leads to poor transfer of settings across symbols and timeframes. The core idea is to transform ATR into a percentile rank within a user-defined lookback, then map it into discrete regimes. This yields a stable, interpretable context signal that shifts slower than raw ATR while still responding to genuine volatility changes.
What’s different vs. standard approaches?
Reference baseline: Traditional ATR plots or ATR bands using fixed multipliers.
Architecture differences:
Percentile ranking of ATR within a rolling window.
Five discrete regimes with fixed thresholds at ninety, seventy, thirty, and ten.
Visual fills between thresholds plus a live table and a last-bar label.
Practical effect: You read a single normalized line between zero and one hundred with consistent thresholds. This improves cross-asset comparison and makes regime shifts obvious at a glance.
How it works (technical)
The script computes ATR over a configurable length, then converts that series to a percentile rank over a configurable number of bars. The percentile is naturally scaled and limited between zero and one hundred. That value is mapped to one of five regimes: above ninety (Extreme), between seventy and ninety (Elevated), between thirty and seventy (Normal), between ten and thirty (Calm), and below ten (Squeeze). Horizontal guide lines mark the thresholds, and fills shade the regions. A table is created once and updated on each bar to show regime definitions and highlight the current row. An optional label on the last bar displays the current percentile and regime. No higher-timeframe requests are used, so repaint risk is limited to normal live-bar fluctuation until the bar closes.
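The core computation fits in a few lines. A sketch with the fixed thresholds, where `ta.percentrank` gives the zero-to-one-hundred rank directly; the table, fills, and styling of the full study are omitted:

```pine
//@version=6
indicator("ATR regime sketch")

atrLen = input.int(14, "ATR length")
win    = input.int(252, "Percentile window (bars)")

atrVal = ta.atr(atrLen)
pct    = ta.percentrank(atrVal, win)   // rank of current ATR within the window

regime = pct > 90 ? "Extreme" : pct > 70 ? "Elevated" : pct > 30 ? "Normal" : pct > 10 ? "Calm" : "Squeeze"

plot(pct, "ATR percentile", color.white)
hline(90)
hline(70)
hline(30)
hline(10)

if barstate.islast
    label.new(bar_index, pct, regime)
```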
Parameter Guide
ATR length — Effect: Controls how fast ATR reacts to new ranges. Default: fourteen. Trade-offs/Tips: Increase to reduce noise in choppy markets; decrease to react faster during regime changes.
Percentile window (bars) — Effect: Number of bars used for the percentile ranking. Default: two hundred fifty-two. Trade-offs/Tips: Larger windows stabilize the percentile but slow adaptation after structural regime shifts; smaller windows adapt faster but may flip more often.
Table › Show — Effect: Toggles the regime overview table. Default: enabled. Trade-offs/Tips: Disable on constrained layouts to reduce visual clutter.
Table › Position — Effect: Anchors the table in a chart corner. Default: Top Right. Trade-offs/Tips: Choose a corner that avoids overlapping other panels or drawings.
Label › Show — Effect: Toggles a last-bar label with current percentile and regime. Default: enabled. Trade-offs/Tips: Useful for quick reads; disable if it obscures other annotations.
Reading & Interpretation
The white line shows ATR percentile between zero and one hundred. Crossing above seventy signals an elevated volatility environment; above ninety indicates event-driven extremes. Between thirty and seventy represents typical conditions. Between ten and thirty indicates calm conditions that often suit mean reversion. Below ten reflects compression, where breakout probability often increases. The colored bands visually reinforce these ranges. The table summarizes regime definitions and highlights the current state. The last-bar label mirrors the current percentile and regime for quick inspection.
Practical Workflows & Combinations
Trend following: Prefer continuation tactics when the percentile holds in the Normal or Elevated bands and structure confirms higher highs and higher lows. Consider wider stops and partial position sizing as percentile rises.
Mean reversion: Favor fades in Calm regimes within defined ranges; use structure filters and time-of-day constraints to avoid low-liquidity whipsaws.
Breakout preparation: Track compressions below ten; plan entries only with structure confirmation and risk caps, since compressions can persist.
Multi-asset/Multi-TF: Defaults travel well on daily charts. For intraday, reduce the percentile window to align with session dynamics. Combine with trend or market structure tools for confirmation.
Behavior, Constraints & Performance
Repaint/confirmation: The percentile updates during live bars and stabilizes on close; closed bars do not repaint.
security/HTF: Not used. If you add higher-timeframe aggregation externally, account for standard repaint caveats.
Resources: Declared maximum bars back is two thousand; limits for lines and labels are five hundred each. A short loop updates the table rows; arrays are used for table content only.
Known limits: Regime boundaries are fixed; assets with persistent volatility shifts may require window retuning. Low-liquidity periods and gaps can produce abrupt percentile changes. ATR is direction-agnostic and should be paired with trend or structure context.
Sensible Defaults & Quick Tuning
Start with ATR length fourteen and percentile window two hundred fifty-two on daily charts.
Too many flips: Increase ATR length or increase the percentile window.
Too sluggish: Decrease the percentile window or reduce ATR length.
Intraday noise: Keep ATR length moderate and reduce the window to a session-appropriate size; optionally hide the label to declutter.
Compressed markets: Maintain defaults but rely more on structure and volume filters before acting.
What this indicator is—and isn’t
This is a volatility regime context layer that standardizes ATR into interpretable regimes. It is not a complete trading system, not predictive, and not a stand-alone entry signal. Use it alongside structure analysis, confirmation tools, and disciplined risk management.
Disclaimer
The content provided, including all code and materials, is strictly for educational and informational purposes only. It is not intended as, and should not be interpreted as, financial advice, a recommendation to buy or sell any financial instrument, or an offer of any financial product or service. All strategies, tools, and examples discussed are provided for illustrative purposes to demonstrate coding techniques and the functionality of Pine Script within a trading context.
Any results from strategies or tools provided are hypothetical, and past performance is not indicative of future results. Trading and investing involve high risk, including the potential loss of principal, and may not be suitable for all individuals. Before making any trading decisions, please consult with a qualified financial professional to understand the risks involved.
By using this script, you acknowledge and agree that any trading decisions are made solely at your discretion and risk.
Best regards and happy trading
Chervolino