Channel Based Zigzag [HeWhoMustNotBeNamed]

🎲 Concept
A standard zigzag is built from price and a fixed number of offset bars. In this experiment, however, we build the zigzag from bands such as the Bollinger Band, Keltner Channel, and Donchian Channel. The process is simple:
🎯 Derive the bands from the input parameters.
🎯 The high of a bar is considered a pivot high only if it is at or above the upper band.
🎯 Similarly, the low of a bar is considered a pivot low only if it is at or below the lower band.
🎯 Adding pivot highs/lows follows the same logic as the regular zigzag: a pivot high is always followed by a pivot low, and vice versa.
🎯 If a new pivot has the same direction as the last pivot, the two are compared and only the more extreme one is kept (the highest in case of pivot highs, the lowest in case of pivot lows).
🎯 If a bar has both a pivot high and a pivot low, the pivot with the same direction as the previous pivot is added to the list first, followed by the pivot in the opposite direction. (A sketch of the band-based pivot conditions follows this list.)
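As a minimal sketch of these pivot conditions (using Bollinger Bands with illustrative length and multiplier; the actual script derives the band type and parameters from user inputs):

//@version=5
indicator("Band-based pivot sketch", overlay=true)
// illustrative band settings; Keltner or Donchian channels would work the same way
[basis, upper, lower] = ta.bb(close, 20, 2.0)
// a bar only qualifies as a pivot candidate when it touches or crosses a band
bool pivotHighCandidate = high >= upper
bool pivotLowCandidate = low <= lower
plot(basis, color=color.gray)
plotshape(pivotHighCandidate, style=shape.triangledown, location=location.abovebar, color=color.red)
plotshape(pivotLowCandidate, style=shape.triangleup, location=location.belowbar, color=color.green)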
🎲 Use Cases
Can be used in pattern recognition algorithms in place of the standard zigzag. This helps derive patterns that are relative to bands and channels.
Example: John Bollinger explains how to manually scan for double tops using Bollinger Bands in this video: www.youtube.com This modified zigzag base can be used to achieve the same by algorithmic means.
🎲 Settings
A few simple configurations let you select the band properties. Notice that there is no zigzag length here; all the calculations depend on the bands.
With the bands displayed, the indicator looks something like this
Note that pivots do not always represent the highest/lowest prices. They represent the highest/lowest prices relative to the bands.
As mentioned many times, the application of zigzag is not buying at a lower price and selling at a higher price. It is mainly used for pattern recognition, either manually or via algorithms. Let's build new Harmonic Patterns, Chart Patterns, and Trend Lines using the new zigzag!
Machine Learning: kNN (New Approach)

Description:
kNN is a very robust and simple method for data classification and prediction. It is very effective if the training data is large. However, its main parameter, K (the number of nearest neighbors), is difficult to determine beforehand. The computation cost is also quite high, because we need to compute the distance from each instance to all training samples. Nevertheless, in algorithmic trading kNN is reported to perform on a par with such techniques as SVM and Random Forest. It is also widely used in the area of data science.
The input data is just a long series of prices over time without any particular features. The value to be predicted is just the next bar's price. The way this problem is solved, both for nearest-neighbor techniques and for some other types of prediction algorithms, is to create training records by taking, for instance, 10 consecutive prices and using the first 9 as predictor values and the 10th as the prediction value. Done this way, given 100 data points in your time series you could create 10 different training records. It's possible to create even more training records than 10 by creating a new record starting at every data point. For instance, you could take the first 10 data points and create a record. Then you could take the 10 consecutive data points starting at the second data point, the 10 consecutive data points starting at the third data point, and so on.
By default, only 10 initial data points are shown as predictor values, with the 6th as the prediction value.
Here is a step-by-step walkthrough of how to compute the K nearest neighbors (kNN) algorithm for quantitative data (a sketch in code follows the list):
1. Determine parameter K = number of nearest neighbors.
2. Calculate the distance between the instance and all the training samples. As we are dealing with one-dimensional distance, we simply take the absolute difference between the instance and the value x (|x - v|).
3. Rank the distances and determine the nearest neighbors based on the K-th minimum distance.
4. Gather the values of the nearest neighbors.
5. Use average of nearest neighbors as the prediction value of the instance.
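A minimal sketch of these five steps applied to a price series (the parameter values and the choice of the next bar's close as each sample's outcome are illustrative, not the script's exact internals):

//@version=5
indicator("kNN prediction sketch", overlay=true, max_bars_back=500)
int K = 5 // step 1: number of nearest neighbors
int N = 100 // number of training samples
float[] dist = array.new_float()
float[] outcome = array.new_float()
for i = 1 to N
    // step 2: one-dimensional distance |x - v| to each training sample
    array.push(dist, math.abs(close - close[i]))
    // the value each sample "predicts" is the close of the bar that followed it
    array.push(outcome, close[i - 1])
// steps 3-5: rank by distance, gather the K nearest values, average them
float prediction = 0.0
for k = 1 to K
    int idx = array.indexof(dist, array.min(dist))
    prediction += array.get(outcome, idx)
    array.remove(dist, idx)
    array.remove(outcome, idx)
prediction /= K
plot(prediction, "kNN prediction", color=color.orange)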
The original logic of the algorithm was slightly modified; as a result, at approximately N=17 the resulting curve nicely approximates that of the sma(20). See the description below. Besides the SMA-like moving average, this algorithm also gives you a hint on the direction of the next bar's move.
LinearRegressionLibrary

Library "LinearRegressionLibrary" contains functions for fitting a regression line to a time series by means of different models, as well as functions for estimating the accuracy of the fit.
Linear regression algorithms:
RepeatedMedian(y, n, lastBar) applies repeated median regression (robust linear regression algorithm) to the input time series within the selected interval.
Parameters:
y :: float series, source time series (e.g. close)
n :: integer, the length of the selected time interval
lastBar :: integer, index of the last bar of the selected time interval (defines the position of the interval)
Output:
mSlope :: float, slope of the regression line
mInter :: float, intercept of the regression line
TheilSen(y, n, lastBar) applies the Theil-Sen estimator (robust linear regression algorithm) to the input time series within the selected interval.
Parameters:
y :: float series, source time series
n :: integer, the length of the selected time interval
lastBar :: integer, index of the last bar of the selected time interval (defines the position of the interval)
Output:
tsSlope :: float, slope of the regression line
tsInter :: float, intercept of the regression line
OrdinaryLeastSquares(y, n, lastBar) applies the ordinary least squares regression (non-robust) to the input time series within the selected interval.
Parameters:
y :: float series, source time series
n :: integer, the length of the selected time interval
lastBar :: integer, index of the last bar of the selected time interval (defines the position of the interval)
Output:
olsSlope :: float, slope of the regression line
olsInter :: float, intercept of the regression line
Model performance metrics:
metricRMSE(y, n, lastBar, slope, intercept) returns the Root-Mean-Square Error (RMSE) of the regression. The better the model, the lower the RMSE.
Parameters:
y :: float series, source time series (e.g. close)
n :: integer, the length of the selected time interval
lastBar :: integer, index of the last bar of the selected time interval (defines the position of the interval)
slope :: float, slope of the evaluated linear regression line
intercept :: float, intercept of the evaluated linear regression line
Output:
rmse :: float, RMSE value
metricMAE(y, n, lastBar, slope, intercept) returns the Mean Absolute Error (MAE) of the regression. MAE is similar to RMSE but is less sensitive to outliers. The better the model, the lower the MAE.
Parameters:
y :: float series, source time series
n :: integer, the length of the selected time interval
lastBar :: integer, index of the last bar of the selected time interval (defines the position of the interval)
slope :: float, slope of the evaluated linear regression line
intercept :: float, intercept of the evaluated linear regression line
Output:
mae :: float, MAE value
metricR2(y, n, lastBar, slope, intercept) returns the coefficient of determination (R squared) of the regression. The better the linear regression fits the data (compared to the sample mean), the closer the value of the R squared is to 1.
Parameters:
y :: float series, source time series
n :: integer, the length of the selected time interval
lastBar :: integer, index of the last bar of the selected time interval (defines the position of the interval)
slope :: float, slope of the evaluated linear regression line
intercept :: float, intercept of the evaluated linear regression line
Output:
Rsq :: float, R-squared score
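For reference, here is a sketch of how the RMSE of a fit could be recomputed by hand over an interval ending at the current bar; the slope/intercept values and the x-axis orientation are illustrative placeholders, not the library's internals:

//@version=5
indicator("RMSE sketch", max_bars_back=500)
int n = 100
float slope = 0.1 // placeholder: use a value returned by the library
float inter = close // placeholder intercept
float sumSq = 0.0
for i = 0 to n - 1
    // assumes x = 0 at the oldest bar of the interval, increasing toward the newest
    float fitted = inter + slope * (n - 1 - i)
    sumSq += math.pow(close[i] - fitted, 2)
plot(math.sqrt(sumSq / n), "RMSE")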
Usage example:
//@version=5
indicator('ExampleLinReg', overlay=true)
// import the library
import tbiktag/LinearRegressionLibrary/1 as linreg
// define the studied interval: last 100 bars
int Npoints = 100
int lastBar = bar_index
int firstBar = bar_index - Npoints
// apply repeated median regression to the closing price time series within the specified interval
[slope, intercept] = linreg.RepeatedMedian(close, Npoints, lastBar)
// calculate the root-mean-square error of the obtained linear fit
rmse = linreg.metricRMSE(close, Npoints, lastBar, slope, intercept)
// plot the line and print the RMSE value
float y1 = intercept
float y2 = intercept + slope * (Npoints - 1)
if barstate.islast
    line.new(firstBar, y1, lastBar, y2)
    label.new(lastBar, y2, text='RMSE = ' + str.format("{0,number,#.#}", rmse))
Pulu's 3 Moving Averages
Release version 1, date 2021-09-28
This script allows you to customize three sets of moving averages: turn them on/off, and set their colors and parameters. It also tags the start date of the last set of moving averages, if there is one. This release, version 1, supports eight moving average algorithms:
ALMA, Arnaud Legoux Moving Average
EMA, Exponential Moving Average
RMA, Adjusted exponential moving average (aka Wilder’s EMA)
SMA, Simple Moving Average
SWMA, Symmetrically-Weighted Moving Average
VWAP, Volume-Weighted Average Price
VWMA, Volume-Weighted Moving Average
WMA, Weighted Moving Average
The availability and function parameters:

Func.                       Availability     Parameters
ALMA                        MA1, MA2, MA3    source, length, offset, sigma
EMA, RMA, SMA, VWMA, WMA    MA1, MA2, MA3    source, length
SWMA, VWAP                  MA1              source
Parameters:

source: the series of values to process. The default is to use the closing price to calculate the moving average.
length: an integer value that defines the number of bars to calculate the moving average on. The SWMA and VWAP do not use this parameter.
ALMA offset: a floating-point value that controls the tradeoff between smoothness (with a value closer to 1) and responsiveness (with a value closer to 0). This parameter is only used by ALMA.
ALMA sigma: a floating-point value that specifies the ALMA's smoothness. The larger this value, the smoother the moving average. This parameter is only used by ALMA.
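A sketch of how such an algorithm selection can be dispatched in Pine; the ALMA offset/sigma defaults shown are common values, not necessarily this script's own:

//@version=5
indicator("MA selector sketch", overlay=true)
ma(float src, simple int len, simple string algo) =>
    switch algo
        "ALMA" => ta.alma(src, len, 0.85, 6)
        "EMA" => ta.ema(src, len)
        "RMA" => ta.rma(src, len)
        "SMA" => ta.sma(src, len)
        "SWMA" => ta.swma(src) // fixed 4-bar window, ignores len
        "VWAP" => ta.vwap(src) // anchored volume-weighted average, ignores len
        "VWMA" => ta.vwma(src, len)
        "WMA" => ta.wma(src, len)
plot(ma(close, 20, "EMA"))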
I'm not sure whether it is needed, so I did not give the script's three moving averages individual algorithm settings, because that would involve much more complicated condition testing and use up more of the TradingView script line limit. If you need to combine different algorithms across the three sets of moving averages, or have other ideas, leave a message to let me know; maybe I will try it in the next update.
Cycles Strategy

This back-testable strategy is a modified version of the Stochastic strategy. It is meant to accompany the modified Stochastic indicator, "Cycles".
Modifications to the Stochastic strategy include:
1. Programmable settings for the Stochastic Periods (%K, %D and Smooth %K).
2. Programmable settings for the MACD Periods (Fast, Slow, Smoothing)
3. Programmable thresholds for %K, to qualify a potential entry strategy.
4. Programmable thresholds for %D, to qualify a potential exit strategy.
5. Buttons to choose which components to use in the trading algorithm.
6. Choose the month and year to back test.
The trading algorithm:
1. Entry: %K exceeds the upper/lower threshold and then hooks down/up in the direction of the Moving Average (MA). This is the minimum entry qualification (see the sketch after this list).
2. Exit: %D exceeds the lower/upper threshold and is angled in the direction of the trade.
3. Additional entry filters include the direction of MACD, Signal and %D. There is also the "cliff": a long entry on a higher high, or a short entry on a lower low.
4. Strategy can only go "Long" or "Short" depending on the selected setting.
5. By matching the settings in the "Cycles" indicator, you can (almost) see what the strategy is doing.
6. Be sure to select the "Recalculate" buttons, to recalculate on every new Tick, for best results.
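A sketch of the minimum entry qualification described in point 1; all lengths and thresholds are illustrative, whereas the actual script makes them programmable:

//@version=5
strategy("Cycles hook sketch", overlay=true)
int kLen = 14
int smoothK = 3
int dLen = 3
float upper = 80.0
float lower = 20.0
float k = ta.sma(ta.stoch(close, high, low, kLen), smoothK)
float d = ta.sma(k, dLen) // %D would gate the exit per point 2
// %K pushes past a threshold and then hooks back in the other direction
bool hookUp = k[1] < lower and k > k[1]
bool hookDown = k[1] > upper and k < k[1]
if hookUp
    strategy.entry("Long", strategy.long)
if hookDown
    strategy.entry("Short", strategy.short)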
Please click the Like button and leave a comment if you appreciate this script. Improvements will be implemented as time goes on.
I am not a licensed trade advisor. This strategy is for entertainment only. Use at your own risk!
Accurate Bollinger Bands mcbw_ [True Volatility Distribution]

The Bollinger Bands have become a very important technical tool for discretionary and algorithmic traders alike over the last decades. They were designed to give traders an edge on the markets by assigning probabilistic values to different levels of volatility. However, some of the assumptions that go into their calculations make them unusable for traders who want a correct understanding of the volatility the bands are meant to capture. Let's go through what the Bollinger Bands are said to show, how their calculations work, the problems in the calculations, and how the indicator I am presenting today fixes these.
--> If you just want to know how the settings work, skip straight to the end or click on the little (i) symbol next to the values in the indicator settings window when it's on your chart <--
--------------------------- What Are Bollinger Bands ---------------------------
The Bollinger Bands were formed in the 1980s, a time when many retail traders interacted with their symbols via physically printed charts and personal computer memory was measured in KB (about a factor of a million smaller than today). Bollinger Bands are designed to help a trader or algorithm see the likelihood of price expanding outside of its typical range: the further the lines are from the current price, the less often they will get hit. With a hands-on understanding, many strategies use these levels for designated breakout trades or to assist in defining price ranges.
--------------------------- How Bollinger Bands Work ---------------------------
The calculations that go into Bollinger Bands are rather simple. A moving average centers the indicator, and an equidistant top band and bottom band are drawn a fixed width away. The moving average is just a typical moving average (or common variant) that tracks the price action, while the distance to the top and bottom bands is a direct function of recent price volatility. The way that distance is calculated is inspired by formulas from statistics. The standard deviation is taken from the candles that go into the moving average and then multiplied by a user-defined value to set the bands' position; I will call this value 'the multiple'. When discussing Bollinger Bands, the trading community at large normally treats 'the multiple' as a multiplier of the standard deviation as it applies to a normal distribution (Gaussian probability). On a normal distribution, the number of standard deviations away (which traders use directly as 'the multiple') corresponds directly to how likely/unlikely something is to happen:
1 standard deviation equals 68.3%, meaning that the price should stay inside 1 standard deviation 68.3% of the time and be outside of it 31.7% of the time;
2 standard deviations equal 95.5%, meaning that the price should stay inside 2 standard deviations 95.5% of the time and be outside of them 4.5% of the time;
3 standard deviations equal 99.7%, meaning that the price should stay inside 3 standard deviations 99.7% of the time and be outside of them 0.3% of the time.
Therefore, when traders set 'the multiple' to 2, they interpret this as meaning that price will not reach the bands 95.5% of the time.
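For concreteness, the classic construction just described looks like this:

//@version=5
indicator("Classic Bollinger Bands", overlay=true)
int length = 20
float mult = 2.0 // 'the multiple'
float basis = ta.sma(close, length)
float dev = mult * ta.stdev(close, length)
plot(basis, "Basis")
plot(basis + dev, "Upper")
plot(basis - dev, "Lower")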
---------------- The Problem With The Math of Bollinger Bands ----------------
In and of themselves the Bollinger Bands are a great tool, but they have become loaded with an incorrect sense of statistical meaning, when they should really just be taken at face value without any further interpretation or implication.
In order to explain this it is going to get a bit technical, so I will give a little math background and try to simplify things. First let's review some statistics topics (distributions, percentiles, standard deviations) and then, with that understanding, explore the incorrect logic in how Bollinger Bands have been interpreted/employed.
---------------- Quick Stats Review ----------------
.
(If you are comfortable with statistics feel free to skip ahead to the next section)
.
-------- I: Probability distributions --------
When you have a lot of data it is helpful to see how many times different results appear in your dataset. To visualize this, people use "histograms", which just show how many times each element appears in the dataset by stacking each of the same elements on top of each other to form a graph. You may be familiar with the bell curve (also called the "normal distribution", which is what we will call it here). The normal distribution histogram looks like a big hump around zero that drops off very quickly the further you get from it. This shape (the bell curve) is very nice because it has a lot of very nifty mathematical properties and seems to show up in nature all the time. Since it pops up in so many places, society has developed many shortcuts related to it that speed up all kinds of calculations, including the shortcut that 1 standard deviation = 68.3%, 2 standard deviations = 95.5%, and 3 standard deviations = 99.7% (these apply only to the normal distribution). Despite how handy the normal distribution is, how many shortcuts we have for it, and how often it shows up in the natural world, there is nothing that forces your specific dataset to look like it. In fact, your data can actually have any possible shape. As we will explore later, economic and financial datasets *rarely* follow the normal distribution.
-------- II: Percentiles --------
After you have made the histogram of your dataset you have built the "probability distribution" of your own dataset, specific to all the data you have collected. There is a whole complicated framework for how to accurately calculate percentiles, but we will dramatically simplify it for our use. The 'percentile' in our case is just the number of data points we are away from the "middle" of the dataset (normally just 0). Let's say I took the difference of the daily close of a symbol for the last two weeks; green candles would be positive and red would be negative. In this example my dataset of day-by-day closing price differences is:
week 1:
week 2:
sorting all of these value into a single dataset I have:
I can separate the positive and negative returns and explore their distributions separately:
negative return distribution =
positive return distribution =
Taking the 25th percentile of these would just be taking the value that is 25% of the way toward the end of these returns. Likewise, the 100th percentile would just be taking the value that is at the very end of those:
negative return distribution (50%) = -5
positive return distribution (50%) = +4
negative return distribution (100%) = -10
positive return distribution (100%) = +20
Or, instead of separating the positive and negative returns, we can also look at all of the differences in the daily close as just pure price movement and not account for the direction; in this case we would pool all of the data together by ignoring the negative signs of the negative returns:
combined return distribution =
In this case the 50%th and 100%th percentile of the combined return distribution would be:
combined return distribution (50%) = 4
combined return distribution (100%) = 10
Sometimes taking the positive and negative distributions separately is better than pooling them into a combined distribution for some purposes. Other times the combined distribution is better.
Most financial data has very different distributions for negative returns and positive returns. This is encapsulated in sayings like "Price takes the stairs up and the elevator down".
-------- III: Standard Deviation --------
The formula for the standard deviation (referred to here by its shorthand 'STDEV') can be intimidating, but going through each of its elements will illuminate what it does. The formula for STDEV is:
STDEV = square root ( ( sum of (x - mean)^2 over all x ) / N )
Going back to the dataset that you might have, the variables in the formula above are:
'mean' is the average of your entire dataset
'x' is just representative of a single point in your dataset (one point at a time)
'N' is the total number of things in your dataset.
Going back to the STDEV formula above we can see how each part of it works. Starting with the '(x - mean)' part: what this does is take every single point of the dataset and measure how far away it is from the mean of the entire dataset. Taking this value to the power of two, '(x - mean)^2', means that points that are very far away from the dataset mean get 'penalized' disproportionately, while points that are very close to the dataset mean are not impacted as much. In practice, this means that if your dataset had a bunch of values that were in a wide range but always stayed in that range, this value ('(x - mean)^2') would end up being small. On the other hand, if your dataset was full of the exact same number but had a couple of outliers very far away, this would have a much larger value, since the square part of '(x - mean)^2' makes them grow massive. Now including the sum part, 'sum of (x - mean)^2': this just adds up all of the squared distances from the dataset mean. Then this is divided by the number of values in the dataset ('N'), and then the square root of that value is taken.
There is nothing inherently special or definitive about the STDEV formula; it is just a tool with extremely widespread use and adoption. As we saw here, all the STDEV formula is really doing is measuring the intensity of the outliers.
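A from-first-principles sketch of the formula (this matches Pine's built-in ta.stdev with its default biased setting):

//@version=5
indicator("STDEV from first principles")
int N = 20
float mean = ta.sma(close, N)
float sumSq = 0.0
for i = 0 to N - 1
    sumSq += math.pow(close[i] - mean, 2) // (x - mean)^2 for each point
plot(math.sqrt(sumSq / N), "STDEV")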
--------------------------- Flaws of Bollinger Bands ---------------------------
The largest problem with Bollinger Bands is the assumption that price has a normal distribution. This assumption is massively incorrect for many reasons, which I will try to encapsulate in two points:
Price returns do not follow a normal distribution; every single symbol on every single timeframe has its own unique distribution that is specific to only itself. Therefore all the tools, shortcuts, and ideas that we use for normal distributions do not apply to price returns, and since they do not apply here they should not be used. A more general approach is needed that allows each specific symbol on every specific timeframe to be treated uniquely.
The distributions of price returns on the positive and negative side are almost never the same. A more general approach is needed that allows positive and negative returns to be calculated separately.
In addition to the issues with the normal distribution assumption, the standard deviation formula (as shown above in the quick stats review) is essentially just a tame measurement of outliers (a more aggressive form of outlier measurement might take the differences to the power of 3 rather than 2). Although this is a bit of a philosophical question: does the measurement of outlier intensity as defined by the STDEV formula really measure what we want to know as traders when we're experiencing volatility? Or would adjustments to that formula better reflect what we *experience* as volatility when we are actively trading? This is an open-ended question that I will leave here, but I wanted to pose it because it concerns a key part of how the Bollinger Bands work that we all take as a given.
Circling back on the normal distribution assumption, the standard deviation formula used in the calculation of the bands only encompasses the deviation of the candles that go into the moving average and have no knowledge of the historical price action. Therefore the level of the bands may not really reflect how the price action behaves over a longer period of time.
------------ Delivering Factually Accurate Data That Traders Need ------------
In light of the problems identified above, this indicator fixes all of these issues and delivers statistically correct information that discretionary and algorithmic traders can use, with truly accurate probabilities. It takes the price action of the last 2,000 candles and builds a huge dataset of distributions from which you can directly select your percentiles. It also allows you to have the positive and negative distributions calculated separately, or, if you would like, you can pool all of them together in a combined distribution. In addition to this, there is a wide selection of moving averages directly available in the indicator to choose from.
Hedge funds, quant shops, algo prop firms, and advanced mechanical groups all employ the true return distributions in their work. Now you have access to the same type of data with this indicator, wherein it's doing all the lifting for you.
------------------------------ Indicator Settings ------------------------------
.
---- Moving average ----
Select the type of moving average you would like and its length
---- Bands ----
The percentiles that you enter here will be pulled directly from the return distribution of the last 2,000 candles. With the typical Bollinger Bands, traders would select 2 standard deviations and incorrectly think that the levels it highlights are the 95.5% levels. Now, if you want the true 95.5% level, you can just enter 95.5 into the percentile value here. Each of the three available bands takes the true percentile you enter here.
---- Separate Positive & Negative Distributions ----
If this box is checked, the positive and negative distributions are treated independently, completely separate from each other. You will see that the width of the top and bottom bands will be different for each of the percentiles you enter.
If this box is unchecked then all the negative and positive distributions are pooled together. You will notice that the width of the top and bottom bands will be the exact same.
---- Distribution Size ----
This is the number of candles that the price return is calculated over. E.g.: to collect the price return over the last 33 candles, the difference in price from now to 33 candles ago is calculated for each of the last 2,000 candles, building a return distribution of 2,000 price differences, each measured over 33 candles.
NEGATIVE NUMBERS(<0) == exact number of candles to include;
EG: setting this value to -20 will always collect volatility distributions of 20 candles
POSITIVE NUMBERS(>0) == number of candles to include as a multiple of the Moving Average Length value set above;
EG: if the Moving Average Length value is set to 22, setting this value to 2 will use the last 22*2 = 44 candles for the collection of volatility distributions
MORE candles being included will generally make the bands WIDER, and their size will change more SLOWLY over time.
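Putting these settings together, here is a minimal sketch of the pooled (combined-distribution) variant; all names and the 20-bar basis are illustrative, and the real indicator also supports separate positive/negative distributions and many moving average types:

//@version=5
indicator("Percentile bands sketch", overlay=true)
int maLen = 20
int horizon = 33 // distribution size: return measured over 33 candles
float pctile = 95.5 // a true percentile of the observed distribution
var float[] dist = array.new_float()
if not na(close[horizon])
    // pooled distribution: absolute price changes, capped at the last 2,000 points
    array.push(dist, math.abs(close - close[horizon]))
    if array.size(dist) > 2000
        array.shift(dist)
float width = array.size(dist) > 0 ? array.percentile_nearest_rank(dist, pctile) : na
float basis = ta.sma(close, maLen)
plot(basis + width, "Upper", color=color.red)
plot(basis - width, "Lower", color=color.green)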
I wish you focus, dedication, and earnest success on your journey.
Happy trading :)
AiTrend Pattern Matrix for kNN Forecasting (AiBitcoinTrend)

The AiTrend Pattern Matrix for kNN Forecasting (AiBitcoinTrend) is a cutting-edge indicator that combines advanced mathematical modeling, AI-driven analytics, and segment-based pattern recognition to forecast price movements with precision. This tool is designed to provide traders with deep insights into market dynamics by leveraging multivariate pattern detection and sophisticated predictive algorithms.
👽 Core Features
Segment-Based Pattern Recognition
At its heart, the indicator divides price data into discrete segments, capturing key elements like candle bodies, high-low ranges, and wicks. These segments are normalized using ATR-based volatility adjustments to ensure robustness across varying market conditions.
AI-Powered k-Nearest Neighbors (kNN) Prediction
The predictive engine uses the kNN algorithm to identify the closest historical patterns in a multivariate dictionary. By calculating the distance between current and historical segments, the algorithm determines the most likely outcomes, weighting predictions based on either proximity (distance) or averages.
Dynamic Dictionary of Historical Patterns
The indicator maintains a rolling dictionary of historical patterns, storing multivariate data for:
Candle body ranges, High-low ranges, Wick highs and lows.
This dynamic approach ensures the model adapts continuously to evolving market conditions.
Volatility-Normalized Forecasting
Using ATR bands, the indicator normalizes patterns, reducing noise and enhancing the reliability of predictions in high-volatility environments.
AI-Driven Trend Detection
The indicator not only predicts price levels but also identifies market regimes by comparing current conditions to historically significant highs, lows, and midpoints. This allows for clear visualizations of trend shifts and momentum changes.
👽 Deep Dive into the Core Mathematics
👾 Segment-Based Multivariate Pattern Analysis
The indicator analyzes price data by dividing each bar into distinct segments, isolating key components such as:
Body Ranges: Differences between the open and close prices.
High-Low Ranges: Capturing the full volatility of a bar.
Wick Extremes: Quantifying deviations beyond the body, both above and below.
Each segment contributes uniquely to the predictive model, ensuring a rich, multidimensional understanding of price action. These segments are stored in a rolling dictionary of patterns, enabling the indicator to reference historical behavior dynamically.
👾 Volatility Normalization Using ATR
To ensure robustness across varying market conditions, the indicator normalizes patterns using Average True Range (ATR). This process scales each component to account for the prevailing market volatility, allowing the algorithm to compare patterns on a level playing field regardless of differing price scales or fluctuations.
👾 k-Nearest Neighbors (kNN) Algorithm
The AI core employs the kNN algorithm, a machine-learning technique that evaluates the similarity between the current pattern and a library of historical patterns.
Euclidean Distance Calculation:
The indicator computes the multivariate distance across four distinct dimensions: body range, high-low range, wick low, and wick high. This ensures a comprehensive and precise comparison between patterns.
Weighting Schemes: The contribution of each pattern to the forecast is either weighted by its proximity (distance) or averaged, based on user settings.
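A sketch of that four-dimensional distance between two ATR-normalized segments; the feature definitions here are illustrative, not the indicator's exact internals:

//@version=5
indicator("4-D segment distance sketch")
float atrN = ta.atr(14)
// illustrative ATR-normalized segment features for the current bar
float body = math.abs(close - open) / atrN
float hlRange = (high - low) / atrN
float wickHi = (high - math.max(open, close)) / atrN
float wickLo = (math.min(open, close) - low) / atrN
// Euclidean distance across the four dimensions
dist(b1, h1, wl1, wh1, b2, h2, wl2, wh2) =>
    math.sqrt(math.pow(b1 - b2, 2) + math.pow(h1 - h2, 2) + math.pow(wl1 - wl2, 2) + math.pow(wh1 - wh2, 2))
plot(dist(body, hlRange, wickLo, wickHi, body[1], hlRange[1], wickLo[1], wickHi[1]))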
👾 Prediction Horizon and Refinement
The indicator forecasts future price movements (Y_hat) by predicting logarithmic changes in the price and projecting them forward using exponential scaling. This forecast is smoothed using a user-defined EMA filter to reduce noise and enhance actionable clarity.
👽 AI-Driven Pattern Recognition
Dynamic Dictionary of Patterns: The indicator maintains a rolling dictionary of N multivariate patterns, continuously updated to reflect the latest market data. This ensures it adapts seamlessly to changing market conditions.
Nearest Neighbor Matching: At each bar, the algorithm identifies the most similar historical pattern. The prediction is based on the aggregated outcomes of the closest neighbors, providing confidence levels and directional bias.
Multivariate Synthesis: By combining multiple dimensions of price action into a unified prediction, the indicator achieves a level of depth and accuracy unattainable by single-variable models.
Visual Outputs
Forecast Line (Y_hat_line):
A smoothed projection of the expected price trend, based on the weighted contribution of similar historical patterns.
Trend Regime Bands:
Dynamic high, low, and midlines highlight the current market regime, providing actionable insights into momentum and range.
Historical Pattern Matching:
The nearest historical pattern is displayed, allowing traders to visualize the similarities.
👽 Applications
Trend Identification:
Detect and follow emerging trends early using dynamic trend regime analysis.
Reversal Signals:
Anticipate market reversals with high-confidence predictions based on historically similar scenarios.
Range and Momentum Trading:
Leverage multivariate analysis to understand price ranges and momentum, making it suitable for both breakout and mean-reversion strategies.
Disclaimer: This information is for entertainment purposes only and does not constitute financial advice. Please consult with a qualified financial advisor before making any investment decisions.
Intellect_city - Halvings Bitcoin Cycle

What is halving?
The halving timer shows when the next Bitcoin halving will occur, as well as the dates of past halvings. This event occurs every 210,000 blocks, which is approximately every 4 years. Halving reduces the emission reward by half. The original Bitcoin reward was 50 BTC per block found.
Why is halving necessary?
Halving maintains the algorithmically specified emission schedule. Anyone can verify that no more than 21 million bitcoins can be issued under this algorithm. Moreover, everyone can see how much was issued earlier, at what speed emission is happening now, and how many bitcoins remain to be mined in the future. Even a sharp increase or decrease in mining capacity will not significantly affect this process: during the next difficulty recalculation, which occurs every 2016 blocks, the mining difficulty will be adjusted so that blocks are still found approximately once every ten minutes.
How does halving work in Bitcoin blocks?
The miner who assembles the block adds a so-called coinbase transaction. This transaction has no inputs, only an output crediting the newly emitted coins to the miner's address. If the miner's block wins, the entire network will consider these coins to have been obtained through legitimate means. The maximum reward size is determined by the algorithm; the miner can claim the maximum reward for the current period, or less. If he sets the reward higher than allowed, the network will reject the block and the miner will receive nothing. After each halving, miners have to halve the reward they assign to themselves; otherwise their blocks will be rejected and will not make it into the main branch of the blockchain.
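The emission schedule described above can be sketched as a simple function; note that the real consensus code right-shifts the reward in satoshis, truncating any fractions at the very last halvings:

//@version=5
indicator("Block reward sketch")
blockRewardBtc(float height) =>
    float halvings = math.floor(height / 210000)
    50.0 / math.pow(2, halvings)
plot(blockRewardBtc(840000), "Reward") // 3.125 BTC after the 2024 halving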
The impact of halving on the price of Bitcoin
It is believed that, with constant demand, a halving of supply should double the value of the asset. In practice, the market knows when the halving will occur and prepares for this event in advance. Typically, the Bitcoin rate begins to rise about six months before the halving, and during the halving itself it does not change much. On average across past periods, the upper peak of the rate is observed more than a year after the halving. It is almost impossible to predict future periods because, in addition to the reduction in emission, many other factors influence the exchange rate: for example, major hacks or bankruptcies of crypto companies, the situation on the stock market, manipulation by "whales", or changes in legislative regulation.
---------------------------------------------
Table - Past and future Bitcoin halvings:
---------------------------------------------
No. - Date - Block - Reward:
0 - 03-01-2009 - 0 block - 50 BTC
1 - 28-11-2012 - 210000 block - 25 BTC
2 - 09-07-2016 - 420000 block - 12.5 BTC
3 - 11-05-2020 - 630000 block - 6.25 BTC
4 - 20-04-2024 - 840000 block - 3.125 BTC
5 - 24-03-2028 - 1050000 block - 1.5625 BTC
6 - 26-02-2032 - 1260000 block - 0.78125 BTC
7 - 30-01-2036 - 1470000 block - 0.390625 BTC
8 - 03-01-2040 - 1680000 block - 0.1953125 BTC
9 - 07-12-2043 - 1890000 block - 0.09765625 BTC
10 - 10-11-2047 - 2100000 block - 0.04882813 BTC
11 - 14-10-2051 - 2310000 block - 0.02441406 BTC
12 - 17-09-2055 - 2520000 block - 0.01220703 BTC
13 - 21-08-2059 - 2730000 block - 0.00610352 BTC
14 - 25-07-2063 - 2940000 block - 0.00305176 BTC
15 - 28-06-2067 - 3150000 block - 0.00152588 BTC
16 - 01-06-2071 - 3360000 block - 0.00076294 BTC
17 - 05-05-2075 - 3570000 block - 0.00038147 BTC
18 - 08-04-2079 - 3780000 block - 0.00019073 BTC
19 - 12-03-2083 - 3990000 block - 0.00009537 BTC
20 - 13-02-2087 - 4200000 block - 0.00004768 BTC
21 - 17-01-2091 - 4410000 block - 0.00002384 BTC
22 - 21-12-2094 - 4620000 block - 0.00001192 BTC
23 - 24-11-2098 - 4830000 block - 0.00000596 BTC
24 - 29-10-2102 - 5040000 block - 0.00000298 BTC
25 - 02-10-2106 - 5250000 block - 0.00000149 BTC
26 - 05-09-2110 - 5460000 block - 0.00000075 BTC
27 - 09-08-2114 - 5670000 block - 0.00000037 BTC
28 - 13-07-2118 - 5880000 block - 0.00000019 BTC
29 - 16-06-2122 - 6090000 block - 0.00000009 BTC
30 - 20-05-2126 - 6300000 block - 0.00000005 BTC
31 - 23-04-2130 - 6510000 block - 0.00000002 BTC
32 - 27-03-2134 - 6720000 block - 0.00000001 BTC
Adaptive Fisherized Z-score

Hello Fellas,
It's time for a new adaptive fisherized indicator from me, where I apply adaptive length and more to a classic indicator.
Today, I chose the Z-score, also called the standard score, as the indicator of interest.
Special Features
Advanced Smoothing: JMA, T3, Hann Window and Super Smoother
Adaptive Length Algorithms: In-Phase Quadrature, Homodyne Discriminator, Median and Hilbert Transform
Inverse Fisher Transform (IFT)
Signals: Enter Long, Enter Short, Exit Long and Exit Short
Bar Coloring: Presents the trade state as bar colors
Band Levels: Changes the band levels
Decision Making
When you create such a mod, you need to think about which concepts are the best to include. I decided to use the Inverse Fisher Transform instead of normalization, to make a version that fits a fixed scale and avoids the usual distortion created by normalization.
Moreover, I chose JMA, T3, Hann Window and Super Smoother, because JMA and T3 are the bleeding-edge MA's at the moment with the best balance of lag and responsiveness. Additionally, I chose Hann Window and Super Smoother because of their extraordinary smoothing capabilities and because Ehlers favours them.
Furthermore, I decided to use half the length of the dominant cycle instead of the full dominant cycle, to make the indicator more responsive; this is very important for a signal emitter like the Z-score. Signal emitters always need to be as fast as or faster than the filters they are combined with.
Usage
The Z-score is a low-timeframe scalper which works best during choppy/ranging phases. The direction you should trade is determined by the last trend change. E.g., when the last trend change was from bearish to bullish and you are now in a choppy/ranging phase (confirmed by e.g. Chop Zone or KAMA slope), you want to take long trades.
Interpretation
The Z-score indicator is a momentum indicator which shows the number of standard deviations by which the value of a raw score (price/source) is above or below the mean value of what is being observed or measured. Simply put, it is almost the same as Bollinger Bands in a different visual representation.
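The underlying formula is simple; a sketch:

//@version=5
indicator("Z-score sketch")
int len = 20
// number of standard deviations the source sits above/below its mean
float z = (close - ta.sma(close, len)) / ta.stdev(close, len)
plot(z, "Z-score")
hline(0)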
Signals
B -> Buy -> Z-score crosses above lower band
S -> Short -> Z-score crosses below upper band
BE -> Buy Exit -> Z-score crosses above 0
SE -> Sell Exit -> Z-score crosses below 0
If you were reading till here, thank you already. Now, follows a bunch of knowledge for people who don't know the concepts I talk about.
T3
The T3 moving average, short for "Tim Tillson's Triple Exponential Moving Average," is a technical indicator used in financial markets and technical analysis to smooth out price data over a specific period. It was developed by Tim Tillson, a software project manager at Hewlett-Packard, with expertise in Mathematics and Computer Science.
The T3 moving average is an enhancement of the traditional Exponential Moving Average (EMA) and aims to overcome some of its limitations. The primary goal of the T3 moving average is to provide a smoother representation of price trends while minimizing lag compared to other moving averages like Simple Moving Average (SMA), Weighted Moving Average (WMA), or EMA.
Computing the T3 moving average involves a triple smoothing process using exponential moving averages. Here's how it works:
Calculate the first exponential moving average (EMA1) of the price data over a specific period 'n.'
Calculate the second exponential moving average (EMA2) of EMA1 using the same period 'n.'
Calculate the third exponential moving average (EMA3) of EMA2 using the same period 'n.'
The formula for the T3 moving average is as follows:
T3 = 3 * (EMA1) - 3 * (EMA2) + (EMA3)
By applying this triple smoothing process, the T3 moving average is intended to offer reduced noise and improved responsiveness to price trends. It achieves this by incorporating multiple time frames of the exponential moving averages, resulting in a more accurate representation of the underlying price action.
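A direct sketch of the triple-smoothing combination as given above; note that this particular combination is also the formula of the Triple Exponential Moving Average (TEMA), while Tillson's full T3 additionally weights the EMAs by a volume factor:

//@version=5
indicator("Triple EMA smoothing sketch", overlay=true)
int n = 20
float ema1 = ta.ema(close, n)
float ema2 = ta.ema(ema1, n)
float ema3 = ta.ema(ema2, n)
plot(3 * ema1 - 3 * ema2 + ema3, "T3 (as described)")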
JMA
The Jurik Moving Average (JMA) is a technical indicator used in trading to predict price direction. Developed by Mark Jurik, it’s a type of weighted moving average that gives more weight to recent market data rather than past historical data.
JMA is known for its superior noise elimination. It’s a causal, nonlinear, and adaptive filter, meaning it responds to changes in price action without introducing unnecessary lag. This makes JMA a world-class moving average that tracks and smooths price charts or any market-related time series with surprising agility.
In comparison to other moving averages, such as the Exponential Moving Average (EMA), JMA is known to track fast price movement more accurately. This allows traders to apply their strategies to a more accurate picture of price action.
Inverse Fisher Transform
The Inverse Fisher Transform is a transform used in DSP to alter the Probability Distribution Function (PDF) of a signal or in our case of indicators.
The result of using the Inverse Fisher Transform is that the output has a very high probability of being either +1 or –1. This bipolar probability distribution makes the Inverse Fisher Transform ideal for generating an indicator that provides clear buy and sell signals.
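In closed form, IFT(x) = (e^(2x) - 1) / (e^(2x) + 1), which is tanh(x). A sketch using the classic scaled-RSI example:

//@version=5
indicator("Inverse Fisher Transform sketch")
ift(float x) =>
    (math.exp(2 * x) - 1) / (math.exp(2 * x) + 1)
// compress a scaled oscillator into the (-1, +1) range
float x = 0.1 * (ta.rsi(close, 14) - 50)
plot(ift(x), "IFT")
hline(0)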
Hann Window
The Hann function (aka Hann Window) is named after the Austrian meteorologist Julius von Hann. It is a window function used to perform Hann smoothing.
Super Smoother
The Super Smoother uses a special mathematical process for the smoothing of data points.
The Super Smoother is a technical analysis indicator designed to be smoother and with less lag than a traditional moving average.
Adaptive Length
Length based on the dominant cycle length measured by a "dominant cycle measurement" algorithm.
Happy Trading!
Best regards,
simwai
---
Credits to
@cheatcountry
@everget
@loxx
@DasanC
@blackcat1402
Support & Resistance AI (K means/median) [ThinkLogicAI]

█ OVERVIEW
K-means is a clustering algorithm commonly used in machine learning to group data points into distinct clusters based on their similarities. While K-means is not typically used directly for identifying support and resistance levels in financial markets, it can serve as a tool in a broader analysis approach.
Support and resistance levels are price levels in financial markets where the price tends to react or reverse. Support is a level where the price tends to stop falling and might start to rise, while resistance is a level where the price tends to stop rising and might start to fall. Traders and analysts often look for these levels as they can provide insights into potential price movements and trading opportunities.
█ BACKGROUND
The K-means algorithm has been around since the late 1950s, making it more than six decades old. The algorithm was introduced by Stuart Lloyd in his 1957 research paper "Least squares quantization in PCM" for telecommunications applications. However, it wasn't widely known or recognized until James MacQueen's 1967 paper "Some Methods for Classification and Analysis of Multivariate Observations," where he formalized the algorithm and referred to it as the "K-means" clustering method.
So, while K-means has been around for a considerable amount of time, it continues to be a widely used and influential algorithm in the fields of machine learning, data analysis, and pattern recognition due to its simplicity and effectiveness in clustering tasks.
█ COMPARE AND CONTRAST SUPPORT AND RESISTANCE METHODS
1) K-means Approach:
Cluster Formation: After applying the K-means algorithm to historical price change data and visualizing the resulting clusters, traders can identify distinct regions on the price chart where clusters are formed. Each cluster represents a group of similar price change patterns.
Cluster Analysis: Analyze the clusters to identify areas where clusters tend to form. These areas might correspond to regions of price behavior that repeat over time and could be indicative of support and resistance levels.
Potential Support and Resistance Levels: Based on the identified areas of cluster formation, traders can consider these regions as potential support and resistance levels. A cluster forming at a specific price level could suggest that this level has been historically significant, causing similar price behavior in the past.
Cluster Standard Deviation: In addition to looking at the means (centroids) of the clusters, traders can also calculate the standard deviation of price changes within each cluster. Standard deviation is a measure of the dispersion or volatility of data points around the mean. A higher standard deviation indicates greater price volatility within a cluster.
Low Standard Deviation: If a cluster has a low standard deviation, it suggests that prices within that cluster are relatively stable and less likely to exhibit sudden and large price movements. Traders might consider placing tighter stop-loss orders for trades within these clusters.
High Standard Deviation: Conversely, if a cluster has a high standard deviation, it indicates greater price volatility within that cluster. Traders might opt for wider stop-loss orders to allow for potential price fluctuations without getting stopped out prematurely.
Cluster Density: Each data point is assigned to a cluster, so a denser cluster acts more like gravity, making it more likely that price reacts at that level.
2) Traditional Approach:
Trendlines: Draw trendlines connecting significant highs or lows on a price chart to identify potential support and resistance levels.
Chart Patterns: Identify chart patterns like double tops, double bottoms, head and shoulders, and triangles that often indicate potential reversal points.
Moving Averages: Use moving averages to identify levels where the price might find support or resistance based on the average price over a specific period.
Psychological Levels: Identify round numbers or levels that traders often pay attention to, which can act as support and resistance.
Previous Highs and Lows: Identify significant previous price highs and lows that might act as support or resistance.
The key difference lies in the approach and the foundation of these methods. Traditional methods are based on well-established principles of technical analysis and market psychology, while the K-means approach involves clustering price behavior without necessarily incorporating market sentiment or specific price patterns.
It's important to note that while the K-means approach might provide an interesting way to analyze price data, it should be used cautiously and in conjunction with other traditional methods. Financial markets are influenced by a wide range of factors beyond just price behavior, and the effectiveness of any method for identifying support and resistance levels should be thoroughly tested and validated. Additionally, developments in trading strategies and analysis techniques could have occurred since my last update.
█ K MEANS ALGORITHM
The algorithm for K-means is as follows:
1. Initialize the cluster centers.
2. Assign each data point to a cluster based on minimum distance.
3. Recalculate each cluster center by taking the average (or median) of its cluster.
4. Repeat steps 2 and 3 until the cluster centers stop moving (see the sketch after this list).
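A one-dimensional sketch of those steps over recent closing prices; the cluster count, lookback, and fixed iteration cap are illustrative:

//@version=5
indicator("1-D k-means sketch", overlay=true, max_bars_back=500)
int k = 3
int lookback = 200
if barstate.islast
    // the training window
    float[] data = array.new_float()
    for i = 0 to lookback - 1
        array.push(data, close[i])
    // step 1: initialize centers evenly between the window's min and max
    float lo = array.min(data)
    float hi = array.max(data)
    float[] centers = array.new_float()
    for j = 0 to k - 1
        array.push(centers, lo + (hi - lo) * j / (k - 1))
    // steps 2-3, repeated a fixed number of times (a convergence check also works)
    for iter = 1 to 10
        float[] sums = array.new_float(k, 0.0)
        float[] cnts = array.new_float(k, 0.0)
        for i = 0 to array.size(data) - 1
            float x = array.get(data, i)
            int best = 0
            float bestD = math.abs(x - array.get(centers, 0))
            for j = 1 to k - 1
                float dAbs = math.abs(x - array.get(centers, j))
                if dAbs < bestD
                    bestD := dAbs
                    best := j
            array.set(sums, best, array.get(sums, best) + x)
            array.set(cnts, best, array.get(cnts, best) + 1)
        for j = 0 to k - 1
            if array.get(cnts, j) > 0
                array.set(centers, j, array.get(sums, j) / array.get(cnts, j))
    // draw each center as a support/resistance line
    for j = 0 to k - 1
        line.new(bar_index - lookback, array.get(centers, j), bar_index, array.get(centers, j))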
█ LIMITATIONS OF K MEANS
There are 3 main limitations of this algorithm:
Sensitive to Initializations: K-means is sensitive to the initial placement of centroids. Different initializations can lead to different cluster assignments and final results.
Assumption of Equal Sizes and Variances: K-means assumes that clusters have roughly equal sizes and spherical shapes. This may not hold true for all types of data. It can struggle with identifying clusters with uneven densities, sizes, or shapes.
Impact of Outliers: K-means is sensitive to outliers, as a single outlier can significantly affect the position of cluster centroids. Outliers can lead to the creation of spurious clusters or distortion of the true cluster structure.
█ LIMITATIONS IN APPLICATION OF K MEANS IN TRADING
Trading data often exhibits characteristics that can pose challenges when applying indicators and analysis techniques. Here's how the limitations of outliers, varying scales, and unequal variance can impact the use of indicators in trading:
Outliers are data points that significantly deviate from the rest of the dataset. In trading, outliers can represent extreme price movements caused by rare events, news, or market anomalies. Outliers can have a significant impact on trading indicators and analyses:
Indicator Distortion: Outliers can skew the calculations of indicators, leading to misleading signals. For instance, a single extreme price spike could cause indicators like moving averages or RSI (Relative Strength Index) to give false signals.
Risk Management: Outliers can lead to overly aggressive trading decisions if not properly accounted for. Ignoring outliers might result in unexpected losses or missed opportunities to adjust trading strategies.
Different Scales: Trading data often includes multiple indicators with varying units and scales. For example, prices are typically in dollars, volume in units traded, and oscillators have their own scale. Mixing indicators with different scales can complicate analysis:
Normalization: Indicators on different scales need to be normalized or standardized to ensure they contribute equally to the analysis. Failure to do so can lead to one indicator dominating the analysis due to its larger magnitude.
Comparability: Without normalization, it's challenging to directly compare the significance of indicators. Some indicators might have a larger numerical range and could overshadow others.
Unequal Variance: Unequal variance in trading data refers to the fact that some indicators might exhibit higher volatility than others. This can impact the interpretation of signals and the performance of trading strategies:
Volatility Adjustment: When combining indicators with varying volatility, it's essential to adjust for their relative volatilities. Failure to do so might lead to overemphasizing or underestimating the importance of certain indicators in the trading strategy.
Risk Assessment: Unequal variance can impact risk assessment. Indicators with higher volatility might lead to riskier trading decisions if not properly taken into account.
█ APPLICATION OF THIS INDICATOR
This indicator can be used in 2 ways:
1) Make a directional trade:
If a trader thinks price will go higher or lower and price is within a cluster zone, the trader can take a position and place a stop at the 1 SD band around the cluster. As one can see below, the trader can go long at the green arrow and place a stop at the one-standard-deviation mark for that cluster below it, at the red arrow. Using this we can calculate a risk-to-reward ratio.
Calculating risk to reward: targeting a risk-reward ratio of 2:1, the trader could clearly achieve that, given that the next resistance area above, in the orange cluster, exceeds this ratio.
2) Take a reversal Trade:
We can use cluster centers (support and resistance levels) to go in the opposite direction that price is currently moving in hopes of price forming a pivot and reversing off this level.
Similar to the directional trade, we can use the standard deviation of the cluster to place a stop just in case we are wrong.
In this example below we can see that shorting on the red arrow and placing a stop at the one standard deviation above this cluster would give us a profitable trade with minimal risk.
Using the cluster density table in the upper right informs the trader just how dense the cluster is. Higher density clusters will give a higher likelihood of a pivot forming at these levels and price being rejected and switching direction with a larger move.
█ FEATURES & SETTINGS
General Settings:
Number of clusters: The user can select from three to five clusters. A good rule of thumb is that if you are trading intraday, less is more (think 3 rather than 5). For daily charts, 4 to 5 clusters is good.
Cluster Method: To get around the outlier limitation of K-means clustering, the median was added. This gives the user the ability to choose either K-means or K-medians clustering. K-means is the preferred method if the user thinks there are no large outliers; if there appear to be large outliers, or it is assumed there are, then K-medians is preferred.
Bars back to train on: This is the number of bars to include in the clustering. This number is important: include bars that are recent, but not so far back that they are out of the scope of where price can be. For example, the S&P 500 has been in a range for the last two years, so 505 days in this setting would be more relevant than looking back five years, because price would have to move far to get there.
Show SD Bands: Select this to show the 1 standard deviation bands around the support and resistance level or unselect this to just show the support and resistance level by itself.
Features:
Besides the support and resistance levels and standard deviation bands, this indicator provides a table in the upper right-hand corner showing the density of each cluster (support and resistance level), color-coded to the cluster line on the chart. Higher-density clusters mean price has been there more often than at lower-density clusters, which could mean a higher likelihood of a reversal when price reaches these areas.
█ WORKS CITED
Victor Sim, "Using K-means Clustering to Create Support and Resistance", 2020, towardsdatascience.com
Chris Piech, "K means", stanford.edu
█ ACKNOWLEDGMENTS
@jdehorty - Thanks for the publish template. It made organizing my thoughts and work a lot easier.
Recursive Micro Zigzag

🎲 Overview
Zigzag is a basic building block for any pattern recognition algorithm. This indicator is a research-oriented tool that combines the concepts of Micro Zigzag and Recursive Zigzag to facilitate comprehensive analysis of price patterns. It focuses on deriving zigzags at multiple levels in a more efficient and enhanced manner in order to support enhanced pattern recognition.
The Recursive Micro Zigzag Indicator utilises the Micro Zigzag as the foundation and applies the Recursive Zigzag technique to derive higher-level zigzags. By integrating these techniques, this indicator enables researchers to analyse price patterns at multiple levels and gain a deeper understanding of market behaviour.
🎲 Concept:
Micro Zigzag Base : The indicator utilises the Micro Zigzag concept to capture detailed price movements within each candle. It allows for the visualisation of the sequential price action within the candle, aiding in pattern recognition at a micro level.
Basic implementation of micro zigzag can be found in this link - Micro-Zigzag
Recursive Zigzag Expansion : Building upon the Micro Zigzag base, the indicator applies the Recursive Zigzag concept to derive higher-level zigzags. Through recursive analysis of the Micro Zigzag's pivots, the indicator uncovers intricate patterns and trends that may not be evident in single-level zigzags.
Earlier implementations of recursive zigzag can be found here:
Recursive Zigzag
Recursive Zigzag - Trendoscope
And the libraries
rZigzag
ZigzagMethods
The major differences in this implementation are
Micro Zigzag Base - The earlier implementations used the standard zigzag as the base, whereas this implementation uses the Micro Zigzag.
No cap on pivot depth - The earlier implementations were limited by the depth of the level-0 zigzag. In this implementation, we build the recursive algorithm progressively so that there is no cap on the depth of the level-0 zigzag. However, if we go for higher levels, there is a chance of the program timing out due to Pine limitations.
These algorithms are useful for automatically spotting patterns on the chart, including Harmonic Patterns, Chart Patterns, Elliott Waves and many more.
cbnd

Library "cbnd"
Description:
A standalone Cumulative Bivariate Normal Distribution (CBND) functions that do not require any external libraries.
This includes 3 different CBND calculations: Drezner(1978), Drezner and Wesolowsky (1990), and Genz (2004)
Comments:
The standardized cumulative normal distribution function returns the probability that one random
variable is less than a and that a second random variable is less than b when the correlation
between the two variables is p. Since no closed-form solution exists for the bivariate cumulative
normal distribution, we present three approximations. The first one is the well-known
Drezner (1978) algorithm. The second one is the more efficient Drezner and Wesolowsky (1990)
algorithm. The third is the Genz (2004) algorithm, which is the most accurate one and therefore
our recommended algorithm. West (2005b) and Agca and Chance (2003) discuss the speed and
accuracy of bivariate normal distribution approximations for use in option pricing in
more detail.
Reference:
The Complete Guide to Option Pricing Formulas, 2nd ed. (Espen Gaarder Haug)
CBND1(A, b, rho)
Returns the Cumulative Bivariate Normal Distribution (CBND) using Drezner 1978 Algorithm
Parameters:
A : float,
b : float,
rho : float,
Returns: float.
CBND2(A, b, rho)
Returns the Cumulative Bivariate Normal Distribution (CBND) using Drezner and Wesolowsky (1990) function
Parameters:
A : float,
b : float,
rho : float,
Returns: float.
CBND3(x, y, rho)
Returns the Cumulative Bivariate Normal Distribution (CBND) using Genz (2004) algorithm (this is the preferred method)
Parameters:
x : float,
y : float,
rho : float,
Returns: float.
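For orientation, here is a minimal usage sketch in Pine v5. The import path "username/cbnd/1" is a placeholder, not the real published path, so substitute the library's actual reference before use:
//@version=5
indicator("CBND usage sketch")
// Placeholder import path - replace with the library's actual published reference.
import username/cbnd/1 as cbnd
// Probability that X < 0.5 and Y < -0.25 when the correlation is 0.3,
// using the recommended Genz (2004) approximation.
p = cbnd.CBND3(0.5, -0.25, 0.3)
plot(p, "CBND3")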
STD-Stepped Fast Cosine Transform Moving Average [Loxx]STD-Stepped Fast Cosine Transform Moving Average is an experimental moving average that uses Fast Cosine Transform to calculate a moving average. This indicator has standard deviation stepping in order to smooth the trend by weeding out low volatility movements.
What is the Discrete Cosine Transform?
A discrete cosine transform (DCT) expresses a finite sequence of data points in terms of a sum of cosine functions oscillating at different frequencies. The DCT, first proposed by Nasir Ahmed in 1972, is a widely used transformation technique in signal processing and data compression. It is used in most digital media, including digital images (such as JPEG and HEIF, where small high-frequency components can be discarded), digital video (such as MPEG and H.26x), digital audio (such as Dolby Digital, MP3 and AAC), digital television (such as SDTV, HDTV and VOD), digital radio (such as AAC+ and DAB+), and speech coding (such as AAC-LD, Siren and Opus). DCTs are also important to numerous other applications in science and engineering, such as digital signal processing, telecommunication devices, reducing network bandwidth usage, and spectral methods for the numerical solution of partial differential equations.
The use of cosine rather than sine functions is critical for compression, since it turns out (as described below) that fewer cosine functions are needed to approximate a typical signal, whereas for differential equations the cosines express a particular choice of boundary conditions. In particular, a DCT is a Fourier-related transform similar to the discrete Fourier transform (DFT), but using only real numbers. The DCTs are generally related to Fourier Series coefficients of a periodically and symmetrically extended sequence whereas DFTs are related to Fourier Series coefficients of only periodically extended sequences. DCTs are equivalent to DFTs of roughly twice the length, operating on real data with even symmetry (since the Fourier transform of a real and even function is real and even), whereas in some variants the input and/or output data are shifted by half a sample. There are eight standard DCT variants, of which four are common.
The most common variant of discrete cosine transform is the type-II DCT, which is often called simply "the DCT". This was the original DCT as first proposed by Ahmed. Its inverse, the type-III DCT, is correspondingly often called simply "the inverse DCT" or "the IDCT". Two related transforms are the discrete sine transform (DST), which is equivalent to a DFT of real and odd functions, and the modified discrete cosine transform (MDCT), which is based on a DCT of overlapping data. Multidimensional DCTs (MD DCTs) have been developed to extend the concept of DCT to MD signals. There are several algorithms to compute MD DCT. A variety of fast algorithms have been developed to reduce the computational complexity of implementing DCT. One of these is the integer DCT (IntDCT), an integer approximation of the standard DCT, used in several ISO/IEC and ITU-T international standards.
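To make the type-II DCT concrete, here is a naive O(N²) Pine v5 sketch computed over the last n closes. It is illustrative only; the indicator itself uses a fast transform:
//@version=5
indicator("DCT-II sketch")
// Naive type-II DCT: X[k] = sum over i of x[i] * cos(pi / n * (i + 0.5) * k)
dct2(int n) =>
    float[] out = array.new_float(n, 0.0)
    for k = 0 to n - 1
        float s = 0.0
        for i = 0 to n - 1
            // close[n - 1 - i] orders the window oldest-to-newest
            s += close[n - 1 - i] * math.cos(math.pi / n * (i + 0.5) * k)
        array.set(out, k, s)
    out
coeffs = dct2(16)
plot(array.get(coeffs, 1), "First non-constant DCT coefficient")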
Notable settings
windowper = period for calculation, restricted to powers of 2: "16", "32", "64", "128", "256", "512", "1024", "2048". The reason for this is that the FFT is an algorithm that computes the DFT (Discrete Fourier Transform) in a fast way, generally in O(N·log2(N)) time instead of O(N²). To achieve this, the input matrix has to be a power of 2, though many FFT algorithms can handle any input size by zero-padding the matrix. For our purposes here, we stick to powers of 2 to keep this fast and neat. Read more about this here: Cooley–Tukey FFT algorithm
smthper = smoothing count. This smoothing happens after the first FCT regular pass and zeros out frequencies above the SS count from the previously calculated values. The lower this number, the smoother the output; it works opposite to other smoothing periods.
Included
Alerts
Signals
Loxx's Expanded Source Types
Additional reading
A Fast Computational Algorithm for the Discrete Cosine Transform by Chen et al.
Practical Fast 1-D DCT Algorithms With 11 Multiplications by Loeffler et al.
Cooley–Tukey FFT algorithm
Weighted Burg AR Spectral Estimate Extrapolation of Price [Loxx]Weighted Burg AR Spectral Estimate Extrapolation of Price is an indicator that uses an autoregressive spectral estimation method called the Weighted Burg Algorithm. This method is commonly used in speech modeling and speech prediction engines, and it also incorporates the Levinson–Durbin algorithm, which was discussed previously in the following indicator:
Levinson-Durbin Autocorrelation Extrapolation of Price
What is Levinson recursion or Levinson–Durbin recursion?
In many applications, the duration of an uninterrupted measurement of a time series is limited. However, it is often possible to obtain several separate segments of data. The estimation of an autoregressive model from this type of data is discussed. A straightforward approach is to take the average of models estimated from each segment separately. In this way, the variance of the estimated parameters is reduced. However, averaging does not reduce the bias in the estimate. With the Burg algorithm for segments, both the variance and the bias in the estimated parameters are reduced by fitting a single model to all segments simultaneously. As a result, the model estimated with the Burg algorithm for segments is more accurate than models obtained with averaging. The new weighted Burg algorithm for segments allows combining segments of different amplitudes.
The Burg algorithm estimates the AR parameters by determining reflection coefficients that minimize the sum of for-ward and backward residuals. The extension of the algorithm to segments is that the reflection coefficients are estimated by minimizing the sum of forward and backward residuals of all segments taken together. This means a single model is fitted to all segments in one time. This concept is also used for prediction error methods in system identification, where the input to the system is known, like in ARX modeling
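To ground the description, here is a sketch of the plain (rectangular-window) Burg recursion in Pine v5. The published indicator adds segment handling and weighting windows on top of this, so treat it as the unweighted core only:
//@version=5
indicator("Burg recursion sketch")
// Plain Burg AR fit of order p over array x; returns coefficients a[0..p] with a[0] = 1.
// Assumes array.size(x) is comfortably larger than p.
burg(float[] x, int p) =>
    int n = array.size(x)
    float[] a = array.new_float(1, 1.0)
    float[] f = array.copy(x)  // forward prediction errors
    float[] b = array.copy(x)  // backward prediction errors
    for k = 1 to p
        // reflection coefficient that minimizes summed forward + backward residuals
        float num = 0.0
        float den = 0.0
        for i = k to n - 1
            num += array.get(f, i) * array.get(b, i - 1)
            den += math.pow(array.get(f, i), 2) + math.pow(array.get(b, i - 1), 2)
        float rc = den > 0 ? -2.0 * num / den : 0.0
        // Levinson-style coefficient update: a[j] += rc * a_old[k - j]
        array.push(a, 0.0)
        float[] aOld = array.copy(a)
        for j = 1 to k
            array.set(a, j, array.get(aOld, j) + rc * array.get(aOld, k - j))
        // update residuals in place, descending so each read sees the old value
        for i = n - 1 to k
            float fi = array.get(f, i)
            float bi = array.get(b, i - 1)
            array.set(f, i, fi + rc * bi)
            array.set(b, i, bi + rc * fi)
    a
// demo: fit an AR(4) to a rolling window of the last 64 closes
var float[] win = array.new_float()
if array.size(win) >= 64
    array.shift(win)
array.push(win, close)
float a1 = na
if array.size(win) >= 16
    a1 := array.get(burg(win, 4), 1)
plot(a1, "AR coefficient a1")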
Data inputs
Source Settings: -Loxx's Expanded Source Types. You typically use "open" since open has already closed on the current active bar
LastBar - bar where to start the prediction
PastBars - how many bars back to model
LPOrder - order of linear prediction model; 0 to 1
FutBars - how many bars you want to forward predict
BurgWin - weighing function index, rectangular, hamming, or parabolic
Things to know
Normally, a simple moving average is calculated on source data. I've expanded this to 38 different averaging methods using Loxx's Moving Averages.
This indicator repaints
Included
Bar color muting
Further reading
Performance of the Weighted Burg Methods of AR Spectral Estimation for Pitch-Synchronous Analysis of Voiced Speech
The Burg algorithm for segments
Techniques for the Enhancement of Linear Predictive Speech Coding in Adverse Conditions
Related Indicators
Synthetic Price Action GeneratorNOTICE:
First thing you need to know: it "DOES NOT" reflect the price of the ticker you load it on. THIS IS NOT AN INDICATOR FOR TRADING! It's a developer tool solely generating random values that look exactly like the fractals we observe every single day. This script's generated candles are as fake as the never-ending garbage news cycles we are often force-fed and expected to believe, using carefully scripted narratives peddled as hypnotic truth to psychologically and emotionally influence you to the point of control by coercion and subjugation. I wanted to make the script's synthetic nature very clear using that analogy; it's dynamically artificial. Do not become disillusioned by this script's values or make trading decisions from it, and lastly don't fall victim to predatory media magic ministry parrots with pretty, handsome smiles compelling you to board their ferris wheel of fear. Now, on to the good stuff...
BACKSTORY:
Occasionally I find myself in situations where I have to build analyzers in Pine just to engineer novel quantitative analytic indicators and tools worthy of future use. These analyzers certainly don't exist on this platform, but they are usually required to engineer and tweak algorithms of the highest quality with the finest computational caliber. I have numerous other synthesizers to publish besides this one.
For many reasons, I needed a synthetic environment to utilize the analyzers I built in Pine, to even pursue building some exotic indicators and algorithms. Pine doesn't allow sourcing of tuples. Not to mention, I required numerous Pine advancements to make long-held dreams into tangible realities. Many Pine upgrades have arrived and MANY, MANY more are in need of implementation for all. Now that I have this, intending to use it often in the future when in need, you can now use it too. I do anticipate some skilled Pine poets will employ this handy utility to design and/or improve indicators for trading.
ORIGIN:
This was inspired by the brilliance of the world-renowned ALGOmist John F. Ehlers, but it has taken on a completely alien form from its original DNA. While browsing the internet for something else, I came across an article with a small code snippet, and I remembered an old wish of mine. I have long known that flipping back and forth between specific tickers and timeframes in my Watchlist is not the most efficient way to evaluate indicators in multiple theatres of price action. I realized I had always wanted to possess and use this sort of tool, so... I put it into Pine form, but have now decided to inject it with Pine Script steroids. The outcome is highly mutable candle formations in a reusable mutagenic package, observable above and masquerading as genuine-looking price candles.
OVERVIEW:
I guess you could call it a price action synthesizer, but I entitled it "Synthetic Price Action Generator" for those who may be searching for such a thing. You may find this more useful on the All or 5Y charts initially to witness indication from beginning (barstate.isfirst, i.e. bar_index == 0) to end (last_bar_index), but you may also use keyboard shortcuts + + to view the earliest plottable bars on any timeframe. I often use that keyboard shortcut to qualify an indicator through the entirety of its runtime.
A lot can go wrong unexpectedly with indicator initialization, and you will never know it if you don't inspect it. Many recursively endowed Infinite Impulse Response (IIR) filters can initialize with unintended results that minutely ring in slightly erroneous fashion for the entire runtime, beginning to end, causing deviations from "what should have been..." values with false signals. Looking closely at spg(), you will recognize that 3 EMAs are employed to manage and maintain randomness of CLOSE, HIGH, and LOW. In fact, any indicator's bar_index == 0 initialization can be inspected with the keyboard shortcuts above. If you see anything obviously strange in an author's indicator, please contact the developer if possible and respectfully notify them.
PURPOSE:
The primary intended application of this script is to offer developers from advanced to even novice skill levels assistance with building next-generation indicators. Mostly, its purpose is for testing and troubleshooting indicators AND evaluating how they perform in a "manageable" randomized environment. Sometimes indicators flake out on rare but problematic price fluctuations, and this may help you find your issues/errata sooner rather than later. While the candles upon initial loading look pristine, by tweaking it to the limits of the minval/maxval parameters OR beyond with a few code modifications, you can generate unusual volatility, for instance... huge wicks. The minval= and maxval= limits are by default set to a comfort zone of operation. Massive wicks or candle bodies will undoubtedly affect your indication and often render them useless on tickers that exhibit that behavior, like WGMCF intraday currently.
Copy/paste boundaries are provided for relevant insertion into another script. Paste placement should happen at the very top of a script. Note that by overwriting the close, open, high, etc... values, your compiler will give you generous warnings of "variable shadowing" in abundance, but this is an expected part of applying it to your novel script, no worries. plotcandle() can be copied over too and enabled/disabled in Settings->Style. Always remember to fully remove this script's code and those assignments properly before actual trading use of your script occurs, AND specifically when publishing. The entirety of this provided code should never, never exist in a published indicator.
OTHER INTENTIONS:
Even though these are 100% synthetically generated price points, you will notice that ALL of the fractal pseudo-patterns that commonly exist in the markets occur naturally with this generator too. You can also swiftly immerse yourself in pattern recognition exercises with increased efficiency in real time by clicking any SPAG Setting in focus and then using the up/down arrow keys. I hope I explained potential uses adequately...
On a personal note, the existence of fractal symmetry often makes me wonder: do we truly live in a totally chaotic universe, or is it mathematically ordered for some outcomes to a certain extent? I think both. My observation: it's a pre-deterministic reality completely influenced by infinitesimal amounts of sentient free will in unimaginable existing and emerging quantities. Somehow an unknown mysterious mechanism governing the totality of universal physics and mathematics counts this 100.0% flawlessly and perpetually. Anyways, you can't change the past that long existed before your birth or even yesterday, but you can choose to dream, create, and forge the future into your desires and hopes. As always, shite always happens when you're not looking for it. What you choose to do after stepping in it unintentionally... is totally up to you. :) Maybe this tool and tips provided will aid you in not stepping in an algo cachucha up to your ankles somehow.
SCRIPTING LESSONS PORTRAYED IN THIS SCRIPT:
Pine etiquette and code cleanliness
Overwrite capabilities of built-in Pine variables for testing indicators
Various techniques to organize Settings panel while providing ease of adjustment utility
Use of tooltip= to provide users adequate valuable information. Most people want to trade with indicators, not blindly make adjustments to them without any knowledge of their intended operation/effects
When available time provides itself, I will consider your inquiries, thoughts, and concepts presented below in the comments section, should you have any questions or comments regarding this indicator. When my indicators achieve more prevalent use by TV members, I may implement more ideas when they present themselves as worthy additions. Have a profitable future everyone!
the "fasle" hull moving averageThere is a little different between my "fasle hull moving average" the "correct one".
the correct algorithm:
hma = wma(2*wma(close, n/2) - wma(close, n), sqrt(n))
the "fasle" algorithm:
hma = wma(2*wma(close, n/4) - wma(close, n), sqrt(n))
Amazing! Why does the "fasle" version describe the trend so accurately!?
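For anyone who wants to compare both on a chart, here is a runnable Pine v5 version of the two formulas (lengths clamped so wma receives whole numbers):
//@version=5
indicator("HMA: correct vs 'fasle'", overlay=true)
n = input.int(21, "Length", minval=4)
sqrtLen = math.max(1, math.round(math.sqrt(n)))
hmaCorrect = ta.wma(2 * ta.wma(close, math.max(1, n / 2)) - ta.wma(close, n), sqrtLen)
hmaFasle = ta.wma(2 * ta.wma(close, math.max(1, n / 4)) - ta.wma(close, n), sqrtLen)
plot(hmaCorrect, "correct HMA", color.teal)
plot(hmaFasle, "'fasle' HMA", color.orange)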
Finite Difference - Backward (mcbw_)In calculus there exists a 'derivative', which simply just measures the difference between two points on a curve. For well behaved mathematical functions there are infinitely many points and so there exists a derivative at every point. Where there are infinitely many points in a curve that curve is called 'continuous'. Continuous curves are very nice to deal with since each point on it exists almost exactly where its neighbors are. However, if the curve does not have infinitely many points on it, but instead has a finite number of points on it, that curve is called 'discrete' instead of continuous. Taking the derivative of discrete curves is much trickier business since there are none of the mathematical conveniences that a continuous offers. In the real world everything we measure is a discrete curve, including Price (since we measure it a finite number of times, aka each candlestick)!
The branch of discrete mathematics has found an approach to measure the derivative along a discrete curve; that approach is aptly called "finite difference". To get a more accurate approximation of a discrete derivative, the finite difference approach uses weighted combinations of neighboring points. The most common type of finite difference is a 'central' difference, which uses a combination of points before and after the point of interest to approximate the discrete derivative. This is great for historical analysis but is not of much use for trading algorithms, since it technically means using future prices to calculate the derivative of the current point. Instead we can use a less common variant called a 'backward difference' that only uses a combination of points before the current one to approximate the current derivative.
In this script you can choose the "Order" of your derivative and the "Accuracy" of its approximation. This script is for educational purposes, for folks building trading algorithms. Many trading algorithms have an element of seeing how much price has changed from the previous candle to the current candle. That approach is the lowest-accuracy derivative possible; with backward finite differences, made available for the first time on TradingView (!!), algorithms that use derivatives can now have higher orders of accuracy!
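As a concrete Pine v5 example, here are backward-difference approximations of the first derivative at first- and second-order accuracy (bracketed offsets are bars back, with a unit step between bars):
//@version=5
indicator("Backward finite difference sketch")
// 1st-order accurate: f'(x) ≈ f[0] - f[1], the usual bar-to-bar change
d1 = close - close[1]
// 2nd-order accurate: f'(x) ≈ (3*f[0] - 4*f[1] + f[2]) / 2
d2 = (3 * close - 4 * close[1] + close[2]) / 2
plot(d1, "1st-order backward difference", color.orange)
plot(d2, "2nd-order backward difference", color.teal)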
Happy Trading/Developing!
Bayesian Price Projection Model [Pinescriptlabs]📊 Dynamic Price Projection Algorithm 📈
This algorithm combines **statistical calculations**, **technical analysis**, and **Bayesian theory** to forecast a future price while providing **uncertainty ranges** that represent upper and lower bounds. The calculations are designed to adjust projections by considering market **trends**, **volatility**, and the historical probabilities of reaching new highs or lows.
Here’s how it works:
🚀 Future Price Projection
A dynamic calculation estimates the future price based on three key elements:
1. **Trend**: Defines whether the market is predisposed to move up or down.
2. **Volatility**: Quantifies the magnitude of the expected change based on historical fluctuations.
3. **Time Factor**: Uses the logarithm of the projected period (`proyeccion_dias`) to adjust how time impacts the estimate.
🧠 **Bayesian Probabilistic Adjustment**
- Conditional probabilities are calculated using **Bayes' formula**:
P(A | B) = P(B | A) × P(A) / P(B)
This models future events using conditional information:
- **Probability of reaching a new all-time high** if the price is trending upward.
- **Probability of reaching a new all-time low** if the price is trending downward.
- These probabilities refine the future price estimate by considering:
- **Higher volatility** increases the likelihood of hitting extreme levels (highs/lows).
- **Market trends** influence the expected price movement direction.
🌟 **Volatility Calculation**
- Volatility is measured using the **ATR (Average True Range)** indicator with a 14-period window. This reflects the average amplitude of price fluctuations.
- To express volatility as a percentage, the ATR is normalized by dividing it by the closing price and multiplying it by 200.
- Volatility is then categorized into descriptive levels (e.g., **Very Low**, **Low**, **Moderate**, etc.) for better interpretation.
---
🎯 **Deviation Limits (Upper and Lower)**
- The upper and lower limits form a **projected range** around the estimated future price, providing a framework for uncertainty.
- These limits are calculated by adjusting the ATR using:
- A user-defined **multiplier** (`factor_desviacion`).
- **Bayesian probabilities** calculated earlier.
The **square root of the projected period** (`proyeccion_dias`), incorporating the principle that uncertainty grows over time (a simplified sketch follows).
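As a minimal Pine v5 illustration, with the trend term, probability handling, and scaling as simplified stand-ins rather than the author's exact formulas:
//@version=5
indicator("Projection band sketch", overlay=true)
proyeccion_dias = input.int(30, "Projection days", minval=2)
factor_desviacion = input.float(2.0, "Deviation multiplier")
atrVal = ta.atr(14)
// stand-in trend term: +1 bullish, -1 bearish
trend = ta.ema(close, 50) > ta.ema(close, 200) ? 1.0 : -1.0
// projected price: trend and normalized volatility scaled by the log of the horizon
proj = close * (1 + trend * (atrVal / close) * math.log(proyeccion_dias))
// uncertainty grows with the square root of the projected period
band = factor_desviacion * atrVal * math.sqrt(proyeccion_dias)
plot(proj, "Projected price", color.blue)
plot(proj + band, "Upper limit", color.green)
plot(proj - band, "Lower limit", color.red)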
🔍 **Interpreting the Model**
This can be seen as a **dynamic probabilistic model** that:
- Combines **technical analysis** (trends and ATR).
- Refines probabilities using **Bayesian theory**.
- Provides a **visual projection range** to help you understand potential future price movements and associated uncertainties.
⚡ Whether you're analyzing **volatile markets** or confirming **bullish/bearish scenarios**, this tool equips you with a robust, data-driven approach! 🚀
Market Trades PinescriptlabsThis algorithm is designed to emulate the true order book of exchanges by showing the quantity of transactions of an asset in real-time, while identifying patterns of high activity and volatility in the market through the analysis of volume and price movements. 📈 Below, I explain how to understand and use the information provided by the chart, along with the trades table:
Identification of High Activity Zones 🚀
The algorithm calculates the average volume and the rate of price change to detect areas with spikes in activity. This is visualized on the chart with labels "Volatility Spike Buy" and "Volatility Spike Sell":
Volatility Spike Buy: Indicates an unusual increase in volatility in the buying market, suggesting a potential surge in buying interest. 🟢
Volatility Spike Sell: Signals an increase in volatility in the selling market, which may indicate selling pressure or a sudden massive sell-off. 🔴
Market Trades Table 📋
The table provides a detailed view of the latest trades:
Price: Displays the price at which each trade was executed. 💵
Quantity (Traded): Indicates the amount of the asset traded. 💰
Type of Trade (Buy/Sell): Differentiates between buy (Buy) and sell (Sell) operations based on volume and strength. 🔄
Date and Time: Refers to the start of the calculated trading candle. ⏰
Recency: Identifies the most recent trade to facilitate tracking of current activity. 🔍
Analysis of Trade Imbalance ⚖️
The imbalance between buys and sells is calculated based on the volume of both. This indicator helps to understand whether the market has a tendency toward buying or selling, showing if there is greater strength on one side of the market.
A positive imbalance suggests more buying pressure. 📊
A negative imbalance indicates greater selling pressure. 📉
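Pine has no per-trade feed, so a sketch like the following approximates the buy/sell volume split from candle geometry before computing the imbalance. The split heuristic is an assumption for illustration, not the author's method:
//@version=5
indicator("Trade imbalance sketch")
// split candle volume into buy/sell parts by where the close sits in the bar's range
rng = high - low
buyVol = rng > 0 ? volume * (close - low) / rng : volume / 2
sellVol = volume - buyVol
// imbalance in [-1, 1]: positive = buying pressure, negative = selling pressure
imbalance = volume > 0 ? (buyVol - sellVol) / volume : 0.0
plot(imbalance, "Imbalance", imbalance >= 0 ? color.green : color.red, style=plot.style_columns)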
Volume Presentation
Visualizes the volume of buying and selling in the market, allowing the identification of buying or selling strength through the size of the volume candle. 🔍
RSI (Kernel Optimized) | Flux Charts💎 GENERAL OVERVIEW
Introducing our new KDE Optimized RSI Indicator! This indicator adds a new aspect to the well-known RSI indicator: with the help of the KDE (Kernel Density Estimation) algorithm, it estimates the probability that a candlestick will be a pivot. For more information about the process, please check the "HOW DOES IT WORK ?" section.
Features of the new KDE Optimized RSI Indicator :
A New Approach To Pivot Detection
Customizable KDE Algorithm
Realtime RSI & KDE Dashboard
Alerts For Possible Pivots
Customizable Visuals
❓ HOW TO INTERPRET THE KDE %
The KDE % is a critical metric that reflects how closely the current RSI aligns with the KDE (Kernel Density Estimation) array. In simple terms, it represents the likelihood that the current candlestick is forming a pivot point based on historical data patterns. A low percentage suggests a lower probability of the current candlestick being a pivot point; in these cases, price action is less likely to reverse, and existing trends may continue. At moderate levels, the possibility of a pivot increases, indicating potential trend shifts or consolidations. Traders should start monitoring closely for confirmation signals. An even higher KDE % suggests a strong likelihood that the current candlestick could form a pivot point, which could lead to a reversal or significant price movement. These points often align with overbought or oversold conditions in traditional RSI analysis, making them key moments for potential trade entry or exit.
📌 HOW DOES IT WORK ?
The RSI (Relative Strength Index) is a widely used oscillator among traders. It outputs a value between 0 - 100 and gives a glimpse of the current momentum of the price action. This indicator calculates the RSI for each candlestick and saves it into an array if the candlestick is a pivot. The low & high pivot RSI values are inserted into two different arrays. Then a KDE array is calculated for both the low and high pivot RSI arrays. Explaining the KDE might be too much for this write-up, but for a brief explanation, here are the steps:
1. Define the necessary options for the KDE function. These are : Bandwidth & Nº Steps, Array Range (Array Max - Array Min)
2. After that, create a density range array. The array has (steps * 2 - 1) elements and they are calculated by (arrMin + i * stepCount), i being the index.
3. Then, define a kernel function. This indicator has 3 different kernel distribution modes : Uniform, Gaussian and Sigmoid
4. Then, define a temporary value for the current element of KDE array.
5. For each element E in the pivot RSI array, add "kernel(densityRange.get(i) - E, 1.0 / bandwidth)" to the temporary value.
6. Add (1.0 / arrSize) * the temporary value to the KDE array.
Then the prefix sum array of the KDE array is calculated. For each candlestick, the index closest to its RSI value in the KDE array is found using binary search. Then, for the low pivot KDE calculation, the sum of KDE values from the found index to the max index is calculated. For the high pivot KDE, the sum from 0 to the found index is used. Then, if the high or low KDE value is greater than the activation threshold determined in the settings, a bearish or bullish arrow is plotted after bar confirmation, respectively. The arrows are drawn as long as the KDE value of the current candlestick is greater than the threshold. When the KDE value is out of the threshold, a less transparent arrow is drawn, indicating a possible pivot point.
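Following those steps literally, a Pine v5 sketch of the KDE construction with a Gaussian kernel looks like the code below. The bandwidth, bin count, and pivot length are illustrative values:
//@version=5
indicator("KDE sketch")
// Gaussian kernel
kernel(float u, float invBw) =>
    float z = u * invBw
    math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
// KDE of `data` evaluated on a grid of (steps * 2 - 1) points; assumes data is non-empty
kde(float[] data, float bandwidth, int steps) =>
    int n = array.size(data)
    float arrMin = array.min(data)
    float stepCount = (array.max(data) - arrMin) / steps
    float[] dens = array.new_float(0)
    for i = 0 to steps * 2 - 2
        float x = arrMin + i * stepCount
        float d = 0.0
        for j = 0 to n - 1
            d += kernel(x - array.get(data, j), 1.0 / bandwidth)
        array.push(dens, d / n)
    dens
// demo: collect RSI values at high pivots, then evaluate their density
r = ta.rsi(close, 14)
var float[] highPivotRsi = array.new_float()
if not na(ta.pivothigh(r, 21, 21))
    array.push(highPivotRsi, r[21])
float firstBin = na
if array.size(highPivotRsi) > 10
    firstBin := array.get(kde(highPivotRsi, 2.71, 40), 0)
plot(firstBin, "KDE at grid start")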
🚩 UNIQUENESS
This indicator combines RSI & KDE Algorithm to get a foresight of possible pivot points. Pivot points are important entry, confirmation and exit points for traders. But to their nature, they can be only detected after more candlesticks are rendered after them. The purpose of this indicator is to alert the traders of possible pivot points using KDE algorithm right away when they are confirmed. The indicator also has a dashboard for realtime view of the current RSI & Bullish or Bearish KDE value. You can fully customize the KDE algorithm and set up alerts for pivot detection.
⚙️ SETTINGS
1. RSI Settings
RSI Length -> The amount of bars taken into account for RSI calculation.
Source -> The source value for RSI calculation.
2. Pivots
Pivot Lengths -> Pivot lengths for both high & low pivots. For example, if this value is set to 21; 21 bars before AND 21 bars after a candlestick must be higher for a candlestick to be a low pivot.
3. KDE
Activation Threshold -> This setting determines the amount of arrows shown. Higher options will result in more arrows being rendered.
Kernel -> The kernel function as explained in the upper section.
Bandwidth -> The bandwidth variable as explained in the upper section. The smoothness of the KDE function is tied to this setting.
Nº Bins -> The Nº Steps variable as explained in the upper section. It determines the precision of the KDE algorithm.
Uptrick: Dynamic AMA RSI Indicator
**Overview:**
The **Uptrick: Dynamic AMA RSI Indicator** is an advanced technical analysis tool designed for traders who seek to optimize their trading strategies by combining adaptive moving averages with the Relative Strength Index (RSI). This indicator dynamically adjusts to market conditions, offering a nuanced approach to trend detection and momentum analysis. By leveraging the Adaptive Moving Average (AMA) and Fast Adaptive Moving Average (FAMA), along with RSI-based overbought and oversold signals, traders can better identify entry and exit points with higher precision and reduced noise.
**Key Components:**
1. **Source Input:**
- The source input is the price data that forms the basis of all calculations. Typically set to the closing price, traders can customize this to other price metrics such as open, high, low, or even the output of another indicator. This flexibility allows the **Uptrick** indicator to be tailored to a wide range of trading strategies.
2. **Adaptive Moving Average (AMA):**
- The AMA is a moving average that adapts its sensitivity based on the dominant market cycle. This adaptation allows the AMA to respond swiftly to significant price movements while smoothing out minor fluctuations, making it particularly effective in trending markets. The AMA adjusts its responsiveness dynamically using a calculated phase adjustment from the dominant cycle, ensuring it remains responsive to the current market environment without being overly reactive to market noise.
3. **Fast Adaptive Moving Average (FAMA):**
- The FAMA is a more sensitive version of the AMA, designed to react faster to price changes. It serves as a signal line in the crossover strategy, highlighting shorter-term trends. The interaction between the AMA and FAMA forms the core of the signal generation, with crossovers between these lines indicating potential buy or sell opportunities.
4. **Relative Strength Index (RSI):**
- The RSI is a momentum oscillator that measures the speed and change of price movements, providing insights into whether an asset is overbought or oversold. In the **Uptrick** indicator, the RSI is used to confirm the validity of crossover signals between the AMA and FAMA, adding an additional layer of reliability to the trading signals.
**Indicator Logic:**
1. **Dominant Cycle Calculation:**
- The indicator starts by calculating the dominant market cycle using a smoothed price series. This involves applying exponential moving averages to a series of price differences, extracting cycle components, and determining the instantaneous phase of the cycle. This phase is then adjusted to provide a phase adjustment factor, which plays a critical role in determining the adaptive alpha.
2. **Adaptive Alpha Calculation:**
- The adaptive alpha, a key feature of the AMA, is computed based on the fast and slow limits set by the trader. This alpha is clamped within these limits to ensure the AMA remains appropriately sensitive to market conditions. The dynamic adjustment of alpha allows the AMA to be highly responsive in volatile markets and more conservative in stable markets.
3. **Crossover Detection:**
- The indicator generates trading signals based on crossovers between the AMA and FAMA:
- **CrossUp:** When the AMA crosses above the FAMA, it indicates a potential bullish trend, suggesting a buy opportunity.
- **CrossDown:** When the AMA crosses below the FAMA, it signals a potential bearish trend, indicating a sell opportunity.
4. **RSI Confirmation:**
- To enhance the reliability of these crossover signals, the indicator uses the RSI to confirm overbought and oversold conditions (a simplified sketch follows this list):
- **Buy Signal:** A buy signal is generated only when the AMA crosses above the FAMA and the RSI confirms an oversold condition, ensuring that the signal aligns with a momentum reversal from a low point.
- **Sell Signal:** A sell signal is triggered when the AMA crosses below the FAMA and the RSI confirms an overbought condition, indicating a momentum reversal from a high point.
5. **Signal Management:**
- To prevent signal redundancy during strong trends, the indicator tracks the last generated signal (buy or sell) and ensures that the next signal is only issued when there is a genuine reversal in trend direction.
6. **Signal Visualization:**
- **Buy Signals:** The indicator plots a "BUY" label below the bar when a buy signal is generated, using a green color to clearly mark the entry point.
- **Sell Signals:** A "SELL" label is plotted above the bar when a sell signal is detected, marked in red to indicate an exit or shorting opportunity.
- **Bar Coloring (Optional):** Traders have the option to enable bar coloring, where green bars indicate a bullish trend (AMA above FAMA) and red bars indicate a bearish trend (AMA below FAMA), providing a visual representation of the market’s direction.
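The sketch below is a stripped-down Pine v5 rendering of the signal logic in steps 3-4. Plain EMAs stand in for the true adaptive AMA/FAMA pair, so it mirrors only the crossover-plus-RSI structure, not the adaptive engine:
//@version=5
indicator("Crossover + RSI confirmation sketch", overlay=true)
ama = ta.ema(close, 20)  // stand-in for the adaptive moving average
fama = ta.ema(ama, 10)   // stand-in for the fast adaptive moving average
r = ta.rsi(close, 14)
buySignal = ta.crossover(ama, fama) and r < 30    // cross up + oversold confirmation
sellSignal = ta.crossunder(ama, fama) and r > 70  // cross down + overbought confirmation
plotshape(buySignal, "BUY", shape.labelup, location.belowbar, color.green, text="BUY")
plotshape(sellSignal, "SELL", shape.labeldown, location.abovebar, color.red, text="SELL")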
**Customization Options:**
- **Source:** Traders can select the price data input that best suits their strategy (e.g., close, open, high, low, or custom indicators).
- **Fast Limit:** Adjustable sensitivity for the fast response of the AMA, allowing traders to tailor the indicator to different market conditions.
- **Slow Limit:** Sets the slower boundary for the AMA’s sensitivity, providing stability in less volatile markets.
- **RSI Length:** The period for the RSI calculation can be adjusted to fit different trading timeframes.
- **Overbought/Oversold Levels:** These thresholds can be customized to define the RSI levels that trigger buy or sell confirmations.
- **Enable Bar Colors:** Traders can choose whether to enable bar coloring based on the AMA/FAMA relationship, enhancing visual clarity.
**How Different Traders Can Use the Indicator:**
1. **Day Traders:**
- **Uptrick: Dynamic AMA RSI Indicator** is highly effective for day traders who need to make quick decisions in fast-moving markets. The adaptive nature of the AMA and FAMA allows the indicator to respond rapidly to intraday price swings. Day traders can use the buy and sell signals generated by the crossover and RSI confirmation to time their entries and exits with greater precision, minimizing exposure to false signals often prevalent in high-frequency trading environments.
2. **Swing Traders:**
- Swing traders can benefit from the indicator’s ability to identify and confirm trend reversals over several days or weeks. By adjusting the RSI length and sensitivity limits, swing traders can fine-tune the indicator to catch longer-term price movements, helping them to ride trends and maximize profits over medium-term trades. The dual confirmation of crossovers with RSI ensures that swing traders enter trades that have a higher probability of success.
3. **Position Traders:**
- For position traders who hold trades over longer periods, the **Uptrick** indicator offers a reliable method to stay in trades that align with the dominant trend while avoiding premature exits. By adjusting the slow limit and extending the RSI length, position traders can smooth out the indicator’s sensitivity, allowing them to focus on major market shifts rather than short-term volatility. The bar coloring feature also provides a clear visual indication of the overall trend, aiding in trade management decisions.
4. **Scalpers:**
- Scalpers, who seek to profit from small price movements, can use the fast responsiveness of the FAMA in conjunction with the RSI to identify micro-trends within larger market moves. The indicator’s ability to adapt quickly to changing conditions makes it a valuable tool for scalpers looking to execute numerous trades in a short period, capturing profits from minor price fluctuations while avoiding prolonged exposure.
5. **Algorithmic Traders:**
- Algorithmic traders can incorporate the **Uptrick** indicator into automated trading systems. The precise crossover signals combined with RSI confirmation provide clear and actionable rules that can be coded into algorithms. The adaptive nature of the indicator ensures that it can be used across different market conditions and timeframes, making it a versatile component of algorithmic strategies.
**Usage:**
The **Uptrick: Dynamic AMA RSI Indicator** is a versatile tool that can be integrated into various trading strategies, from short-term day trading to long-term investing. Its ability to adapt to changing market conditions and provide clear buy and sell signals makes it an invaluable asset for traders seeking to improve their trading performance. Whether used as a standalone indicator or in conjunction with other technical tools, **Uptrick** offers a dynamic approach to market analysis, helping traders to navigate the complexities of financial markets with greater confidence.
**Conclusion:**
The **Uptrick: Dynamic AMA RSI Indicator** offers a comprehensive and adaptable solution for traders across different styles and timeframes. By combining the strengths of adaptive moving averages with RSI confirmation, it delivers robust signals that help traders capitalize on market trends while minimizing the risk of false signals. This indicator is a powerful addition to any trader’s toolkit, enabling them to make informed decisions with greater precision and confidence. Whether you're a day trader, swing trader, or long-term investor, the **Uptrick** indicator can enhance your trading strategy and improve your market outcomes.
ICT Power Of Three | Flux Charts💎 GENERAL OVERVIEW
Introducing our new ICT Power Of Three Indicator! This indicator is built around ICT's "Power Of Three" strategy, which makes use of three key smart money concepts: Accumulation, Manipulation and Distribution. Each step is explained in detail within this write-up. For more information about the process, check the "HOW DOES IT WORK" section.
Features of the new ICT Power Of Three Indicator :
Implementation of ICT's Power Of Three Strategy
Different Algorithm Modes
Customizable Execution Settings
Customizable Backtesting Dashboard
Alerts for Buy, Sell, TP & SL Signals
📌 HOW DOES IT WORK ?
The "Power Of Three" comes from these three keywords "Accumulation, Manipulation and Distribution". Here is a brief explanation of each keyword :
Accumulation -> Accumulation phase is when the smart money accumulate their positions in a fixed range. This phase indicates price stability, generally meaning that the price constantly switches between up & down trend between a low and a high pivot point. When the indicator detects an accumulation zone, the Power Of Three strategy begins.
Manipulation -> When the smart money needs to increase their position sizes, they need retail traders' positions for liquidity. So, they manipulate the market into the opposite direction of their intended direction. This will result in retail traders opening positions the way that the smart money intended them to do, creating liquidity. After this step, the real move that the smart money intended begins.
Distribution -> This is when the real intention of the smart money comes into action. With the new liquidity thanks to the manipulation phase, the smart money add their positions towards the opposite direction of the retail mindset. The purpose of this indicator is to detect the accumulation and manipulation phases, and help the trader move towards the same direction as the smart money for their trades.
Detection Methods Of The Indicator :
Accumulation -> The indicator detects accumulation zones as explained step-by-step :
1. Draw two lines from the lowest point and the highest point of the latest X bars.
2. If the (high line - low line) is lower than Average True Range (ATR) * accumulationConstant
3. After the condition is validated, an accumulation zone is detected. The accumulation zone will be invalidated and manipulation phase will begin when the range is broken.
Manipulation -> If the accumulation range is broken, check if the current bar closes / wicks above the (high line + ATR * manipulationConstant) or below the (low line - ATR * manipulationConstant). If the condition is met, the indicator detects a manipulation zone.
Distribution -> The purpose of this indicator is to try to foresee the distribution zone, so instead of detecting it, once the manipulation zone is detected the indicator automatically creates a "shadow" distribution zone in the opposite direction of the freshly detected manipulation zone. This shadow distribution zone comes with a take-profit and stop-loss layout, customizable by the trader in the settings.
The X bars, accumulationConstant and manipulationConstant are subject to change with the "Algorithm Mode" setting. Read the "Settings" section for more information.
This indicator follows these steps and informs you step by step by plotting them on your chart.
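Translated into Pine v5, the detection logic reduces to something like the sketch below. The constants are placeholders; the real indicator sets them per algorithm mode:
//@version=5
indicator("Accumulation / manipulation sketch", overlay=true)
xBars = input.int(20, "Lookback bars (X)")
accConst = input.float(1.5, "Accumulation constant")  // placeholder value
manConst = input.float(0.5, "Manipulation constant")  // placeholder value
atrVal = ta.atr(14)
hiLine = ta.highest(high, xBars)
loLine = ta.lowest(low, xBars)
// accumulation: the recent range is tight relative to ATR
inAccumulation = (hiLine - loLine) < atrVal * accConst
// manipulation: close breaks beyond the old range by ATR * manConst
breakUp = inAccumulation[1] and close > hiLine[1] + atrVal * manConst
breakDown = inAccumulation[1] and close < loLine[1] - atrVal * manConst
bgcolor(inAccumulation ? color.new(color.yellow, 85) : na)
plotshape(breakUp, "Manipulation up", shape.triangleup, location.belowbar, color.red)
plotshape(breakDown, "Manipulation down", shape.triangledown, location.abovebar, color.red)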
🚩UNIQUENESS
This indicator is an all-in-one suite for the ICT's Power Of Three concept. It's capable of plotting the strategy, giving signals, a backtesting dashboard and alerts feature. Different and customizable algorithm modes will help the trader fine-tune the indicator for the asset they are currently trading. The backtesting dashboard allows you to see how your settings perform in the current ticker. You can also set up alerts to get informed when the strategy is executable for different tickers.
⚙️SETTINGS
1. General Configuration
Algorithm Mode -> The indicator offers 3 different detection algorithm modes according to your needs. Here is the explanation of each mode.
a) Small Manipulation
This mode has the default bar length for the accumulation detection, but a lower manipulation constant, meaning that slighter imbalances in the price action can be detected as manipulation. This setting can be useful on tickers that have lower liquidity, thus can be manipulated easier.
b) Big Manipulation
This mode has the default bar length for the accumulation detection, but a higher manipulation constant, meaning that heavier imbalances on the price action are required in order to detect manipulation zones. This setting can be useful on tickers that have higher liquidity, thus can be manipulated harder.
c) Short Accumulation
This mode has a ~70% lower bar length requirement for accumulation zone detection, and the default manipulation constant. This setting can be useful on tickers that are highly volatile and do not enter accumulation phases too often.
Breakout Method -> If "Close" is selected, bar close price will be taken into calculation when Accumulation & Manipulation zone invalidation. If "Wick" is selected, a wick will be enough to validate the corresponding zone.
2. TP / SL
TP / SL Method -> If "Fixed" is selected, you can adjust the TP / SL ratios from the settings below. If "Dynamic" is selected, the TP / SL zones will be auto-determined by the algorithm.
Risk -> The risk you're willing to take if the "Dynamic" TP / SL method is selected. Higher risk usually means a better winrate at the cost of losing more if the strategy fails. This setting has a crucial effect on the indicator's performance: different tickers have different volatility, so the indicator may perform better when this setting is correctly adjusted.
3. Visuals
Show Zones -> Enables / Disables rendering of Accumulation (yellow) and Manipulation (red) zones.
KNN OscillatorOverview
The KNN Oscillator is an advanced technical analysis tool designed to help traders identify potential trend reversals and market momentum. Using the K-Nearest Neighbors (KNN) algorithm, this oscillator normalizes KNN values to create a dynamic and responsive indicator. The oscillator line changes color to reflect the market sentiment, providing clear visual cues for trading decisions.
Key Features
Dynamic Color Oscillator: The line changes color based on the oscillator value – green for positive, red for negative, and grey for neutral.
Advanced KNN Algorithm: Utilizes the K-Nearest Neighbors algorithm for precise trend detection.
Normalized Values: Ensures the oscillator values are normalized to align with the stock price range, making it applicable to various assets.
Easy Integration: Can be easily added to any TradingView chart for enhanced analysis.
How It Works
The KNN Oscillator leverages the K-Nearest Neighbors algorithm to calculate the average distance of the nearest neighbors over a specified period. These values are then normalized to match the stock price range, ensuring they are comparable across different assets. The oscillator value is derived by taking the difference between the normalized KNN values and the source price (a compact sketch of this computation follows the list below). The line's color changes dynamically to provide an immediate visual indication of the market's state:
Green: Positive values indicate upward momentum.
Red: Negative values indicate downward momentum.
Grey: Neutral values indicate a stable or consolidating market.
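A compact Pine v5 sketch of that pipeline follows. The distance metric and normalization are reasonable guesses at the described behavior, not the published code:
//@version=5
indicator("KNN oscillator sketch")
k = input.int(5, "Neighbors (K)", minval=1)
lookback = input.int(100, "Lookback", minval=10)
// mean absolute distance from the current close to its K nearest
// neighbors among the previous `lookback` closes
float[] dists = array.new_float(0)
for i = 1 to lookback
    array.push(dists, math.abs(close - close[i]))
array.sort(dists, order.ascending)
float knn = 0.0
for j = 0 to k - 1
    knn += array.get(dists, j)
knn := knn / k
// rescale the kNN series into the price range, then take the difference
knnMin = ta.lowest(knn, lookback)
knnMax = ta.highest(knn, lookback)
pMin = ta.lowest(close, lookback)
pMax = ta.highest(close, lookback)
norm = knnMax != knnMin ? pMin + (knn - knnMin) / (knnMax - knnMin) * (pMax - pMin) : pMin
osc = norm - close
plot(osc, "KNN Oscillator", osc > 0 ? color.green : osc < 0 ? color.red : color.gray)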
Usage Instructions
Trend Reversal Detection: Use the color changes to identify potential trend reversals. A shift from red to green suggests a bullish reversal, while a shift from green to red indicates a bearish reversal.
Momentum Analysis: The oscillator's value and color help gauge market momentum. Strong positive values (green) indicate strong upward momentum, while strong negative values (red) indicate strong downward momentum.
Market Sentiment: The dynamic color changes provide an easy-to-understand visual representation of market sentiment, helping traders make informed decisions quickly.
Confirmation Tool: Use the KNN Oscillator in conjunction with other technical indicators to confirm signals and improve the accuracy of your trades.
Scalability: Applicable to various timeframes and asset classes, making it a versatile tool for all types of traders.