Multi T3 Slopes [Loxx]
Multi T3 Slopes is an indicator that checks the slopes of 5 T3 Moving Averages (each with a different period) and adds them up to show the overall trend. To use this, watch for color changes between red and green: when only green shows (no red), the summed slope is bullish; when only red shows (no green), it is bearish. When red and green both show up, it's a sign of chop. A toy sketch of this slope-summing idea appears at the end of the T3 code sketch below.
What is the T3 moving average?
"Better Moving Averages" by Tim Tillson
November 1, 1998
Tim Tillson is a software project manager at Hewlett-Packard, with degrees in Mathematics and Computer Science. He has privately traded options and equities for 15 years.
Introduction
"Digital filtering includes the process of smoothing, predicting, differentiating, integrating, separation of signals, and removal of noise from a signal. Thus many people who do such things are actually using digital filters without realizing that they are; being unacquainted with the theory, they neither understand what they have done nor the possibilities of what they might have done."
This quote from R. W. Hamming applies to the vast majority of indicators in technical analysis. Moving averages, be they simple, weighted, or exponential, are lowpass filters; low frequency components in the signal pass through with little attenuation, while high frequencies are severely reduced.
"Oscillator" type indicators (such as MACD , Momentum, Relative Strength Index ) are another type of digital filter called a differentiator.
Tushar Chande has observed that many popular oscillators are highly correlated, which is sensible because they are trying to measure the rate of change of the underlying time series, i.e., are trying to be the first and second derivatives we all learned about in Calculus.
We use moving averages (lowpass filters) in technical analysis to remove the random noise from a time series, to discern the underlying trend or to determine prices at which we will take action. A perfect moving average would have two attributes:
It would be smooth, not sensitive to random noise in the underlying time series. Another way of saying this is that its derivative would not spuriously alternate between positive and negative values.
It would not lag behind the time series it is computed from. Lag, of course, produces late buy or sell signals that kill profits.
The only way one can compute a perfect moving average is to have knowledge of the future, and if we had that, we would buy one lottery ticket a week rather than trade!
Having said this, we can still improve on the conventional simple, weighted, or exponential moving averages. Here's how:
Two Interesting Moving Averages
We will examine two benchmark moving averages based on Linear Regression analysis.
In both cases, a Linear Regression line of length n is fitted to price data.
I call the first moving average ILRS, which stands for Integral of Linear Regression Slope. One simply integrates the slope of a linear regression line as it is successively fitted in a moving window of length n across the data, with the constant of integration being a simple moving average of the first n points. Put another way, the derivative of ILRS is the linear regression slope. Note that ILRS is not the same as an SMA (simple moving average) of length n, which is actually the midpoint of the linear regression line as it moves across the data.
We can measure the lag of moving averages with respect to a linear trend by computing how they behave when the input is a line with unit slope. Both SMA(n) and ILRS(n) have lag of n/2, but ILRS is much smoother than SMA.
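For concreteness, here is a minimal Python sketch of ILRS as described above (the helper names are mine, not from the article), with a quick check that its lag on a unit-slope line is about n/2:

```python
import numpy as np

def linreg_slope(window: np.ndarray) -> float:
    """Slope of the least-squares line fitted to one window of prices."""
    x = np.arange(len(window))
    return np.polyfit(x, window, 1)[0]   # polyfit returns [slope, intercept]

def ilrs(prices: np.ndarray, n: int) -> np.ndarray:
    """Integral of Linear Regression Slope.

    The constant of integration is the SMA of the first n points;
    each later bar adds the current rolling regression slope.
    """
    out = np.full(len(prices), np.nan)
    out[n - 1] = prices[:n].mean()                 # constant of integration
    for i in range(n, len(prices)):
        out[i] = out[i - 1] + linreg_slope(prices[i - n + 1 : i + 1])
    return out

# Lag check on a line with unit slope: ILRS(n) trails by roughly n/2 bars.
line = np.arange(100, dtype=float)
print(line[-1] - ilrs(line, 10)[-1])               # 4.5, i.e. about n/2
```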
Our second benchmark moving average is well known, called EPMA or End Point Moving Average. It is the endpoint of the linear regression line of length n as it is fitted across the data. EPMA hugs the data more closely than a simple or exponential moving average of the same length. The price we pay for this is that it is much noisier (less smooth) than ILRS, and it also has the annoying property that it overshoots the data when linear trends are present.
However, EPMA has a lag of 0 with respect to linear input! This makes sense because a linear regression line will fit linear input perfectly, and the endpoint of the LR line will be on the input line.
These two moving averages frame the tradeoffs that we are facing. On one extreme we have ILRS, which is very smooth and has considerable phase lag. EPMA has 0 phase lag, but is too noisy and overshoots. We would like to construct a better moving average which is as smooth as ILRS, but runs closer to where EPMA lies, without the overshoot.
An easy way to attempt this is to split the difference, i.e., use (ILRS(n)+EPMA(n))/2. This will give us a moving average (call it IE/2) which runs in between the two, has phase lag of n/4, but still inherits considerable noise from EPMA. IE/2 is inspirational, however. Can we build something that is comparable, but smoother? Figure 1 shows ILRS, EPMA, and IE/2.
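A matching sketch of EPMA and the IE/2 compromise, under the same assumptions (ie2() reuses the ilrs() helper from the previous sketch, so run them together):

```python
import numpy as np

def epma(prices: np.ndarray, n: int) -> np.ndarray:
    """End Point Moving Average: endpoint of the rolling length-n regression line."""
    out = np.full(len(prices), np.nan)
    x = np.arange(n)
    for i in range(n - 1, len(prices)):
        slope, intercept = np.polyfit(x, prices[i - n + 1 : i + 1], 1)
        out[i] = intercept + slope * (n - 1)   # regression value at the newest bar
    return out

def ie2(prices: np.ndarray, n: int) -> np.ndarray:
    """IE/2 = (ILRS(n) + EPMA(n)) / 2; ilrs() comes from the sketch above."""
    return (ilrs(prices, n) + epma(prices, n)) / 2.0
```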
Filter Techniques
Any thoughtful student of filter theory (or resolute experimenter) will have noticed that you can improve the smoothness of a filter by running it through itself multiple times, at the cost of increasing phase lag.
There is a complementary technique (called twicing by J.W. Tukey) which can be used to improve phase lag. If L stands for the operation of running data through a low pass filter, then twicing can be described by:
L' = L(time series) + L(time series - L(time series))
That is, we add a moving average of the difference between the input and the moving average to the moving average. This is algebraically equivalent to:
2L-L(L)
This is the Double Exponential Moving Average or DEMA, popularized by Patrick Mulloy in TASC (January/February 1994).
In our taxonomy, DEMA has some phase lag (although it exponentially approaches 0) and is somewhat noisy, comparable to the IE/2 indicator.
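Because the EMA is a linear filter, the twicing identity above can be checked numerically; this sketch verifies that L + L(x - L) and 2L - L(L) produce the same series:

```python
import numpy as np

def ema(x: np.ndarray, n: int) -> np.ndarray:
    """Exponential moving average with alpha = 2/(n+1)."""
    alpha = 2.0 / (n + 1)
    out = np.empty_like(x)
    out[0] = x[0]
    for i in range(1, len(x)):
        out[i] = alpha * x[i] + (1 - alpha) * out[i - 1]
    return out

x = np.cumsum(np.random.default_rng(0).normal(size=500))   # random-walk "price"
n = 20

twiced = ema(x, n) + ema(x - ema(x, n), n)   # L(x) + L(x - L(x))
dema   = 2 * ema(x, n) - ema(ema(x, n), n)   # 2L - L(L)
print(np.allclose(twiced, dema))             # True: the two forms agree
```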
We will use these two techniques to construct our better moving average, after we explore the first one a little more closely.
Fixing Overshoot
An n-day EMA has smoothing constant alpha=2/(n+1) and a lag of (n-1)/2.
Thus EMA(3) has lag 1, and EMA(11) has lag 5. Figure 2 shows that, if I am willing to incur 5 days of lag, I get a smoother moving average if I run EMA(3) through itself 5 times than if I just take EMA(11) once.
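A quick numerical check of those lag figures (reusing the ema() helper from the twicing sketch): on a unit-slope line, EMA(11) settles about 5 bars behind the input, and EMA(3) composed five times lags about the same amount while being smoother on noisy data.

```python
import numpy as np
# ema() as defined in the twicing sketch above

line = np.arange(200, dtype=float)       # unit-slope input

# Lag of EMA(11) against a linear trend: (11 - 1) / 2 = 5 bars.
print(line[-1] - ema(line, 11)[-1])      # ~5.0

# EMA(3) has lag 1; five passes accumulate ~5 bars of lag in total.
y = line.copy()
for _ in range(5):
    y = ema(y, 3)
print(line[-1] - y[-1])                  # ~5.0 as well
```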
This suggests that if EPMA and DEMA have 0 or low lag, why not run fast versions (e.g., DEMA(3)) through themselves many times to achieve a smooth result? The problem is that multiple runs through these filters increase their tendency to overshoot the data, giving an unusable result. This is because the amplitude response of DEMA and EPMA is greater than 1 at certain frequencies, giving a gain of much greater than 1 at these frequencies when run through themselves multiple times. Figure 3 shows DEMA(7) and EPMA(7) run through themselves 3 times. DEMA^3 has serious overshoot, and EPMA^3 is terrible.
The solution to the overshoot problem is to recall what we are doing with twicing:
DEMA(n) = EMA(n) + EMA(time series - EMA(n))
The second term is adding, in effect, a smooth version of the derivative to the EMA to achieve DEMA. The derivative term determines how hot the moving average's response to linear trends will be. We simply need to turn down the volume to achieve our basic building block:
EMA(n) + EMA(time series - EMA(n))*.7
This is algebraically the same as:
EMA(n)*1.7 - EMA(EMA(n))*.7
I have chosen .7 as my volume factor, but the general formula (which I call "Generalized DEMA") is:
GD(n,v) = EMA(n)*(1+v) - EMA(EMA(n))*v
where v ranges between 0 and 1. When v=0, GD is just an EMA, and when v=1, GD is DEMA. In between, GD is a cooler DEMA. By using a value for v less than 1 (I like .7), we cure the multiple DEMA overshoot problem, at the cost of accepting some additional phase delay. Now we can run GD through itself multiple times to define a new, smoother moving average, T3, that does not overshoot the data:
T3(n) = GD(GD(GD(n)))
In filter theory parlance, T3 is a six-pole non-linear Kalman filter. Kalman filters are ones which use the error (in this case, time series - EMA(n)) to correct themselves. In technical analysis, these are called Adaptive Moving Averages; they track the time series more aggressively when it is making large moves.
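Collecting the formulas above into a runnable Python sketch: gd() and t3() follow the GD(n,v) and T3(n) = GD(GD(GD(n))) definitions directly, and the last few lines show a toy version of the slope-summing idea behind Multi T3 Slopes (the five periods are illustrative assumptions, not the script's actual inputs).

```python
import numpy as np

def ema(x: np.ndarray, n: int) -> np.ndarray:
    """Exponential moving average with alpha = 2/(n+1)."""
    alpha = 2.0 / (n + 1)
    out = np.empty_like(x)
    out[0] = x[0]
    for i in range(1, len(x)):
        out[i] = alpha * x[i] + (1 - alpha) * out[i - 1]
    return out

def gd(x: np.ndarray, n: int, v: float = 0.7) -> np.ndarray:
    """Generalized DEMA: GD(n,v) = EMA(n)*(1+v) - EMA(EMA(n))*v."""
    e = ema(x, n)
    return e * (1 + v) - ema(e, n) * v

def t3(x: np.ndarray, n: int, v: float = 0.7) -> np.ndarray:
    """T3(n) = GD(GD(GD(n))): three passes of GD, smooth without overshoot."""
    return gd(gd(gd(x, n, v), n, v), n, v)

# Toy slope-sum in the spirit of Multi T3 Slopes: sum the sign of each
# T3's one-bar slope across several periods (periods assumed, not the
# script's actual defaults).
prices = np.cumsum(np.random.default_rng(1).normal(size=500)) + 100
periods = [8, 13, 21, 34, 55]
slopes = sum(np.sign(np.diff(t3(prices, p))) for p in periods)
print(slopes[-5:])  # +5 = all five T3s rising, -5 = all falling, mixed = chop
```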
Included
Signals: long, short, continuation long, continuation short.
Alerts
Bar coloring
Loxx's expanded source types
T3 Volatility Quality Index (VQI) w/ DSL & Pips Filtering [Loxx]
T3 Volatility Quality Index (VQI) w/ DSL & Pips Filtering is a VQI indicator that uses T3 smoothing and discontinued signal lines to determine breakouts and breakdowns. It also allows filtering by pips.***
What is the Volatility Quality Index (VQI)?
The idea behind the Volatility Quality Index is to point out the difference between bad and good volatility in order to identify better trade opportunities in the market. This forex indicator works using the True Range algorithm in combination with the open, close, high, and low prices.
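The description above does not pin down the exact VQI formula. Purely for illustration, here is one formulation commonly seen in public MQL ports of the VQI; treat every line of it as an assumption rather than this script's actual code:

```python
import numpy as np

def vqi(o: np.ndarray, h: np.ndarray, l: np.ndarray, c: np.ndarray) -> np.ndarray:
    """One common VQI formulation (assumed from public MQL ports):
    a per-bar 'volatility quality' term accumulated into a running index."""
    n = len(c)
    vq = np.zeros(n)
    for i in range(1, n):
        tr = max(h[i], c[i - 1]) - min(l[i], c[i - 1])   # True Range
        rng = h[i] - l[i]
        if tr > 0 and rng > 0:
            q = abs((c[i] - c[i - 1]) / tr + (c[i] - o[i]) / rng) * 0.5
            vq[i] = q * ((c[i] - c[i - 1]) + (c[i] - o[i])) * 0.5
    # The script then T3-smooths this index and applies DSL lines (not shown).
    return np.cumsum(vq)
```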
What are DSL (Discontinued Signal Lines)?
A lot of indicators use signal lines to make it easier to determine the trend (or some desired state of the indicator). The idea of a signal line is simple: by comparing a value to its smoothed (slightly lagging) state, you get a read on the current momentum/state.
A discontinued signal line inherits that simple signal-line idea and extends it: instead of one signal line, there are multiple lines depending on the current value of the indicator.
The "signal" line is calculated the following way:
When a certain level is crossed in the desired direction, the EMA of that value is calculated for the desired signal line.
When that level is crossed in the opposite direction, the previous "signal" line value is simply "inherited" and becomes a kind of level.
This way it becomes a combination of signal lines and levels that captures the good from both methods.
In simple terms, DSL takes the concept of a signal line and betters it by inheriting the previous signal line's value and making it a level, as sketched below.
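Here is a minimal sketch of that logic, assuming a fixed crossing level (for example an oscillator midline) and EMA smoothing; the function name and parameters are illustrative:

```python
import numpy as np

def dsl_lines(values: np.ndarray, level: float, n: int):
    """Discontinued signal lines, per the description above (a sketch).

    While `values` is above `level`, the up-line follows an EMA of the value
    and the down-line is held flat (and vice versa below the level), so each
    line alternates between 'signal line' and 'level' behavior.
    """
    values = np.asarray(values, dtype=float)
    alpha = 2.0 / (n + 1)
    up = np.empty_like(values)
    dn = np.empty_like(values)
    up[0] = dn[0] = values[0]
    for i in range(1, len(values)):
        if values[i] > level:   # crossed / staying in the desired direction
            up[i] = up[i - 1] + alpha * (values[i] - up[i - 1])
            dn[i] = dn[i - 1]   # inherited: frozen as a level
        else:
            up[i] = up[i - 1]
            dn[i] = dn[i - 1] + alpha * (values[i] - dn[i - 1])
    return up, dn
```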
What is the T3 moving average?
See the full write-up ("Better Moving Averages" by Tim Tillson) under Multi T3 Slopes above.
Included
Signals
Alerts
Related indicators
Zero-line Volatility Quality Index (VQI)
Volatility Quality Index w/ Pips Filtering
Variety Moving Average Waddah Attar Explosion (WAE)
***This indicator is tuned to Forex. If you want to make it useful for other tickers, you must change the pip filtering value to match the asset. For BTC, for example, you will likely need a value of 10,000 or more for the pips filter.
T3 PPO [Loxx]
T3 PPO is a percentage price oscillator indicator using the T3 moving average. This indicator is used to spot reversals. Dark red is upward price exhaustion, dark green is downward price exhaustion.
What is Percentage Price Oscillator (PPO)?
The percentage price oscillator (PPO) is a technical momentum indicator that shows the relationship between two moving averages in percentage terms. The moving averages are a 26-period and 12-period exponential moving average (EMA).
The PPO is used to compare asset performance and volatility, spot divergence that could lead to price reversals, generate trade signals, and help confirm trend direction.
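For reference, the classic PPO calculation described above, sketched with plain EMAs; the T3 PPO presumably substitutes T3 averages for the EMAs, which this sketch does not do:

```python
import numpy as np

def ema(x: np.ndarray, n: int) -> np.ndarray:
    """Exponential moving average with alpha = 2/(n+1)."""
    alpha = 2.0 / (n + 1)
    out = np.empty_like(x)
    out[0] = x[0]
    for i in range(1, len(x)):
        out[i] = alpha * x[i] + (1 - alpha) * out[i - 1]
    return out

def ppo(close: np.ndarray, fast: int = 12, slow: int = 26) -> np.ndarray:
    """Percentage Price Oscillator: fast vs. slow EMA, in percent of the slow."""
    slow_ema = ema(close, slow)
    return 100.0 * (ema(close, fast) - slow_ema) / slow_ema
```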
What is the T3 moving average?
See the full write-up ("Better Moving Averages" by Tim Tillson) under Multi T3 Slopes above.
VHF-Adaptive T3 iTrend [Loxx]
VHF-Adaptive T3 iTrend is an iTrend indicator with T3 smoothing and a Vertical Horizontal Filter adaptive period input. iTrend is used to determine where the trend starts and ends. You'll notice that the noise filter on this one is extreme; adjust the period inputs to suit your taste and your backtest requirements. This is also useful for scalping lower timeframes. Enjoy!
What is VHF Adaptive Period?
Vertical Horizontal Filter (VHF) was created by Adam White to identify trending and ranging markets. VHF measures the level of trend activity, similar to ADX/DI. The Vertical Horizontal Filter does not, itself, generate trading signals, but determines whether signals are taken from trend or momentum indicators. Using this trend information, one is then able to derive an average cycle length.
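A sketch of the standard VHF calculation (range of closes divided by the sum of absolute close-to-close changes); how the script maps the VHF reading to an adaptive period is its own detail and is not reproduced here:

```python
import numpy as np

def vhf(close: np.ndarray, n: int) -> np.ndarray:
    """Vertical Horizontal Filter over a rolling length-n window.

    High readings suggest a trending market, low readings a ranging one.
    """
    out = np.full(len(close), np.nan)
    diffs = np.abs(np.diff(close))               # |close[t] - close[t-1]|
    for i in range(n - 1, len(close)):
        window = close[i - n + 1 : i + 1]
        noise = diffs[i - n + 1 : i].sum()       # total bar-to-bar movement
        if noise > 0:
            out[i] = (window.max() - window.min()) / noise
    return out
```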
What is the T3 moving average?
See the full write-up ("Better Moving Averages" by Tim Tillson) under Multi T3 Slopes above.
Included
Bar coloring
Alerts
Signals
Loxx's Expanded Source Types
STD-Adaptive T3 Channel w/ Ehlers Swiss Army Knife Mod. [Loxx]
STD-Adaptive T3 Channel w/ Ehlers Swiss Army Knife Mod. is an adaptive T3 indicator that uses standard-deviation adaptivity and the Ehlers Swiss Army Knife indicator to adjust the alpha value of the T3 calculation. This helps identify trends and reduce noise. In addition, I've included a Keltner Channel to show reversal/exhaustion zones.
What is the Swiss Army Knife Indicator?
John Ehlers explains the calculation here: www.mesasoftware.com
What is the T3 moving average?
See the full write-up ("Better Moving Averages" by Tim Tillson) under Multi T3 Slopes above.
Included:
Bar coloring
Signals
Alerts
Loxx's Expanded Source Types
T3 Velocity [Loxx]
T3 Velocity is a simple velocity indicator based on the T3 moving average that uses gradient colors to better identify trends.
What is the T3 moving average?
See the full write-up ("Better Moving Averages" by Tim Tillson) under Multi T3 Slopes above.
Included:
Bar coloring
Signals
Alerts
Loxx's Expanded Source Types
R-squared Adaptive T3 w/ DSL [Loxx]
R-squared Adaptive T3 w/ DSL is the R-squared Adaptive T3 indicator with Discontinued Signal Lines added to reduce noise and thereby increase signal accuracy. This adaptation makes the indicator friendly for lower-timeframe scalping.
What is R-squared Adaptive?
One tool available for forecasting the trendiness of a breakout is the coefficient of determination (R-squared), a statistical measurement.
R-squared indicates the linear strength between the security's price (the Y-axis) and time (the X-axis). R-squared is the percentage of squared error that the linear regression can eliminate if it were used as the predictor instead of the mean value. If R-squared were 0.99, then the linear regression would eliminate 99% of the error for prediction versus predicting closing prices using a simple moving average.
R-squared is used here to derive a T3 factor that modifies price before passing it through a six-pole non-linear Kalman filter.
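A sketch of the rolling R-squared of price against time, as described; the exact mapping from R-squared to the T3 factor is the script's own detail, so it is only noted as an assumption in the comment:

```python
import numpy as np

def rolling_r_squared(close: np.ndarray, n: int) -> np.ndarray:
    """Coefficient of determination of price vs. time over a rolling window."""
    out = np.full(len(close), np.nan)
    x = np.arange(n)
    for i in range(n - 1, len(close)):
        y = close[i - n + 1 : i + 1]
        r = np.corrcoef(x, y)[0, 1]   # Pearson correlation of price with time
        out[i] = r * r                # R-squared = squared correlation
    return out

# Hypothetical adaptation step (assumed, not the script's verified logic):
# scale the T3 volume factor v by trend strength, e.g. v = 0.7 * r_squared.
```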
What is the T3 moving average?
Better Moving Averages Tim Tillson
November 1, 1998
Tim Tillson is a software project manager at Hewlett-Packard, with degrees in Mathematics and Computer Science. He has privately traded options and equities for 15 years.
Introduction
"Digital filtering includes the process of smoothing, predicting, differentiating, integrating, separation of signals, and removal of noise from a signal. Thus many people who do such things are actually using digital filters without realizing that they are; being unacquainted with the theory, they neither understand what they have done nor the possibilities of what they might have done."
This quote from R. W. Hamming applies to the vast majority of indicators in technical analysis . Moving averages, be they simple, weighted, or exponential, are lowpass filters; low frequency components in the signal pass through with little attenuation, while high frequencies are severely reduced.
"Oscillator" type indicators (such as MACD , Momentum, Relative Strength Index ) are another type of digital filter called a differentiator.
Tushar Chande has observed that many popular oscillators are highly correlated, which is sensible because they are trying to measure the rate of change of the underlying time series, i.e., are trying to be the first and second derivatives we all learned about in Calculus.
We use moving averages (lowpass filters) in technical analysis to remove the random noise from a time series, to discern the underlying trend or to determine prices at which we will take action. A perfect moving average would have two attributes:
It would be smooth, not sensitive to random noise in the underlying time series. Another way of saying this is that its derivative would not spuriously alternate between positive and negative values.
It would not lag behind the time series it is computed from. Lag, of course, produces late buy or sell signals that kill profits.
The only way one can compute a perfect moving average is to have knowledge of the future, and if we had that, we would buy one lottery ticket a week rather than trade!
Having said this, we can still improve on the conventional simple, weighted, or exponential moving averages. Here's how:
Two Interesting Moving Averages
We will examine two benchmark moving averages based on Linear Regression analysis.
In both cases, a Linear Regression line of length n is fitted to price data.
I call the first moving average ILRS, which stands for Integral of Linear Regression Slope. One simply integrates the slope of a linear regression line as it is successively fitted in a moving window of length n across the data, with the constant of integration being a simple moving average of the first n points. Put another way, the derivative of ILRS is the linear regression slope. Note that ILRS is not the same as a SMA ( simple moving average ) of length n, which is actually the midpoint of the linear regression line as it moves across the data.
We can measure the lag of moving averages with respect to a linear trend by computing how they behave when the input is a line with unit slope. Both SMA (n) and ILRS(n) have lag of n/2, but ILRS is much smoother than SMA .
Our second benchmark moving average is well known, called EPMA or End Point Moving Average. It is the endpoint of the linear regression line of length n as it is fitted across the data. EPMA hugs the data more closely than a simple or exponential moving average of the same length. The price we pay for this is that it is much noisier (less smooth) than ILRS, and it also has the annoying property that it overshoots the data when linear trends are present.
However, EPMA has a lag of 0 with respect to linear input! This makes sense because a linear regression line will fit linear input perfectly, and the endpoint of the LR line will be on the input line.
These two moving averages frame the tradeoffs that we are facing. On one extreme we have ILRS, which is very smooth and has considerable phase lag. EPMA has 0 phase lag, but is too noisy and overshoots. We would like to construct a better moving average which is as smooth as ILRS, but runs closer to where EPMA lies, without the overshoot.
An easy way to attempt this is to split the difference, i.e. use (ILRS(n)+EPMA(n))/2. This will give us a moving average (call it IE/2) which runs in between the two, has phase lag of n/4, but still inherits considerable noise from EPMA. IE/2 is inspirational, however. Can we build something that is comparable, but smoother? Figure 1 shows ILRS, EPMA, and IE/2.
Filter Techniques
Any thoughtful student of filter theory (or resolute experimenter) will have noticed that you can improve the smoothness of a filter by running it through itself multiple times, at the cost of increasing phase lag.
There is a complementary technique (called twicing by J.W. Tukey) which can be used to improve phase lag. If L stands for the operation of running data through a low pass filter, then twicing can be described by:
L' = L(time series) + L(time series - L(time series))
That is, we add a moving average of the difference between the input and the moving average to the moving average. This is algebraically equivalent to:
2L-L(L)
This is the Double Exponential Moving Average or DEMA, popularized by Patrick Mulloy in TASC (January/February 1994).
In our taxonomy, DEMA has some phase lag (although it exponentially approaches 0) and is somewhat noisy, comparable to the IE/2 indicator.
We will use these two techniques to construct our better moving average, after we explore the first one a little more closely.
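As a concrete reference before moving on, here is a minimal Python sketch of twicing applied to an EMA — the DEMA construction described above. The function names are my own illustration, not the article's code:

```python
import numpy as np

def ema(x, n):
    # Exponential moving average with smoothing constant alpha = 2/(n+1),
    # seeded on the first sample.
    alpha = 2.0 / (n + 1)
    out = np.empty(len(x))
    out[0] = x[0]
    for i in range(1, len(x)):
        out[i] = alpha * x[i] + (1 - alpha) * out[i - 1]
    return out

def twice(x, n):
    # Tukey twicing: L' = L(x) + L(x - L(x)), algebraically 2*L - L(L).
    # With L = EMA, this is exactly the DEMA described in the text.
    smoothed = ema(x, n)
    return smoothed + ema(x - smoothed, n)
```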
Fixing Overshoot
An n-day EMA has smoothing constant alpha=2/(n+1) and a lag of (n-1)/2.
Thus EMA(3) has lag 1, and EMA(11) has lag 5. Figure 2 shows that, if I am willing to incur 5 days of lag, I get a smoother moving average if I run EMA(3) through itself 5 times than if I just take EMA(11) once.
This suggests that if EPMA and DEMA have 0 or low lag, why not run fast versions (e.g., DEMA(3)) through themselves many times to achieve a smooth result? The problem is that multiple runs through these filters increase their tendency to overshoot the data, giving an unusable result. This is because the amplitude response of DEMA and EPMA is greater than 1 at certain frequencies, giving a gain of much greater than 1 at these frequencies when run through themselves multiple times. Figure 3 shows DEMA(7) and EPMA(7) run through themselves 3 times. DEMA^3 has serious overshoot, and EPMA^3 is terrible.
The solution to the overshoot problem is to recall what we are doing with twicing:
DEMA(n) = EMA(n) + EMA(time series - EMA(n))
The second term is adding, in effect, a smooth version of the derivative to the EMA to achieve DEMA. The derivative term determines how hot the moving average's response to linear trends will be. We need to simply turn down the volume to achieve our basic building block:
EMA(n) + EMA(time series - EMA(n))*.7;
This is algebraically the same as:
EMA(n)*1.7 - EMA(EMA(n))*.7;
I have chosen .7 as my volume factor, but the general formula (which I call "Generalized Dema") is:
GD(n,v) = EMA(n)*(1+v) - EMA(EMA(n))*v,
where v ranges between 0 and 1. When v=0, GD is just an EMA, and when v=1, GD is DEMA. In between, GD is a cooler DEMA. By using a value for v less than 1 (I like .7), we cure the multiple DEMA overshoot problem, at the cost of accepting some additional phase delay. Now we can run GD through itself multiple times to define a new, smoother moving average T3 that does not overshoot the data:
T3(n) = GD(GD(GD(n)))
In filter theory parlance, T3 is a six-pole non-linear Kalman filter. Kalman filters are ones which use the error (in this case, time series - EMA(n)) to correct themselves. In Technical Analysis, these are called Adaptive Moving Averages; they track the time series more aggressively when it is making large moves.
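For readers who want to experiment, here is a minimal Python sketch of GD and T3 as defined above. It assumes a pandas Series of prices; the function names are my own, and the default volume factor of 0.7 follows the text:

```python
import pandas as pd

def ema(s: pd.Series, n: int) -> pd.Series:
    # pandas ewm with span=n uses alpha = 2/(n+1), matching the text.
    return s.ewm(span=n, adjust=False).mean()

def gd(s: pd.Series, n: int, v: float = 0.7) -> pd.Series:
    # Generalized DEMA: GD(n,v) = EMA(n)*(1+v) - EMA(EMA(n))*v
    e = ema(s, n)
    return (1 + v) * e - v * ema(e, n)

def t3(s: pd.Series, n: int, v: float = 0.7) -> pd.Series:
    # T3(n) = GD(GD(GD(n))): three GD passes, i.e., six EMA poles in total.
    return gd(gd(gd(s, n, v), n, v), n, v)
```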
Included:
Bar coloring
Signals
Alerts
EMA and FEMA Signal/DSL smoothing
Loxx's Expanded Source Types
Pips-Stepped, R-squared Adaptive T3 [Loxx]Pips-Stepped, R-squared Adaptive T3 is a T3 moving average with optional adaptivity, trend following, and pip-stepping. This indicator also uses optional flat coloring to identify chop zones. This indicator is R-squared adaptive. This is also an experimental indicator.
What is the T3 moving average?
Better Moving Averages Tim Tillson
November 1, 1998
Tim Tillson is a software project manager at Hewlett-Packard, with degrees in Mathematics and Computer Science. He has privately traded options and equities for 15 years.
Introduction
"Digital filtering includes the process of smoothing, predicting, differentiating, integrating, separation of signals, and removal of noise from a signal. Thus many people who do such things are actually using digital filters without realizing that they are; being unacquainted with the theory, they neither understand what they have done nor the possibilities of what they might have done."
This quote from R. W. Hamming applies to the vast majority of indicators in technical analysis. Moving averages, be they simple, weighted, or exponential, are lowpass filters; low frequency components in the signal pass through with little attenuation, while high frequencies are severely reduced.
"Oscillator" type indicators (such as MACD, Momentum, Relative Strength Index) are another type of digital filter called a differentiator.
Tushar Chande has observed that many popular oscillators are highly correlated, which is sensible because they are trying to measure the rate of change of the underlying time series, i.e., are trying to be the first and second derivatives we all learned about in Calculus.
We use moving averages (lowpass filters) in technical analysis to remove the random noise from a time series, to discern the underlying trend or to determine prices at which we will take action. A perfect moving average would have two attributes:
It would be smooth, not sensitive to random noise in the underlying time series. Another way of saying this is that its derivative would not spuriously alternate between positive and negative values.
It would not lag behind the time series it is computed from. Lag, of course, produces late buy or sell signals that kill profits.
The only way one can compute a perfect moving average is to have knowledge of the future, and if we had that, we would buy one lottery ticket a week rather than trade!
Having said this, we can still improve on the conventional simple, weighted, or exponential moving averages. Here's how:
Two Interesting Moving Averages
We will examine two benchmark moving averages based on Linear Regression analysis.
In both cases, a Linear Regression line of length n is fitted to price data.
I call the first moving average ILRS, which stands for Integral of Linear Regression Slope. One simply integrates the slope of a linear regression line as it is successively fitted in a moving window of length n across the data, with the constant of integration being a simple moving average of the first n points. Put another way, the derivative of ILRS is the linear regression slope. Note that ILRS is not the same as an SMA (simple moving average) of length n, which is actually the midpoint of the linear regression line as it moves across the data.
We can measure the lag of moving averages with respect to a linear trend by computing how they behave when the input is a line with unit slope. Both SMA(n) and ILRS(n) have lag of n/2, but ILRS is much smoother than SMA.
Our second benchmark moving average is well known, called EPMA or End Point Moving Average. It is the endpoint of the linear regression line of length n as it is fitted across the data. EPMA hugs the data more closely than a simple or exponential moving average of the same length. The price we pay for this is that it is much noisier (less smooth) than ILRS, and it also has the annoying property that it overshoots the data when linear trends are present.
However, EPMA has a lag of 0 with respect to linear input! This makes sense because a linear regression line will fit linear input perfectly, and the endpoint of the LR line will be on the input line.
These two moving averages frame the tradeoffs that we are facing. On one extreme we have ILRS, which is very smooth and has considerable phase lag. EPMA has 0 phase lag, but is too noisy and overshoots. We would like to construct a better moving average which is as smooth as ILRS, but runs closer to where EPMA lies, without the overshoot.
An easy way to attempt this is to split the difference, i.e. use (ILRS(n)+EPMA(n))/2. This will give us a moving average (call it IE/2) which runs in between the two, has phase lag of n/4, but still inherits considerable noise from EPMA. IE/2 is inspirational, however. Can we build something that is comparable, but smoother? Figure 1 shows ILRS, EPMA, and IE/2.
Filter Techniques
Any thoughtful student of filter theory (or resolute experimenter) will have noticed that you can improve the smoothness of a filter by running it through itself multiple times, at the cost of increasing phase lag.
There is a complementary technique (called twicing by J.W. Tukey) which can be used to improve phase lag. If L stands for the operation of running data through a low pass filter, then twicing can be described by:
L' = L(time series) + L(time series - L(time series))
That is, we add a moving average of the difference between the input and the moving average to the moving average. This is algebraically equivalent to:
2L-L(L)
This is the Double Exponential Moving Average or DEMA, popularized by Patrick Mulloy in TASC (January/February 1994).
In our taxonomy, DEMA has some phase lag (although it exponentially approaches 0) and is somewhat noisy, comparable to the IE/2 indicator.
We will use these two techniques to construct our better moving average, after we explore the first one a little more closely.
Fixing Overshoot
An n-day EMA has smoothing constant alpha=2/(n+1) and a lag of (n-1)/2.
Thus EMA(3) has lag 1, and EMA(11) has lag 5. Figure 2 shows that, if I am willing to incur 5 days of lag, I get a smoother moving average if I run EMA(3) through itself 5 times than if I just take EMA(11) once.
This suggests that if EPMA and DEMA have 0 or low lag, why not run fast versions (e.g., DEMA(3)) through themselves many times to achieve a smooth result? The problem is that multiple runs through these filters increase their tendency to overshoot the data, giving an unusable result. This is because the amplitude response of DEMA and EPMA is greater than 1 at certain frequencies, giving a gain of much greater than 1 at these frequencies when run through themselves multiple times. Figure 3 shows DEMA(7) and EPMA(7) run through themselves 3 times. DEMA^3 has serious overshoot, and EPMA^3 is terrible.
The solution to the overshoot problem is to recall what we are doing with twicing:
DEMA(n) = EMA(n) + EMA(time series - EMA(n))
The second term is adding, in effect, a smooth version of the derivative to the EMA to achieve DEMA. The derivative term determines how hot the moving average's response to linear trends will be. We need to simply turn down the volume to achieve our basic building block:
EMA(n) + EMA(time series - EMA(n))*.7;
This is algebraically the same as:
EMA(n)*1.7 - EMA(EMA(n))*.7;
I have chosen .7 as my volume factor, but the general formula (which I call "Generalized Dema") is:
GD(n,v) = EMA(n)*(1+v) - EMA(EMA(n))*v,
where v ranges between 0 and 1. When v=0, GD is just an EMA, and when v=1, GD is DEMA. In between, GD is a cooler DEMA. By using a value for v less than 1 (I like .7), we cure the multiple DEMA overshoot problem, at the cost of accepting some additional phase delay. Now we can run GD through itself multiple times to define a new, smoother moving average T3 that does not overshoot the data:
T3(n) = GD(GD(GD(n)))
In filter theory parlance, T3 is a six-pole non-linear Kalman filter. Kalman filters are ones which use the error (in this case, time series - EMA(n)) to correct themselves. In Technical Analysis, these are called Adaptive Moving Averages; they track the time series more aggressively when it is making large moves.
What is R-squared Adaptive?
One tool available in forecasting the trendiness of the breakout is the coefficient of determination (R-squared), a statistical measurement.
The R-squared indicates linear strength between the security's price (the Y-axis) and time (the X-axis). The R-squared is the percentage of squared error that the linear regression can eliminate if it were used as the predictor instead of the mean value. If the R-squared were 0.99, then the linear regression would eliminate 99% of the error for prediction versus predicting closing prices using a simple moving average.
R-squared is used here to derive a T3 factor that modifies price before it is passed through the six-pole non-linear Kalman filter.
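As a rough illustration of the R-squared calculation itself (a sketch only, not the indicator's actual code; the window length of 32 is an arbitrary choice here), one can regress price on bar index over a rolling window and square the Pearson correlation:

```python
import numpy as np
import pandas as pd

def rolling_r_squared(close: pd.Series, window: int = 32) -> pd.Series:
    # For a simple linear regression of price on time, R-squared equals the
    # squared correlation between price and the bar index.
    def r2(values: np.ndarray) -> float:
        t = np.arange(len(values), dtype=float)
        corr = np.corrcoef(t, values)[0, 1]
        return corr * corr
    return close.rolling(window).apply(r2, raw=True)
```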
Included:
Bar coloring
Signals
Alerts
Flat coloring
R-squared Adaptive T3 [Loxx]R-squared Adaptive T3 is an R-squared adaptive version of Tilson's T3 moving average. This adaptivity was originally proposed by mladen on various forex forums. This is considered experimental, but it shows how to apply R-squared adaptive methods to moving averages. In theory, the T3 is a six-pole non-linear Kalman filter.
What is the T3 moving average?
Better Moving Averages Tim Tillson
November 1, 1998
Tim Tillson is a software project manager at Hewlett-Packard, with degrees in Mathematics and Computer Science. He has privately traded options and equities for 15 years.
Introduction
"Digital filtering includes the process of smoothing, predicting, differentiating, integrating, separation of signals, and removal of noise from a signal. Thus many people who do such things are actually using digital filters without realizing that they are; being unacquainted with the theory, they neither understand what they have done nor the possibilities of what they might have done."
This quote from R. W. Hamming applies to the vast majority of indicators in technical analysis. Moving averages, be they simple, weighted, or exponential, are lowpass filters; low frequency components in the signal pass through with little attenuation, while high frequencies are severely reduced.
"Oscillator" type indicators (such as MACD, Momentum, Relative Strength Index) are another type of digital filter called a differentiator.
Tushar Chande has observed that many popular oscillators are highly correlated, which is sensible because they are trying to measure the rate of change of the underlying time series, i.e., are trying to be the first and second derivatives we all learned about in Calculus.
We use moving averages (lowpass filters) in technical analysis to remove the random noise from a time series, to discern the underlying trend or to determine prices at which we will take action. A perfect moving average would have two attributes:
It would be smooth, not sensitive to random noise in the underlying time series. Another way of saying this is that its derivative would not spuriously alternate between positive and negative values.
It would not lag behind the time series it is computed from. Lag, of course, produces late buy or sell signals that kill profits.
The only way one can compute a perfect moving average is to have knowledge of the future, and if we had that, we would buy one lottery ticket a week rather than trade!
Having said this, we can still improve on the conventional simple, weighted, or exponential moving averages. Here's how:
Two Interesting Moving Averages
We will examine two benchmark moving averages based on Linear Regression analysis.
In both cases, a Linear Regression line of length n is fitted to price data.
I call the first moving average ILRS, which stands for Integral of Linear Regression Slope. One simply integrates the slope of a linear regression line as it is successively fitted in a moving window of length n across the data, with the constant of integration being a simple moving average of the first n points. Put another way, the derivative of ILRS is the linear regression slope. Note that ILRS is not the same as an SMA (simple moving average) of length n, which is actually the midpoint of the linear regression line as it moves across the data.
We can measure the lag of moving averages with respect to a linear trend by computing how they behave when the input is a line with unit slope. Both SMA(n) and ILRS(n) have lag of n/2, but ILRS is much smoother than SMA.
Our second benchmark moving average is well known, called EPMA or End Point Moving Average. It is the endpoint of the linear regression line of length n as it is fitted across the data. EPMA hugs the data more closely than a simple or exponential moving average of the same length. The price we pay for this is that it is much noisier (less smooth) than ILRS, and it also has the annoying property that it overshoots the data when linear trends are present.
However, EPMA has a lag of 0 with respect to linear input! This makes sense because a linear regression line will fit linear input perfectly, and the endpoint of the LR line will be on the input line.
These two moving averages frame the tradeoffs that we are facing. On one extreme we have ILRS, which is very smooth and has considerable phase lag. EPMA has 0 phase lag, but is too noisy and overshoots. We would like to construct a better moving average which is as smooth as ILRS, but runs closer to where EPMA lies, without the overshoot.
An easy way to attempt this is to split the difference, i.e. use (ILRS(n)+EPMA(n))/2. This will give us a moving average (call it IE/2) which runs in between the two, has phase lag of n/4, but still inherits considerable noise from EPMA. IE/2 is inspirational, however. Can we build something that is comparable, but smoother? Figure 1 shows ILRS, EPMA, and IE/2.
Filter Techniques
Any thoughtful student of filter theory (or resolute experimenter) will have noticed that you can improve the smoothness of a filter by running it through itself multiple times, at the cost of increasing phase lag.
There is a complementary technique (called twicing by J.W. Tukey) which can be used to improve phase lag. If L stands for the operation of running data through a low pass filter, then twicing can be described by:
L' = L(time series) + L(time series - L(time series))
That is, we add a moving average of the difference between the input and the moving average to the moving average. This is algebraically equivalent to:
2L-L(L)
This is the Double Exponential Moving Average or DEMA, popularized by Patrick Mulloy in TASC (January/February 1994).
In our taxonomy, DEMA has some phase lag (although it exponentially approaches 0) and is somewhat noisy, comparable to the IE/2 indicator.
We will use these two techniques to construct our better moving average, after we explore the first one a little more closely.
Fixing Overshoot
An n-day EMA has smoothing constant alpha=2/(n+1) and a lag of (n-1)/2.
Thus EMA(3) has lag 1, and EMA(11) has lag 5. Figure 2 shows that, if I am willing to incur 5 days of lag, I get a smoother moving average if I run EMA(3) through itself 5 times than if I just take EMA(11) once.
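The lag arithmetic is easy to verify numerically. The sketch below (my own illustration, not the article's code) feeds a unit-slope line into a single EMA(11) and into EMA(3) iterated 5 times; both settle about 5 units behind the input, as the text claims, but the iterated filter is smoother on noisy input:

```python
import numpy as np

def ema(x, n):
    # EMA with alpha = 2/(n+1); steady-state lag on a unit-slope line is (n-1)/2.
    alpha = 2.0 / (n + 1)
    out = np.empty(len(x))
    out[0] = x[0]
    for i in range(1, len(x)):
        out[i] = alpha * x[i] + (1 - alpha) * out[i - 1]
    return out

line = np.arange(300, dtype=float)   # unit-slope input

single = ema(line, 11)               # one pass of EMA(11): lag (11-1)/2 = 5
iterated = line.copy()
for _ in range(5):                   # five passes of EMA(3): 5 * (3-1)/2 = 5
    iterated = ema(iterated, 3)

print(round(line[-1] - single[-1], 2))    # ~5.0
print(round(line[-1] - iterated[-1], 2))  # ~5.0
```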
This suggests that if EPMA and DEMA have 0 or low lag, why not run fast versions (e.g., DEMA(3)) through themselves many times to achieve a smooth result? The problem is that multiple runs through these filters increase their tendency to overshoot the data, giving an unusable result. This is because the amplitude response of DEMA and EPMA is greater than 1 at certain frequencies, giving a gain of much greater than 1 at these frequencies when run through themselves multiple times. Figure 3 shows DEMA(7) and EPMA(7) run through themselves 3 times. DEMA^3 has serious overshoot, and EPMA^3 is terrible.
The solution to the overshoot problem is to recall what we are doing with twicing:
DEMA(n) = EMA(n) + EMA(time series - EMA(n))
The second term is adding, in effect, a smooth version of the derivative to the EMA to achieve DEMA. The derivative term determines how hot the moving average's response to linear trends will be. We need to simply turn down the volume to achieve our basic building block:
EMA(n) + EMA(time series - EMA(n))*.7;
This is algebraically the same as:
EMA(n)*1.7-EMA(EMA(n))*.7;
I have chosen .7 as my volume factor, but the general formula (which I call "Generalized Dema") is:
GD(n,v) = EMA(n)*(1+v) - EMA(EMA(n))*v,
where v ranges between 0 and 1. When v=0, GD is just an EMA, and when v=1, GD is DEMA. In between, GD is a cooler DEMA. By using a value for v less than 1 (I like .7), we cure the multiple DEMA overshoot problem, at the cost of accepting some additional phase delay. Now we can run GD through itself multiple times to define a new, smoother moving average T3 that does not overshoot the data:
T3(n) = GD(GD(GD(n)))
In filter theory parlance, T3 is a six-pole non-linear Kalman filter. Kalman filters are ones which use the error (in this case, time series - EMA(n)) to correct themselves. In Technical Analysis, these are called Adaptive Moving Averages; they track the time series more aggressively when it is making large moves.
Included:
Bar coloring
Signals
Alerts
Loxx's Expanded Source Types
Drawdown RangeHello death eaters, presenting a unique script which can be used for fundamental analysis or mean-reversion-based trades.
The process of deriving this table is as follows:
Find out the ATH for the given day
Calculate the drawdown from ATH for the day and the drawdown percentage
Based on the drawdown percentage, increment the count of the basket determined by the input Number of Ranges. For example, if the number of ranges is 5, then there will be 5 baskets. The first basket will hold drawdown percentages of 0-20%, and each subsequent one will accommodate the next 20% range.
Repeat the process from the first bar to the last. Once done, the table plots what percentage of days falls into each basket.
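A minimal Python sketch of this process (my own paraphrase of the steps above, not the script's Pine code; the 5-range default mirrors the example):

```python
import numpy as np
import pandas as pd

def drawdown_table(close: pd.Series, n_ranges: int = 5) -> pd.Series:
    # Steps 1-2: running ATH and drawdown-from-ATH percentage for each day.
    ath = close.cummax()
    dd_pct = (ath - close) / ath * 100.0
    # Step 3: assign each day to a basket; 5 ranges -> 20% per basket.
    width = 100.0 / n_ranges
    basket = np.minimum((dd_pct // width).astype(int), n_ranges - 1)
    # Step 4: share of days per basket, as percentages.
    return basket.value_counts().sort_index() / len(close) * 100.0
```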
For example, from the below chart of NASDAQ:AAPL
We can deduce the following:
Historically, the stock has traded within a 1% drawdown from ATH for 6.59% of the time. This is the longest the stock has stayed in any specific drawdown range from ATH.
The stock has traded in the 82-83% drawdown range from ATH for 0.17% of the time. This is the least amount of time the stock has stayed in any specific drawdown range from ATH.
At present, the stock is trading 2-3% below ATH, and this has happened for about 2.46% of total trading days.
The maximum drawdown the stock has suffered is 83%.
Let's take another example of NASDAQ:TSLA
The stock is trading 21-22% below ATH. But historically, the range where the stock has spent the most time is within 0-1% of ATH. Now, if we make this range show 20 divisions instead of 100, it will look something like this:
The table suggests that the stock is trading about 20-25% below ATH - which is right. But the table also suggests that the stock has spent the most days within this drawdown range when we divide it into 20 baskets instead of 100. I would probably wait for price to break out of this range before going long or short. At present, it seems to be in a ranging stage. I might think about selling PUTs or covered CALLs outside this range.
Similarly, if you look at AMEX:SPY, price has stayed within 5% of ATH for 36% of the time - which makes it a compelling bull case!!
NYSE:BABA is trading 50-55% below ATH - the most it has retraced so far. In general, it used to stay within 15-20% of ATH.
Now, a bit of explanation on the input options.
Number of Ranges : Says how many baskets the drawdown map needs to be divided into.
Reference : You can take ATH as the reference, or choose a time window between which the highest price is considered for the drawdown. This can be useful for megacaps which have gone beyond their initial phase of uncertainty. There is no point looking at the 80% drawdown AAPL had during the 1990s. It is more appropriate to look at it post-2000s, when it started making higher impact and growth.
Cumulative Percentage : When this is unchecked, the percentage division shows 0-nth percentage instead of percentage ranges. For example, this is how it looks on SPY:
We can see that SPY has remained within 6% from ATH for more than 50% of the time.
Hope this is helpful. Happy trading :)
PS: this can be used in conjunction with Drawdown-Price-vs-Fundamentals to pick value stocks at a discounted price while also keeping an eye on their range tendencies.
Thanks to @mattX5 for the ideas and discussion today :)
{Gunzo} Heiken Ashi RibbonsHeiken Ashi Ribbons is a trend-following indicator which gives entry and exit points for short-term, medium-term and long-term trading (using Exponential Moving Averages and Heiken Ashi formulas).
OVERVIEW :
The Heiken Ashi Ribbons indicator is composed of 3 moving average ribbons (slow, normal and fast) that are computed using the Heiken Ashi formulas. The 3 ribbons give a clear vision of the current trend as they use moving averages that smooth out the price and filter noise from short term fluctuations. In a simplified way, you can consider each ribbon as a moving average with a larger body size.
If the price is above the fast ribbon, we consider the asset as trending up in the short term (trending down otherwise). If the price is above the slow ribbon, we consider the asset as trending up in the long term (trending down otherwise).
CALCULATION :
First of all, to compute a ribbon for this indicator we calculate a moving average (EMA by default) for the common sources (OHLC):
EMA(open), EMA(high), EMA(low), EMA(close)
We then apply the Heiken Ashi formulas to the moving averages calculated previously.
HA(open) = ( HA(open) previous + HA(close) previous ) / 2
HA(close) = ( EMA(open) + EMA(high) + EMA(low) + EMA(close) ) / 4
HA(high) = max( EMA(open), EMA(close), EMA(high) )
HA(low) = min( EMA(open), EMA(close), EMA(low) )
The ribbon displayed (by default) on the chart is the area between HA(open) and HA(close).
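The calculation above translates almost line for line into code. Here is a hedged Python sketch of one ribbon (the helper name and DataFrame columns are assumptions, not the script's actual identifiers):

```python
import pandas as pd

def heiken_ashi_ribbon(df: pd.DataFrame, length: int) -> pd.DataFrame:
    # EMAs of the four common sources (OHLC).
    ema = lambda s: s.ewm(span=length, adjust=False).mean()
    o, h, l, c = (ema(df[col]) for col in ("open", "high", "low", "close"))

    ha_close = (o + h + l + c) / 4
    ha_open = ha_close.copy()
    ha_open.iloc[0] = (o.iloc[0] + c.iloc[0]) / 2        # conventional seed value
    for i in range(1, len(df)):
        # Recursive open: average of the previous HA open and HA close.
        ha_open.iloc[i] = (ha_open.iloc[i - 1] + ha_close.iloc[i - 1]) / 2

    ha_high = pd.concat([o, c, h], axis=1).max(axis=1)
    ha_low = pd.concat([o, c, l], axis=1).min(axis=1)
    # The standard ribbon is the area between ha_open and ha_close.
    return pd.DataFrame({"ha_open": ha_open, "ha_close": ha_close,
                         "ha_high": ha_high, "ha_low": ha_low})
```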
SETTINGS :
1st Moving average length : Length of the slow moving average
2nd Moving average length : Length of the normal moving average
3rd Moving average length : Length of the fast moving average
Moving average method : Moving average calculation method (EMA : Exponential Moving Average, SMA : Simple Moving Average, WMA : Weighted Moving Average)
Ribbon type : standard ribbon uses the area between HA(open) and HA(close). Large ribbon uses the area between HA(low) and HA(high)
Display ribbon as candles : change the type of visualization between area and candles
Display short term buy/sell signals : Display short term buy/sell signals (crosses) when the fast moving average and normal moving average are crossing
Display long term buy/sell signals : Display long buy/sell signals (circles) when the fast moving average and slow moving average are crossing
Display ribbon trending up signals : Display ribbon direction change (triangle up) when the trend of the ribbon changes to trending up
Display ribbon trending down signals : Display ribbon direction change (triangle down) when the trend of the ribbon changes to trending down
VISUALIZATIONS :
This indicator has 2 possible visualizations :
Ribbons : the ribbons can be considered as enhanced moving averages for trading purposes. They represent the area between the Heiken Ashi of the moving average of the open and closing price. The color of the moving average line is green when the ribbon is trending up and red when the ribbon is trending down.
Signals : Various signals can be displayed at the bottom of the chart (Buy/Sell signals, Ribbon direction changes signals).
USAGE :
This indicator can be used in many strategies, just like when you are using multiple moving averages. You should test these strategies and use the one that best fits your trading style.
Strategy based on crossovers :
When the fast ribbon crosses above the normal ribbon, it is a short term buy signal (it is recommended to wait for a confirmation)
When the fast ribbon crosses under the normal ribbon, it is a short term sell signal (it is recommended to wait for a confirmation)
When the fast ribbon crosses above the slow ribbon, it is a long term buy signal
When the fast ribbon crosses under the slow ribbon, it is a long term sell signal
Strategy based on price position :
When the price closes above the ribbon, it is a buy signal (long term if above slow ribbon, short term if above fast ribbon)
When the price closes below the ribbon, it is a sell signal (long term if below slow ribbon, short term if below fast ribbon)
Strategy based on price bouncing :
When the price decreases and reaches the green long term ribbon, the price candles may not be able to cross the ribbon. If the price increases, we consider that move as a bounce on the ribbon, which is a buy signal
When the price increases and reaches a red long term ribbon, the price candles may not be able to cross the ribbon. If the price decreases, we consider that move as a bounce on the ribbon, which is a sell signal
Strategy based on ribbon direction :
When the direction of the ribbon changes, the trend of the asset is changing, which may lead to a crossover on the following candles if the trend continues in that direction (it is recommended to validate entry points with a second indicator, as this strategy may produce some false signals).
MultiLayer Acceleration/Deceleration Strategy [Skyrexio]Overview
MultiLayer Acceleration/Deceleration Strategy leverages the combination of the Acceleration/Deceleration Indicator (AC), Williams Alligator, Williams Fractals and an Exponential Moving Average (EMA) to obtain high-probability long setups. Moreover, the strategy uses a multi-trade system, adding funds to the long position if the current trend is considered to have likely become stronger. The Acceleration/Deceleration Indicator is used for creating signals, while the Alligator and Fractals are used in conjunction as an approximation of the short-term trend to filter them. At the same time, the EMA (default period = 100) is used as a high-probability long-term trend filter, so long trades are opened only when current price action is considered an uptrend. More information in the "Methodology" and "Justification of Methodology" paragraphs. The strategy opens only long trades.
Unique Features
No fixed stop-loss and take-profit: Instead of a fixed stop-loss level, the strategy utilizes a technical condition obtained from Fractals and the Alligator to identify when the current uptrend is likely to be over (more information in the "Methodology" and "Justification of Methodology" paragraphs)
Configurable Trading Periods: Users can tailor the strategy to specific market windows, adapting to different market conditions.
Multilayer trade-opening system: the strategy uses only 10% of capital in every trade and opens up to 5 trades at the same time if the script considers the current trend a strong one.
Short- and long-term trend trade filters: the strategy uses the EMA as a high-probability long-term trend filter and the Alligator and Fractal combination as a short-term one.
Methodology
The strategy opens a long trade when price meets the following conditions:
1. Price closed above the EMA (by default, period = 100). A crossover is not obligatory.
2. The combination of the Alligator and Williams Fractals shall indicate that the current trend is upward (all details in the "Justification of Methodology" paragraph)
3. Acceleration/Deceleration shall create one of two types of long signals (all details in the "Justification of Methodology" paragraph). A buy stop order is placed one tick above the candle's high of the last created long signal.
4. If price reaches the order price, a long position is opened with 10% of capital.
5. If a position is already open and price creates and hits the order price of another long signal, another long position is added to the previous one with a further 10% of capital. The strategy allows up to 5 simultaneous long trades.
6. If the combination of the Alligator and Williams Fractals indicates that the current trend has changed from an uptrend to a downtrend, all long trades are closed, no matter how many have been opened.
The script also has additional visuals. If a second long trade has been opened, the Alligator's teeth line is plotted in green. Also, for every trade in a row from 2 to 5, the label "Buy More" is plotted just below the teeth line. With every next simultaneously opened trade, the green color of the space between the teeth and price becomes less transparent.
Strategy settings
In the inputs window the user can set up the strategy settings: EMA Length (by default = 100; the period of the EMA used for the long-term trend filter). The user can choose the optimal parameters during backtesting on a particular price chart.
Justification of Methodology
Let's explore the key concepts of this strategy and understand how they work together. We'll begin with the simplest: the EMA.
The Exponential Moving Average (EMA) is a type of moving average that assigns greater weight to recent price data, making it more responsive to current market changes compared to the Simple Moving Average (SMA). This tool is widely used in technical analysis to identify trends and generate buy or sell signals. The EMA is calculated as follows:
1. Calculate the Smoothing Multiplier:
Multiplier = 2 / (n + 1), where n is the number of periods.
2. EMA Calculation
EMA = (Current Price) × Multiplier + (Previous EMA) × (1 − Multiplier)
In this strategy, the EMA acts as a long-term trend filter. For instance, long trades are considered only when the price closes above the EMA (default: 100-period). This increases the likelihood of entering trades aligned with the prevailing trend.
Next, let’s discuss the short-term trend filter, which combines the Williams Alligator and Williams Fractals.
Williams Alligator
Developed by Bill Williams, the Alligator is a technical indicator that identifies trends and potential market reversals. It consists of three smoothed moving averages:
Jaw (Blue Line): The slowest of the three, based on a 13-period smoothed moving average shifted 8 bars ahead.
Teeth (Red Line): The medium-speed line, derived from an 8-period smoothed moving average shifted 5 bars forward.
Lips (Green Line): The fastest line, calculated using a 5-period smoothed moving average shifted 3 bars forward.
When the lines diverge and align in order, the "Alligator" is "awake," signaling a strong trend. When the lines overlap or intertwine, the "Alligator" is "asleep," indicating a range-bound or sideways market. This indicator helps traders determine when to enter or avoid trades.
Fractals, another tool by Bill Williams, help identify potential reversal points on a price chart. A fractal forms over at least five consecutive bars, with the middle bar showing either:
Up Fractal: Occurs when the middle bar has a higher high than the two preceding and two following bars, suggesting a potential downward reversal.
Down Fractal: Happens when the middle bar shows a lower low than the surrounding two bars, hinting at a possible upward reversal.
Traders often use fractals alongside other indicators to confirm trends or reversals, enhancing decision-making accuracy.
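As an illustration of the 5-bar fractal definition above, here is a short Python sketch (my own, not Bill Williams' or the strategy's exact code):

```python
import numpy as np

def williams_fractals(high: np.ndarray, low: np.ndarray):
    # Marks the middle bar of each 5-bar window; the last two bars can never
    # be confirmed fractals because they lack two following bars.
    n = len(high)
    up = np.zeros(n, dtype=bool)
    down = np.zeros(n, dtype=bool)
    for i in range(2, n - 2):
        up[i] = high[i] > max(high[i-2], high[i-1], high[i+1], high[i+2])
        down[i] = low[i] < min(low[i-2], low[i-1], low[i+1], low[i+2])
    return up, down
```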
How do these tools work together in this strategy? Let’s consider an example of an uptrend.
When the price breaks above an up fractal, it signals a potential bullish trend. This occurs because the up fractal represents a shift in market behavior, where a temporary high was formed due to selling pressure. If the price revisits this level and breaks through, it suggests the market sentiment has turned bullish.
The breakout must occur above the Alligator’s teeth line to confirm the trend. A breakout below the teeth is considered invalid, and the downtrend might still persist. Conversely, in a downtrend, the same logic applies with down fractals.
In this strategy if the most recent up fractal breakout occurs above the Alligator's teeth and follows the last down fractal breakout below the teeth, the algorithm identifies an uptrend. Long trades can be opened during this phase if a signal aligns. If the price breaks a down fractal below the teeth line during an uptrend, the strategy assumes the uptrend has ended and closes all open long trades.
By combining the EMA as a long-term trend filter with the Alligator and fractals as short-term filters, this approach increases the likelihood of opening profitable trades while staying aligned with market dynamics.
Now let's talk about Acceleration/Deceleration signals. The AC indicator is calculated using the Awesome Oscillator, so let's first briefly explain what the Awesome Oscillator is and how it can be calculated. The Awesome Oscillator (AO), developed by Bill Williams, is a momentum indicator designed to measure market momentum by contrasting recent price movements with a longer-term historical perspective. It helps traders detect potential trend reversals and assess the strength of ongoing trends.
The formula for AO is as follows:
AO = SMA5(Median Price) − SMA34(Median Price)
where:
Median Price = (High + Low) / 2
SMA5 = 5-period Simple Moving Average of the Median Price
SMA34 = 34-period Simple Moving Average of the Median Price
The Acceleration/Deceleration (AC) Indicator, introduced by Bill Williams, measures the rate of change in market momentum. It highlights shifts in the driving force of price movements and helps traders spot early signs of trend changes. The AC Indicator is particularly useful for identifying whether the current momentum is accelerating or decelerating, which can indicate potential reversals or continuations. For the AC calculation we use the AO calculated above in the following formula:
AC = AO − SMA5(AO), where SMA5(AO) is the 5-period Simple Moving Average of the Awesome Oscillator
When the AC is above the zero line and rising, it suggests accelerating upward momentum.
When the AC is below the zero line and falling, it indicates accelerating downward momentum.
When the AC is below the zero line and rising, it suggests that the downtrend momentum is decelerating. When the AC is above the zero line and falling, it suggests that the uptrend momentum is decelerating.
Now we can explain which AC signal types are used in this strategy. The first type of long signal occurs when the AC value is below the zero line. In this case we need to see three rising bars in a row on the histogram after a falling one. The second type of signal occurs above the zero line. There we need only two rising AC bars in a row after a falling one to create the signal. The signal bar is the last green bar in this sequence. The strategy places the buy stop order one tick above the high of the candle that corresponds to the signal bar on the AC indicator.
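A hedged Python sketch of these two signal patterns (my own reading of the rules above; the function name and column layout are assumptions, not the strategy's code):

```python
import pandas as pd

def ac_long_signals(df: pd.DataFrame) -> pd.Series:
    # AO and AC as defined in the text.
    median = (df["high"] + df["low"]) / 2
    ao = median.rolling(5).mean() - median.rolling(34).mean()
    ac = ao - ao.rolling(5).mean()

    rising = ac.diff() > 0
    falling = ac.diff() < 0
    r1 = rising.shift(1, fill_value=False)
    r2 = rising.shift(2, fill_value=False)

    # Below zero: three rising bars after a falling one.
    below = (ac < 0) & rising & r1 & r2 & falling.shift(3, fill_value=False)
    # Above zero: two rising bars after a falling one.
    above = (ac > 0) & rising & r1 & falling.shift(2, fill_value=False)
    return below | above   # True on the signal bar
```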
After that we can have the following scenarios:
Price hits the order on the next candle; in this case the strategy opens a long at this price.
Price doesn't hit the order price and the next candle sets a lower high. If the current AC bar is increasing, the script moves the buy stop order to the high of this new bar plus one tick. This procedure repeats until price finally hits the buy order or the current AC bar becomes decreasing. In the second case the buy order is cancelled and the strategy waits for the next AC signal.
If long trades are initiated, the strategy continues utilizing subsequent signals until the total number of trades reaches a maximum of 5. All open trades are closed when the trend shifts to a downtrend, as determined by the combination of the Alligator and Fractals described earlier.
Why do we use AC signals? Suppose the algorithm currently considers a short-term uptrend highly probable based on the Alligator and Fractals combination pointed out above, and the long-term trend is also suggested as bullish by the EMA filter. Rising AC bars after a period of falling AC bars then indicate a high probability that the local pullback has ended, so there is a good chance to open a long trade in the direction of the most likely main uptrend. The number of rising bars required differs for different AC values (below or above the zero line). This is needed because if AC is below the zero line, the local downtrend is likely to be stronger and needs more rising bars to confirm that it has changed than if AC is above zero.
Why does the strategy use only 10% per signal? Sometimes we can see false signals which appear in sideways markets. To avoid risking too much, the script uses only 10% per signal. If the first long trade has been opened, price continues going up, and our trend approximation by the Alligator and Fractals is an uptrend, the strategy adds another 10% of capital on every next AC signal while the number of active trades is no more than 5. This capital allocation allows the strategy to participate in long trades when the current uptrend is likely to be strong, and to use only 10% of capital when sideways movement is highly probable.
Backtest Results
Operating window: Date range of backtests is 2023.01.01 - 2024.11.01. It is chosen to let the strategy close all opened positions.
Commission and Slippage: Includes a standard Binance commission of 0.1% and accounts for possible slippage over 5 ticks.
Initial capital: 10000 USDT
Percent of capital used in every trade: 10%
Maximum Single Position Loss: -5.15%
Maximum Single Profit: +24.57%
Net Profit: +2108.85 USDT (+21.09%)
Total Trades: 111 (36.94% win rate)
Profit Factor: 2.391
Maximum Accumulated Loss: 367.61 USDT (-2.97%)
Average Profit per Trade: 19.00 USDT (+1.78%)
Average Trade Duration: 75 hours
How to Use
Add the script to favorites for easy access.
Apply to the desired timeframe and chart (optimal performance observed on 3h BTC/USDT).
Configure settings using the dropdown choice list in the built-in menu.
Set up alerts to automate strategy positions through web hook with the text: {{strategy.order.alert_message}}
Disclaimer:
Educational and informational tool reflecting Skyrex's commitment to informed trading. Past performance does not guarantee future results. Test strategies in a simulated environment before live implementation.
These results are obtained with realistic parameters representing trading conditions observed at major exchanges such as Binance and with realistic trading portfolio usage parameters.
MultiLayer Awesome Oscillator Saucer Strategy [Skyrexio]Overview
MultiLayer Awesome Oscillator Saucer Strategy leverages the combination of the Awesome Oscillator (AO), Williams Alligator, Williams Fractals and an Exponential Moving Average (EMA) to obtain high-probability long setups. Moreover, the strategy uses a multi-trade system, adding funds to the long position if the current trend is considered to have likely become stronger. The Awesome Oscillator is used for creating signals, while the Alligator and Fractals are used in conjunction as an approximation of the short-term trend to filter them. At the same time, the EMA (default period = 100) is used as a high-probability long-term trend filter, so long trades are opened only when current price action is considered an uptrend. More information in the "Methodology" and "Justification of Methodology" paragraphs. The strategy opens only long trades.
Unique Features
No fixed stop-loss and take-profit: Instead of a fixed stop-loss level, the strategy utilizes a technical condition obtained from Fractals and the Alligator to identify when the current uptrend is likely to be over (more information in the "Methodology" and "Justification of Methodology" paragraphs)
Configurable Trading Periods: Users can tailor the strategy to specific market windows, adapting to different market conditions.
Multilayer trade-opening system: the strategy uses only 10% of capital in every trade and opens up to 5 trades at the same time if the script considers the current trend a strong one.
Short- and long-term trend trade filters: the strategy uses the EMA as a high-probability long-term trend filter and the Alligator and Fractal combination as a short-term one.
Methodology
The strategy opens a long trade when price meets the following conditions:
1. Price closed above the EMA (by default, period = 100). A crossover is not obligatory.
2. The combination of the Alligator and Williams Fractals shall indicate that the current trend is upward (all details in the "Justification of Methodology" paragraph)
3. The Awesome Oscillator shall create the "Saucer" long signal (all details in the "Justification of Methodology" paragraph). A buy stop order is placed one tick above the candle's high of the last created "Saucer" signal.
4. If price reaches the order price, a long position is opened with 10% of capital.
5. If a position is already open and price creates and hits the order price of another "Saucer" signal, another long position is added to the previous one with a further 10% of capital. The strategy allows up to 5 simultaneous long trades.
6. If the combination of the Alligator and Williams Fractals indicates that the current trend has changed from an uptrend to a downtrend, all long trades are closed, no matter how many have been opened.
The script also has additional visuals. If a second long trade has been opened, the Alligator's teeth line is plotted in green. Also, for every trade in a row from 2 to 5, the label "Buy More" is plotted just below the teeth line. With every next simultaneously opened trade, the green color of the space between the teeth and price becomes less transparent.
Strategy settings
In the inputs window the user can set up the strategy settings: EMA Length (by default = 100; the period of the EMA used for the long-term trend filter). The user can choose the optimal parameters during backtesting on a particular price chart.
Justification of Methodology
Let's go through all the concepts used in this strategy to understand how they work together. Let's start with the easiest one, the EMA. The Exponential Moving Average (EMA) is a type of moving average that gives more weight to recent prices, making it more responsive to current price changes compared to the Simple Moving Average (SMA). It is commonly used in technical analysis to identify trends and generate buy or sell signals. It can be calculated with the following steps:
1. Calculate the Smoothing Multiplier:
Multiplier = 2 / (n + 1), where n is the number of periods.
2. EMA Calculation
EMA = (Current Price) × Multiplier + (Previous EMA) × (1 − Multiplier)
This strategy uses the EMA as an initial long-term trend filter. It allows long trades to be opened only if price closes above the EMA (by default, period = 100). This increases the probability of taking long trades only in the direction of the trend.
Let's move on to the short-term trend filter, which consists of the Alligator and Fractals. Let's briefly explain what these indicators mean. The Williams Alligator, developed by Bill Williams, is a technical indicator designed to spot trends and potential market reversals. It uses three smoothed moving averages, referred to as the jaw, teeth, and lips:
Jaw (Blue Line): The slowest of the three, based on a 13-period smoothed moving average shifted 8 bars ahead.
Teeth (Red Line): The medium-speed line, derived from an 8-period smoothed moving average shifted 5 bars forward.
Lips (Green Line): The fastest line, calculated using a 5-period smoothed moving average shifted 3 bars forward.
When these lines diverge and are properly aligned, the "alligator" is considered "awake," signaling a strong trend. Conversely, when the lines overlap or intertwine, the "alligator" is "asleep," indicating a range-bound or sideways market. This indicator assists traders in identifying when to act on or avoid trades.
The Williams Fractals, another tool introduced by Bill Williams, are used to pinpoint potential reversal points on a price chart. A fractal forms when there are at least five consecutive bars, with the middle bar displaying the highest high (for an up fractal) or the lowest low (for a down fractal), relative to the two bars on either side.
Key Points:
Up Fractal: Occurs when the middle bar has a higher high than the two preceding and two following bars, suggesting a potential downward reversal.
Down Fractal: Happens when the middle bar shows a lower low than the surrounding two bars, hinting at a possible upward reversal.
Traders often combine fractals with other indicators to confirm trends or reversals, improving the accuracy of trading decisions.
How do we use their combination in this strategy? Let’s consider an uptrend example. A breakout above an up fractal can be interpreted as a bullish signal, indicating a high likelihood that an uptrend is beginning. Here's the reasoning: an up fractal represents a potential shift in market behavior. When the fractal forms, it reflects a pullback caused by traders selling, creating a temporary high. However, if the price manages to return to that fractal’s high and break through it, it suggests the market has "changed its mind" and a bullish trend is likely emerging.
The moment of the breakout marks the potential transition to an uptrend. It’s crucial to note that this breakout must occur above the Alligator's teeth line. If it happens below, the breakout isn’t valid, and the downtrend may still persist. The same logic applies inversely for down fractals in a downtrend scenario.
So, if the last up fractal breakout was higher than the Alligator's teeth and it happened after the last down fractal breakdown below the teeth, the algorithm considers the current trend an uptrend. During this uptrend, long trades can be opened if a signal flashes. If, during the uptrend, price breaks down through a down fractal below the teeth line, the strategy considers the uptrend finished with high probability and closes all current long trades. This combination is used as a short-term trend filter, increasing the probability of opening profitable long trades in addition to the EMA filter described above.
Now let's talk about the Awesome Oscillator's "Saucer" signals. Let's briefly explain what the Awesome Oscillator is. The Awesome Oscillator (AO), created by Bill Williams, is a momentum-based indicator that evaluates market momentum by comparing recent price activity to a broader historical context. It assists traders in identifying potential trend reversals and gauging trend strength.
AO = SMA5(Median Price) − SMA34(Median Price)
where:
Median Price = (High + Low) / 2
SMA5 = 5-period Simple Moving Average of the Median Price
SMA34 = 34-period Simple Moving Average of the Median Price
Now we know what the AO is, but what is the "Saucer" signal? This concept was introduced by Bill Williams; let's briefly explain it and how it's used by this strategy. This type of signal is a combination of the following AO bars: we need 3 bars in a row, where the first bar is higher than the second and the third bar is also higher than the second. All three bars shall be above the zero line of the AO. The price bar which corresponds to the third "saucer" bar is our signal bar. The strategy places a buy stop order one tick above the price bar which corresponds to the signal bar.
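A minimal sketch of the saucer pattern on the AO histogram (assumed names and layout, not the strategy's actual code):

```python
import pandas as pd

def ao_saucer_signal(df: pd.DataFrame) -> pd.Series:
    median = (df["high"] + df["low"]) / 2
    ao = median.rolling(5).mean() - median.rolling(34).mean()

    a, b, c = ao.shift(2), ao.shift(1), ao   # three consecutive AO bars
    # All three bars above zero, middle bar lower than its neighbors;
    # the result is True on the third (signal) bar.
    return (a > 0) & (b > 0) & (c > 0) & (a > b) & (c > b)
```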
After that we can have the following scenarios.
Price hits the order on the next candle; in this case the strategy opens a long at this price.
Price doesn't hit the order price and the next candle sets a lower high. If the current AO bar is increasing, the script moves the buy stop order to the high of this new bar plus one tick. This procedure repeats until price finally hits the buy order or the current AO bar becomes decreasing. In the second case the buy order is cancelled and the strategy waits for the next "Saucer" signal.
If long trades have been opened, the strategy uses all subsequent signals until the total number of trades reaches the maximum of 5. All trades are closed when the trend changes to a downtrend according to the combination of the Alligator and Fractals described above.
Why we use "Saucer" signals? If AO above the zero line there is a high probability that price now is in uptrend if we take into account our two trend filters. When we see the decreasing bars on AO and it's above zero it's likely can be considered as a pullback on the uptrend. When we see the stop of AO decreasing and the first increasing bar has been printed there is a high probability that this local pull back is finished and strategy open long trade in the likely direction of a main trend.
Why strategy use only 10% per signal? Sometimes we can see the false signals which appears on sideways. Not risking that much script use only 10% per signal. If the first long trade has been open and price continue going up and our trend approximation by Alligator and Fractals is uptrend, strategy add another one 10% of capital to every next saucer signal while number of active trades no more than 5. This capital allocation allows to take part in long trades when current uptrend is likely to be strong and use only 10% of capital when there is a high probability of sideways.
Backtest Results
Operating window: Date range of backtests is 2023.01.01 - 2024.11.25. It is chosen to let the strategy close all opened positions.
Commission and Slippage: Includes a standard Binance commission of 0.1% and accounts for possible slippage over 5 ticks.
Initial capital: 10000 USDT
Percent of capital used in every trade: 10%
Maximum Single Position Loss: -5.10%
Maximum Single Profit: +22.80%
Net Profit: +2838.58 USDT (+28.39%)
Total Trades: 107 (42.99% win rate)
Profit Factor: 3.364
Maximum Accumulated Loss: 373.43 USDT (-2.98%)
Average Profit per Trade: 26.53 USDT (+2.40%)
Average Trade Duration: 78 hours
These results are obtained with realistic parameters representing trading conditions observed at major exchanges such as Binance and with realistic trading portfolio usage parameters.
How to Use
Add the script to favorites for easy access.
Apply to the desired timeframe and chart (optimal performance observed on 3h BTC/USDT).
Configure settings using the dropdown choice list in the built-in menu.
Set up alerts to automate strategy positions through web hook with the text: {{strategy.order.alert_message}}
Disclaimer:
Educational and informational tool reflecting Skyrex's commitment to informed trading. Past performance does not guarantee future results. Test strategies in a simulated environment before live implementation.
Non-Psychological Levels🟩 Non-Psychological Levels is a structural analysis tool that segments price action into objective ranges, identifying Broken and Unbroken levels without relying on psychological or time-based assumptions. By emphasizing mechanically derived price behavior, it provides traders with a clear framework for analyzing support and resistance in a consistent and unbiased manner across various market conditions.
This indicator introduces a new approach to understanding market structure by focusing on price movement within defined segments, free from behavioral patterns, round numbers, or specific time intervals. While the indicator is time-agnostic in design, it works within the natural time progression of the chart, ensuring that segmentation aligns with the inherent structure of price movement. Broken levels, where price has breached a structural boundary, and Unbroken levels, which remain intact, are visualized with horizontal lines. These structural zones are complemented by dynamically boxed segments that contextualize both historical and ongoing price behavior.
By offering an objective perspective, the Non-Psychological Levels indicator complements psychology-based tools, helping traders explore market dynamics from multiple angles. When structural levels align with psychological zones, they reinforce critical price areas; when they differ, they provide opportunities to analyze price behavior from an alternative lens. This indicator is designed as both an educational framework and a practical tool, encouraging a deeper understanding of structural price behavior in technical analysis.
⭕ THEORY AND CONCEPT ⭕
The Non-Psychological Levels indicator is grounded in the principle of analyzing price behavior without reliance on psychological assumptions or time-based factors. Its primary purpose is to provide a structural framework for identifying support and resistance levels by focusing solely on price movement within mechanically defined segments. By removing external influences such as sentiment, time intervals, or market sessions, the indicator offers an unbiased lens through which traders can observe price dynamics.
Non-psychology, as defined here, refers to an approach that excludes behavioral and emotional patterns—like fear, greed, or herd mentality—from price analysis. Traditional tools often depend on these patterns to identify zones such as pivots or Fibonacci retracements, but these methods can be inconsistent in volatile markets. In contrast, the Non-Psychological Levels indicator focuses entirely on what price is doing, free from assumptions about trader behavior or external time constraints.
The indicator’s time-agnostic and mechanically driven design segments price action into consistent ranges, highlighting "Broken" levels (where price breaches structural boundaries) and "Unbroken" levels (where price holds). These structural zones remain unaffected by subjective or external influences, ensuring clarity and consistency across different markets and timeframes. By doing so, the indicator reveals a pure view of price structure, independent of psychological biases.
Importantly, the Non-Psychological Levels indicator is not intended to replace psychology-based tools but to complement them. When its structural levels align with psychological zones like round numbers or session highs/lows, the significance of these areas is reinforced. Conversely, when the levels differ, the contrast provides traders with alternative insights into market dynamics. This dual perspective—blending mechanical objectivity with behavioral analysis—enhances the depth and flexibility of market evaluation.
The following principles outline the theoretical foundation of the indicator and its unique contribution to structural price analysis:
Time-Agnostic Design : The indicator avoids reliance on time-based factors like daily opens, session intervals, or specific events. Instead, it segments price action using bar indexes, ensuring that structural levels are identified independently of external time variables. While the x-axis of a chart inherently represents time, this indicator abstracts away its influence, allowing traders to focus purely on price movement without the bias of temporal context.
Mechanical and Neutral Framework : Every calculation within the indicator is predetermined by a set of mechanical rules, ensuring no subjective input or interpretation affects the results. This objectivity guarantees that levels are derived solely from observed price behavior, providing a reliable framework that traders can trust to remain consistent across different assets, timeframes, and market conditions.
Broken and Unbroken Levels : Broken levels represent zones where price has breached a structural boundary, while Unbroken levels highlight areas where price has consistently respected its range. This distinction provides a clear and systematic method for identifying key support and resistance levels, offering insights into where future price interactions are most likely to occur.
Neutral Price Behavior : By dividing price action into equal segments, the indicator removes the influence of external factors like trader sentiment or psychological expectations. Each segment independently determines significant levels based purely on price action, enabling a structural view of the market that abstracts away behavioral or emotional biases.
Complement to Psychological Tools : While the indicator itself avoids behavioral assumptions, its levels can align with psychological zones like round numbers, pivots, or Fibonacci levels. When these structural and psychological levels overlap, it reinforces the importance of key areas, while divergences offer opportunities to examine price behavior from a new perspective.
Educational Value : The indicator encourages traders to explore the contrast between structural and psychological analysis. By introducing a framework that isolates price behavior from external influences, it challenges traditional methods of technical analysis, fostering deeper insights into market structure and behavior.
🔍 UNDERSTANDING STRUCTURAL LEVELS 🔍
The Non-Psychological Levels indicator offers a straightforward yet powerful way to understand market structure by segmenting price action into mechanically defined ranges. This segmentation highlights two key elements: "Broken" levels, where price has breached structural boundaries, and "Unbroken" levels, which remain intact and respected by price action. Together, these components create a framework for identifying potential areas of support and resistance.
Broken Levels : These are structural boundaries that price has surpassed, indicating areas where previous support or resistance failed. Broken levels often signal transitions in price behavior, such as shifts in momentum or the start of trending movements. They provide insight into zones where price has already tested and moved beyond.
Unbroken Levels : These levels remain intact within a given price segment, marking areas where price has consistently respected boundaries. Unbroken levels are particularly useful for identifying potential reversal points or zones of continued support or resistance. Their persistence across price action often makes them reliable indicators of market structure.
The visual segmentation of price action into distinct ranges allows traders to observe how price transitions between structural zones. For example:
- Clusters of Unbroken levels near the current price may suggest strong support or resistance, offering areas of interest for reversals or breakouts.
- Gaps between Unbroken levels highlight areas of price inefficiency or low interaction, which may become significant if revisited.
By focusing solely on structural price behavior, the Non-Psychological Levels indicator enables traders to analyze price independently of time or psychological factors. This makes it a valuable tool for understanding price dynamics objectively, whether used on its own or alongside other indicators.
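To make the segmentation mechanics concrete, here is a minimal Pine Script v5 sketch of the general idea, not the published code. It assumes a fixed segment length (segLen, a hypothetical stand-in for the Sensitivity setting), stores each completed segment's high as a level, and counts a level as Broken once a later close exceeds it:

//@version=5
indicator("Segment levels sketch", overlay = true)

int segLen = input.int(50, "Bars per segment")   // hypothetical Sensitivity analogue
var float[] levels = array.new_float()           // highs of completed segments
var float segHigh = na

segHigh := bar_index % segLen == 0 ? high : math.max(segHigh, high)
if bar_index % segLen == segLen - 1              // segment closes: store its high as a level
    array.push(levels, segHigh)

// The most recent stored level counts as Broken once price closes above it
bool brokeLevel = array.size(levels) > 0 and close > array.get(levels, array.size(levels) - 1)
plotchar(brokeLevel, "Level broken", "▲", location.belowbar, color.orange)

The published indicator also tracks lows, projects levels forward and backward, and restricts calculations to the visible range; the sketch only illustrates the core mechanic of equal segments and breach detection.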
🛠️ SETTINGS 🛠️
The Non-Psychological Levels indicator offers various customizable settings to help users tailor its visualization to their specific trading style and market conditions. These settings allow adjustments to sensitivity, level projection, and the source of price calculations (e.g., wicks or closing prices). Below, we outline each setting and its impact on the chart, along with examples to illustrate their functionality.
Custom Settings
Sensitivity : This setting adjusts the balance between detailed and broader structural levels by controlling the number of segments. Lower values result in more segments, revealing finer price levels, while higher values consolidate price action into fewer, broader segments that highlight major price movements.
Source : Allows the user to choose between 'Wick' or 'Close' for detecting levels. Selecting 'Wick' emphasizes the absolute highs and lows of price action, while 'Close' focuses on closing prices within each segment.
Level Labels : Configures the visual representation of price levels, allowing users to toggle between price values, symbols (▲ ▼), or disabling labels altogether. This setting ensures clarity in how Broken and Unbroken levels are displayed on the chart.
Unbroken Levels : - - - Users can customize the colors and label styles for Unbroken levels, which highlight areas where price has respected structural boundaries.
Broken Levels : -|- Similar to Unbroken levels, users can specify the visual appearance of Broken levels, including color customization for Broken highs and lows. These settings help distinguish areas where price has breached a structural boundary.
Projection Options : This setting allows users to control how broken and unbroken levels are visually extended on the chart. The Future option projects lines forward to the right of the current price, showing potential future relevance of levels. The All option extends lines both forward and backward, providing a comprehensive view of how levels align with historical and potential future price action. The None option disables projections, keeping the chart focused solely on current segment levels without any extensions.
Segments : Includes options for customizing the segment visualization:
- Live Segment : Toggles the display of a highlighted box representing the current developing segment, helping users focus on ongoing price action.
- Boxes : Allows users to display filled boxes around each segment for additional visual emphasis.
- Segment Colors : Users can define separate colors for support (lower) and resistance (upper) segments, making it easier to interpret directional trends.
- Boundaries : Enables or disables vertical lines to mark segment boundaries, providing a clearer view of structural divisions.
Repaint : This setting allows users to enable or disable triangle labels within the live segment. When enabled, the triangles dynamically update to reflect real-time price behavior during the live bar but will repaint until the bar is fully confirmed. Disabling this option prevents the triangles from appearing during the live bar, reducing potential confusion as they may otherwise flash on and off during price updates. This setting ensures users can choose their preferred visualization while maintaining clarity in real-time analysis.
Color Settings : Offers extensive customization for all visual elements, including Broken and Unbroken levels, segment boundaries, and live segments. These settings ensure the indicator can adapt to individual preferences for chart readability.
🖼️ CHART EXAMPLES 🖼️
The following chart examples illustrate different configurations and features of the Non-Psychological Levels indicator. These examples highlight how the indicator’s settings influence the visualization of structural price behavior, helping traders understand its functionality in various scenarios.
Broken and Unbroken Levels : Orange prices are Broken Highs. Blue prices are Broken Lows. Green and Red are Unbroken.
Boundaries : Enable Boundaries to visualize segments.
High Sensitivity Setting : A high sensitivity setting produces fewer segments and levels, emphasizing broader price ranges and major structural zones. This configuration is better suited for higher timeframes or identifying overarching trends.
Low Sensitivity Setting : A low sensitivity setting results in a greater number of segments and levels, offering a granular view of price structure. This configuration is ideal for analyzing detailed price movements on lower timeframes.
Live Segment with Triangles Enabled : This example shows the live segment box with triangle labels enabled. These triangles update dynamically during the live bar but may repaint until the bar is confirmed, helping traders observe real-time price behavior.
Broken and Unbroken Levels : This example highlights Broken levels (where price has breached structural boundaries and are drawn through subsequent price action) and Unbroken levels (where price has respected structural boundaries). These distinctions visually identify areas of potential support and resistance.
Broken and Unbroken Levels with Projection: All : This example demonstrates the "Project All" feature, where broken and unbroken levels are extended both forward and backward on the chart. This visualization highlights historical and potential future support and resistance zones, helping traders better understand how price interacts with these structural levels over time.
Segment Boxes with Boundaries : Filled boxes around individual segments visually distinguish each price interval, offering clarity in observing structural price transitions.
📊 SUMMARY 📊
The Non-Psychological Levels indicator provides a unique framework for analyzing structural price behavior through the identification of Broken and Unbroken levels. These levels act as a mechanical representation of support and resistance, independent of psychological biases or time-based factors. By focusing purely on price movement within defined segments, the indicator offers a neutral and consistent approach to understanding market dynamics.
This method complements traditional tools by providing an unbiased perspective. When structural levels align with psychological zones—such as round numbers or session-based highs and lows—they reinforce the significance of these areas as key price zones. When they diverge, the indicator introduces an alternative view, prompting further exploration of price behavior. This dual perspective enhances the depth of analysis by combining the mechanical and behavioral aspects of price action.
The Non-Psychological Levels indicator is not designed to generate trading signals or predict future price movements but serves as a visual and educational tool. Its adaptability across all markets and timeframes allows traders to integrate it into their broader strategies. By highlighting structural price dynamics, the indicator offers a fresh perspective on market analysis while remaining compatible with other technical tools.
⚙️ COMPATIBILITY AND LIMITATIONS ⚙️
Asset Compatibility :
The Non-Psychological Levels indicator is compatible with all asset classes, including cryptocurrencies, forex, stocks, and commodities. It can be applied to any chart or timeframe, making it a flexible tool for structural price analysis. Users should adjust the Sensitivity setting to ensure the segmentation aligns with the price behavior of the specific asset being analyzed. For instance, higher sensitivity values are more suitable for assets with large price ranges, while lower values work well for assets with tighter ranges.
Visual Range Dependency :
The indicator is optimized to perform calculations only within the visible range of the chart. This is a significant advantage, as it prevents unnecessary calculations and maintains efficient performance. However, because of this dependency, levels may appear to "recalculate" when the chart is zoomed in or out quickly or shifted abruptly. While this does not affect the integrity of the levels, it may cause a temporary lag as the indicator adjusts to the new visual range.
Persistence of Levels Beyond Visibility :
Even if levels are not visible on the chart due to zoom or scroll settings, they still exist in the background and are recalculated when revisited. This ensures that the structural price analysis remains consistent, regardless of the chart view.
Box Limitations in Pine Script :
The indicator is subject to Pine Script's inherent limitation of 500 boxes. This means that no more than 500 segments or level boxes can be drawn on the chart simultaneously. For most configurations, this limitation is mitigated by focusing on the visual range, but users employing very low sensitivity settings may exceed the limit. In such cases, only the most recent 500 boxes will be displayed, potentially omitting earlier segments.
Lag with Low Sensitivity Settings :
When sensitivity is set to a low value, the indicator creates many more segments, resulting in finer granularity and a higher number of boxes. While this provides detailed structural levels, it may increase the likelihood of exceeding Pine Script’s 500-box limit or cause a temporary lag when rendering a dense set of boxes over a wide visual range. Users should adjust sensitivity to balance detail with performance, especially on assets with high volatility or broad price ranges.
Live Segment Caution :
The live segment box updates in real time to reflect price movements as the segment is still developing. Since the segment high and segment low are not yet finalized, users should interpret this feature as a dynamic visualization of current price behavior rather than a definitive structural analysis. This ensures clarity during ongoing price action while maintaining the integrity of the indicator's framework.
Cross-Market Versatility :
The indicator’s time-agnostic and mechanical design ensures that it functions identically across all markets and timeframes. However, users should consider the unique characteristics of different markets when interpreting the results, as certain assets (e.g., highly volatile cryptocurrencies) may require sensitivity adjustments for optimal segmentation.
These considerations ensure that the Non-Psychological Levels indicator remains robust and versatile while highlighting some inherent limitations of Pine Script and real-time recalculations. Users can mitigate these constraints by carefully adjusting sensitivity and understanding how the visual range dependency affects performance.
⚠️ DISCLAIMER ⚠️
The Non-Psychological Levels indicator is a visual analysis tool and is not designed as a predictive or trading signal indicator. Its primary purpose is to highlight structural price levels, providing an objective framework for understanding support and resistance within mechanically segmented price action.
The indicator operates within the visible range of the chart to ensure efficiency and adaptiveness, but this recalculation should not be interpreted as a forecast of future price behavior. While the structural levels may align with significant price zones in hindsight, they are purely a reflection of observed price dynamics and should not be used as standalone trading signals.
This indicator is intended as an educational and visual aid to complement other analysis methods. Users are encouraged to integrate it into a broader trading strategy and make adjustments to the settings based on their individual needs and market conditions.
🧠 BEYOND THE CODE 🧠
The Non-Psychological Levels indicator, like other xxattaxx indicators, is designed with education and community collaboration in mind. Its open-source nature encourages exploration, experimentation, and the development of new approaches to price analysis. By focusing on structural price behavior rather than psychological or time-based factors, this indicator introduces a fresh perspective for users to study.
Beyond its visual utility, the indicator serves as an educational framework for understanding the concept of non-psychological analysis. It offers traders an opportunity to explore price dynamics in a purely mechanical way, challenging conventional methods and fostering deeper insights into structural behavior. This approach is especially valuable for those interested in exploring new concepts or seeking alternative perspectives on market analysis.
Your comments, suggestions, and discussions are invaluable in shaping the future of this project. We actively encourage your feedback and contributions, which will directly help us refine and improve the Non-Psychological Levels indicator. We look forward to seeing the creative ways in which you use and enhance this tool. MVS
Fractal Trend Detector [Skyrexio]
Introduction
Fractal Trend Detector leverages the combination of Williams Fractals and the Alligator indicator to help traders assess, with high probability, whether the current trend is bullish or bearish. It visualizes a potential uptrend by coloring bars green and a potential downtrend by coloring them red. The indicator also offers two additional visualizations: green and red zones marking strong uptrends and downtrends, and a white line showing the trend invalidation level (more information in the "Methodology and its justification" section).
Features
Optional strong up/downtrend visualization: a setting lets the user show or hide the green and red zones that mark strong uptrends and downtrends.
Optional trend invalidation level visualization: a setting lets the user show or hide the white line showing the current trend invalidation price.
Alerts: users can set up alerts to receive notifications when an uptrend/downtrend starts or when a strong uptrend/downtrend begins.
Methodology and its justification
In this script we apply the concept of trend given by Bill Williams in his book "Trading Chaos". This approach leverages the Alligator and Fractals in conjunction. Let's briefly explain these two components.
The Williams Alligator, created by Bill Williams, is a technical analysis tool used to identify trends and potential market reversals. It consists of three moving averages, called the jaw, teeth, and lips, which represent different time periods:
Jaw (Blue Line): The slowest line, showing a 13-period smoothed moving average shifted 8 bars forward.
Teeth (Red Line): The medium-speed line, an 8-period smoothed moving average shifted 5 bars forward.
Lips (Green Line): The fastest line, a 5-period smoothed moving average shifted 3 bars forward.
When the lines are spread apart and aligned, the "alligator" is "awake," indicating a strong trend. When the lines intertwine, the "alligator" is "sleeping," signaling a non-trending or range-bound market. This indicator helps traders identify when to enter or avoid trades.
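For reference, here is a minimal Pine Script v5 sketch of these three lines under the standard construction (SMMA, equivalent to ta.rma, applied to the median price, with the forward shift applied through the plot offset); this is an illustration, not the indicator's own code:

//@version=5
indicator("Alligator sketch", overlay = true)
// SMMA is equivalent to ta.rma (Wilder's smoothing); hl2 is the classic median-price source
jaw   = ta.rma(hl2, 13)
teeth = ta.rma(hl2, 8)
lips  = ta.rma(hl2, 5)
plot(jaw,   "Jaw",   color = color.blue,  offset = 8)   // shifted 8 bars forward
plot(teeth, "Teeth", color = color.red,   offset = 5)   // shifted 5 bars forward
plot(lips,  "Lips",  color = color.green, offset = 3)   // shifted 3 bars forward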
Williams Fractals, introduced by Bill Williams, are a technical analysis tool used to identify potential reversal points on a price chart. A fractal is a series of at least five consecutive bars in which the middle bar has the highest high (for an up fractal) or the lowest low (for a down fractal) compared to the two bars on either side.
Key Points:
Up fractal: Formed when the middle bar shows a higher high than the two preceding and two following bars, signaling a potential turning point downward.
Down fractal: Formed when the middle bar has a lower low than the two surrounding bars, indicating a potential upward reversal.
Fractals are often used with other indicators to confirm trend direction or reversal, helping traders make more informed trading decisions.
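In Pine Script v5 the same five-bar pattern can be detected with the built-in pivot functions; a minimal sketch follows (note that a fractal is only confirmed two bars after its middle bar):

//@version=5
indicator("Williams fractal sketch", overlay = true)
// Middle bar with two lower highs on each side = up fractal (and inversely for down fractals)
float upFractal   = ta.pivothigh(high, 2, 2)
float downFractal = ta.pivotlow(low, 2, 2)
plotshape(not na(upFractal),   "Up fractal",   shape.triangledown, location.abovebar, color.red,   offset = -2)
plotshape(not na(downFractal), "Down fractal", shape.triangleup,   location.belowbar, color.green, offset = -2)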
How can we use this combination? Let's walk through the uptrend example. An upside breakout of an up fractal can be interpreted as a bullish sign: there is a high probability that an uptrend has just started. The reasoning is as follows: the creation of an up fractal marks a potential change in the market's behavior. Many traders decided to sell, which created the pullback with a fractal at the top. But if price is able to reach the fractal's top and break it, this is a high-probability sign that the market has "changed its opinion" and a bullish trend has begun. The moment of the breakout is the potential switch to an uptrend. One more important point: the breakout must happen above the Alligator's teeth line. If it does not, the crossover doesn't count and the downtrend potentially remains. The inverted logic is true for down fractals and downtrends.
With this methodology we obtain high-probability up- and downtrend changes, but we can extend it further. If the indicator has established the current trend as an uptrend and the Alligator's lines are stacked in order (lips above teeth, teeth above jaw), the script counts it as a strong uptrend and starts printing the green zone, the area between the lips and the jaw. This zone can be used as high-probability support for the current bull market. The inverted logic applies to bearish trends and red zones: if the lips are below the teeth and the teeth are below the jaw, the indicator interprets it as a strong downtrend.
The indicator also has a trend invalidation line (the white line). If the current bar is green and the script interprets market conditions as an uptrend, you will see the invalidation line below the current price. This is the price level that must be crossed for the algorithm to flip the uptrend to a downtrend. The level is recalculated on every candle. The inverted logic is valid for downtrends.
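For illustration, here is a hedged Pine Script v5 sketch of this breakout rule. It captures the logic described above rather than the indicator's actual code; the teeth line is approximated by reading ta.rma five bars back to mimic the forward shift, and all variable names are hypothetical:

//@version=5
indicator("Fractal breakout sketch", overlay = true)
teeth = ta.rma(hl2, 8)[5]                    // forward-shifted teeth, read at the current bar
var float lastUpFractal   = na
var float lastDownFractal = na
if not na(ta.pivothigh(high, 2, 2))
    lastUpFractal := high[2]                 // most recent confirmed up fractal high
if not na(ta.pivotlow(low, 2, 2))
    lastDownFractal := low[2]                // most recent confirmed down fractal low
// An uptrend starts only if price breaks an up fractal that sits above the teeth line
bool uptrendStart   = ta.crossover(close, lastUpFractal)    and lastUpFractal   > teeth
bool downtrendStart = ta.crossunder(close, lastDownFractal) and lastDownFractal < teeth
barcolor(uptrendStart ? color.green : downtrendStart ? color.red : na)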
How to use indicator
Apply it to the desired chart and time frame. It works on every time frame.
Adjust the settings, enabling or disabling the visualization of the strong up/downtrend zones and the trend invalidation line via the "Show Strong Bullish/Bearish Trends" and "Show Trend Invalidation Price" checkboxes. Both are turned on by default.
Analyze the price action. The indicator colors a candle green when an uptrend is the more likely current state, and red when a downtrend is more probable. The green zone between the two lines shows that the current uptrend is likely strong and can be used as high-probability support during an uptrend; the red zone shows a high probability of a strong downtrend and can be used as resistance. The white line marks the level at which the uptrend or downtrend is invalidated according to the indicator's algorithm: if the current bar is green, the invalidation line sits below the current price; if red, above it.
Set up alerts if needed. The indicator has four custom alerts: "Uptrend has been started" (the current bar closed green and the previous one was not green), "Downtrend has been started" (the current bar closed red and the previous one was not red), "Uptrend became strong" (the script started printing the green zone), and "Downtrend became strong" (the script started printing the red zone). A sketch of typical alert wiring follows below.
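For reference, a hedged sketch of how such alerts are typically wired in Pine Script v5. The flags below are hypothetical stand-ins for the indicator's internal trend state, not its actual variables:

//@version=5
indicator("Alert wiring sketch", overlay = true)
// Hypothetical flags approximating the trend state described above
bool isGreen    = close > ta.rma(hl2, 8)[5]
bool isRed      = not isGreen
bool strongUp   = ta.rma(hl2, 5)[3] > ta.rma(hl2, 8)[5] and ta.rma(hl2, 8)[5] > ta.rma(hl2, 13)[8]
bool strongDown = ta.rma(hl2, 5)[3] < ta.rma(hl2, 8)[5] and ta.rma(hl2, 8)[5] < ta.rma(hl2, 13)[8]
alertcondition(isGreen and not isGreen[1],       "Uptrend has been started",   "Uptrend has been started")
alertcondition(isRed and not isRed[1],           "Downtrend has been started", "Downtrend has been started")
alertcondition(strongUp and not strongUp[1],     "Uptrend became strong",      "Uptrend became strong")
alertcondition(strongDown and not strongDown[1], "Downtrend became strong",    "Downtrend became strong")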
Disclaimer:
Educational and informational tool reflecting Skyrex's commitment to informed trading. Past performance does not guarantee future results. Test indicators before live implementation.
Moments Functions
This script is a TradingView Pine Script (version 5) for calculating and plotting statistical moments of a financial series. Here's a breakdown of what it does:
Script Overview
Purpose:
The script calculates and visualizes moments such as Mean, Variance, Skewness, and Kurtosis of a price series.
It also provides the option to display log returns and various statistical bands.
Inputs:
Moments Selection: Choose from Mean, Variance, Skewness, or Excess Kurtosis.
Source Settings: Define the lookback period and source data (e.g., closing price or log returns).
Plot Settings: Control visibility and styling of plots, bands, and information panels.
Colors Settings: Customize colors for different plot elements.
Functions (a hedged sketch of these calculations follows this list):
f_va(): Computes sample variance.
f_sd(): Computes sample standard deviation.
f_skew(): Computes sample skewness.
f_kurt(): Computes sample kurtosis.
seskew(): Calculates the standard error of skewness.
sekurt(): Calculates the standard error of kurtosis.
skewcv(): Computes critical values for skewness.
kurtcv(): Computes critical values for kurtosis.
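As a reference for how such sample moments are commonly computed, here is a hedged Pine Script v5 sketch. The function names mirror the list above, but the bodies are standard textbook formulas and not necessarily the script's exact code:

//@version=5
indicator("Moments sketch")

f_va(float src, int len) =>                      // sample variance
    float m = ta.sma(src, len)
    float s = 0.0
    for i = 0 to len - 1
        s += math.pow(src[i] - m, 2)
    s / (len - 1)

f_sd(float src, int len) =>                      // sample standard deviation
    math.sqrt(f_va(src, len))

f_skew(float src, int len) =>                    // adjusted Fisher-Pearson skewness
    float m  = ta.sma(src, len)
    float sd = f_sd(src, len)
    float s  = 0.0
    for i = 0 to len - 1
        s += math.pow((src[i] - m) / sd, 3)
    s * len / ((len - 1) * (len - 2))

f_kurt(float src, int len) =>                    // sample excess kurtosis
    float m  = ta.sma(src, len)
    float sd = f_sd(src, len)
    float s  = 0.0
    for i = 0 to len - 1
        s += math.pow((src[i] - m) / sd, 4)
    s * len * (len + 1) / ((len - 1) * (len - 2) * (len - 3)) - 3.0 * math.pow(len - 1, 2) / ((len - 2) * (len - 3))

seskew(int n) =>                                 // standard error of skewness
    math.sqrt(6.0 * n * (n - 1) / ((n - 2) * (n + 1) * (n + 3)))

sekurt(int n) =>                                 // standard error of kurtosis
    2.0 * seskew(n) * math.sqrt((math.pow(n, 2) - 1.0) / ((n - 3) * (n + 5)))

plot(f_skew(close, 50), "Skewness (50)")

The critical-value helpers (skewcv(), kurtcv()) would then scale these standard errors by the z-scores mentioned below.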
Outputs:
Plots:
Moment values (Mean, Variance, Skewness, Kurtosis).
Log Returns (if selected).
Standard Deviation Bands (if selected).
Critical Values for Skewness and Kurtosis (if selected).
Information Panel: Displays current statistical values and their significance.
Customization:
Users can customize appearance and behavior of the script through various input options, including colors, line thickness, and background settings.
Key Variables and Constants
Constants:
zscoreS and zscoreL: Z-scores for confidence intervals based on sample size.
skewrv and kurtrv: Reference values for skewness and excess kurtosis.
Sample Functions:
f_va() and f_sd(): Custom functions to calculate sample variance and standard deviation.
f_skew() and f_kurt(): Custom functions to calculate skewness and kurtosis.
Critical Values:
Functions skewcv() and kurtcv() calculate critical values used to assess statistical significance of skewness and kurtosis.
Plotting
Plot Types:
Mean, variance, skewness, and excess kurtosis are plotted based on user selection.
Log returns are plotted if enabled.
Standard deviation bands and critical values are plotted if enabled.
Labels:
Information panel labels display mean, variance/standard deviation, skewness, and kurtosis values along with their significance.
Example Usage
To use this script:
Add it to a TradingView chart.
Adjust inputs to configure which statistical moments to display, the source data, and the appearance of the plots.
Review the plotted data and labels to analyze the statistical properties of the selected price series.
This script is useful for traders and analysts looking to perform advanced statistical analysis on financial data directly within TradingView.
When comparing two stock prices over a period of time, the statistical moments—mean, variance, skewness, and kurtosis—can provide a deep insight into the behavior of the stock prices and their distributions. Here’s what each moment signifies in this context:
1. Mean
Definition: The mean (or average) is the sum of the stock prices over the period divided by the number of data points. It represents the central value of the price series.
Interpretation: When comparing two stocks, the mean tells you the average price level of each stock over the period. A higher mean indicates that, on average, the stock price is higher compared to another stock with a lower mean.
Comparison Insight: If Stock A has a higher mean price than Stock B, it implies that Stock A's prices are generally higher than those of Stock B over the given period.
2. Variance
Definition: Variance measures the dispersion or spread of the stock prices around the mean. It is the average of the squared differences from the mean.
Interpretation: A higher variance indicates that the stock prices fluctuate more widely from the mean, implying greater volatility. Conversely, a lower variance indicates more stable and predictable prices.
Comparison Insight: Comparing the variances of two stocks helps in assessing which stock has more price volatility. If Stock A has a higher variance than Stock B, it means Stock A's prices are more volatile and less predictable compared to Stock B.
3. Skewness
Definition: Skewness measures the asymmetry of the distribution of stock prices around the mean. It can be positive, negative, or zero:
Positive Skewness: The distribution has a long right tail, with more frequent small returns and fewer large positive returns.
Negative Skewness: The distribution has a long left tail, with more frequent small returns and fewer large negative returns.
Zero Skewness: The distribution is symmetric around the mean.
Interpretation: Skewness tells you about the direction of outliers in the stock price distribution. Positive skewness means a higher probability of large positive returns, while negative skewness means a higher probability of large negative returns.
Comparison Insight: By comparing skewness, you can understand the nature of extreme returns for two stocks. For example, if Stock A has positive skewness and Stock B has negative skewness, Stock A might have more frequent large gains, whereas Stock B might have more frequent large losses.
4. Kurtosis
Definition: Kurtosis measures the "tailedness" of the distribution of stock prices. It indicates how much of the distribution is in the tails versus the center. High kurtosis means more outliers (extreme returns), while low kurtosis means fewer outliers.
Interpretation:
High Kurtosis: Indicates a higher likelihood of extreme price movements (both high and low) compared to a normal distribution.
Low Kurtosis: Indicates that extreme price movements are less common.
Comparison Insight: Comparing kurtosis between two stocks shows which stock has more extreme returns. If Stock A has higher kurtosis than Stock B, it means Stock A has more frequent extreme price changes, suggesting more risk or opportunities for large gains or losses.
Summary
Mean: Compares average price levels.
Variance: Compares price volatility.
Skewness: Compares the asymmetry of price movements.
Kurtosis: Compares the likelihood of extreme price changes.
By analyzing these statistical moments, you can gain a comprehensive view of how the two stocks behave relative to each other, which can inform investment decisions based on risk, return expectations, and the nature of price movements.
Simple Neural Network Transformed RSI [QuantraSystems]
Simple Neural Network Transformed RSI
Introduction
The Simple Neural Network Transformed RSI (ɴɴᴛ ʀsɪ) stands out as a formidable tool for traders who specialize in lower timeframe trading.
It is an innovative enhancement of the traditional RSI readings with simple neural network smoothing techniques.
This unique blend results in fairly accurate signals, tailored for swift market movements. The ɴɴᴛ ʀsɪ is particularly resistant to the usual market noise found in lower timeframes, ensuring a clearer view of short-term trends.
Furthermore, its diverse range of visualization options adds versatility, making it a valuable tool for traders seeking to capitalize on short-duration market dynamics.
Legend
In the image you can see the BTCUSD 1D chart with the ɴɴᴛ ʀsɪ in Trend Following mode displaying the current trend, visualized with the bar coloring.
Its overbought and oversold zones start at 50% and end at 100% of the selected standard deviation (default σ = 2); readings there mark extremely rare situations that can lead to softening momentum in the trend or even a mean-reversion setup.
Here you can also see the original Indicator line and the Heikin Ashi transformed Indicator bars - more on that now.
Notes
Quantra Standard Value Contents:
To draw out all the information from the indicator calculation we have added a Heikin-Ashi (HA) Candle Visualization.
This HA transformation smooths out the indicator values and gives a more informative look into the momentum and trend of the indicator itself.
This allows early entries and exits by observing the HA transformed Indicator values.
To diversify, different visualization options are available, either a classic line, HA transformed or Hybrid, which contains both of the previous.
To make Quantra's Indicators as useful and versatile as possible, we have created options to change the bar coloring, and thus the signal derived from the indicator, based on different modes.
Option to choose different Modes:
Trend Following (Indicator above mid line counts as uptrend, below is downtrend)
Extremities (Everything going beyond the Deviation Bands in a Mean Reversion manner is highlighted)
Candles (Color of HA candles as barcolor)
Reversion (HA ONLY) (Reversion Signals via the triangles if HA candles change state outside of the Deviation Bands)
- Reversion Signals are indicated by the triangles in the Heikin-Ashi or Hybrid visualization when the HA candles revert from downward to upward (or the other way around) OUTSIDE of the SD Bands. Depending on the indicator, they signal OB/OS areas and can either work as high-probability entries and exits for Mean Reversion trades or indicate momentum slowdowns and potential ranges. Please use another indicator to confirm this.
Case Study
To effectively utilize the NNT-RSI, traders should know their style and familiarize themselves with the available options.
As stated above, you have multiple modes available that you can combine as you need and see fit.
In the given examples, each mode was mostly used in isolation.
Trend Following:
Purely relied on State Change - Midline crossover
Could be combined with Momentum or Reversion analysis for better entries/exits.
Extremities:
The ideal entry/exit is in the correspondingly colored OS/OB area; the Reversion signal marks the latest possible entry/exit.
HA Candles:
Specifically applicable for strong trends. Powerful and fast tool.
Can whipsaw if used as the sole condition.
Reversions:
Shows the single entry and exit bars which have a positive expected value outcome.
Can also be used as confirmation or as last signal.
Please note that we always advise to find more confluence by additional indicators.
Traders are encouraged to test and determine the most suitable settings for their specific trading strategies and timeframes.
In the showcased trades the default settings were used.
Methodology
The Simple Neural Network Transformed RSI uses a simple neural network logic to process RSI values, smoothing them for more accurate trend analysis.
This is achieved through a linear combination of RSI values over a specified input length, weighted evenly to produce a neural network output.
// Simple neural network logic (linear combination with weighted aggregation).
// Note: the [i] history indexes below are reconstructed; TradingView's description
// formatting strips square brackets from published code.
var float[] inputs = array.new_float(nnLength, na)
for i = 0 to nnLength - 1
    array.set(inputs, i, rsi1[i])               // fill the window with the last nnLength RSI values
nnOutput = 0.0
for i = 0 to nnLength - 1
    nnOutput := nnOutput + array.get(inputs, i) * (1.0 / nnLength)   // equal weights that sum to 1
nnOutput
This output is then compared against a standard or dynamic mean line to generate trend following signals.
Mean = ta.sma(nnOutput, sdLook)      // dynamic mean of the NN output
cross = useMean ? 50 : Mean          // mid line used for trend-following crossovers
The indicator also incorporates Heikin Ashi candlestick calculations to provide additional insights into market dynamics, such as trend strength and potential reversals.
// Calculate Heikin Ashi representation (reproduced as published; the [1]
// history indexes are reconstructed because the description formatting strips brackets)
ha = ha(
    na(nnOutput[1]) ? nnOutput : nnOutput[1],   // open: previous value
    math.max(nnOutput, nnOutput[1]),            // high
    math.min(nnOutput, nnOutput[1]),            // low
    nnOutput)                                   // close
Standard deviation bands are used to create dynamic overbought and oversold zones, further enhancing the tool's analytical capabilities.
// Calculate Dynamic OB/OS Zones. The return tuple and the destructuring names
// below are reconstructed (brackets are stripped by the description formatting);
// the variable names are assumptions.
stdv_bands(_src, _length, _mult) =>
    float basis = ta.sma(_src, _length)
    float dev = _mult * ta.stdev(_src, _length)
    [basis + dev, basis - dev]
[innerUp, innerDn] = stdv_bands(nnOutput, sdLook, sdMult / 2)
[outerUp, outerDn] = stdv_bands(nnOutput, sdLook, sdMult)
The standard deviation bands take user-defined parameters, ideally a sigma between 2 and 3, to help the indicator detect extremely improbable conditions and derive an inversely probable signal to forward to the user.
The parameter settings and also the visualizations allow for ample customizations by the trader.
For questions or recommendations, please feel free to seek contact in the comments.
Goertzel Cycle Composite Wave [Loxx]
As the financial markets become increasingly complex and data-driven, traders and analysts must leverage powerful tools to gain insights and make informed decisions. One such tool is the Goertzel Cycle Composite Wave indicator, a sophisticated technical analysis indicator that helps identify cyclical patterns in financial data. This powerful tool is capable of detecting cyclical patterns in financial data, helping traders to make better predictions and optimize their trading strategies. With its unique combination of mathematical algorithms and advanced charting capabilities, this indicator has the potential to revolutionize the way we approach financial modeling and trading.
*** To decrease the load time of this indicator, only XX bars back will render to the chart. You can control this value with the setting "Number of Bars to Render". This has nothing to do with repainting or with the indicator being endpointed. ***
█ Brief Overview of the Goertzel Cycle Composite Wave
The Goertzel Cycle Composite Wave is a sophisticated technical analysis tool that utilizes the Goertzel algorithm to analyze and visualize cyclical components within a financial time series. By identifying these cycles and their characteristics, the indicator aims to provide valuable insights into the market's underlying price movements, which could potentially be used for making informed trading decisions.
The Goertzel Cycle Composite Wave is considered a non-repainting and endpointed indicator. This means that once a value has been calculated for a specific bar, that value will not change in subsequent bars, and the indicator is designed to have a clear start and end point. This is an important characteristic for indicators used in technical analysis, as it allows traders to make informed decisions based on historical data without the risk of hindsight bias or future changes in the indicator's values. This means traders can use this indicator for trading purposes.
The repainting version of this indicator with forecasting, cycle selection/elimination options, and data output table can be found here:
Goertzel Browser
The primary purpose of this indicator is to:
1. Detect and analyze the dominant cycles present in the price data.
2. Reconstruct and visualize the composite wave based on the detected cycles.
To achieve this, the indicator performs several tasks:
1. Detrending the price data: The indicator preprocesses the price data using various detrending techniques, such as Hodrick-Prescott filters, zero-lag moving averages, and linear regression, to remove the underlying trend and focus on the cyclical components.
2. Applying the Goertzel algorithm: The indicator applies the Goertzel algorithm to the detrended price data, identifying the dominant cycles and their characteristics, such as amplitude, phase, and cycle strength.
3. Constructing the composite wave: The indicator reconstructs the composite wave by combining the detected cycles, either by using a user-defined list of cycles or by selecting the top N cycles based on their amplitude or cycle strength.
4. Visualizing the composite wave: The indicator plots the composite wave, using solid lines for the cycles. The color of the lines indicates whether the wave is increasing or decreasing.
This indicator is a powerful tool that employs the Goertzel algorithm to analyze and visualize the cyclical components within a financial time series. By providing insights into the underlying price movements, the indicator aims to assist traders in making more informed decisions.
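As an illustration of step 3, the composite wave is essentially a sum of sinusoids. Here is a hedged Pine Script v5 sketch, assuming the Goertzel stage has already filled amplitude, phase, and period arrays (all names are hypothetical):

//@version=5
indicator("Composite wave sketch")
// Hypothetical arrays assumed to be filled by the Goertzel stage:
// amp (amplitudes), phase (radians), per (periods in bars)
composite(float[] amp, float[] phase, float[] per, int topN, int t) =>
    float total = 0.0
    for i = 0 to topN - 1
        total += array.get(amp, i) * math.cos(2.0 * math.pi * t / array.get(per, i) + array.get(phase, i))
    total

Whether cosine or sine terms are summed, and whether raw amplitudes or cycle strengths weight the selection, is controlled by the useCosine and UseCycleStrength inputs described below.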
█ What is the Goertzel Algorithm?
The Goertzel algorithm, named after Gerald Goertzel, is a digital signal processing technique that is used to efficiently compute individual terms of the Discrete Fourier Transform (DFT). It was first introduced in 1958, and since then, it has found various applications in the fields of engineering, mathematics, and physics.
The Goertzel algorithm is primarily used to detect specific frequency components within a digital signal, making it particularly useful in applications where only a few frequency components are of interest. The algorithm is computationally efficient, as it requires fewer calculations than the Fast Fourier Transform (FFT) when detecting a small number of frequency components. This efficiency makes the Goertzel algorithm a popular choice in applications such as:
1. Telecommunications: The Goertzel algorithm is used for decoding Dual-Tone Multi-Frequency (DTMF) signals, which are the tones generated when pressing buttons on a telephone keypad. By identifying specific frequency components, the algorithm can accurately determine which button has been pressed.
2. Audio processing: The algorithm can be used to detect specific pitches or harmonics in an audio signal, making it useful in applications like pitch detection and tuning musical instruments.
3. Vibration analysis: In the field of mechanical engineering, the Goertzel algorithm can be applied to analyze vibrations in rotating machinery, helping to identify faulty components or signs of wear.
4. Power system analysis: The algorithm can be used to measure harmonic content in power systems, allowing engineers to assess power quality and detect potential issues.
The Goertzel algorithm is used in these applications because it offers several advantages over other methods, such as the FFT:
1. Computational efficiency: The Goertzel algorithm requires fewer calculations when detecting a small number of frequency components, making it more computationally efficient than the FFT in these cases.
2. Real-time analysis: The algorithm can be implemented in a streaming fashion, allowing for real-time analysis of signals, which is crucial in applications like telecommunications and audio processing.
3. Memory efficiency: The Goertzel algorithm requires less memory than the FFT, as it only computes the frequency components of interest.
4. Precision: The algorithm is less susceptible to numerical errors compared to the FFT, ensuring more accurate results in applications where precision is essential.
The Goertzel algorithm is an efficient digital signal processing technique that is primarily used to detect specific frequency components within a signal. Its computational efficiency, real-time capabilities, and precision make it an attractive choice for various applications, including telecommunications, audio processing, vibration analysis, and power system analysis. The algorithm has been widely adopted since its introduction in 1958 and continues to be an essential tool in the fields of engineering, mathematics, and physics.
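To make the method concrete, here is a minimal Pine Script v5 sketch of the classic Goertzel recurrence for a single frequency bin; this is the standard textbook form, not the indicator's internal code:

//@version=5
indicator("Goertzel sketch")
// Squared magnitude of the k-th DFT bin of the samples in x
goertzel_power(float[] x, int k) =>
    int   n     = array.size(x)
    float coeff = 2.0 * math.cos(2.0 * math.pi * k / n)
    float s1 = 0.0
    float s2 = 0.0
    for i = 0 to n - 1
        float s0 = array.get(x, i) + coeff * s1 - s2
        s2 := s1
        s1 := s0
    s1 * s1 + s2 * s2 - coeff * s1 * s2

Because only the bins of interest are evaluated, the cost scales with the number of candidate cycle lengths rather than with a full FFT over all frequencies.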
█ Goertzel Algorithm in Quantitative Finance: In-Depth Analysis and Applications
The Goertzel algorithm, initially designed for signal processing in telecommunications, has gained significant traction in the financial industry due to its efficient frequency detection capabilities. In quantitative finance, the Goertzel algorithm has been utilized for uncovering hidden market cycles, developing data-driven trading strategies, and optimizing risk management. This section delves deeper into the applications of the Goertzel algorithm in finance, particularly within the context of quantitative trading and analysis.
Unveiling Hidden Market Cycles:
Market cycles are prevalent in financial markets and arise from various factors, such as economic conditions, investor psychology, and market participant behavior. The Goertzel algorithm's ability to detect and isolate specific frequencies in price data helps traders and analysts identify hidden market cycles that may otherwise go unnoticed. By examining the amplitude, phase, and periodicity of each cycle, traders can better understand the underlying market structure and dynamics, enabling them to develop more informed and effective trading strategies.
Developing Quantitative Trading Strategies:
The Goertzel algorithm's versatility allows traders to incorporate its insights into a wide range of trading strategies. By identifying the dominant market cycles in a financial instrument's price data, traders can create data-driven strategies that capitalize on the cyclical nature of markets.
For instance, a trader may develop a mean-reversion strategy that takes advantage of the identified cycles. By establishing positions when the price deviates from the predicted cycle, the trader can profit from the subsequent reversion to the cycle's mean. Similarly, a momentum-based strategy could be designed to exploit the persistence of a dominant cycle by entering positions that align with the cycle's direction.
Enhancing Risk Management:
The Goertzel algorithm plays a vital role in risk management for quantitative strategies. By analyzing the cyclical components of a financial instrument's price data, traders can gain insights into the potential risks associated with their trading strategies.
By monitoring the amplitude and phase of dominant cycles, a trader can detect changes in market dynamics that may pose risks to their positions. For example, a sudden increase in amplitude may indicate heightened volatility, prompting the trader to adjust position sizing or employ hedging techniques to protect their portfolio. Additionally, changes in phase alignment could signal a potential shift in market sentiment, necessitating adjustments to the trading strategy.
Expanding Quantitative Toolkits:
Traders can augment the Goertzel algorithm's insights by combining it with other quantitative techniques, creating a more comprehensive and sophisticated analysis framework. For example, machine learning algorithms, such as neural networks or support vector machines, could be trained on features extracted from the Goertzel algorithm to predict future price movements more accurately.
Furthermore, the Goertzel algorithm can be integrated with other technical analysis tools, such as moving averages or oscillators, to enhance their effectiveness. By applying these tools to the identified cycles, traders can generate more robust and reliable trading signals.
The Goertzel algorithm offers invaluable benefits to quantitative finance practitioners by uncovering hidden market cycles, aiding in the development of data-driven trading strategies, and improving risk management. By leveraging the insights provided by the Goertzel algorithm and integrating it with other quantitative techniques, traders can gain a deeper understanding of market dynamics and devise more effective trading strategies.
█ Indicator Inputs
src: This is the source data for the analysis, typically the closing price of the financial instrument.
detrendornot: This input determines the method used for detrending the source data. Detrending is the process of removing the underlying trend from the data to focus on the cyclical components.
The available options are:
hpsmthdt: Detrend using Hodrick-Prescott filter centered moving average.
zlagsmthdt: Detrend using zero-lag moving average centered moving average.
logZlagRegression: Detrend using logarithmic zero-lag linear regression.
hpsmth: Detrend using Hodrick-Prescott filter.
zlagsmth: Detrend using zero-lag moving average.
DT_HPper1 and DT_HPper2: These inputs define the period range for the Hodrick-Prescott filter centered moving average when detrendornot is set to hpsmthdt.
DT_ZLper1 and DT_ZLper2: These inputs define the period range for the zero-lag moving average centered moving average when detrendornot is set to zlagsmthdt.
DT_RegZLsmoothPer: This input defines the period for the zero-lag moving average used in logarithmic zero-lag linear regression when detrendornot is set to logZlagRegression.
HPsmoothPer: This input defines the period for the Hodrick-Prescott filter when detrendornot is set to hpsmth.
ZLMAsmoothPer: This input defines the period for the zero-lag moving average when detrendornot is set to zlagsmth.
MaxPer: This input sets the maximum period for the Goertzel algorithm to search for cycles.
squaredAmp: This boolean input determines whether the amplitude should be squared in the Goertzel algorithm.
useAddition: This boolean input determines whether the Goertzel algorithm should use addition for combining the cycles.
useCosine: This boolean input determines whether the Goertzel algorithm should use cosine waves instead of sine waves.
UseCycleStrength: This boolean input determines whether the Goertzel algorithm should compute the cycle strength, which is a normalized measure of the cycle's amplitude.
WindowSizePast: This input defines the window size for the composite wave.
FilterBartels: This boolean input determines whether Bartel's test should be applied to filter out non-significant cycles.
BartNoCycles: This input sets the number of cycles to be used in Bartel's test.
BartSmoothPer: This input sets the period for the moving average used in Bartel's test.
BartSigLimit: This input sets the significance limit for Bartel's test, below which cycles are considered insignificant.
SortBartels: This boolean input determines whether the cycles should be sorted by their Bartel's test results.
StartAtCycle: This input determines the starting index for selecting the top N cycles when UseCycleList is set to false. This allows you to skip a certain number of cycles from the top before selecting the desired number of cycles.
UseTopCycles: This input sets the number of top cycles to use for constructing the composite wave when UseCycleList is set to false. The cycles are ranked based on their amplitudes or cycle strengths, depending on the UseCycleStrength input.
SubtractNoise: This boolean input determines whether to subtract the noise (remaining cycles) from the composite wave. If set to true, the composite wave will only include the top N cycles specified by UseTopCycles.
█ Exploring Auxiliary Functions
The following functions demonstrate advanced techniques for analyzing financial markets, including zero-lag moving averages, Bartels probability, detrending, and Hodrick-Prescott filtering. This section examines each function in detail, explaining their purpose, methodology, and applications in finance. We will examine how each function contributes to the overall performance and effectiveness of the indicator and how they work together to create a powerful analytical tool.
Zero-Lag Moving Average:
The zero-lag moving average function is designed to minimize the lag typically associated with moving averages. This is achieved through a two-step weighted linear regression process that emphasizes more recent data points. The function calculates a linearly weighted moving average (LWMA) on the input data and then applies another LWMA on the result. By doing this, the function creates a moving average that closely follows the price action, reducing the lag and improving the responsiveness of the indicator.
The zero-lag moving average function is used in the indicator to provide a responsive, low-lag smoothing of the input data. This function helps reduce the noise and fluctuations in the data, making it easier to identify and analyze underlying trends and patterns. By minimizing the lag associated with traditional moving averages, this function allows the indicator to react more quickly to changes in market conditions, providing timely signals and improving the overall effectiveness of the indicator.
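A hedged sketch of the double-LWMA construction described above, using the built-in ta.wma; the indicator's own implementation may differ in detail:

//@version=5
indicator("Zero-lag sketch", overlay = true)
// Two-pass linearly weighted moving average, as described above;
// both passes weight recent bars most heavily
zlagma(float src, int len) =>
    ta.wma(ta.wma(src, len), len)
plot(zlagma(close, 21), "Double LWMA")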
Bartels Probability:
The Bartels probability function calculates the probability of a given cycle being significant in a time series. It uses a mathematical test called the Bartels test to assess the significance of cycles detected in the data. The function calculates coefficients for each detected cycle and computes an average amplitude and an expected amplitude. By comparing these values, the Bartels probability is derived, indicating the likelihood of a cycle's significance. This information can help in identifying and analyzing dominant cycles in financial markets.
The Bartels probability function is incorporated into the indicator to assess the significance of detected cycles in the input data. By calculating the Bartels probability for each cycle, the indicator can prioritize the most significant cycles and focus on the market dynamics that are most relevant to the current trading environment. This function enhances the indicator's ability to identify dominant market cycles, improving its predictive power and aiding in the development of effective trading strategies.
Detrend Logarithmic Zero-Lag Regression:
The detrend logarithmic zero-lag regression function is used for detrending data while minimizing lag. It combines a zero-lag moving average with a linear regression detrending method. The function first calculates the zero-lag moving average of the logarithm of input data and then applies a linear regression to remove the trend. By detrending the data, the function isolates the cyclical components, making it easier to analyze and interpret the underlying market dynamics.
The detrend logarithmic zero-lag regression function is used in the indicator to isolate the cyclical components of the input data. By detrending the data, the function enables the indicator to focus on the cyclical movements in the market, making it easier to analyze and interpret market dynamics. This function is essential for identifying cyclical patterns and understanding the interactions between different market cycles, which can inform trading decisions and enhance overall market understanding.
Bartels Cycle Significance Test:
The Bartels cycle significance test is a function that combines the Bartels probability function and the detrend logarithmic zero-lag regression function to assess the significance of detected cycles. The function calculates the Bartels probability for each cycle and stores the results in an array. By analyzing the probability values, traders and analysts can identify the most significant cycles in the data, which can be used to develop trading strategies and improve market understanding.
The Bartels cycle significance test function is integrated into the indicator to provide a comprehensive analysis of the significance of detected cycles. By combining the Bartels probability function and the detrend logarithmic zero-lag regression function, this test evaluates the significance of each cycle and stores the results in an array. The indicator can then use this information to prioritize the most significant cycles and focus on the most relevant market dynamics. This function enhances the indicator's ability to identify and analyze dominant market cycles, providing valuable insights for trading and market analysis.
Hodrick-Prescott Filter:
The Hodrick-Prescott filter is a popular technique used to separate the trend and cyclical components of a time series. The function applies a smoothing parameter to the input data and calculates a smoothed series using a two-sided filter. This smoothed series represents the trend component, which can be subtracted from the original data to obtain the cyclical component. The Hodrick-Prescott filter is commonly used in economics and finance to analyze economic data and financial market trends.
The Hodrick-Prescott filter is incorporated into the indicator to separate the trend and cyclical components of the input data. By applying the filter to the data, the indicator can isolate the trend component, which can be used to analyze long-term market trends and inform trading decisions. Additionally, the cyclical component can be used to identify shorter-term market dynamics and provide insights into potential trading opportunities. The inclusion of the Hodrick-Prescott filter adds another layer of analysis to the indicator, making it more versatile and comprehensive.
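Formally, for observations yₜ the Hodrick-Prescott filter chooses the trend τₜ that minimizes

Σₜ (yₜ - τₜ)² + λ · Σₜ ((τₜ₊₁ - τₜ) - (τₜ - τₜ₋₁))²

where λ is the smoothing parameter: larger values penalize changes in the trend's slope more heavily, producing a smoother trend and leaving more variation in the cyclical component yₜ - τₜ.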
Detrending Options: Detrend Centered Moving Average:
The detrend centered moving average function provides different detrending methods, including the Hodrick-Prescott filter and the zero-lag moving average, based on the selected detrending method. The function calculates two sets of smoothed values using the chosen method and subtracts one set from the other to obtain a detrended series. By offering multiple detrending options, this function allows traders and analysts to select the most appropriate method for their specific needs and preferences.
The detrend centered moving average function is integrated into the indicator to provide users with multiple detrending options, including the Hodrick-Prescott filter and the zero-lag moving average. By offering multiple detrending methods, the indicator allows users to customize the analysis to their specific needs and preferences, enhancing the indicator's overall utility and adaptability. This function ensures that the indicator can cater to a wide range of trading styles and objectives, making it a valuable tool for a diverse group of market participants.
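A sketch of the underlying idea, with the smoothing method passed in as a callable (the fast/slow pairing is an assumption about the general pattern, not the script's exact code):

def detrend_centered_ma(x, per1, per2, smoother):
    # Smooth the same series at two different periods and subtract:
    # what survives is the oscillation between the two timescales.
    return smoother(x, per1) - smoother(x, per2)

Called as, for example, detrend_centered_ma(prices, 10, 20, zero_lag_wma), it reuses the zero-lag smoother from the earlier sketch; the script's actual period pairing and averaging details may differ.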
The auxiliary functions discussed in this section demonstrate the power and versatility of mathematical techniques in analyzing financial markets. By understanding and implementing these functions, traders and analysts can gain valuable insights into market dynamics, improve their trading strategies, and make more informed decisions. The combination of zero-lag moving averages, Bartels probability, detrending methods, and the Hodrick-Prescott filter provides a comprehensive toolkit for analyzing and interpreting financial data, and their integration into a single indicator creates a powerful and versatile analytical tool for studying financial markets.
█ In-Depth Analysis of the Goertzel Cycle Composite Wave Code
The Goertzel Cycle Composite Wave code is an implementation of the Goertzel Algorithm, an efficient technique to perform spectral analysis on a signal. The code is designed to detect and analyze dominant cycles within a given financial market data set. This section will provide an extremely detailed explanation of the code, its structure, functions, and intended purpose.
Function signature and input parameters:
The Goertzel Cycle Composite Wave function accepts numerous input parameters for customization, including source data (src), the current bar (forBar), sample size (samplesize), period (per), squared amplitude flag (squaredAmp), addition flag (useAddition), cosine flag (useCosine), cycle strength flag (UseCycleStrength), past window size (WindowSizePast), Bartels filter flag (FilterBartels), Bartels-related parameters (BartNoCycles, BartSmoothPer, BartSigLimit), sorting flag (SortBartels), and output buffers (goeWorkPast, cyclebuffer, amplitudebuffer, phasebuffer, cycleBartelsBuffer).
Initializing variables and arrays:
The code initializes several float arrays (goeWork1, goeWork2, goeWork3, goeWork4) with the same length as twice the period (2 * per). These arrays store intermediate results during the execution of the algorithm.
Preprocessing input data:
The input data (src) undergoes preprocessing to remove linear trends. This step enhances the algorithm's ability to focus on cyclical components in the data. The linear trend is calculated by finding the slope between the first and last values of the input data within the sample.
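In Python terms, that preprocessing step might look like this (a sketch of the first-to-last-point detrend described above, not the script's exact code):

import numpy as np

def remove_endpoint_trend(window):
    # Fit a line through the first and last samples of the window and
    # subtract it, removing linear drift before the spectral pass.
    w = np.asarray(window, dtype=float)
    slope = (w[-1] - w[0]) / (len(w) - 1)
    return w - (w[0] + slope * np.arange(len(w)))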
Iterative calculation of Goertzel coefficients:
The core of the Goertzel Cycle Composite Wave algorithm lies in the iterative calculation of Goertzel coefficients for each frequency bin. These coefficients represent the spectral content of the input data at different frequencies. The code iterates through the range of frequencies, calculating the Goertzel coefficients using a nested loop structure.
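The recurrence at the heart of this step is compact. The sketch below scans candidate periods and recovers amplitude and phase per bin (illustrative Python; the script's scaling, indexing, and phase reference may differ, and it uses math.atan where this sketch uses the quadrant-safe arctan2):

import numpy as np

def goertzel_spectrum(x, max_period):
    # One Goertzel pass per candidate period: s[n] = x[n] + 2cos(w)*s[n-1] - s[n-2].
    x = np.asarray(x, dtype=float)
    n = len(x)
    results = []
    for period in range(2, max_period + 1):
        w = 2.0 * np.pi / period
        coeff = 2.0 * np.cos(w)
        s1 = s2 = 0.0
        for sample in x:
            s1, s2 = sample + coeff * s1 - s2, s1
        real = s1 - s2 * np.cos(w)   # real part of the bin
        imag = s2 * np.sin(w)        # imaginary part of the bin
        amp = 2.0 * np.sqrt(real * real + imag * imag) / n
        results.append((period, amp, np.arctan2(imag, real)))
    return results

For a pure sinusoid whose period divides the window length evenly, amp recovers the sinusoid's amplitude.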
Cycle strength computation:
The code calculates the cycle strength based on the Goertzel coefficients. This is an optional step, controlled by the UseCycleStrength flag. The cycle strength provides information on the relative influence of each cycle on the data per bar, considering both amplitude and cycle length. The algorithm computes the cycle strength either by squaring the amplitude (controlled by squaredAmp flag) or using the actual amplitude values.
Phase calculation:
The Goertzel Cycle Composite Wave code computes the phase of each cycle, which represents the position of the cycle within the input data. The phase is calculated using the arctangent function (math.atan) based on the ratio of the imaginary and real components of the Goertzel coefficients.
Peak detection and cycle extraction:
The algorithm performs peak detection on the computed amplitudes or cycle strengths to identify dominant cycles. It stores the detected cycles in the cyclebuffer array, along with their corresponding amplitudes and phases in the amplitudebuffer and phasebuffer arrays, respectively.
Sorting cycles by amplitude or cycle strength:
The code sorts the detected cycles based on their amplitude or cycle strength in descending order. This allows the algorithm to prioritize cycles with the most significant impact on the input data.
Bartels cycle significance test:
If the FilterBartels flag is set, the code performs a Bartels cycle significance test on the detected cycles. This test determines the statistical significance of each cycle and filters out the insignificant cycles. The significant cycles are stored in the cycleBartelsBuffer array. If the SortBartels flag is set, the code sorts the significant cycles based on their Bartels significance values.
Waveform calculation:
The Goertzel Cycle Composite Wave code calculates the waveform of the significant cycles for the specified time window, which is defined by the WindowSizePast parameter. The algorithm uses either cosine or sine functions (controlled by the useCosine flag) to calculate the waveform for each cycle. The useAddition flag determines whether the waveforms should be added or subtracted.
Storing waveforms in a matrix:
The calculated waveform for each cycle is stored in the goeWorkPast matrix, which holds the waveforms for the specified time window. Each row in the matrix represents a time window position, and each column corresponds to a cycle.
Returning the number of cycles:
The Goertzel Cycle Composite Wave function returns the total number of detected cycles (number_of_cycles) after processing the input data. This information can be used to further analyze the results or to visualize the detected cycles.
The Goertzel Cycle Composite Wave code is a comprehensive implementation of the Goertzel Algorithm, specifically designed for detecting and analyzing dominant cycles within financial market data. The code offers a high level of customization, allowing users to fine-tune the algorithm based on their specific needs. The Goertzel Cycle Composite Wave's combination of preprocessing, iterative calculations, cycle extraction, sorting, significance testing, and waveform calculation makes it a powerful tool for understanding cyclical components in financial data.
█ Generating and Visualizing Composite Waveform
The indicator calculates and visualizes the composite waveform for the specified time window based on the detected cycles. Here's a detailed explanation of this process:
Updating WindowSizePast:
The WindowSizePast value is updated to ensure it is at least twice the MaxPer (maximum period).
Initializing matrices and arrays:
The matrix goeWorkPast is initialized to store the Goertzel results for the specified time window. Multiple arrays are also initialized to store cycle, amplitude, phase, and Bartels information.
Preparing the source data (srcVal) array:
The source data is copied into an array, srcVal, and detrended using one of the selected methods (hpsmthdt, zlagsmthdt, logZlagRegression, hpsmth, or zlagsmth).
Goertzel function call:
The Goertzel function is called to analyze the detrended source data and extract cycle information. The output, number_of_cycles, contains the number of detected cycles.
Initializing arrays for waveforms:
The goertzel array is initialized to store the endpoint Goertzel values used to build the composite waveform.
Calculating composite waveform (goertzel array):
The composite waveform is calculated by summing the selected cycles (either from the user-defined cycle list or the top cycles) and optionally subtracting the noise component.
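A simplified version of this summation, assuming each detected cycle is described by a (period, amplitude, phase) triple like the ones produced in the earlier spectral sketch (the script's phase convention and noise handling may differ):

import numpy as np

def composite_wave(cycles, window_size, use_top=3, subtract_noise=False):
    # cycles: iterable of (period, amplitude, phase) triples.
    t = np.arange(window_size, dtype=float)
    ranked = sorted(cycles, key=lambda c: c[1], reverse=True)

    def one(c):
        period, amp, phase = c
        return amp * np.cos(2.0 * np.pi * t / period + phase)

    wave = sum(one(c) for c in ranked[:use_top])
    if subtract_noise:
        # Treat everything outside the top cycles as noise and remove it.
        wave = wave - sum(one(c) for c in ranked[use_top:])
    return wave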
Drawing composite waveform (pvlines):
The composite waveform is drawn on the chart using solid lines. The color of the lines is determined by the direction of the waveform (green for upward, red for downward).
To summarize, this indicator generates a composite waveform based on the detected cycles in the financial data. It calculates the composite waveforms and visualizes them on the chart using colored lines.
█ Enhancing the Goertzel Algorithm-Based Script for Financial Modeling and Trading
The Goertzel algorithm-based script for detecting dominant cycles in financial data is a powerful tool for financial modeling and trading. It provides valuable insights into the past behavior of these cycles. However, as with any algorithm, there is always room for improvement. This section discusses potential enhancements to the existing script to make it even more robust and versatile for financial modeling, general trading, advanced trading, and high-frequency finance trading.
Enhancements for Financial Modeling
Data preprocessing: One way to improve the script's performance for financial modeling is to introduce more advanced data preprocessing techniques. This could include removing outliers, handling missing data, and normalizing the data to ensure consistent and accurate results.
Additional detrending and smoothing methods: Incorporating more sophisticated detrending and smoothing techniques, such as wavelet transform or empirical mode decomposition, can help improve the script's ability to accurately identify cycles and trends in the data.
Machine learning integration: Integrating machine learning techniques, such as artificial neural networks or support vector machines, can help enhance the script's predictive capabilities, leading to more accurate financial models.
Enhancements for General and Advanced Trading
Customizable indicator integration: Allowing users to integrate their own technical indicators can help improve the script's effectiveness for both general and advanced trading. By enabling the combination of the dominant cycle information with other technical analysis tools, traders can develop more comprehensive trading strategies.
Risk management and position sizing: Incorporating risk management and position sizing functionality into the script can help traders better manage their trades and control potential losses. This can be achieved by calculating the optimal position size based on the user's risk tolerance and account size; a minimal sketch of such sizing follows this list.
Multi-timeframe analysis: Enhancing the script to perform multi-timeframe analysis can provide traders with a more holistic view of market trends and cycles. By identifying dominant cycles on different timeframes, traders can gain insights into the potential confluence of cycles and make better-informed trading decisions.
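As a simple illustration of what the position-sizing piece could look like, here is a fixed-fractional sizing sketch in Python (all names and numbers are hypothetical, not part of the script):

def position_size(account_equity, risk_fraction, entry, stop):
    # Risk a fixed fraction of equity per trade; a wider stop means a smaller position.
    return (account_equity * risk_fraction) / abs(entry - stop)

For example, position_size(10_000, 0.01, 100.0, 97.5) risks 1% ($100) against a $2.50 stop and returns a size of 40 units.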
Enhancements for High-Frequency Finance Trading
Algorithm optimization: To ensure the script's suitability for high-frequency finance trading, optimizing the algorithm for faster execution is crucial. This can be achieved by employing efficient data structures and refining the calculation methods to minimize computational complexity.
Real-time data streaming: Integrating real-time data streaming capabilities into the script can help high-frequency traders react to market changes more quickly. By continuously updating the cycle information based on real-time market data, traders can adapt their strategies accordingly and capitalize on short-term market fluctuations.
Order execution and trade management: To fully leverage the script's capabilities for high-frequency trading, implementing functionality for automated order execution and trade management is essential. This can include features such as stop-loss and take-profit orders, trailing stops, and automated trade exit strategies.
While the existing Goertzel algorithm-based script is a valuable tool for detecting dominant cycles in financial data, there are several potential enhancements that can make it even more powerful for financial modeling, general trading, advanced trading, and high-frequency finance trading. By incorporating these improvements, the script can become a more versatile and effective tool for traders and financial analysts alike.
█ Understanding the Limitations of the Goertzel Algorithm
While the Goertzel algorithm-based script for detecting dominant cycles in financial data provides valuable insights, it is important to be aware of its limitations and drawbacks. Some of the key drawbacks of this indicator are:
Lagging nature:
As with many other technical indicators, the Goertzel algorithm-based script can suffer from lagging effects, meaning that it may not immediately react to real-time market changes. This lag can lead to late entries and exits, potentially resulting in reduced profitability or increased losses.
Parameter sensitivity:
The performance of the script can be sensitive to the chosen parameters, such as the detrending methods, smoothing techniques, and cycle detection settings. Improper parameter selection may lead to inaccurate cycle detection or increased false signals, which can negatively impact trading performance.
Complexity:
The Goertzel algorithm itself is relatively complex, making it difficult for novice traders or those unfamiliar with the concept of cycle analysis to fully understand and effectively utilize the script. This complexity can also make it challenging to optimize the script for specific trading styles or market conditions.
Overfitting risk:
As with any data-driven approach, there is a risk of overfitting when using the Goertzel algorithm-based script. Overfitting occurs when a model becomes too specific to the historical data it was trained on, leading to poor performance on new, unseen data. This can result in misleading signals and reduced trading performance.
Limited applicability:
The Goertzel algorithm-based script may not be suitable for all markets, trading styles, or timeframes. Its effectiveness in detecting cycles may be limited in certain market conditions, such as during periods of extreme volatility or low liquidity.
While the Goertzel algorithm-based script offers valuable insights into dominant cycles in financial data, it is essential to consider its drawbacks and limitations when incorporating it into a trading strategy. Traders should always use the script in conjunction with other technical and fundamental analysis tools, as well as proper risk management, to make well-informed trading decisions.
█ Interpreting Results
The Goertzel Cycle Composite Wave indicator can be interpreted by analyzing the plotted composite wave line, which represents the composite wave built from the detected cycles in the price data.
The composite wave line displays a solid line, with green indicating a bullish trend and red indicating a bearish trend.
Interpreting the Goertzel Cycle Composite Wave indicator involves identifying the direction of the composite wave line and matching it with the corresponding bullish or bearish color.
█ Conclusion
The Goertzel Cycle Composite Wave indicator is a powerful tool for identifying and analyzing cyclical patterns in financial markets. Its ability to detect multiple cycles of varying frequencies and strengths makes it a valuable addition to any trader's technical analysis toolkit. However, it is important to keep in mind that the Goertzel Cycle Composite Wave indicator should be used in conjunction with other technical analysis tools and fundamental analysis to achieve the best results. With continued refinement and development, the Goertzel Cycle Composite Wave indicator has the potential to become a highly effective tool for financial modeling, general trading, advanced trading, and high-frequency finance trading. Its accuracy and versatility make it a promising candidate for further research and development.
█ Footnotes
What is the Bartels Test for Cycle Significance?
The Bartels Cycle Significance Test is a statistical method that determines whether the peaks and troughs of a time series are statistically significant. The test is named after its inventor, the geophysicist Julius Bartels, who developed it in the 1930s.
The Bartels test is designed to analyze the cyclical components of a time series, which can help traders and analysts identify trends and cycles in financial markets. The test calculates a Bartels statistic, which measures the degree of non-randomness or autocorrelation in the time series.
The Bartels statistic is calculated by first splitting the time series into two halves and calculating the range of the peaks and troughs in each half. The test then compares these ranges using a t-test, which measures the significance of the difference between the two ranges.
If the Bartels statistic is greater than a critical value, it indicates that the peaks and troughs in the time series are non-random and that there is a significant cyclical component to the data. Conversely, if the Bartels statistic is less than the critical value, it suggests that the peaks and troughs are random and that there is no significant cyclical component.
The Bartels Cycle Significance Test is particularly useful in financial analysis because it can help traders and analysts identify significant cycles in asset prices, which can in turn inform investment decisions. However, it is important to note that the test is not perfect and can produce false signals in certain situations, particularly in noisy or volatile markets. Therefore, it is always recommended to use the test in conjunction with other technical and fundamental indicators to confirm trends and cycles.
Deep-dive into the Hodrick-Prescott Filter
The Hodrick-Prescott (HP) filter is a statistical tool used in economics and finance to separate a time series into two components: a trend component and a cyclical component. It is a powerful tool for identifying long-term trends in economic and financial data and is widely used by economists, central banks, and financial institutions around the world.
The HP filter was introduced by economists Robert Hodrick and Edward Prescott in a 1981 working paper (formally published in 1997). It is a simple, single-parameter filter that separates a time series into a trend component and a cyclical component. The trend component represents the long-term behavior of the data, while the cyclical component captures the shorter-term fluctuations around the trend.
The HP filter works by minimizing the following objective function:
Minimize: (Sum of Squared Deviations) + λ (Sum of Squared Second Differences)
Where:
1. The first term represents the deviation of the data from the trend.
2. The second term represents the smoothness of the trend.
3. λ is a smoothing parameter that determines the degree of smoothness of the trend.
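Written out explicitly, with y_t denoting the observed series and τ_t the trend, the filter solves:

minimize over τ:  Σ_{t=1..T} (y_t − τ_t)² + λ · Σ_{t=2..T−1} [ (τ_{t+1} − τ_t) − (τ_t − τ_{t−1}) ]²

The first sum penalizes the trend's distance from the data; the second penalizes changes in the trend's slope.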
The smoothing parameter λ is chosen according to the frequency of the data; the standard conventions are 100 for annual data, 1,600 for quarterly data, and 14,400 for monthly data. Higher values of λ produce a smoother trend, while lower values allow the trend to track the data more closely.
The HP filter has several advantages over other smoothing techniques. It is a non-parametric method, meaning that it does not make any assumptions about the underlying distribution of the data. It also allows for easy comparison of trends across different time series and can be used with data of any frequency.
However, the HP filter also has some limitations. It assumes that the trend is a smooth function, which may not be the case in some situations. It can also be sensitive to changes in the smoothing parameter λ, which may result in different trends for the same data. Additionally, the filter may produce unrealistic trends for very short time series.
Despite these limitations, the HP filter remains a valuable tool for analyzing economic and financial data. It is widely used by central banks and financial institutions to monitor long-term trends in the economy, and it can be used to identify turning points in the business cycle. The filter can also be used to analyze asset prices, exchange rates, and other financial variables.
The Hodrick-Prescott filter is a powerful tool for analyzing economic and financial data. It separates a time series into a trend component and a cyclical component, allowing for easy identification of long-term trends and turning points in the business cycle. While it has some limitations, it remains a valuable tool for economists, central banks, and financial institutions around the world.
Goertzel Browser [Loxx]
As the financial markets become increasingly complex and data-driven, traders and analysts must leverage powerful tools to gain insights and make informed decisions. One such tool is the Goertzel Browser, a sophisticated technical analysis indicator that detects cyclical patterns in financial data, helping traders make better predictions and optimize their trading strategies. With its unique combination of mathematical algorithms and advanced charting capabilities, this indicator has the potential to revolutionize the way we approach financial modeling and trading.
█ Brief Overview of the Goertzel Browser
The Goertzel Browser is a sophisticated technical analysis tool that utilizes the Goertzel algorithm to analyze and visualize cyclical components within a financial time series. By identifying these cycles and their characteristics, the indicator aims to provide valuable insights into the market's underlying price movements, which could potentially be used for making informed trading decisions.
The primary purpose of this indicator is to:
1. Detect and analyze the dominant cycles present in the price data.
2. Reconstruct and visualize the composite wave based on the detected cycles.
3. Project the composite wave into the future, providing a potential roadmap for upcoming price movements.
To achieve this, the indicator performs several tasks:
1. Detrending the price data: The indicator preprocesses the price data using various detrending techniques, such as Hodrick-Prescott filters, zero-lag moving averages, and linear regression, to remove the underlying trend and focus on the cyclical components.
2. Applying the Goertzel algorithm: The indicator applies the Goertzel algorithm to the detrended price data, identifying the dominant cycles and their characteristics, such as amplitude, phase, and cycle strength.
3. Constructing the composite wave: The indicator reconstructs the composite wave by combining the detected cycles, either by using a user-defined list of cycles or by selecting the top N cycles based on their amplitude or cycle strength.
4. Visualizing the composite wave: The indicator plots the composite wave, using solid lines for the past and dotted lines for the future projections. The color of the lines indicates whether the wave is increasing or decreasing.
5. Displaying cycle information: The indicator provides a table that displays detailed information about the detected cycles, including their rank, period, Bartels test results, amplitude, and phase.
This indicator is a powerful tool that employs the Goertzel algorithm to analyze and visualize the cyclical components within a financial time series. By providing insights into the underlying price movements and their potential future trajectory, the indicator aims to assist traders in making more informed decisions.
█ What is the Goertzel Algorithm?
The Goertzel algorithm, named after Gerald Goertzel, is a digital signal processing technique that is used to efficiently compute individual terms of the Discrete Fourier Transform (DFT). It was first introduced in 1958, and since then, it has found various applications in the fields of engineering, mathematics, and physics.
The Goertzel algorithm is primarily used to detect specific frequency components within a digital signal, making it particularly useful in applications where only a few frequency components are of interest. The algorithm is computationally efficient, as it requires fewer calculations than the Fast Fourier Transform (FFT) when detecting a small number of frequency components. This efficiency makes the Goertzel algorithm a popular choice in applications such as:
1. Telecommunications: The Goertzel algorithm is used for decoding Dual-Tone Multi-Frequency (DTMF) signals, which are the tones generated when pressing buttons on a telephone keypad. By identifying specific frequency components, the algorithm can accurately determine which button has been pressed.
2. Audio processing: The algorithm can be used to detect specific pitches or harmonics in an audio signal, making it useful in applications like pitch detection and tuning musical instruments.
3. Vibration analysis: In the field of mechanical engineering, the Goertzel algorithm can be applied to analyze vibrations in rotating machinery, helping to identify faulty components or signs of wear.
4. Power system analysis: The algorithm can be used to measure harmonic content in power systems, allowing engineers to assess power quality and detect potential issues.
The Goertzel algorithm is used in these applications because it offers several advantages over other methods, such as the FFT:
1. Computational efficiency: The Goertzel algorithm requires fewer calculations when detecting a small number of frequency components, making it more computationally efficient than the FFT in these cases.
2. Real-time analysis: The algorithm can be implemented in a streaming fashion, allowing for real-time analysis of signals, which is crucial in applications like telecommunications and audio processing.
3. Memory efficiency: The Goertzel algorithm requires less memory than the FFT, as it only computes the frequency components of interest.
4. Precision: The algorithm is less susceptible to numerical errors compared to the FFT, ensuring more accurate results in applications where precision is essential.
The Goertzel algorithm is an efficient digital signal processing technique that is primarily used to detect specific frequency components within a signal. Its computational efficiency, real-time capabilities, and precision make it an attractive choice for various applications, including telecommunications, audio processing, vibration analysis, and power system analysis. The algorithm has been widely adopted since its introduction in 1958 and continues to be an essential tool in the fields of engineering, mathematics, and physics.
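To ground the idea, here is the classic single-bin form of the algorithm in Python, as it might be used for tone detection (an illustration of the general technique, unrelated to the indicator's Pine code):

import numpy as np

def goertzel_power(samples, sample_rate, target_freq):
    # Squared magnitude of the DFT bin nearest target_freq, computed with a
    # two-state recurrence instead of a full FFT.
    x = np.asarray(samples, dtype=float)
    n = len(x)
    k = int(round(n * target_freq / sample_rate))   # nearest bin index
    coeff = 2.0 * np.cos(2.0 * np.pi * k / n)
    s1 = s2 = 0.0
    for sample in x:
        s1, s2 = sample + coeff * s1 - s2, s1
    return s1 * s1 + s2 * s2 - coeff * s1 * s2

In a DTMF decoder, for example, goertzel_power(frame, 8000, 770) spikes when the 770 Hz row tone is present in 8 kHz keypad audio.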
█ Goertzel Algorithm in Quantitative Finance: In-Depth Analysis and Applications
The Goertzel algorithm, initially designed for signal processing in telecommunications, has gained significant traction in the financial industry due to its efficient frequency detection capabilities. In quantitative finance, the Goertzel algorithm has been utilized for uncovering hidden market cycles, developing data-driven trading strategies, and optimizing risk management. This section delves deeper into the applications of the Goertzel algorithm in finance, particularly within the context of quantitative trading and analysis.
Unveiling Hidden Market Cycles:
Market cycles are prevalent in financial markets and arise from various factors, such as economic conditions, investor psychology, and market participant behavior. The Goertzel algorithm's ability to detect and isolate specific frequencies in price data helps traders and analysts identify hidden market cycles that may otherwise go unnoticed. By examining the amplitude, phase, and periodicity of each cycle, traders can better understand the underlying market structure and dynamics, enabling them to develop more informed and effective trading strategies.
Developing Quantitative Trading Strategies:
The Goertzel algorithm's versatility allows traders to incorporate its insights into a wide range of trading strategies. By identifying the dominant market cycles in a financial instrument's price data, traders can create data-driven strategies that capitalize on the cyclical nature of markets.
For instance, a trader may develop a mean-reversion strategy that takes advantage of the identified cycles. By establishing positions when the price deviates from the predicted cycle, the trader can profit from the subsequent reversion to the cycle's mean. Similarly, a momentum-based strategy could be designed to exploit the persistence of a dominant cycle by entering positions that align with the cycle's direction.
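A bare-bones version of such a mean-reversion rule might look like this (purely illustrative; the threshold and inputs are hypothetical):

def mean_reversion_signal(price, cycle_estimate, threshold):
    # Long when price is stretched below the cycle's predicted path,
    # short when stretched above, flat otherwise.
    deviation = price - cycle_estimate
    if deviation < -threshold:
        return 1    # expect reversion upward
    if deviation > threshold:
        return -1   # expect reversion downward
    return 0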
Enhancing Risk Management:
The Goertzel algorithm plays a vital role in risk management for quantitative strategies. By analyzing the cyclical components of a financial instrument's price data, traders can gain insights into the potential risks associated with their trading strategies.
By monitoring the amplitude and phase of dominant cycles, a trader can detect changes in market dynamics that may pose risks to their positions. For example, a sudden increase in amplitude may indicate heightened volatility, prompting the trader to adjust position sizing or employ hedging techniques to protect their portfolio. Additionally, changes in phase alignment could signal a potential shift in market sentiment, necessitating adjustments to the trading strategy.
Expanding Quantitative Toolkits:
Traders can augment the Goertzel algorithm's insights by combining it with other quantitative techniques, creating a more comprehensive and sophisticated analysis framework. For example, machine learning algorithms, such as neural networks or support vector machines, could be trained on features extracted from the Goertzel algorithm to predict future price movements more accurately.
Furthermore, the Goertzel algorithm can be integrated with other technical analysis tools, such as moving averages or oscillators, to enhance their effectiveness. By applying these tools to the identified cycles, traders can generate more robust and reliable trading signals.
The Goertzel algorithm offers invaluable benefits to quantitative finance practitioners by uncovering hidden market cycles, aiding in the development of data-driven trading strategies, and improving risk management. By leveraging the insights provided by the Goertzel algorithm and integrating it with other quantitative techniques, traders can gain a deeper understanding of market dynamics and devise more effective trading strategies.
█ Indicator Inputs
src: This is the source data for the analysis, typically the closing price of the financial instrument.
detrendornot: This input determines the method used for detrending the source data. Detrending is the process of removing the underlying trend from the data to focus on the cyclical components.
The available options are:
hpsmthdt: Detrend using Hodrick-Prescott filter centered moving average.
zlagsmthdt: Detrend using zero-lag moving average centered moving average.
logZlagRegression: Detrend using logarithmic zero-lag linear regression.
hpsmth: Detrend using Hodrick-Prescott filter.
zlagsmth: Detrend using zero-lag moving average.
DT_HPper1 and DT_HPper2: These inputs define the period range for the Hodrick-Prescott filter centered moving average when detrendornot is set to hpsmthdt.
DT_ZLper1 and DT_ZLper2: These inputs define the period range for the zero-lag moving average centered moving average when detrendornot is set to zlagsmthdt.
DT_RegZLsmoothPer: This input defines the period for the zero-lag moving average used in logarithmic zero-lag linear regression when detrendornot is set to logZlagRegression.
HPsmoothPer: This input defines the period for the Hodrick-Prescott filter when detrendornot is set to hpsmth.
ZLMAsmoothPer: This input defines the period for the zero-lag moving average when detrendornot is set to zlagsmth.
MaxPer: This input sets the maximum period for the Goertzel algorithm to search for cycles.
squaredAmp: This boolean input determines whether the amplitude should be squared in the Goertzel algorithm.
useAddition: This boolean input determines whether the Goertzel algorithm should use addition for combining the cycles.
useCosine: This boolean input determines whether the Goertzel algorithm should use cosine waves instead of sine waves.
UseCycleStrength: This boolean input determines whether the Goertzel algorithm should compute the cycle strength, which is a normalized measure of the cycle's amplitude.
WindowSizePast and WindowSizeFuture: These inputs define the window size for past and future projections of the composite wave.
FilterBartels: This boolean input determines whether the Bartels test should be applied to filter out non-significant cycles.
BartNoCycles: This input sets the number of cycles to be used in the Bartels test.
BartSmoothPer: This input sets the period for the moving average used in the Bartels test.
BartSigLimit: This input sets the significance limit for the Bartels test, below which cycles are considered insignificant.
SortBartels: This boolean input determines whether the cycles should be sorted by their Bartels test results.
UseCycleList: This boolean input determines whether a user-defined list of cycles should be used for constructing the composite wave. If set to false, the top N cycles will be used.
Cycle1, Cycle2, Cycle3, Cycle4, and Cycle5: These inputs define the user-defined list of cycles when 'UseCycleList' is set to true. If using a user-defined list, each of these inputs represents the period of a specific cycle to include in the composite wave.
StartAtCycle: This input determines the starting index for selecting the top N cycles when UseCycleList is set to false. This allows you to skip a certain number of cycles from the top before selecting the desired number of cycles.
UseTopCycles: This input sets the number of top cycles to use for constructing the composite wave when UseCycleList is set to false. The cycles are ranked based on their amplitudes or cycle strengths, depending on the UseCycleStrength input.
SubtractNoise: This boolean input determines whether to subtract the noise (remaining cycles) from the composite wave. If set to true, the composite wave will only include the top N cycles specified by UseTopCycles.
█ Exploring Auxiliary Functions
The following functions demonstrate advanced techniques for analyzing financial markets, including zero-lag moving averages, Bartels probability, detrending, and Hodrick-Prescott filtering. This section examines each function in detail, explaining their purpose, methodology, and applications in finance. We will examine how each function contributes to the overall performance and effectiveness of the indicator and how they work together to create a powerful analytical tool.
Zero-Lag Moving Average:
The zero-lag moving average function is designed to minimize the lag typically associated with moving averages. This is achieved through a two-step weighted linear regression process that emphasizes more recent data points. The function calculates a linearly weighted moving average (LWMA) on the input data and then applies another LWMA on the result. By doing this, the function creates a moving average that closely follows the price action, reducing the lag and improving the responsiveness of the indicator.
The zero-lag moving average function is used in the indicator to provide a responsive, low-lag smoothing of the input data. This function helps reduce the noise and fluctuations in the data, making it easier to identify and analyze underlying trends and patterns. By minimizing the lag associated with traditional moving averages, this function allows the indicator to react more quickly to changes in market conditions, providing timely signals and improving the overall effectiveness of the indicator.
Bartels Probability:
The Bartels probability function calculates the probability of a given cycle being significant in a time series. It uses a mathematical test called the Bartels test to assess the significance of cycles detected in the data. The function calculates coefficients for each detected cycle and computes an average amplitude and an expected amplitude. By comparing these values, the Bartels probability is derived, indicating the likelihood of a cycle's significance. This information can help in identifying and analyzing dominant cycles in financial markets.
The Bartels probability function is incorporated into the indicator to assess the significance of detected cycles in the input data. By calculating the Bartels probability for each cycle, the indicator can prioritize the most significant cycles and focus on the market dynamics that are most relevant to the current trading environment. This function enhances the indicator's ability to identify dominant market cycles, improving its predictive power and aiding in the development of effective trading strategies.
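One way to make this concrete is a Rayleigh-style consistency check across successive cycle-length segments, sketched below in Python (a common textbook formulation of Bartels-type significance; the script's exact statistic and normalization may differ):

import numpy as np

def bartels_probability(detrended, period, n_cycles):
    # Fit the fundamental harmonic in each of n_cycles consecutive segments
    # of one period, then ask how consistently the segment vectors line up.
    # Assumes len(detrended) >= period * n_cycles.
    x = np.asarray(detrended, dtype=float)[-(period * n_cycles):]
    t = np.arange(period)
    c = np.cos(2.0 * np.pi * t / period)
    s = np.sin(2.0 * np.pi * t / period)
    vecs = np.array([complex(np.dot(x[k*period:(k+1)*period], c),
                             np.dot(x[k*period:(k+1)*period], s))
                     for k in range(n_cycles)])
    resultant = abs(vecs.sum())                 # coherent sum across segments
    energy = float(np.sum(np.abs(vecs) ** 2))   # incoherent (random-walk) baseline
    # Small values => segments agree in amplitude and phase => significant cycle.
    return float(np.exp(-(resultant ** 2) / energy))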
Detrend Logarithmic Zero-Lag Regression:
The detrend logarithmic zero-lag regression function is used for detrending data while minimizing lag. It combines a zero-lag moving average with a linear regression detrending method. The function first calculates the zero-lag moving average of the logarithm of input data and then applies a linear regression to remove the trend. By detrending the data, the function isolates the cyclical components, making it easier to analyze and interpret the underlying market dynamics.
The detrend logarithmic zero-lag regression function is used in the indicator to isolate the cyclical components of the input data. By detrending the data, the function enables the indicator to focus on the cyclical movements in the market, making it easier to analyze and interpret market dynamics. This function is essential for identifying cyclical patterns and understanding the interactions between different market cycles, which can inform trading decisions and enhance overall market understanding.
Bartels Cycle Significance Test:
The Bartels cycle significance test is a function that combines the Bartels probability function and the detrend logarithmic zero-lag regression function to assess the significance of detected cycles. The function calculates the Bartels probability for each cycle and stores the results in an array. By analyzing the probability values, traders and analysts can identify the most significant cycles in the data, which can be used to develop trading strategies and improve market understanding.
The Bartels cycle significance test function is integrated into the indicator to provide a comprehensive analysis of the significance of detected cycles. By combining the Bartels probability function and the detrend logarithmic zero-lag regression function, this test evaluates the significance of each cycle and stores the results in an array. The indicator can then use this information to prioritize the most significant cycles and focus on the most relevant market dynamics. This function enhances the indicator's ability to identify and analyze dominant market cycles, providing valuable insights for trading and market analysis.
Hodrick-Prescott Filter:
The Hodrick-Prescott filter is a popular technique used to separate the trend and cyclical components of a time series. The function applies a smoothing parameter to the input data and calculates a smoothed series using a two-sided filter. This smoothed series represents the trend component, which can be subtracted from the original data to obtain the cyclical component. The Hodrick-Prescott filter is commonly used in economics and finance to analyze economic data and financial market trends.
The Hodrick-Prescott filter is incorporated into the indicator to separate the trend and cyclical components of the input data. By applying the filter to the data, the indicator can isolate the trend component, which can be used to analyze long-term market trends and inform trading decisions. Additionally, the cyclical component can be used to identify shorter-term market dynamics and provide insights into potential trading opportunities. The inclusion of the Hodrick-Prescott filter adds another layer of analysis to the indicator, making it more versatile and comprehensive.
Detrending Options: Detrend Centered Moving Average:
The detrend centered moving average function provides different detrending methods, including the Hodrick-Prescott filter and the zero-lag moving average, based on the selected detrending method. The function calculates two sets of smoothed values using the chosen method and subtracts one set from the other to obtain a detrended series. By offering multiple detrending options, this function allows traders and analysts to select the most appropriate method for their specific needs and preferences.
The detrend centered moving average function is integrated into the indicator to provide users with multiple detrending options, including the Hodrick-Prescott filter and the zero-lag moving average. By offering multiple detrending methods, the indicator allows users to customize the analysis to their specific needs and preferences, enhancing the indicator's overall utility and adaptability. This function ensures that the indicator can cater to a wide range of trading styles and objectives, making it a valuable tool for a diverse group of market participants.
The auxiliary functions discussed in this section demonstrate the power and versatility of mathematical techniques in analyzing financial markets. By understanding and implementing these functions, traders and analysts can gain valuable insights into market dynamics, improve their trading strategies, and make more informed decisions. The combination of zero-lag moving averages, Bartels probability, detrending methods, and the Hodrick-Prescott filter provides a comprehensive toolkit for analyzing and interpreting financial data, and their integration into a single indicator creates a powerful and versatile analytical tool for studying financial markets.
█ In-Depth Analysis of the Goertzel Browser Code
The Goertzel Browser code is an implementation of the Goertzel Algorithm, an efficient technique to perform spectral analysis on a signal. The code is designed to detect and analyze dominant cycles within a given financial market data set. This section will provide an extremely detailed explanation of the code, its structure, functions, and intended purpose.
Function signature and input parameters:
The Goertzel Browser function accepts numerous input parameters for customization, including source data (src), the current bar (forBar), sample size (samplesize), period (per), squared amplitude flag (squaredAmp), addition flag (useAddition), cosine flag (useCosine), cycle strength flag (UseCycleStrength), past and future window sizes (WindowSizePast, WindowSizeFuture), Bartels filter flag (FilterBartels), Bartels-related parameters (BartNoCycles, BartSmoothPer, BartSigLimit), sorting flag (SortBartels), and output buffers (goeWorkPast, goeWorkFuture, cyclebuffer, amplitudebuffer, phasebuffer, cycleBartelsBuffer).
Initializing variables and arrays:
The code initializes several float arrays (goeWork1, goeWork2, goeWork3, goeWork4) with the same length as twice the period (2 * per). These arrays store intermediate results during the execution of the algorithm.
Preprocessing input data:
The input data (src) undergoes preprocessing to remove linear trends. This step enhances the algorithm's ability to focus on cyclical components in the data. The linear trend is calculated by finding the slope between the first and last values of the input data within the sample.
Iterative calculation of Goertzel coefficients:
The core of the Goertzel Browser algorithm lies in the iterative calculation of Goertzel coefficients for each frequency bin. These coefficients represent the spectral content of the input data at different frequencies. The code iterates through the range of frequencies, calculating the Goertzel coefficients using a nested loop structure.
Cycle strength computation:
The code calculates the cycle strength based on the Goertzel coefficients. This is an optional step, controlled by the UseCycleStrength flag. The cycle strength provides information on the relative influence of each cycle on the data per bar, considering both amplitude and cycle length. The algorithm computes the cycle strength either by squaring the amplitude (controlled by squaredAmp flag) or using the actual amplitude values.
Phase calculation:
The Goertzel Browser code computes the phase of each cycle, which represents the position of the cycle within the input data. The phase is calculated using the arctangent function (math.atan) based on the ratio of the imaginary and real components of the Goertzel coefficients.
Peak detection and cycle extraction:
The algorithm performs peak detection on the computed amplitudes or cycle strengths to identify dominant cycles. It stores the detected cycles in the cyclebuffer array, along with their corresponding amplitudes and phases in the amplitudebuffer and phasebuffer arrays, respectively.
Sorting cycles by amplitude or cycle strength:
The code sorts the detected cycles based on their amplitude or cycle strength in descending order. This allows the algorithm to prioritize cycles with the most significant impact on the input data.
Bartels cycle significance test:
If the FilterBartels flag is set, the code performs a Bartels cycle significance test on the detected cycles. This test determines the statistical significance of each cycle and filters out the insignificant cycles. The significant cycles are stored in the cycleBartelsBuffer array. If the SortBartels flag is set, the code sorts the significant cycles based on their Bartels significance values.
Waveform calculation:
The Goertzel Browser code calculates the waveform of the significant cycles for both past and future time windows. The past and future windows are defined by the WindowSizePast and WindowSizeFuture parameters, respectively. The algorithm uses either cosine or sine functions (controlled by the useCosine flag) to calculate the waveforms for each cycle. The useAddition flag determines whether the waveforms should be added or subtracted.
Storing waveforms in matrices:
The calculated waveforms for each cycle are stored in two matrices - goeWorkPast and goeWorkFuture. These matrices hold the waveforms for the past and future time windows, respectively. Each row in the matrices represents a time window position, and each column corresponds to a cycle.
Returning the number of cycles:
The Goertzel Browser function returns the total number of detected cycles (number_of_cycles) after processing the input data. This information can be used to further analyze the results or to visualize the detected cycles.
The Goertzel Browser code is a comprehensive implementation of the Goertzel Algorithm, specifically designed for detecting and analyzing dominant cycles within financial market data. The code offers a high level of customization, allowing users to fine-tune the algorithm based on their specific needs. The Goertzel Browser's combination of preprocessing, iterative calculations, cycle extraction, sorting, significance testing, and waveform calculation makes it a powerful tool for understanding cyclical components in financial data.
█ Generating and Visualizing Composite Waveform
The indicator calculates and visualizes the composite waveform for both past and future time windows based on the detected cycles. Here's a detailed explanation of this process:
Updating WindowSizePast and WindowSizeFuture:
The WindowSizePast and WindowSizeFuture are updated to ensure they are at least twice the MaxPer (maximum period).
Initializing matrices and arrays:
Two matrices, goeWorkPast and goeWorkFuture, are initialized to store the Goertzel results for past and future time windows. Multiple arrays are also initialized to store cycle, amplitude, phase, and Bartels information.
Preparing the source data (srcVal) array:
The source data is copied into an array, srcVal, and detrended using one of the selected methods (hpsmthdt, zlagsmthdt, logZlagRegression, hpsmth, or zlagsmth).
Goertzel function call:
The Goertzel function is called to analyze the detrended source data and extract cycle information. The output, number_of_cycles, contains the number of detected cycles.
Initializing arrays for past and future waveforms:
Three arrays, epgoertzel, goertzel, and goertzelFuture, are initialized to store the endpoint Goertzel, non-endpoint Goertzel, and future Goertzel projections, respectively.
Calculating composite waveform for past bars (goertzel array):
The past composite waveform is calculated by summing the selected cycles (either from the user-defined cycle list or the top cycles) and optionally subtracting the noise component.
Calculating composite waveform for future bars (goertzelFuture array):
The future composite waveform is calculated in a similar way as the past composite waveform.
Drawing past composite waveform (pvlines):
The past composite waveform is drawn on the chart using solid lines. The color of the lines is determined by the direction of the waveform (green for upward, red for downward).
Drawing future composite waveform (fvlines):
The future composite waveform is drawn on the chart using dotted lines. The color of the lines is determined by the direction of the waveform (fuchsia for upward, yellow for downward).
Displaying cycle information in a table (table3):
A table is created to display the cycle information, including the rank, period, Bartels value, amplitude (or cycle strength), and phase of each detected cycle.
Filling the table with cycle information:
The indicator iterates through the detected cycles and retrieves the relevant information (period, amplitude, phase, and Bartels value) from the corresponding arrays. It then fills the table with this information, displaying the values up to six decimal places.
To summarize, this indicator generates a composite waveform based on the detected cycles in the financial data. It calculates the composite waveforms for both past and future time windows and visualizes them on the chart using colored lines. Additionally, it displays detailed cycle information in a table, including the rank, period, Bartels value, amplitude (or cycle strength), and phase of each detected cycle.
█ Enhancing the Goertzel Algorithm-Based Script for Financial Modeling and Trading
The Goertzel algorithm-based script for detecting dominant cycles in financial data is a powerful tool for financial modeling and trading. It provides valuable insights into the past behavior of these cycles and potential future impact. However, as with any algorithm, there is always room for improvement. This section discusses potential enhancements to the existing script to make it even more robust and versatile for financial modeling, general trading, advanced trading, and high-frequency finance trading.
Enhancements for Financial Modeling
Data preprocessing: One way to improve the script's performance for financial modeling is to introduce more advanced data preprocessing techniques. This could include removing outliers, handling missing data, and normalizing the data to ensure consistent and accurate results.
Additional detrending and smoothing methods: Incorporating more sophisticated detrending and smoothing techniques, such as wavelet transform or empirical mode decomposition, can help improve the script's ability to accurately identify cycles and trends in the data.
Machine learning integration: Integrating machine learning techniques, such as artificial neural networks or support vector machines, can help enhance the script's predictive capabilities, leading to more accurate financial models.
Enhancements for General and Advanced Trading
Customizable indicator integration: Allowing users to integrate their own technical indicators can help improve the script's effectiveness for both general and advanced trading. By enabling the combination of the dominant cycle information with other technical analysis tools, traders can develop more comprehensive trading strategies.
Risk management and position sizing: Incorporating risk management and position sizing functionality into the script can help traders better manage their trades and control potential losses. This can be achieved by calculating the optimal position size based on the user's risk tolerance and account size.
Multi-timeframe analysis: Enhancing the script to perform multi-timeframe analysis can provide traders with a more holistic view of market trends and cycles. By identifying dominant cycles on different timeframes, traders can gain insights into the potential confluence of cycles and make better-informed trading decisions.
Enhancements for High-Frequency Finance Trading
Algorithm optimization: To ensure the script's suitability for high-frequency finance trading, optimizing the algorithm for faster execution is crucial. This can be achieved by employing efficient data structures and refining the calculation methods to minimize computational complexity.
Real-time data streaming: Integrating real-time data streaming capabilities into the script can help high-frequency traders react to market changes more quickly. By continuously updating the cycle information based on real-time market data, traders can adapt their strategies accordingly and capitalize on short-term market fluctuations.
Order execution and trade management: To fully leverage the script's capabilities for high-frequency trading, implementing functionality for automated order execution and trade management is essential. This can include features such as stop-loss and take-profit orders, trailing stops, and automated trade exit strategies.
While the existing Goertzel algorithm-based script is a valuable tool for detecting dominant cycles in financial data, there are several potential enhancements that can make it even more powerful for financial modeling, general trading, advanced trading, and high-frequency finance trading. By incorporating these improvements, the script can become a more versatile and effective tool for traders and financial analysts alike.
█ Understanding the Limitations of the Goertzel Algorithm
While the Goertzel algorithm-based script for detecting dominant cycles in financial data provides valuable insights, it is important to be aware of its limitations and drawbacks. Some of the key drawbacks of this indicator are:
Lagging nature:
As with many other technical indicators, the Goertzel algorithm-based script can suffer from lagging effects, meaning that it may not immediately react to real-time market changes. This lag can lead to late entries and exits, potentially resulting in reduced profitability or increased losses.
Parameter sensitivity:
The performance of the script can be sensitive to the chosen parameters, such as the detrending methods, smoothing techniques, and cycle detection settings. Improper parameter selection may lead to inaccurate cycle detection or increased false signals, which can negatively impact trading performance.
Complexity:
The Goertzel algorithm itself is relatively complex, making it difficult for novice traders or those unfamiliar with the concept of cycle analysis to fully understand and effectively utilize the script. This complexity can also make it challenging to optimize the script for specific trading styles or market conditions.
Overfitting risk:
As with any data-driven approach, there is a risk of overfitting when using the Goertzel algorithm-based script. Overfitting occurs when a model becomes too specific to the historical data it was trained on, leading to poor performance on new, unseen data. This can result in misleading signals and reduced trading performance.
No guarantee of future performance:
While the script can provide insights into past cycles and potential future trends, it is important to remember that past performance does not guarantee future results. Market conditions can change, and relying solely on the script's predictions without considering other factors may lead to poor trading decisions.
Limited applicability:
The Goertzel algorithm-based script may not be suitable for all markets, trading styles, or timeframes. Its effectiveness in detecting cycles may be limited in certain market conditions, such as during periods of extreme volatility or low liquidity.
While the Goertzel algorithm-based script offers valuable insights into dominant cycles in financial data, it is essential to consider its drawbacks and limitations when incorporating it into a trading strategy. Traders should always use the script in conjunction with other technical and fundamental analysis tools, as well as proper risk management, to make well-informed trading decisions.
█ Interpreting Results
The Goertzel Browser indicator can be interpreted by analyzing the plotted lines and the table presented alongside them. The indicator plots two lines: past and future composite waves. The past composite wave represents the composite wave of the past price data, and the future composite wave represents the projected composite wave for the next period.
The past composite wave line displays a solid line, with green indicating a bullish trend and red indicating a bearish trend. On the other hand, the future composite wave line is a dotted line with fuchsia indicating a bullish trend and yellow indicating a bearish trend.
The table presented alongside the indicator shows the top cycles with their corresponding rank, period, Bartels, amplitude or cycle strength, and phase. The amplitude is a measure of the strength of the cycle, while the phase is the position of the cycle within the data series.
Interpreting the Goertzel Browser indicator involves identifying the trend of the past and future composite wave lines and matching them with the corresponding bullish or bearish color. Additionally, traders can identify the top cycles with the highest amplitude or cycle strength and utilize them in conjunction with other technical indicators and fundamental analysis for trading decisions.
This indicator is considered a repainting indicator because the value of the indicator is calculated based on the past price data. As new price data becomes available, the indicator's value is recalculated, potentially causing the indicator's past values to change. This can create a false impression of the indicator's performance, as it may appear to have provided a profitable trading signal in the past when, in fact, that signal did not exist at the time.
The Goertzel indicator is also non-endpointed, meaning that it is not calculated up to the current bar or candle. Instead, it uses a fixed amount of historical data to calculate its values, which can make it difficult to use for real-time trading decisions. For example, if the indicator uses 100 bars of historical data to make its calculations, it cannot provide a signal until the current bar has closed and become part of the historical data. This can result in missed trading opportunities or delayed signals.
█ Conclusion
The Goertzel Browser indicator is a powerful tool for identifying and analyzing cyclical patterns in financial markets. Its ability to detect multiple cycles of varying frequencies and strengths makes it a valuable addition to any trader's technical analysis toolkit. However, it is important to keep in mind that the Goertzel Browser indicator should be used in conjunction with other technical analysis tools and fundamental analysis to achieve the best results. With continued refinement and development, the Goertzel Browser indicator has the potential to become a highly effective tool for financial modeling, general trading, advanced trading, and high-frequency finance trading. Its accuracy and versatility make it a promising candidate for further research and development.
█ Footnotes
What is the Bartels Test for Cycle Significance?
The Bartels Cycle Significance Test is a statistical method that determines whether the peaks and troughs of a time series are statistically significant. The test is named after its inventor, the geophysicist Julius Bartels, who developed it in the first half of the 20th century.
The Bartels test is designed to analyze the cyclical components of a time series, which can help traders and analysts identify trends and cycles in financial markets. The test calculates a Bartels statistic, which measures the degree of non-randomness or autocorrelation in the time series.
The Bartels statistic is calculated by first splitting the time series into two halves and calculating the range of the peaks and troughs in each half. The test then compares these ranges using a t-test, which measures the significance of the difference between the two ranges.
If the Bartels statistic is greater than a critical value, it indicates that the peaks and troughs in the time series are non-random and that there is a significant cyclical component to the data. Conversely, if the Bartels statistic is less than the critical value, it suggests that the peaks and troughs are random and that there is no significant cyclical component.
The Bartels Cycle Significance Test is particularly useful in financial analysis because it can help traders and analysts identify significant cycles in asset prices, which can in turn inform investment decisions. However, it is important to note that the test is not perfect and can produce false signals in certain situations, particularly in noisy or volatile markets. Therefore, it is always recommended to use the test in conjunction with other technical and fundamental indicators to confirm trends and cycles.
Deep-dive into the Hodrick-Prescott Filter
The Hodrick-Prescott (HP) filter is a statistical tool used in economics and finance to separate a time series into two components: a trend component and a cyclical component. It is a powerful tool for identifying long-term trends in economic and financial data and is widely used by economists, central banks, and financial institutions around the world.
The HP filter was first introduced in the 1990s by economists Robert Hodrick and Edward Prescott. It is a simple, two-parameter filter that separates a time series into a trend component and a cyclical component. The trend component represents the long-term behavior of the data, while the cyclical component captures the shorter-term fluctuations around the trend.
The HP filter works by minimizing the following objective function:
Minimize: (Sum of Squared Deviations) + λ (Sum of Squared Second Differences)
Where:
The first term represents the deviation of the data from the trend.
The second term represents the smoothness of the trend.
λ is a smoothing parameter that determines the degree of smoothness of the trend.
The smoothing parameter λ is typically set to a value between 100 and 1600, depending on the frequency of the data. Higher values of λ lead to a smoother trend, while lower values lead to a more volatile trend.
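As a concrete illustration of how λ is applied, the HP filter is available in common statistics libraries. A minimal Python sketch using the statsmodels implementation (the toy series is an assumption for demonstration purposes):

```python
import numpy as np
from statsmodels.tsa.filters.hp_filter import hpfilter

# Toy series: linear trend + a 40-period cycle + noise (purely illustrative).
t = np.arange(200)
y = 0.05 * t + np.sin(2 * np.pi * t / 40) + np.random.normal(0, 0.2, size=200)

# hpfilter returns the cyclical and trend components;
# lamb=1600 is the conventional choice for quarterly data.
cycle, trend = hpfilter(y, lamb=1600)
```

Raising `lamb` pushes more of the variation into the cycle component and leaves a smoother trend, which is exactly the trade-off described above.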
The HP filter has several advantages over other smoothing techniques. It is a non-parametric method, meaning that it does not make any assumptions about the underlying distribution of the data. It also allows for easy comparison of trends across different time series and can be used with data of any frequency.
However, the HP filter also has some limitations. It assumes that the trend is a smooth function, which may not be the case in some situations. It can also be sensitive to changes in the smoothing parameter λ, which may result in different trends for the same data. Additionally, the filter may produce unrealistic trends for very short time series.
Despite these limitations, the HP filter remains a valuable tool for analyzing economic and financial data. It is widely used by central banks and financial institutions to monitor long-term trends in the economy, and it can be used to identify turning points in the business cycle. The filter can also be used to analyze asset prices, exchange rates, and other financial variables.
The Hodrick-Prescott filter is a powerful tool for analyzing economic and financial data. It separates a time series into a trend component and a cyclical component, allowing for easy identification of long-term trends and turning points in the business cycle. While it has some limitations, it remains a valuable tool for economists, central banks, and financial institutions around the world.
Trading Made Easy ATR Bands
As always, this is not financial advice and use at your own risk. Trading is risky and can cost you significant sums of money if you are not careful. Make sure you always have a proper entry and exit plan that includes defining your risk before you enter a trade.
Background:
This is my take on two relatively famous indicators that paint the colour of your candles in order to help identify trend direction and smooth out market noise. The Elder Impulse System was designed by Dr. Alexander Elder in his book Come Into My Trading Room and attempts to identify changes in trend, and when these trends speed up and slow down (school.stockcharts.com). The system used a 13 period EMA and a MACD histogram, and compared each of these indicators to the previous period. In short, when both the histogram and the EMA were rising, the trend was accelerating to the upside, and when both were falling, accelerating to the downside. Conversely, when the indicators were not in alignment, say the MACD falling but the EMA rising, it signaled a slowing down of momentum. The downside of this indicator is that it can be rather jumpy, focusing on a short period EMA for 50% of its calculation, leaving a trader to potentially sit on the sidelines during opportune pull backs to enter winning positions, or exit early when there is still a lot of gas left in the tank.
A similar concept has been employed by John Carter and his organization, SimplerTrading, with the 10X bars indicator. However, here they use the famous Directional Movement Index (DMI) created by J. Welles Wilder as the basis for their bars (www.simplertrading.com). John Carter states that the use of this indicator can lead to getting in earlier on more, bigger, and faster setups. The downside of this indicator is the reliance on the ADX calculations to keep you out of rangebound trades. Anyone who is familiar with the DMI system understands it has unparalleled ability to identify longer term trends, but it is also quite slow, leaving the trader to miss a good portion of the initial runup due to this ADX portion that is very slow to get moving and also slow to signal exits.
In short, both of these systems are designed with one thing in mind: keeping the trader on the right side of the move --- but both suffer from the same issue on opposite sides of the spectrum. One is too fast and the other is too slow, ultimately leaving profits on the table for the trader when such a situation could be avoided.
Here I present my own take on these and have made the “Trading Made Easy ATR Bands”. I name it this because trading is much easier when you trade with the prevailing trend, and this system identifies these periods quite effectively while doing a better job of handling the speed flux of most markets. The base formula uses the DMI as its main calculation, specifically the relationship between the DMI+ and DMI- lines, like the 10X bars. While the trader can investigate these on their own to understand them more intimately, essentially the DMI+ and DMI- lines calculate the highs and lows respectively of each bar compared to a period in the past, smoothed with the true range, a measurement of volatility. What this ultimately presents is a picture of uptrends and downtrends, where price is making consistently more highs or more lows over a period of time. Where I have modified this relative to the 10X bars is that I have ignored the ADX calculations. Further, while values over 25 have been discussed as “strong” momentum, in my calculations I have sped this up to 20 to get a trader into the move earlier. Second, I have added an additional calculation based around the 21-period exponential moving average compared against its previous output. This then, like the Elder Impulse System, has two forms of market momentum in its calculation to smooth out noise, but has the benefit of being less jumpy, like the original 10X bar system. I have added a series of exponential moving averages following the Fibonacci sequence from 8-144 as a system of dynamic support and resistance showing the sentiment of both the shorter and longer term market participants. Last, I have added a series of Keltner Channels, from 1X-4X, that encompass the 21 period EMA as a base line. The 21 EMA is a staple in all of John Carter’s work, and I do believe he is correct that the market is mostly structured around this line, since it roughly approximates one month of trading data. It is not uncommon to see price expand away from and contract back to this line over and over again.
Trade Signals:
Strong Bullish Momentum – The system will generate a green bar when the DMI+ line is over the DMI- line, the DMI+ line is equal or greater than 20 and the 21 EMA has increased relative to its last close.
Weak Bullish Momentum – The system will generate a blue bar in several scenarios. First, when the DMI+ line is over the DMI- line but the DMI+ line is not over 20 and the EMA is equal to or less than its previous close. It will also print a blue bar if the DMI and the EMA are not aligned, such as when the DMI+ is over the DMI- but not over 20 while the EMA has risen compared to the last bar. Last, it will also print a blue bar if the DMI- is over the DMI+ but the EMA is rising.
Strong Bearish Momentum – The system will generate a red bar when the DMI- line is over the DMI+ line, the DMI- line is equal or greater than 20, and the 21 EMA has fallen relative to its last close.
Weak Bearish Momentum – The system will generate an orange bar in several scenarios. First, when the DMI- line is over the DMI+ line but the DMI- line is not over 20 and the EMA is equal to or greater than the last bar. It will also print an orange bar if the DMI and the EMA are not aligned, such as when the DMI- is over the DMI+ but not over 20 while the EMA has fallen. Lastly, it will also print an orange bar if the DMI+ line is over the DMI- and the EMA has fallen relative to the last bar. (One self-consistent reading of these rules is sketched in code below.)
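Since the weak-signal scenarios above overlap in places, here is one self-consistent reading of the four rules as a Python sketch. The function name, inputs and the tie-breaking choice (strong signals take precedence; remaining mixed cases follow the EMA slope) are my assumptions, not the indicator's exact code:

```python
def bar_color(di_plus, di_minus, ema_now, ema_prev, threshold=20.0):
    ema_up = ema_now > ema_prev
    if di_plus > di_minus and di_plus >= threshold and ema_up:
        return "green"   # strong bullish: DMI side, strength and EMA all agree
    if di_minus > di_plus and di_minus >= threshold and not ema_up:
        return "red"     # strong bearish: DMI side, strength and EMA all agree
    # Mixed or weak cases: break the tie with the EMA direction.
    return "blue" if ema_up else "orange"
```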
Uses:
1) Like the Elder Impulse System and 10X Bar systems, these should be used as trade filters only. It is in the trader’s best interest to trade with the trends, and these bars identify these periods but may not always generate the most opportune time to enter a market. For instance, trying to short a market when the market is in a phase of Strong Bullish Momentum would not be wise, and vice versa with trying to open long positions when the market is exhibiting Strong Bearish Momentum. Use multiple forms of evidence to confirm the signals shown before entering any trade, and do not take these signals on their own without confluence of ideas. A viable system could use the Elder Triple Screen System (for reference, see this decent write up --- www.dailyforex.com) with the Trading Made Easy Bands as your “Tide” or longer term filter, and a further trading plan to establish an entry on a short time frame pull back.
2) Interim Trend Exhaustion – Keltner channels work as moving standard deviations from the 21 EMA. 3X multipliers will encompass 99.7% of price and 4X will encompass 99.9% of price away from the 21 EMA. During a trend it would be a good idea to lock in partial profits when price reaches these outer extrema, as it is very highly probable that a retracement back to the mean is approaching. While not part of the system, and not recommended to be used with this system, a mean reversion trader could in theory look for reversals at these extrema points and trade a mean reversion strategy back to the 21 EMA, but this is a much riskier trade with a lower probability of success. A trend trader should look to enter trades when a signal is given within the 1ATR or 2ATR zone, as this is when price has not really started accelerating yet and is likely to see continued momentum in that direction.
888 BOT #backtest
█ 888 BOT #backtest (open source)
This is an Expert Advisor 'EA' or Automated trading script for ‘longs’ and ‘shorts’, which uses only a Take Profit or, in the worst case, a Stop Loss to close the trade.
It's a much improved version of the previous ‘Repanocha’. It doesn't use 'Trailing Stop' or 'security()' functions (although using a security function doesn't mean that the script repaints) and all signals are confirmed, therefore the script doesn't repaint in alert mode and is accurate in backtest mode.
Apart from the previous indicators, several more indicators and other functions have been added for Stop-Loss, re-entry and leverage.
It uses 8 indicators, (many of you already know what they are, but in case there is someone new), these are the following:
1. Jurik Moving Average
It's a moving average created by Mark Jurik for professionals which eliminates the 'lag' or delay of the signal. It's better than other moving averages like EMA , DEMA , AMA or T3.
There are two ways to decrease noise using JMA. Increasing the 'LENGTH' parameter will cause JMA to move more slowly and therefore reduce noise at the expense of adding 'lag'.
The 'JMA LENGTH', 'PHASE' and 'POWER' parameters offer a way to select the optimal balance between 'lag' and overshoot.
Green: Bullish , Red: Bearish .
2. Range filter
Created by Donovan Wall, its function is to filter or eliminate noise and to better determine the price trend in the short term.
First, a uniform average price range 'SAMPLING PERIOD' is calculated for the filter base and multiplied by a specific quantity 'RANGE MULTIPLIER'.
The filter is then calculated by adjusting price movements that do not exceed the specified range.
Finally, the target ranges are plotted to show the prices that will trigger the filter movement.
Green: Bullish , Red: Bearish .
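As a sketch of the clamping step just described, assuming the smoothed range series has already been computed (this follows a common formulation of Donovan Wall's filter; the script's exact details may differ):

```python
def range_filter(prices, smooth_range):
    # The filter value only moves when price escapes the band around it.
    filt = [prices[0]]
    for price, rng in zip(prices[1:], smooth_range[1:]):
        prev = filt[-1]
        if price > prev + rng:
            filt.append(price - rng)   # breakout above: filter steps up
        elif price < prev - rng:
            filt.append(price + rng)   # breakout below: filter steps down
        else:
            filt.append(prev)          # inside the range: filter holds still
    return filt
```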
3. Average Directional Index ( ADX Classic) and ( ADX Masanakamura)
It's an indicator designed by Welles Wilder to measure the strength and direction of the market trend. The price movement is strong when the ADX has a positive slope and is above a certain minimum level 'ADX THRESHOLD' and for a given period 'ADX LENGTH'.
The green color of the bars indicates that the trend is bullish and that the ADX is above the level established by the threshold.
The red color of the bars indicates that the trend is down and that the ADX is above the threshold level.
The orange color of the bars indicates that the price is not strong and will surely lateralize.
You can choose between the classic option and the one created by a certain 'Masanakamura'. The main difference between the two is that the first uses RMA() and the second uses SMA() in its calculation.
4. Parabolic SAR
This indicator, also created by Welles Wilder, places points that help define a trend. The Parabolic SAR can follow the price above or below, the peculiarity that it offers is that when the price touches the indicator, it jumps to the other side of the price (if the Parabolic SAR was below the price it jumps up and vice versa) to a distance predetermined by the indicator. At this time the indicator continues to follow the price, reducing the distance with each candle until it is finally touched again by the price and the process starts again. This procedure explains the name of the indicator: the Parabolic SAR follows the price generating a characteristic parabolic shape, when the price touches it, stops and turns ( SAR is the acronym for 'stop and reverse'), giving rise to a new cycle. When the points are below the price, the trend is up, while the points above the price indicate a downward trend.
5. RSI with Volume
This indicator was created by LazyBear from the popular RSI .
The RSI is an oscillator-type indicator used in technical analysis and also created by Welles Wilder that shows the strength of the price by comparing individual movements up or down in successive closing prices.
LazyBear added a volume parameter that makes it more accurate to the market movement.
A good way to use RSI is by considering the 50 'RSI CENTER LINE' centerline. When the oscillator is above, the trend is bullish and when it is below, the trend is bearish .
6. Moving Average Convergence Divergence ( MACD ) and ( MAC-Z )
It was created by Gerald Appel. Subsequently, the histogram was added to anticipate the crossing of MA. Broadly speaking, we can say that the MACD is an oscillator consisting of two moving averages that rotate around the zero line. The MACD line is the difference between a short moving average 'MACD FAST MA LENGTH' and a long moving average 'MACD SLOW MA LENGTH'. It's an indicator that allows us to have a reference on the trend of the asset on which it is operating, thus generating market entry and exit signals.
We can talk about a bull market when the MACD histogram is above the zero line, along with the signal line, while we are talking about a bear market when the MACD histogram is below the zero line.
There is the option of using the MAC-Z indicator created by LazyBear, which according to its author is more effective, by using the parameter VWAP ( volume weighted average price ) 'Z-VWAP LENGTH' together with a standard deviation 'STDEV LENGTH' in its calculation.
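For reference, here is the standard MACD construction described above, sketched in Python with pandas (per the description, the MAC-Z variant would additionally use a z-scored VWAP, which is not shown here):

```python
import pandas as pd

def macd(close: pd.Series, fast: int = 12, slow: int = 26, signal: int = 9):
    macd_line = (close.ewm(span=fast, adjust=False).mean()
                 - close.ewm(span=slow, adjust=False).mean())
    signal_line = macd_line.ewm(span=signal, adjust=False).mean()
    histogram = macd_line - signal_line  # bullish above zero, bearish below
    return macd_line, signal_line, histogram
```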
7. Volume Condition
Volume indicates the number of participants in this war between bulls and bears, the more volume the more likely the price will move in favor of the trend. A low trading volume indicates a lower number of participants and interest in the instrument in question. Low volumes may reveal weakness behind a price movement.
With this condition, those signals whose volume is less than the SMA of volume over a period 'SMA VOLUME LENGTH', multiplied by a factor 'VOLUME FACTOR', are filtered out. In addition, it determines the leverage used: the more volume, the more participants, and the higher the probability that the price will move in our favor, meaning we can use more leverage. The leverage in this script is determined by how many times the volume is above the SMA line (sketched in code below).
The maximum leverage is 8.
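A guess at the volume filter and leverage rule described above, in Python form (the exact rounding used by the script may differ):

```python
def volume_leverage(volume, vol_sma, volume_factor=1.0, max_leverage=8):
    # Filter: reject signals whose volume is below SMA(volume) * factor.
    if volume < vol_sma * volume_factor:
        return 0  # signal filtered out, no trade
    # Leverage: how many times volume exceeds its SMA, capped at 8.
    return min(int(volume / vol_sma), max_leverage)
```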
8. Bollinger Bands
This indicator was created by John Bollinger and consists of three bands that are drawn superimposed on the price evolution graph.
The central band is a moving average, normally a simple moving average calculated with 20 periods is used. ('BB LENGTH' Number of periods of the moving average)
The upper band is calculated by adding the value of the simple moving average X times the standard deviation of the moving average. ('BB MULTIPLIER' Number of times the standard deviation of the moving average)
The lower band is calculated by subtracting the simple moving average X times the standard deviation of the moving average.
The band between the upper and lower bands contains, statistically, almost 90% of the possible price variations, which means that any movement of the price outside the bands has special relevance.
In practical terms, Bollinger bands behave as if they were an elastic band so that, if the price touches them, it has a high probability of bouncing.
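The band construction described above, as a short pandas sketch (parameter names mirror the script's 'BB LENGTH' and 'BB MULTIPLIER' inputs):

```python
import pandas as pd

def bollinger_bands(close: pd.Series, bb_length: int = 20, bb_multiplier: float = 2.0):
    basis = close.rolling(bb_length).mean()             # central band: SMA
    dev = bb_multiplier * close.rolling(bb_length).std()
    return basis, basis + dev, basis - dev              # middle, upper, lower
```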
Sometimes, after the entry order is filled, the price turns back to the opposite side. If the price touches the Bollinger Band under the same conditions as before, another order is filled in the same direction as the open position to improve the average entry price ('% MINIMUM BETTER PRICE': the minimum percentage by which the re-entry price must improve on the price of the previous position for the re-entry to be executed). This gives the trade a better chance that the Take Profit is executed sooner. The downside is that the position is doubled in size. 'ACTIVATE DIVIDE TP': divides the size of the TP in half, giving more probability of the trade closing but less profit.
█ STOP LOSS and RISK MANAGEMENT.
A good risk management is what can make your equity go up or be liquidated.
The % risk is the percentage of our capital that we are willing to lose by operation. This is recommended to be between 1-5%.
% Risk: (% Stop Loss x % Equity per trade x Leverage) / 100
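A worked example of this formula, with hypothetical numbers: a 2% Stop Loss, 10% of equity per trade and 5x leverage give (2 x 10 x 5) / 100 = 1% of total equity at risk, within the recommended range.

```python
def percent_risk(stop_loss_pct, equity_per_trade_pct, leverage):
    return stop_loss_pct * equity_per_trade_pct * leverage / 100.0

percent_risk(2, 10, 5)  # -> 1.0, i.e. 1% of total equity at risk
```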
First the strategy is calculated with Stop Loss, then the risk per operation is determined and from there, the amount per operation is calculated and not vice versa.
In this script you can use a normal Stop Loss or one based on the ATR. You can also activate the option to trigger it earlier if the risk percentage is reached ('% RISK ALLOWED').
'STOP LOSS CONFIRMED': The Stop Loss is only activated if the closing of the previous bar is in the loss limit condition. It's useful to prevent the SL from triggering when they do a ‘pump’ to sweep Stops and then return the price to the previous state.
█ BACKTEST
The objective of the Backtest is to evaluate the effectiveness of our strategy. A good Backtest is determined by some parameters such as:
- RECOVERY FACTOR: It consists of dividing the 'net profit' by the 'drawdown'. An excellent trading system has a recovery factor of 10 or more; that is, it generates 10 times more net profit than drawdown.
- PROFIT FACTOR: The ‘Profit Factor’ is another popular measure of system performance. It's as simple as dividing what winning trades earn by what losing trades lose. If the strategy is profitable then by definition the 'Profit Factor' is going to be greater than 1. Strategies that are not profitable produce profit factors less than one. A good system has a profit factor of 2 or more. The good thing about the ‘Profit Factor’ is that it tells us what we are going to earn for each dollar we lose. A profit factor of 2.5 tells us that for every dollar we lose operating we will earn 2.5.
- SHARPE: (Return system - Return without risk) / Deviation of returns.
When the variations of gains and losses are very high, the deviation is very high and that leads to a very poor ‘Sharpe’ ratio. If the operations are very close to the average (little deviation) the result is a fairly high 'Sharpe' ratio. If a strategy has a 'Sharpe' ratio greater than 1 it is a good strategy. If it has a 'Sharpe' ratio greater than 2, it is excellent. If it has a ‘Sharpe’ ratio less than 1 then we don't know if it is good or bad, we have to look at other parameters.
- MATHEMATICAL EXPECTATION: (% winning trades X average profit) + (% losing trades X average loss).
To earn money with a Trading system, it is not necessary to win all the operations, what is really important is the final result of the operation. A Trading system has to have positive mathematical expectation as is the case with this script: ME = (0.87 x 30.74$) - (0.13 x 56.16$) = (26.74 - 7.30) = 19.44$ > 0
The game of roulette, for example, has negative mathematical expectation for the player, it can have positive winning streaks, but in the long term, if you continue playing you will end up losing, and casinos know this very well.
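To make the last two metrics concrete, here is a small Python sketch computing the profit factor and the mathematical expectation from a hypothetical list of per-trade results (helper names are illustrative):

```python
def backtest_metrics(pnls):
    wins = [p for p in pnls if p > 0]
    losses = [p for p in pnls if p <= 0]
    profit_factor = sum(wins) / abs(sum(losses)) if losses else float("inf")
    win_rate = len(wins) / len(pnls)
    avg_win = sum(wins) / len(wins) if wins else 0.0
    avg_loss = abs(sum(losses)) / len(losses) if losses else 0.0
    # Mathematical expectation: average result per trade.
    expectancy = win_rate * avg_win - (1.0 - win_rate) * avg_loss
    return profit_factor, expectancy
```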
PARAMETERS
'BACKTEST DAYS': Number of days back of historical data for the calculation of the Backtest.
'ENTRY TYPE': For '% EQUITY' if you have $ 10,000 of capital and select 7.5%, for example, your entry would be $ 750 without leverage. If you select CONTRACTS for the 'BTCUSDT' pair, for example, it would be the amount in 'Bitcoins' and if you select 'CASH' it would be the amount in $ dollars.
'QUANTITY (LEVERAGE 1X)': The amount for an entry with X1 leverage according to the previous section.
'MAXIMUM LEVERAGE': It's the maximum allowed multiplier of the quantity entered in the previous section according to the volume condition.
The settings are for Bitcoin at Binance Futures (BTCUSDTPERP) on the 15 minute timeframe.
For other pairs and other timeframes, the settings have to be adjusted again. And within a month the settings will likely be different because, as we all know, the market and the trend are always changing.
Universal Global Session
This Script combines the world sessions of Stocks, Forex and Bitcoin Kill Zones, plus strategic points, all configurable, in a single Script, to capitalize on the opening and closing times of global exchanges as investment assets, becoming a Universal Global Session.
It is based on the great work of @oscarvs ( BITCOIN KILL ZONES v2 ) and the scripts of @ChrisMoody. Thank you Oscar and Chris for your excellent judgment and great work.
At the end of this writing you can find all the internet references of the extensive documentation that I present here. To maximize your benefits in the use of this Script, I recommend that you read the entire document to create an objective and practical criterion.
All the hours of the different exchanges are presented at GMT -6. In Market24hClock you can adjust it to your preferences.
After a deep investigation I have been able to show that the different world sessions reveal underlying investment cycles, where it is possible to find sustained changes in the nominal behavior of the trend before the passage from one session to another and in the natural overlaps between the sessions. These underlying movements generally occur 15 minutes before the start, close or overlap of the session, when the session properly starts and also 15 minutes after respectively. Therefore, this script is designed to highlight these particular trending behaviors. Try it, discover your own conclusions and let me know in the notes, thank you.
Foreign Exchange Market Hours
It is the schedule by which currency market participants can buy, sell, trade and speculate on currencies all over the world. It is open 24 hours a day during working days and closes on weekends, thanks to the fact that operations are carried out through a network of information systems, instead of physical exchanges that close at a certain time. It opens Monday morning at 8 am local time in Sydney —Australia— (which is equivalent to Sunday night at 7 pm, in New York City —United States—, according to Eastern Standard Time), and It closes at 5pm local time in New York City (which is equivalent to 6am Saturday morning in Sydney).
The Forex market is decentralized and driven by local sessions, where the hours of Forex trading are based on the opening range of each active country, becoming an efficient transfer mechanism for all participants. Four territories in particular stand out: Sydney, Tokyo, London and New York, where the highest volume of operations occurs when the sessions in London and New York overlap. Furthermore, Europe is complemented by major financial centers such as Paris, Frankfurt and Zurich. Each day of forex trading begins with the opening of Australia, then Asia, followed by Europe, and finally North America. As markets in one region close, another opens - or has already opened - and continues to trade in the currency market. The seven most traded currencies in the world are: the US dollar, the euro, the Japanese yen, the British pound, the Australian dollar, the Canadian dollar, and the New Zealand dollar.
Currencies are needed around the world for international trade, this means that operations are not dominated by a single exchange market, but rather involve a global network of brokers from around the world, such as banks, commercial companies, central banks, companies investment management, hedge funds, as well as retail forex brokers and global investors. Because this market operates in multiple time zones, it can be accessed at any time except during the weekend, therefore, there is continuously at least one open market and there are some hours of overlap between the closing of the market of one region and the opening of another. The international scope of currency trading means that there are always traders around the world making and satisfying demands for a particular currency.
The market involves a global network of exchanges and brokers from around the world, although time zones overlap, the generally accepted time zone for each region is as follows:
Sydney 5pm to 2am EST (10pm to 7am UTC)
London 3am to 12 noon EST (8am to 5pm UTC)
New York 8am to 5pm EST (1pm to 10pm UTC)
Tokyo 7pm to 4am EST (12am to 9am UTC)
Trading Session
A financial asset trading session refers to a period of time that coincides with the daytime trading hours for a given location, it is a business day in the local financial market. This may vary according to the asset class and the country, therefore operators must know the hours of trading sessions for the securities and derivatives in which they are interested in trading. If investors can understand market hours and set proper targets, they will have a much greater chance of making a profit within a workable schedule.
Kill Zones
Kill zones are highly liquid events. Many different market participants often come together and perform around these events. The activity itself can be event-driven (margin calls or option exercise-related activity), portfolio management-driven (asset allocation rebalancing orders and closing buy-in), or institutionally driven (larger players needing liquidity to fill sizable orders), or a combination of any of the three. This intense cross-current of activity at a very specific point in time often occurs near significant technical levels, and the established trends emerging from these events often persist until the next Kill Zone approaches or begins.
Kill Zones are evolving with time and the course of world history. Since the end of World War II, New York has slowly invaded London's place as the world center for commercial banking. So much so that during the latter part of the 20th century, New York was considered the new center of the financial universe. With the end of the cold war, that leadership appears to have shifted towards Europe and away from the United States. Furthermore, Japan has slowly lost its former dominance in the global economic landscape, while Beijing's has increased dramatically. Only time will tell how these Kill Zones will evolve given the ever-changing political, economic, and socioeconomic influences of each region.
Financial Markets
New York
New York (NYSE Chicago, NASDAQ)
7:30 am - 2:00 pm
It is the second largest currency trading platform in the world, watched closely by foreign investors since it participates in 90% of all operations; movements on the New York Stock Exchange (NYSE) can have an immediate and powerful effect on the dollar. For example, when mergers and acquisitions are finalized, the dollar can instantly gain or lose value.
A. Complementary Stock Exchanges
Brazil (BOVESPA - Brazilian Stock Exchange)
07:00 am - 02:55 pm
Canada (TSX - Toronto Stock Exchange)
07:30 am - 02:00 pm
New York (NYSE - New York Stock Exchange)
08:30 am - 03:00 pm
B. North American Trading Session
07:00 am - 03:00 pm
(from the beginning of the business day on NYSE and NASDAQ, until the end of the New York session)
New York, Chicago and Toronto (Canada) open the North American session. Characterized by the most aggressive trading within the markets, currency pairs show high volatility. As the US markets open, trading is still active in Europe, however trading volume generally decreases with the end of the European session and the overlap between the US and Europe.
C. Strategic Points
US main session starts in 1 hour
07:30 am
The euro tends to drop before the US session. The NYSE, CHX and TSX (Canada) trading sessions begin 1 hour after this strategic point. The North American session begins trading Forex at 07:00 am.
This constitutes the beginning of the overlap of the United States and the European market that spans from 07:00 am to 10:35 am, often called the best time to trade EUR / USD, it is the period of greatest liquidity for the main European currencies since it is where they have their widest daily ranges.
When New York opens at 07:00 am the most intense trading begins in both the US and European markets. The overlap of European and American trading sessions has 80% of the total average trading range for all currency pairs during US business hours and 70% of the total average trading range for all currency pairs during European business hours. The intersection of the US and European sessions are the most volatile overlapping hours of all.
Influential news and data for the USD are released between 07:30 am and 09:00 am and play the biggest role in the North American Session. These are the strategically most important moments of this activity period: 07:00 am, 08:00 am and 08:30 am.
The main session of operations in the United States and Canada begins
08:30 am
Start of main trading sessions in New York, Chicago and Toronto. The European session still overlaps the North American session and this is the time for large-scale unpredictable trading. The United States leads the market. It is difficult to interpret the news due to speculation. Trends develop very quickly and it is difficult to identify them, however trends (especially for the euro), which have developed during the overlap, often turn the other way when Europe exits the market.
Second hour of the US session and last hour of the European session
09:30 am
End of the European session
10:35 am
The trend of the euro will change rapidly after the end of the European session.
Last hour of the United States session
02:00 pm
Institutional clients and very large funds are very active during the first and last working hours of almost all stock exchanges, knowing this allows to better predict price movements in the opening and closing of large markets. Within the last trading hours of the secondary market session, a pullback can often be seen in the EUR / USD that continues until the opening of the Tokyo session. Generally it happens if there was an upward price movement before 04:00 pm - 05:00 pm.
End of the trade session in the United States
03:00 pm
D. Kill Zones
11:30 am - 1:30 pm
New York Kill Zone. The United States is still the world's largest economy, so by default, the New York opening carries a lot of weight and often comes with a huge injection of liquidity. In fact, most of the world's marketable assets are priced in US dollars, making political and economic activity within this region even more important. Because it is relatively late in the world's trading day, this Kill Zone often sees violent price swings within its first hour, leading to the proven adage "never trust the first hour of trading in North America".
---------------
London
London (LSE - London Stock Exchange)
02:00 am - 10:35 am
Britain dominates the currency markets around the world, and London is its main component. London, a central trading capital of the world, accounts for about 43% of global forex trading, and many Forex trends often originate in London.
A. Complementary Stock Exchanges
Dubai (DFM - Dubai Financial Market)
12:00 am - 03:50 am
Moscow (MOEX - Moscow Exchange)
12:30 am - 10:00 am
Germany (FWB - Frankfurt Stock Exchange)
01:00 am - 10:30 am
Afríca (JSE - Johannesburg Stock Exchange)
01:00 am - 09:00 am
Saudi Arabia (TADAWUL - Saudi Stock Exchange)
01:00 am - 06:00 am
Switzerland (SIX - Swiss Stock Exchange)
02:00 am - 10:30 am
B. European Trading Session
02:00 am - 11:00 am
(from the opening of the Frankfurt session to the close of the Order Book on the London Stock Exchange / Euronext)
It is a very liquid trading session, where trends are set that start during the first trading hours in Europe and generally continue until the beginning of the US session.
C. Middle East Trading Session
12:00 am - 06:00 am
(from the opening of the Dubai session to the end of the Riyadh session)
D. Strategic Points
European session begins
02:00 am
The London, Frankfurt and Zurich stock exchanges enter the market; the overlap between Europe and Asia begins.
End of the Singapore and Asia sessions
03:00 am
The euro tends to rise almost immediately, or within an hour, after Singapore exits the market.
Middle East Oil Markets Completion Process
05:00 am
Operations are ending in the European-Asian market as Dubai and Qatar close, followed an hour later by Riyadh; together these constitute the Middle East oil markets. Because oil trading is done in US dollars, and the region's trading day is coming to an end, the dollar is no longer needed there; consequently, the euro tends to grow more frequently.
End of the Middle East trading session
06:00 am
E. Kill Zones
5:00 am - 7:00 am
London Kill Zone. Considered the center of the financial universe for more than 500 years, Europe still has a lot of influence in the banking world. Many older players use the European session to establish their positions. As such, the London Open often sees the most significant trend-setting activity on any trading day. In fact, it has been suggested that 80% of all weekly trends are set through the London Kill Zone on Tuesday.
F. Kill Zones (close)
2:00 pm - 4:00 pm
London Kill Zone (close).
---------------
Tokyo
Tokyo (JPX - Tokyo Stock Exchange)
06:00 pm - 12:00 am
It is the first Asian market to open, receiving most of the Asian trade, just ahead of Hong Kong and Singapore.
A. Complementary Stock Exchanges
Singapore (SGX - Singapore Exchange)
07:00 pm - 03:00 am
Hong Kong (HKEx - Hong Kong Stock Exchange)
07:30 pm - 02:00 am
Shanghai (SSE - Shanghai Stock Exchange)
07:30 pm - 01:00 am
India (NSE - India National Stock Exchange)
09:45 pm - 04:00 am
B. Asian Trading Session
06:00 pm - 03:00 am
From the opening of the Tokyo session to the end of the Singapore session
The first major Asian market to open is Tokyo, which has the largest market share and is the third largest Forex trading center in the world. Singapore opens an hour later, and then the Chinese markets, Shanghai and Hong Kong, open 30 minutes after that. With them, trading volume increases and large-scale trading begins in the Asia-Pacific region, offering more liquidity for Asian-Pacific currencies and their crosses. When European countries open their doors, more liquidity will be offered to Asian and European crosses.
C. Strategic Points
Second hour of the Tokyo session
07:00 pm
This session also opens the Singapore market. Trading activity grows in anticipation of the opening of the two largest Chinese markets 30 minutes later: Shanghai and Hong Kong. Within these 30 minutes, or just before the China session begins, the euro usually falls until the very moment Shanghai and Hong Kong open.
Second hour of the China session
08:30 pm
Hong Kong and Shanghai start trading and the euro usually grows for more than an hour. The EUR / USD pair mixes up as Asian exporters convert part of their earnings into both US dollars and euros.
Last hour of the Tokyo session
11:00 pm
End of the Tokyo session
12:00 am
If the euro has been actively declining up to this time, China will raise the euro after the Tokyo close. Hong Kong, Shanghai and Singapore remain open, take matters into their own hands, and drive the growth of the euro. Asia is a huge commercial and industrial region with a large number of high-quality economic products and gigantic financial turnover, making the number of transactions on the stock exchanges huge during the Asian session. That is why traders who entered a trade at the opening of the London session should pay attention to their terminals when Asia exits the market.
End of the Shanghai session
01:00 am
The trade ends in Shanghai. This is the last trading hour of the Hong Kong session, during which market activity peaks.
D. Kill Zones
10:00 pm - 2:00 am
Asian Kill Zone. Considered the "Institutional" Zone, this zone represents both the launch pad for new trends as well as a recharge area for the post-American session. It is the beginning of a new day (or week) for the world and as such it makes sense that this zone often sets the tone for the remainder of the global business day. It is ideal to pay attention to the opening of Tokyo, Beijing and Sydney.
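Taken together, the Kill Zones listed in this description (all times GMT-6) can be checked programmatically. A small sketch, handling the Asian window that wraps past midnight:

```python
from datetime import time

KILL_ZONES = {  # all times GMT-6, as listed in this description
    "New York":     (time(11, 30), time(13, 30)),
    "London":       (time(5, 0), time(7, 0)),
    "London close": (time(14, 0), time(16, 0)),
    "Asian":        (time(22, 0), time(2, 0)),  # wraps past midnight
}

def in_kill_zone(t: time, start: time, end: time) -> bool:
    if start < end:
        return start <= t < end
    return t >= start or t < end  # window crosses midnight

in_kill_zone(time(23, 0), *KILL_ZONES["Asian"])  # True
```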
--------------
Sydney
Sydney (ASX - Australia Stock Exchange)
06:00 pm - 12:00 am
A. Complementary Stock Exchange
New Zealand (NZX - New Zealand Stock Exchange)
04:00 pm - 10:45 pm
It's where the global trading day officially begins. While it is the smallest of the megamarkets, it sees a lot of initial action when markets reopen Sunday afternoon as individual traders and financial institutions are trying to regroup after the long hiatus since Friday afternoon. On weekdays it constitutes the end of the current trading day where the change in the settlement date occurs.
B. Pacific Trading Session
04:00 pm - 12:00 am
(from the opening of the Wellington session to the end of the Sydney session)
Forex begins its business hours when Wellington (New Zealand Exchange) opens on Monday local time. Sydney (Australian Stock Exchange) opens 2 hours later. It is a session with fairly low volatility, making it the calmest session of all. Strong movements appear when influential news is published and when the Pacific session overlaps the Asian session.
C. Strategic Points
End of the Sydney session
12:00 am
---------------
Conclusions
The best time to trade is during overlaps in trading times between open markets. Overlaps equate to higher price ranges, creating greater opportunities.
Regarding press releases (news), it should be noted that in the currency markets these have the power to enliven a normally slow trading period. When a major announcement is made regarding economic data, especially when it goes against the predicted forecast, a currency can lose or gain value in a matter of seconds. In general, the more economic growth a country produces, the more positive the economy looks to international investors. Investment capital tends to flow to countries that are believed to have good growth prospects and subsequently good investment opportunities, leading to the strengthening of the country's exchange rate. Also, a country that offers higher interest rates through its government bonds tends to attract investment capital, as foreign investors seek high-yield opportunities. However, stable economic growth and attractive yields or interest rates are inextricably intertwined. It's important to take advantage of market overlaps and keep an eye out for press releases when setting up a trading schedule.
References:
www.investopedia.com
www.investopedia.com
www.investopedia.com
www.investopedia.com
market24hclock.com
market24hclock.com
Other alts compensated capitalization [Peregringlk]
DISCLAIMER: I'm not a native English speaker, so please let me know about mistakes in my wording.
Introduction
==========
This indicator (the middle one in the image) shows how the "other altcoins" (all altcoins except coins with high capitalization) add their own value to their capitalization, with BTC price changes removed. By "own value" I mean USD value gained by actual buys in BTC markets, beyond arbitrage effects of BTC price changes.
The main idea is that, if bitcoin has increased its value by 20%, and the other altcoins have increased their capitalization by 30%, the chart will only plot an increase of roughly 8% (1.30 / 1.20 ≈ 1.083). In other words, it will show the increase in capitalization measured in BTC (the combined altcoin/BTC market is uptrending). Its purpose is to try to identify altseasons. A bit more concisely, the graph will only grow when both the USD and BTC capitalizations are growing. If either of them is going down, the graph will go down as well.
Rationale
========
- Altseasons are characterized by an increase in the BTC value of almost every altcoin during some period of time, although not all at once, but distributed over the altseason. For example, in the crazy altseason of Dec17/Jan18, almost every (low capitalized) altcoin increased its BTC value by a minimum of +300%, some at the beginning of the season, some at the end.
- When this happens, BTC loses capitalization dominance, but this can also happen if BTC is downtrending while altcoins are being bought in BTC markets and their USD value doesn't change too much. This happens when altcoins are uptrending in BTC price but there is actually no gain in USD value, because the gain in BTC terms is not enough to compensate for the fall in BTC's price. Since BTC is losing USD price, but altcoins are not, dominance falls. So, looking at BTC dominance is not enough to spot possible beginnings of altseasons, because of arbitrage or other effects.
- The "big altcoins" are removed from the counting because one single big capitalized altcoin that grows, let's say, a 20%, will have an observable effect on the total altcoin capitalization, even if the rest of the altcoins are stagnated in price. For example, at today's date (8th April 2020), Ethereum by itself has the 23.89% of the total altcoins capitalization. A +10% in Ethereum price will increase the total altcoin capitalization by a +2.38%. I wanted to remove that effect to focus on generalized price changes of all altcoins. Remember that there are only 9 big altcoins 9 coins representing the 71% of the alts capitalization, while there are exists more than 5000 altcoins in total.
- Another key factor is that I want to focus on what happens in alt/BTC markets, because almost every altcoin can be traded against BTC, and most of them can only be traded against BTC. However, big altcoins can usually be traded against USD, other altcoins, or fiat currencies as well. Removing the big alts from the equation helps (just a bit) to simplify the interpretation of the chart, because arbitrage effects of those "impactful" alts are limited (although not removed, because arbitrage also happens cross-market).
- There are situations where BTC's price is going up, the alts' USD capitalization is going up as well, but the alts' BTC capitalization is going down because altcoins are being sold in BTC markets; it just happens that the speed of the selling is not high enough to compensate for the increase in BTC's price. That makes the USD capitalization grow while alts are really being dumped in BTC markets. I wanted to reflect that effect as well, by making sure that the graph grows only when both the USD and BTC capitalizations of alts are growing.
Interpretation
============
If you want, you can see this chart as plotting the other alts' capitalization as if it were priced against a fictional coin FCOIN, which starts at a price of 1 and combines the ups and downs of both BTC's price and the alts' USD capitalization in a very conservative way: if FCOIN's price goes up, it means that the other alts have gained USD value, but only when they have overcome BTC's price changes. Otherwise, it goes down.
If this fictional FCOIN has gone up for several days straight, with a total gain of, say, greater than 10%, we are maybe in front of the start of an altseason. Sometimes, maybe (it requires some more years to extract a theory out of this), it can be used as a proxy of BTC's near future (trend changes or continuations): if FCOIN goes up while BTC is doing nothing relevant or even going down, it could signal that "people" are getting prepared and a generalized altcoin accumulation process has started, driven by a shared assumption that BTC will soon start a stable uptrend, or will continue the current trend. There are some matches for this in the past, but there are also false positives, as usual.
Additionally, four customizable EMAs are added to the script, by default 21, 50, 100 and 150.
Definitions
=========
- Let's call `altcap_btc` the altcoin capitalization in USD, divided by BTC price. In other words, `altcap_btc` is the capitalization in terms of BTC.
- Let's call `x` the BTC price change rate as `btc_price_current_candle / btc_price_previous_candle`. So, if BTC has grown a +20%, `x = 1.20`, and if BTC has gone down a -20%, `x = 0.80`.
- Let's call `y` the `altcap_btc` price change rate, calculated as before but for `altcap_btc`.
- For pure math equivalence, `x * y` is thus the USD capitalization change rate.
Calculation
=========
For plotting the graph, for each candle, I choose a change rate, and then I plot the total accumulated change rate, given by `ch0 * ch1 * ch2 * .... * ch_today`, where each `chX` is the chosen change rate of each candle since the beginning of the chart. So, if the "alts compensated value" grew +20% yesterday and -10% today, `1.20 * 0.9 = 1.08`, which means that in two days the compensated value has grown 8% in total.
- If `x * y > 1` (USD cap is growing), I take `y` as change rate (alt/btc change rate).
- If both `x` and `y` are `> 1`, then the graph grows because I'm taking `y`.
- If `x > 1` and `y < 1`, the graph goes down because I'm taking `y`, reflecting the BTC markets are dumping.
- If `x < 1` and `y > 1`, the graph goes up because I'm taking `y`, reflecting the BTC markets are pumping so much that it overcomes the btc fall.
- `x < 1` and `y < 1` is impossible here because `x * y` must be `> 1` by precondition.
- If `x * y < 1` (USD cap is going down), I take `y` or `x * y` depending on the individual change rates:
- If `x` and `y` go in different directions (one up and the other down), I take `x * y` to reflect that the USD capitalization has gone down. I don't take `y` here because it could be `> 1`, and I don't want the graph to grow if alts are losing USD value. Also, if `y < 1` and I took `y`, the graph would go down faster than the USD capitalization, and I want to show that the "alts compensated value" is going down more slowly than BTC because some buying is happening. I don't take `x` either, for the same reasons.
- If both `x` and `y` are `< 1`, I take `y`, because otherwise the graph would be less than 0.000001 today after two years of bleeding, making it literally impossible to see if alts "grow tomorrow".
- `x > 1` and `y > 1` is impossible here because `x * y` must be `< 1` by precondition.
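The whole selection rule can be restated compactly. A Python sketch of the per-candle choice (an illustrative restatement of the rules above, not the script's source):

```python
def candle_change_rate(x, y):
    # x: BTC price change rate; y: change rate of the alt capitalization in BTC.
    # x * y is therefore the change rate of the alts' USD capitalization.
    if x * y > 1:            # USD capitalization growing: follow y
        return y
    if (x > 1) != (y > 1):   # USD cap falling and x, y disagree:
        return x * y         # fall together with the USD capitalization
    return y                 # both falling: follow y to avoid collapsing to ~0

# The plotted line is the running product of these per-candle rates.
```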