Hi guys, my name is Denis and you are watching Close To Algo Trading. Today I'm gonna talk about portfolio optimization. What do you think, does portfolio optimization really help us beat the market? Stay with me and maybe we can answer this question.
Before we start, perhaps we need to know what portfolio optimization means. Portfolio optimization is the process of selecting the proportions of various assets to include in a portfolio, in such a way as to make the
portfolio better than any other according to specific constraints. For my experiment I will try to maximize the Sharpe ratio. The Sharpe ratio is a measure of the risk-adjusted return of a financial portfolio. We will calculate it in the following way: Sharpe = (Portfolio Return) / (Portfolio Std), because in our case the risk-free rate doesn't play any role and can be set to 0. Well, we need to allocate our money to different assets in such a way that our portfolio has the maximum Sharpe value. The good news is that there are already a lot of methods that can solve this task for us.
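To make this concrete, here is a minimal sketch of how that Sharpe value can be computed from daily returns. The function name portfolio_sharpe and the use of NumPy are my own illustration, not code from the video:

```python
import numpy as np

def portfolio_sharpe(weights, daily_returns):
    """Sharpe with the risk-free rate assumed to be 0:
    mean portfolio return divided by its standard deviation."""
    port_returns = daily_returns @ weights   # daily portfolio returns
    return port_returns.mean() / port_returns.std()

# Example: an equal-weight portfolio of 10 assets over 125 days of synthetic returns
weights = np.full(10, 0.1)
daily_returns = np.random.normal(0.0005, 0.01, size=(125, 10))
print(portfolio_sharpe(weights, daily_returns))
```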
Now that we know what we're going to optimize, let's move on to our participants. Today, the following methods will take part in our battle:
- Classic Mean-Variance optimization
- Hierarchical Risk Parity, created by Marcos Lopez de Prado
- The Critical Line Algorithm, which was specially designed for portfolio optimization
- Efficient Frontier with a nonconvex optimizer
And we also have two more exotic methods:
- An LSTM model for Sharpe value optimization
- A trained LSTM model for predicting the future allocation
The first one I found on the internet: the authors adopt deep learning models to directly optimize the portfolio Sharpe ratio. They claimed that this method shows the best performance over the testing period, from 2011 to the end of April 2020, including the financial instabilities of the first quarter of 2020, compared to other methods. The implementation of this model you can find in risk_model.py or in the authors' GitHub. Let's call this method LSTM, just because they use a simple LSTM network. The second method is different from all the others: I trained a simple network that predicts the allocation, based on the model from the first method.
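The video doesn't name the library behind the classic methods, but they can be reproduced with something like PyPortfolioOpt. This is a sketch under that assumption, with a placeholder price file:

```python
import pandas as pd
from pypfopt import expected_returns, risk_models, EfficientFrontier, CLA, HRPOpt

# 125 days of prices for 10 tickers (the file name is just a placeholder)
prices = pd.read_csv("prices.csv", index_col=0, parse_dates=True)

mu = expected_returns.mean_historical_return(prices)
S = risk_models.sample_cov(prices)

ef = EfficientFrontier(mu, S)                 # classic Mean-Variance
w_mv = ef.max_sharpe(risk_free_rate=0.0)

cla = CLA(mu, S)                              # Critical Line Algorithm
w_cla = cla.max_sharpe()

hrp = HRPOpt(prices.pct_change().dropna())    # Hierarchical Risk Parity
w_hrp = hrp.optimize()
```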
The first thing I want to know is which of these methods works better. For that I chose to use a 125-day price period for all methods. The set of assets was selected randomly, and it consisted of 10 stocks. Then I randomly collected 10 periods of 125 prices from 2008 to 2020 and tried to optimize the portfolio using the different methods.
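As a rough illustration of how such an experiment can be wired together: here the optimizers dict is a hypothetical mapping from method name to a function that returns weights, prices is the DataFrame from the previous sketch, and portfolio_sharpe is the helper sketched earlier.

```python
import numpy as np

rng = np.random.default_rng(0)
lookback = 125

# draw 10 random 125-day windows from the 2008-2020 price history
starts = rng.integers(0, len(prices) - lookback, size=10)
for start in starts:
    window = prices.iloc[start:start + lookback]
    returns = window.pct_change().dropna()
    for name, optimize in optimizers.items():   # hypothetical: name -> weight function
        weights = optimize(window)              # array of weights, one per asset
        print(name, portfolio_sharpe(weights, returns))
```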
The result was interesting: as you may see, in most periods the best Sharpe value comes from the MeanVariance and CLA methods, and the LSTM method also shows very good results, in some cases outperforming the others. The other methods show one very strange result: they are very similar to a random allocation. For the first test I used only stocks; let's
do the same test but reduce our assets to four ETFs. We can see that MeanVariance and CLA are the leaders, and LSTM shows mixed results but still looks pretty good. What can we say, looking at these results? Well, CLA and MeanVariance look very good, but will they perform so well if we use this allocation for our portfolio in the future? Let's check it. For the tests, I decided to use backtrader
and I created a very simple long-only strategy. I used 125 days of historical prices to calculate the allocation of assets, and based on this allocation I rebalance the portfolio. Rebalancing was done every 22 days, roughly once a month.
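A minimal sketch of such a rebalancing strategy in backtrader might look like this. Here compute_weights stands for whichever optimizer is being tested; it is a hypothetical helper, not code from the video:

```python
import backtrader as bt

class RebalanceStrategy(bt.Strategy):
    params = dict(lookback=125, rebalance_every=22)

    def next(self):
        # wait for enough history, then rebalance every 22 bars
        if len(self) < self.p.lookback or len(self) % self.p.rebalance_every != 0:
            return
        # collect the last 125 closes for each asset, oldest first
        history = {d._name: [d.close[-i] for i in range(self.p.lookback)][::-1]
                   for d in self.datas}
        weights = compute_weights(history)        # hypothetical optimizer call
        for d in self.datas:
            self.order_target_percent(d, target=weights.get(d._name, 0.0))
```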
There is one difference between the tests of the classic models and LSTM. For each of the classic models I did two tests: in the first test I used the normal calculation of the asset allocation based on 125 days of prices, and in the second test I kept collecting the historical prices and the calculation was always performed on all of the collected data. For LSTM, I didn't use the collected data, and I always used a new model for each calculation, because if I tried to reuse the model, the allocation didn't change.
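In code, the difference between the two classic-model tests is roughly this. It's only a sketch: history stands for all prices seen so far in the backtest, and optimize for the chosen classic method.

```python
# Test 1: allocation from a rolling 125-day window of prices
w_rolling = optimize(history.iloc[-125:])

# Test 2: allocation from all collected historical prices so far
w_expanding = optimize(history)
```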
Also, we have a model that should predict the future allocation. For the stocks I collected 1000 training data elements from 2000 to 2006. One training element consists of 125 days of prices and the allocation for the next 22 days. Next, a simple network was trained and used for the testing. The same was done for the ETFs test.
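Here is a minimal sketch of how such a training set and network could look. I'm using Keras here only for brevity, and every name below (returns, target_allocations) is illustrative rather than the exact code from the video:

```python
import numpy as np
from tensorflow import keras

lookback, horizon, n_assets = 125, 22, 10

# X: windows of 125 days of returns; y: the allocation produced by the
# Sharpe-optimizing LSTM for the following 22 days, used as the training target
n_samples = len(returns) - lookback - horizon
X = np.stack([returns[i:i + lookback] for i in range(n_samples)])
y = np.stack([target_allocations[i + lookback] for i in range(n_samples)])

model = keras.Sequential([
    keras.layers.LSTM(64, input_shape=(lookback, n_assets)),
    keras.layers.Dense(n_assets, activation="softmax"),  # long-only weights summing to 1
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=20, batch_size=32)
```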
The test statistics for the 2008-2010 period you can see in the table, and the Sharpe values below in the graph. The result is interesting: most of the methods show results not far from random, and some of them perform very poorly. However, our trained model outperforms all the other methods. Let's check another period that is not so close to our training data. For the years 2011-2017 the winner is the HRP method with data collection, but we can see that all the results are very close and not far from the random allocation. Also, I checked the period from 2018 to 2021;
here the LSTM model does not work very well, the trained model is a bit better, and CLA and MeanVariance are the winners. In addition, I did a test for the specific period from 2012 to 2016, because if you check the return graph, you can see that our trained model and also the LSTM model perform very poorly there. From 2012 to 2013 our model loses money, but SPY is growing. It is also visible here. Maybe it is because I took the training targets, the true allocations, from the LSTM model. But in this particular period we can see that the random allocation outperforms the other methods. The final test was done on ETFs. Here the trained model slightly outperforms the other
methods. Based on these results, we can clearly see that the past allocation doesn't provide a better allocation in the future, and in some cases it is worse than random. A model that tries to predict the future allocation looks much more promising, but to get stable results we have to add more data and more features. However, at the moment it is quite difficult to rely on any of these methods. Well, that is all for today. I hope it was interesting, and see ya in the next video. Bye.