```r
knitr::opts_chunk$set(echo = TRUE)
library(scrAndFun)
library(Data)
library(kableExtra)

coinList.xts <- list(
  BitCoinYahoo.xts = xts::xts(BitCoinYahoo[, 2:7], order.by = BitCoinYahoo[, 1]),
  BitCashYahoo.xts = xts::xts(BitCashYahoo[, 2:7], order.by = BitCashYahoo[, 1]),
  EthereumYahoo.xts = xts::xts(EthereumYahoo[, 2:7], order.by = EthereumYahoo[, 1]),
  RippleYahoo.xts = xts::xts(RippleYahoo[, 2:7], order.by = RippleYahoo[, 1])
)
coinList.xts <- sapply(coinList.xts, function(x) {
  cl <- quantmod::Cl(x)
  x$re.proc <- (cl[-1] - lag(cl)[-1]) / lag(cl)[-1]
  x
})
```
Article Time Series Momentum p. 233: When calculating the exponentially weighted average and volatility, they argue that the weight is chosen such that
$$\sum^\infty_{i=0}(1-\delta)\delta^i \, i = \frac{\delta}{1-\delta} = 60$$
Hence we put $\delta = \frac{60}{61}$.
However, the proof of the relation above relies on the convergence of a geometric series, so a number of observations are needed before the relation can be assumed to hold approximately in a finite sample.
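As a quick numerical check of how many observations are needed (a sketch, not part of our implementation), one can compare truncated versions of the weighted average lag against the limit of 60 days:

```r
# Partial sums of sum_{i=0}^{N-1} (1 - delta) * delta^i * i for delta = 60/61.
# The infinite-sum limit is delta / (1 - delta) = 60 days, but truncating
# after only 60 terms gives an effective average lag far below 60.
delta <- 60 / 61
com <- function(N) {
  i <- 0:(N - 1)
  sum((1 - delta) * delta^i * i)
}
sapply(c(60, 250, 1000, 5000), com)  # approaches 60 as N grows
```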
When implementing this, there are no earlier values available for the first values of $\sigma$ and $\bar{r}_t$. We have written the code so that the weighted average is calculated using all the periods available before time $t$. The first value of $\sigma$ is set to 0, and the first value of $\bar{r}$ is set to $r_t$ (the first realized return). As time passes, more and more observations are used, and the convergence becomes more trustworthy.
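A minimal sketch of the initialization we describe (a hypothetical helper, not the actual code inside `fun_dailyDf`), with the first weighted mean set to the first realized return and the first variance set to 0:

```r
# Sketch, assuming r is a numeric vector of daily returns with no NAs.
ew_stats <- function(r, delta = 60/61, annu = 261) {
  n <- length(r)
  ewBar <- numeric(n)
  ewVar <- numeric(n)
  ewBar[1] <- r[1]  # first weighted mean equals the first realized return
  ewVar[1] <- 0     # first variance set to 0
  for (t in 2:n) {
    ewBar[t] <- (1 - delta) * r[t] + delta * ewBar[t - 1]
    ewVar[t] <- (1 - delta) * (r[t] - ewBar[t - 1])^2 + delta * ewVar[t - 1]
  }
  data.frame(ewMean = annu * ewBar, ewSd = sqrt(annu * ewVar))
}
```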
First, is this an approach you can approve of?
Second, since $\delta = \frac{60}{61}$, the strategy gives much weight to the most recent values. This can be seen in the output below, which shows the price, the exponentially weighted average (of returns), and the exponentially weighted volatility (of returns). We get some extremely fluctuating values (the mean and SD are annualized), and the magnitudes of the volatility are large.
```r
foo <- fun_dailyDf(xtsObj = coinList.xts$BitCoinYahoo.xts,
                   dateFrom = "2013/",
                   delta = 60/61, annu = 261)
head(foo)
tail(foo)
```
Can it really be true that we should use this (annualized) volatility on daily returns? Isn't it better to use daily volatility and scale by that? What are we doing wrong? This affects our results a lot.
Article Time Series Momentum p. 232 Table 1: If we try to replicate this, we get extremely high values for the annualized volatility. Even the daily values are high:
```r
tableSumStats <- data.frame(
  Crypto = c("Bitcoin", "Bitcoin Cash", "Ethereum", "Ripple"),
  "Data start date" = sapply(coinList.xts, function(x) {
    paste0(lubridate::month(zoo::index(x)[1], label = TRUE, abbr = TRUE),
           "-", lubridate::epiyear(zoo::index(x)[1]))
  }),
  "Daily mean" = paste0(round(sapply(coinList.xts, function(x) {
    mean(x$re.proc, na.rm = TRUE) * 100
  }), digits = 2), "%"),
  "Daily SD" = paste0(round(sapply(coinList.xts, function(x) {
    sd(x$re.proc, na.rm = TRUE) * 100
  }), digits = 2), "%"),
  row.names = NULL)
colnames(tableSumStats) <- c("Crypto", "Data start date", "Daily mean", "Daily SD")
kable(tableSumStats, "latex", booktabs = TRUE) %>%
kable_styling(latex_options = "striped")
```
Is it normal for cryptocurrencies to have such high volatility? Should we report annualized volatility, as they do in the article?
Article Time Series Momentum p. 236 column 2: "The Sharpe ratio is statistically different from zero" — how do they test this? We have implemented the strategy and computed the Sharpe ratio on its daily returns.
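One standard way to test this (we are not certain it is the one used in the article) follows Lo (2002): for i.i.d. returns, $\sqrt{T}\,\widehat{SR}$ is asymptotically standard normal under $H_0\colon SR = 0$. A sketch with a hypothetical helper:

```r
# Hypothetical helper: z-test for H0: Sharpe ratio = 0, assuming i.i.d. returns.
sharpe_test <- function(r) {
  r  <- r[!is.na(r)]
  sr <- mean(r) / sd(r)
  z  <- sr * sqrt(length(r))  # asymptotically N(0, 1) under H0 (Lo, 2002)
  c(sharpe = sr, z = z, p.value = 2 * pnorm(-abs(z)))
}
```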
Article Time Series Momentum p. 233: Scaling is done by 261 to annualize. Since we are working with cryptocurrencies, which trade every day, we will scale by 365.25. Correct?
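For concreteness, the square-root-of-time rule we have in mind (assuming independent daily returns; the 5% daily SD is only an illustrative magnitude):

```r
daily_sd <- 0.05             # illustrative 5% daily SD
daily_sd * sqrt(261)         # trading-day convention used in the article
daily_sd * sqrt(365.25)      # calendar days, since crypto trades every day
```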
Article Time Series Momentum p. 236: They choose the position size to be $40\%/\sigma_{t-1}$ and argue that the choice of 40% is inconsequential. However, they note that with this choice they get an annualized volatility of 12% for the portfolio in which they average the returns across all securities, and that this is roughly the level of volatility exhibited by other factors such as those of Fama and French (1993) and Asness, Moskowitz, and Pedersen (2010).
What shall we choose?
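Whatever constant we pick only rescales the strategy's return series. A sketch of the article's sizing rule applied with our ex ante volatility (hypothetical names, not our actual implementation):

```r
# Strategy return at time t: size the position to a constant ex ante
# volatility target, long/short by the sign of the past return.
strat_ret <- function(r_t, sigma_prev, past_ret, target = 0.40) {
  (target / sigma_prev) * sign(past_ret) * r_t
}
```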
After a lot of research, Yahoo Finance is where we can get the most data with the fewest missing values; hence they have the best data. Is it okay to continue using them?