import numpy as np
import numpy.random as npr
import matplotlib.pyplot as plt
import pandas as pd
import math
I modify the Vasicek model to include a second-order autocorrelation term in the evolution of interest rates. As in the Vasicek model, interest rate movements are driven by a single source of market risk.
$r_t = \alpha +\beta_1 r_{t-1} +\beta_2 r_{t-2} + \epsilon_t $
$dr_t = r_t - r_{t-1} = \alpha\, dt + (\beta_1 - 1) r_{t-1}\, dt + \beta_2 r_{t-2}\, dt + \sigma\sqrt{dt}\, z_t$
Long-term mean: $\bar{r} = \alpha/(1-\beta_1-\beta_2)$
I calculated the long-term variance as:
$ \sigma_r^2 = \frac {\sigma^2} {1-(\beta_1+\beta_2)^2}$
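Both follow from the AR(2) equation at stationarity: setting $r_t = r_{t-1} = r_{t-2} = \bar{r}$ and dropping the noise term gives
$\bar{r} = \alpha + \beta_1 \bar{r} + \beta_2 \bar{r} \;\Rightarrow\; \bar{r} = \frac{\alpha}{1-\beta_1-\beta_2}$
The variance formula treats the combined persistence $\beta = \beta_1+\beta_2$ as a single AR(1)-style coefficient, so it should be read as an approximation rather than the exact AR(2) stationary variance.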
#Simulate I paths:
T = 500
I = 100
dt = 1/50
#What are plausible parameter values?
sigma = 0.05
r0 = -0.5
r1 = -0.5
alpha = 0.01
beta1 = 0.6
beta2 = 0.2
theta = [alpha,beta1,beta2,sigma]
def AR2(theta):
    #Simulate I paths of the discretized AR(2) short-rate process
    alpha, beta1, beta2, sigma = theta
    r = np.zeros((T+1, I))
    r[0, :] = r0
    r[1, :] = r1 #seed the second lag so r[t-2] is defined from t = 2 on
    for t in range(2, T+1):
        z = npr.standard_normal(I) #one standard normal shock per path
        r[t, :] = (r[t-1, :] + alpha*dt + (beta1 - 1)*r[t-1, :]*dt
                   + beta2*r[t-2, :]*dt + sigma*np.sqrt(dt)*z)
    return r
r = AR2(theta)
LTMean = alpha/(1-beta1-beta2)
LTVariance = sigma**2/(1-(beta1+beta2)**2)
LTsigma = np.sqrt(LTVariance)
print(round(LTMean,5), round(LTsigma,5))
0.05 0.08333
df = pd.DataFrame(r)
df.describe()
|  | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | ... | 90 | 91 | 92 | 93 | 94 | 95 | 96 | 97 | 98 | 99 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| count | 501.000000 | 501.000000 | 501.000000 | 501.000000 | 501.000000 | 501.000000 | 501.000000 | 501.000000 | 501.000000 | 501.000000 | ... | 501.000000 | 501.000000 | 501.000000 | 501.000000 | 501.000000 | 501.000000 | 501.000000 | 501.000000 | 501.000000 | 501.000000 |
| mean | -0.038628 | -0.235112 | -0.145912 | -0.155338 | -0.198329 | -0.221376 | -0.184843 | -0.126753 | -0.235146 | -0.220857 | ... | -0.246036 | -0.212320 | -0.201876 | -0.278547 | -0.192729 | -0.234005 | -0.192947 | -0.162839 | -0.169626 | -0.234105 |
| std | 0.227118 | 0.124417 | 0.129843 | 0.163378 | 0.168532 | 0.146634 | 0.096514 | 0.197651 | 0.134695 | 0.122593 | ... | 0.110509 | 0.138144 | 0.181645 | 0.126960 | 0.137457 | 0.119759 | 0.191393 | 0.130315 | 0.155724 | 0.141987 |
| min | -0.500000 | -0.500000 | -0.500000 | -0.500000 | -0.507542 | -0.515160 | -0.501672 | -0.512556 | -0.503250 | -0.502334 | ... | -0.513537 | -0.502903 | -0.500000 | -0.511108 | -0.500000 | -0.505580 | -0.500000 | -0.513373 | -0.500000 | -0.530609 |
| 25% | -0.207821 | -0.361145 | -0.208063 | -0.285233 | -0.373893 | -0.312114 | -0.189256 | -0.283545 | -0.323580 | -0.295511 | ... | -0.310752 | -0.333386 | -0.364566 | -0.331508 | -0.314484 | -0.342212 | -0.359168 | -0.219082 | -0.277055 | -0.349991 |
| 50% | 0.023268 | -0.185365 | -0.134216 | -0.087260 | -0.133199 | -0.202160 | -0.162069 | -0.087047 | -0.224446 | -0.238438 | ... | -0.272603 | -0.174450 | -0.243196 | -0.273589 | -0.155124 | -0.195429 | -0.207480 | -0.120601 | -0.170480 | -0.202239 |
| 75% | 0.138841 | -0.123875 | -0.038563 | -0.019049 | -0.049182 | -0.126423 | -0.121276 | 0.005242 | -0.126329 | -0.096837 | ... | -0.151060 | -0.106458 | -0.029258 | -0.205003 | -0.082945 | -0.133805 | 0.013113 | -0.079946 | -0.034610 | -0.109194 |
| max | 0.258028 | -0.052890 | 0.052876 | 0.090283 | 0.019199 | 0.008025 | -0.054506 | 0.215874 | -0.031058 | -0.005427 | ... | -0.047474 | 0.031433 | 0.145415 | 0.005290 | 0.011012 | -0.073849 | 0.072951 | 0.032995 | 0.081269 | -0.013450 |
8 rows × 100 columns
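The per-path means in the table (around -0.2) sit far below the long-term mean of 0.05 because every path starts at r0 = -0.5 and mean reversion is slow. A back-of-the-envelope check (a sketch, not part of the original analysis; it treats the drift as a Vasicek-style pull of speed $\kappa = 1-\beta_1-\beta_2$):
#Sanity check: how slow is the pull towards the long-term mean?
kappa = 1 - beta1 - beta2 #implied mean-reversion speed, 0.2 per unit of time
half_life = math.log(2)/kappa #~3.47 time units, i.e. ~173 steps at dt = 1/50
gap_left = (r0 - LTMean)*math.exp(-kappa*T*dt) #initial gap remaining after T steps
print("half-life in steps:", round(half_life/dt))
print("expected terminal mean:", round(LTMean + gap_left, 4))
This predicts a terminal mean of roughly -0.024, consistent with the terminal-value statistics computed below.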
rT = r[-1]
fig, ax1 = plt.subplots()
ax1.hist(rT, bins=50); ax1.set_title('AR(2) Process - Normal Distribution')
df = pd.DataFrame(rT)
df.describe()
|  | 0 |
|---|---|
| count | 100.000000 |
| mean | -0.019123 |
| std | 0.084412 |
| min | -0.203339 |
| 25% | -0.074519 |
| 50% | -0.012926 |
| 75% | 0.030556 |
| max | 0.226487 |
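The histogram title asserts normality; a formal test can back that up. A minimal sketch using scipy.stats (not imported in the original):
import scipy.stats
#D'Agostino-Pearson test on the terminal values; a large p-value
#means no evidence against the normal hypothesis
stat, pval = scipy.stats.normaltest(rT)
print("normaltest statistic:", round(stat, 3), " p-value:", round(pval, 3))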
plt.plot(r, lw=1) #plot all I simulated paths
plt.xlabel('# periods'); plt.ylabel('Index Level'); plt.title("AR(2) Process with mean 0.05")
plt.grid(True)
#Autocorrelation of the simulated rate levels is very high and dies out very slowly.
np.corrcoef(r[:-1,0], r[1:,0])
array([[1. , 0.99945515],
[0.99945515, 1. ]])
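To make the slow decay visible, the same np.corrcoef check can be repeated at longer lags (a small sketch, not in the original):
#Autocorrelation of the first simulated path at increasing lags
for lag in (1, 5, 25, 100):
    acf = np.corrcoef(r[:-lag, 0], r[lag:, 0])[0, 1]
    print("lag", lag, ":", round(acf, 4))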
dr = r[1:,0]/r[:-1,0]-1
dr.shape
np.corrcoef(r[:-1,0], dr)
array([[1. , 0.01648582],
[0.01648582, 1. ]])
Though LIBOR, Euribor, and the federal funds rate all concern the same activity, i.e. interbank lending, there is an important distinction: the federal funds rate is a target interest rate set by the FOMC to implement U.S. monetary policy.
As of 1 October 2019, the euro short-term interest rate is calculated as ESTR. ESTR is a bank borrowing rate that relies on individual daily transactions, whereas Eonia, a lending rate administered by EMMI, relied on voluntary contributions from banks. ESTR is a volume-weighted trimmed mean of overnight transactions.
ESTR Data http://webstat.banque-france.fr/en/quickview.do?SERIES_KEY=285.ESTR.B.EU000A2X2A25.WT
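For intuition, the volume-weighted trimmed mean can be sketched in a few lines. This is a simplified illustration (rates sorted, the top and bottom 25% of volume removed, the remainder volume-weighted), not the ECB's actual code, and the function name and example numbers are hypothetical:
def vw_trimmed_mean(rates, volumes, trim=0.25):
    #Sort transactions by rate; drop the top and bottom `trim` share of total
    #volume; return the volume-weighted mean of the central portion.
    order = np.argsort(rates)
    rates = np.asarray(rates, dtype=float)[order]
    volumes = np.asarray(volumes, dtype=float)[order]
    cum = np.cumsum(volumes)
    lo, hi = trim*cum[-1], (1 - trim)*cum[-1]
    prev = np.concatenate(([0.0], cum[:-1]))
    #volume of each transaction that falls inside the central band
    inside = np.clip(np.minimum(cum, hi) - np.maximum(prev, lo), 0, None)
    return float(np.sum(rates*inside)/np.sum(inside))
#Example: the two extreme transactions are trimmed away entirely
print(vw_trimmed_mean([-0.60, -0.55, -0.54, -0.50], [10, 40, 40, 10])) #-0.545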
#import csv data
lines = open('./ESTR.csv','r').readlines()
lines = [line.replace('\ufeff', '') for line in lines] #strip a possible BOM character (assumed cleanup target)
lines[:4]
['Date,ESTR\n', '01.10.19,-0.549\n', '02.10.19,-0.551\n', '03.10.19,-0.555\n']
#import data from the csv file with read_csv
estr = pd.read_csv('./ESTR.csv', index_col=0, parse_dates=True, sep=",", dayfirst=True)
np.round(estr.head())
|  | ESTR |
|---|---|
| Date |  |
| 2019-10-01 | -1.0 |
| 2019-10-02 | -1.0 |
| 2019-10-03 | -1.0 |
| 2019-10-04 | -1.0 |
| 2019-10-07 | -1.0 |
#nulls = estr[ estr['ESTR'] == '-' ].index
# Delete these row indexes from dataFrame
#estr.drop(nulls , inplace=True)
estr = estr.sort_index(axis=0) #sort_index returns a new DataFrame, so assign the result back
estr['ESTR'] = estr['ESTR'].astype(float) #for some reason the data appeared as strings
es = np.array(estr['ESTR'])
des = es[1:]/es[:-1]-1 #% change in short term rates
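One caveat (not raised in the original): with negative rates, ratio-based percent changes flip sign, e.g. a move from -0.549 to -0.551 registers as a positive "return". First differences avoid this; a hypothetical alternative, not used below:
des_diff = np.diff(es) #absolute daily changes, sign-safe for negative rates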
plt.plot(es)
estr.describe()
|  | ESTR |
|---|---|
| count | 231.000000 |
| mean | -0.542087 |
| std | 0.007155 |
| min | -0.555000 |
| 25% | -0.548000 |
| 50% | -0.541000 |
| 75% | -0.538000 |
| max | -0.511000 |
round(np.mean(es),5)
-0.54209
Bird's Eye View Calibration:
We should be able to obtain the betas through a linear least-squares fit of the AR(2) equation. The constraint is that their sum must be subunitary, to achieve stationarity:
$\beta = \beta_1 + \beta_2 < 1$
From the long-term standard deviation we obtain the standard deviation of the market risk factor:
$ \sigma^2 = \sigma_r^2 (1-(\beta_1+\beta_2)^2) = \sigma_r^2 (1-\beta^2) $
From the long-term mean we obtain:
$\alpha = \mu (1-(\beta_1+\beta_2)) = \mu (1-\beta)$
I will assume the betas are proportional to the first- and second-order autocorrelations.
#Bird's Eye View Calibration:
#step1: fit the rt line
#step2: scale down betas if necessary
#step3: calculate the other parameters. I'm happy to find that the alpha from the linear regression perfectly matches the alpha parameter from the long-term mean.
l=len(es)-3; l
X = np.vstack((np.ones(l), es[1:-2], es[0:-3])).T #independent variable
Y = es[2:-1] #dependent variable rt
a,b1,b2 = np.linalg.lstsq(X, Y, rcond=None)[0]
#pd.DataFrame(result)
print("alpha: beta1: beta2:")
print(round(a,5),round(b1,5),round(b2,5))
alpha: beta1: beta2:
-0.05676 0.4762 0.4191
beta = b1+b2
sigma = np.sqrt(np.var(es)*(1-beta**2))
alpha = np.mean(es)*(1-beta)
print("alpha: sigma: ")
print(round(alpha,5),round(sigma,5))
alpha: sigma:
-0.05676 0.00318
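The match with the regression intercept is not a coincidence: taking expectations of $r_t = \alpha + \beta_1 r_{t-1} + \beta_2 r_{t-2} + \epsilon_t$ at stationarity gives $\mu = \alpha + (\beta_1+\beta_2)\mu$, i.e. exactly $\alpha = \mu(1-\beta)$, so the intercept and the long-term-mean inversion estimate the same quantity: $-0.54209 \times (1-0.8953) \approx -0.05676$.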
#AR(2) model with bird's eye view parameters
T = len(es)-1
I = 100
dt = 1/50
theta = [a, b1, b2, sigma]
rb = AR2(theta)
LTMean = a/(1-b1-b2)
LTVariance = sigma**2/(1-(b1+b2)**2)
LTsigma = np.sqrt(LTVariance)
print(round(LTMean,5), round(LTsigma,5))
-0.54209 0.00714
plt.plot(rb, lw = 1)
plt.xlabel('# periods'); plt.ylabel('Index Level'); plt.title("AR(2) Process with calibrated parameters")
plt.grid(True)
plt.plot(es, rb[:,50], 'b.')
[<matplotlib.lines.Line2D at 0x11bfbdb38>]
#Next I shall try to fit the AR2 model parameters using fmin.
ordd = 2 #order of the norm used in the objective
N = 1 #number of simulated paths summed in the objective
I = 1 #AR2 now simulates a single path
f = lambda theta: AR2(theta)[:, 0] #flatten the single path so es - f(theta) aligns elementwise; I minimize the norm for each value of the short-term rate
#g = lambda theta: np.linalg.norm(es-f(theta), ord=ordd)**ordd
def g(theta):
    g = 0
    for n in range(0, N): #N = 1 here; could be raised to average several noisy evaluations
        g += np.linalg.norm(es - f(theta), ord=ordd)**ordd
    return g
import scipy.optimize
thetaopt = scipy.optimize.fmin(func=g, x0=theta,ftol=1e-7,xtol=1e-7)
print('Parameters a, b1, b2, sigma for Euro short-term rate:', thetaopt)
Warning: Maximum number of function evaluations has been exceeded.
Parameters a, b1, b2, sigma for Euro short-term rate: [-0.05719077 0.48540728 0.43505976 0.00317354]
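Because AR2 draws fresh random numbers on every call, g is a noisy objective, which is why fmin exhausts its function evaluations. One workaround (a sketch, not part of the original) is to reset the seed inside the objective so every evaluation sees identical shocks:
def g_fixed(theta, seed=42):
    #Fix the RNG state so the objective is deterministic for the optimizer
    #(the seed value 42 is arbitrary)
    npr.seed(seed)
    return np.linalg.norm(es - AR2(theta)[:, 0], ord=ordd)**ordd
thetaopt2 = scipy.optimize.fmin(func=g_fixed, x0=theta, ftol=1e-7, xtol=1e-7)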
ropt = AR2(thetaopt)
plt.plot(es, ropt,'b.')
plt.plot(ropt)
The results do not appear implausible; however, the distinctive shape of the ESTR trajectory probably indicates either that the process is not AR(2) or that market risk is not normally distributed.
#Test AR2 goodness of fit
import statsmodels.api as sm
ols_model = sm.OLS(Y,X)
ols_results = ols_model.fit()
print(ols_results.summary())
OLS Regression Results
==============================================================================
Dep. Variable: y R-squared: 0.723
Model: OLS Adj. R-squared: 0.721
Method: Least Squares F-statistic: 294.3
Date: Sun, 25 Oct 2020 Prob (F-statistic): 1.57e-63
Time: 18:12:01 Log-Likelihood: 951.00
No. Observations: 228 AIC: -1896.
Df Residuals: 225 BIC: -1886.
Df Model: 2
Covariance Type: nonrobust
==============================================================================
coef std err t P>|t| [0.025 0.975]
------------------------------------------------------------------------------
const -0.0568 0.020 -2.837 0.005 -0.096 -0.017
x1 0.4762 0.061 7.867 0.000 0.357 0.595
x2 0.4191 0.061 6.908 0.000 0.300 0.539
==============================================================================
Omnibus: 163.771 Durbin-Watson: 2.163
Prob(Omnibus): 0.000 Jarque-Bera (JB): 4027.524
Skew: 2.379 Prob(JB): 0.00
Kurtosis: 23.033 Cond. No. 413.
==============================================================================
Notes:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
The exceedingly small Prob(F-statistic) indicates that the AR(2) regression is a good fit for the short-term rate, and the estimated coefficients are highly significant.
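At the same time, the Omnibus and Jarque-Bera rows point to heavily non-normal residuals (skew 2.38, kurtosis 23.0), which supports the earlier suspicion that market risk is not normally distributed. The same statistics can be pulled from the fitted model directly (a short sketch using statsmodels, already imported above):
from statsmodels.stats.stattools import jarque_bera
#jarque_bera returns the JB statistic, its p-value, skew, and kurtosis
jb, jb_pv, skew, kurt = jarque_bera(ols_results.resid)
print("JB:", round(jb, 1), " p:", jb_pv, " skew:", round(skew, 2), " kurtosis:", round(kurt, 2))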