Friday, August 30, 2019

Business Forecasting: Using United Kingdom statistics on banks' consumer credit: gross lending figures from 1993-2003





Using United Kingdom statistics on banks' consumer credit: gross lending figures from 1993-2003


1. Examine the data, by plotting or otherwise, for seasonal effects, trends and cycles.


Year  Jan  Feb  Mar  Apr  May  Jun  Jul  Aug  Sep  Oct  Nov  Dec


14546145444054788457641454787




14418048746465745746455148468850557


15481457154855085656785587611655657657815855


1657654485776506586070666841648061764687161


1766088646787007477751675074718751


1875167786818087884178606487880445


1810784767888511874858087460105710450


000501006108411156108061077108806108571081070


00111141001106117118811711151158147184165


0017611651671841408511114711518144141


0010110


Source: http://www.statistics.gov.uk/statbase/TSDtimezone.asp (Accessed /04/0)


The figure shows banks' consumer credit gross lending on a monthly basis over the period 1993-2003. We will use the most recent data (2002-2003) to check the forecasts.




As seen from the graph above, the series moves upward over time, so the data show a fairly strong positive trend, and we might therefore expect high autocorrelation coefficients. The ACF supports this expectation: the coefficients are significantly different from zero, high at lag 1 and declining steadily out to lag 16.
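As a sketch of how such autocorrelation coefficients are computed, the short function below calculates the sample ACF. The input series is a simulated stand-in (a trend plus noise), since the original lending figures are not legible in this copy.

```python
import numpy as np

def acf(series, max_lag):
    """Sample autocorrelation coefficients for lags 1..max_lag."""
    x = np.asarray(series, dtype=float)
    x = x - x.mean()
    denom = float(np.dot(x, x))
    return [float(np.dot(x[:-k], x[k:]) / denom) for k in range(1, max_lag + 1)]

# Stand-in for the monthly gross-lending series: a strong upward trend
# plus noise, which is what produces high, slowly decaying autocorrelations.
rng = np.random.default_rng(0)
data = 4500 + 85 * np.arange(96) + rng.normal(0, 100, 96)
r = acf(data, 16)
```

A trending series like this gives an ACF that starts near 1 at lag 1 and decays slowly, which is the signature described above.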


We will fit the models to the data from 1994-2001.


2. Try a classical decomposition method on part of the data and check it against the rest of the data.



The ACF of the actual data shows that there is a trend or cycle or both, so we should look at the ACF of the first difference.



From the ACF of the first difference, we can use a 12-point moving average to smooth the data. We will then calculate the twelve-point moving average, the seasonal estimates, the trend-cycle estimate, and a forecast based on the trend-cycle and the seasonal ratios.
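A centred 12-point moving average (the usual choice for monthly data, so the smoothed value lines up with a calendar month) can be sketched as follows; the input is a stand-in series, not the lending data.

```python
import numpy as np

def centred_ma12(x):
    """2x12 centred moving average: the mean of two adjacent
    12-point averages, centred on a calendar month."""
    x = np.asarray(x, dtype=float)
    ma12 = np.convolve(x, np.ones(12) / 12, mode="valid")  # 12-point means
    return (ma12[:-1] + ma12[1:]) / 2                      # centre them

monthly = np.arange(36, dtype=float)   # stand-in for 3 years of monthly data
cma = centred_ma12(monthly)            # loses 6 points at each end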


After calculating the centred moving average and obtaining the seasonal estimates (ratios), we find the factors shown in the following table:


Sequence  Mean  Count


10.57


0.17


1.007


40.87


51.017


61.07


71.047


81.057


1.007


101.007


111.007


11.047


Then we de-seasonalise the data by dividing each observation by its factor to create a trend-cycle column. Next, we regress the trend-cycle against time to get a trend column.
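The de-seasonalising and trend-regression steps can be sketched as below. The factor values and trend parameters are invented for illustration (the paper's own figures are partly illegible in this copy):

```python
import numpy as np

# Assumed monthly factors and trend parameters, for illustration only.
factors = np.array([0.95, 0.93, 1.00, 0.98, 1.01, 1.02,
                    1.04, 1.05, 1.00, 1.00, 1.00, 1.04])
months = np.arange(96)
data = (4500.0 + 85.0 * months) * factors[months % 12]

# De-seasonalise: divide each observation by its month's factor...
trend_cycle = data / factors[months % 12]

# ...then regress the trend-cycle against time to get the trend line.
slope, intercept = np.polyfit(months, trend_cycle, 1)
trend = intercept + slope * months
```

Because the toy data were built from that exact trend, the regression recovers slope 85 and intercept 4500.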


Coefficients


Model  B (Unstandardized)  Std. Error  Beta (Standardized)  t  Sig.


1(Constant)45.16484.847 46.480.000


TIME84.571.510.8555.560.000


a. Dependent Variable: TRENCYC


From the coefficients table above, the trend equation is


Trend = 45.164 + 84.57 × Time


Then we can create a moving average forecasting column with the factors and trend column. The moving average forecasting equation is


Forecast = Trend × Factors


Therefore, it can be implied that


Forecast = (45.164 + 84.57 × Time) × Factors


Moreover, the errors will be calculated by deducting the forecast data from the actual data in order to see how accurate the forecast is.


Errors = Data - Forecast
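The equations above (Forecast = Trend × Factors, Errors = Data - Forecast) fit together as in this sketch, again with illustrative numbers rather than the paper's:

```python
import numpy as np

# Assumed factors and trend coefficients, for illustration only.
factors = np.array([0.95, 0.93, 1.00, 0.98, 1.01, 1.02,
                    1.04, 1.05, 1.00, 1.00, 1.00, 1.04])
time = np.arange(1, 25)                       # Time = 1, 2, ... as in the table
trend = 4500.0 + 85.0 * time                  # Trend = a + b * Time
forecast = trend * factors[(time - 1) % 12]   # Forecast = Trend * Factors
# Pretend "actual" data: the forecast plus a random disturbance.
data = forecast + np.random.default_rng(2).normal(0.0, 50.0, time.size)
errors = data - forecast                      # Errors = Data - Forecast
```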


The results are shown in the following table:


Time  Data  Centred Moving Average  Ratios  Sequence  Factors  Trend-cycle  Trend line  Forecast  Errors


14Jan1410.5446.40.588.0400.6


Feb800.147.6411.8874.66.7


Mar4871.00487.00418.4418.4680.76


Apr4460.84485.7148.5416.41.06


May546571.014610.8466.54410.646.8


Jun64571.0485.804451.14540.416.67


Jul74644804.000.871.04451.46455.664717.0-.0


Aug85514857.11.1481.05550.48455.664851.0661.8


Sep48407.001.01.00550.484704.84704.884.6


Oct104688461.500.4101.004688.004788.74788.7-100.7


Nov1150550.881.00111.00505.00487.0487.0178.1


Dec1575107.041.0511.045165.8457.455155.7516.5


15Jan14815107.040.10.55066.5041.81478.71.


Feb14457156.60.8710.150.08516.164664.81-.81


Mar155485585.61.041.005485.00510.5510.574.48


Apr16508554.0.540.850.0454.885188.8-0.8


May175654.461.0551.0150.0457.54.058.7


Jun185678547.61.0461.05566.67546.5557.86105.14


Jul1558755.61.0171.0457.15547.5576.86-18.86


Aug061165615.711.081.05584.7656.051.0.08


Sep15565664.50.81.00556.005716.665716.66-154.66


Oct576578.41.01101.00576.005801.05801.0-8.0


Nov57815817.670.111.005781.005885.85885.8-104.8


Dec45855586.541.0011.0456.8156.7608.5-5.5


16Jan5576545.60.710.56065.66054.05751.810.6


Feb65448607.70.00.1586.81618.455751.8-17.


Mar75776106.50.51.00577.006.806.80-44.80


Apr865061.581.040.8647.5607.166181.0168.8


May65866.1.0551.016516.861.56455.416.57


Jun06065.0.561.0514.716475.876605.-57.


Jul1706664.001.1071.04674.6560.68.644.6


Aug68416485.1.0581.056515.46644.5676.8-15.8


Sep648065.0.1.006480.00678.5678.5-48.4


Oct4617655.001.05101.00617.00681.0681.010.70


Nov564686655.70.7111.006468.00687.66687.66-4.66


Dec6716167.581.0611.046885.5868.0761.0-100.0


17Jan7668.0.410.5678.47066.7671.05-1.05


Feb8608868.10.880.1660.117150.76507.16-41.16


Mar646664.580.01.00646.0075.075.0-8.0


Apr40787050.671.0540.875.6571.44717.0608.4


May41700711.880.851.016.60740.807477.84-468.84


Jun474776.581.061.076.477488.16767.-164.


Jul4774.671.0871.047618.7757.57875.447.58


Aug4475167448.541.0181.057158.107656.8780.7-5.7


Sep457507588.70.1.00750.007741.7741.-1.


Oct46747710.51.0101.0074.00785.5785.5117.41


Nov4771774.50.5111.0071.0070.470.4-518.4


Dec488751707.61.1111.048414.474.0814.0746.


18Jan47516801.080.410.5711.588078.667674.7-158.7


Feb50778116.670.00.1806.64816.01748.4-1.4


Mar51868.81.01.0086.00847.7847.7115.6


Apr5818088.0.840.8846.481.78165.014.1


May587844.710.851.018145.548416.08500.5-7.5


Jun54884857.1.0561.08807.848500.448670.451.55


Jul55178580.881.0671.048775.68584.8088.118.81


Aug568606864.461.0081.05816.1866.1610.61-46.61


Sep574877.1.061.004.00875.51875.51488.4


Oct5887888.60.101.00878.00887.87887.87-10.87


Nov580886.671.01111.0080.008.8.-.


Dec60445811.581.0611.04081.7006.5866.8578.15


1Jan618107878.880.010.585.6800.4866.-5.


Feb684060.080.10.107.6175.084.5-15.5


Mar676714.11.071.00767.005.665.66507.4


Apr64880.670.640.801.744.01157.1-4.1


May65851104.040.151.01846.748.75.65-1011.65


Jun66874415.71.0561.0680.51.770.8171.0


Jul6785516.61.0471.0447.0857.0880.7-18.7


Aug6880651.1.081.0561.0681.4410165.51-5.51


Sep687770.81.01.0087.00765.80765.801.0


Oct7046086.500.6101.00460.00850.15850.15-0.15


Nov71105767.61.06111.001057.004.514.5166.4


Dec71045010117.001.011.0410048.0810018.871041.60.8


000Jan7501014.10.10.51001.581010.58.06-68.06


Feb7410061076.40.80.111017.5810187.5870.70755.0


Mar751081016.881.051.00108.001071.41071.4561.06


Apr7641071.710.040.85.671056.01014.17-806.17


May77111561044.11.0751.0111045.5410440.6510545.06610.4


Jun781080610465.1.061.01054.11055.011075.5170.4


Jul710771054.1.071.041058.651060.7110.74-60.74


Aug80108810610.81.081.05106.81106.7118.41-46.41


Sep81061061.70.1.0006.0010778.0810778.08-87.08


Oct81085710707.81.01101.0010857.001086.441086.44-5.44


Nov810810817.51.01111.00108.001046.801046.80-54.80


Dec8410701088.10.811.04107.11101.151147.40-76.40


001Jan8511141100.081.0110.5117.4711115.511055.758.7


Feb8610011170.040.00.111014.111.871011.88-168.88


Mar871106110.670.81.001106.001184.1184.-.


Apr8811711450.750.840.811456.11168.5811141.185.7


May811881161.881.051.0111780.01145.411567.470.5


Jun01171117.671.0061.01155.801157.11768.04.6


Jul111.04141.71161.651086.584.48


Aug151.051178.1011706.0111.1.6


Sep11581.001158.001170.71170.7-4.6


Oct41471.00147.0011874.711874.717.8


Nov51841.00184.00115.08115.0888.


Dec61651.04114.04104.44155.1710.8


Then, we can plot the decomposition forecasting against the actual data to illustrate what is happening.



The graph shows that the decomposition forecast fits the actual data well in the middle; at the beginning and the end there are some errors.


To compare the forecasts with the rest of the data, we extend the time line; the results are shown in the following table:


Rest Data  Time  Factors  Trend  Forecast  Errors


00Jan176.0070.5117.71151.40140.60


Feb1165.0080.111.151111.06851.4


Mar167.001.0016.5116.518.4


Apr184.001000.8180.8611.51850.75


May14085.001011.011465.158.87145.1


Jun111.00101.0154.581800.57-688.57


Jul147.00101.0416.411.158.71


Aug115.001041.051718.154.1-1.1


Sep18.001051.00180.65180.65186.5


Oct144.001061.001887.011887.01606.


Nov14.001071.00171.6171.6-.6


Dec1.001081.041055.71577.544.05


00Jan158.00100.51140.08148.071114.


Feb17.001100.114.4104.7.77


Sum of squared errors=1855.51


Mean-squared error (MSE)=1108.75


Root-mean-squared error (RMSE)=57.74


This RMSE is about 7% of the mean of the remaining data over the forecast period.
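The error summaries used throughout (sum of squared errors, MSE, RMSE, and RMSE as a percentage of the mean of the held-back data) can be reproduced with a small helper; the three-point example is made up purely to show the arithmetic.

```python
import numpy as np

def forecast_accuracy(actual, predicted):
    """Return SSE, MSE, RMSE and RMSE as a percentage of the actual mean."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    err = actual - predicted
    sse = float(np.sum(err ** 2))
    mse = sse / err.size
    rmse = mse ** 0.5
    return sse, mse, rmse, 100.0 * rmse / float(actual.mean())

# Tiny made-up example: the errors are 2, -3 and 1.
sse, mse, rmse, pct = forecast_accuracy([100, 110, 120], [98, 113, 119])
```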


The errors in the table above are illustrated in the graph below, which plots the remaining data against the forecast data for 2002-2003.



Although there are some significant errors, perhaps caused by economic uncertainty during that period, overall the graph below shows that the predicted data fit the real data reasonably well.



3. Try an autocorrelation approach on the same data and discuss the differences between the results and those from the decomposition approach. Comment on the different form of the equation for the forecast.



After the first difference, the ACF shows seven spikes, at lag1, lag, lag4, lag10, lag1, lag1, and lag14, so we will use these lagged data to calculate correlations.


The table below shows the correlations between the actual data and lagged data


Correlations


DATA  LAGS(DATA,1)  LAGS(DATA,)  LAGS(DATA,4)  LAGS(DATA,10)  LAGS(DATA,1)  LAGS(DATA,1)  LAGS(DATA,14)


DATAPearson Correlation1.5.6.5.6.7.40.54


Sig. (2-tailed)..000.000.000.000.000.000.000


N65868488


LAGS(DATA,1)Pearson Correlation.51.56.6.505.7.8


Sig. (2-tailed).000..000.000.000.000.000.000


N55868488


LAGS(DATA,)Pearson Correlation.6.561.57.8.5.4.58


Sig. (2-tailed).000.000..000.000.000.000.000


N868488


LAGS(DATA,4)Pearson Correlation.5.6.571.45.51.51.


Sig. (2-tailed).000.000.000..000.000.000.000


N868488


LAGS(DATA,10)Pearson Correlation.6.50.8.451.48.64.


Sig. (2-tailed).000.000.000.000..000.000.000


N86868686868488


LAGS(DATA,1)Pearson Correlation.7.5.5.51.481.48.45


Sig. (2-tailed)0.000.000.000.000.000..000.000


N84848484848488


LAGS(DATA,1)Pearson Correlation.40.7.4.51.64.481.47


Sig. (2-tailed).000.000.000.000.000.000..000


N88888888


LAGS(DATA,14)Pearson Correlation.54.8.58...45.471


Sig. (2-tailed).000.000.000.000.000.000.000.


N88888888


Correlation is significant at the 0.01 level (2-tailed).


The correlations between the data and the lagged data are all high, so we use regression on the chosen columns to find the relationship.


Variables Entered/Removed


Model  Variables Entered  Variables Removed  Method


1LAGS(DATA,14), LAGS(DATA,4), LAGS(DATA,10), LAGS(DATA,1), LAGS(DATA,), LAGS(DATA,1), LAGS(DATA,1).Enter


a. All requested variables entered.


b. Dependent Variable: DATA


Model Summary


Model  R  R Square  Adjusted R Square  Std. Error of the Estimate


10.800.610.57448.111


a. Predictors: (Constant), LAGS(DATA,14), LAGS(DATA,4), LAGS(DATA,10), LAGS(DATA,1), LAGS(DATA,), LAGS(DATA,1), LAGS(DATA,1)


This explains 5.7% of the variability and so will be a reasonable fit to the data.


Coefficients


Model  B (Unstandardized)  Std. Error  Beta (Standardized)  t  Sig.


1(Constant)1.58010.57 1.85.067


LAGS(DATA,1)1.658E-0.1.017.16.8


LAGS(DATA,).10.10.01.04.00


LAGS(DATA,4)1.70E-0.0.00.017.86


LAGS(DATA,10)-.11.10-.11-1.17.4


LAGS(DATA,1).5.105.5475.66.000


LAGS(DATA,1)6.E-0.14.05.471.6


LAGS(DATA,14).16.104.1811.88.064


a. Dependent Variable: DATA


From the coefficients table, the lag1, lag4, lag10, lag1, and lag14 data are not significant, and neither is the constant, so the regression should be repeated with only the lag and lag1 data, whose significance values are less than .05.


Variables Entered/Removed


Model  Variables Entered  Variables Removed  Method


1LAGS(DATA,1), LAGS(DATA,).Enter


a. All requested variables entered.


b. Dependent Variable: DATA


c. Linear Regression through the Origin


Model Summary


Model  R  R Square  Adjusted R Square  Std. Error of the Estimate


1..7.744.446


a. For regression through the origin (the no-intercept model), R Square measures the proportion of the variability in the dependent variable about the origin explained by regression. This CANNOT be compared to R Square for models which include an intercept.


b. Predictors: LAGS(DATA,1), LAGS(DATA,)


The fit has increased slightly, from .7% to 6%, by removing the lag1, lag4, lag10, lag1, and lag14 data as well as the constant.


Coefficients


Model  B (Unstandardized)  Std. Error  Beta (Standardized)  t  Sig.


1LAGS(DATA,).4.07.446.08.000


LAGS(DATA,1).651.07.5758.70.000


a. Dependent Variable: DATA


b. Linear Regression through the Origin


All lag data are significant, so


Forecast = 0.4 × (data lagged) + 0.651 × (data lagged1)
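A regression-through-the-origin on two lagged copies of a series, as used here, can be sketched as follows. The lag lengths in this copy of the paper are illegible, so lags 2 and 12 (and the simulated series) are assumptions for illustration only.

```python
import numpy as np

# Simulated trending series standing in for the lending data.
rng = np.random.default_rng(3)
y = 4500.0 + 85.0 * np.arange(120) + rng.normal(0.0, 60.0, 120)

k1, k2 = 2, 12                        # assumed lag lengths, not the paper's
Y = y[k2:]                            # rows where both lags are available
X = np.column_stack([y[k2 - k1:-k1],  # series lagged by k1
                     y[:-k2]])        # series lagged by k2

# Least squares with no intercept: regression through the origin.
coef, *_ = np.linalg.lstsq(X, Y, rcond=None)
fitted = X @ coef
```

The fitted values track the series closely because the lagged columns carry almost all of the trend information.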


The autocorrelation forecasts are shown in the following table:


Time  Data  lag  lag1  Forecast  Error


14Jan141


Feb80


Mar487


Apr44641.00


May5465780.00


Jun6457487.00


Jul746446.00


Aug85514657.00


Sep48457.00


Oct104688464.00


Nov11505551.00


Dec15748.00


15Jan14814688.0041.004746.0166.


Feb144571505.0080.004808.81-7.81


Mar15548557.00487.00554.54-4.54


Apr16508481.0046.00474.701.0


May17564571.004657.00508.865.6


Jun1856785485.00457.00564.4.08


Jul15587508.00464.005.8.18


Aug0611656.00551.006087.758.5


Sep15565678.0048.005740.48-178.48


Oct5765587.004688.005504.5858.4


Nov57816116.00505.0057.78-1.78


Dec45855556.0057.0058.8-8.8


16Jan5576576.00481.00566.8.78


Feb654485781.004571.00551.58-65.58


Mar75775855.005485.006141.08-68.08


Apr8650576.00508.005848.501.68


May6585448.0056.00607.16484.84


Jun060577.005678.0060.7-17.7


Jul17066650.005587.00644.7641.1


Aug6841658.006116.006871.01-0.01


Sep648060.00556.0066.510.65


Oct46177066.00576.00685.66.1


Nov564686841.005781.006766.6-8.6


Dec671616480.005855.006656.504.68


17Jan76617.00576.006787.6-5.6


Feb860886468.005448.00686.10-8.10


Mar6467161.00577.00601.0-655.0


Apr40786.00650.006.444.06


May417006088.00658.00657.5151.4


Jun4747646.0060.00666.4880.5


Jul4778.007066.007840.668.4


Aug447516700.006841.00750.44-14.44


Sep45750747.006480.0074.10.87


Oct46747.00617.00781.16-8.16


Nov47717516.006468.007510.1-11.1


Dec488751750.007161.0076.0787.1


18Jan4751674.006.007648.17-1.17


Feb507771.006088.00707.41.06


Mar51868751.00646.00707.84455.17


Apr581807516.0078.008105.174.7


May58777.00700.00778.8044.0


Jun5488486.00747.00856.8447.7


Jul55178180.007.008748.878.11


Aug56860687.007516.008504.57101.4


Sep574884.00750.0088.5040.50


Oct5887817.0074.00177.65-44.65


Nov5808606.0071.00858.580.4


Dec604454.008751.00754.14-0.14


1Jan618107878.007516.00874.51-617.51


Feb68480.0077.0086.7-468.7


Mar6767445.0086.0050.67176.


Apr64888107.008180.008884.15-51.15


May65851184.0087.00866.11-455.11


Jun66874767.00884.001016.0-6.0


Jul678588.0017.0081.6.64


Aug68808511.008606.008.8441.16


Sep687874.004.001051.-64.


Oct7046085.00878.0010006.6-546.6


Nov71105780.0080.00101.474.71


Dec71045087.00445.00105.-8.


000Jan750460.008107.0040.60.40


Feb7410061057.0084.0010005.10.0


Mar7510810450.00767.001045.87-11.87


Apr76450.0088.00.5-50.5


May77111561006.008511.004.0811.


Jun7810806108.00874.001118.66-77.66


Jul710774.0085.0010515.57.77


Aug80108811156.0080.00116.81-414.81


Sep810610806.0087.001145.7-1.7


Oct8108571077.00460.0010887.81-0.81


Nov81081088.001057.0011675.85-78.85


Dec84107006.0010450.0011151.68-44.68


001Jan85111410857.0050.001070.517.75


Feb86100108.001006.001108.51-185.51


Mar8711061070.00108.001175.5-61.5


Apr881171114.004.001074.075.


May81188100.0011156.001166.655.5


Jun011711106.0010806.001180.-.


Jul1111710771141.8887.1


Aug1511881088107.4017.60


Sep11581171061165.06-67.06


Oct4147110857174.7450.6


Nov5184151081501.747.6


Dec6165115810701157.7677.8


Then, we can plot the autocorrelation forecasting against the actual data to illustrate what is happening.



The graph shows that the autocorrelation forecast fits the actual data well in the middle; at the end there are some errors.


To compare the forecasts with the rest of the data, we extend the time line; the results are shown in the following table:


Time  Data  lag  lag1  Forecast  Error


00Jan71761471114106.5-07.5


Feb811651841001165.68-00.68


Mar16716511061748.1-6.1


Apr100184176117111.107.71


May101140851165118818.1086.77


Jun10111167117114.0-110.0


Jul10147184114555.76167.4


Aug10411514085151406.8-1.8


Sep1051811111581711.77.77


Oct10614414714715087.1-15.1


Nov1071411518414166.08-14.08


Dec10811816517.56-5.56


00Jan10158144176141.-6.


Feb110171411651470.75-118.75


Sum of squared errors=1071844.


Mean-squared error (MSE)=7650.16


Root-mean-squared error (RMSE)=874.76


This RMSE is about 6.6% of the mean for the rest of the data during the forecast period.


The errors in the table above are illustrated in the graph below, which plots the remaining data against the forecast data for 2002-2003.



Although there are some errors at the end, overall the graph below (covering 1994-2003) shows that the predicted data fit the real data reasonably well.




The graph above illustrates the actual data (red line), the decomposition forecast (blue line) and the autocorrelation forecast (green line). In some periods the decomposition model is the closer fit, in others the autocorrelation model. It is therefore relatively difficult to say which model is better, because both are quite close to the real data.


The errors of the two models can be compared in the following graph:



Decomposition Model


Forecast = (45.164 + 84.57 × Time) × Factors


Autocorrelation Model


Forecast = 0.4 × (data lagged) + 0.651 × (data lagged1)


These equations show that the decomposition model depends on a trend that runs with time plus seasonal effects captured by the factors, which means it can forecast many periods ahead. The autocorrelation model, by contrast, depends on the lag and lag1 data, so it can forecast only a limited number of periods ahead, because its equation needs lagged observations as inputs.


4. Try a Box-Jenkins ARIMA approach on the same data and compare your forecasts with the rest of the data as before.


The trend-cycle data from question two, with the seasonal effect removed, will be used for the Box-Jenkins analysis; it is shown in the chart below.



See the ACF and PACF of the trend-cycle data



The ACF shows a trend, so we take the first difference to remove it.


The partial ACF has one strong spike at lag one and dies away after it, so if a model is fitted to the data it should be an AR(1).



After the first difference, the ACF does not die away, so we try the partial ACF.



This PACF shows no evidence of a repeating pattern and does not die away, so we try the second difference.



The ACF and PACF of the second difference do not give anything helpful.


Therefore an AR(1) model is suggested by the partial ACF of the trend-cycle data, so we will fit an ARIMA(1,0,0) model to the trend-cycle data.
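The AR(1)-with-constant form reported below, Z(t) = c + φ·Z(t-1), can be approximated by conditional least squares on a lagged copy of the series. The data here are simulated (the paper's SPSS run uses Melard's exact-likelihood algorithm, so its estimates would differ slightly):

```python
import numpy as np

# Simulate an AR(1) with constant: Z_t = c + phi * Z_{t-1} + e_t.
rng = np.random.default_rng(4)
n, phi_true, c_true = 500, 0.8, 2000.0
z = np.empty(n)
z[0] = c_true / (1.0 - phi_true)          # start at the process mean
for t in range(1, n):
    z[t] = c_true + phi_true * z[t - 1] + rng.normal(0.0, 50.0)

# Conditional least squares: regress Z_t on [1, Z_{t-1}].
X = np.column_stack([np.ones(n - 1), z[:-1]])
(c_hat, phi_hat), *_ = np.linalg.lstsq(X, z[1:], rcond=None)
```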


Model Description


Variable TRENCYC


Regressors NONE


Non-seasonal differencing 0


No seasonal component in model.


Parameters


AR1 ________ value originating from estimation


CONSTANT ________ value originating from estimation


95.00 percent confidence intervals will be generated.


Split group number 1 Series length 6


No missing data.


Melard's algorithm will be used for estimation.


Termination criteria


Parameter epsilon .001


Maximum Marquardt constant 1.00E+0


SSQ Percentage .001


Maximum number of iterations 10


Initial values


AR1 .416


CONSTANT 808.40


Conclusion of estimation phase.


Estimation terminated at iteration number because


Sum of squares decreased by less than .001 percent.


FINAL PARAMETERS


Number of residuals 6


Standard error 5.65186


Log likelihood -74.7657


AIC 150.551


SBC 1508.658


Variables in the Model


B  SEB  T-RATIO  APPROX. PROB.


AR1 .8057 .0185 4.8788 .00000000


CONSTANT8144.7165017.450.7476 .00008


The constant term is significant so we can fit an AR(1) to the data with the constant.


The following new variables are being created


Name Label


FIT_1 Fit for TRENCYC from ARIMA, MOD_1 CON


ERR_1 Error for TRENCYC from ARIMA, MOD_1 CON


LCL_1 95% LCL for TRENCYC from ARIMA, MOD_1 CON


UCL_1 95% UCL for TRENCYC from ARIMA, MOD_1 CON


SEP_1 SE of fit for TRENCYC from ARIMA, MOD_1 CON


From the variables-in-the-model box above, the t-ratios of AR1 and the constant are far from 0 (greater than 2 in absolute value), and all the probabilities are significant.


So, the ARIMA Model will be


Z(t) = 8144.71650 + 0.8057 Z(t-1)


Because the ARIMA model was fitted to the trend-cycle data, the ARIMA forecasts need to be multiplied by the factors in order to get the real forecasts for comparison with the actual data:


Forecast = (8144.71650 + 0.8057 Z(t-1)) × Factors


After multiplying the forecasts by the factors and plotting them against the actual data, it can be seen that the forecast is very close to the data over the whole period, so this ARIMA model fits the actual data.



Then we check the model by plotting the residuals.



The graph shows that the trend has been removed and the residuals are relatively stationary. Checking their ACF and PACF does not suggest anything further.



All the ARIMA forecast figures are shown in the following table:


Time  Data  Forecast  Error  LCL  UCL


14Jan141777.48-78.40755.11554.1


Feb8040.1-46.5040.4155.8


Mar4874446.04.1067.1566.61


Apr446484.61-456.7476.7561.16


May54657460.854.0877.10576.5


Jun6457477.15180.54.85585.6


Jul7464510.58-410.1774.610.4


Aug8551481.666.46404.1576.7


Sep48506.71-17.71417.016486.4


Oct1046885050.-6.870.6160.0


Nov115054755.176.8575.4654.88


Dec157516.585..61.80


15Jan148146.11-156.6404.57640.8


Feb1445714664.78-10.0646.4605.84


Mar155485508.7401.704.066.44


Apr16508545.5-4.64456.76716.


May1756511.8176.4407.51648.


Jun185678578.0-117.74504.66864.11


Jul155875841.4-44.65447.05676.47


Aug06116567.8.77446.86605.70


Sep1556586.84-07.84460.1704.55


Oct576561.18150.844.48671.8


Nov5781580.8-8.846.57688.


Dec458556060.01-17.14647.7006.64


16Jan557654.7486.5448.76858.8


Feb654485556.16-118.8645.6785.8


Mar7577608.74-55.74484.04708.45


Apr8650570.70660.5146.868.7


May6586577.074.885.4761.65


Jun060667.4-6.76568.76778.17


Jul17066616.686.14778.717.75


Aug68417161.50-05.5640.768000.18


Sep64806546.0-66.0567.1776.61


Oct4617651.5404.655.6476.06


Nov56468640.86-47.865761.15810.56


Dec671616760.6085.0050.877680.


17Jan766564.54-181.6570.4808.75


Feb860886147.1-65.85576.75.65


Mar6466718.7-47.7558.67788.08


Apr40786157.414.76510.1746.60


May41700761.-604.4664.84874.5


Jun4747710.86.45578.1814.7


Jul47766.0675.0616.6685.08


Aug447516800.-470.406448.78808.1


Sep457507177.74.757.56856.7


Oct467475.14410.8665.48711.85


Nov4771746.-555.6767.116.6


Dec4887517701.871008.7865.48585.5


18Jan47516788.7-47.607.47588.8


Feb507770.66146.5676.4005.8


Mar51868064.8.776884.54.4


Apr58180811.58-11.8717.0558.47


May587846.44-17.46716.05.7


Jun54884808.4466.1665.85.4


Jul5517146.76-1.007615.574.67


Aug56860601.88-567.51758.4.40


Sep574815.11046.817015.4874.0


Oct588780.68-4.688040.710400.


Nov5808716.670.756.686.7


Dec6044561.1176.80775.10084.64


1Jan6181078610.5-5.84788.8104.


Feb6847758.77511.4746.4705.8


Mar676700.0746.87840.1101.7


Apr6488540.77-7.18555.771015.1


May658511086.5-56.667816.6810176.10


Jun66874858.6815.14741.54600.6


Jul67851006.57-177.488470.841080.6


Aug68801.6-85.6867.561066.7


Sep6878.5648.758158.5510517.6


Oct7046051.0-41.08771.41110.1


Nov71105744.44116.56854.710614.15


Dec7104501071.-501.76.64117.06


000Jan750510.540.4881.81110.80


Feb74100605.710.678815.111174.6


Mar751081061.76-18.7678.051141.47


Apr76410565.15-147.0601.061160.47


May7711156601.75158.8686.810686.


Jun78108061108.6-5.0680.471168.8


Jul710771068.8-187.8766.81176.


Aug8010881081.448.1715.1145.4


Sep8106100.6-414.6140.811500.40


Oct810857871.7885.86.0711051.48


Nov810810804.087.7064.51184.01


Dec841070117.16-541.50658.11018.


001Jan85111474.51474.18075.581145.00


Feb8610010610.4-645.510480.1118.5


Mar8711061058.510.47778.8118.4


Apr8811710785.1450.8185.611185.0


May8118811505.7088.4101.071571.4


Jun01171114.75-14.75105.85188.7


Jul11115.188.101.74167.15


Aug15165.85-610.11168.7158.14


Sep115811668.7-10.710488.561847.8


Oct4147115.56151.4410115.851475.7


Nov51841147.86-8.861168.15147.57


Dec6165167.8-608.5511577.8817.0


To compare the forecasts with the rest of the data, we extend the time line; the results are shown in the following table:


Time  Data  Forecast  Error  LCL  UCL


00Jan71761664.7410.8117.404614.04


Feb8116510.5-147.811764.6481488.006


Mar16714.08-715.081186.16145.887


Apr100184184.01714.810.18415087.04


May1011408511.546.1011477.5414541.5


Jun1011118.7-146.0511588.64461465.4704


Jul101471484.85.601.05156.046


Aug1041151700.5-46.811516.18614580.01


Sep105181587.1-58.11055.041511.007


Oct1061441448.7545.51116.8681480.666


Nov10714175.48-.481174.570414807.6


Dec10811400.47-78.411.66146.7884


00Jan101581646.741001.11780.440614844.664


Feb110171184.7741.471146.51456.161


Sum of squared errors=55.1


Mean-squared error (MSE)=4566.7


Root-mean-squared error (RMSE)=68.15


This RMSE is about 4.76% of the mean for the rest of the data during the forecast period.


Overall, the graph below (covering 1994-2003) shows that the predicted data fit the real data reasonably well.



5. Discuss the differences between the Box-Jenkins results and the other methods you have used. In particular comment on the different mathematical forms of the models selected.


Decomposition Model


Forecast = Trend × Factors


= (45.164 + 84.57 × Time) × Factors


Autocorrelation Model


Forecast = 0.4 × (Data lagged) + 0.651 × (Data lagged1)


ARIMA Model


Forecast (Z(t)) = (8144.71650 + 0.8057 Z(t-1)) × Factors


First, the decomposition model is a linear model that depends on a trend-cycle running with time and on seasonal effects captured by the factors, so it can forecast many periods ahead.


Second, the autocorrelation model is based on two explanatory variables, the lagged copies of the data, so it can forecast only a few periods ahead because its equation needs lagged observations as inputs.


Finally, the Box-Jenkins model takes the form of an ARIMA(1,0,0), i.e. AR(1), model, which depends on the constant term and Z(t-1) as well as the factors.


The graph below illustrates the comparisons of actual data and three forecasting data which are generated from decomposition model, autocorrelation model and ARIMA model.



It is very difficult to say which model is better: most of the forecasts are relatively close to the real data, and in some periods one model fits better while in other periods another does. We therefore cannot tell much difference among them.


Next, we will identify the most appropriate of the three models by considering the errors.


The errors of the three fitted models are compared below.



From the graph above, the autocorrelation model appears more appropriate than the other two because it is more accurate. Moreover, it involves only the lagged data and is not influenced by the factors as the other two models are. The actual data graph below shows a trend pattern but no obvious seasonal pattern, which favours the autocorrelation model. If the graph showed an obvious seasonal pattern, the decomposition model would be better. And because the ARIMA forecast uses the trend-cycle data, it still involves the seasonal effects (factors), which makes the ARIMA model less appropriate as well.



However, the autocorrelation model loses some data at the beginning and the end because its inputs must be lagged, and it cannot forecast as far into the future as the Box-Jenkins (ARIMA) model.


6. Comment on the impact of your choice of where to split the data in order to use some data to fit the model and other data to check it.


The data at the end (2002-2003) have been used to check the forecasts. From the graph below, at the beginning and middle of the held-back period the forecasts from the autocorrelation and ARIMA models are close to the data, but in other periods significant errors occur.



Then, trying a new split, we fit the models to the data from 1995 onwards and use the data from 1993-1994 for checking. The graph below shows the errors from the old split (held back at the end) against the errors from the new split (held back at the beginning).



The magnitudes of the errors from the two splits are similar over the whole period, but they do not coincide in time, so the choice of split point does have some impact. However, one should consider the pattern of the actual data and choose an appropriate forecasting method rather than worry primarily about where to split the data.
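The effect of the split point can be checked directly: fit the same simple trend model on each training window and score it on the corresponding held-out block. The series below is simulated for illustration; with the real data, the two RMSEs would play the roles of the "split at the end" and "split at the beginning" error series compared above.

```python
import numpy as np

# Simulated monthly series: linear trend plus noise.
rng = np.random.default_rng(5)
t = np.arange(120)
y = 4500.0 + 85.0 * t + rng.normal(0.0, 80.0, 120)

def holdout_rmse(train, test):
    """Fit a straight line on the training window, score it on the test window."""
    slope, intercept = np.polyfit(t[train], y[train], 1)
    pred = intercept + slope * t[test]
    return float(np.sqrt(np.mean((y[test] - pred) ** 2)))

rmse_end = holdout_rmse(slice(0, 96), slice(96, 120))    # hold back the end
rmse_start = holdout_rmse(slice(24, 120), slice(0, 24))  # hold back the start
```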


