Complete Time Series Analysis Overview | Reliance NIFTY 50

Ben Roshan · Published in Analytics Vidhya · 22 min read · Aug 26, 2020


Introduction

Time plays a very important role in business. Each second is money, and national and global economies depend on time. Time series analysis has become a widely used tool in analytics for understanding variables that depend on time.

What is time series analysis?

Time series analysis is a statistical technique that deals with time series data, often for trend analysis. Time series data means the data is indexed by particular time periods or intervals. Data is generally considered in three types:

  • Time series data: A set of observations on the values that a variable takes at different times.
  • Cross-sectional data: Data of one or more variables, collected at the same point in time.
  • Pooled data: A combination of time series data and cross-sectional data.

Source: Statistics Solutions

Reliance Industries

Reliance Industries Limited (RIL) is an Indian multinational conglomerate company headquartered in Mumbai, Maharashtra, India. Reliance owns businesses across India engaged in energy, petrochemicals, textiles, natural resources, retail, and telecommunications. Reliance is one of the most profitable companies in India, the largest publicly traded company in India by market capitalization, and the largest company in India as measured by revenue after recently surpassing the government-controlled Indian Oil Corporation. On 22 June 2020, Reliance Industries became the first Indian company to exceed US$150 billion in market capitalization after its market capitalization hit ₹11,43,667 crore on the BSE.

The company is ranked 106th on the Fortune Global 500 list of the world’s biggest corporations as of 2019. It is ranked 8th among the Top 250 Global Energy Companies by Platts as of 2016. Reliance continues to be India’s largest exporter, accounting for 8% of India’s total merchandise exports with a value of ₹1,47,755 crore and access to markets in 108 countries. Reliance is responsible for almost 5% of the government of India’s total revenues from customs and excise duty. It is also the highest income tax payer in the private sector in India.

Source: Wikipedia

Acknowledgements

  1. For clearly explaining the AUTO ARIMA model — Vopani
  2. For the wonderful visualization guidelines- Parul Pandey
  3. Prophet documentation — Facebook

Project summary

The project revolves around analyzing how the closing price and volume-weighted average price (VWAP) of Reliance’s stock change over time. It starts with preparing the data for visualization, continues with an extensive exploratory data analysis (including the impact of COVID-19 on Reliance’s stock), and ends with time series model building, using Prophet, the forecasting tool released by Facebook. You can check out the notebook to interact with the plots.

Objectives of the project

  1. Data Preparation
  2. Data Visualization
  3. Building a time series model

Dataset

The data is the price history and trading volumes of the fifty stocks in the NIFTY 50 index on the NSE (National Stock Exchange) of India. All datasets are at day level, with pricing and trading values split across .csv files for each stock, along with a metadata file containing some macro-information about the stocks themselves. The data spans from 1st January, 2000 to 31st July, 2020.

Import libraries
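The import code isn’t shown in the article; here is a minimal sketch of the libraries the rest of the notebook appears to rely on (the exact set is my assumption, inferred from the code used later):

#Core data handling and numerics
import numpy as np
import pandas as pd

#Static and interactive plotting
import matplotlib.pyplot as plt
import plotly.express as px
import plotly.graph_objects as go

#Stationarity testing and error metrics
from statsmodels.tsa.stattools import adfuller
from sklearn.metrics import mean_squared_error, mean_absolute_error

#Forecasting libraries
from pmdarima import auto_arima
from fbprophet import Prophet
from fbprophet.diagnostics import cross_validation, performance_metrics
from fbprophet.plot import plot_cross_validation_metric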

Import dataset

First let’s welcome our dataset

reliance_raw = pd.read_csv("../input/nifty50-stock-market-data/RELIANCE.csv")

#Print the shape of the dataset (rows, columns) and column information
print("The shape of the data is (row, column): " + str(reliance_raw.shape))
reliance_raw.info()

Dataset Details

Description of columns in the file:

  • Date — Date of the trade
  • Symbol — Name of the company (Reliance)
  • Series — We have only one series (EQ), which stands for Equity. In this series, intraday trading is possible in addition to delivery.
  • Prev Close — The prior day’s final price of the security when the market officially closed for the day.
  • Open — The price at the start of trading on the securities exchange or organized over-the-counter market.
  • High — Highest price at which the stock traded during the course of the trading day.
  • Low — Lowest price at which the stock traded during the course of the trading day.
  • Last — The most recent price at which the stock traded; just one price to consider when buying or selling shares.
  • Close — The price at the end of the trading session when the market closes for the day.
  • VWAP (Volume-weighted average price) — The ratio of the value traded to the total volume traded over a particular time horizon. It is a measure of the average price at which the stock traded over that horizon.
  • Volume — The number of shares traded during a given period of time.
  • Turnover — The total value of shares traded during the day (roughly price × volume).
  • Trades — The number of individual trades executed during the day.
  • Deliverable Volume — The quantity of shares that actually moved from one set of people (who held them in their demat accounts before today and are selling today) to another set of people (who purchased them).
  • %Deliverable — The fraction of traded volume that was actually delivered from one person’s demat account to another’s.

#Checking out the statistical measures
reliance_raw.describe()

Insights:

  • There are many outliers in our dataset: for several columns the maximum is about three times the 75th percentile.
  • The standard deviation and the other summary statistics are more or less similar across the price features.

Data preparation

In order for our machine learning algorithms to perform well, we need to cleanse our data. In our case, we don’t have much garbage to clean apart from a few null values. Let’s also extract a few more features from the time information to perform in-depth EDA.

We have to parse the Date column and set it as the index.
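The extraction code isn’t reproduced in the article; a sketch of what it likely looks like (the feature names Month, Week, Day and Day of week match the columns used later in the model section; assumes pandas >= 1.1 for isocalendar()):

#Parse the Date column and keep a working copy for analysis
reliance_analysis = reliance_raw.copy()
reliance_analysis['Date'] = pd.to_datetime(reliance_analysis['Date'])

#Extract calendar features for in-depth EDA
reliance_analysis['Month'] = reliance_analysis['Date'].dt.month
reliance_analysis['Week'] = reliance_analysis['Date'].dt.isocalendar().week  #use .dt.week on older pandas
reliance_analysis['Day'] = reliance_analysis['Date'].dt.day
reliance_analysis['Day of week'] = reliance_analysis['Date'].dt.dayofweek

#Set Date as the index while keeping the column for the plotly charts below
reliance_analysis = reliance_analysis.set_index('Date', drop=False)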

We now have a few new features assembled from this extraction. Let’s get rid of the null values by imputing them with the mean value.

#Imputing null values with the column means (numeric columns only)
reliance_analysis.fillna(reliance_analysis.mean(numeric_only=True), inplace=True)

#Checking for null values
reliance_analysis.isnull().sum()

Data Visualization

Exploratory data analysis is a core part of time series analysis. In this phase, we will see a lot of line graphs, which help us understand trend, seasonality and other concepts from time series analysis.

Distribution of stock measures

Let’s look at the histogram distributions of the stock measures such as Open, Close, High and Low, as well as VWAP.
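The plotting code isn’t shown in the article; a minimal matplotlib sketch for these histograms might look like this:

#Histograms of the main stock measures
cols = ['Open', 'High', 'Low', 'Close', 'VWAP']
reliance_analysis[cols].hist(bins=50, figsize=(12, 8))
plt.suptitle('Distribution of stock measures')
plt.show()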

Insights:

  • All the measures exhibit very similar distributions.
  • All the distributions are right-skewed.

Univariate Analysis

Let’s see the trend of a single factor over time.

VWAP over time

Now let’s see the volume-weighted average price over time. Please visit my Kaggle notebook to interact with the plots.
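The chart code isn’t reproduced here; a plotly express sketch along these lines would produce it:

#Interactive line chart of VWAP over the full history
fig = px.line(reliance_analysis, x='Date', y='VWAP', title='VWAP over the years')
fig.show()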

Insights:

  • There has been a gradual increase in the trend of VWAP over the years.
  • There were two spikes, in January 2008 and May-October 2009.
  • Shares of the Mukesh Ambani-controlled company were trading near their all-time high of about Rs 1,625, a price last seen over nine and a half years earlier in January 2008. Reliance Industries shares ended at Rs 1,621.15 on Wednesday. By contrast, the stock saw an intra-day high of Rs 1,649 and a previous closing high of Rs 1,610 in January 2008. News here

Uni-variate analysis of Open,Close,High and Low

Let’s see the Open, Close, High and Low measures over the years.

cols_plot = ['Open', 'Close', 'High', 'Low']
axes = reliance_analysis[cols_plot].plot(marker='.', alpha=0.5, linestyle='None', figsize=(11, 9), subplots=True)
for ax in axes:
    ax.set_ylabel('Daily trade')

Insights:

  • As we can see, all these parameters follow the same pattern without much deviation.
  • There are breaks between 2008–2012 and 2016–2020, signifying sudden dips in the market for Reliance.

Uni-variate analysis of Volume of share over the years

Let’s see the volume of Reliance shares that have been traded over the years.

ax = reliance_analysis['Volume'].plot()
ax.set_title('Volume over years', fontsize=30)
ax.set_xlabel('Year', fontsize=20)
ax.set_ylabel('Volume', fontsize=20)
plt.show()

Insights:

  • A huge number of shares were traded in 2020. This could be due to the rise of Jio and investments by top tech giants like Facebook and Google.
  • A thin phase lies between 2008 and 2016; there weren’t big volumes traded during those years.
  • Reliance has a strong foothold in India and has earned investors’ trust as a valuable company.

Bi-variate analysis

Let’s compare two factors over time

Open Vs Close over time

Our first bi-variate analysis involves open and close parameters
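The plot itself isn’t embedded here; a plotly sketch with the rangeslider mentioned in the insights below might look like this:

#Overlay Open and Close with a rangeslider for zooming
fig = go.Figure()
fig.add_trace(go.Scatter(x=reliance_analysis['Date'], y=reliance_analysis['Open'], name='Open'))
fig.add_trace(go.Scatter(x=reliance_analysis['Date'], y=reliance_analysis['Close'], name='Close'))
fig.update_layout(title='Open vs Close over time', xaxis_rangeslider_visible=True)
fig.show()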

Insights:

  • If you look closely (use the rangeslider to zoom; use my Kaggle notebook to interact with the plot), you can clearly see that most of the time the Open is higher than the Close.
  • But the difference is very subtle. If we took a moving average, we might not even notice it.
  • One place where you can notice a big difference is May 2, 2008, when the stock opened at 3026 and closed at 2674.5.

High vs Low

Now, let’s look at the High and Low parameters over the years.

Insights:

  • High vs Low follows the same path as Open vs Close, with the High a little above the Low price of the day.
  • Look at November 25 and 26, 2009: the low hit on the 25th was 2169, and the high recorded on the 26th was 1111, which shows a huge dip.

Moving average analysis

Moving average is a smoothing technique applied to time series to remove the fine-grained variation between time steps. The hope of smoothing is to remove noise and better expose the signal of the underlying causal processes. Moving averages are a simple and common type of smoothing used in time series analysis and time series forecasting. Calculating a moving average involves creating a new series whose values are the averages of raw observations in the original time series.

A moving average requires that you specify a window size, called the window width. This defines the number of raw observations used to calculate each moving average value. The “moving” part of the name refers to the fact that the window defined by the window width is slid along the time series to calculate the average values of the new series. You can read the article by Investopedia to get a clear picture.

In our project we compute the moving mean and standard deviation over 3, 7 and 30 day windows. Thanks to Vopani for this wonderful piece of code.

Glimpse of features created

We have created the moving average and standard deviation features for those windows across High, Low, Volume and VWAP; a sketch of the code is shown below.
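Vopani’s feature-engineering code isn’t reproduced in the article; here is a sketch of what it likely does, producing the lag-feature column names used in the model section later:

#Rolling mean/std features over 3, 7 and 30 day windows.
#shift(1) makes each feature use only past data, avoiding leakage.
reliance_lag = reliance_analysis.copy()
for col in ['High', 'Low', 'Volume', 'VWAP']:
    for window in [3, 7, 30]:
        rolled = reliance_lag[col].shift(1).rolling(window=window, min_periods=1)
        reliance_lag[f'{col}_mean_lag{window}'] = rolled.mean()
        reliance_lag[f'{col}_std_lag{window}'] = rolled.std()

#The std of a single observation is NaN; impute with column means
reliance_lag.fillna(reliance_lag.mean(numeric_only=True), inplace=True)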

High vs Low with mean and standard deviation lag — 30 days

In this article, I consider only the 30-day window, which keeps the noise lower. You can copy and edit this code to change the window as you wish. Here we compare High vs Low together with their moving mean and standard deviation.

Insights:

  • Considering the standard deviation (purple line), there is high deviation whenever there is a drop in the stock price.
  • With the help of the standard deviation we can see where the company faced losses.
  • Even though the lag curves still carry some noise, we get a clear idea of how the High and Low prices move over time.

Volume with mean and standard deviation lag — 30 days

Insights:

  • Here we have a neat representation of the moving average and standard deviation.
  • There is a lot of deviation as the volume series approaches 2020, and the corresponding mean is high compared to the standard deviation.

Wrath of COVID-19

Performance after lockdown-VWAP

The lockdown was a big blow for the Indian economy. Everyone from MNCs to street vendors was affected during the lockdown phase. As many companies switched to working from home, many managed to survive the race. Let’s see how Reliance performed during the lockdown.

fig = px.line(reliance_analysis, x='Date', y='VWAP',title='VWAP after lockdown', range_x=['2020-03-23','2020-06-30'])
fig.show()

Insights:

  • The lockdown starts off with the VWAP below 1000, but it gradually rises above 1500, reaching nearly 1718 by June 30, 2020.
  • This could be due to the interest shown by Facebook, Google and other companies in Jio’s shares.

Candlestick after Lockdown (Open,Close,High,Low)

Candlestick charts are used by traders to determine possible price movement based on past patterns. Candlesticks are useful when trading, as they show four price points (open, close, high and low) throughout the period of time the trader specifies. Here we look at the trend after the commencement of the lockdown phase.

reliance_analysis_lockdown = reliance_analysis[reliance_analysis['Date'] >= '2020-03-23']
fig = go.Figure(data=[go.Candlestick(x=reliance_analysis_lockdown['Date'],
                                     open=reliance_analysis_lockdown['Open'],
                                     high=reliance_analysis_lockdown['High'],
                                     low=reliance_analysis_lockdown['Low'],
                                     close=reliance_analysis_lockdown['Close'])])

fig.show()

Insights:

  • The stock’s performance was good initially, and there hasn’t been a huge downfall for Reliance since the lockdown began; overall we can see steady growth.
  • Consecutive dips were seen between May 11–14, 2020 and June 22–25, 2020.
  • There wasn’t much growth in the stock between May 11 and June 5, 2020, but big rises around June 12 and June 19 kept the growth on track.

Volume during Phase 1 Lockdown(25 March — 14 April) and Phase 2 Lockdown (15 April — 3 May)
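The plotting code for this comparison isn’t shown; a plotly express sketch filtered to the two phases (dates per the heading above) might look like this:

#Volume across Phase 1 and Phase 2 of the lockdown
fig = px.line(reliance_analysis, x='Date', y='Volume',
              title='Volume during Phase 1 and Phase 2 lockdowns',
              range_x=['2020-03-25', '2020-05-03'])
fig.show()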

Insights:

  • We can see a gradual fall during the first lockdown: the announcement was sudden, work from home was tedious to adopt in that situation, and every company, including Reliance, faced a short dip.
  • But Reliance pushed beyond its boundaries in the Phase 2 lockdown: the stock reached its 2020 peak during this phase as the company came up with various strategies and plans around WFH. Let’s check out what Reliance did in 2020 to reach that peak.

Major Corporate Announcements 2020(Till June 30)

Here we look at the major corporate announcements and how the press coverage affected the price in the stock market.

Insights:

  • Reliance’s slow start got a boost during the COVID lockdown. We can see a sluggish start that picked up after two announcements about Reliance’s support for the nation’s fight against COVID-19, after which the stock price gradually increased.
  • Investments from Facebook, Silver Lake, General Atlantic and many more also boosted the price and won shareholders’ trust in the company’s value.
  • Google’s investment is not shown here because the dataset only runs until June 30, and Google invested in Jio in July 2020.

Stationarity conversion

A common assumption in many time series techniques is that the data are stationary. A stationary process has the property that the mean, variance and auto-correlation structure do not change over time. Stationarity can be defined in precise mathematical terms, but for our purpose we mean a flat looking series, without trend, constant variance over time, a constant auto-correlation structure over time and no periodic fluctuations (seasonality).

IMPORTANT NOTE: Over time, the libraries have evolved to handle stationarity themselves, so we don’t actually need to convert the time series into stationary data. For study purposes, I explain how to check for stationarity and how to do the conversion in this article.

There are two ways you can check the stationarity of a time series. The first is by looking at the data: by visualizing it, it should be easy to identify a changing mean or variance. For a more rigorous assessment there is the Dickey-Fuller test. I won’t go into the specifics of this test, but if the ‘Test Statistic’ is less than the ‘Critical Value’, the time series is stationary; we can also check the p-value. You can read more about stationarity in this article. Below is code that will help you visualize the time series and test for stationarity.

Visually checking for stationarity

We can often tell whether data is stationary just by plotting it.

reliance_stationarity = reliance_analysis[['Close']].copy()

reliance_stationarity.plot()

From the plotted graph we can say that the data doesn’t have a constant mean, as there are many peaks and troughs, and the variance also differs at different stages of the data. So our data is not stationary. We can also test for stationarity mathematically with the ADF test.

Augmented Dickey Fuller Test

The Augmented Dickey-Fuller test is a type of statistical test called a unit root test. The intuition behind a unit root test is that it determines how strongly a time series is defined by a trend. There are a number of unit root tests, and the Augmented Dickey-Fuller test may be one of the most widely used. It uses an auto-regressive model and optimizes an information criterion across multiple different lag values. Read this amazing article by Machine Learning Mastery to understand more about it.

The null hypothesis of the test is that the time series can be represented by a unit root, that it is not stationary (has some time-dependent structure). The alternate hypothesis (rejecting the null hypothesis) is that the time series is stationary.

  • Null Hypothesis (H0): If failed to be rejected, it suggests the time series has a unit root, meaning it is non-stationary. It has some time dependent structure.
  • Alternate Hypothesis (H1): The null hypothesis is rejected; it suggests the time series does not have a unit root, meaning it is stationary. It does not have time-dependent structure.

We interpret this result using the p-value from the test. A p-value below a threshold (such as 5% or 1%) suggests we reject the null hypothesis (stationary), otherwise a p-value above the threshold suggests we fail to reject the null hypothesis (non-stationary).

  • p-value > 0.05: Fail to reject the null hypothesis (H0), the data has a unit root and is non-stationary.
  • p-value <= 0.05: Reject the null hypothesis (H0), the data does not have a unit root and is stationary.
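The test code itself isn’t shown above; a minimal sketch using statsmodels:

#Augmented Dickey-Fuller test on the Close series
result = adfuller(reliance_stationarity['Close'])
print('ADF Statistic:', result[0])
print('p-value:', result[1])
print('Critical Values:', result[4])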

Since our p-value is greater than 0.05, we fail to reject the null hypothesis, which states that our data is non-stationary.

Stationarity Conversion with shift()

Now let’s convert our non-stationary data to stationary data with the shift() method. Here we take a shift of 1 day, which means every record steps down one row, and we take the difference from the original data. Since our data has a trend, subtracting yesterday’s value from today’s removes that trend and leaves a series with a roughly constant mean, making the plot stationary.

reliance_stationarity['Close First Difference'] = reliance_stationarity['Close'] - reliance_stationarity['Close'].shift(1)
reliance_stationarity['Close First Difference'].plot()
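To verify the conversion worked, we can rerun the ADF test on the differenced series (a quick sketch along the same lines as above); if differencing removed the trend, the p-value should now fall well below 0.05:

#Re-test stationarity; drop the NaN created by shift(1)
result = adfuller(reliance_stationarity['Close First Difference'].dropna())
print('p-value after first differencing:', result[1])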

Model building Phase- Forecasting & Prediction

Here we arrive at the most important phase, the reason this project was built: the forecasting and prediction phase. Many might wonder whether the two terms mean the same thing. They are different. Here are a few points to justify that statement:

  • Prediction is concerned with estimating the outcomes for unseen data. For this purpose, you fit a model to a training data set, which results in an estimator f^(x) that can make predictions for new samples x.
  • Forecasting is a sub-discipline of prediction in which we are making predictions about the future, on the basis of time-series data. Thus, the only difference between prediction and forecasting is that we consider the temporal dimension.

For model building we consider the Close price feature, as it is very reliable for prediction, whereas VWAP is a derived/calculated value that doesn’t make much sense as a forecast target.

AUTO ARIMA-Autoregressive Integrated Moving Average

What is ARIMA ?

ARIMA, short for ‘Auto Regressive Integrated Moving Average’ is actually a class of models that ‘explains’ a given time series based on its own past values, that is, its own lags and the lagged forecast errors, so that equation can be used to forecast future values.

Any ‘non-seasonal’ time series that exhibits patterns and is not a random white noise can be modeled with ARIMA models.

An ARIMA model is characterized by 3 terms: p, d, q

where,

  • p is the order of the AR term
  • q is the order of the MA term
  • d is the number of differencing required to make the time series stationary

If a time series has seasonal patterns, then you need to add seasonal terms, and it becomes SARIMA, short for ‘Seasonal ARIMA’.

Why AUTO ARIMA ?

Although ARIMA is a very powerful model for forecasting time series data, the data preparation and parameter tuning processes end up being really time consuming. Before implementing ARIMA, you need to make the series stationary and determine the values of p and q using the ACF and PACF plots discussed below. Auto ARIMA makes this task really simple for us, as it eliminates steps like stationarity conversion and reading the values of p and q off the ACF and PACF plots.

The following explanation of ACF and PACF is for study purposes. It is not used in our models, since we are using auto ARIMA, where p and q are figured out by the model by selecting the best AIC. You can read more about auto ARIMA in this article by Analytics Vidhya.

ACF AND PACF PLOTS

ACF is the (complete) auto-correlation function, which gives us the auto-correlation of a series with its lagged values. We plot these values along with the confidence band and tada! We have an ACF plot. In simple terms, it describes how well the present value of the series is related to its past values.

PACF is a partial auto-correlation function. Basically instead of finding correlations of present with lags like ACF, it finds correlation of the residuals (which remains after removing the effects which are already explained by the earlier lag(s)) with the next lag value hence ‘partial’ and not ‘complete’ as we remove already found variations before we find the next correlation.

Autoregression Intuition

Consider a time series that was generated by an autoregression (AR) process with a lag of k. We know that the ACF describes the autocorrelation between an observation and another observation at a prior time step, including both direct and indirect dependence information. This means we would expect the ACF for an AR(k) time series to be strong up to a lag of k, with the inertia of that relationship carrying on to subsequent lag values and trailing off at some point as the effect weakens. We know that the PACF only describes the direct relationship between an observation and its lag. This suggests there would be no correlation for lag values beyond k. This is exactly the expectation of the ACF and PACF plots for an AR(k) process.

Moving Average Intuition

Consider a time series that was generated by a moving average (MA) process with a lag of k. Remember that the moving average process is an auto-regression model of the time series of residual errors from prior predictions. Another way to think about the moving average model is that it corrects future forecasts based on errors made on recent forecasts. We would expect the ACF for an MA(k) process to show a strong correlation with recent values up to the lag of k, then a sharp decline to low or no correlation; by definition, this is how the process was generated. For the PACF, we would expect the plot to show a strong relationship to the lag and a trailing off of correlation from the lag onwards. Again, this is exactly the expectation of the ACF and PACF plots for an MA(k) process.

Summary

From the auto-correlation plot we can tell whether we need to add MA terms, and from the partial auto-correlation plot we can tell whether we need to add AR terms. We plot the ACF and PACF and select p and q at the points where the correlation line first hits the error/zero band.
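The plots themselves aren’t reproduced here; a statsmodels sketch that would draw them on the differenced (stationary) series:

#ACF and PACF plots on the stationary series
from statsmodels.graphics.tsaplots import plot_acf, plot_pacf

diff = reliance_stationarity['Close First Difference'].dropna()
plot_acf(diff, lags=40)
plot_pacf(diff, lags=40)
plt.show()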

Building AUTO ARIMA model

For building the auto ARIMA model, let’s first split our training and testing data. For time series analysis we step back from the usual random train-test split: our data involves time, and random splitting would mix dates across the train and test sets, which makes the model vulnerable to data leakage. So we split based on dates. Here we split the training and validation data at the year 2019: data before 2019 is training data and data from 2019 onward is validation data.

train = reliance_lag[reliance_lag.Date < "2019"]
valid = reliance_lag[reliance_lag.Date >= "2019"]

#Consider the new features created as exogenous features
exogenous_features = ['High_mean_lag3', 'High_mean_lag7', 'High_mean_lag30',
                      'High_std_lag3', 'High_std_lag7', 'High_std_lag30',
                      'Low_mean_lag3', 'Low_mean_lag7', 'Low_mean_lag30',
                      'Low_std_lag3', 'Low_std_lag7', 'Low_std_lag30',
                      'Volume_mean_lag3', 'Volume_mean_lag7', 'Volume_mean_lag30',
                      'Volume_std_lag3', 'Volume_std_lag7', 'Volume_std_lag30',
                      'VWAP_mean_lag3', 'VWAP_mean_lag7', 'VWAP_mean_lag30',
                      'VWAP_std_lag3', 'VWAP_std_lag7', 'VWAP_std_lag30',
                      'Month', 'Week', 'Day', 'Day of week']

Training and prediction

Let’s train our model with auto ARIMA. The model selects the p, d and q parameters of a normal ARIMA model by itself, guided by the AIC (Akaike information criterion). AIC estimates the relative amount of information lost by a given model: the less information a model loses, the higher the quality of that model. So auto ARIMA prefers the parameters that incur the least information loss.
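The training code isn’t reproduced in the article; here is a sketch following the pmdarima API (the Forecast_ARIMAX column name matches the plotting code below; note that pmdarima < 2.0 uses the exogenous keyword, while newer versions use X):

#Fit auto ARIMA with the lag features as exogenous regressors (ARIMAX)
model = auto_arima(train.Close, exogenous=train[exogenous_features],
                   trace=True, error_action='ignore', suppress_warnings=True)

#Forecast over the validation horizon and store it next to the actuals
valid = valid.copy()  #avoid SettingWithCopyWarning on the slice
valid['Forecast_ARIMAX'] = model.predict(n_periods=len(valid), exogenous=valid[exogenous_features])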

From the auto ARIMA results we got p = 2 and q = 3, and the AIC score is 47648.135.

Plotting the forecasted values with the actual data

Let’s plot the results and compare them with actual values

valid[["Close", "Forecast_ARIMAX"]].plot()

Fantastic, we have got a more or less similar result. Our model has captured a good amount of information from the training dataset. Let’s look at the performance metrics.

Performance metrics-RMSE and MAE

Here we quantify how well our model performed with the help of RMSE and MAE. We hope the error will be very low.

print("RMSE of Auto ARIMAX:", np.sqrt(mean_squared_error(valid.Close, valid.Forecast_ARIMAX)))print("\nMAE of Auto ARIMAX:", mean_absolute_error(valid.Close, valid.Forecast_ARIMAX))

We got RMSE and MAE scores of about 37 and 26, which are pretty good scores for time series data. Auto ARIMA does it again!

Fb Prophet

Facebook open-sourced Prophet, a forecasting tool available in both Python and R. It provides intuitive parameters which are easy to tune. Even someone who lacks deep expertise in time-series forecasting models can use it to generate meaningful predictions for a variety of problems in business scenarios.

IMPORTANT NOTE: The input to Prophet is always a dataframe with two columns: ds and y (we need to rename our columns). The ds (datestamp) column should be of a format expected by Pandas, ideally YYYY-MM-DD for a date or YYYY-MM-DD HH:MM:SS for a timestamp. The y column must be numeric and represents the measurement we wish to forecast. Also, for Prophet we use the cleaned data, not the stationarity-converted data, as Prophet takes care of stationarity internally. Read the documentation by Facebook here.
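The fitting code isn’t reproduced here; a minimal sketch of the Prophet workflow (the reliance_future name matches the prediction code below):

#Prophet expects columns named ds (date) and y (target)
reliance_prophet = reliance_analysis[['Date', 'Close']].rename(columns={'Date': 'ds', 'Close': 'y'}).reset_index(drop=True)

#Fit the model on the cleaned (non-differenced) data
model = Prophet()
model.fit(reliance_prophet)

#Extend the frame 365 days into the future
reliance_future = model.make_future_dataframe(periods=365)
reliance_future.tail()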

We have created the future dataframe for 365 days, as you can see above; we now have dates into 2021, and we are going to predict the stock prices for those dates.

Prediction of future values

### Prediction of future values
reliance_prediction=model.predict(reliance_future)

reliance_prediction.tail()

We have predicted values for all the dates, even those in 2021. yhat is the predicted value, while yhat_lower and yhat_upper give the band within which the prediction can deviate. Let’s look at the plot to get a better understanding.

Forecast Plot

#Forecast plot
model.plot(reliance_prediction)

From the plot we can see the predicted values as the blue line, which follows most of the actual trend; after 2020 the blue line extends into 2021 with the future prices, and we can be fairly confident the upward trend continues in 2021. If Reliance lands a big deal in 5G, their stock prices could go through the roof.

Forecast Components

#Forecast components
model.plot_components(reliance_prediction)

From Prophet’s model components we get the trend, weekly and yearly plots. We can see the stocks were up during the months of March through January.

Cross validation in prophet

Prophet includes functionality for time series cross validation to measure forecast error using historical data. This is done by selecting cutoff points in the history and, for each of them, fitting the model using data only up to that cutoff point. We can then compare the forecasted values to the actual values. This figure illustrates a simulated historical forecast on the Peyton Manning dataset, where the model was fit to an initial history of 5 years and a forecast was made on a one-year horizon.

This cross validation procedure can be done automatically for a range of historical cutoffs using the cross_validation function. We specify the forecast horizon (horizon), and then optionally the size of the initial training period (initial) and the spacing between cutoff dates (period). By default, the initial training period is set to three times the horizon, and cutoffs are made every half a horizon.

The output of cross_validation is a dataframe with the true values y and the out-of-sample forecast values yhat, at each simulated forecast date and for each cutoff date. In particular, a forecast is made for every observed point between cutoff and cutoff + horizon. This dataframe can then be used to compute error measures of yhat vs. y.

Here we do cross-validation to assess prediction performance on a horizon of 365 days, starting with 1095 days of training data in the first cutoff and then making predictions every 180 days.

#Cross validation for the parameter days
reliance_cv=cross_validation(model,initial='1095 days',period='180 days',horizon="365 days")

Performance metric

Here we check the performance of our model with the root mean squared error and plot it.

#Checking the parameters
reliance_performance=performance_metrics(reliance_cv)
reliance_performance.head()


#Plotting for root mean squared metric
fig=plot_cross_validation_metric(reliance_cv,metric='rmse')

From the results we can see that the RMSE lies between 0 and 500, which is not great, not terrible, considering that stock prediction is highly uncertain.

Conclusion

From this analysis, we can understand all the factors that come into play while working on a time-series project. Time series results aren’t the supreme way to get accurate answers, as anything can happen to worsen or brighten the stock market, and the predictions aren’t 100% reliable. But we can easily understand and learn from past data, associate it with time, and figure out why an event happened.

Coming to the context of our project, we saw how Reliance rose from the ashes after being gravely hit in the mid-2000s; now the company is in its prime, partnering with Facebook and Google and adding telecommunications, retail, technology, petroleum and more to its portfolio. I believe Reliance will become an Indian brand identity in the international market.

Thank you for reading this article. You can find my other articles here.
