CHAPTER 3 STOCK PRICE PREDICTION USING MODIFIED GENETIC ALGORITHM AND SIMULATED ANNEALING


3.1 Introduction

Traditional neural networks suffer from a few shortcomings, such as the local-minimum problem and the difficulty of selecting a suitable topology. Numerous meta-heuristic methods have therefore been developed to enhance their performance. Meta-heuristics denotes a family of methods designed to provide sensible solutions to complex problems within a specified time; although the optimal solution is not guaranteed, they explore the required search space effectively. The term meta-heuristics was first introduced by Fred Glover (1986), and these methods can be described as high-level strategies that supervise the procedures by which solutions to concrete optimization problems are found. The class includes Tabu Search, Iterated Local Search, Genetic Algorithms, Ant Colony Optimization, Fuzzy Logic, Particle Swarm Optimization and many more. All of them are iterative procedures that guide subordinate operations in order to discover near-optimal solutions (Osman Ibrahim and Gilbert Laporte, 1996). This chapter first presents two meta-heuristic methods, the genetic algorithm and the simulated annealing approach. It then describes the proposed neural network, namely the Modified Genetic Algorithm Simulated Annealing (MGASA), for predicting stock prices.

3.2 Genetic Algorithm

The Genetic Algorithm (GA) is a heuristic technique for optimization, where the goal is to discover the best possible solution to a problem. Many candidate solutions are evolved continuously using a technique inspired by Darwinian evolution, or natural selection. After every generation, the most competent individuals reproduce and accumulate the best genetic information, becoming more stable and versatile. GAs thus work with a population of candidate solutions over successive generations.
Each individual in a given generation represents a possible solution to the optimization problem. To handle problem-specific tasks, several modifications have been considered, including hybrids between genetic algorithms and other optimization heuristics. An appealing quality of GAs is that they are easy and efficient to implement and parallelize. Another notable contrast with other optimization concepts is that GAs do not operate directly on the parameters to be optimized; instead, an encoded version of the parameters is used. They have been applied successfully to an enormous number of diverse fields, ranging from molecular structure optimization, game theory, protein folding, timetabling problems, container loading optimization, the traveling salesman problem, electronic circuit design, economics and criminal identification to forecasting foreign exchange and stock market prices.

Basics of Genetic Algorithm

Genetic algorithms (Holland, 1975) belong to a family of evolution-inspired algorithms that also includes evolution strategies (Rechenberg, 1973) and evolutionary programming. Basically, a GA is a coarse model of natural evolution, with a population changing its genetic makeup so as best to survive an environment. Just as natural evolution works at the level of chromosomes rather than of the organisms themselves, in a genetic algorithm a candidate solution is represented by an encoded version of the parameters (the genotype) rather than by the parameter values themselves (the phenotype). The chromosomes frequently take the form of bit strings (i.e., strings of 1s and 0s); every bit position (locus) in the chromosome has two possible values (alleles), 0 and 1. This representation as a linear string of genes is called an individual. As in nature, one does not work with a single individual; instead a set of them, the population P(t), is maintained. The search proceeds by manipulating populations of chromosomes, transforming one such population into another.
After the initial population is created, each individual is assigned a number that measures its fitness for survival: the chromosomes encode candidate solutions to the problem, and an individual's fitness depends on how well it solves that problem. Two individuals are selected, with probabilities related to their standing in the current population, and mated to create new individuals. This step is iterated until enough individuals have been produced to form the next population. The new individuals may additionally be changed by a mutation, which occurs with a fixed but small probability. The new population is then evaluated, and further populations are created until the termination condition is met. Fig 3.1 shows the general scheme of the GA process.

Fig 3.1 General Scheme of GA Process

Because of the randomness involved in the creation of every new population, genetic algorithms do not belong to the class of convergent algorithms that satisfy the condition

    ||x_{i+1} - x*|| <= C ||x_i - x*||^p,    (3.1)

where x* is the exact solution of the problem, x_i is the approximate solution in the i-th iteration step, C > 0 and p > 0. Indeed, no convergent algorithms are available for most of the problems to which genetic algorithms are typically applied.

Individual

An individual I is a linear string of genes of fixed length l, and each of its genes can take values from a certain alphabet A = {a_1, ..., a_k}. The most frequently used alphabet is the set of binary numbers, A_binary = {0, 1}; in this case the search space consists of 2^l possible solutions. This discrete representation has both advantages and disadvantages: on the one hand it is valuable for problems with discrete variables, such as combinatorial problems, but on the other hand it implicitly imposes lower and upper bounds on continuous variables of the solution. Fig 3.2 shows the schematic representation of an individual.

Fig 3.2 Schematic representation of an individual
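As a concrete illustration of the genotype/phenotype distinction above, the following sketch represents an individual as a bit string and decodes it to a bounded continuous value. The chromosome length, bounds and population size here are illustrative choices, not values taken from the text:

```python
import random

def random_individual(l):
    """Genotype: a linear string of l genes over the binary alphabet {0, 1}."""
    return [random.randint(0, 1) for _ in range(l)]

def decode(individual, lo, hi):
    """Phenotype: map the bit string to a real value in [lo, hi].

    The discrete encoding implicitly bounds the continuous variable:
    only 2**l distinct values in [lo, hi] are representable.
    """
    as_int = int("".join(map(str, individual)), 2)
    return lo + (hi - lo) * as_int / (2 ** len(individual) - 1)

# P(t): a set of individuals rather than a single one
population = [random_individual(8) for _ in range(20)]
```

With l = 8 the search space holds 2^8 = 256 candidate solutions per variable.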

Initialization

The first population is generally chosen at random, by assigning every gene in every individual an element of the alphabet A according to uniformly distributed random numbers.

Fitness Function

The performance of the individual strings is measured by a fitness function, a problem-specific, user-defined heuristic. After every cycle the individuals are given a performance measure derived from the fitness function, and the fittest individuals in the population are admitted into the following iteration. A higher fitness number corresponds to a better solution and a lower one to a worse solution. To compute this number, the individual must first be decoded into its phenotype, which is then fed into the problem-specific fitness function f(I). The evaluation of the fitness function must be feasible for every individual I, i.e. for every conceivable solution in the search space.

Genetic Operators

Selection

The main intent of the selection operator is to give priority to the best individuals (those closer to the solution) by permitting them to pass on their genes to the next generation, while barring the worst-fit individuals from entering it. The selection operator works on the current set of chromosomes, and the worth of each individual depends on its fitness. A critical parameter to be tuned in a GA is the selection pressure, the strength with which the best individuals are favoured for the upcoming generation. If it is set too low, the rate of convergence towards the optimal solution is too slow; if it is set too high, the system is prone to getting stuck in a local optimum because of the loss of diversity in the population. The selection scheme thus controls the selection pressure, which in turn determines how fast the algorithm converges. The selection method should be chosen so that the algorithm converges to the global optimum without being caught in local optima, and so that it exploits the knowledge contained in the existing population. Several selection techniques are in regular use, and they can be divided into two groups. The first is the proportionate group, which picks parents according to their fitness relative to the fitness of the other chromosomes. One of the most widely used proportionate techniques is roulette wheel selection, in which the probability of selecting a given chromosome is proportional to its fitness. It is a medium-pressure process: fitter individuals have a higher chance of being chosen as parents, but there is no guarantee. The second group comprises rank selection techniques, which select parents according to their rank within the population. Rank selection sorts the population and assigns each individual a fitness according to its ranking: the worst individual has fitness 1 and the best has K. Selection probabilities are then allocated to chromosomes according to these positions. A few other kinds of selection methods exist, such as elitist selection, truncation selection and competition (tournament) selection.

Crossover

After the selection operator picks chromosomes for reproduction, the crossover step creates a new, hopefully better, child. The crossover operator combines the genes of two or more parents to create better children; the idea behind this operator is that the exchange of information between good chromosomes will produce even better offspring. Crossover happens during evolution according to a user-defined crossover probability. Crossover operators come in numerous types: single-point crossover, two-point crossover, uniform crossover, arithmetic crossover, heuristic crossovers and so on.
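The two selection families described above can be sketched as follows. This is a minimal illustration of proportionate (roulette wheel) and rank selection, not the exact procedure used later in this chapter:

```python
import random

def roulette_select(population, fitness):
    """Proportionate selection: pick one parent with probability
    proportional to its fitness. Fitter individuals are favoured,
    but none is guaranteed to be chosen."""
    total = sum(fitness)
    pick = random.uniform(0, total)
    running = 0.0
    for individual, f in zip(population, fitness):
        running += f
        if running >= pick:
            return individual
    return population[-1]

def rank_select(population, fitness):
    """Rank selection: the worst individual gets rank fitness 1, the
    best gets K; selection probability then follows the ranks."""
    ranked = sorted(zip(fitness, range(len(population))))
    ranks = [0] * len(population)
    for rank, (_, idx) in enumerate(ranked, start=1):
        ranks[idx] = rank
    return roulette_select(population, ranks)
```

Rank selection flattens large fitness differences, which lowers the selection pressure relative to raw roulette wheel selection.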
The operator is chosen according to the way the chromosomes are encoded. In single-point crossover, a crossover point is chosen randomly and the gene segments following that point are exchanged between the two parents, as shown in Fig 3.3.

Fig 3.3 Single Point Crossover

In two-point crossover, two points are chosen at random and the genes between these points are swapped, as shown in Fig 3.4.

Fig 3.4 Two Point Crossover

In uniform crossover, each gene of the child is taken from one of the two parents at random, as shown in Fig 3.5.

Fig 3.5 Uniform Crossover
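The three crossover variants of Figs 3.3-3.5 can be sketched as follows, assuming the bit-string encoding introduced earlier:

```python
import random

def single_point_crossover(p1, p2):
    """Swap the gene segments after one randomly chosen point (Fig 3.3)."""
    point = random.randint(1, len(p1) - 1)
    return p1[:point] + p2[point:], p2[:point] + p1[point:]

def two_point_crossover(p1, p2):
    """Swap the genes between two randomly chosen points (Fig 3.4)."""
    a, b = sorted(random.sample(range(1, len(p1)), 2))
    return (p1[:a] + p2[a:b] + p1[b:],
            p2[:a] + p1[a:b] + p2[b:])

def uniform_crossover(p1, p2):
    """Assign each gene of the child from either parent at random (Fig 3.5)."""
    c1, c2 = [], []
    for g1, g2 in zip(p1, p2):
        if random.random() < 0.5:
            c1.append(g1); c2.append(g2)
        else:
            c1.append(g2); c2.append(g1)
    return c1, c2
```

Each operator preserves the gene pool: at every locus the two children together carry exactly the two parental alleles.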

Mutation

Besides crossover, chromosomes in the population are subjected to mutation. Mutation prevents the algorithm from being caught in a local minimum, and it also recovers lost genetic material as well as injecting randomly distributed genetic information. Mutation is traditionally considered a background search operator: where crossover is meant to exploit current solutions and find better ones, mutation should help in the exploration of the entire search space. It maintains genetic diversity in the population by introducing new genetic structures, randomly changing the building blocks of existing ones. Mutation supports the escape from local-minimum traps, keeps the gene pool well stocked and, along with these merits, helps assure ergodicity. There are numerous variants of mutation for the various types of representation. For a binary representation, a basic mutation consists of flipping the value of every gene with a small probability, usually taken to be about 1/L, where L is the length of the chromosome. Care should be taken in setting this probability, however, as an ill-chosen rate can harm the search and make the algorithm converge to a local optimum.

Replacement is the last phase of each reproductive cycle. Since new individuals have been created, the algorithm must decide which of the parents should be replaced. Naturally, if a created offspring performs badly, no parent need be replaced; in fact, the individuals with the highest fitness are often moved into the next generation unchanged. After replacement, the chromosomes of the new generation are evaluated again, and the cycle is iterated until the termination condition is met.
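A bit-flip mutation with the conventional 1/L rate described above can be sketched as:

```python
import random

def mutate(individual, p=None):
    """Bit-flip mutation: transpose each gene with a small probability,
    conventionally about 1/L for a chromosome of length L."""
    if p is None:
        p = 1.0 / len(individual)
    return [1 - g if random.random() < p else g for g in individual]
```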
Termination may be subject to a maximum number of generations, a time limit, or no detectable change in the objective function for a few generations.

3.3 Simulated Annealing

Simulated annealing (SA) is a random-search technique that exploits an analogy between the way a metal cools and freezes into a low-energy crystalline structure (the annealing process) and the search for a minimum in a more general setting; it forms the basis of an optimization technique for combinatorial and other problems. The technique of simulated annealing has attracted significant attention as suitable for large-scale optimization problems, particularly ones where the desired global extremum is hidden among many local extrema. Kirkpatrick, Gelatt and Vecchi (1983) and Černý (1985) noted that the objective function of an optimization problem can be regarded as the free energy of a material: while the optimal solution corresponds to a perfect crystal, a crystal with defects corresponds to a local optimum. The temperature plays the role of a control parameter that must be accurately scheduled to obtain a perfect crystal state.

Basic Principles

Simulated annealing is so named because of its analogy to the process of physically annealing solids, in which a crystalline solid is heated and then allowed to cool slowly until it achieves its most regular possible crystal lattice configuration, free of crystal defects. If the cooling schedule is sufficiently slow, the final configuration results in a solid of superior structural integrity. Simulated annealing establishes the connection between this thermodynamic behaviour and the search for the global minimum of a discrete optimization problem. The characteristic algorithmic property of simulated annealing is that it provides a means to escape local optima by permitting hill-climbing moves (i.e., moves that worsen the objective function value). Fig 3.6 shows the flowchart of simulated annealing.

Fig 3.6 Simulated Annealing Process

At every iteration of a simulated annealing algorithm applied to a discrete optimization problem, the objective function produces values for two solutions: the current solution and a newly selected candidate. Improving solutions are always accepted, while a fraction of non-improving (inferior) solutions are accepted in the hope of escaping local optima in the search for the global optimum. The probability of accepting non-improving solutions depends on a temperature parameter, which is typically non-increasing with each iteration of the algorithm. The Kirkpatrick algorithm generalized the Metropolis Monte Carlo algorithm to incorporate a temperature schedule for effective searching. The laws of thermodynamics state that at temperature t, the probability of an energy increase of magnitude δe is given by

    p(δe) = exp(-δe / (k t)),    (3.2)

where k is a constant known as Boltzmann's constant. In the Metropolis algorithm, the simulation determines the new energy of the system. If the energy has decreased, the system moves to the new state. If the energy has increased, the new state is accepted with the probability returned by the above equation. In practice, a worse state is accepted when

    exp(-c / t) > r,    (3.3)

where c is the change in the evaluation function, t is the current temperature and r is a random number between 0 and 1. The probability of accepting a worse move is thus a function both of the temperature of the system and of the change in the cost function. A certain number of iterations are carried out at each temperature, after which the temperature is decreased; this is continued until the system freezes into a steady state.

One of the parameters of the algorithm is the cooling schedule. The cooling schedule controls the whole process by defining the decrease of the temperature while the optimization runs; it governs the algorithm's steps from the beginning until convergence, and is a keystone of SA performance. A cooling schedule that is too slow will take an excessive number of iterations to reach the global minimum and, if the total number of cycles is limited, an unsuccessful search can result. On the other hand, an excessively fast cooling schedule can get the algorithm caught in a local minimum, or even in any flat region of the search space. One common cooling process is the logarithmic schedule:

    T_k = α T_0 / ln(1 + k),    (3.4)

where T_k is the value of the temperature at iteration k, T_0 is the initial temperature and α is the cooling speed parameter. This schedule has been shown to ensure convergence to the global minimum when α = 1.
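The acceptance test of equation (3.3) can be sketched as follows; improving moves are accepted unconditionally, worsening moves probabilistically:

```python
import math
import random

def accept(delta, t):
    """Metropolis rule: always accept an improving move (delta <= 0);
    accept a worsening move of size delta only when exp(-delta / t)
    exceeds a uniform random number r in [0, 1)."""
    if delta <= 0:
        return True
    return math.exp(-delta / t) > random.random()
```

At high t even large worsening moves pass frequently; as t shrinks, the test becomes nearly greedy.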
On the other hand, it constitutes such a slow cooling schedule that it is rarely used in practice. Although the use of values of α smaller than 1 can accelerate the process, logarithmic cooling schedules are generally considered slow. Fig 3.7 illustrates the relative speed of the three cooling schedules.

Fig 3.7 Cooling schedules

Another common cooling schedule, more often used in practice, is the geometric schedule:

    T_k = α T_{k-1},    (3.5)

In this type of schedule, α must be smaller than but near to 1. The most typical values of α lie between 0.8 and 0.99; smaller values can result in excessively fast cooling. Finally, another familiar cooling schedule is the exponential one:

    T_k = T_0 exp(-α k^(1/N)),    (3.6)

where N is the dimensionality of the model space. These schedules are very fast during the first iterations, but the speed of the exponential decay can be reduced by using values of α smaller than 1. Exponential cooling schedules are well suited to temperature-dependent perturbation schemes.
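The three cooling schedules can be sketched as follows. The parameterizations are assumptions drawn from standard SA practice, since the original equations were not reproduced in the source text:

```python
import math

T0 = 100.0  # illustrative initial temperature

def logarithmic(k, alpha=1.0):
    """Slow schedule (defined for k >= 1); alpha = 1 corresponds to the
    classical form for which convergence guarantees exist."""
    return alpha * T0 / math.log(1 + k)

def geometric(k, alpha=0.9):
    """T_k = alpha * T_{k-1}, i.e. T_k = T0 * alpha**k;
    alpha typically lies between 0.8 and 0.99."""
    return T0 * alpha ** k

def exponential(k, alpha=1.0, N=2):
    """Fast early decay; N is the dimensionality of the model space."""
    return T0 * math.exp(-alpha * k ** (1.0 / N))
```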

3.4 Genetic Algorithm and Simulated Annealing

The performance of a genetic algorithm can be enhanced by combining it with another strategy to obtain optimal results for an optimization problem. While GAs are very popular as general-purpose optimizers over low-dimensional spaces, they suffer from several weaknesses. One is premature convergence, which is often found in GA optimization. This is a consequence of the high dependency on crossover: the strength of crossover can bring stagnation as the population becomes more homogeneous, while the mutation rate is so low that it is difficult to move the search to a new area. GA optimizers have the additional problem that they are poor at hill climbing, which manifests itself as low precision on real-valued problems. To overcome these issues, coupling the GA with a single-point (local) search algorithm such as simulated annealing (SA), to form a hybrid GA, can be advantageous. The hybrid algorithm fuses the best features of the genetic algorithm (searching large areas of the solution space) and simulated annealing (thoroughly refining a local area). The genetic algorithm creates a new solution set using the crossover and mutation operators, and simulated annealing then further polishes the best solution found by the genetic algorithm. The essential idea is to use the genetic operators to move the search quickly close to the global minimum/maximum, which simulated annealing then refines to a near-optimal solution using the annealing procedure. Some candidate solutions are accepted and some are rejected, according to a predefined acceptance rule. The schematic representation of the genetic algorithm and simulated annealing hybrid is shown in Fig 3.8.

Fig 3.8 Schematic Representation of GA-SA Process

In the simulated annealing phase of the genetic-simulated annealing algorithm, the simulated annealing performs a biased random walk that samples the objective function in the space of free variables. It can migrate through a sequence of local extrema in search of a global solution, and can recognize when the global extremum has been attained.

3.5 Modified Genetic Algorithm and Simulated Annealing (MGASA)

The conventional GA-SA takes significantly more execution time than either the GA or the SA alone. To overcome this shortcoming, this study improves the conventional GA-SA algorithm. The enhanced algorithm changes how the SA is applied to the GA population: the SA improves only the optimal individual of the GA population, not all individuals. With this change, the algorithm saves substantially more execution time than the conventional GA-SA. Additionally, the MGASA is capable of attaining better results than other improvement strategies. The MGASA algorithm comprises two stages: the GA stage and the SA stage. In the MGASA algorithm, the GA first creates the initial population randomly. The GA then evaluates the initial population and operates on it using three genetic operators to produce a new population. After every generation, the GA sends the best individual to the SA in Phase II for further improvement. Having finished improving the individual, the SA sends it back to the GA for the next generation. This procedure continues until the termination condition of the algorithm is met.

Phase 1: Optimal genetic algorithm process

The GA stochastically produces the initial population and then operates on it using three genetic operators to produce new populations. Following the pseudo code of the genetic algorithm, several GA design decisions must be resolved, such as the choice of decision variables, the population size, the generation of the initial population, the evaluation of the population, the encoding and decoding schemes for chromosomes, the choice of genetic operators and the termination condition. Fig 3.9 illustrates the steps involved in the GA process.

Objective Function

The objective is to minimize the forecasting error of the oil and gas stock price. The objective function can be written as, for example, the mean squared forecasting error:

    f = (1/n) Σ (A_i - P_i)^2,    (3.7)

where n is the population size, A is the actual price and P is the predicted value.

GA Phase
Step 1: Initialize population and temperature.
Step 2: Evaluate the population.
Step 3: Repeat
        Apply selection operator
        Apply crossover operator
        Apply mutation operator
        Evaluate population
    Until termination condition is met

Fig 3.9 Modified Genetic Algorithm

Generate Initial Population

The initial population is produced randomly; each initial weight is randomly generated between -1 and +1.

Fitness Function

The GA assesses the population by means of the fitness function: an individual with a higher fitness value has a higher chance of being selected into the following generation. Usually the fitness of a string is defined with respect to the objective function, for example

    F(i) = 1 / (1 + f(i)),    (3.8)

so that a lower forecasting error yields a higher fitness.

Selection Procedure

We utilize truncation selection for selecting the population. In truncation selection, individuals are sorted according to their fitness, and only the best individuals are selected as parents; the truncation threshold indicates the proportion of the population to be selected. We then utilize a binary truncation selection for producing new offspring through the genetic operators: two members of the population are chosen at random, their fitness values are compared, and the better one according to fitness becomes one parent. The other parent is chosen by the same technique.

Genetic Operators

Here, we utilize two-point crossover and one-point mutation as genetic operators.

Replacement

The present population is replaced by the newly produced offspring, which forms the next generation.

Termination Criteria

If the number of generations equals the maximum generation number, then stop.

Phase 2: Optimal simulated annealing process

In the MGASA methodology, the GA sends its best individual to the SA for enhancement. After the optimal individual of the GA has been enhanced, the SA passes it back to the GA for the subsequent generation. This procedure continues until the termination condition is met. Fig 3.11 illustrates the steps involved in simulated annealing.

Initial Temperature

The SA accepts new states based on the Metropolis criterion, which is a stochastic procedure. The criterion is given by P(e) = min{1, exp(-δe/t)}, where δe = f(s_i) - f(s_j) is the difference between the objective function values of the new state s_i and the present state s_j, and t is the present temperature. If δe is less than zero, the new state is retained and the present state is discarded. Otherwise, the new state may still be retained if the Boltzmann probability, P_b = exp(-δe/t), is greater than a random number in the range 0 to 1. At a high temperature, the SA can accept a new state with a higher objective value than the previous one with a substantial probability; as cooling proceeds, such states are accepted with ever smaller probability.

Fig 3.10 Working Procedure of MGASA Algorithm
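The MGASA control flow described above, where the SA refines only the best GA individual each generation, can be outlined as a skeleton. `ga_step` and `sa_refine` are hypothetical placeholder callables standing in for the GA and SA phases; they are not part of the source text:

```python
def mgasa(fitness, init_pop, ga_step, sa_refine, generations=50):
    """MGASA skeleton (illustrative): each generation the GA produces a
    new population, then the SA refines only the best individual, which
    re-enters the population for the next generation."""
    population = init_pop()
    for _ in range(generations):
        population = ga_step(population)          # selection/crossover/mutation
        best = max(population, key=fitness)       # best individual of the GA
        refined = sa_refine(best)                 # SA improves only this one
        population[population.index(best)] = refined
    return max(population, key=fitness)
```

Refining one individual per generation, rather than the whole population, is what saves execution time relative to the conventional GA-SA.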

SA Phase
Step 4: Select the best solution from the GA.
Step 5: Evaluate the objective function.
Step 6: Repeat
        Generate a new neighbourhood solution
        Estimate the fitness function
        Accept the new neighbourhood solution based on the Metropolis criterion
    Until the maximum number of solutions for a single iteration is reached
Step 7: Decrease the temperature using the annealing schedule.
Step 8: Repeat steps 6-7 until the stopping criterion is met.

Fig 3.11 Modified Simulated Annealing Algorithm

Cooling Rate

The performance of the SA is closely tied to the cooling rate. To preserve both the consistency and the search efficiency of the SA, a suitable cooling rate must be maintained. If the cooling rate at each temperature change counter is too low, the SA will incur excessive computation time; on the other hand, if too fast a cooling rate is used, the likelihood of getting trapped in a local minimum is higher. In general, the value of the cooling rate may be set by sensitivity analysis. The cooling schedule is given as follows:

    T_k = γ T_{k-1},    (3.9)

where T_k and T_{k-1} are the temperatures at times k and k-1, and γ is the cooling rate, between 0 and 1.

Number of Transitions at a Temperature

In the search methodology of the SA, the state transition at each temperature change counter depends only on the new state and the current state. Hence, the search procedure of the SA can be regarded as a Markov chain, whose length is defined by the number of moves permitted at the current temperature. The number of moves at each temperature is defined as

    L = α R,    (3.10)

where R is the maximum number of repetitions at a particular temperature and α is a constant.

Generation of neighborhood structure

The aim of neighbourhood structure generation is to change the present state randomly within a feasible range of its current value. There are many different ways to generate the neighbourhood structure. In the present work, the non-uniform transformation approach of the GA is adopted, with some adjustment, as the generation method: if a uniform random number distributed in the range [0, 1] is less than the mutation probability Pm, the present decision variable is permitted to change its value randomly; otherwise, it is not.

Termination Condition
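The cooling step of equation (3.9) and the non-uniform neighbourhood generation just described can be sketched as follows; the perturbation bound `step` is an assumed detail, not given in the text:

```python
import random

def cool(t_prev, gamma=0.95):
    """Geometric cooling, T_k = gamma * T_{k-1}, with 0 < gamma < 1."""
    return gamma * t_prev

def neighbour(state, pm=0.1, step=0.05):
    """Perturb each decision variable only when a uniform random number
    in [0, 1] falls below the mutation probability Pm; `step` bounds the
    size of the random change (an illustrative choice)."""
    return [x + random.uniform(-step, step) if random.random() < pm else x
            for x in state]
```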

The algorithm runs until the last generation or until a sufficiently low RMSE value is reached.

3.6 Empirical Results

In this research, we use technical indicators as input to the neural network. Technical indicators are quantities that forecast the future performance of stocks under a given set of economic conditions. In general, technical indicators are used for short-term strategies. They typically rely on mathematical calculations that take into consideration the current relationship between the stock price and the general movement of the market in which the stock is traded. These indicators are calculated from fundamental quantities: closing price, opening price, high price and low price, all of which characterize the stock throughout the trading session. The network weights are updated using the gradient descent algorithm. After training the network, the weights are frozen in order to test it: the testing data sets are provided as input to the neural network, and the predicted close price obtained as the output is compared with the actual (desired) close price.

Data Description

The examined data sample comprises daily returns from January 2010 to December 2013 of three stock market indices: BSE Oil and Gas, CNX-100 and CNX-NIFTY. Data samples are collected from the historical values of the NSE-NIFTY and BSE (Bombay Stock Exchange) data. The total data set is split into two parts, one for training the network and the remainder for testing its performance.
In this experiment, the stock index data from January 2010 to December 2012 is used to train the network, and the data from January 2013 to December 2013 is used to test the performance of the proposed approach.

Performance Metrics

The following performance measures are used to gauge the performance of the trained forecasting model on the test data: the Mean Squared Error (MSE), Root Mean Squared Error (RMSE), R-squared (R^2), Adjusted R-squared (R_A^2) and the Hannan-Quinn Information Criterion (HQ). Table 3.1 lists these performance measures together with their formulas.

Mean Squared Error (MSE)

Mean Squared Error (MSE) is a significant criterion for evaluating the performance of a prediction approach. It measures the difference between the predicted and the actual values: a lower MSE value indicates better performance, while a higher MSE value represents poorer performance.

Root Mean Squared Error (RMSE)

Root Mean Squared Error (RMSE) is a commonly used indicator for estimating the performance of a prediction model. The RMSE is obtained by summing the squared differences between the actual and the predicted values, dividing by the number of samples used, and finally taking the square root of the result.

R-Squared (R^2)

R^2 is a measure of the statistical relationship between two quantities. It is commonly employed to describe the part of the stock movement in the market relative to the movement of the related index.

Adjusted R-Squared (R_A^2)

Adjusted R^2 is a widely used measure of model performance against actual (known) values. Since the error anticipated by a model tends to increase as the model complexity increases, the adjusted R^2 is considered better than R^2.

Hannan-Quinn Information Criterion (HQ)

HQ is a criterion for model selection; the idea is to choose the model with the least HQ value.

Table 3.1: Performance Criteria and the related formulas
(standard definitions; y_i = real value, ŷ_i = estimated value, ȳ = mean value, n = number of samples, k = number of model parameters, SSR = Σ (y_i - ŷ_i)^2)

Performance Criteria                       Formula
Mean Squared Error (MSE)                   MSE = (1/n) Σ (y_i - ŷ_i)^2
Root Mean Squared Error (RMSE)             RMSE = sqrt((1/n) Σ (y_i - ŷ_i)^2)
R-Squared (R^2)                            R^2 = 1 - SSR / Σ (y_i - ȳ)^2
Adjusted R-Squared (R_A^2)                 R_A^2 = 1 - (1 - R^2)(n - 1)/(n - k - 1)
Hannan-Quinn Information Criterion (HQ)    HQ = n ln(SSR/n) + 2 k ln(ln n)

Results

Fig 3.12 presents the results for the returns (close price) for the year 2013 of the BSE Oil and Gas index, and Table 3.2 shows the error rate of the proposed approach using various performance measures.

Fig 3.12 BSE Predicted Close Price Value

Table 3.2 Error Rate of BSE

Test Criteria                              Error Rate (%)
Mean Squared Error                         3.45
Root Mean Squared Error (RMSE)             5.48
R-Squared (R^2)                            0.17
Adjusted R-Squared (R_A^2)                 1.15
Hannan-Quinn Information Criterion (HQ)

Fig 3.13 presents the results for the returns (close price) for the year 2013 of the NSE CNX-100 index, and Table 3.3 shows the error rate of the proposed approach using various performance measures.

Fig 3.13 Predicted Close Price of CNX-100 Stock Index

Table 3.3 Error Rate of CNX-100 Stock Index

Test Criteria                              Error Rate (%)
Mean Squared Error                         3.12
Root Mean Squared Error (RMSE)             4.24
R-Squared (R^2)                            0.12
Adjusted R-Squared (R_A^2)                 1.89
Hannan-Quinn Information Criterion (HQ)    -5.47

Fig 3.14 presents the results for the returns (close price) for the year 2013 of the NSE CNX-NIFTY index, and Table 3.4 shows the error rate of the proposed approach using various performance measures.

Fig 3.14 Predicted Close Price of CNX-NIFTY Stock Index

Table 3.4 Error Rate of CNX-NIFTY Stock Index

Test Criteria                              Error Rate (%)
Mean Squared Error                         3.25
Root Mean Squared Error (RMSE)             3.98
R-Squared (R^2)                            0.09
Adjusted R-Squared (R_A^2)                 1.12
Hannan-Quinn Information Criterion (HQ)    -5.47
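The criteria reported in Tables 3.2-3.4 can be computed from the actual and predicted close prices as follows. The definitions are the standard ones, assumed to match Table 3.1; `k`, the number of model parameters, is a hypothetical input:

```python
import math

def metrics(actual, predicted, k=1):
    """Evaluation criteria for a forecast (standard definitions):
    MSE, RMSE, R^2, adjusted R^2 and the Hannan-Quinn criterion."""
    n = len(actual)
    mean = sum(actual) / n
    ssr = sum((a - p) ** 2 for a, p in zip(actual, predicted))  # residual sum of squares
    sst = sum((a - mean) ** 2 for a in actual)                  # total sum of squares
    mse = ssr / n
    rmse = math.sqrt(mse)
    r2 = 1 - ssr / sst
    r2_adj = 1 - (1 - r2) * (n - 1) / (n - k - 1)
    hq = n * math.log(ssr / n) + 2 * k * math.log(math.log(n))
    return {"MSE": mse, "RMSE": rmse, "R2": r2, "R2_adj": r2_adj, "HQ": hq}
```

A lower MSE, RMSE or HQ indicates a better model, while R^2 values closer to 1 indicate a closer fit.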