Global Journal on Advancement in Engineering and Science (GJAES)


ISSN (Print): Global Journal on Advancement in Engineering and Science (GJAES), Volume 2, Issue 1. Editor-in-Chief: Prof. Sudip Mandal. Published by: Global Institute of Management & Technology, Krishna Nagar, West Bengal, India.

GLOBAL JOURNAL on ADVANCEMENT in ENGINEERING and SCIENCE (GJAES), ISSN (Print): , March-2016, Volume 2, Issue 1. In association with Global Institute of Management and Technology, Palpara More, N.H.-34, Bhatjangla, Krishna Nagar, Nadia, W.B., India, Pin

About GJAES
Global Institute of Management & Technology, a unit of NCDTE, publishes the Global Journal on Advancement in Engineering and Science (GJAES), Volume 2, Issue 1, an annual, non-profit international journal, in March 2016. In 2015 the journal obtained an International Standard Serial Number (ISSN) for its printed version. The scope of the journal includes, but is not limited to, any field of Electronics, Electrical, Computer Science, Mechanical, Civil, Applied Science, Humanities and Management that would be of interest to academics, practicing engineers, management personnel, and active researchers and students. Papers reporting original research, reviews of recent technology and science, and innovative applications and projects from all parts of the world are invited within the scope of this publication. Papers for publication in GJAES are selected through peer review to ensure originality, timeliness, relevance and readability. The Advisory and Editorial Board comprises renowned academicians and industry experts from the fields of Engineering, Science and Management. All manuscripts should be submitted electronically by following the guidelines and steps given on the journal website. Manuscripts should follow the style of the journal and are subject to both review and editing. GJAES will have the right to print, publish, create derivative works, and sell the work throughout the world, including all rights in and to all revisions, versions or subsequent editions of the work in all languages and media. Reproduction of an article in any form, print or electronic, is permitted only with proper prior permission and acknowledgement. There is no processing fee for article publication in GJAES, Volume 2, Issue 1.
Published articles will be freely available on the website of Global Institute of Management & Technology for wide circulation of knowledge among interested researchers and readers. If a hardcopy of the journal is needed by any individual or institute, it can be obtained by paying Rs. 200/- per copy to the Editor. For any query, send your email to Prof. Sudip Mandal, Editor-in-Chief, GJAES, at sudip.mandal007@gmail.com. The Editor-in-Chief of GJAES is in no way responsible for the statements made in the articles, and the views expressed are exclusively those of the contributing author(s).

Publication Ethics and Copyright Information
The papers in the journal reflect the authors' opinions and, in the interest of timely dissemination, are published as presented, without change. Their inclusion in the publication does not necessarily constitute endorsement by GJAES. GJAES does not take any responsibility for the results or the authenticity of the contents of the articles. The author(s) confirm that the article has not been published elsewhere, nor is it under consideration by any other publisher. The authors further warrant and represent that the work does not violate any proprietary or personal rights of others (including, without limitation, any copyrights or privacy rights); that the work is factually accurate and contains no matter libelous or otherwise unlawful; and that they have substantially participated in the creation of the work, which represents their original work, sufficient for them to claim authorship. The authors further warrant and represent that they have no financial interest in the subject matter of the work, or any affiliation with an organization or entity with a financial interest in the subject matter of the work, other than as previously disclosed to the association. The transfer of copyright gives GJAES the right to develop, promote, distribute, sell, and archive a body of scientific works throughout the world. The author hereby grants and assigns to GJAES all rights in and to the author's work and contributions to the work. In connection with this assignment, the author acknowledges that GJAES will have the right to print, publish, create derivative works, and sell the work throughout the world, including all rights in and to all revisions, versions or subsequent editions of the work in all languages and media.
The author(s) reserve the following rights: all proprietary rights other than copyright, such as patent rights; the right to use all or part of the article, including tables and figures, in future works of their own, provided that proper acknowledgment is made to the publisher as copyright holder; and the right to make copies of the article for his/her own use, but not for sale. No part of this material protected by copyright notice may be reproduced or utilized by any means without proper written permission and citation.
Published by: Global Institute of Management & Technology, Krishna Nagar, Nadia, West Bengal, India. Copyright © 2016 by Global Journal on Advancement in Engineering & Science (GJAES), ISSN (Print): . All rights reserved.

President: Mr. Naresh Chandra Das, Chairman, GIMT, Krishnagar.
Secretary & Director General: Prof. (Dr.) Sankar Kumar Moulick, Director, GIMT, Krishnagar.
Editor-in-Chief: Prof. Sudip Mandal, Department of Electronics & Communication Engineering, GIMT, Krishnagar.
Associate Editor-in-Chief: Prof. Sujit Majumdar, Dean, GIMT, Krishnagar.

Advisory Editorial and Reviewer Board:
Prof. (Dr.) Santunu Das, Department of Mechanical Engineering, Kalyani Govt. Engg. College, Kalyani.
Prof. (Dr.) Sakti Pada Ghosh, Ex-Principal, NIT Durgapur, Durgapur.
Prof. (Dr.) Goutam Saha, Department of Information & Technology, NEHU, Shillong.
Prof. (Dr.) Rajat Kumar Pal, Department of Computer Science, University of Calcutta, Kolkata.
Prof. (Dr.) Arijit Saha, Department of Electronics & Communication Engineering, B P Poddar Institute of Management & Technology, Kolkata.
Prof. (Dr.) Asok Kumar, Principal, MCKV Institute of Engineering, Howrah.
Prof. (Dr.) Samik Chakraborty, Department of Electronics & Communication Engineering, Indian Maritime University, Kolkata Campus, Kolkata.
Prof. (Dr.) Angsuman Sarkar, Department of Electronics & Communication Engineering, Kalyani Govt. Engg. College, Kalyani.
Prof. (Dr.) K. Sundararaj, Dean, Department of Aeronautical Engg., SNS College of Technology, Coimbatore.

From the Desk of the Editor-in-Chief
Greetings to everyone! It is my real honor and pleasure to inform you that the second volume of the Global Journal on Advancement in Engineering and Science (GJAES) is ready for publication at the end of May 2016, in association with Global Institute of Management & Technology. The objective of GJAES is to motivate students, researchers, academicians and industrialists by creating a stage to share their research and project output in the fields of Engineering, Science and Management. As Editor-in-Chief of GJAES, I personally want to thank and congratulate all authors for submitting valuable contributions of their research work to this journal, without which this publication would not have been achievable. I also hope that in future they will continue their outstanding research work in the same way, to improve their academic and research expertise. Moreover, Global Institute of Management & Technology (GIMT) is pleased to announce that the National Conference i-con 2016 was held on 18th & 19th March 2016 at the GIMT college campus, Krishnanagar, West Bengal, India. Selected, accepted and presented papers are published in GJAES, Vol. 2, Issue 1 as a Special Issue. Four regular papers and 40 special issue papers (selected from the conference proceedings of i-con 2016) are included in this volume. I want to thank, from the core of my heart, our honorable Chairman, Mr. N. C. Das, and respected Director, Prof. (Dr.) Sankar Kumar Moulick, for their constant motivation and support in achieving this within a short time period. I also acknowledge the key roles of all respected Advisory Editorial and Reviewer Board members from different reputed institutes, who helped to improve the quality of the research work in this journal. At the end, I want to dedicate this to my parents, without whom I might not have been able to reach this position.
I apologize to everyone if I have unintentionally done anything wrong. Thanks for your constant support. Wish you all the best.
Regards,
Krishnagar, 11/05/2016
Prof. Sudip Mandal, Editor-in-Chief, GJAES

Contents

Regular Issue:
Adequacy Analysis of a Wind and Diesel Based Stand Alone Microgrid System; Sanchari Deb, Sarmila Patra and Sudip Kumar Deb
Comparison of PID Controller Tuning Techniques for Liquid Flow Process Control; Pijush Dutta and Asok Kumar
Literature Review on Thermal Comfort in Ephemeral Conditions; Sagnika Bhattacharjee and Protyusha Dutta
Reliability Centered Maintenance-A Tool for Better Machine Reliability: An Overview; Ashok Kumar Das

Special Issue: Conference Proceedings of i-con 2016:
A Warning System to Alert Human-Elephant Conflict; Bhaskar Sarkar, Jasmin Ara and Sanghamitra Chatterjee
Simulation-Based Study of Two Reactive Routing Protocols in Wireless Sensor Network (WSN); Dipankar Saha, Debraj Modak and Chandrima Debnath
Modelling of Road Traffic Signal Using Atmega-8 µC; Krittibas Bairagi and Sudip Mandal
Smoke Detector Using LDR; Saswati De and Amit Kumar Singh
Comparative Study between Z-N & F-PID Controller for Speed Control of a DC Motor; Sukanya Chatterjee, Priyanka Sil and Pijush Dutta
On Unstructured Uncertainty Analysis in Higher Order Actuator Dynamics of the PI Controlled Missile Autopilot; Biraj Guha
Development of Advanced Glazing System for Energy Efficient Windows; Sagnika Bhattacharjee, Protyusha Dutta and S. Neogi
Ray Tracing Study of Linear Fresnel Reflector System; Gaurab Bhowmick and Subhasis Neogi
Long Term Scheduling of a Hydrothermal System over a Year; Amirul Ali Mallick, Pratyush Das and Bikas Kumar Paul
A Technique to Identify Faults on FMCG Packets using Image Processing; Kaustav Roy and Pritam Debnath
Simplified Method for Direct Measurement of Dissipation Factor of an Electrical Machine or Insulating Material; Arindam Pal and Atanu Paul
Thermal Performance of a Heat Pipe Embedded Evacuated Tube Collector in a Compound Parabolic Concentrator under Different Sky Conditions; Debabrata Pradhan, Debrudra Mitra and Subhasis Neogi
Effect of Green Roof on Heat Flow of a Building: An Experimental Study; Arna Ganguly and Subhasis Neogi
Geo-dependence of Facial Features and Attributes; Nilanjan Mukhopadhyay, Rajib Dutta and Dipankar Das
Metadata Based Data Extraction from Industry Data Warehouse; Sukanta Singh and Bhaskar Adak
Data Warehouse System Architecture for a Typical Health Care Organization; Rajib Dutta and Vicky Mondal
Fenton's Treatment of Tannery Wastewater; Ranajit Basu
Advanced Analysis of a Structure using Staad Pro; Mainak Ghosal
Improvisation of Locally Available Soil for Economical Foundation; Saroj Kundu, Sukanya Basu and Pritam Dhar
Effect of Porosity of Alumina Wheel in Improving Grinding Performance; Sujit Majumdar, Ahin Banerjee, Santanu Das, Samik Chakroborty and Debasish Roy
To Study the Impact of Temperature Boundary Conditions for Overall Heat Transfer Coefficient Measurement Suitable for Adaptation in Tropical Climate for Energy Efficient Building; Debrudra Mitra and Subhasis Neogi
Consumer Behavior & Brand Preference towards Sonata Wrist Watches: A Study with Reference to Asansol City, West Bengal; Mayukh Thakur
Experimental Investigation on Grindability of Titanium Grade 1 Using Silicon Carbide Wheel Under Dry Condition; Manish Mukhopadhyay, Ayan Banerjee, Arnab Kundu, Sirsendu Mahata, Bijoy Mandal and Santanu Das
On the Performance of Dry Grinding of Titanium Grade-1 Using Alumina Wheel; Ayan Banerjee, Manish Mukhopadhyay, Arnab Kundu, Sirsendu Mahata, Bijoy Mandal and Santanu Das
Investigating Milling Burr Formation under Varying Tool Exit Angle; Arijit Patra, Arijit Hawladar, Sanjay Samanta and Santanu Das
Escape Velocity of a Particle on a Riverbank with Partially Saturated Soil under Cohesion; Sanchayan Mukherjee
An Experimental Investigation on the Grindability of Inconel Using Alumina Wheel Under Dry Condition; Arnab Kundu, Ayan Banerjee, Manish Mukhopadhyay, Sirsendu Mahata, Bijoy Mandal and Santanu Das
Experimental Investigation on Grindability of Low Alloy Steel Using Alumina Wheel Under Dry Condition; Pinaki Das, Sujit Majumdar
Uncertainty of Mathematical Modeling for River Water Quality; Sujit Kumar Dey
Mathematical Modeling: An Overview; Sujit Kumar Dey
History of Atoms and Idea of Atomic Structure; Nirmal Paul, Atreyi Das
Different Physicochemical Strategies for the Removal of Hexavalent Chromium; Anirudha Roy

Regular Issue
Global Journal on Advancement in Engineering and Science (GJAES), Vol. 2, Issue 1, March-2016, ISSN (Print):
Original Research Work

Adequacy Analysis of a Wind and Diesel Based Stand Alone Microgrid System
Sanchari Deb 1, Sarmila Patra 2 and Sudip Kumar Deb 3
1 Department of EEE, BIT Mesra, Ranchi, India
2 Department of EE, Assam Engineering College, Guwahati, India
3 Department of ME, Assam Engineering College, Guwahati, India
1 sancharideb@yahoo.co.in, 2 sarmilapatra@yahoo.com and 3 sudipkumardeb@gmail.com

Abstract: The increasing energy demand has led scientists and researchers to explore renewable sources of energy such as solar, wind, biomass, tidal and geothermal energy. As a result, microgrids have emerged as a combination of renewable and non-renewable energy resources. The power supplied by most renewable resources is random in nature due to environmental factors, yet the power supply from a microgrid must be adequate to meet the power demand. Adequacy analysis of microgrids is therefore one of the most important challenges for microgrid planners. Well-known adequacy indices include the Loss of Load Probability (LOLP) and the Loss of Load Expectation (LOLE). These indices are usually evaluated using the combined outage probability table, Markov models, or Monte Carlo Simulation (MCS). This paper presents a novel methodology for adequacy analysis of a microgrid based on Fault Tree Analysis (FTA). Fault tree analysis provides a flexible way of adequacy calculation, incorporating the uncertainties involved in the case of a complex system.

Keywords: Reliability, microgrid, fault tree, loss of load probability, loss of load expectation.

I. Introduction
Due to the unsustainable nature of fossil fuels, increasing energy demand and environmental factors, microgrids are becoming popular nowadays. Adequacy analysis of a microgrid is important because of the random nature of the power output from renewable resources. The power output from a wind turbine depends on the wind speed, which fluctuates; similarly, the power output from a solar PV cell depends on solar radiation and ambient temperature. Because of these uncertainties, the conventional methods of adequacy analysis cannot be directly applied to microgrids. In this paper, a hybrid stand-alone microgrid consisting of a diesel generator (DG) and a wind turbine is considered, and its adequacy indices are computed by fault tree analysis. FTA is used because of its ability to deal with the uncertainties involved. Considerable work has been done on adequacy studies of microgrids. In [1], Moradi, Barkati and Jamshidi used sequential MCS for reliability analysis of a stand-alone hybrid microgrid consisting of a diesel generator, solar PV cell, wind turbine and solid oxide fuel cell; they provided a probabilistic approach to determining adequacy indices based on randomly generated failure rate data. In [2], Zulu and Dilan used a Markov-based model for determining the adequacy indices of a microgrid based on solar energy. In [3], Tsuji used a method based on a switch transition matrix for adequacy analysis of a microgrid considering dynamic behavior. In [4], Ali, Ruzli and Buang reviewed the literature published on FTA. In [5], Cepin and Makvo used FTA for reliability assessment of a conventional power system. All the papers mentioned above provide probabilistic and complicated methodologies for determining the adequacy indices of a microgrid. Here a new and simple methodology based on FTA is presented for adequacy analysis of a microgrid.

II. Brief Review of FTA
FTA is one of the popular methodologies for reliability analysis. The technique was developed in the early 1960s. In FTA the system is modeled using the logical relationships of AND and OR gates [6]. The top event can be the probability of success or the probability of failure, depending on the requirement.
A parallel system is represented by an OR gate and a series system by an AND gate.

Figure 1: A series system (Component 1, Component 2, Component 3)

S. Mandal (Editor), GJAES 2016, Page 1
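Since the gate logic above reduces to simple probability arithmetic for independent events, both gate types can be evaluated in a few lines of code (an illustrative sketch, not part of the paper):

```python
def and_gate(probs):
    """AND gate: all input events must occur (e.g. series-system success)."""
    p = 1.0
    for x in probs:
        p *= x
    return p

def or_gate(probs):
    """OR gate: at least one input event occurs (e.g. parallel-system success)."""
    q = 1.0
    for x in probs:
        q *= 1.0 - x
    return 1.0 - q

# Three components, each working with probability 0.9:
series_success = and_gate([0.9, 0.9, 0.9])    # all three must work
parallel_success = or_gate([0.9, 0.9, 0.9])   # any one suffices
```

For the series system of Figure 1 the success probability is 0.9 cubed, i.e. 0.729; the same components in parallel succeed with probability 1 - 0.1 cubed, i.e. 0.999.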

S. Deb et al., Adequacy Analysis of a Wind and Diesel Based Stand Alone Microgrid System, Global Journal on Advancement in Engineering and Science, 2(1), March 2016, pp. 1-5

Figure 2: FTA of a series system

Fig. 1 represents a series system of three components. Since the top event here is system success, it is represented by an AND gate in the FTA. Fig. 2 represents the fault tree of the series-connected system. Similarly, for a parallel system with system success as the top event, an OR gate is used.

III. Brief Description of the Microgrid under Study
Here a hybrid microgrid consisting of a DG and a wind turbine is considered.

Figure 3: Microgrid under study (DG, wind turbine and load)

Fig. 3 represents the diagram of the microgrid under study. It is a hybrid microgrid consisting of a DG and a wind turbine. The capacity of the DG is 1100 kW and the capacity of the wind turbine is 1500 kW. The load considered here is the IEEE RTS load model with a peak load of 2250 kW.

Table I: Reliability data for different unit sources [7, 8]
Type of unit source | System reliability
DG | 0.98
Wind turbine | 0.90

IV. Uncertainty Analysis of the Wind Turbine
The wind turbine, being outdoor equipment, is vulnerable to the wind speed. Even when the wind turbine is in the ON state, the power output may be zero because of extremely low or extremely high wind speed. The wind speed is simulated over a one-year period using an MCS based on the Weibull distribution [1]. To simulate the wind speed the following equations are used:

v = α (−ln u)^(1/β) (1)
α = v̄ / Γ(1 + 1/β) (2)
β = (δ / v̄)^(−1.086) (3)

where v̄ represents the average wind speed, δ represents the standard deviation of the wind speed, and u is a uniform random number. The relation between the output power of the wind turbine and the wind velocity is given by [1]:

P_WTG = 0, for 0 < v < v_cut-in (4)
P_WTG = a v³ + b P_rated, for v_cut-in < v < v_rated (5)

P_WTG = P_rated, for v_rated < v < v_cut-out (6)

where
a = P_rated / (v³_rated − v³_cut-in)
b = v³_cut-in / (v³_cut-in − v³_rated)

Here P_rated is the rated power of the wind turbine, v_rated is the rated wind speed, v_cut-in is the cut-in wind velocity and v_cut-out is the cut-out wind velocity. The cut-in wind velocity is 3 m/s, the rated wind velocity is 12 m/s and the cut-out wind velocity is 25 m/s [1].

Figure 4: Speed vs. power output of the wind turbine

Figure 4 shows the variation of the output power of the wind turbine with respect to wind speed. The output power of the wind turbine can be zero because of a turbine fault as well as the wind speed, and modern wind turbines are designed so that they automatically stop working at wind speeds greater than the cut-out speed. The different clusters of wind speed are as follows:

Cluster 1: (0-5) m/s
Cluster 2: (5-10) m/s
Cluster 3: (10-25) m/s
Cluster 4: more than 25 m/s

The corresponding clusters of power output are as follows:

Cluster 1: (0-500) kW
Cluster 2: ( ) kW
Cluster 3: ( ) kW
Cluster 4: 0 kW

V. Methodology
Here FTA is used for the LOLP and LOLE evaluation of the hybrid microgrid. The top event is considered as the LOLP; the LOLP is 1 when the capacity of the hybrid microgrid is less than the load demand [9]. The DG can be in two states, ON and OFF, but the wind turbine has a number of intermediate states depending on the wind speed. The power output from the wind turbine can be zero because of the failure of any one of its components as well as because of the wind speed. The division of the power output of the wind turbine into clusters is illustrated in the previous section. The wind turbine gives its rated output for moderate wind speeds.
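The Weibull sampling of Eqs. (1)-(3) and the power curve of Eqs. (4)-(6) can be sketched in code as follows (a minimal illustration; the mean and standard deviation passed to the sampler are assumed values, not data from the paper):

```python
import math
import random

V_CI, V_R, V_CO = 3.0, 12.0, 25.0   # cut-in, rated, cut-out speeds (m/s)
P_RATED = 1500.0                    # rated wind-turbine power (kW)

def weibull_speed(v_mean, v_std, u=None):
    """Draw one wind speed by inverse-transform sampling, Eqs. (1)-(3)."""
    beta = (v_std / v_mean) ** (-1.086)            # shape factor, Eq. (3)
    alpha = v_mean / math.gamma(1.0 + 1.0 / beta)  # scale factor, Eq. (2)
    if u is None:
        u = random.random()
    return alpha * (-math.log(u)) ** (1.0 / beta)  # Eq. (1)

def turbine_power(v):
    """Piecewise power curve of the turbine, Eqs. (4)-(6)."""
    a = P_RATED / (V_R**3 - V_CI**3)
    b = V_CI**3 / (V_CI**3 - V_R**3)
    if v < V_CI or v > V_CO:
        return 0.0          # below cut-in or beyond cut-out: no output
    if v < V_R:
        return a * v**3 + b * P_RATED   # cubic region, Eq. (5)
    return P_RATED          # rated region, Eq. (6)
```

Repeated calls to weibull_speed followed by turbine_power yield the simulated hourly power series from which the cluster probabilities above can be estimated.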
The probability of the wind turbine being in a particular state is computed by MCS as illustrated in the previous section. The adequacy indices of the microgrid are computed for the following cases:

Case 1: 4 DG

Case 2: 4 DG + 2 wind turbines
Case 3: 3 DG + 2 wind turbines
Case 4: 3 DG + 3 wind turbines

Figure 5: FTA of the microgrid (top event: LOLP; inputs: DG failure, wind turbine fault, wind speed)

Fig. 5 represents the fault tree of the microgrid. The top event here is the LOLP. The probability of loss of load depends on whether the net output from the combination of DG and wind turbines is greater than the load demand.

VI. Results and Discussion
The LOLP and LOLE are calculated for different combinations of the hybrid microgrid. Four cases are considered, and the adequacy indices are computed for each of them by FTA. Table II gives the adequacy indices for the different combinations of DG and wind turbines. The LOLP of Case 1 is and the LOLP of Case 2 is . It is observed that the LOLP decreases after the addition of wind turbines to the DGs, but if the number of DGs is decreased then the LOLP increases; this is due to the fluctuating nature of wind energy. If the number of wind turbines is increased from two to three, better adequacy indices are obtained compared to Case 3, but the adequacy indices of Case 4 are still worse than those of Case 2. Thus, in terms of adequacy, the second combination gives the best result.
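The paper does not reproduce its intermediate numbers here, but the flavour of the LOLP computation can be illustrated for a DGs-only configuration (Case 1 style) under an assumed independence of units, using the Table I reliability of 0.98, the 1100 kW DG capacity and the 2250 kW peak load:

```python
from math import comb

DG_CAP = 1100.0     # kW per diesel generator
DG_REL = 0.98       # DG availability (Table I)
PEAK_LOAD = 2250.0  # kW (IEEE RTS peak load)

def lolp_dg_only(n_dg):
    """Illustrative LOLP for n_dg independent DGs at peak load:
    the probability that the surviving capacity is below the demand."""
    lolp = 0.0
    for k in range(n_dg + 1):   # k = number of DGs available
        p_k = comb(n_dg, k) * DG_REL**k * (1.0 - DG_REL)**(n_dg - k)
        if k * DG_CAP < PEAK_LOAD:
            lolp += p_k         # this outage state loses load
    return lolp
```

With four DGs, at least three must survive (2 x 1100 kW = 2200 kW falls short of 2250 kW), so the LOLP is the binomial probability of two or fewer survivors, roughly 0.0023 under these assumptions.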

Table II: Adequacy indices for the different cases
Case | LOLP | LOLE (day/year)

The entire analysis is performed for normal weather conditions. The adequacy indices will become worse under extreme weather conditions, since the wind turbine, being outdoor equipment, is greatly affected by harsh weather. The system reliability of the wind turbine decreases under extreme weather conditions, and the adequacy indices are affected as a result.

VII. Conclusion
The increasing energy demand has made researchers and scientists think about alternate sources of energy. As a result, microgrids have emerged as a popular, cheap and environment-friendly way of satisfying the increasing energy demand. Due to the stochastic nature of renewable energy, the reliability evaluation of a microgrid is an important issue. Here the capacity of FTA to evaluate adequacy indices is explored. FTA is a simple and efficient means of reliability evaluation compared to the other existing methods, and it provides an adaptable means of representing the uncertainties involved in a complex system. A microgrid is a complex system consisting of different unit sources with uncertain behaviour, so the adequacy indices of this complex system are evaluated using FTA.

VIII. References
[1] Ghaderijani M, Barakati SM and Jamshidi A, "Application of stochastic simulation method in reliability assessment of a PV wind diesel SOFC hybrid", IACSIT International Journal of Engineering and Technology, vol. 4, pp. , October .
[2] Esau Z, Jayabeera D, "Reliability assessment in active distribution networks with detailed effects of PV systems", Journal of Clean Energy, Springer, vol. 2, pp. .
[3] Tsuji T, "A study on power supply reliability of microgrid with renewable energy considering dynamic behaviour", REM, France, 2012.
[4] Bai A, Ruzli R and Buang B, "Reliability analysis using fault tree analysis: a review", International Journal of Chemical Engineering and Applications, vol. 4, no. 3, pp. 1-5, June 2013.
[5] Cepin M, Makvo B, "Application of fault tree analysis for assessment of power system reliability", Reliability Engineering and System Safety, vol. 94, pp. , June 2013.
[6] Javadi M S, Nobakht A, Meskarbashee A, "Fault tree analysis approach in reliability assessment of power system", International Journal of Multidisciplinary Sciences and Engineering, vol. 2, no. 6, September 2011.
[7] Ghaedi A, Abbaspour A and Firuzadabad MF, "Towards a comprehensive model of large scale DFIG based wind farms in adequacy assessment of power system", IEEE Transactions on Sustainable Energy, vol. 5, pp. 55-63, 2014.
[8] M Padma Lalitha, P Harshvardhan Reddy and P Janardhana Naidu, "Reliability evaluation of wind and PV energy penetrated power system", Infocom Technologies and Optimization (ICRITO), 3rd International Conference, IEEE, pp. 1-5, 2014.
[9] R. Billinton and R. Allan, Reliability Evaluation of Power Systems, 2nd ed. New York: Plenum, 1994.

Regular Issue
Global Journal on Advancement in Engineering and Science (GJAES), Vol. 2, Issue 1, March-2016, ISSN (Print):
Original Research Work

Comparison of PID Controller Tuning Techniques for Liquid Flow Process Control
Pijush Dutta 1, Asok Kumar 2
1 Department of Electronics & Communication Engg., Global Institute of Management & Technology, India
2 Department of Electronics & Communication Engg., MCKV Institute of Engg., India
pijushdutta009@gmail.com 1, asok_km650@rediffmail.com 2

Abstract: Control of liquid flow plays a crucial role in the process industries. PID control schemes are the most widely used in process control systems, typified by chemical processes, because of their robustness, simplicity and excellent linear performance. The main objective of a model-based controller is to compensate for shifts in the process and maintain the liquid level at its required target value. This paper studies a three-term (PID) controller to find the best among the four tuning methods implemented here, namely Ziegler-Nichols (Z-N), IMC (internal model control), Tyreus-Luyben (TL) and Cohen-Coon (CC), for a single-input single-output (SISO) liquid flow control system. Various time-performance criteria, namely IAE, ISE and ITAE, are used for comparison with respect to stability and reliability. Compared to the conventional PID tuning methods, the results show that good performance can be achieved with the proposed IMC method, based on its high stability and minimum rise and settling times. The simulation is done entirely in MATLAB. This comparison of tuning approaches should be advantageous for future industries working with PID controllers. The main goal of this paper is to improve the control action of the controller and to provide an extensive reference source for people working on PID control of liquid flow processes.
Keywords: Transfer function, PID controller, tuning of PID controller, error criteria: IAE, ISE, ITAE & MSE

I. Introduction
In general, a PID controller is a generic control-loop feedback mechanism which operates on the error value, calculated as the difference between the process variable and the set point [1]. A PID controller is a simple three-term controller with Proportional, Integral and Derivative terms. The proportional term depends upon the present error, and leaves a larger offset. The integral term depends on the past error; it can remove the offset, but increases the overshoot. The derivative term depends upon the trend of the error; it can reduce both offset and overshoot, but it cannot be used separately [3]. The output of the controller is given by:

u(t) = Kp e(t) + Ki ∫ e(t) dt + Kd de(t)/dt (1A)

where Kp is the proportional gain, Ki the integral gain, Kd the derivative gain, u(t) the control signal, and e(t) the error signal as a function of time. MATLAB-based real-time control is realized in this study, to make the control of the liquid flow of the experimental set-up more realistic. However, different systems have different behaviour, different applications have different requirements, and requirements may conflict with one another. PID controllers are widely used in the process industries due to their simplicity, easy applicability and robustness, and they offer high flexibility in achieving the desired response [7, 8]. A PID controller has all the necessary dynamics: fast reaction to changes of the controller input (D mode), an increase in control signal to drive the error towards zero (I mode), and suitable action inside the control error area to eliminate oscillations (P mode). In a PID controller, the gain and time constant of the process can change recurrently, and the error can be minimized by tuning the controller parameters.
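Equation (1A) discretizes directly into a few lines of code; the sketch below is a generic textbook implementation, not the specific controller used in the paper:

```python
class PID:
    """Discrete PID controller implementing u = Kp*e + Ki*int(e) + Kd*de/dt."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0      # running integral of the error
        self.prev_error = 0.0    # last error, for the derivative term

    def update(self, setpoint, measurement):
        """Advance one sample time dt and return the control signal u."""
        e = setpoint - measurement
        self.integral += e * self.dt
        derivative = (e - self.prev_error) / self.dt
        self.prev_error = e
        return self.kp * e + self.ki * self.integral + self.kd * derivative
```

In a flow loop, the measurement would come from the flow sensor each sample period and the returned u would drive the control valve; the Kp, Ki, Kd values are exactly what the tuning methods compared in this paper produce.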
For PID controllers there are many tuning methods available; for this process model, Ziegler-Nichols, Tyreus-Luyben, Cohen-Coon and model predictive control are applied [12]. Initially, in Section III, the process model is derived from the real-time running process; then, in Section IV, the formulas of the above tuning methods are used to calculate the Kp, Ki and Kd values required for controlling the process with PID control. The process values (Kp, Ki, Kd) found from the calculation are simulated in MATLAB. From the simulation, various characteristics of the process, such as the time-domain specifications (peak time, rise time, peak overshoot and settling time), are found. In Section V, the error criteria for the process (ITAE, IAE, ISE and MSE) are discussed. The transfer function of a single-tank liquid flow process has been determined using the process reaction curve, from which parameters such as the dead time, gain and time constant are calculated. With the help of these parameters, the values of the proportional gain, integral time and derivative time have been determined. Different tuning methods, such as closed-loop ZN, modified ZN and IMC-PID, are performed. IMC

P. Dutta et al., Comparison of PID Controller Tuning Techniques for Liquid Flow Process Control, Global Journal on Advancement in Engineering and Science, 2(1), March 2016, pp.

provides a much easier framework for the design of robust control systems [11]. The best controller for the process has been determined and its characteristics studied. Different forms of controllers are found in almost every area where control is essential. In DCS (distributed control systems), PID control plays an important role. A controller can act as a standalone device when it is embedded in a special-purpose control system. PID control is often combined with logic, sequential functions, selectors and simple function blocks to build the complicated automation systems used for energy production, transportation and manufacturing. Many sophisticated control strategies, such as model predictive control, are also organized hierarchically [6]. The time-domain specifications and performance-index values are tabulated below. In Section V, based on the tabulated values, the most suitable controller is identified and a comparison graph is shown.

II. Modelling of the Liquid Flow System

Fig. 1: Diagram of liquid flow in a single tank (inflow Q + qi, outflow Q + qo)

Consider the flow through a tank connected to two pipes (incoming and outgoing). The resistance R for liquid flow in such a pipe or restriction is defined as the change in the level difference (the difference of the liquid levels of the two tanks) necessary to cause a unit change in flow rate; that is,

R = (change in level difference, m) / (change in flow rate, m³/s)

Since the relationship between the flow rate and the level difference differs for laminar and turbulent flow, we shall consider both cases in the following. Consider the liquid level system shown in Fig. 1. In this system the liquid spouts through the load valve in the side of the tank.
If the flow through this restriction is laminar, the relationship between the steady-state flow rate and the steady-state head at the level of the restriction is given by

Q = K H    (1)

where Q = steady-state liquid flow rate (m^3/sec), K = coefficient of proportionality (m^2/sec) and H = steady-state head (m). Notice that this law governing laminar flow is analogous to Ohm's law, which states that the current is directly proportional to the potential difference. In the case of turbulent flow, the relationship between steady-state flow rate and steady-state head at the level of the restriction is given by

Q = K √H    (2)

Differentiating equation (2) with respect to H gives

dQ/dH = K / (2√H)    (3)

We now introduce the resistance representing turbulent flow in the system,

R_t = dH/dQ    (4)

so that, using K = Q/√H,

R_t = 2√H / K = 2H / Q    (5)

The value of the turbulent-flow resistance R_t depends upon the flow rate and the head. The flow rate can therefore be written as

Q = 2H / R_t    (6)
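As a quick numerical illustration of the linearized resistance above, a minimal Python sketch (the operating-point values are assumed for illustration only, not taken from the paper):

```python
def turbulent_resistance(H, Q):
    """Linearized turbulent-flow resistance R_t = dH/dQ = 2H/Q at operating point (H, Q)."""
    return 2.0 * H / Q

# Illustrative (assumed) operating point: head H = 0.5 m, flow Q = 0.002 m^3/sec
H, Q = 0.5, 0.002
print(turbulent_resistance(H, Q))  # 500.0
```

Because R_t depends on the operating point, a different head or flow gives a different linearized model, which is why the linearization holds only for small deviations.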

The capacitance C of the liquid-level system is defined as the change in the quantity of stored liquid necessary to cause a unit change in potential (head):

C = (change in liquid stored, m^3) / (change in head, m)

It should be noted that the capacity (m^3) and the capacitance (m^2) are different; the capacitance of the tank is equal to its cross-sectional area. If this is constant, the capacitance is constant for any head. Now consider the system shown in Fig. 1, with the variables defined as follows:

Q = steady-state flow rate, m^3/sec
qi = small deviation of the inflow rate from its steady-state value, m^3/sec
qo = small deviation of the outflow rate from its steady-state value, m^3/sec
H = steady-state head, m
h = small deviation of the head from its steady-state value, m

As stated previously, the system is linear if the flow is laminar. Even if the flow is turbulent, the system can be linearized if changes in the variables are kept small. On the assumption that the system is either linear or linearized, the differential equation of the system can be obtained as follows. Since the inflow minus the outflow during the small time interval dt equals the additional amount stored in the tank,

C dh = (qi − qo) dt    (9)

With the resistance defined above, qo = h/R, so equation (9) becomes

RC dh/dt + h = R qi    (10)

Note that RC is the time constant of the system. Taking the Laplace transform of both sides of equation (10), assuming zero initial conditions,
(RCs + 1) H(s) = R Qi(s)

If qi is taken as the input and h as the output, the transfer function of the system is

H(s)/Qi(s) = R / (1 + RCs)    (11)

If qo is taken as the output for the same input qi, then since qo = h/R,

Qo(s) = H(s)/R    (12)

and from equations (11) and (12) we obtain

Qo(s)/Qi(s) = 1 / (1 + RCs)

For the present process, R = 0.23 and C = 30.4, so RC ≈ 7 and the process transfer function is

H(s)/Qi(s) = 0.23 / (1 + 7s)    (13)

III. Tuning Methods

A. Ziegler-Nichols Method: The Ziegler-Nichols tuning method is a heuristic method of tuning a PID controller, developed by John G. Ziegler and Nathaniel B. Nichols. It is performed by setting the I (integral) and D (derivative) gains to zero. The proportional gain Kp is then increased from zero until it reaches the ultimate gain Ku, at which the output of the control loop oscillates with constant amplitude. Ku and the period of oscillation Pu are then used to set the P, I and D gains depending on the controller type.

Table I: Closed-loop Z-N tuning formulas

Controller            Kp        Ti        TD
P type controller     0.5 Ku    -         -
PI type controller    0.45 Ku   Pu/1.2    -
PID type controller   0.6 Ku    Pu/2      Pu/8
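The Table I rules are simple enough to script directly. A minimal Python sketch (using the relay-test values Ku = 3.13 and Pu = 3.8 quoted later in Section III-D):

```python
def zn_p(Ku):
    """Z-N P-only setting (Table I)."""
    return {"Kp": 0.5 * Ku}

def zn_pi(Ku, Pu):
    """Z-N PI settings (Table I)."""
    return {"Kp": 0.45 * Ku, "Ti": Pu / 1.2}

def zn_pid(Ku, Pu):
    """Z-N PID settings (Table I)."""
    return {"Kp": 0.6 * Ku, "Ti": Pu / 2.0, "Td": Pu / 8.0}

params = zn_pid(Ku=3.13, Pu=3.8)
print({k: round(v, 3) for k, v in params.items()})  # {'Kp': 1.878, 'Ti': 1.9, 'Td': 0.475}
```

The same Ku and Pu feed all the closed-loop rules, so a single relay or ultimate-gain experiment is enough to tune each controller variant.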

B. Tyreus-Luyben Method: This is one of the more conservative PID tuning methods. It depends on the dead time of the process: if the dead time is small it gives a good response, while a large dead time results in a sluggish response. Ku and Pu also play a role in the response curve.

Table II: Closed-loop Tyreus-Luyben tuning formulas

Controller   Kp       Ti        TD
PI           Ku/3.2   2.2 Pu    -
PID          Ku/2.2   2.2 Pu    Pu/6.3

C. Cohen-Coon (CC): The margin for stability is very low; even the smallest error pushes the system into instability. The P and PI controller settings are lower in the CC parameters than in Z-N.

Table III: Closed-loop Cohen-Coon tuning formulas

Controller   Kp                         Ti                              TD
PI           (T/KTd)(9/10 + Td/12T)     Td(30 + 3Td/T)/(9 + 20Td/T)     -
PID          (T/KTd)(4/3 + Td/4T)       Td(32 + 6Td/T)/(13 + 8Td/T)     4Td/(11 + 2Td/T)

D. Internal Model Control (IMC): Morari and his co-workers developed an important control-system strategy called Internal Model Control, or IMC. The Internal Model Control philosophy relies on the Internal Model Principle, which states that control can be achieved only if the control system encapsulates, either implicitly or explicitly, some representation of the process to be controlled.

Table IV: Closed-loop Internal Model Control (IMC) tuning formula

Controller   Kp                      Ti         TD
PID          (2T + Td)/2k(λ + Td)    T + Td/2   T·Td/(2T + Td)

[For tuning, λ = 0.25, Pu = 3.8 and Ku = 3.13, where Ku = 4h/(πa) is determined by relay tuning of the PID controller.]

IV. Tuning by Minimum Integral Error Criteria

To identify the best controller, the error responses of the various tuning methods are calculated and tabulated. These are time-domain performance criteria based on the error response; the values considered are the ITAE, IAE, ISE and MSE.
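The controller settings that feed the error-criteria comparison come from the tuning rules above. A minimal Python sketch of the Tyreus-Luyben and Cohen-Coon rules (note: the Tyreus-Luyben divisors Ku/2.2, Ku/3.2 and the factor 2.2 Pu are the standard published constants, restated here because the printed table is partly illegible):

```python
def tyreus_luyben_pid(Ku, Pu):
    """Tyreus-Luyben PID settings (standard constants: Kp = Ku/2.2, Ti = 2.2*Pu, Td = Pu/6.3)."""
    return {"Kp": Ku / 2.2, "Ti": 2.2 * Pu, "Td": Pu / 6.3}

def cohen_coon_pid(K, T, Td):
    """Cohen-Coon PID settings for a FOPDT model with gain K, time constant T, dead time Td."""
    Kp = (T / (K * Td)) * (4.0 / 3.0 + Td / (4.0 * T))
    Ti = Td * (32.0 + 6.0 * Td / T) / (13.0 + 8.0 * Td / T)
    TD = 4.0 * Td / (11.0 + 2.0 * Td / T)
    return {"Kp": Kp, "Ti": Ti, "Td": TD}

print(tyreus_luyben_pid(Ku=3.13, Pu=3.8))
```

The Cohen-Coon rule needs the FOPDT parameters (K, T, Td) from the process reaction curve rather than the relay-test values, which is why the two families of rules require different identification experiments.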
A) Integral Time Absolute Error: ITAE weights each error by the time at which it occurs, so it amplifies the effect of errors that persist. It converges more slowly than ISE but produces less oscillation.

ITAE = ∫₀^∞ t |e(t)| dt    (14)

B) Integral Absolute Error: IAE is suitable for suppressing small errors; it allows larger deviations than ISE.

IAE = ∫₀^∞ |e(t)| dt    (15)

C) Integral Square Error: the squared error is integrated over time. Squaring makes large errors dominate, so ISE eliminates large errors quickly, but small errors persist for a long period.

ISE = ∫₀^∞ e²(t) dt    (16)

D) Mean Square Error: the time average of the squared error over the observation time T. It is widely used for quantifying the energy of a signal, and its popularity is due to its simplicity; it is also common in signal processing.

MSE = (1/T) ∫₀^T e²(t) dt    (17)

V. Result and Comparison

Fig. 2: Process reaction curve for determining Ku and Pu
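The four indices can be approximated numerically from a sampled error signal. A minimal Python sketch using trapezoidal integration (the exponentially decaying error trace is illustrative only, not a response from the paper):

```python
import math

def performance_indices(t, e):
    """Approximate ITAE, IAE, ISE and MSE for error samples e at times t (trapezoidal rule)."""
    def trapz(y):
        return sum((y[i] + y[i + 1]) * (t[i + 1] - t[i]) / 2.0 for i in range(len(t) - 1))
    ise = trapz([ei * ei for ei in e])
    return {
        "ITAE": trapz([ti * abs(ei) for ti, ei in zip(t, e)]),
        "IAE": trapz([abs(ei) for ei in e]),
        "ISE": ise,
        "MSE": ise / (t[-1] - t[0]),  # mean of e^2 over the observation window
    }

# Illustrative error signal e(t) = exp(-t), sampled every 0.01 s over 10 s
t = [i * 0.01 for i in range(1001)]
e = [math.exp(-ti) for ti in t]
print({k: round(v, 3) for k, v in performance_indices(t, e).items()})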

Fig. 3: Comparison graph of the different PID tuning schemes

Table 1. PID values of the various tuning methods
Using the respective formulas of the controller tuning methods, the tuning parameters were found and tabulated:

Controller       Kp    Ti    TD
Z-N method
Cohen-Coon
IMC controller
Tyreus-Luyben

Table 2. Performance error criteria
To find the best controller, error-reduction measures are needed; the ITAE, IAE, ISE and MSE values are tabulated:

Tuning method    ITAE    IAE    ISE    MSE
Z-N method
Cohen-Coon
IMC controller
Tyreus-Luyben

Table 3. Time-domain specifications
From the simulated response representing the real-time level process, the characteristics were determined and listed:

Tuning method    Rise time    Peak time    Peak overshoot (%)    Settling time (2%)
Z-N method
Cohen-Coon
IMC controller
Tyreus-Luyben

VI. Conclusion
In this paper a comparison of several PID tuning methods for a liquid flow process has been presented and the results of the comparison elucidated. A comprehensive comparative study of the tuning methods, tested in simulation under different conditions, supports the analysis, and the simulation results show the performance of each method. The selected tuning method performs well and is suitable for real-time implementation.
From the investigation of the four tuning algorithms specified above, the best controller for the analyzed model is found on the basis of the time-domain specifications and performance-index values tabulated in Tables 2 and 3. The controller with low rise time, low peak time, low overshoot and the earliest settling time is the best one, as shown in Fig. 3. From this interpretation, the most suitable controller is the Internal Model Controller. The main advantage of IMC is that it provides a transparent framework for control-system design and tuning; IMC is thus able to compensate for disturbances and model uncertainty. The results show that the IMC method performs better than the other methods considered.
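The paper's closed-loop simulations were carried out in MATLAB. As an independent sketch (not the paper's code), a simple Euler integration in Python of a unity-feedback PID loop around the identified model 0.23/(1 + 7s) illustrates how the time-domain figures in Table 3 can be reproduced; the PID settings below are the Z-N values implied by Ku = 3.13 and Pu = 3.8, and the derivative term acts on the measurement to avoid the set-point kick:

```python
def step_response(Kp, Ti, Td, K=0.23, tau=7.0, t_end=80.0, dt=0.01):
    """Euler simulation of unity-feedback PID control of K/(tau*s + 1) for a unit step."""
    y, integ, y_prev = 0.0, 0.0, 0.0
    ys = []
    for _ in range(int(t_end / dt)):
        e = 1.0 - y
        integ += e * dt
        dy_meas = (y - y_prev) / dt           # derivative on measurement, not on error
        u = Kp * (e + integ / Ti - Td * dy_meas)
        y_prev = y
        y += dt * (K * u - y) / tau           # plant: tau*dy/dt = K*u - y
        ys.append(y)
    return ys

ys = step_response(Kp=0.6 * 3.13, Ti=3.8 / 2.0, Td=3.8 / 8.0)
print(round(ys[-1], 3), round((max(ys) - 1.0) * 100.0, 1))  # final value, percent overshoot
```

From such a trace the rise time, peak time, peak overshoot and 2% settling time of Table 3 follow directly; repeating the run with the Tyreus-Luyben, Cohen-Coon and IMC settings gives the rest of the comparison.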

VII. References
[1] D. Mercy and S. M. Girirajkumar, "Tuning of Controllers for Non Linear Process Using Intelligent Techniques," IJAREEIE, Vol. 2, Issue 9, September 2013.
[2] Karthik Krishnan and G. Karpagam, "Comparison of PID Controller Tuning Techniques for a FOPDT System," INPRESSCO, Vol. 4, No. 4, August 2014.
[3] Mohd Sazli Saad, "Implementation of PID Controllers Using Differential Evolution and Genetic Algorithm Methods," International Journal of Innovative Computing, Information and Control, Vol. 8, No. 11, November 2012.
[4] S. Nithya, Abhay Singh Gour, N. Sivakumaran, T. K. Radhakrishnan and N. Anantharaman, "Model Based Controller Design for Shell and Tube Heat Exchanger," Sensors & Transducers Journal, Vol. 84, Issue 10, October 2007.
[5] P. Aravind and S. M. Girirajkumar, "Performance Optimization of PI Controller in Non Linear Process using Genetic Algorithm," International Journal of Current Engineering and Technology.
[6] S. M. Girirajkumar, K. Ramkumar, Bodla Rakesh, Sanjay Sarma O. V. and Deepak Jayaraj, "Real Time Interfacing of a Transducer with a Non-Linear Process Using Simulated Annealing," Sensors & Transducers Journal, Vol. 121, Issue 10, October 2010.
[7] J. Paulusova and M. Dubravska, "Application of Design of PID Controller for Continuous Systems," Institute of Control and Industrial Informatics.
[8] Mituhiko Araki and Hidefumi Taguchi, "Two-Degree-of-Freedom PID Controllers," International Journal of Control, Automation, and Systems, Vol. 1, No. 4, December 2003.
[9] Wei Su, "A Model Reference-Based Adaptive PID Controller for Robot Motion Control of Not Explicitly Known Systems," International Journal of Intelligent Control and Systems, Vol. 12, No. 3, September 2007.
[10] W. Tan, H. J. Marquez and T. Chen, "Performance Assessment of PID Controllers."
[11] S. M. Jagdish and S. Sathish Babu, "A Model Reference PID Control System and Its Application to SISO Process," International Journal of Engineering Research and Applications (IJERA), Vol. 2, Issue 2, March-April 2012.
[12] Nick J. Killingsworth, "PID Tuning Using Extremum Seeking," IEEE Control Systems Magazine, February 2006.
[13] Mohammad Shahrokhi and Alireza Zomorrodi, "Comparison of PID Tuning Methods."

Regular Issue Global Journal on Advancement in Engineering and Science (GJAES) Vol. 2, Issue 1: March-2016, ISSN (Print)

Review Work

Literature Review on Thermal Comfort in Ephemeral Conditions
Sagnika Bhattacharjee 1 and Protyusha Dutta 2
1,2 Department of Electrical Engineering, Global Institute of Management and Technology, Palpara More, NH-34, Krishnagar, Nadia, West Bengal, India
sagnika21@gmail.com 1, protyushadutta@gmail.com 2

Abstract: This paper reviews various thermal comfort models and standards such as ASHRAE. The conventional theory of thermal comfort assumes a steady state for conditions characteristic of dwellings and offices. However, thermal comfort in buildings is rarely steady, on account of the interaction between building structure, climate, occupancy and the HVAC system. Work on changes in human responses to thermal comfort based on metabolic rate, clothing, gender and other factors is also reviewed.

Keywords: Thermal comfort, thermal discomfort, thermal models, standards, field studies, adaptive method, thermal sensation, sensory evaluation, natural ventilation, indoor comfort, outdoor comfort, clothing insulation, metabolic rate, gender, draught, drift

I. Introduction
Thermal comfort is defined in the ISO 7730 standard as "that condition of mind which expresses satisfaction with the thermal environment". Discomfort may be caused by the body being too warm or too cold as a whole, or by the unwanted heating or cooling of a particular part of the body. C. G. Webb, M. A. Humphreys and Fergus Nicol [1] say that thermal discomfort arises chiefly from a mismatch between the environment people expect and the environment they encounter.
From earlier research it has been seen that thermal comfort is strongly related to thermal balance, which is influenced by environmental parameters (air temperature, radiant temperature, relative air velocity, relative humidity) and personal parameters (activity level or metabolic rate, and clothing thermal resistance). The field studies taken up to investigate these parameters are reviewed in this paper. The paper gives a direct comparison of the work done in the field of thermal comfort and also identifies areas left for improvement, such as the present ASHRAE standard, which fails to be appropriate for drifts and thereby requires more precision.

II. Thermal Comfort Models
Thermal models of the human body and its interactions with the surrounding environment are often proposed and to some extent are used as the basis for thermal comfort. The Fanger model and the Gagge two-node model (1970) [2] are simple and use only a one-dimensional approximation of the human body and of the heat and mass exchanged with the environment. The Fanger model, a steady-state model, does not attempt to simulate transients or thermal regulation. Although the equations in this model are technically correct, they yield results very different from the original model if programmed into computer code exactly as presented, and can easily be misconstrued if exact computer code is not specified. Predictions generated by the Gagge two-node model and the Fanger model give very different results even for moderate conditions, especially on the cool side of comfort; the corresponding temperature range for a predicted thermal sensation from -1 to +1 differs markedly between the two models. This difference is attributed to differences in the comfort algorithms rather than differences in the physical human thermal model. The Wissler model is a complex one which divides the body into hundreds of segments and includes complex regulatory algorithms.
The Smith and Fu model is a sophisticated model which uses a 3000-node finite element model to simulate the human body. The Clo-Man and Tranmod models focus on detailed, transient models of heat and moisture transport through the clothing but use relatively simple thermal models of the body. The Tranmod model is a quasi-three-dimensional model which divides the body surface into segments so that each segment is uniformly covered with clothing based on the actual clothing ensemble. Wang developed a correction for ramp transients by making an adjustment that is proportional to the net rate of heat storage of the body.

S. Bhattacharjee et al., Literature Review on Thermal Comfort in Ephemeral Conditions, Global Journal on Advancement in Engineering and Science, 2(1), March 2016

The PMV model [3] predicts thermal sensation well in buildings with HVAC systems; however, field studies in warm climates in buildings without air conditioning have shown that it predicts a warmer thermal sensation than the occupants actually feel. This is not due to physiological acclimatization: expectation is the key. People tend to feel warmer than neutral due to their lifestyle and estimated activity. Hence, for non-conditioned buildings an adaptive model has been proposed. This model is a regression equation that relates the neutral temperature indoors to the monthly average temperature outdoors, and it does not include clothing, activity or the four classical thermal parameters. A new extension of the PMV model combines the best of the PMV and adaptive models to give a good prediction.

III. Thermal Comfort Standards
The key objective of standards is to transfer the latest scientific knowledge into practice. Standards governing indoor thermal environments at the international level (ISO), the European level (CEN) and the national level (ASHRAE) are on a constant cycle of revision, public review and promulgation. Standards [4] concerned with thermal comfort are produced by ISO/TC 159 SC5 WG1. The main thermal comfort standard is ISO 7730, which is based upon the predicted mean vote (PMV) and predicted percentage of dissatisfied (PPD) thermal comfort indices (Fanger 1970) and on methods of measurement of local discomfort caused by draughts, asymmetric radiation and temperature gradients. Others include a technical specification on thermal comfort for people with special requirements (ISO 1372 Part 2) and the standard on thermal comfort in vehicles (Parts 1-4).
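The PPD index mentioned above is tied to PMV by a fixed relation given in ISO 7730; a minimal Python sketch:

```python
import math

def ppd(pmv):
    """Predicted Percentage of Dissatisfied as a function of PMV (ISO 7730 relation)."""
    return 100.0 - 95.0 * math.exp(-0.03353 * pmv**4 - 0.2179 * pmv**2)

print(round(ppd(0.0), 1))  # 5.0 -> even at thermal neutrality, 5% remain dissatisfied
print(round(ppd(1.0)))     # 26 -> about 26% dissatisfied at "slightly warm"
```

The curve's floor of 5% at PMV = 0 reflects the finding that no single thermal condition satisfies everyone; ISO 7730's recommendation of PPD below 10% corresponds to keeping PMV between -0.5 and +0.5.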
Standards that support thermal comfort assessment include ISO 7726 (measuring instruments), ISO 8996 (estimation of metabolic heat production), ISO 9920 (estimation of clothing properties) and ISO 10551 (subjective assessment methods). The current design standard, BS EN ISO 7730 [Moderate thermal environments: determination of the PMV and PPD indices and specification of the conditions for thermal comfort, ISO 1995], is based upon the work of Fanger and essentially comprises a steady-state human heat balance model that leads to a prediction of the sensation of human thermal comfort for a given set of thermal conditions. However, the present index does not ensure 100% thermal acceptability. Field studies have shown that for heated and air-conditioned buildings the use of the PMV or PPD index agrees with observations. But for free-running buildings in warm climates, where summertime reliance on natural ventilation occurs, there appear to be additional adaptations such as changes to clothing or air velocity.

IV. Assessment of Indoor and Outdoor Thermal Comfort
Leech et al. [5] found in an epidemiological survey that people on average spend 10% of their time outdoors in summer and 2% in wintertime. Steady-state models may be appropriate for indoor thermal comfort assessment, while for the relatively short times spent outdoors they tend to over-estimate discomfort. Research studies to understand the thermal sensation of people in different outdoor spaces and under a wide range of climatic conditions have been carried out by Ahmed (2003), Nikolopoulou and Lykoudis (2006) and de Dear (2003). Potter and de Dear, on investigating outdoor scenarios, observed that indoor thermal comfort standards are not applicable to outdoor settings. Past research has reported inaccurate predictions when applying PMV in more dynamic outdoor environments (Hoppe 2002, Nikolopoulou et al. 2001).
According to Hoppe, the definitions and limits put forth by Fanger in 1970 can be applied only under steady-state conditions. Most well-validated heat budget models, like those of Fanger and Gagge, have been developed from indoor laboratory studies that are non-complex. Indoor experiments allow the body to achieve equilibrium with the environment in a short time; however, when exercising outdoors for short periods of time, the body may not reach this equilibrium (Hoppe 2002). Recent studies on this subject combine measurements of weather parameters with interviews, with the aim of understanding the complex relationships between meteorological and personal (including physiological) factors in the perception of the atmospheric environment. Some indoor studies, like those by Paciuck 1990; Nielsen 1990; Brager and de Dear 1998 and 2002; Hoppe 2002; Brager et al. 2004; and Yao et al. 2007, may contain useful information for outdoor spaces.

V. Thermal Comfort and Thermal Comfort Standards for Buildings
Thermal comfort standards are required to help building designers provide an indoor climate that building occupants will find thermally comfortable. Nicol and Humphreys [6] presented data suggesting that the mean comfort vote changed less with indoor temperature from climate to climate than might be expected, a pattern Humphreys observed across a wide variety of climates. Nicol and Humphreys in 2002 stated that the width of the comfort zone, if measured purely in physical terms, will therefore depend on the capacity to control the environment. Subsequent works by Leaman and Bordass (2007), Nikolopolou and Steemers (2003) and Toftum et al. suggested that improved thermal satisfaction through perceived control is due to increased

tolerance of wider ranges of thermal conditions when control opportunities are present. The application of the adaptive approach is considered able to predict a comfortable environment indoors. Sustainability criteria can also be considered in thermal standards for buildings. Research has demonstrated that occupants of buildings with centralized HVAC (heating, ventilating and air-conditioning) systems become finely tuned to the very narrow range of indoor temperatures presented by current HVAC practice. They develop high expectations of homogeneity and cool temperatures, and soon become critical if thermal conditions do not match these expectations. In contrast, the occupants of naturally ventilated buildings appear tolerant of, and in fact prefer, a wider range of temperatures. This range may extend well beyond the comfort zones published in Standard 55 and may more closely reflect the local patterns of the outdoor climate. Unfortunately, the thermal comfort standards embodied in Standard 55 do not present alternative approaches to building conditioning. Existing models, such as ISO 7730 or the work of Fanger, are not sufficient to characterize the satisfaction and pleasantness provided to end-users by HVAC systems. For this reason Electricite de France (EDF) [7] initiated a project with the aim of using sensory evaluation techniques in HVAC design. EDF first verified that it is possible to describe the perceived thermal sensations when standing in front of HVAC appliances. Then, for descriptive analysis, the recruitment of assessors, identification of the descriptive terms, panel training and sensory evaluation of the HVAC appliances were carried out.
This study showed that it is possible to define sensory descriptors and to train a panel of expert assessors to reliably quantify thermal sensations, and a first evaluation of real HVAC appliances was carried out. However, the study used only a small panel. In the future, a larger and better-qualified panel of assessors will make it possible to compare the data with those obtained through chamber or field studies, controlling all the parameters of the global indoor environment in a more reliable way.

VI. Human Responses to Thermal Comfort
The human body produces heat, exchanges heat with the environment and loses heat by evaporation of body fluids. According to Hensel (1981) [8], man's thermoregulatory system behaves mathematically in a highly nonlinear manner and contains multiple sensors, multiple feedback loops and multiple outputs. Hensel further stated that thermal comfort is the condition in which there are no driving impulses to correct the environment by behaviour (after Benzinger, 1979), which is a more objective definition than that of ISO. Hardy (1970), Fanger (1972), Benzinger (1979) [9], McIntyre (1980) [10] and ASHRAE (1985) [11] show that cold discomfort is strongly related to mean skin temperature and that warmth discomfort is strongly related to skin wettedness caused by sweat secretion. These relations underlie methods such as Fanger's Comfort Equation and the work of Gagge et al. In a recent evaluation by Doherty and Arens it was shown that these models are accurate only for humans involved in near-sedentary activity under steady-state conditions.

A. Thermal Comfort subjected to Cyclical Temperature Changes
Wyon et al. in 1971 performed experiments in which the amplitude of the temperature swings was under the subjects' control. They found that subjects tolerated greater amplitudes when the temperature changed more rapidly [12].
McIntyre and Griffiths (1974) [13] later pointed out that, due to a much smaller rate of change of the mean radiant temperature compared with the air temperature, and unusual acceptability criteria, the tolerated range in operative temperature was actually smaller than that normally found under steady-state conditions. Berglund and Gonzalez (1978) [14] concluded from their experiment that a temperature ramp of 0.6 K/h between 23 °C and 27 °C was thermally acceptable to more than 80% of subjects wearing summer clothing. The section on temperature drifts or ramps in the ASHRAE (1981) standard [15] states that slow rates of operative temperature change (approximately 0.6 K/h) during the occupied period are acceptable provided the temperature during the drift or ramp does not extend beyond the comfort zone by more than 0.6 K or for longer than one hour.

B. Effect of Metabolic Rate, Gender and Clothing on Thermal Comfort
The effects of body movement on thermal comfort are quite substantial and hence cannot be neglected. For metabolic heat production it was concluded that more precision is needed to obtain more accurate thermal comfort limits [16]. In order to improve metabolic rate estimation based on ISO 8996, more data and detail are required for activities having metabolic rates below 2 met, and the level of accuracy is to be increased. Wyon et al. in 1972, using high-school pupils, found a significant difference in thermal comfort between the genders. Nevins et al. in 1975, using college students, found that females express more dissatisfaction than males in the same thermal environments. A meta-analysis shows that females are more likely to express dissatisfaction, with a ratio of 1.74 (95% confidence interval) [17]. Thus females should primarily be used as subjects when examining indoor thermal comfort requirements.

The effect of the level of clothing insulation and activity on thermal sensitivity during temperature changes was investigated by McIntyre and Gonzalez in 1976 [18]. Clothing insulation does not seem to affect thermal sensitivity, because in general the most thermally sensitive parts of the body are uncovered.

C. Effect of Humidity and Air Velocities on Thermal Comfort
Studies by Gonzalez and Gagge in 1973 [19], Nevins et al. in 1975 [20], Berglund in 1979 [21] and Stolwijk in 1979 [22] indicate that when the operative temperature is inside or near the comfort zone, fluctuations in relative humidity from 20% to 60% do not have an appreciable effect on the thermal comfort of sedentary or slightly active, normally clothed persons. Relative humidity becomes more important when conditions become warmer and thermoregulation depends more on evaporative heat loss. Fanger et al. in 1988 concluded that an air flow with high turbulence causes more complaints of draught than an air flow with low turbulence at the same mean velocity. Even though these factors do not have substantial effects on thermal comfort, care must be taken in future to take them into consideration as well, so as to achieve greater accuracy. These results can be misleading, however, as they contain many contradictions, so greater detail must be studied under various conditions before conclusions can be drawn about the above factors.

VII. Conclusion
The data and detail collected in the field of thermal comfort are still limited. At present we rely upon thermal comfort experiments carried out in indoor laboratories with passive subjects. The data collected from offices and homes are also limited.
The experimental results on cyclically fluctuating ambient temperature given by ASHRAE Standard 55 bear no clear evidence of increased or decreased thermal comfort zones due to ephemeral conditions. The present standard of thermal comfort lacks precision, and the available thermal comfort models tend to give results that do not fully match reality. The work done on clothing insulation is somewhat contradictory, as it does not hold true under winter conditions. There are no references to the effect of draught complaints under high air turbulence. In the future we need to set a standard with greater precision that is closer to reality. For this we require greater data collection and detail, and the consideration of factors that are now neglected, such as gender, age and mental status. In the near future the UTCI (Universal Thermal Climate Index) [5] is expected to replace the wind chill index and become an international standard. Hopefully, with this new standard, we will be able to resolve the discrepancies of the present standard and take a step towards greater precision.

VIII. References
[1] Revd M. A. Humphreys, Thermal Comfort Temperatures World-wide: The Current Position, WREC 1996.
[2] Byron W. Jones, Capabilities and limitations of thermal models for use in thermal comfort standards, Energy and Buildings 34 (2002).
[3] P. Ole Fanger and John Toftum, Extension of the PMV model to non-air-conditioned buildings in warm climates, Energy and Buildings 34 (2002).
[4] B. W. Olesen and K. C. Parsons, Introduction to thermal comfort standards and to the proposed new version of EN ISO 7730, Energy and Buildings 34 (2002).
[5] Peter Hoppe, Different aspects of assessing indoor and outdoor thermal comfort, Energy and Buildings 34 (2002).
[6] J. F. Nicol and M. A. Humphreys, Adaptive thermal comfort and sustainable thermal standards for buildings, Energy and Buildings 34 (2002).
[7] Francoise Evin and Edouard Siekierski, Sensory evaluation of heating and air-conditioning, Energy and Buildings 34 (2002).
[8] H. Hensel 1981, Thermoreception and temperature regulation, Academic Press, London (Monographs of the Physiological Society, no. 38).
[9] T. H. Benzinger 1979, The physiological basis for thermal comfort, in Indoor Climate, ed. P. O. Fanger and O. Valbjorn, Danish Building Research Institute, Copenhagen.
[10] D. A. McIntyre 1980, Indoor Climate, Applied Science Publishers Ltd., London.
[11] ASHRAE 1985, Handbook of Fundamentals, American Society of Heating, Refrigerating and Air-conditioning Engineers, Atlanta, GA.
[12] D. P. Wyon, N. O. Brunn, S. Olesen, P. Kjerulf-Jensen and P. O. Fanger 1971, Factors affecting the subjective tolerance of ambient temperature swings, in Proc. 5th Int. Congress for Heating, Ventilating and Air Conditioning, vol. 1, Copenhagen.
[13] D. A. McIntyre and I. D. Griffiths 1974, Changing temperatures and comfort, Building Services Engineer, vol. 42, no. 8.
[14] L. G. Berglund and R. R. Gonzalez 1978, Occupant acceptability of eight-hour-long temperature ramps in the summer at low and high humidities, in ASHRAE Transactions, vol. 84:2, American Society of Heating, Refrigerating and Air Conditioning Engineers, Atlanta, GA.
[15] ASHRAE 1981, Thermal environmental conditions for human occupancy, ANSI/ASHRAE Standard 55-1981, American Society of Heating, Refrigerating and Air-conditioning Engineers, Atlanta, GA.


Regular Issue Global Journal on Advancement in Engineering and Science (GJAES) Vol. 2, Issue 1: March-2016 (ISSN: ) Review Work

Reliability Centered Maintenance - A Tool for Better Machine Reliability: An Overview

Ashok Kumar Das
Dept. of Mechanical Engg., Global Institute of Management and Technology, Krishnanagar, India. das.ashok.ashok234@gmail.com

Abstract: Reliability-centered maintenance, often known as RCM, is a process to ensure that assets continue to do what their users require in their present operating context. Maintenance activities and actions include fault detection, fault isolation, removal and replacement of failed items, repair of failed items, lubrication, servicing (including replenishment of consumables such as fuel), and calibration. RCM is generally used to achieve improvements in fields such as the establishment of safe minimum levels of maintenance, changes to operating procedures and strategies, and the establishment of capital maintenance regimes and plans. Reliability centered maintenance is an engineering framework that enables the definition of a complete maintenance regime. It regards maintenance as the means to maintain the functions a user may require of machinery in a defined operating context. As a discipline it enables machinery stakeholders to monitor, assess, predict and generally understand the working of their physical assets. In determining required maintenance, the first and most fundamental question that must be answered is what can fail. This is embodied in the initial part of the RCM process, which is to identify the operating context of the machinery and write a Failure Mode, Effects and Criticality Analysis (FMECA). The second part of the analysis is to apply the "RCM logic", which helps determine the appropriate maintenance tasks for the failure modes identified in the FMECA.
Once the logic is complete for all elements in the FMECA, the resulting list of maintenance is "packaged", so that the periodicities of the tasks are rationalized to be called up in work packages; it is important not to destroy the applicability of maintenance in this phase. Lastly, RCM is kept live throughout the "in-service" life of the machinery, where the effectiveness of the maintenance is kept under constant review by on-condition monitoring (OCM) and adjusted in the light of the experience gained.

Keywords: Reliability Centered Maintenance; Failure Mode Effects and Criticality Analysis; On Condition Monitoring

I. Introduction
Machines are the major components of any manufacturing system and, as a result, they represent a significant share of the capital investment in such systems. Machines deteriorate with both usage and age, which leads to reduced product quality and increased production costs. For proper management of the life cycle of machines and manufacturing facilities, it is important to perform appropriate maintenance operations and to track machine status for better reuse and recycling opportunities. Maintenance is defined as those activities and actions that directly retain the proper operation of an item or restore that operation when it is interrupted by failure or some other anomaly. These activities and actions include removal and replacement of failed items, repair of failed items, lubrication, servicing (including replenishment of consumables such as fuel), and calibration. Other activities and resources are needed to support maintenance. Maintenance is broadly classified into three types:
1. Corrective maintenance
2. Preventive maintenance
3. Condition based maintenance
Corrective maintenance is maintenance required to restore a failed item to a specified condition. Restoration is accomplished by removing the failed item and replacing it with a new item, by fixing the item by removing and replacing internal components, or by some other repair action.
Preventive maintenance is maintenance performed on a fixed schedule, or based on the condition of an item, conducted to ensure safety, reduce the likelihood of operational failures, and obtain as much useful life as possible from the item. Reliability Centered Maintenance (RCM) is a condition-based maintenance approach. It is a systematic process with which to optimize reliability and the associated maintenance tactics with respect to operational requirements. It is a logical, structured framework for determining the optimum mix of applicable and effective maintenance activities needed to sustain the operational reliability of systems and equipment while ensuring their safe and economical operation and support. Earlier, maintenance of mobile equipment consisted of fixed-interval component replacements and

overhauls. But those traditional maintenance processes are now being replaced by the condition-based maintenance approach. Condition-based maintenance can be performed on the basis of observed wear, or by predicting when the risk of failure becomes excessive. Previously, machines were simply fixed when they broke, which meant that the prevention of equipment failure was not a very high priority in the minds of most managers. At the same time, most equipment was simple and much of it was over-designed, which made it reliable and easy to repair. With the advent of competition, this thinking has changed: machines are now expected to run continuously at full efficiency without any disruption.

II. Objectives of RCM
RCM seeks to preserve system or equipment function, not just operability for operability's sake. Redundancy improves functional reliability, but it increases life cycle cost in terms of procurement and support. RCM is more concerned with maintaining system function than individual component function. The objective of conducting an RCM analysis is to rank all included equipment and systems by their relative importance, and risk, to the overall facility mission, and to prescribe PM tasks based on subsystem and system ranking. An RCM analysis can be conducted using a traditional quantitative, qualitative, or flexible approach. The traditional quantitative approach can be used when there is sufficient failure rate data available to calculate criticality numbers. Qualitative analysis must be used when specific part or item failure rates are not available; in that case, failure mode ratio and failure mode probability are not used in the analysis. The flexible technique is derived from traditional qualitative analysis.
Under this approach, RPN calculations are generated by the same formula as in the traditional qualitative approach. However, the arguments of the component-level RPN calculation (O, S, D) are defined differently:

RPN = O × S × D (1)

where
RPN = risk associated with the failure mode (Risk Priority Number),
O = occurrence level for the failure mode (reliability data),
S = severity level for the failure mode (subjective),
D = detection method level (subjective).

III. RCM Implementation Plan
The tasks are as follows:
1. Define the System: identify and document the boundaries, the equipment included, and the indenture level of the analysis.
2. Define Ground Rules and Assumptions: identify and document the ground rules and assumptions used to conduct the analysis.
3. Construct Equipment Tree: construct equipment block diagrams to indicate the equipment configuration, down to the lowest indenture level intended to be covered by the analysis.
4. Conduct FMECA: analyze failure modes, effects and criticality.
5. Assign Maintenance Focus Levels: classify maintenance focus levels based on criticality rankings.
6. Apply RCM Decision Logic: apply RCM logic trees to items, especially those identified as being critical.
7. Identify Maintenance Tasks: identify the maintenance tasks to be performed on each item.
8. Package Maintenance Program: develop a maintenance tasking schedule for the analyzed equipment.
The plan must address the supporting design-phase analyses needed to conduct an RCM analysis. Based on the analysis, an initial maintenance plan is developed, consisting of the identified PM, with all other maintenance being corrective by default. This initial plan should be updated through Life Exploration, during which initial analytical results concerning frequency of failure occurrence, effects of failure, costs of repair, etc. are modified based on actual operating and maintenance experience. Thus, the RCM process is iterative, with field experience being used to improve upon analytical projections.
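As a minimal sketch of Eq. (1), failure modes from an FMECA can be scored and ranked by their RPN; the failure modes and the O/S/D ratings below are hypothetical, chosen only to illustrate the calculation, not taken from a real analysis.

```python
# Rank hypothetical failure modes by Risk Priority Number: RPN = O x S x D (Eq. 1).
failure_modes = [
    {"mode": "bearing seizure",   "O": 3, "S": 9, "D": 4},
    {"mode": "belt slippage",     "O": 6, "S": 4, "D": 2},
    {"mode": "lubricant leakage", "O": 5, "S": 5, "D": 3},
]

for fm in failure_modes:
    fm["RPN"] = fm["O"] * fm["S"] * fm["D"]  # Eq. (1)

# Highest RPN first: these failure modes receive maintenance focus first.
ranked = sorted(failure_modes, key=lambda fm: fm["RPN"], reverse=True)
for fm in ranked:
    print(f'{fm["mode"]}: RPN = {fm["RPN"]}')
```

The ranking feeds step 5 of the implementation plan (assigning maintenance focus levels): higher-RPN failure modes are addressed before lower ones.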
The implementation of condition based maintenance (CBM) is growing among organizations that seek to gain a competitive advantage in the global economy. CBM, properly implemented, saves money by reducing lost-opportunity costs, making maintenance actions more efficient in the use of resources, and optimizing logistical support expenses. Reliability analysis of a system demands two initial decisions: determination of the part or line replaceable unit (LRU) that is the elemental building block of the system, and an understanding of the failure mechanisms that act on the part. The characterization of the reliability, maintainability and availability parameters is based on these

two decisions. The determination of the LRU is a maintainability decision that identifies how a system will be restored following a downing event or a periodic maintenance action. At the system level, an LRU is repaired by replacement, regardless of whether the removed unit is itself later repaired or discarded. LRUs have a failure rate and a time to repair. A typical design configuration hierarchy is a system comprised of assemblies, which are comprised of subsystems, which are comprised of LRUs. In this model, none of the higher assembly levels of the design configuration fails as such: only LRUs can fail, and only LRUs are repairable. Understanding the failure mechanism requires an awareness of the operating environment of the LRU, that is, both the stresses acting on the LRU from system operation and the external stresses acting on the system.
The rough process of RCM is as follows (also shown in the block diagram of Fig. 1):
1. Target products or systems of maintenance are clearly identified, and the necessary data are collected.
2. All possible failures and their effects on the target product or system are systematically analyzed.
3. Preventive or corrective maintenance actions are considered.
4. Operations are selected based on a rational calculation of their effectiveness in achieving the required maintenance quality attributes, such as reliability and cost.
The next step is the core of the RCM process. It is generally very tedious and time-consuming, and its contents are fundamentally the same as Failure Mode and Effect Analysis (FMEA).

Fig. 1: Block diagram of the rough process of RCM

Condition monitoring can be used as one of the most reliable tools in RCM. Condition monitoring, also known as predictive maintenance, uses primarily nonintrusive testing techniques, visual inspection, and performance data to assess machinery condition.
It replaces arbitrarily timed maintenance tasks with maintenance scheduled only when warranted by equipment condition. Continuing analysis of equipment condition monitoring data allows maintenance or repairs to be planned and scheduled ahead of catastrophic and functional failure. For example, to obtain the total picture of a chilled water system, a CM effort would have to collect the following data:
1. Flow Rates. Chilled water flow would be measured using precision flow detectors.
2. Temperature. Differential temperature would be measured to determine heat transfer coefficients and to indicate possible tube fouling.
3. Pressure. Differential pressure across the pump would be measured to determine pump performance, and differential pressures across the chiller evaporator and condenser sections would be measured to determine the condition of the chiller tubes (i.e., whether they were fouling).
4. Electrical. Motor power consumption would be used to assess the condition of the motor windings.
5. Ultrasonic Testing. Pipe wall thickness would be measured to determine erosion and corrosion degradation.

6. Vibration. Vibration monitoring would be used to assess the condition of rotating equipment (such as pumps and motors). Additionally, structural problems can be identified through resonance and modal testing.
7. Lubricant Analysis. Oil condition and wear particle analysis would be used to identify problems with the lubricant, and to correlate those problems with vibration when wear particle concentrations exceed pre-established limits.
8. Fiber Optics. Fiber optic inspections would be used to determine component wear, tube fouling, etc.
9. Thermography. Thermography scans check motor control centers and electrical distribution junction boxes for high-temperature conditions.
The above tests are done with instruments such as the following:
i) Stethoscope
ii) Precision thermometer
iii) Thermography meter
iv) Pressure gauge
v) Ammeter

IV. Experimental Study of the Maintenance of a Centre Lathe at the GMIT Workshop Using Reliability Centered Maintenance
Specifications of the lathe machine:
1. The length, width and depth of the bed (325, 238 and 165 mm respectively).
2. The depth and width of the gap, if it is a gap lathe (110 and 125 mm).
3. The swing over the gap (500 mm).
4. The number and range of spindle speeds ( RPM).
5. The number of speeds (8).
6. The lead screw diameter and pitch (25.4 mm * 4 TPI).
7. The number and range of metric threads that can be cut (19.5/0.50 to 8.0 mm).
8. The tailstock spindle travel (115 mm).
9. The tailstock spindle set-over.

Fig. 2: Lathe machine

During practical application we cleaned each part of the centre lathe. The figures below show the lathe machine parts before and after cleaning.

LEAD SCREW:
Fig. 3: Lead screw: a) Before cleaning b) After cleaning

Cleaning is done using waste cloth, and the lead screw is then lubricated with Servoline 46 and grease. The features of Servoline 46 are shown in Table 1.

LATHE BED:
Fig. 4: Lathe bed: a) Before cleaning b) After cleaning

After proper removal of chips, the bed is cleaned and then lubricated with Servoline 32. Servoline 32 has a lower viscosity and is applied for easier and smoother movement of the carriage on the bed. It is applied every time after a job is completed.

GEAR BOX:
Fig. 5: Gear box: a) Before cleaning b) After cleaning

For better movement and meshing of the gear teeth, and to protect the gears from wear, the high-viscosity lubricating oil Servomesh-257 is recommended. This oil resists deposit formation, protects metal components against rust and corrosion, separates easily from water, and is non-corrosive to ferrous and non-ferrous metals.

CARRIAGE:
Fig. 6: Carriage: a) Before cleaning b) After cleaning

For smooth movement of the carriage over the lathe bed and to prevent wear, the Servoline 32 lubricant is used. This oil has a low viscosity and protects parts against rust and corrosion.

Table 1: Lubricant oils used for maintenance of the lathe

Sl. No. | Product | Kinematic viscosity at 40 C | Flash point (COC) C | Description/application
1 | Machinery oils: Servoline 32, Servoline 46 | | | Servoline provides good lubrication, protects parts against rust and corrosion, and maintains a thin film of oil under light and medium loads. It contains film-strength and anti-rust additives, and is used in textile mills, paper mills and machine tools.
2 | Gear oil: Servomesh | | | Servomesh oils are industrial gear oils blended with lead and sulphur. They provide resistance to deposit formation, rust and corrosion. They are used for industrial gears and anti-friction bearings subjected to shock and heavy loads. These oils are not used in food processing units.

V. Conclusion
From practical as well as theoretical study, we conclude that reliability centered maintenance increases the working life of the tool and machine; it ensures high quality of the produced product, decreases downtime, increases machine reliability, lowers the frequency of machine damage, and improves worker satisfaction and wages. If machine parts are maintained regularly, their lifetime and working efficiency increase. RCM is a process to ensure that assets continue to do what their users require in their present operating context. It enables the definition of a complete maintenance regime. It is a cost-effective maintenance process to prevent equipment failure and a process to establish the safe minimum levels of maintenance. It is generally used to achieve improvements in fields such as the establishment of safe minimum levels of maintenance, changes to operating procedures and strategies, and the establishment of capital maintenance regimes and plans.

Special Issue: Conference Proceeding of i-con-2016 Global Journal on Advancement in Engineering and Science (GJAES) Vol. 2, Issue 1: March-2016, ISSN (Print): Original Research Work

A Warning System to Alert Human-Elephant Conflict

1 Bhaskar Sarkar, 2 Jasmin Ara and 3 Sanghamitra Chatterjee
1,2,3 Department of Electronics and Communication Engineering, Camellia Institute of Technology, Madhyamgram, West Bengal, India
1 bhaskarsarkar222@gmail.com, 2 jasmin.ara95@gmail.com, and 3 sangha3030@gmail.com

Abstract: Human-elephant conflict has become a major issue in West Bengal. Every third day there is an incident of either the death of an elephant or of a human because of this conflict. A major concern is the recent incident of 27th January, 2016 (the first incident of human-elephant conflict this year), in which elephants were killed by electrical wires spread on the ground: two elephants died when they stepped on the live wires. This raises serious questions about the safeguarding of both humans (including their land, houses, other property, lives and livestock) and elephants. This is a very common problem in both North and South Bengal. To address these problems, we are trying to develop a 360° ultrasonic SODAR device which will help in tracking the movement of elephants both day and night; the movement will be displayed on a screen to alert the people near the buffer zone of any forest area and thereby reduce human-elephant conflict.

Keywords: Elephant tracking system; ultrasonic sensor; human-elephant conflict; ultrasonic SODAR

I. Introduction
India is a diverse country, home to a great variety of species. Protecting both the citizens of India and its elephants is our duty. Human-elephant conflict has, over the years, become a major concern for wildlife management in India.
We are particularly focusing on South Bengal and the North Bengal Terai and Duars regions [1]. Here, we are concerned about both humans and elephants. The elephant habitat in West Bengal extends over 4200 square kilometres. The northern districts of West Bengal provide a suitable habitat for elephants in the region extending from the Sankosh river in the east to the Mechi river in the west. West Bengal has two elephant reserves: Eastern Duars ER and Mayurjharna ER. About 650 elephants occur in West Bengal over two distinct regions: 1. North Bengal (Jalpaiguri and Darjeeling): 529, and 2. South Bengal (West Midnapur, Bankura and Purulia): 118. In addition, West Bengal receives seasonal visits from other small groups of elephants from Assam, Jharkhand and Odisha. In recent years, the increased human-elephant conflict in the districts of Purulia, Bankura and West Midnapur has become a serious challenge for the forest staff of these areas. These three districts of South Bengal are severely affected by elephant depredation. On an annual average, 100 people become victims of this human-elephant conflict.

II. Region of Study
In West Bengal, it was established through information received from the Divisional offices (Anon 2010, Anon 2013a) that there were three distinct groups of elephants moving around in the conflict zone of West Midnapore-Bankura. One was the migratory elephants from the Dalma WLS, which consisted of two to three herds and having a total number of elephants. The second was a group of residential elephants that moved around in groups of 3-4. The third was again a group of elephants which stayed back from the retreating Dalma group and have also become residents, but are known as the Mayurjharna group because of their distinguishable behaviour (shown in Map 1).
Another form of human-elephant conflict in this region is the train-hit accident, which occurs when elephants try to cross the railway track in the North Bengal region (Duars, Gorumara forest region) inside the forest area; this has resulted in the death of 20 elephants in the last five years [2]. According to foresters, around 30 elephants have been killed in the Duars since the narrow-gauge railway track was converted to broad-gauge.

III. Literature Review
There is a race between humans and elephants to share a common space. This is the reason behind human-elephant conflict. The conflict can be minimized only by an alert system that tracks the elephants. Tracking is the science of observing animal paths and signs. The objective of tracking is to gain a

clear knowledge about the tracked elephant. Tracking also depends on environmental factors, but it is difficult to trace elephant habitation because elephants move from place to place in search of new habitat. Elephant tracking involves technical and non-technical processes [5]:
1. Non-technical methods: The non-technical methods are not just historical but remain relevant even today. These methods are followed by farmers to protect their crops and to save themselves from human-elephant conflict. The various non-technical methods are as follows: crop guarding (Ranjit Manakadan 2010) is a method of building huts in the fields and in trees, which gives a clear view of elephant movements even from a distance. The noise-and-throw method (Osborn F.V. 2002) consists of creating a huge noise and throwing (or showering) objects, demonstrating human aggression.
2. Technical methods:
Seismic sensors: A seismic sensor is a wireless sensor that has become very popular in recent days because of its capability of detecting even minute ground vibrations. Seismic sensors have also been exploited for military and security applications; they are extremely useful for classifying moving objects and detecting vehicles, and they can be applied to human-elephant conflict [4].
Markov chains: An analytical model has been developed to capture the behaviour of elephants using a Markov chain. Here, a three-stage Markov model is used to determine the probability of elephant movement from one village to another. The values of the derived probabilities assist in determining the habitat and migration behaviour of elephants during various seasons.
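As a hedged illustration of such a three-state movement model (the states and transition probabilities below are invented for the example, not taken from the cited study), multi-step movement probabilities follow from powers of the transition matrix:

```python
import numpy as np

# Hypothetical three-state Markov chain over regions a herd may occupy.
# States: 0 = forest interior, 1 = buffer zone, 2 = village farmland.
# Row i holds the probabilities of moving from state i to each state in one step.
P = np.array([
    [0.7, 0.2, 0.1],
    [0.3, 0.4, 0.3],
    [0.2, 0.5, 0.3],
])

# Distribution over regions after 3 steps, starting from the forest interior.
start = np.array([1.0, 0.0, 0.0])
after3 = start @ np.linalg.matrix_power(P, 3)
print(after3)  # probabilities of interior / buffer / farmland after 3 steps
```

In a study like the one described, the entries of P would be estimated from observed movements, and the farmland-entry probability would drive the conflict warning.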
The corridors in the forest border areas which the elephants cross during migration to enter human habitation could be easily identified using the derived information. The model is also used to find the probability of elephants settling in one region.
Satellite telemetry: Satellite tracking of elephants has advantages in the study of species that migrate across borders, have large home ranges and occupy remote and inaccessible areas. Satellite-based telemetry can potentially be used for setting up an early warning system towards this purpose [5].

IV. Analytical Discussion
In this project, we have tried to develop an Arduino-based DIY 360° SODAR device which can track any moving object both day and night. We especially need it at night, when elephant attacks would otherwise go unnoticed by the people. The device will track any moving object (here, elephants near a forest buffer zone) and display it on a screen. The SODAR device consists of an Arduino UNO board connected to a PING ultrasonic distance sensor, together with a stepper motor. The ultrasonic distance sensor sweeps through a 360° angle with the support of the stepper motor. A MATLAB program is written and interfaced to display the output. The device should be placed at a height of 7 1/2 to 8 1/2 feet, and the area within a radius of 20 metres should not contain any big trees.

V. Design Methodology
Figure 1: A diagram showing the implementation and working of our device.
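A minimal sketch of the ranging step such a device performs (the speed of sound and the echo-time values here are assumptions for illustration; the actual device uses an Arduino sketch with a MATLAB display): the round-trip echo time t of the ultrasonic pulse maps to a target distance d = v*t/2.

```python
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 °C (assumed)

def echo_to_distance(echo_time_s: float) -> float:
    """Convert a round-trip ultrasonic echo time (seconds) to target distance (metres)."""
    return SPEED_OF_SOUND * echo_time_s / 2.0

# One simulated sweep: (stepper angle in degrees, measured echo time in seconds).
sweep = [(0, 0.030), (90, 0.130), (180, 0.058), (270, 0.090)]
for angle, t in sweep:
    d = echo_to_distance(t)
    if d <= 20.0:  # flag anything inside the 20 m monitoring radius
        print(f"angle {angle:3d}: object at {d:.2f} m -> alert")
```

Pairing each alert with its stepper angle is what allows the display to plot detected objects around the device in polar form.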

VI. Hardware Implementation
Figure 2: Circuit/block diagram of the device.

VII. Output Applications
The device will be used to track elephants taller than a human being, up to the height of a normal Asian elephant (that is, 7 1/2 to 8 1/2 feet), when they come to attack houses situated near forest buffer zone areas. The buffer zone of the locality will be fitted with multiple such devices, so that every side of the buffer zone can be alerted when elephants approach or enter it.

VIII. Future Challenge
In future, we will try to increase the range of the SODAR so that people can be warned well before the elephants attack. We will also try to make the device waterproof so that it can be placed outside the house. There should be few or no trees taller than a certain height in or near the buffer zone of the forest areas.

References
[1] Kalyan Das, Man-elephant conflict in North Bengal, TERI University.
[2] S. J. Sugumar and R. Jayaparvathy, An early warning system for elephant intrusion along the forest border areas.
[3] Subhamay Chanda, IFS (1996 batch), Man-elephant conflict in South West Bengal.
[4] Jerline Sheeba Anne and Arun Kumar Sangaiah, Elephant tracking with seismic sensors: A technical perspective review.
[5] Arun B. Venkataraman, R. Sandeep, N. Baskaran, M. Roy, A. Madhivanan and R. Sukumar, Using satellite telemetry to mitigate elephant-human conflict: An experiment in northern West Bengal, India.
[6] Choudhury, A. U., Human-elephant conflicts in Northeast India, Human Dimensions of Wildlife 9 (2004).
[7] Mayilvaganan, M. and Devaki, M., Elephant localization and analysis of signal direction receiving in base station using acoustic sensor network, International Journal of Innovative Research in Computer and Communication Engineering 2(2) (2014).

Special Issue: Conference Proceeding of i-con-2016 Global Journal on Advancement in Engineering and Science (GJAES) Vol. 2, Issue 1: March-2016, ISSN (Print): Original Research Work

Simulation-Based Study of Two Reactive Routing Protocols in Wireless Sensor Networks (WSN)

Dipankar Saha 1, Debraj Modak 2 and Chandrima Debnath 3
1,3 Dept. of Electronics and Communication Engg., Global Institute of Management & Technology, NH-34, Palpara More, Krishnanagar, India
2 Dept. of Electronics and Communication Engg., Abacus Institute of Engineering & Management, Mogra, Hooghly, India
E-mail: dipankar.hetc@gmail.com 1, ddebraj.hetc@gmail.com 2 and chandrima5debnath@gmail.com 3

Abstract: A sensor network is composed of a large number of sensor nodes that are densely deployed either inside the phenomenon of interest or very close to it. Sensor nodes have the capability to collect data and route it back to the sink. Communication among the nodes is done in a wireless fashion; hence the name wireless sensor network. The topology of a sensor network changes very frequently due to the mobility of the nodes. Two reactive, or on-demand, routing protocols are studied here in the wireless sensor network setting, namely RAODV (Reverse Ad-hoc On-demand Distance Vector) and MAODV (Multicast AODV). RAODV tries multiple route replies, which enhances network performance. MAODV does not send unicast traffic; it sends multicast data packets. These two routing protocols are simulated using NS. The simulation concentrates on finding the better routing protocol in terms of performance metrics such as end-to-end delay and overhead while varying the number of nodes, which has not previously been done for wireless sensor networks.

Keywords: WSN, RAODV, MAODV, simulation

I. Introduction
Wireless sensor networks (WSN) [1], [5] consist of a large set of multi-functional, low-cost, wirelessly networked sensor nodes. These sensor nodes have control components and communication functionality.
A sensor network is composed of a large number of sensor nodes that are densely deployed either inside the phenomenon or very close to it. The position of the sensor nodes need not be engineered or predetermined, which allows random deployment in inaccessible terrain or disaster-relief operations. On the other hand, this also means that sensor network protocols and algorithms must possess self-organizing capabilities. Another unique feature of sensor networks is the cooperative effort of the sensor nodes. Sensor nodes are fitted with an onboard processor: instead of sending raw data to the nodes responsible for fusion, they use their processing abilities to carry out simple computations locally and transmit only the required, partially processed data. Sensor nodes cooperate with each other and are deployed for environment monitoring, habitat monitoring, healthcare, home automation, traffic control and industrial automation. This is particularly true of the past decade, which has seen wireless networks adapted to enable mobility. There are currently two variations of mobile wireless networks. The first is known as the infrastructure network, i.e., a network with fixed and wired gateways; the bridges for these networks are known as base stations, and a mobile unit within such a network connects to, and communicates with, the nearest base station within its communication radius. The second type is the infrastructureless mobile network, commonly known as an ad hoc network. In an ad hoc network [2], mobile nodes communicate with each other using multi-hop wireless links; there is no stationary infrastructure such as base stations, and each node in the network also acts as a router, forwarding data packets for other nodes. However, reliable data transfer forms the backbone of several applications that are being used, and are likely to be used, in such environments.
Hence the service provided by these routing protocols to TCP, the de facto standard for reliable data transfer on the Internet, is an issue of major significance. Among the various routing protocols [3], MAODV shows very good performance. To apply MAODV, a platform that supports IP functionality in WSNs is required. Since sensor nodes are power-constrained, it is too complex to apply the full TCP/IP protocol stack with IPv6 functionality to them. In sensor networks, adverse nodes can freely join the network and listen to and/or interfere with network traffic. The main contribution of this paper is a simulation-based study of ad hoc routing protocols to understand their behavior when used in a sensor network environment. The remainder of this paper is organized as follows. Sections II and III give brief descriptions of the two routing protocols. Section IV discusses the simulation environment, the simulator we have used, and our results.

D. Saha et al., Simulation-Based Study of Two Reactive Routing Protocols in Wireless Sensor Network (WSN), Global Journal on Advancement in Engineering and Science, 2(1), March 2016, pp.

II. Reverse Ad-hoc On-demand Distance Vector Routing Protocol (R-AODV)
R-AODV is an extended, reverse version of AODV. In AODV [6] and other on-demand routing protocols, the source node initiates the route discovery process [4] by broadcasting a route request (RREQ) packet to its neighbors to find a route to the destination. Each neighboring node either responds to the RREQ by sending a Route Reply (RREP) back to the source node or rebroadcasts the RREQ to its own neighbors after incrementing the hop_count field. One disadvantage of AODV is that it relies on a single route reply along the first reverse path to establish the routing path. Rapid topology changes can prevent this route reply from reaching the source node; loss of the RREP forces the source node to reinitiate route discovery, which degrades routing performance. In R-AODV, the loss of RREP messages is taken into account. The R-AODV protocol discovers routes on demand using a reverse route discovery procedure in which the source node and the destination node play symmetric roles with respect to sending control messages. Thus, after receiving a RREQ message, the destination node floods a reverse request (R-RREQ) to find the source node. When the source node receives an R-RREQ message, data packet transmission starts immediately. The source node initiates the route discovery procedure by broadcasting a RREQ to its neighbors; whenever the source node issues a RREQ, the broadcast ID is incremented by one, and the RREQ is broadcast to all other nodes in the network. When an intermediate node receives a RREQ, it checks whether it has already received a RREQ with the same broadcast ID and source address. The node caches the broadcast ID and source address and drops redundant RREQ messages.
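The duplicate-suppression step described above can be sketched as follows. This is an illustrative Python fragment, not the authors' NS-2 implementation; the names (`make_node`, `handle_rreq`, the dictionary fields) are hypothetical.

```python
# Illustrative sketch of RREQ duplicate suppression in R-AODV.
# Each node caches (source address, broadcast ID) pairs and drops
# any RREQ it has already seen; fresh RREQs are rebroadcast with
# an incremented hop count. All names here are hypothetical.

def make_node():
    return {"seen": set()}          # cache of (source, broadcast_id)

def handle_rreq(node, rreq):
    """Return 'drop' for a redundant RREQ, else 'forward'."""
    key = (rreq["source"], rreq["broadcast_id"])
    if key in node["seen"]:
        return "drop"               # redundant copy, silently discarded
    node["seen"].add(key)
    rreq["hop_count"] += 1          # incremented before rebroadcast
    return "forward"
```

The same check applies to R-RREQ messages on their way back: a node forwards the first copy and drops the rest.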
When a broadcast R-RREQ message arrives at an intermediate node, the node checks it for redundancy: if it has already received the same message, the message is dropped; otherwise it is forwarded to the next nodes. The RREQ packet contains the following fields:

Table I: RREQ Packet Format of R-AODV
Type | Reserved | Hop Count | Broadcast ID | Destination IP Address | Destination Sequence Number | Source IP Address | Source Sequence Number | Request Time

When the destination node receives the first route request message, it generates a reverse request (R-RREQ) message and broadcasts it to its neighbor nodes. The R-RREQ packet contains the following fields:

Table II: R-RREQ Packet Format of R-AODV
Type | Reserved | Hop Count | Broadcast ID | Destination IP Address | Destination Sequence Number | Source IP Address | Reply Time

When the source node receives the first reverse request message, it starts packet transmission; R-RREQs that arrive later are saved for future use.

III. Multicast Ad-hoc On-demand Distance Vector Routing Protocol (MAODV)
MAODV is the multicast extension of AODV and, like AODV, is an on-demand (reactive) routing protocol for ad hoc networks. MAODV [7][8] handles multicast traffic, i.e. it sends out multicast data packets. It creates a multicast group tree composed of all the group member nodes; nodes that are not group members act as routers. Thus all group member nodes, together with all tree member nodes, belong to the group tree. In every multicast tree, one of the group member nodes becomes the group leader, which is responsible for maintaining the group tree by broadcasting Group-Hello (GRPH) messages periodically throughout the whole network. Every node in the network maintains three tables. The first is the Unicast Route Table, which records the next hop on routes to other destinations for unicast traffic. The second, known as the Multicast Route Table, records the next hops in the tree structure of each multicast group.
Each node and its neighboring tree node are connected in either the downstream or the upstream direction, depending on position: if the neighbor node is one hop nearer to the group leader, the direction is upstream; otherwise, the direction is downstream. Every node in the tree has exactly one upstream link, except the group leader, which has no upstream node. The third table is the Group Leader Table. It stores the currently known multicast group address, the group leader address and the next-hop address.
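For illustration, the three per-node tables described above can be modeled as simple dictionaries. This is a hypothetical Python sketch of the data layout only, not MAODV's actual implementation; all field and function names are invented for the example.

```python
# Hypothetical sketch of the three tables each MAODV node keeps.
# Field names are illustrative, chosen to match the description above.

def make_maodv_node():
    return {
        # Unicast Route Table: destination -> next hop (unicast traffic)
        "unicast_routes": {},
        # Multicast Route Table: group address -> per-group tree links
        "multicast_routes": {},
        # Group Leader Table: group address -> (leader address, next hop)
        "group_leaders": {},
    }

def add_multicast_entry(node, group, upstream, downstreams):
    """Record this node's links in the tree of one multicast group.
    Every tree node has exactly one upstream link, except the group
    leader, whose upstream is None."""
    node["multicast_routes"][group] = {
        "upstream": upstream,            # next hop toward the leader
        "downstreams": set(downstreams), # next hops away from the leader
    }
```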

A. Route Discovery and Maintenance
In MAODV, any node in the network may need to send out multicast traffic. If the data source node is not a tree member, no packet can reach the multicast group members directly; in this case the process has two steps. First, a route is established from the data source node to a tree member. Then the tree member receives the multicast data packets and propagates them through the whole tree, reaching every group member node. This mechanism is used for route discovery and maintenance when propagating data to a specific group address. In the route discovery process, the source node already knows a route to the group leader if it has an entry in its Group Leader Table [9], where this information is stored. If this is the first time the node sends a RREQ, the RREQ packet can be unicast toward the group leader. As the RREQ travels through the network, a reverse route toward the source node is constructed at each hop so that the RREP packet can be sent back. When the RREP is sent back to the source node along this reverse route, every intermediate node and the source node automatically update their route to that tree member as the destination. In this first step, the end node is a tree member; the construction of the multicast tree is accomplished in the second step.

Figure 1: A simple multicast tree network

B. Multicast Tree Construction
The control packets used in MAODV for tree construction, i.e. RREQ and RREP, are borrowed from AODV. When a node that is not a tree member wants to join, it initiates a RREQ with a join flag (RREQ-J) and creates a multicast route table entry. After joining the multicast group, it identifies itself as a group member, but with an as-yet-unknown group leader address.
Generally, the RREQ-J is flooded in the network. It is first sent to the multicast group member nodes; if the node has a Group Leader Table entry, the RREQ-J can instead be sent directly toward the group leader.

C. Multicast Tree Maintenance
The multicast tree maintenance procedure consists of Periodic Group-Hello Propagation, Neighbor Connectivity Maintenance, Group Leader Selection and Tree Merge.

D. Periodic Group-Hello Propagation
Here the group leader performs the main function: it initiates a Group-Hello (GRPH) message throughout the whole network periodically to announce the existence of that group and its current status. Each tree member node receives the GRPH from its own upstream and updates its current group sequence number, current group leader and current distance from the group leader. The GRPH messages are propagated through the tree structure gradually, from upstream to downstream. After receiving a GRPH message, a tree member node first checks the group leader information stored in its Multicast Route Table. If the same group leader address is specified, the GRPH is discarded and the node waits for the next GRPH from its own upstream. If its Multicast Route Table records different group leader information, i.e. another tree exists with the same multicast group address, the two trees can be connected.

E. Neighbor Connectivity Maintenance
Neighbor connectivity is maintained by repairing broken links. The downstream node of a link in the tree realizes that the link is broken when it has not received any broadcast message from that neighbor within a specific time. The downstream node then deletes the next hop and sends out a RREQ-J to find a new branch for rejoining the multicast group. The tree member nodes check their own hop count to the group leader, so that the old branch and the node's own downstream nodes do not respond to the RREQ-J.
If the requesting node tries several times (RREQ_RETRIES) to repair the branch but receives no RREP-J, a network partition is assumed to have occurred; the tree partitions are merged later, when connectivity is restored.

F. Group Leader Selection

In a partitioned tree, a new group leader must be chosen if the group leader revokes its group membership. If the current node is itself a group member, it becomes the new group leader of the partitioned tree. Otherwise, it forces one of its tree neighbors to become the leader: if it has exactly one downstream node, it removes the entry for that group from its Multicast Route Table and sends a multicast activation (MACT) message to that downstream node, so that all nodes of the tree retain a leader. If there is more than one downstream node, the current node selects one of them, makes that link its upstream, and sends a group leader flag (MACT-GL) toward that node, designating it the new group leader. A node that receives a MACT-GL from upstream changes that upstream direction into downstream; if it cannot become the leader itself, it continues the above procedure.

Figure 2: A simple multicast tree with Group Leader

G. Tree Merge
When tree member nodes receive GRPH packets generated by another group leader, the tree whose leader has the larger address is merged into the tree with the smaller group leader address. After obtaining its leader's permission for reconstruction of the tree, a tree member initiates the merge by unicasting a RREQ with a repair flag (RREQ-R) to the other group leader. If a node does not have permission to reconstruct the tree, the leader acknowledges the request with a RREP carrying a repair flag (RREP-R); on receipt of the RREQ-R, the RREP-R follows the reverse route back to the requesting node. If another tree for that group has a group leader with a larger address, the RREQ-R/RREP-R cycle is discarded and the group leader does not allow any other tree member to recreate the tree. IV.
Performance Evaluation
A. Simulation
The simulations are carried out using Network Simulator 2 (NS-2.34) [11], which is particularly popular in the wireless networking community. The performance of RAODV is evaluated by comparing it with the MAODV protocol while varying the number of nodes. In our simulation, the MAC protocol is the IEEE 802.11 Distributed Coordination Function (DCF) [10], and the traffic sources are constant bit rate (CBR). Nodes move randomly according to the random waypoint mobility model, with velocities varied within a fixed range; the topology was set to a 500 x 500 grid. The data packet size is 512 bytes, and the simulation run time was fixed (in ms). In this scenario, the number of nodes in the network is increased gradually from 20 to 500, so that node density stays approximately constant; this properly reflects the scalability of the routing protocols. Each point in the graphs is an average over 5 simulation samples. Three performance metrics are evaluated:

B. End-to-End Delay
The average delay includes all possible delays caused by route discovery, propagation, transfer time, etc.:

Delay = sum over delivered packets of (receive time - send time) / (total number of connected pairs)

C. Packet Delivery Ratio (PDR)
The ratio of the number of received data packets to the number of sent data packets.

D. Overhead
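As a concrete illustration of the three metrics, the following Python sketch computes them from a list of per-packet records. The record layout and function names are hypothetical conveniences for the example; this is not the NS-2 trace format or the authors' evaluation scripts.

```python
# Hypothetical sketch: computing the three metrics of Section IV
# from per-packet records. Each data-packet record holds the send
# time and, if delivered, the receive time; control packets count
# toward overhead. Illustrative only, not the NS-2 trace format.

def end_to_end_delay(packets, connected_pairs):
    """Delay = sum(receive - send) over delivered packets,
    divided by the total number of connected pairs."""
    total = sum(p["recv"] - p["send"] for p in packets if p["recv"] is not None)
    return total / connected_pairs

def packet_delivery_ratio(packets):
    """PDR = received data packets / sent data packets."""
    received = sum(1 for p in packets if p["recv"] is not None)
    return received / len(packets)

def overhead(control_packet_counts):
    """Overhead = total control packets sent by all sensor nodes."""
    return sum(control_packet_counts)
```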

The overhead is the sum of all the control packets sent by all the sensor nodes in the network to discover and maintain routes to the sink node.

E. Results
In this scenario the scalability of the protocols is studied. All nodes are mobile and move randomly from place to place, and energy is distributed uniformly among them. The results are shown below, plotting end-to-end delay, Packet Delivery Ratio (PDR) and overhead against the number of nodes. PDR is better for R-AODV than for MAODV as the number of nodes is varied from 20 to 500. Delay, i.e. the time taken by successful packets to travel from source to destination, is also evaluated against the number of nodes. For overhead, RAODV again performs better than MAODV, since MAODV carries multicast control traffic.

Figure 3: Delay vs. No. of nodes
Figure 4: PDR vs. No. of nodes

Figure 5: Overhead vs. No. of nodes

V. Conclusions
This study compares two on-demand routing protocols, the Reverse Ad-hoc On-demand Distance Vector routing protocol and the Multicast Ad-hoc On-demand Distance Vector routing protocol, with respect to routing metrics, throughput and mobility, using the freely available, Tcl-based NS-2 simulator. The performance comparison and simulations were carried out in identically sized topologies with mobile nodes. Performance was considered with respect to metrics such as end-to-end delay, overhead and PDR, and the results for the two protocols were plotted with Xgraph. The simulation results illustrate that RAODV delivers packets to a destination with less delay than MAODV. For overhead, RAODV also responds better than MAODV. Future work will address implementing a new routing protocol that handles the various challenges under which a routing protocol can provide energy efficiency in wireless sensor or wireless mesh networks. With all these challenges, an interesting, stirring time lies ahead of us in the area of hybrid networks.

VI. References
[1] K. Akkaya and M. Younis, "A survey of routing protocols in wireless sensor networks," Elsevier Ad Hoc Networks Journal, 2005.
[2] I. F. Akyildiz, W. Su, Y. Sankarasubramaniam and E. Cayirci, "A survey on sensor networks," IEEE Communications Magazine, Vol. 40, No. 8, August 2002.
[3] Royer, E. M. and Perkins, C.
E., "Multicast operation of the ad-hoc on-demand distance vector routing protocol," Proceedings of the 5th Annual ACM/IEEE International Conference on Mobile Computing and Networking (MobiCom '99), Seattle, WA, USA, August 1999.
[4] H. D. Trung, W. Benjapolakul and P. M. Duc, "Performance evaluation and comparison of different ad hoc routing protocols," Department of Electrical Engineering, Chulalongkorn University, Bangkok, Thailand, May 2007.
[5] Gowrishankar S., T. G. Basavaraju and Subir Kumar Sarkar, "Issues in wireless sensor networks," Proceedings of the 2008 International Conference of Computer Science and Engineering (ICCSE 2008), London, U.K., 2-4 July 2008.
[6] E. M. Royer and C.-K. Toh, "A review of current routing protocols for ad hoc mobile wireless networks," IEEE Personal Communications, April 1999.
[7] Kush, A. and Taneja, S., "A survey of routing protocols in mobile adhoc networks," International Journal of Innovation, Management and Technology, 1(3), 2010.
[8] T. G. Basavaraju and Subir Kumar Sarkar, Adhoc Mobile Wireless Networks: Principles, Protocols and Applications, Auerbach Publications.
[9] Royer, E. M. and Perkins, C. E., "Multicast Ad hoc On-Demand Distance Vector (MAODV) routing," IETF Internet Draft, draft-ietf-manet-maodv-00.txt, 2000.
[10] "Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) Specifications," IEEE Standard 802.11, IEEE Standards Dept.
[11] The Network Simulator ns-2, /nsnam/ns.

Project Work

Modelling of Road Traffic Signal Using Atmega-8µc
Sudip Mandal 1 and Krittibas Bairagi 2
1,2 Department of Electronics and Communication Engineering, Global Institute of Management & Technology, NH-34, Palpara More, Krishnanagar, India
E-mail: sudip.mandal007@gmail.com 1 and krittibas1992@gmail.com 2

Abstract: The control of traffic at road junctions purely by human effort has proved inefficient owing to the increasing number of motorists and the growing complexity of road networks. This inadequacy brought about the use of discrete solid-state electronics and, later, computer-controlled microprocessors, but the intelligence of these methods was still too limited to meet the demands of the modern age; hence the need for a microcontroller-based traffic light control system. This paper explores the design and implementation of a microcontroller-based traffic light system for road intersection control. The traffic light system is designed around an Atmega-8 microcontroller, a power section, a crystal oscillator, light emitting diodes (LEDs) and 7-segment displays. For effective traffic control, the Atmega-8 is programmed via an IC programmer with firmware written using AVR Studio 4. The developed traffic light control system is tested by constructing a prototype that resembles the real application. Besides, the developed system can be employed as a training kit for learning traffic light control system design and operation, and as a teaching aid for various road users.

Keywords: Traffic light; Microcontroller; Atmega-8µc; AVR Studio 4; 7-segment display

I. Introduction
Traffic congestion is a phenomenon with a huge impact on the transportation system of a country.
This causes many problems, especially when there are emergencies at traffic light intersections that are always busy with many vehicles. A traffic light controller system [1] is designed to solve these problems. Traffic lights, also known as traffic signals, traffic lamps, signal lights, stop lights and robots, and known technically as traffic control signals, are signaling devices positioned at road intersections, pedestrian crossings and other locations to control competing flows of traffic [2]. The first manually operated, gas-lit traffic light was installed in 1868 in London, although it was short-lived due to an explosion. The first safe, automatic electric traffic lights were installed in the United States in the early 20th century. Traffic lights alternate the right of way accorded to road users by displaying lights of standard colors (red, yellow and green) following a universal color code. The typical sequence of color phases is:
1. The green light allows traffic to proceed in the direction denoted, if it is safe to do so.
2. The yellow light warns that the signal is about to change from green to red (and from red to green in certain countries, such as England). The action required of drivers varies: some jurisdictions require drivers to stop if it is safe to do so, while others allow drivers to go through the intersection if it is safe to do so.
3. A flashing yellow indication is a warning signal; the red signal prohibits any traffic from proceeding; a flashing red indication is treated as a stop sign.
4. Traffic signals go into a flashing mode if the controller detects a problem, such as a program that tries to display green lights to conflicting traffic. The signal may display flashing yellow to the main road and flashing red to the side road, or flashing red in all directions. Flashing operation can also be used deliberately.
The study in [3] designed and implemented a suitable algorithm, and its simulation, for an intelligent traffic signal simulator. The system developed was able to sense the presence or absence of vehicles within a certain range and set the appropriate durations for the traffic signals to react accordingly. In [4], a microcontroller-based versatile traffic light control system/trainer was implemented, while the concept proposed in [5] involves the use of wireless sensor networks to sense the presence of traffic near junctions and route the traffic based on traffic density in the desired direction. To make traffic light control more efficient, [6] exploited an emerging technique called the intelligent traffic light controller. The main objective of our project is to highlight road traffic control. In this project we also wrote a microcontroller program [7]-[10] and built a low-cost model with four signal posts, aimed at avoiding accidents and thereby not only saving human lives but also reducing the cost of damage to cars and other property.

S. Mandal et al., Modelling of Road Traffic Signal Using Atmega-8µc, Global Journal on Advancement in Engineering and Science, 2(1), March 2016, pp.

II. Design Methodology
Our project has four signal posts, in which the two opposite posts give the same signal at the same time: for example, posts A and C show the same signal at the same instant, and likewise posts B and D. We therefore wire posts A and C to the same connections, and connect posts B and D to each other, so the number of wires is reduced by half. Now we only have to write programs for two signal posts, as the other two are connected to them. Each signal post contains one seven-segment display and three LEDs: red, green and yellow. The program works as follows: the remaining time is shown on the seven-segment display together with the red signal; after 10 s the yellow signal flashes 4 times; then the green signal glows for 10 s. Needless to say, while post A shows the countdown and the red signal, post C shows green, and it stays green as long as post A is red; the same holds for posts B and D. This is achieved simply by connecting the green signal of each post to the red signal of the opposite post, so we do not need any program for the green signal. Thus we have to write two routines: one shows the time and the red signal, the other blinks the yellow signal.

Figure 1: Signal post connection
Figure 2: Atmega 8 Microcontroller

We use the ATmega 8 IC for our project and AVR Studio 4 to write the program, in the C language. This IC has three I/O ports: port B, port C and port D, with 8, 7 and 8 pins respectively. First we write a program for post A: the time is shown on the seven-segment display, the red signal glows, then the yellow signal blinks, and after that the green signal glows. A seven-segment display has 10 pins: 8 for the LEDs and 2 for ground. Of the 8 LED pins we need only 7, as we do not need the dot LED of the segment.
For the red signal we need one more pin; we do not need extra ground wires because all the ground pins are connected together. So, counting the red signal, 8 wires are to be connected. Port D has 8 pins, port D.0 to port D.7. We keep port D.0 always at 1 to glow the red signal, while port D.1 to port D.7 drive the time display.

Figure 3: 7-segment LED display
Figure 4: Displaying 1

If we want to show 1 on the seven-segment display, then segments b and c are 1 and the others are 0. The code is:

Table 1: Code to display 1 (with red signal on)
Pin:     D.7  D.6  D.5  D.4  D.3  D.2  D.1  D.0
Segment:  a    b    c    d    e    f    g   Red
Value:    0    1    1    0    0    0    0    1

Similarly we can display the other numbers; the corresponding codes are given below.

Table 2: Codes for the digits with the red signal

Table 2 assigns each digit its standard seven-segment pattern on pins D.7-D.1 (segments a-g), with D.0 (Red) held at 1; an all-zero pattern turns the display OFF.

We use port C.0 to blink the yellow signal; the idea is simply to toggle port C.0. The code is:

Table 3: Code for the yellow signal
Pins C.6-C.1 are unused; port C.0 alternates between 1 (yellow ON) and 0 (yellow OFF).

After the yellow signal has flashed, the green signal of the same post (post A) and the red signal of the other post (post B) are turned ON. Here we face a problem: the pins port B.6 and port B.7 are connected to the crystal oscillator, so we cannot use these two pins as outputs. To complete our program we therefore use port C.1 and port C.2 instead of port B.6 and port B.7; apart from this, everything is the same as before.

Table 4: Code for red and green
Same segment codes as Table 2, with segment a on C.2, b on C.1, c-g on B.5-B.1 and Red on B.0.

After this the yellow signal blinks again, with the same routine. This is a complete cycle, and it is repeated indefinitely. After writing the program in AVR Studio 4 we run and build it, obtaining a *.hex file; this program file then has to be burned onto the ATmega 8 IC, for which we use an educational-purpose programmer. The following figure shows the flowchart of the program.
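The digit-to-code mapping described above can be sketched in Python for illustration (the actual firmware is AVR C). Segments a-g map to bits D.7-D.1 and the red signal to bit D.0, so each digit's port value is its standard seven-segment pattern placed on the upper seven bits, with the red bit set:

```python
# Illustrative sketch of the port-D codes of Table 2: standard
# seven-segment patterns with segment a on bit 7, ..., g on bit 1,
# and the red LED on bit 0 (kept at 1 while counting down).

SEGMENTS = {                    # which of segments a..g light up per digit
    0: "abcdef", 1: "bc",     2: "abdeg", 3: "abcdg",   4: "bcfg",
    5: "acdfg",  6: "acdefg", 7: "abc",   8: "abcdefg", 9: "abcdfg",
}

def port_d_code(digit, red=True):
    """Byte written to port D: a->D.7, ..., g->D.1, red->D.0."""
    value = 0
    for seg in SEGMENTS[digit]:
        bit = 7 - (ord(seg) - ord("a"))   # a->7, b->6, ..., g->1
        value |= 1 << bit
    if red:
        value |= 1                        # D.0 drives the red LED
    return value
```

For example, digit 1 lights segments b and c, giving 0110 0001 with the red bit, which matches Table 1.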

Figure 5: Flowchart of the program

III. Model and Output
To build the traffic signal model we need the following items: an ATmega 8 microcontroller IC, a 40-pin IC base, a 12 MHz crystal oscillator, two 22 pF capacitors, a ceramic capacitor (code 104), a tact switch, a Vero board, a 10 kΩ resistor, 16 LEDs (5 green, 5 yellow, 6 red), 4 seven-segment LED displays, a 5 V power supply, wire and a programmer. In general, the green and red signals are shown for 60 seconds and yellow for 5 seconds; as our project is small and time-limited, we changed these times to 10 and 4 seconds respectively. In our model, two signal posts first show the red signal with the countdown while the others show green. After 10 seconds the green and red signals stop and all signal posts show the flashing yellow, indicating that the previous signals are about to change; the signals then alternate between the posts. After another 10 seconds the green and red signals stop again, and all posts again show the flashing yellow.

Figure 6: Circuit diagram of the model
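The alternating cycle described above (10 s of red with countdown, 4 yellow flashes, then roles swapped) can be sketched as a small simulation. This Python fragment is illustrative only, with hypothetical names; it is not the AVR firmware.

```python
# Illustrative simulation of one full signal cycle for the paired posts.
# Posts A/C share one controller and B/D the other; each post's green
# is wired to the opposite post's red, so only the "red countdown" and
# "yellow flash" phases need to be generated by the program.

RED_TIME = 10       # seconds of red (and countdown display)
YELLOW_FLASHES = 4  # yellow blinks between phases

def one_cycle(a_red_first=True):
    """Yield (phase, first pair state, second pair state) per step."""
    first, second = ("A/C", "B/D") if a_red_first else ("B/D", "A/C")
    for t in range(RED_TIME, 0, -1):      # countdown shown on 7-segment
        yield ("count", f"{first} red {t}", f"{second} green")
    for _ in range(YELLOW_FLASHES):       # all posts flash yellow
        yield ("flash", "all yellow ON", "all yellow OFF")
    for t in range(RED_TIME, 0, -1):      # roles swap between the pairs
        yield ("count", f"{second} red {t}", f"{first} green")
    for _ in range(YELLOW_FLASHES):
        yield ("flash", "all yellow ON", "all yellow OFF")
```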

The following figures show the designed model and a portion of it while functioning.

Figure 7: Designed model of the traffic signal
Figure 8: A screenshot during RED

IV. Conclusion
In this project we first gave a brief account of the road traffic signal: what a traffic signal is, which signal indicates what, and how it works automatically. To aid the reader's understanding we wrote a program and discussed it, and we also built a model with four signal posts and discussed the connections among them. We intend to extend the project with an LED display board showing the current state of the signal and the remaining time, and with two extra switches to change the signal manually. We also faced a problem in this project: the two output pins PORT B.6 and PORT B.7 are used by the crystal oscillator, so when we try to use these two pins as outputs the IC stops working and the signal remains stuck in its current position. We hope to overcome this problem when we next revise the project.

V. References
[1] N. M. Z. Hashim, A. S. Jaafar, N. A. Ali, L. Salahuddin, N. R. Mohamad and M. A. Ibrahim, "Traffic light control system for emergency vehicles using radio frequency," IOSR Journal of Engineering, Vol. 3, No. 7.
[2] Traffic Light -
[3] A. Albagul, M. Hrairi, Wahyudi and M. F. Hidayathullah, "Design and development of sensor based traffic light system," American Journal of Applied Sciences, Vol. 3, No. 3.
[4] N. V. Ifechi, "Design and implementation of a microcontroller-based versatile Y- and cross-junction traffic light control system/trainer," M.Eng thesis in Electronics and Computer Engineering, Nnamdi Azikiwe University, Awka, Nigeria, 2010.
[5] V. Viswanathan and V.
Santhanam, "Intelligent traffic signal control using wireless sensor networks," Proceedings of the 2nd International Conference on Advances in Electrical and Electronics Engineering.
[6] S. Rajeswari, "Design of sophisticated traffic light control system," Middle-East Journal of Scientific Research, Vol. 12, No. 19.
[7] Traffic Light Controller -
[8] Microcontroller programming: Making a set of traffic lights - Making-a-set-of-traf/
[9] Traffic Light -
[10] Traffic Light Control Electronic Project using IC 4017 & 555 Timer -

45 Special Issue: Conference Proceeding of i-con-2016 Global Journal on Advancement in Engineering and Science (GJAES) Vol. 2, Issue 1 : March-2016, ISSN (Print) :

Project Work
Smoke Detector Using LDR
Saswati De 1 and Amit Kumar Singh 2
1,2 Department of Electronics and Communication Engineering, Global Institute of Management and Technology, NH-34, Palpara More, Krishnagar, India
sas_03_23@yahoo.com 1, amitkumarsingh25@yahoo.in 2

Abstract: Of the three elements water, fire and air, fire can be the most devastating. There are household fires, industrial fires, office fires, and many more. Fire is devastating because it kills. To control and extinguish fires we have devised fire extinguishers and fire departments, but the turning point came when scientists came up with ways to detect a fire and warn the people in the surrounding area to flee. A fire alarm works on the principle of detecting a fire by the smoke it produces: when the smoke reaches the alarm, it starts ringing and alerts the people around it. This paper presents a smoke detection method that implements such a fire detection alarm using an LDR (Light Dependent Resistor); the LDR senses the light level and sets off the alarm. A smoke detector is a device that senses smoke, typically as an indicator of fire, and issues an audible alarm to alert nearby people to a potential fire. It detects fire in its early stages, which makes it attractive for important military, social security and commercial applications. The paper discusses the fire alarm system, its composition and its working principle. Test results from the prototype show that the alarm system meets the design requirements.

Keywords: LDR; diode; fire alarm; smoke detector

I. Introduction
A smoke detector is a device that detects smoke, the result of fire. It has two parts: a sensor to sense the smoke and an electronic horn to alert people.
When smoke reaches the alarm it starts ringing to alert the people nearby. Fire alarms have a wide range of applications in offices, industries, railways, etc. The introduction and widespread adoption of residential smoke alarms has been tremendously successful in saving countless lives in residential fires. Smoke alarms are reliable and economical to deploy, requiring only occasional maintenance and battery replacement. The fire alarm designer details the specific components, arrangements and interfaces necessary to accomplish these goals [1]. The majority of smoke alarms in current use are based on sensor technologies developed more than 40 years ago. Since the introduction of residential smoke alarms in the 1970s, numerous incremental improvements have been made to the implementation of these technologies, but the underlying sensor technology has remained relatively static. There are two basic sensor types: ionization and photoelectric. Ionization and photoelectric aerosol sensors are sensitive to various types of smoke aerosols but also, unfortunately, to other aerosols, including cooking fumes, dust and fog [2], [3]. A working smoke detector can save lives; fires have caused huge losses of life and property, so smoke detection equipment is a necessity in buildings. The alarm is activated when the system detects the occurrence of a fire at a certain position, which makes the device very useful for security purposes. Wireless sensor networks are dense wireless networks of small, low-cost sensors that collect and disseminate environmental data [4]. Various circuits can be designed for a smoke detector [5]. For residential use, smoke detectors should be installed in each sleeping area and in the vicinity of bedrooms, especially in rooms where AC or phone connections are present.
It is recommended that each residential apartment have more than two smoke detectors. A smoke detector with a smoke alarm, and sometimes an additional alarm system, should be installed in the bedrooms and on each floor of every residence [6]. The alarm should be clearly audible with all doors closed. If any component of the system fails to perform, it should be repaired immediately, and the system should be checked regularly to make sure all components are working; alarm systems commonly fail to perform because of inadequate maintenance [7], [8]. Maintenance is therefore the key to a perfectly working alarm system. In this project we have designed a simple smoke detector circuit [9]. Given these concerns, improvements in residential smoke alarms could have a huge impact on residential fire safety, reducing the number of injuries and deaths.

II. Hardware Implementation
This alarm system consists of a number of devices working together. Here we build a simple alarm circuit based on a Light Dependent Resistor (LDR) and a light source. The alarm works by sensing the light falling on the LDR.

46 S. De et al., Smoke Detector Using LDR, Global Journal on Advancement in Engineering and Science, 2(1), March 2016, pp

A. LDR
An LDR is a light-controlled variable resistor whose resistance varies significantly with the light falling on it: the resistance of a photoresistor decreases with increasing incident light intensity. It exhibits photoconductivity. In this alarm circuit the LDR works as the sensor. It is made of a high-resistance semiconductor. This circuit-based project explains the principle of operation of the LDR; similar circuits have various applications such as shadow alarms and automatic night/morning lamps. As the name suggests, an LDR is a type of resistor whose behaviour depends only on the light falling on it; its output varies directly with the amount of light. In general, LDR resistance is minimum (ideally zero) when it receives the maximum amount of light and rises to a maximum (ideally infinite) when no light falls on it. A critical factor in an LDR's operation is the frequency of the light, which must cross a threshold value for the LDR to respond. A Light Dependent Resistor (LDR), or photoresistor, is a device whose resistivity is a function of the incident electromagnetic radiation; hence it is a light-sensitive device. LDRs are also called photoconductors, photoconductive cells or simply photocells. They are made of semiconductor materials with high resistance. Several symbols are used to indicate an LDR; one of the most common is shown in the figure below, where the arrow indicates light falling on the device. An LDR works on the principle of photoconductivity, an optical phenomenon in which the material's conductivity increases (hence its resistivity reduces) when light is absorbed by the material. When light falls on the device, i.e. when photons reach it, electrons in the valence band of the semiconductor are excited to the conduction band, provided the incident photons have energy greater than the band gap of the material. Hence, when light of sufficient energy is incident on the device, more and more electrons are excited to the conduction band, producing a large number of charge carriers; more current flows and the resistance of the device is said to have decreased. This is the working principle of the LDR. LDRs are cheap and simple in structure. They are often used as light sensors, wherever the absence or presence of light must be detected, as in a camera light meter, street lamps, alarm clocks, burglar alarm circuits, light intensity meters, or for counting packages moving on a conveyor belt.

Figure 1: LDR

B. Circuit Description
The circuit uses readily available components and can be easily constructed. In the dark a photoresistor has a resistance of several megaohms (MΩ), while in the light its resistance can be as low as a few hundred ohms. The components used are: R1 = 2.2 ohm POT, R2 = 220 ohm, R3 = 10K POT, R4 = 10K POT, R5 = LDR, R6 = 1K, L1 = 9 V bulb, Q1 = BC107, IC1 = 7805, IC2 = UM66, IC3 = TDA2002, D1 = 1N4007, D2 = 1N4007, C1 = 470 µF, C2 = 1000 µF, K1 = speaker. Instead of the bulb, a bright LED with a 1K series resistor can be used. POT R4 adjusts the sensitivity of the alarm and POT R3 varies its volume. The circuit can be powered from a 9 V battery or a 9 V DC power supply. When there is no smoke, the light from the bulb falls directly on the LDR; the LDR resistance is low, so the voltage across it stays below 0.6 V, the transistor remains OFF and nothing happens.
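The no-smoke/smoke switching behaviour described above can be sketched numerically. This is a hedged illustration, not the exact circuit analysis: the divider topology, the series resistance and the LDR resistance values are assumptions chosen only to show the base voltage crossing the ~0.6 V turn-on threshold of a silicon transistor such as the BC107.

```python
# A hedged sketch of the switching logic described in the text. The divider
# topology and the resistance values are assumptions for illustration only.
def ldr_node_voltage(vcc, r_ldr, r_series):
    """Voltage across the LDR in a simple divider fed from vcc."""
    return vcc * r_ldr / (r_ldr + r_series)

def transistor_on(v_base, v_be_on=0.6):
    """A silicon BJT such as the BC107 conducts once Vbe exceeds ~0.6 V."""
    return v_base > v_be_on

# Lit by the bulb: the LDR is a few hundred ohms, so the node voltage stays low.
v_lit = ldr_node_voltage(9.0, r_ldr=400.0, r_series=10_000.0)
# Smoke masks the light: the LDR climbs to the megaohm range, voltage jumps.
v_smoke = ldr_node_voltage(9.0, r_ldr=2e6, r_series=10_000.0)

assert not transistor_on(v_lit)   # alarm stays silent
assert transistor_on(v_smoke)     # transistor switches ON, alarm sounds
```

With these assumed values the lit-state base voltage is about 0.35 V (transistor OFF) and the smoke-state voltage rises to nearly the full supply, which is the ON/OFF contrast the circuit relies on.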
When there is enough smoke to mask the light falling on the LDR, the LDR resistance increases and so does the voltage across it, which applies a positive voltage to the base of the transistor. The transistor switches ON, powering IC1, which outputs 5 V. This powers the tone generator IC UM66 (IC2) to play a melody, which is amplified by IC3 (TDA2002) to drive the speaker. Resistor R6 protects the transistor when R4 is turned towards low resistance values. Resistors R2 and R1 form a feedback network for the TDA2002, and C1 couples the feedback signal from the junction of R1 and R2 to the inverting input of the same IC. C2 charges up to the positive voltage and increases the on-time of the alarm. Diodes D1 and D2 together drop 1.4 V to give the rated voltage (3.5 V) to the UM66, which cannot withstand more than 4 V. The basic principle of the smoke detector is thus a buzzer and an LDR: when smoke comes between the light source and the surface of the LDR, the light path is cut off and the alarm goes off. In this project we implement this fire detection alarm using an LDR; the LDR senses the light level and sets off the alarm.

Figure 2: Circuit Diagram of Smoke Detector
Figure 3: Implemented Project Prototype

III. Conclusion
This project is based on the simple principle of a smoke detector circuit. We implemented the principle, designed the circuit of the smoke detector, and eventually succeeded in building it. We checked it against the specifications and it operates well when smoke comes between the LED and the LDR (although instead of smoke we also tried other possible obstructions). The circuit detects the obstruction and sets off the alarm, making the buzzer ring at its loudest. The circuit we designed is a small implementation and prototype of the original smoke detector, and in the near future we hope to develop it for heavier uses. The invention of fire alarms was a turning point for fire departments all over the world, because it helps them reach a location before anyone else can even anticipate the fire. This project has helped us gain a lot of knowledge about fire sensing devices and to learn about the LDR more effectively.

IV. References
[1] Peter J. Finley, Jr., "Executive Analysis of Fire Service Operations in Emergency Management", Vineland Fire Department, Vineland, New Jersey.
[2] J. Fleming, "Smoke Detector Technology Research".
[3] A. Cote and P. Bugbee, "Ionization smoke detectors", in Principles of Fire Protection, Quincy, MA: National Fire Protection Association, 1988, p. 249.
[4] M. Tubaishat and S. Madria, "Sensor Networks: An Overview", IEEE Potentials, 2003, 22(2): 20-23.
[5] A. Chenebert, T. P. Breckon and A. Gaszczak, "A Non-temporal Texture Driven Approach to Real-time Fire Detection", Proc. International Conference on Image Processing, IEEE, September 2011.
[6] Molla Shahadat Hossain Lipu, Md. Lushanur Rahman, Tahia Fahrin Karim and Faria Sultana, "Wireless Security Control System & Sensor Network for Smoke & Fire Detection", IEEE, 2010.
[7] Manav Jain and Mohammad Jawaid Siddiqui, "Electronic Fire Alarm", Advance in Electronic and Electric Engineering, Volume 4, Number 2 (2014), pp
[8] Suneel Mudunuru, V. Narasimha Nayak, G. Madhusudhana Rao and K. Sreenivasa Ravi, "Real Time Security Control System for Smoke and Fire Detection using Zigbee", IJCSIT, Vol. 2(6), 2011.
[9] Fire Alarm-

49 Special Issue: Conference Proceeding of i-con-2016 Global Journal on Advancement in Engineering and Science (GJAES) Vol. 2 Issue 1 : March-2016 (ISSN: )

Original Research Work
Comparative Study between Z-N & F-PID Controller for Speed Control of a DC Motor
Sukanya Chatterjee 1, Priyanka Sil 2 and Pijush Dutta 3
1,2,3 Department of Electronics & Communication Engg, Global Institute of Management, India
chatterjee.sukanya@gmail.com 1, priyankasil98@gmail.com 2, pijushdutta009@gmail.com 3

Abstract: Controlling the speed of a separately excited DC motor is one of the critical needs in many industrial plants. In this paper the motor is modelled as a second-order system and its response is studied. The setting and optimization of PID parameters have always been important topics in the automatic control field. The PID controller is the most widely used control strategy in industry; its popularity can be attributed partly to its robust performance and partly to its functional simplicity. Current optimization design methods often find it difficult to meet the system requirements for quickness, reliability and robustness. The aim of this paper is a comparative study of a conventional PID controller tuned by Ziegler-Nichols and a fuzzy-PID controller in the area of speed control. The performance analysis of the conventional PID and the fuzzy-PID controller has been done in MATLAB and Simulink, and a comparison of various time-domain parameters shows that the fuzzy-PID controller has smaller overshoot and faster response than the PID controller.

Keywords: Transfer function of DC motor, ZN, F-PID, Simulink

I. Introduction
The development of high-performance motor drives is very important in industrial and other applications such as steel rolling mills, electric trains and robotics. The speed of DC motors can be adjusted within wide boundaries, which provides easy controllability and high performance.
Speed control of DC motors by means of armature voltage control was first carried out by Ward Leonard in 1891. The proportional integral derivative (PID) controller operates the majority of control systems in the world. The major problem in applying a conventional control algorithm (PI, PD, PID) in a speed controller is the effect of nonlinearity in a DC motor; nonlinear characteristics such as saturation and friction can degrade the performance of conventional controllers [1], [2]. It has been reported that more than 95% of the controllers in industrial process control applications are of PID type, as no other controller matches the simplicity, clear functionality, applicability and ease of use offered by the PID controller [3], [4]. PID controllers provide robust and reliable performance for most systems if the PID parameters are tuned properly. However, an accurate nonlinear model of an actual DC motor is difficult to find, and parameters obtained from system identification may only be approximate values. The field of fuzzy control has been making rapid progress in recent years. Fuzzy logic control (FLC) is one of the most successful applications of fuzzy set theory, introduced by L. A. Zadeh [11]; Mamdani [18] applied it in an attempt to control systems that are structurally difficult to model. Fuzzy control theory usually provides nonlinear controllers that are capable of performing different complex nonlinear control actions, even for uncertain nonlinear systems. Unlike conventional control, designing an FLC does not require precise knowledge of the system model such as the poles and zeroes of the system transfer functions. Imitating the way humans learn, the tracking error and the rate of change of the error are the two crucial inputs for the design of such a fuzzy control system [6], [7].
II. Motor Model
The term speed control stands for intentional speed variation carried out manually or automatically. DC motors are most suitable for wide-range speed control and are therefore used in many adjustable-speed drives.

50 S. Chatterjee et al., Comparative Study between Z-N & F-PID Controller for Speed Control of a DC Motor, Global Journal on Advancement in Engineering and Science, 2(1), March 2016, pp

Figure 1: Separately excited DC motor

V_a is the armature voltage (volt), E_b is the back emf of the motor (volt), I_a is the armature current (ampere), R_a is the armature resistance (ohm), L_a is the armature inductance (henry), T_m is the mechanical torque developed (Nm), J_m is the moment of inertia (kg/m²), B_m is the friction coefficient of the motor (Nm/(rad/sec)), and ω is the angular velocity (rad/sec).

In general, the torque generated by a DC motor is proportional to the armature current and the strength of the magnetic field. In this example we assume that the magnetic field is constant and, therefore, that the motor torque is proportional only to the armature current i by a constant factor Kt, as shown in the equation below. This is referred to as an armature-controlled motor.

T = Kt i (1)

The back emf, e, is proportional to the angular velocity of the shaft by a constant factor Ke:

e = Ke dθ/dt (2)

In SI units the motor torque and back emf constants are equal, that is, Kt = Ke; therefore we use K to represent both the motor torque constant and the back emf constant. From the figure above we can derive the following governing equations based on Newton's second law and Kirchhoff's voltage law:

J d²θ/dt² + b dθ/dt = K i (3)
L di/dt + R i = V - K dθ/dt (4)

Applying the Laplace transform, the modelling equations can be expressed in terms of the Laplace variable s:

s(Js + b)Θ(s) = K I(s) (5)
(Ls + R)I(s) = V(s) - K sΘ(s) (6)

We arrive at the following open-loop transfer function by eliminating I(s) between the two equations above, where the rotational speed is considered the output and the armature voltage is considered the input:

sΘ(s)/V(s) = K / [(Js + b)(Ls + R) + K²] (7)
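As a numerical check on equations (3), (4) and (7), the two coupled first-order equations can be integrated directly and the steady-state speed compared with the DC gain of the transfer function. This is a minimal sketch; the parameter values for J, b, K, R and L are illustrative assumptions, not data from the paper.

```python
# Minimal forward-Euler check of the motor equations against eq. (7).
# All parameter values are illustrative assumptions.
J, b = 0.01, 0.1   # rotor inertia (kg*m^2) and viscous friction (N*m*s/rad)
K = 0.01           # motor constant: Kt = Ke = K (SI units)
R, L = 1.0, 0.5    # armature resistance (ohm) and inductance (H)

def step_response(V=1.0, dt=1e-4, t_end=5.0):
    """Integrate J*dw/dt + b*w = K*i and L*di/dt + R*i = V - K*w
    for a constant (step) input V; return the final speed w."""
    w = i = 0.0
    for _ in range(int(t_end / dt)):
        dw = (K * i - b * w) / J
        di = (V - R * i - K * w) / L
        w += dw * dt
        i += di * dt
    return w

w_ss = step_response()
dc_gain = K / (b * R + K**2)   # eq. (7) evaluated at s = 0
assert abs(w_ss - dc_gain) < 1e-3
```

With these assumed values the simulated steady-state speed matches K/(bR + K²), the DC gain of (7), which confirms that the time-domain model and the transfer function agree.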
III. Ziegler-Nichols Method
The Ziegler-Nichols tuning method is a heuristic method of tuning a PID controller, developed by John G. Ziegler and Nathaniel B. Nichols. It is performed by setting the I (integral) and D (derivative) gains to zero. The P (proportional) gain Kp is then increased from zero until it reaches the ultimate gain Ku, at which the output of the control loop oscillates with constant amplitude. Ku and the period of oscillation Pu are then used to set the P, I and D gains depending on the controller used. The closed-loop Z-N tuning formulas are:

Table 1: PID tuning parameters by Ziegler-Nichols
Controller | Kp     | Ti     | Td
P          | 0.5Ku  | -      | -
PI         | 0.45Ku | Pu/1.2 | -
PID        | 0.6Ku  | Pu/2   | Pu/8

IV. Fuzzy Logic Controller
The foundation of fuzzy logic is the simulation of people's opinions and perceptions to control a system. One way to simplify complex systems is to tolerate imprecision, vagueness and uncertainty up to some extent [10]. An expert operator develops a flexible control mechanism using words like "suitable", "not very suitable", "high", "a little high", "much" and "far too much" that are frequently used in everyday life; fuzzy logic control is constructed on these logical relationships. Fuzzy sets are used to represent linguistic variables. Fuzzy set theory was first introduced in 1965 by Zadeh to express and process fuzzy knowledge [11], [12]. There is a strong relationship between fuzzy logic and fuzzy set theory, similar to the relationship between Boolean logic and classical set theory. Figure 2 shows the basic FLC structure.

Figure 2: The structure of the self-tuning fuzzy PID controller

The inputs to the self-tuning fuzzy PID controller are the speed error e(t) and the change in speed error de(t), described by

e(t) = wr(t) - wa(t) (9)
de(t) = e(t) - e(t-1) (10)

Using fuzzy control rules online, the PID parameters KP, KI and KD are adjusted, which constitutes the self-tuning fuzzy PID controller shown in Figure 2.

IV.A. Design of Membership Functions
Figure 3: Membership function for e
Figure 4: Membership function for de
Figure 5: Membership function for Kp
Figure 6: Membership function for Ki
Figure 7: Membership function for Kd
Figure 8: Rule viewer for Kp, Ki & Kd
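The self-tuning structure of Figure 2 (fuzzify e and de, look up gain increments, then apply the PID law) can be sketched as follows. The increment rules below are a simplified stand-in for the rule tables of Tables 2-4, which are not reproduced here, and all numeric values are illustrative assumptions.

```python
# A structural sketch of the self-tuning loop in Figure 2. The rule
# function is a stand-in for Tables 2-4; numbers are assumptions.
def sign3(x, band=0.1):
    """Crude three-level fuzzifier: -1 (negative), 0 (zero), +1 (positive)."""
    return -1 if x < -band else (1 if x > band else 0)

def gain_increments(e, de):
    """Stand-in rule base: raise Kp and Ki for a large error, add damping
    through Kd when the error is changing quickly."""
    s_e, s_de = sign3(e), sign3(de)
    return 0.05 * s_e, 0.01 * s_e, -0.02 * s_de

def pid_step(e, de, integ, gains, dt=0.01):
    """One controller update: adjust (Kp, Ki, Kd), then compute u."""
    dKp, dKi, dKd = gain_increments(e, de)
    Kp, Ki, Kd = (g + d for g, d in zip(gains, (dKp, dKi, dKd)))
    integ += e * dt
    u = Kp * e + Ki * integ + Kd * de
    return u, integ, (Kp, Ki, Kd)

# Base gains roughly the Z-N values of Table 5 (Ki ~ Kp/Ti, Kd ~ Kp*Td).
u, integ, gains = pid_step(e=1.0, de=0.0, integ=0.0, gains=(0.84, 2.58, 0.068))
assert gains[0] > 0.84   # a large positive error nudges Kp upward
```

In the actual controller the increments come from Mamdani inference over the membership functions of Figures 3-7 rather than this three-level lookup; the point of the sketch is only the update order of the loop.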

Figure 9: Rule surface viewer for Kd
Figure 10: Rule surface viewer for Kp
Figure 11: Rule surface viewer for Ki

IV.B. Design of Fuzzy Rules
Table 2: Fuzzy rule table for Kp
Table 3: Fuzzy rule table for Ki
Table 4: Fuzzy rule table for Kd

IV.C. MATLAB Simulation
Figure 12: Manual tuning PID, Ku = 1.4; Pu = 0.65
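The Ziegler-Nichols gains follow mechanically from the measured ultimate gain and period. A minimal sketch of the Table 1 formulas, checked against the values Ku = 1.4 and Pu = 0.65 found by manual tuning:

```python
# Closed-loop Ziegler-Nichols tuning rules (Table 1).
def zn_p(Ku):
    """P controller: proportional gain only."""
    return 0.5 * Ku

def zn_pi(Ku, Pu):
    """PI controller: (Kp, Ti)."""
    return 0.45 * Ku, Pu / 1.2

def zn_pid(Ku, Pu):
    """Full PID controller: (Kp, Ti, Td)."""
    return 0.6 * Ku, Pu / 2, Pu / 8

# Ultimate gain and period measured by manual tuning (Figure 12).
Ku, Pu = 1.4, 0.65
Kp, Ti, Td = zn_pid(Ku, Pu)
assert abs(Kp - 0.84) < 1e-12
assert abs(Ti - 0.325) < 1e-12
assert abs(Td - 0.08125) < 1e-12
```

These are exactly the PID entries of Table 5: Kp = 0.84, Ti = 0.325 and Td = 0.08125.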

Table 5: PID variables from the Ziegler-Nichols method
Controller | Kp           | Ti           | Td
P          | 0.5Ku        | infinity     | 0
PI         | 0.45Ku       | Pu/1.2       | 0
PID        | 0.6Ku = 0.84 | Pu/2 = 0.325 | Pu/8 = 0.08125

Figure 13: Speed vs response time of DC motor
Figure 14: Simulink model of fuzzy-PID controller
Figure 15: Simulink model for speed control of a separately excited DC motor using the self-tuned fuzzy PID controller
Figure 16: Speed vs time response of the fuzzy-tuned PID controlled DC motor

Table 6: Comparison between Ziegler-Nichols PID tuning and fuzzy PID tuning
Controller                   | Rise time | Peak time | Settling time | % Overshoot
PID tuned by Ziegler-Nichols |           |           |               |
Fuzzy PID                    |           |           |               |

V. Conclusion
The three parameters KP, KI and KD of conventional PID control need to be constantly adjusted online in order to achieve better control performance. A fuzzy self-tuning PID controller can automatically adjust the PID parameters in accordance with the speed error and the rate of change of the speed error, so it has better self-adaptive capacity. The fuzzy PID parameter controller has smaller overshoot and shorter rise and settling times than the PID controller tuned by the Z-N method, and it has better dynamic response and steady-state properties. The steady-state error of the self-tuned fuzzy PID is smaller than that of the conventional PID controller. The fuzzy controller adjusts the proportional, integral and derivative gains (KP, KI, KD) of the PID controller according to the speed error and the change in speed error. The self-tuning fuzzy PID has a better dynamic response curve, shorter response time, small overshoot, small steady-state error (SSE) and high steady-state precision compared to the Z-N method.

VI. References
[1] B. J. Chalmers, "Influence of saturation in brushless permanent magnet drives", IEE Proc. B, Electr. Power Appl., Vol. 139, No. 1.
[2] C. T. Johnson and R. D. Lorenz, "Experimental identification of friction and its compensation in precise, position controlled mechanisms", IEEE Trans. Ind. Applicat., Vol. 28, No. 6.
[3] J. Zhang, N. Wang and S. Wang, "A developed method of tuning PID controllers with fuzzy rules for integrating processes", Proceedings of the American Control Conference, Boston, 2004, pp
[4] K. H. Ang, G. Chong and Y. Li, "PID control system analysis, design and technology", IEEE Transactions on Control Systems Technology, Vol. 13, No. 4, 2005, pp
[5] H. X. Li and S. K. Tso, "Quantitative design and analysis of fuzzy proportional-integral-derivative control: a step towards autotuning", International Journal of Systems Science, Vol. 31, No. 5, 2000, pp
[6] Thana Pattaradej, Guanrong Chen and Pitikhate Sooraksa, "Design and Implementation of Fuzzy PID Control of a Bicycle Robot", Integrated Computer-Aided Engineering, Vol. 9, No. 4.
[7] Weiming Tang, Guanrong Chen and Rongde Lu, "A modified fuzzy PI controller for a flexible-joint robot arm with uncertainties", Fuzzy Sets and Systems, 118 (2001).
[8] Pavol Fedor and Daniela Perduková, "A Simple Fuzzy Controller Structure", Acta Electrotechnica et Informatica, No. 4, Vol. 5, 2005.
[9] Maher M. F. Algreer and Yhya R. M. Kuraz, "Design Fuzzy Self Tuning of PID Controller for Chopper-Fed DC Motor Drive".
[10] G. J. Klir and Bo Yuan, Fuzzy Sets and Fuzzy Logic: Theory and Applications.
[11] L. A. Zadeh, "Fuzzy Sets", Information and Control, Vol. 8, 1965.
[12] L. A. Zadeh, "Outline of a new approach to the analysis of complex systems and decision processes", IEEE Trans. Syst. Man Cybern., Vol. SMC-3, pp. 28-44, 1973.
[13] Y. Tipsuwan and Y. Chow, "Fuzzy Logic Microcontroller Implementation for DC Motor Speed Control", IEEE.
[14] M. Chow and A. Menozzi, "On the comparison of emerging and conventional techniques for DC motor control", Proc. IECON, pp
[15] K. Ogata, Modern Control Engineering, Englewood Cliffs, NJ: Prentice Hall, 2001.
[16] P. S. Bhimbhra, Electrical Machinery, New Delhi: Khanna Publishers.
[17] S. M. Metev and V. P. Veiko, Laser Assisted Microtechnology, 2nd ed., R. M. Osgood, Jr., Ed., Berlin, Germany: Springer-Verlag.
[18] P. J. King and E. H. Mamdani, "The application of fuzzy control to industrial processes", Special Interest Discussion Session on Fuzzy Automata and Decision Processes, Sixth IFAC World Congress, Boston, Mass., 1975.

55 Special Issue: Conference Proceeding of i-con-2016 Global Journal on Advancement in Engineering and Science (GJAES) Vol. 2, Issue 1 : March-2016, ISSN (Print):

Research Work
On Unstructured Uncertainty Analysis in Higher Order Actuator Dynamics of the PI Controlled Missile Autopilot
Biraj Guha
Assistant Professor, Department of Electrical Engineering, Techno India, Salt Lake, West Bengal, India - birajguha10@gmail.com

Abstract: An analysis of unstructured uncertainty in the higher order actuator dynamics of the PI controlled two-loop autopilot system of a tail-controlled surface-to-surface missile is presented in this paper. The missile autopilot is characterized by dynamics involving a non-minimum phase zero. The frequency domain analysis of the unstructured uncertainty in the actuator dynamics is carried out by the delta-SR method.

Keywords: Autopilot, non-minimum phase, unstructured uncertainty, actuator, delta-SR method.

Parameter identification:
K_b : Airframe aerodynamic gain, sec⁻¹
M_p : Peak overshoot
K_p : Lateral autopilot control gain, outer loop
Q : Missile body rate in pitch, rad/sec
q_d : Missile body rate demanded in pitch, rad/sec
K_q : Fin servo gain, sec⁻¹
T_a : Incidence lag of airframe, sec
K_s : Forward path gain in state feedback design
H : Elevator deflection, rad
 : Missile flight path rate, rad/sec
 : Damping ratio of actuator
ω_b : Weathercock frequency, rad/sec
Σ : Missile flight path rate demanded, rad/sec
ω_a : Natural frequency of oscillation of actuator, rad/sec
Γ : Missile flight path demanded, rad
M_w : Moment derivative due to pitch incidence α, m⁻¹ sec⁻¹
Z_η : Force derivative due to elevator, m sec⁻²
ξ_a : Damping ratio of actuator
M_η : Moment derivative due to elevator deflection, sec⁻² (semi non-dimensional form)
ζ : A quantity whose inverse determines the location of the non-minimum phase zero in the s-plane

I.
Introduction
A systematic methodology for the linear design of the lateral autopilot of a tail-controlled missile in the pitch plane, for a class of guided missiles characterized by dynamics involving a non-minimum phase zero, has been proposed in [1]. The structured design methodology in [1] provides a means for choosing appropriate controller gains under nominal conditions for the missile flight path based on linear aerodynamic models. A frequency domain analysis of the lateral autopilot for surface-to-surface (SSM) tail-controlled missiles has been carried out in detail, considering the design situation where the missile actuator parameters (natural frequency ω_a and damping ratio ξ_a) are given and the airframe environment is represented by the aerodynamic parameters T_a, m_η, ω_b, ζ² (Table III); the performance achieved for such operating points has been illustrated in [1] with numerical examples. The methodology assures adequate stability margins at these operating points for chosen sets of nominal values of the aerodynamic and actuator parameters. The outer loop of the two-loop missile autopilot configuration is known as the flight path rate demand loop, whereas the inner loop consists of the pitch rate feedback and the servo gain K_q, as mentioned in [1]. The work done in [1] and the results obtained indicate that adequate transient response characteristics are achieved for various flight conditions satisfying the desired specifications, but the steady-state performance shows that there exists an error in tracking a step input in the flight trajectory during the aerodynamic control phase [2]. The PI controller has an edge in steady-state performance, achieving accurate tracking with zero steady-state error for a step input [5], [9], [10]. The work done in [2] utilizes the autopilot configuration in the pitch plane with unity feedback, derived from the lateral autopilot configuration with one accelerometer and one rate gyro in the pitch plane as proposed in [1].
The lateral autopilot control gain K_p in the outer loop has been replaced by a PI controller, as shown in Fig. 1(A), whose tuning constants have been determined by the Ziegler-Nichols design technique in [2], whereas the fin servo gain is kept fixed as obtained in [1] for the same flight conditions (Table III) as explained in [1]. For gain scheduling, it is essential that the controller gains at each equilibrium point produce guaranteed stability for the actual flight condition. The present work is an attempt to develop a simple methodology to ascertain the stability robustness property of the two-loop autopilot in spite of modelling errors due to high-frequency unmodelled dynamics and plant parameter variations.

56 B. Guha et al., On Unstructured Uncertainty Analysis in Higher Order Actuator Dynamics of the PI Controlled Missile Autopilot, Global Journal on Advancement in Engineering and Science, 2(1), March 2016, pp

II. The performance characteristics of the PI controlled missile autopilot in the pitch plane
The PI controlled two-loop missile autopilot configuration as developed in [2] is shown in Fig. 1(A) and is utilized in the present work. The time response corresponding to a unit step input is shown in Fig. 1(B), and the time domain characteristics as obtained in [2] are provided in Table I.

Figure 1(A): PI controlled two-loop missile autopilot in the pitch plane
Figure 1(B): Step response of missile autopilot

In Figure 1(A), G3 represents the second-order actuator dynamics, whereas G1 and G2 are the aerodynamic transfer functions.

Table I: Step response of PI controlled two-loop autopilot
Ref input  | Kp | Ki | %Mp | Rise time (sec) | Peak time (sec) | Settling time (sec) | Steady-state error
Unit step  |    |    |     |                 |                 |                     | 0

III. Effect of Unmodelled Actuator Dynamics and Autopilot Robustness
A. Autopilot performance limitations due to plant uncertainty
Unstructured uncertainty arises typically from truncating a complex model by retaining only some of the dominant modes, which usually lie in the low frequency range. Structured uncertainty is due to uncertainty in the corresponding linear approximation of the plant. Tolerating both these types of uncertainty is qualitatively a problem of robust stability [4], [7]. The present work concentrates only on unstructured uncertainty. It is quite justified to analyze the effect of unmodelled high-frequency dynamics in a control loop subsystem such as the actuator system.
The controller structure of the PI controlled two loop autopilot system designed in [2] is based on a second order actuator dynamics, and the derived control gains are supposed to provide the desired performance in real systems, with robustness in spite of the inherent uncertainties in the modeling process of the actuator. The second order actuator model of the two loop actuator system does not consider the effects of unmodelled dynamics on missile performance. The performance analysis has been carried out using a second order model of the actuator system, with specified damping ratio (ξ_a) and natural frequency (ω_a). Structured uncertainty, often called parametric uncertainty, represents parametric variations in the plant dynamics, whereas unstructured uncertainty represents that aspect of system uncertainty associated with unmodelled dynamics, truncation of high frequency modes, nonlinearity and the effect of linearization, time variation and randomness in the system; it usually represents frequency dependent elements. The present work focuses on the stability robustness of the PI controlled two loop autopilot in the presence of unstructured uncertainty, incorporating a higher order (sixth order) dynamic model for the actuator system instead of the second order model with specified ξ_a and ω_a used in [1], [2]. The robustness analysis has been carried out by the delta-SR (δ_SR) method in order to investigate the autopilot performance limitations, using the design parameters for a typical operating point (Table III) of the flight path at the aerodynamic control phase.
B. Methodology used for robustness analysis
The unstructured uncertainty can be represented in the form of an additive or a multiplicative perturbation. For the multiplicative perturbation the true transfer function is
G(s) = G_0(s, θ)[1 + l(s)] ... (1)
where G_0(s, θ) is a parameterized model of the plant with the structured uncertainty θ, which represents the plant parameter variations.
G_0(s, θ) is a known function, but the values of the parameter θ are uncertain. The function l(s) is an unstructured uncertainty and is entirely unknown, except that it is limited in magnitude to |l(jω)| ≤ l_0(ω), where l_0(ω) is a known real scalar function. The bound can be viewed as a frequency-dependent radius of uncertainty of the true plant transfer function G(s) about some model G_0(s, θ) for a given θ. Fig 3(A) shows how such a bound can be calculated experimentally by simply comparing the actual plant with the model.

Figure 3(A): Frequency dependent uncertainty bound. Figure 3(B): Compensated feedback system
It is assumed that all the structured and unstructured uncertainties are lumped into a stable multiplicative perturbation. Putting s = jω in eq. (1), the following expression is obtained:
G(jω) = G_0(jω) + r(jω) ... (2)
where r(jω) = G_0(jω) l(jω). In general a good model is well known at low frequencies, which results in small values of l_0(ω), and less well known at high frequencies, where l_0(ω) is large. The system G(s) along with the compensation D(s) is shown in Fig 3(B). The system G(s) tends to become unstable in the presence of the uncertainty l(s); a compensator D(s) is inserted into the system to make the closed loop stable. A typical Nyquist plot of the compensated system is shown in Fig 4.
Figure 4: Nyquist diagram modified by model uncertainty
As G_0(s) is perturbed, the Nyquist plot moves around within an envelope as shown in Fig 4. The system remains stable as long as the number of encirclements of the -1 point remains unchanged, which will be true for all l if
1 + D(jω)G(jω) ≠ 0 ... (3)
As long as the perturbed Nyquist diagram does not pass through the -1 point, the system remains stable. The stability condition of eq. (3) can be rewritten as
1 + D(jω)G_0(jω)(1 + εl) ≠ 0 for 0 ≤ ε ≤ 1 and all ω ≥ 0 ... (4)
Here ε is a constant whose value is 1 when the perturbation is maximum; when ε equals 0 the perturbation is minimum, indicating that there is no uncertainty in the system. Eq. (4) is true if and only if
|(DG_0)^-1 + 1 + εl| > 0 and DG_0 ≠ 0 ... (5)
It is required to express this requirement in terms of the PI controller gains of the nominal system DG_0. This can be done since, for 0 ≤ ε ≤ 1,
|(DG_0)^-1 + 1 + εl| ≥ |1 + (DG_0)^-1| - |l| > 0, or |1 + (DG_0)^-1| > |l| ... (6)
The system is guaranteed to be stable as long as eq. (6) is satisfied. Therefore, the stability robustness measure is defined as
δ_SR = |1 + (DG_0)^-1| ... (7)
As long as the model uncertainty remains below δ_SR for all frequencies, the system is guaranteed to remain stable in spite of the perturbations l(s). Delta-SR (δ_SR) is simply the inverse of the closed-loop magnitude frequency response, i.e. the inverse of the complementary sensitivity function T(s). Thus, in terms of the complementary sensitivity function,
|T(jω)|^-1 > |l| ... (8)
This means that the model uncertainty dictates an upper bound on the magnitude of T(s). The quantity |1 + (DG_0)^-1| is the distance from the -1 point to the inverse Nyquist plot.
C. Stability Robustness of Autopilot with Second Order and Sixth Order Actuator Models
Step responses and frequency responses (Bode plots) of the second order and sixth order actuator dynamics are presented in Fig 5(A) and 5(B), and the results are summarized in Table II to determine whether the higher order model can be taken as equivalent to the second order model.
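As a numerical illustration of eqs. (7)-(8), the sketch below computes δ_SR = |1 + (DG_0)^-1| on a frequency grid and confirms that it coincides with 1/|T(jω)|. The plant used here is only a stand-in: G_0 is taken as the second order actuator model alone (the paper's G_0 is the full nominal autopilot loop, which is not reproduced here), and D is the PI controller (7.52 s + 94)/s from [2].

```python
import numpy as np

# Illustrative delta_SR computation; G0 here is a stand-in plant (the
# second-order actuator), not the paper's full autopilot loop.
def tf(num, den, s):
    return np.polyval(num, s) / np.polyval(den, s)

w = np.logspace(0, 4, 500)                     # 1 to 1e4 rad/sec
s = 1j * w
G0 = tf([32400.0], [1.0, 216.0, 32400.0], s)   # stand-in nominal plant
D = tf([7.52, 94.0], [1.0, 0.0], s)            # PI controller (7.52 s + 94)/s
L = D * G0                                     # open-loop transfer function
delta_sr = np.abs(1.0 + 1.0 / L)               # eq. (7)
T = L / (1.0 + L)                              # complementary sensitivity
# eq. (8): delta_SR equals 1/|T| at every frequency
```

Stability against a multiplicative perturbation bound l_0(ω) is then checked by verifying that delta_sr stays above l_0 at every frequency, which is exactly the comparison made in Fig 6.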

Figure 5(A): Step responses of the second order and sixth order actuator dynamics. Figure 5(B): Frequency responses of the second order and sixth order actuator dynamics (the gain plots coincide up to about 168 rad/sec)
The autopilot design carried out in [2] used a second order actuator model with ξ_a = 0.6 and ω_a = 180 rad/sec for an operating point (Table III), with transfer function
G(s) = 32400 / (s² + 216 s + 32400) ... (9-A)
The poles of the actuator are s_1,2 = -108 ± 144j. The present analysis follows the methodology described in the previous section and uses an equivalent sixth order actuator model with two dominant poles at -108 ± 144j and two pairs of complex poles at -600 ± 500j and -3500 ± 3000j. This model reflects the presence of higher frequency modes in an actuator which are not explicit in the second order model. The sixth order transfer function is
G(s) = 4.2e+17 / [(s² + 216 s + 3.24e04)(s² + 1200 s + 6.1e05)(s² + 7000 s + 2.125e07)] ... (9-B)
It is observed that there is very close matching between the gains of the two models up to a frequency of about 168 rad/sec (Fig 5(B)). It has also been noticed that the time response characteristics of the sixth order actuator model closely match those of the second order actuator model. Thus, the second order model may be regarded as an equivalent lower order approximation of this sixth order model.
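The multiplicative model error between the two actuator models can be evaluated directly from the pole locations quoted in the text. The sketch below computes l(ω) = |G6(jω)/G2(jω) - 1|, with the sixth order numerator chosen for (approximately) unity DC gain; it is an illustrative calculation, not the authors' code.

```python
import numpy as np

# Multiplicative model-error bound l(w) = |G6/G2 - 1| between the
# second-order and sixth-order actuator models (poles from the text).
def tf(num, den, s):
    return np.polyval(num, s) / np.polyval(den, s)

num2, den2 = [32400.0], [1.0, 216.0, 32400.0]          # poles -108 +/- 144j
den6 = np.polymul(np.polymul([1.0, 216.0, 3.24e4],     # -108 +/- 144j
                             [1.0, 1200.0, 6.1e5]),    # -600 +/- 500j
                  [1.0, 7000.0, 2.125e7])              # -3500 +/- 3000j
num6 = [4.2e17]                                        # ~unity DC gain

w = np.logspace(0, 4, 400)                             # 1 to 1e4 rad/sec
s = 1j * w
l = np.abs(tf(num6, den6, s) / tf(num2, den2, s) - 1.0)
# l is small below ~168 rad/sec (the models match) and approaches 1 at high
# frequency, where the extra poles of the sixth-order model roll the gain off
```

This l(ω) is exactly the kind of frequency-dependent uncertainty bound that Fig 3(A) describes obtaining by comparing the actual plant with the model.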
Table II: Step response characteristics of the second order and sixth order actuator models
                      2nd order actuator    6th order actuator
Peak overshoot        9.47%                 9.43%
Settling time (sec)
Rise time (sec)
Steady-state value    1                     1
In the methodology presented in [4], the unstructured uncertainty bound l(s) can be found experimentally as mentioned in section III(B). In order to ascertain whether the system remains stable in spite of the perturbation l(s), Bode plots of δ_SR and l(s) have been evaluated together as functions of frequency and presented in Fig 6.
Figure 6: Stability robustness

The expression for the stability robustness measure δ_SR obtained from eq. (7), δ_SR = |1 + (DG_0)^-1|, works out to a ratio of sixth order polynomials in s ... (10)
where D(s) = (7.52 s + 94)/s is the PI controller as evaluated in [2]. It is observed that the system stability is guaranteed for the perturbation considered, since the model uncertainty remains below δ_SR, as shown in Fig 6, for all frequencies (frequency range up to 10^4 rad/sec). The nominal values of the autopilot system parameters for an operating point, as obtained from [1], are presented in Table III.
Table III: Autopilot system design parameters: T_a (sec), ω_b (rad/sec), σ²_m, η (sec^-2), ω_a (rad/sec), ξ_a, K_b (sec^-1), U.
IV. Conclusion
The robustness study has been carried out for one typical operating point only, utilizing the PI controller designed by the conventional ZN tuning method, the gain k_q, and a higher order (sixth order) dynamics for the actuator system. The actual actuator used in a real autopilot must have still higher order dynamics; since such a higher order realistic model was not available, the robustness of the developed design methodology [2] has been investigated with a chosen sixth order model having some non-dominant high frequency modes. The analysis carried out in the frequency domain indicates that the two plots for δ_SR and l(s) do not intersect at any frequency (within 10^4 rad/sec) and the margin (avoidance in dB) is quite large at both the low frequency and very high frequency ranges. The two curves come closest to each other at a frequency of 404 rad/sec, where the dB margin between them is about 2.6 dB; the system stability is thus guaranteed for the assumed perturbation. The uncertainty bound used in the present study is -86.2 dB < l(s) < dB.
In the case of a system where there is an intersection between the curves δ_SR and l(s), the intersection indicates a performance limitation in qualifying the system as robust for the considered perturbation bound l(s). It is felt that a properly designed notch filter could be implemented in the control loop (inner loop) to improve the margin (avoidance in dB) between the curves at the particular frequency where the curves intersect.
V. References
[1] G. Das, K. Datta, T. K. Ghoshal, S. K. Goswami, "Structured Design Methodology of Missile Autopilot," Journal of the Institution of Engineers (India), vol. 76, March.
[2] B. Guha, "On PI and PID Controller Design for a Two-loop Missile Autopilot in Pitch Plane," May.
[3] P. Garnell, D. J. East, Guided Weapon Control Systems, Pergamon Press, first edition.
[4] G. F. Franklin, J. D. Powell, A. Emami-Naeini, Feedback Control of Dynamic Systems, Addison-Wesley Publishing Company, June.
[5] K. Ogata, Modern Control Engineering, fourth edition, Prentice Hall.
[6] B. Kuo, Automatic Control Systems, Prentice Hall of India, seventh edition.
[7] J. Doyle, B. Francis, A. Tannenbaum, Feedback Control Theory, Macmillan Publishing Co.
[8] Defence Research & Development Organisation, Guided Missiles, Popular Science and Technology Series.
[9] B. R. Copeland, "The Design of PID Controllers using Ziegler Nichols Tuning," March.
[10] I. J. Nagrath, M. Gopal, Control System Engineering, fifth edition, New Age International (P) Limited, Publishers (formerly Wiley Eastern Limited).
[11] G. Das, K. Dutta, T. K. Ghoshal, S. K. Goswami, "Structured Linear Design Methodology for Three-loop Lateral Missile Autopilot," Journal of the Institution of Engineers (India).
[12] J. C. Basilio, S. R. Matos, "Design of PI and PID Controllers with Transient Performance Specification," IEEE Transactions on Education, vol. 45, no. 4, November.

Special Issue: Conference Proceedings of i-con-2016, Global Journal on Advancement in Engineering and Science (GJAES), Vol. 2, Issue 1: March-2016, ISSN (Print). Original Research Work
Development of Advanced Glazing System for Energy Efficient Windows
Sagnika Bhattacharjee 1, S. Neogi 2 and Protyusha Dutta 3
1,3 Department of Electrical Engineering, Global Institute of Management and Technology, Palpara More, NH-34, Krishnagar, Nadia, West Bengal, India
2 School of Energy Studies, Jadavpur University, Kolkata, West Bengal, India
E-mail: sagnika21@gmail.com 1, neogi_s@yahoo.co.in 2 and protyushadutta@gmail.com 3
Abstract: This paper deals with the development of an advanced glazing system. The benefit of the advanced glazing system in the field of energy management and conservation, as well as in increasing thermal comfort, is stated. A detailed discussion of the fabrication process is given. However expensive the process of advanced glazing may be, its utilities are many, and the future scope in the field of advanced glazing systems is vast.
Keywords: Glazing, thermal insulation, transmissivity, absorptivity, radiative, fabrication, energy management, vacuum glazing, emittance, thermal comfort, striking angle, tinting, spacer pillars, flushing, thermocouple.
I. Introduction
It would be hard to find a city today, no matter how small or big, that does not have at least one building made up of windows. These windows are not regular windows. Externally, they must be resistant to weather conditions and impact; internally, they should offer thermal insulation, sound proofing and security, and must protect the people who work in those offices from UV light. Such engineered windows can act as a major tool in energy conservation and energy management. Glazing technology has gone through a radical evolution, and numerous research and development efforts in glazing fabrication techniques have led to a wider range of design options.
Windows and glazings are characterized by the Solar Heat Gain Coefficient (SHGC), U-factor, air leakage rate, visible-light transmittance, and materials of construction. The heat transmittance through a window is quantified by its U-value; the lower the U-value, the better the performance. Single pane glass has the highest magnitude of visible light transmissivity and the minimum magnitude of absorptivity; the corresponding overall heat transfer coefficient, i.e. U-value, is around 5.6 W/m²K. Visual transmittance is important as it determines the amount of daylight admitted through the glazing unit. Therefore the desired criterion of a glazing system, from the point of view of saving energy, is that it should possess higher transmittance in the visible spectrum and lower transmittance in the infrared region. The hotter region countries receive the highest possible solar radiation, which in turn increases the ambient temperature of the region; hence the heat gain by the building enclosure rises, thereby increasing the cooling load in that region. The same goes for the cold countries, where windows increase the heating load of the buildings. Almost 30% of the energy produced is utilized by the building sector, and the local climatic condition plays a vital role in determining the level of energy consumption. Thus the window influences the energy consumption and the thermal comfort of the occupants of the building. Advanced glazing techniques are now being developed to design energy efficient windows so as to reduce the energy consumption level and increase residential comfort.
II. Review of Earlier Work
P. W. Griffiths et al. (1998) developed a method of fabrication of evacuated glazing at low temperature. Indium was found to be the ideal element for a vacuum seal, and a finite volume simulation based on the unified model was also developed. It was observed that a low temperature edge sealing process and thin non-molten medium wire would provide greater control over the edge seal thickness. N.
Ng et al. (2006) put forward a method by which the thermal conductance required for manufacturing a vacuum glazing can be measured; in the following year, methods for characterizing the thermal insulating properties of vacuum glazing were devised. Yueping Fang et al. (2007) calculated the net radiative heat flow within the finite volume model between the two plane parallel surfaces by the following equation:
Q_radiation = ε_effective σ A (T₁⁴ - T₂⁴) (1)
where A is the area of the parallel surface, T₁ and T₂ are the surface temperatures, and ε is the emittance. Fang, Eames and Barton concluded that by increasing the thickness of the glass panes from 4mm to 6mm the average

temperature was decreased but the exterior glass surface temperature was increased. Sun et al. (2007) developed a technique which used XPS for in situ analysis of vacuum glazing. H. Manz et al. (2006) presented a detailed study of triple vacuum glazing, mainly focused on the impact of various parameters on thermal transmittance. Jun Fu Zhao et al. (2007) discussed a modified pump-out technique which incorporated a novel pump-out hole sealing process.
III. Advanced Glazing System
Advanced glazing systems proved to be a breakthrough in enhancing the thermal performance of windows and the building envelope by providing thermal insulation, solar gain control, energy radiation control and improved visual and thermal comfort conditions. The National Fenestration Rating Council (NFRC), founded in 1989, provides the window rating system. In a single glazed system the thickness of the sheet of glass usually ranges between 3 and 5mm and permits a high level of visible light transmission (88-90%). The thermal resistance of glass is very low (R = 0.18 W/degC m²) and hence it provides low performance. Double glazed systems are composed of a number of different materials such as glass, spacer pillars, desiccant and sealant; the thickness may vary up to 8mm. A double glazed system is more expensive than a single glazed system but offers better performance. Coated glazed systems use tinted and reflective films to improve the thermal performance of a vacuum glazing. Uncoated glazed systems do not use any tinted shade or coating; they have the highest magnitude of transmissivity but low absorptivity.
IV. Glazing Materials
The thermal performance of a window system is based on the glazing material.
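The radiative exchange term of eq. (1) can be evaluated directly. The sketch below is illustrative: the effective emittance closure 1/(1/ε₁ + 1/ε₂ - 1) for two parallel gray surfaces and the emittance values are assumptions made here, not taken from the paper.

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def radiative_heat_flow(t1, t2, area, eps1, eps2):
    """Net radiative exchange between two parallel glass surfaces, eq. (1).
    The effective-emittance closure 1/(1/e1 + 1/e2 - 1) for infinite
    parallel plates is an assumption; the paper only names an 'effective'
    emittance without giving its form. Temperatures in kelvin."""
    eps_eff = 1.0 / (1.0 / eps1 + 1.0 / eps2 - 1.0)
    return eps_eff * SIGMA * area * (t1**4 - t2**4)

# Illustrative emittances: two uncoated panes (~0.84) vs one low-e coating
q_plain = radiative_heat_flow(293.0, 273.0, 1.0, 0.84, 0.84)
q_lowe  = radiative_heat_flow(293.0, 273.0, 1.0, 0.16, 0.84)
# a single low-e coating cuts the radiative flow several-fold
```

This is why low-emittance coatings matter so much in vacuum glazing: with convection and gas conduction eliminated, radiation is one of the few remaining heat flow paths.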
The properties of a glazing material are established according to the following relation:
ρ + τ + α = 1 (2)
The striking angle of the ray on the glass surface tends to affect the properties of the glazing material. Various types of glazing materials are available these days, like low-emissive glass, heat absorbing glass, reflective glazing material, plastic glazing materials, tinted glazing, spectrally selective glazing and architectural glass. Here tinted glazing has been used; it is a comparatively inexpensive way to reduce solar heat gain through windows, and bronze, gray and green are its most common colors.
V. Fabrication Process of Vacuum Glazing
The development of vacuum glazing is a breakthrough in the area of low heat loss glazing systems, with great potential to reduce building heating loads and, when combined with solar control glazing, cooling loads. Vacuum glazing is a low heat loss, high visible transmittance glazing system. It consists of two sheets of soda lime glass, 3 or 4mm thick, which are hermetically sealed around the edges with fused solder glass and which enclose a narrow internal evacuated space. The separation of the two glass sheets is maintained under the influence of atmospheric pressure by an array of support pillars. The high internal vacuum (< 0.1 Pa) virtually eliminates convection and gas conduction between the glass sheets. Heat flow through the glazing is therefore predominantly due to radiation between the internal glass surfaces and conduction through the support pillars. Additional heat flow can occur due to lateral conduction along the glass sheets and through the edge seal. The first successful fabrication of vacuum glazing was reported in 1989 at the University of Sydney and was a high temperature process.
Figure 1: A schematic diagram of a vacuum glazing (Philip C.
Eames (2008))
Several fabrication processes are in use to develop a vacuum glazing system, incorporating high or moderate temperatures. Broadly there are two fabrication processes, namely the high temperature fabrication process and the low temperature fabrication process. The latter is now used as it is more advantageous than the former.
A. Low Temperature Fabrication Process
In the late 1990s, a group at the University of Ulster investigated the potential of developing a lower temperature sealing method to effectively minimize the problems of the high-temperature method, i.e. coating degradation, loss of temper and high embodied energy. The initial investigation was based on the use of lower melting

temperature solder glasses or polymers to form the edge seal. Durability was a problem with the low-melting temperature solder glasses due to the absorption of moisture; polymers also proved at the time not to be viable due to the levels of gas permeability and outgassing that occurred. The main stages of the laboratory manufacturing process developed by the group at Ulster are outlined below:
i. Glass cleaning, edge seal region processing if required, and initial bake-out in a conventional oven at 200°C.
ii. Deposition of a thin 6mm wide layer of indium around the periphery of the two glass sheets; an ultrasonic soldering iron is used to promote good bonding between the indium and the glass.
iii. The support pillar array, consisting of 0.3mm diameter, 0.15mm high pillars, is located on one of the glass sheets, spaced at 25mm intervals on a regular square Cartesian grid, using a vacuum wand.
iv. The upper glass sheet is located on the lower glass sheet so that the indium layers are aligned, and the sample is introduced into the vacuum chamber.
v. The vacuum chamber pressure is reduced to 10^-5 Pa and initial outgassing is performed.
vi. The vacuum chamber temperature is increased, with a dwell period at 150°C for further outgassing.
vii. The temperature of the vacuum chamber is increased to the level at which a seal is formed by indium reflow.
viii. The vacuum chamber is allowed to cool prior to flushing with nitrogen.
ix. The vacuum glazing sample is removed from the chamber, visual inspection is undertaken and a secondary water-tight adhesive seal is applied.
x. The glazing is framed and characterized.
In fact, the group developed two processes for producing vacuum glazing. In one, the vacuum glazing is manufactured in a vacuum chamber and the seal is formed under the vacuum pressure.
Therefore, subsequent pump-out is not required. In the second method, the seal is formed either under vacuum or in an oxygen-free nitrogen atmosphere, with subsequent evacuation through a pump-out hole; the pump-out hole is then sealed using a glass disc over the hole with indium or an indium alloy.
B. Experimental process of fabrication of vacuum glazing
The experimental process of fabrication of vacuum glazing makes use of a multimeter and a PC based online data logger; the vacuum level of the vacuum oven, as well as that of the vacuum adaptor, was measured until stabilization occurred.
a) Experimental set-up
i. Selection of the glass: 4mm transparent float glass is selected in order to overcome the undulation in normal sheet glass. At present, the standard sizes accepted for the experiment are 250mm x 250mm and 225mm x 340mm. Glazing specifications: Manufacturer: Saint-Gobain; Thickness: 4mm; Type: Transparent float glass.
ii. Instruments used: The instruments used for this experiment were: diamond wheeled glass cutter, glass drilling machine, glass edge grinding machine, vacuum oven, vacuum adapter, PC based data logger, multimeter, connecting wire (1.5m copper wire) and chemicals (acetone and isopropyl alcohol).
Figure 2: Layout diagram of the vacuum system (High Hind Vacuum Pvt. Ltd. manual)
iii. Experimental process: In the present context, the low temperature fabrication process for vacuum glazing is evaluated, which involves the following steps: glass pane cutting and sizing to the required shape, drilling for pump-out and

glass edge grinding, chemical degreasing of the glass pane, soldering the edge of the glass pane with indium wire, locating spacer pillars, and the vacuum sealing process.
1. Glass pane cutting and sizing to the required shape: A diamond wheeled glass cutter incorporating a diamond impregnated cutting wheel is used for cutting the 4mm transparent float glasses. The glass sheet sizes used for the experiment are 250mm x 250mm and 225mm x 340mm; float glasses are used in order to overcome the undulation found in normal sheet glass. The cutting equipment has a water cooling arrangement which may be used to relieve the stress induced in the glass by the heat generated during the cutting operation.
Figure 3: Diamond wheeled glass cutter
2. Drilling for the pump-out process and glass edge grinding: A small hole is drilled at the corner of the upper glass sheet for the pump-out process, to achieve vacuum inside the glazing. A high speed Dremel tool, a high quality precision tool, is used for drilling. In order to reduce the risk of breakage, a glass edge grinding machine is used for roughing and brushing the glasses.
Figure 4: Glass drilling machine. Figure 5: Glass edge grinding machine
3. Chemical degreasing of the glass pane: For cleaning purposes, acetone has been selected for degreasing and isopropyl alcohol for stain removal.
4. Soldering the edge of the glass pane with indium wire: In this process the glass pane is placed on a PID controlled hot plate for carrying out ultrasonic soldering at a specified temperature. This preheats the glass pane so as to prevent thermal stress caused by the localized heating of the ultrasonic soldering iron. Indium wire of 0.7mm is used for soldering.
Figure 6: PID controlled hot plate. Figure 7: Ultrasonic soldering iron
5. Locating spacer pillars: In this process a pre-marked screen is designed as a base reference, and the glass pane is placed on it so that the spacer pillars can be placed symmetrically in their exact positions. The optimum spacing of the pillars is based on the glass pane thickness: spacings of 20, 25, 30 and 35mm are accepted for glass panes of 3, 4, 5 and 6mm respectively. Pillars of size 0.5mm x 0.25mm (diameter x height) are placed onto the glass pane using a small brush.
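The pillar layout arithmetic of step 5 can be sketched as follows; whether pillars sit on the pane edges, and hence the exact count, is an assumption made here for illustration.

```python
# Hypothetical sketch of the spacer-pillar layout of step 5: pillars on a
# regular square grid, pitch chosen by pane thickness as given in the text.
SPACING_BY_THICKNESS = {3: 20, 4: 25, 5: 30, 6: 35}  # pane mm -> pitch mm

def pillar_count(width_mm, height_mm, pane_mm):
    pitch = SPACING_BY_THICKNESS[pane_mm]
    nx = width_mm // pitch + 1   # grid nodes along each edge (edges included,
    ny = height_mm // pitch + 1  # an assumption of this sketch)
    return nx * ny

n = pillar_count(250, 250, 4)   # the 250 mm x 250 mm, 4 mm pane used here
```

A tighter pitch lowers the bending stress in the glass between pillars but raises the conductive heat flow through the pillar array, which is the trade-off behind the thickness-dependent spacing.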

Figure 8: Spacer pillar placement arrangement
6. Vacuum sealing process: Two methods for edge and vacuum sealing are used here: i) direct edge sealing within the vacuum furnace, and ii) the pumping-out method.
i) Direct edge sealing within the vacuum furnace: The edge sealing is performed in a high vacuum stainless steel chamber with an infrared lamp heating system. This is basically a low temperature sealing technique. The initial outgassing of the internal surfaces and edge seal of the glazing is carried out within the vacuum chamber at a temperature below 200°C, at a pressure of around 1.5 × 10^-3, and also at atmospheric pressure level.
Figure 9: Vacuum oven. Figure 10: Infrared lamp heater. Figure 11: Sample inside the vacuum oven
ii) Pumping-out method: In this process an evacuation cup of stainless steel is used to achieve a high vacuum inside the glazing. It has three flanged ports, of which one is connected to the high vacuum system and the other two are electrical feed-throughs providing power to a heating element and a thermocouple sensor. After formation of the edge seal, the indium pre-coated glass disc is placed on the access hole and evacuation of the sample is started. When the internal space between the two glass panes has been evacuated to a pressure below 2.0 × 10^-3, the heating element is activated; its temperature is measured by a K-type thermocouple fitted into it, and a PID controller is used to control the temperature of the heating element. Heat is transferred from the heating element to the indium pre-coated glass disc, which seals the hole when the temperature reaches the melting point of the indium.

Figure 12: Vacuum adaptor
VI. Conclusion
Advanced glazing has huge prospects in the near future. Even though it is an expensive process, its benefits outnumber its disadvantages. Experimental results have shown that advanced glazing systems contribute greatly to the conservation and management of energy and provide thermal comfort as well. Major work is still going on in this field; though huge developments have been accomplished since its invention, much is left to be done.
VII. References
[1] H. Manz, S. Brunner, L. Wullschleger, "Triple vacuum glazing: Heat transfer and basic mechanical design constraints," Solar Energy 80 (2006).
[2] H. Manz, "Minimizing heat transport in architectural glazing," Renewable Energy 33 (2008).
[3] J. Karlsson, A. Roos, "Annual energy window performance vs. glazing thermal emittance, the relevance of very low emittance values," Thin Solid Films 392 (2001).
[4] J. Zhao, P. C. Eames, T. J. Hyde, Y. Fang, J. Wang, "A modified pump-out technique used for fabrication of low temperature metal sealed vacuum glazing," Solar Energy 81 (2007).
[5] J. M. Schultz, K. I. Jensen, "Evacuated aerogel glazings," Vacuum 82 (2008).
[6] L. So, N. Ng, M. Bilek, "Analysis of the internal glass surfaces of vacuum glazing," Materials Science and Engineering B 138 (2007).
[7] N. Ng, R. E. Collins, L. So, "Characterization of the thermal insulating properties of vacuum glazing," Materials Science and Engineering B 138 (2007).
[8] N. Ng, R. E. Collins, "Thermal conductance measurement on vacuum glazing," International Journal of Heat and Mass Transfer 49 (2006).
[9] N. Ng, R. E. Collins, L. So, "Thermal and optical evolution of gas in vacuum glazing," Materials Science and Engineering B 119 (2005).
[10] Philip C.
Eames, "Vacuum glazing: Current performance and future prospects," Vacuum 82 (2008).
[11] P. W. Griffiths, M. Di Leo, P. Cartwright, P. C. Eames, P. Yianoulis, G. Leftheriotis, B. Norton, "Fabrication of evacuated glazing at low temperature," Solar Energy 63 (1998).
[12] R. E. Collins, G. M. Turner, A. C. Fischer-Cripps, J.-Z. Tang, T. M. Simko, C. J. Dey, D. A. Clugston, Q.-C. Zhang, J. D. Garrison, "Vacuum glazing: A new component for insulating windows," Building and Environment 30 (1995).
[13] T. M. Simko, A. C. Fischer-Cripps, R. E. Collins, "Temperature-induced stresses in vacuum glazing: Modelling and experimental validation," Solar Energy 63 (1998).
[14] Y. Fang, P. C. Eames, B. Norton, T. J. Hyde, J. Zhao, Y. Huang, "Low-emittance coatings and the thermal performance of vacuum glazing," Solar Energy 81 (2007).
[15] Y. Fang, P. C. Eames, "Thermal performance of an electrochromic vacuum glazing," Energy Conversion and Management 47 (2006).
[16] Y. Fang, P. C. Eames, B. Norton, "Effect of glass thickness on the thermal performance of evacuated glazing," Solar Energy 81 (2007).

Special Issue: Conference Proceeding of i-con-2016, Global Journal on Advancement in Engineering and Science (GJAES), Vol. 2, Issue 1: March-2016, ISSN (Print):

Original Research Work

Ray Tracing Study of Linear Fresnel Reflector System
Gaurab Bhowmick 1, Subhasis Neogi 2
1,2 School of Energy Studies, Jadavpur University, 188, Raja S.C. Mallick Road, Kolkata, India
gaurabh.bhowmick@gmail.com 1, neogi_s@yahoo.com 2

Abstract: Linear Fresnel Reflector systems are considered a promising technology owing to their low cost and simplicity of design. A ray tracing study was carried out to examine the effect of the solar altitude angle on the concentration of the incident sun's rays onto the central linear receiver. The mirror elements considered were individually tracked to align themselves perpendicular to the sun's incident rays. Ray tracing was carried out using Tonatiuh, an open source ray tracing software package.

Keywords: Linear Fresnel Reflector, Ray tracing, Tonatiuh, Concentrated Solar Thermal.

I. Introduction
The present world energy situation is concentrated mainly around conventional energy usage. This damages the environment through emissions from the burning of fossil fuels, and calls for measures to reduce the dependence on non-renewable energy and for improved technology to produce clean energy. Renewable energy utilization is therefore being pushed forward to conserve the conventional sources. Solar thermal applications have been in use for thousands of years, and for the last few decades electricity has been produced from concentrated solar thermal systems. C. J. Dey (2003) reported that, among the other concentrating systems, the Linear Fresnel Reflector (LFR), also known as the Linear Fresnel Collector (LFC), is considered a promising technology; simple and inexpensive design is its prime feature. LFCs have been proposed for over 30 years.
Francia (1968) was the first to discuss an elevated linear absorber. Di Canio et al. (1979) of FMC Corporation, USA, examined a linear solar thermal system having several absorber designs and field geometries. J. D. Nixon (2011) reported that one of the most important developments has been Puerto Errado 1 in southern Spain, the world's first LFC power plant. Richter (2008) reported that since 2005 several LFR systems have been constructed for industrial applications and solar cooling in various European towns and at locations across the USA. Guangdong Zhu (2013) reported that Linear Fresnel Reflectors generally use flat mirror elements of equal width to focus the sun's rays onto a linear receiver, as in Figure 1. Various receiver designs exist, and the performance can be increased by well designed receivers and reflectors.

Figure 1: LFR directing sun's rays onto a horizontal central receiver.

One major difficulty with LFRs is the shading and blocking caused by adjacent rows of mirrors. These effects can be reduced by increasing the spacing between mirror rows or the height of the receiver, but both measures increase the cost of the system. Nydal (2014) reported that in a ray tracer, solar rays are followed from an origin, through all reflection possibilities, until they terminate at the absorber or escape the system. Panels are defined as the base elements of a ray tracer. A panel can be a single flat element or a mathematical description of a base shape; parabolas, spheres, cylinders, flat squares, flat dishes, etc. are the different base shapes. Panels can be positioned and rotated individually. The sun is defined as an assembly of rays with a uniform distribution on a user-defined square grid. The purpose of this paper is to present a ray tracing study of a Linear Fresnel Reflector system for various altitude angles of the sun.
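The individual-mirror tracking described above, in which each flat element is rotated so that the incident rays are redirected onto the central receiver, can be illustrated with a small 2D geometry sketch independent of Tonatiuh. This is a minimal sketch under the common assumption that the mirror normal bisects the directions toward the sun and toward the receiver; the function name, receiver height and mirror positions are illustrative, not values from the study.

```python
import math

def mirror_tilt(x, receiver_height, sun_elevation_deg):
    """Tilt (degrees from horizontal) of a tracked flat mirror whose
    centre is at (x, 0), so that a ray from a sun at the given elevation
    (in the transversal plane, on the +x side) is reflected to a
    receiver at (0, receiver_height)."""
    t = math.radians(sun_elevation_deg)
    # Unit vector from the mirror back toward the sun.
    to_sun = (math.cos(t), math.sin(t))
    # Unit vector from the mirror toward the receiver.
    d = math.hypot(-x, receiver_height)
    to_rec = (-x / d, receiver_height / d)
    # The mirror normal bisects the two directions.
    nx, ny = to_sun[0] + to_rec[0], to_sun[1] + to_rec[1]
    return math.degrees(math.atan2(nx, ny))

# Twenty 100 mm mirrors either side of a 2 m high receiver, loosely
# matching the field of Table 1 (dimensions assumed for illustration).
positions = [(-10 + i + 0.5) * 0.1 for i in range(20)]
for elev in (15, 45, 90):
    tilts = [mirror_tilt(x, 2.0, elev) for x in positions]
    print(elev, [round(a, 1) for a in tilts[:3]])
```

At 90° elevation the tilts near the receiver come out close to horizontal, while at low elevations the outer mirrors become steep, in line with the qualitative behaviour of a tracked Fresnel field.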

G. Bhowmick et al., Ray Tracing Study of Linear Fresnel Reflector System, Global Journal on Advancement in Engineering and Science, 2(1), March 2016

II. Ray Tracing
In this study, ray tracing was performed using the open source software Tonatiuh. The Linear Fresnel Collector system was considered as in Figure 2. Table 1 shows the study parameters considered for the ray tracing:

Table 1: Study Parameters
Number of mirrors: 20
Width of each mirror: 100 mm
Length of each mirror: 2000 mm
Reflectivity: 0.7
Irradiance: 800 W/m²
Altitude angle: 15° to 90° (in steps of 15°)

Figure 2: Linear Fresnel Collector System in Tonatiuh.

III. Results
Figure 3: For an elevation angle of 15°
Figure 4: For an elevation angle of 30°
Figure 5: For an elevation angle of 45°
Figure 6: For an elevation angle of 60°

Figure 7: For an elevation angle of 75°
Figure 8: For an elevation angle of 90°

IV. Conclusion
From the ray tracing study it was found that, for the different elevation (altitude) angles, the incident rays were reflected to the central receiver. As each mirror is individually tracked, its angular position with respect to the position of the sun keeps it normal to the incident rays. At very low elevation angles, near sunrise or sunset, the end rows of mirrors (the west field during sunrise and the east field during sunset) were almost vertical to the ground, while during solar noon the mirrors near the receiver were almost horizontal. If the system is not tracked, the incident rays from the sun are scattered in random directions, reducing the effect of concentration. It can therefore be concluded that the design parameters, the optical properties of the reflectors and proper tracking are essential to obtain the maximum possible efficiency of the system.

V. References
[1] Dey, C. J., Heat Transfer Aspects of an Elevated Linear Absorber, Solar Energy, vol. 76, 2004.
[2] Di Canio, D. G., Line Focus Solar Thermal Central Receiver Research Study, Final Report, FMC Corporation, 1979.
[3] Zhu, Guangdong, Development of an analytical optical method for linear Fresnel collectors, Solar Energy, vol. 94, 2013.
[4] Nixon, J. D., Davies, P. A., Cost-Exergy Optimisation of Linear Fresnel Reflectors, Solar Energy, vol. 86, 2012.
[5] Nydal, Ole Jørgen, Ray tracing for optimization of a double reflector system for direct illumination of a heat storage, Energy Procedia, vol. 57, 2014.
[6] Richter, C. (Ed.), International Energy Agency (IEA), Solar Power and Chemical Energy Systems, SolarPACES Annual Report, 2008.

Special Issue: Conference Proceeding of i-con-2016, Global Journal on Advancement in Engineering and Science (GJAES), Vol. 2, Issue 1: March-2016, ISSN (Print):

Original Research Work

Long Term Scheduling of a Hydrothermal System over a Year
Amirul Ali Mallick 1, Pratyush Das 2, Bikas Kumar Paul 3
1,2,3 Department of Electrical Engineering, Global Institute of Management & Technology, India
amirulalimallick@yahoo.com 1, pratyush.85@gmail.com 2, bikas380@gmail.com 3

Abstract: This paper gives a complete treatment of long term hydrothermal scheduling without considering transmission loss. Hydrothermal scheduling plays an important part in a hydroelectric power system. The objective of this problem is to determine an optimal operation that uses the stored water as economically as possible. A simple Lagrangian technique is used to solve a hydrothermal scheduling problem over a year for a small area. In this paper we are mainly interested in finding the operating time of the thermal unit and the sharing of power generation between the thermal and hydro units.

Keywords: Long Term Hydrothermal Scheduling, Lagrangian Function, Optimization.

I. Introduction
In modern power systems a hydrothermal combination is vital for meeting the required load demand. The installation cost of a hydro power plant is very high but its running cost is very low, because its resource is water, which has no fuel cost. Nevertheless, some water cost is considered in the scheduling problem; this price arises from the storage capacity, agricultural requirements and the cost of running the plant in the dry season. A thermal power plant, in contrast, has a low installation cost and a high running cost. So in places where water is plentiful a hydro plant is preferred, and elsewhere a thermal plant is installed. A further problem for a hydro plant is that it is difficult to operate in the dry season.
So, due to many factors, the total power system is in practice a mixture of thermal and hydro generation plants, and for this reason hydrothermal scheduling must be considered. Hydrothermal scheduling is basically of two types: long term scheduling and short term scheduling. In long term scheduling the loads are scheduled over a month or a year. This type of scheduling mainly involves the scheduling of water release; its benefit is a saving in the cost of generation, in addition to meeting agricultural and irrigational requirements. Here the main unknown quantities are the loads, the hydroelectric inflows, the unit availability, etc. Short term scheduling is done for one week or one day; here the loads, water inflows and unit availability are all known. N. Nabona et al. [1] solved a hydrothermal scheduling problem to determine the distribution of hydrothermal generation over a time period, and also to find the acquisition and use of fuel for each unit so as to minimize the fuel cost; in their study a Bézier curve is taken as a constraint. A Lagrangian relaxation technique is used to solve such problems in the paper Scheduling of Hydrothermal Power Systems [2], and the results are compared with scheduling of the thermal units by Lagrangian relaxation and of the hydro units by heuristics. In one work [3] an interior-point method, and in another [4] Stochastic Dual Dynamic Programming, is used for scheduling. A particle swarm optimization technique is used in a paper by M. M. Salama et al. [5]. Other works [6], [7] give good reviews of the research based on different optimization techniques with various constraints. Here we consider a simple system consisting of one thermal and one hydro unit. With the help of the Lagrangian technique we schedule this hydrothermal system for the variation of loads in different months over a year.

II. Problem Formulation
A hydrothermal power system is considered here, having one hydro and one thermal unit.
Let,
P_th = operating power (MW) of the thermal plant
P_h = operating power (MW) of the hydro plant
P_load = total load demand
τ = time period for which the generators supply the load
T_M = total operation time interval over which the generator scheduling is done
n_τ = number of hours in period τ
T_th = total operation time interval of thermal generation

A. A. Mallick et al., Long Term Scheduling of a Hydrothermal System over a Year, Global Journal on Advancement in Engineering and Science, 2(1), March 2016

Figure 1: Model of a Hydrothermal System

Generally the hydro plant is not capable of supplying the full load over the whole scheduling period. We consider that it delivers the load for the period $\tau$ and that the rest of the load is delivered by the thermal plant. Thus we can write:

$$\sum_{\tau=1}^{\tau_{\max}} P_h n_\tau \le \sum_{\tau=1}^{\tau_{\max}} P_{load} n_\tau \qquad (1)$$

Here,

$$\sum_{\tau=1}^{\tau_{\max}} P_h n_\tau = \text{total hydro energy} \qquad (2)$$

$$\sum_{\tau=1}^{\tau_{\max}} P_{load} n_\tau = \text{total load energy} \qquad (3)$$

$$\sum_{\tau=1}^{\tau_{\max}} n_\tau = T_M = \text{total operation time} \qquad (4)$$

As the total load is equal to the power generated by the hydro and thermal units together, we can write:

$$(\text{load energy} - \text{hydro energy}) = \text{thermal energy } E_{th} \qquad (5)$$

or,

$$\sum_{\tau=1}^{\tau_{\max}} P_{load} n_\tau - \sum_{\tau=1}^{\tau_{\max}} P_h n_\tau = E_{th} \qquad (6)$$

Again, this thermal energy is equal to the thermal energy generation, so we can write a constraint equation with a constraint function $\Phi$:

$$\Phi = E_{th} - \sum_{n=1}^{N} P_{th} n_\tau = 0 \qquad (7)$$

where $N$ is the number of periods for which the thermal plant is in service, so that

$$\sum_{n=1}^{N} n_\tau = T_{th}, \quad T_{th} \le T_M \qquad (8)$$

The cost function of the thermal plant is defined as:

$$F_c = \sum_{n=1}^{N} F(P_{th})\, n_\tau \qquad (9)$$

Applying the Lagrangian method we obtain the Lagrangian function:

$$L = F_c + \lambda \Phi \qquad (10)$$

or,

$$L = \sum_{n=1}^{N} F(P_{th})\, n_\tau + \lambda \left[ E_{th} - \sum_{n=1}^{N} P_{th} n_\tau \right] \qquad (11)$$

Here $L$ is the Lagrangian function and $\lambda$ the Lagrangian multiplier. The system is economised only when the first order derivative of $L$ with respect to the independent variable is zero, i.e. $\partial L / \partial P_{th} = 0$, giving

$$\frac{dF(P_{th})}{dP_{th}} = \lambda \qquad (12)$$

Now, applying this economic condition, we get an optimum thermal generation of $P_{th0}$ MW. (13)

The thermal energy then satisfies

$$E_{th} = \sum_{n=1}^{N} P_{th0}\, n_\tau = P_{th0} T_{th}$$

We can also draw an energy diagram of the total hydrothermal system:

Figure 2: Energy Diagram of a Hydrothermal System

From the diagram, the total thermal energy is the area of the rectangle abcd, i.e.

$$E_{th} = P_{th0} T_{th} \qquad (14)$$

or,

$$T_{th} = \frac{E_{th}}{P_{th0}} \qquad (15)$$

Our objective now is to find the value of $P_{th0}$. At the optimal condition of thermal generation we can write:

$$F_c = \sum_{n=1}^{N} F(P_{th0})\, n_\tau \qquad (16)$$

where

$$F(P_{th0}) = \alpha + \beta P_{th0} + \gamma P_{th0}^2 \qquad (17)$$

Then,

$$F_c = \sum_{n=1}^{N} F(P_{th0})\, n_\tau = F(P_{th0})\, T_{th} \qquad (18)$$

or,

$$F_c = \left[ \alpha + \beta P_{th0} + \gamma P_{th0}^2 \right] \frac{E_{th}}{P_{th0}} \qquad (19)$$

or,

$$F_c = \alpha \frac{E_{th}}{P_{th0}} + \beta E_{th} + \gamma E_{th} P_{th0} \qquad (20)$$

So the optimal condition is found from:

$$\frac{dF_c}{dP_{th0}} = -\alpha \frac{E_{th}}{P_{th0}^2} + \gamma E_{th} = 0 \qquad (21)$$

or,

$$P_{th0} = \sqrt{\alpha / \gamma} \qquad (22)$$

III. Exemplification and Discussion
Here we consider a hydrothermal system having one thermal unit and one hydro unit. The cost function of the thermal unit is given by:

$$F_c = 54 + 11 P_{th} + 0.02 P_{th}^2 \qquad (23)$$

The hydro plant delivers 500 MW in each month throughout the year, and this hydro output is taken as constant. We now wish to find the power delivered by the thermal unit and its running period when the load varies as shown below:

Table 1: Load Demand over a Year
Months: January, February, March, April, May, June, July, August, September, October, November, December
Load Demand (MW):

IV. Figures and Tables of Outputs
The following results were obtained from the MATLAB program. They show that the operating time of the thermal unit increases as the load increases, and vice versa; similarly, the power shared by the thermal unit increases with increasing load.

Table 2: MATLAB Programming Results
Load Demand (MW) | T_th (hr) | Hydro Power (MW) | Thermal Power (MW)

Figure 3 shows the increase of the operating time of the thermal unit with increasing load demand. Here the operating time is measured over a whole month, in hours.

Figure 3: Variation of Operating Time of Thermal Unit with Load Demand

In the following figure the comparison between hydro energy and thermal energy is shown. Here the hydro power is constant at 500 MW and the thermal power increases with increasing load demand.

Figure 4: Variation of Generation with Load Demand

Figures 5 and 6 show, respectively, the change of the operating time in every month and the month-wise load sharing between the hydro and thermal units.

Figure 5: Variation of Operating Time of Thermal Unit in Different Months

Figure 6: Variation of Generation in Different Months
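The scheduling rule derived in Section II can be checked numerically with a short script: Eq. (22) gives a constant optimum thermal output $P_{th0}=\sqrt{\alpha/\gamma}$ from the coefficients of Eq. (23), and Eq. (15) gives the running time $T_{th}=E_{th}/P_{th0}$. This is a minimal sketch; the monthly load and hour values used below are illustrative assumptions, since the Table 1 figures are not reproduced here.

```python
import math

# Thermal cost coefficients from Eq. (23): F(P) = a + b*P + c*P**2.
a, b, c = 54.0, 11.0, 0.02

# Running the thermal unit at a constant output P for T = E_th/P hours
# gives Fc(P) = E_th*(a/P + b + c*P); setting dFc/dP = 0 yields the
# optimum constant output P_opt = sqrt(a/c), cf. Eq. (22).
P_opt = math.sqrt(a / c)

def schedule(load_mw, month_hours, hydro_mw=500.0):
    """Thermal energy (MWh) and thermal running time (h) for one month,
    with the hydro unit held at 500 MW as in the text. Load and month
    length are illustrative, not the Table 1 values."""
    e_th = max(load_mw - hydro_mw, 0.0) * month_hours  # energy left to thermal
    t_th = e_th / P_opt                                # Eq. (15)
    return e_th, t_th

print(round(P_opt, 2))       # optimum thermal output, MW
print(schedule(520.0, 720))  # e.g. a 520 MW month of 720 hours
```

As the load demand rises, the thermal energy and hence the thermal running time grow, which is the trend shown in Figures 3-6.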

V. Conclusion
This discussion shows that a hydrothermal system can be scheduled efficiently by the Lagrangian method. The technique is also applicable to short term scheduling, which is basically done on a one-day basis; we intend to take up short term scheduling in future work. Here we have also taken the hydro energy as constant, so we are interested to see the results for variable hydro energy as well.

VI. References
[1] N. Nabona, J. Castro and J. A. Gonzalez, Long Term Hydrothermal Coordination of Electricity Generation with Power and Energy Constraints, Journal on Numerical Methods in Engineering, Elsevier.
[2] H. Yan, P. B. Luh, X. Guan and P. M. Rogan, Scheduling of Hydrothermal Power Systems, IEEE Transactions on Power Systems, Vol. 8, No. 3, August.
[3] A. T. de Azevedo, A. R. L. de Oliveira and S. S. Filho, An Interior-Point Method for Long Term Scheduling of Large Scale Hydrothermal Systems.
[4] Vitor L. de Matos, Andrew B. Philpott, Erlon C. Finardi and Ziming Guan, Solving Long Term Hydrothermal Scheduling Problems.
[5] M. M. Salama, M. M. Elgazar, S. M. Abdelmaksoud and H. A. Henry, Short Term Optimal Generation Scheduling of Multichain Hydrothermal System Using Constraint Function Based Particle Swarm Optimization Technique (CFPSO), International Journal of Scientific and Research Publications, Vol. 3, Issue 4, April.
[6] Rajesh Kumar, Vijay Garg and Bharat Lal, A Review Paper on Hydro-Thermal Scheduling, International Journal of Emerging Technologies in Computational and Applied Sciences (IJETCAS), 5(5), June-August 2013.
[7] Ve Song Vo, Cuong Duc Minh Nguyen and Tam Thanh Dao, Short-Term Hydrothermal Scheduling Based on Lagrange Function and Determining Initial Hydrothermal Generations, International Journal of u- and e-Service, Science and Technology, Vol. 8, No. 3, 2015.
[8] Pratyush Das, Raju Patwary and S. C. Konar, Combined Economic and Emission Dispatch with and without Considering Transmission Loss, ACER 2013, CS & IT-CSCP 2013.
[9] A. Allirani, K. Thenmalar and S. Yuvasri, Economic Thermal Power Dispatch with Emission Constraints Using VSFA Algorithm.
[10] Ugur Güvenç, Combined Economic Emission Dispatch Solution Using Genetic Algorithm Based on Similarity Crossover, Scientific Research and Essays, Vol. 5(17), 4 September 2010.
[11] K. Sathish Kumar, V. Tamilselvan, N. Murali, R. Rajaram, N. Shanmuga Sundaram and T. Jayabarathi, Economic Load Dispatch with Emission Constraints Using Various PSO Algorithms, International Journal of Recent Trends in Engineering, Vol. 2, No. 6, November 2009.
[12] M. E. El-Hawary and G. S. Christensen, Optimal Economic Operation of Electric Power Systems.
[13] Sandeep Kaur, G. S. Kochar, D. S. Mahal and Sunita Goyal, Economic Load Dispatch Solution Using Dynamic Programming.
[14] Jerome K. Delson and S. M. Shahidehpour, Linear Programming Applications to Power System Economics, Planning and Operation, IEEE Transactions on Power Systems, Vol. 7, No. 3, August.

Special Issue: Conference Proceeding of i-con-2016, Global Journal on Advancement in Engineering and Science (GJAES), Vol. 2, Issue 1: March-2016, ISSN (Print):

Original Research Work

A Technique to Identify Faults on FMCG Packets using Image Processing
Kaustav Roy 1 and Pritam Debnath 2
1,2 Department of Electrical Engineering, Global Institute of Management & Technology, Krishnanagar, India
1 roysuhrid@yahoo.in and 2 debnathpritamagt@gmail.com

Abstract: In this paper, an image processing based fault detection system for fast moving consumer goods (FMCG) packets is proposed. The technique is carried out using LabVIEW software interfaced with the National Instruments NI-1744 smart camera. For the sake of demonstration, cigarette packets are considered. The objectives of the proposed work are (i) to count the number of cigarettes in each packet, and (ii) to match the barcode of each cigarette packet with a database. The smart camera is used to capture images of the cigarette packets on the packaging line, and the data is sent to LabVIEW for processing. A program written in LabVIEW processes the data and detects the faulty packages. The system was subjected to test and the results proved satisfactory.

Keywords: Automation; barcode detection; image processing; LabVIEW

I. Introduction
Packaging plays a vital role for any product; its basic purpose is to protect the items inside. It also plays a very dominant part in the marketing of the product, as customers buy a product looking at the package first rather than at what is inside. So it is very important for any industry to design a proper packet. Packets provide a lot of information for the user at the time of purchase, such as brand, ingredients, price and, most importantly, the period of usage. This information is provided in two ways: as a printed note, and as coded information in the form of a barcode. The industry should therefore check the packets for correctness before they are dispatched to the market.
Many techniques can be adopted for fault detection in packets. In [1, 2], identification of individual cigarettes and of paper spoons in tin packing using image processing and morphological operations is reported. An automated vision selection methodology for a solder defect inspection system is reported in [10]. In [11], an automated visual inspection system for the detection of missing or broken tablets, based on colour and size, is discussed. A sorting method for apples using machine vision is reported in [12]. In [13], automatic detection and discrimination of defects such as rust, scratches, roller imprints and pits using vision is discussed. In [14-18], real-time barcode detection methods and an on-chip implementation [14] are reported. From this survey of reported work it is clear that many automation techniques have been used for fault detection on FMCG production lines, but relatively little work has been done on automated defect inspection of cigarette packaging and on the detection of cigarette packet barcodes. In this paper, a LabVIEW based algorithm interfaced with the NI-1744 is designed to count the number of cigarettes per packet as well as to detect the barcode on each packet. The results are tested in offline mode. The paper is organized as follows: after the introduction in Section I, Section II describes the block diagram of the proposed technique, Section III deals with the problem statement, followed by the proposed solution in Section IV. Finally, results and conclusions are given in Section V.

II. Block Diagram of the Proposed Technique
To demonstrate the working of the proposed technique, a system is designed to capture images of the cigarette packets, analyse them for faults and display the results. Fig. 1 shows the block diagram; the NI-1744 smart camera is used to capture the image, which is communicated to a PC for processing by a program written in LabVIEW software.
Figure 1: Block diagram of the proposed technique

K. Roy et al., A Technique to Identify Faults on FMCG Packets using Image Processing, Global Journal on Advancement in Engineering and Science, 2(1), March 2016

The National Instruments smart camera (NI-1744) is used to take the images of the packets which are to be checked for faults. The NI-1744 [19] has the following features: a 533 MHz PowerPC processor; 128 system memory; 1280 x 1024 resolution; 1/2 in. CCD image size; 8-bit pixel depth; an acquisition rate of 13 fps (frames per second at maximum resolution); and 2 x 10/100/1000 communication interfaces (Ethernet, RS-232).

III. Problem Statement
Once the image of the cigarette packet is captured using the NI-1744 smart camera, the data is communicated to a PC where a LabVIEW program is written to achieve the following objectives:
i. Initialize the smart camera.
ii. Acquire the image into LabVIEW.
iii. Process the data to analyse the barcode and count the cigarettes.
iv. Display the barcode and the number of cigarettes.
v. Check whether the output is as desired; if not, initialize a mechanism to remove faulty packages from the production line.

IV. Problem Solution: Offline Testing
To achieve the set objectives discussed in the previous section, a program is written in LabVIEW. The LabVIEW platform is chosen because it uses graphical programming and can be programmed under real-time constraints. A LabVIEW program contains two parts: the front panel and the block diagram. The front panel window is where the control and indication tasks are carried out; the block diagram window is where the processing program is written [20-25]. To achieve the desired objectives an offline test is carried out, counting the number of cigarettes and detecting the barcode from captured images of the production line. The front panel consists of three Boolean indicators which display the inspection status: one indicates the desired number of cigarettes in the pack, another indicates matching of the barcode, and the third indicates that both conditions are fulfilled.
Two numerical indicators display the number of cigarettes in each packet and the barcode of each packet, respectively. Two displays show the actual images of the barcode and the cigarette pack. First, images are acquired in LabVIEW using the NI Vision Acquisition Express VI. The next step is to select the acquisition source (folders of images) and the acquisition type (continuous acquisition with inline processing), and to configure the acquisition settings, controls and indicators. The IMAQ Count Objects 2 VI is used to locate, count and measure objects in a rectangular search area. Thresholding is applied to look for bright objects against the image background and count them. If the number of cigarettes is 10 the packet passes, otherwise it fails. For barcode detection the IMAQ Read Barcode VI is used; the computation is carried out using the EAN-13 standard barcode format. If the barcode matches one from the user database the packet passes, otherwise a fail is indicated. A database was also created for barcode detection. The flow chart of the proposed system is shown in Fig. 2.

Figure 2: Flow chart of the proposed system
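The pass/fail logic described above, a count equal to 10 plus a barcode matched against the user database, can be sketched in a few lines. The EAN-13 check-digit rule (weighted digit sum divisible by 10) is the standard one; the database entry and function names below are illustrative assumptions, and in the real system the count and barcode come from the IMAQ VIs rather than being passed as arguments.

```python
def ean13_valid(code: str) -> bool:
    """EAN-13 check: the 13 digits, weighted 1,3,1,3,... from the left,
    must sum to a multiple of 10."""
    if len(code) != 13 or not code.isdigit():
        return False
    total = sum(int(ch) * (3 if i % 2 else 1) for i, ch in enumerate(code))
    return total % 10 == 0

# Hypothetical user database of accepted barcodes.
DATABASE = {"4006381333931"}

def inspect(count: int, barcode: str) -> bool:
    """Mirror of the third Boolean indicator: pass only if the count is
    10 AND the barcode is a valid EAN-13 found in the database."""
    return count == 10 and ean13_valid(barcode) and barcode in DATABASE

print(inspect(10, "4006381333931"))  # correct case
print(inspect(9, "4006381333931"))   # number fault
```

The same decision could equally be expressed as the AND of the two individual indicator values, which is how the front panel combines them.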

V. Results and Conclusions
In this paper, an image processing based fault detection system is designed in LabVIEW. The system efficiently counts the number of cigarettes and detects the barcode properly. The proposed technique was subjected to test in an offline environment. The offline results obtained are shown in Fig. 3, and the block diagram of the proposed system is shown in Fig. 4.

Figure 3: Result as seen on the front panel VI showing (a) correct case, (b) number fault, (c) barcode fault and (d) number & barcode fault in offline mode

Figure 4: Block diagram VI of the proposed system

From the results obtained it is clear that the proposed technique has achieved its objectives satisfactorily. Since the entire processing is carried out in software, upgrading and modification for carrying out the

proposed task on similar FMCG products can be implemented easily. In future it can be applied in online mode as well, and a real-time model will be fabricated for implementation.

References
[1] M. Park, J. S. Jin, S. L. Au, Suhuai Luo, Pattern recognition from segmented images in automated inspection systems, IEEE Int. Sym. on Ubiquitous Multimedia Computing.
[2] M. Park, J. S. Jin, S. L. Au, Suhuai Luo, Yue Cui, Automated defect inspection systems by pattern recognition, Int. Jr. of Signal Processing and Pattern Recognition, Vol. 2, 2009.
[3] Yue Cui, J. S. Jin, Suhuai Luo, M. Park, S. L. Au, Automated pattern recognition and defect inspection system, 5th IEEE Int. Conf. on Image and Graphics.
[4] He Wenping, Qiu Chao, Zhang Dexian, Design of a video system to detect and sort the faults of cigarette package, IEEE Int. Sym. on IT in Medicine and Education.
[5] G. Antonini, J. P. Thiran, Counting pedestrians in video sequences using trajectory clustering, IEEE Trans. on Circuits and Systems for Video Technology, Vol. 16, No. 8, 2006.
[6] T. Semertzidis, K. Dimitropoulos, A. Koutsia, N. Grammalidis, Video sensor network for real-time traffic monitoring and surveillance, IET Intelligent Transport Systems, Vol. 4, No. 2, 2010.
[7] Stephan R. Harmsen, J. Nicole, J. P. Koenderink, Multi-target tracking for flower counting using adaptive motion models, Jr. on Computers and Electronics in Agriculture, Elsevier, Vol. 65, No. 1, 2009.
[8] Hao Shen, Li Shuxiao, Duoyu Gu, Hongxing Chang, Bearing defect inspection based on machine vision, Jr. on Measurement, Elsevier, Vol. 45, No. 4, 2012.
[9] Yu Han, Gao Jingge, Zhang Shuqiang, Research on the automatic detection system for cracked egg based on LabVIEW, Int. IEEE Conf. on Measuring Technology and Mechatronics Automation, Vol. 3.
[10] Oyeleye Olagunju, E. Amine Lehtihet, A classification algorithm and optimal feature selection methodology for automated solder joint defect inspection, Jr. of Manufacturing Systems, Vol. 17, No. 4, 1998.
[11] Derganc Jože, Likar Boštjan, Rok Bernard, Dejan Tomaževič, Franjo Pernuš, Real-time automated visual inspection of color tablets in pharmaceutical blisters, Jr. on Real-Time Imaging, Elsevier, Vol. 9, No. 2, 2003.
[12] B. S. Bennedsen, D. L. Peterson, Performance of a system for apple surface defect identification in near-infrared images, Jr. on Biosystems Engineering, Elsevier, Vol. 90, No. 4, 2005.
[13] Xiaojie Duan, Duan Fajie, Han Fangfang, Study on surface defect vision detection system for steel plate based on virtual instrument technology, Int. Conf. on Control, Automation and Systems Engineering, pp. 1-4.
[14] Gayathri Rema Narayan, Vinoth James, Barcode recognition from video by combining image processing and Xilinx, Int. Conf. on Modelling, Optimization and Computing, Elsevier, Vol. 38, 2012.
[15] K. Houni, W. Sawaya, Y. Delignon, Spatial resolution of 1D image-based barcode reading, 3rd Int. Symposium on Communications, Control and Signal Processing.
[16] Yang Huijuan, A. C. Kot, Xudong Jiang, Binarization of low-quality barcode images captured by mobile phones using local window of adaptive location and size, IEEE Trans. on Image Processing, Vol. 21, No. 1, 2012.
[17] Hiroo Wakaumi, Chikao Nagasawa, A ternary barcode detection system with a pattern-adaptable dual threshold, Jr. on Sensors and Actuators, Elsevier, 2006.
[18] Hiroo Wakaumi, A high-density ternary barcode detection system with a fixed-period delay method, Eurosensors XXIV Conf., Procedia Engineering, Elsevier, Vol. 5, 2010.
[19] Smart Camera for Embedded Machine Vision, National Instruments.
[20] LabVIEW details [online]. Available:
[21] Kevin James, PC Interfacing and Data Acquisition: Techniques for Measurement, Instrumentation and Control, Newnes Publishers.
[22] Jovitha Jerome, Virtual Instrumentation by LabVIEW, Prentice Hall Publishers.
[23] Lisa K. Wells, Jeffrey Travis, LabVIEW for Everyone, Prentice Hall, New Jersey.
[24] Sanjay Gupta, Joseph John, Virtual Instrumentation using LabVIEW, Tata McGraw Hill Publishing Co. Ltd.
[25] LabVIEW User Manual, National Instruments.

Acknowledgement
The authors are very thankful to the Electrical Engineering Department, National Institute of Technology, Silchar, for providing the pictures taken by the NI-1744 smart camera.

Special Issue: Conference Proceeding of i-con-2016, Global Journal on Advancement in Engineering and Science (GJAES), Vol. 2, Issue 1: March-2016, ISSN (Print):

Original Research Work

Simplified Method for Direct Measurement of Dissipation Factor of an Electrical Machine or Insulating Material
Arindam Pal 1 and Atanu Paul 2
1,2 Department of Electrical Engineering, Global Institute of Management & Technology, Krishnagar, Nadia, India
1 arindam_pal_ju@rediffmail.com and 2 atanu.gimt.ap@gmail.com

Abstract: The dissipation factor of an electrical insulating material can be measured using the AC bridge technique. But this bridge technique suffers from errors due to the effect of stray capacitance and stray inductance between the bridge output lead wires, and between the lead wires and ground. Using the Wagner-earth technique these errors can be minimized, but it has one major drawback: bridge balance and Wagner-earth balance must be repeated for each observation. Here a modified operational amplifier based design of the Schering bridge network is proposed in which the effect of stray capacitance and inductance may be assumed negligible. The proposed circuit provides the tan δ value in terms of a voltage, read with a voltmeter or multimeter. The hardware and software simulated results give satisfactory output and confirm the satisfactory performance of the bridge network.

Keywords: Schering bridge, dissipation factor, operational amplifier, Wagner earth mechanism.

I. Introduction
The quality of insulation of electrical apparatus should be very high in electrical power generation, transmission and utilization systems. However, that same insulation can be the cause of equipment failure, resulting in costly replacement or repairs and downtime of the plant. To avoid this situation, periodic tests of capacitance and dissipation factor are necessary, which can detect moisture, contamination or deterioration of the insulation.
A gaseous medium may provide the main insulation, as in overhead transmission lines. If the high voltage is to be insulated within a small space, compressed gas, solid, liquid or compound insulation is required (e.g. transformers, motors, generators, cables, etc.). In all cases the insulation must be designed so that its breakdown strength is high enough to withstand occasional surges, which may be several times the working voltage. The dielectric losses must be low and the insulation resistance high in order to prevent thermal breakdown. For the design of high voltage transformers, motors, generators, capacitors, cables, etc., accurate measurement of the loss angle is necessary. The dielectric constant and loss angle of a material may be obtained from an accurate measurement of the capacitance of a capacitor of particular shape using the material as the dielectric. Generally this capacitance is very small and may sometimes be comparable with the stray capacitance between the bridge output lead wires or between any lead wire and ground. So, in all these applications, the capacitance must be measured with high accuracy. There are a few different bridge measurement techniques [1, 2, 11, 12, 22] for the measurement of capacitance, but the Schering bridge may perhaps be considered one of the most sensitive techniques for measuring loss tangent and capacitance at high voltage at power frequency. Therefore, the Schering bridge is used to measure the dielectric loss and the capacitance of the insulating material between the windings and ground, and to investigate the effect of increasing voltage on dielectric loss in the electrical machine. These investigations have been done by comparison with a standard capacitor, which has negligible loss over a wide voltage range.
The effect of stray capacitance between output nodal points, and between any nodal point and ground, becomes predominant in high voltage applications, so the measurement of dielectric parameters by the bridge technique may suffer from errors. In the Schering bridge technique these errors are minimized by the Wagner earth mechanism [1, 2], but this mechanism requires repeated bridge balancing and Wagner earth balancing in each observation. Various investigators have proposed other techniques to minimize the error due to stray capacitances. D. Marioli et al. [8] and P. Holmberg [13] have proposed self-balancing techniques to achieve high measurement accuracy. A modified approach to balancing an AC Wheatstone bridge network has been reported by E. Takagishi [5]. C. Kolle et al. [14] have suggested a synchronous modulation and demodulation technique for precision measurement of the capacitance of a capacitor. W. Q. Yang et al. [15] have suggested an electrical capacitance tomography (ECT) technique for measuring the change of capacitance of a multi-electrode capacitive transducer. Zhi-Niu Xu et al. [19] have suggested a Hanning-window interpolation algorithm based on the FFT to reduce the error of dielectric loss angle measurement. M. Ahmed [17] presented a very simple electronic circuit for direct measurement of the loss angle of a leaky capacitor in terms of a pulse count. A self-balancing capacitance-to-DC converter has been proposed by N. Hagiwara et al. [4] for measurement of capacitance in low voltage applications. S. C. Bera et al. [16] have designed an operational

amplifier (OP-AMP) based modified Schering bridge for the measurement of the dielectric parameters of a material and the capacitance of a capacitive transducer. S. Chattopadhyay et al. [18] suggested a simplified method for the measurement of the loss angle of a high voltage transformer.

In this paper a modified Schering bridge network is taken as the primary circuit for the measurement of the dissipation factor (tan δ) of a high voltage electrical machine. The proposed circuit gives the tan δ value in terms of a voltage, so only a voltmeter or multimeter is needed to measure it.

II. Method of Approach
A. Review Stage: The conventional Schering bridge network designed by M/s H. Tinsley & Co. [22] is shown in Fig. 1. It is modified with an operational amplifier based network as shown in Fig. 2.

Figure 1: Conventional Schering bridge network with Wagner-earth arrangement
Figure 2: Modified Schering bridge network

In the modified network the output nodal points B and D of the bridge are both virtually at the same potential with respect to ground. Hence the effect of the stray capacitance between the output lead wires and ground may be assumed to be negligibly small. Let the impedances of the bridge arms AB, AD, BC and CD be Z_1, Z_2, Z_3 and Z_4 respectively, carrying the currents I_1, I_2, I_3 and I_4. At the balance condition of the bridge, V_0 = 0. Hence the same balance condition as in the conventional Wheatstone bridge network is obtained:

    Z_2 Z_3 = Z_1 Z_4    (1)

Now for the network shown in Fig.
2, the bridge arm impedances are

    Z_1 = R_M / (1 + jωC_M R_M),   Z_2 = 1 / (jωC_N),   Z_3 = R_3,   Z_4 = (1 + jωC_4 R_4) / (jωC_4)

where C_M and R_M are the parallel capacitance and resistance of the machine insulation, C_N is the standard capacitor, and R_3, R_4 and C_4 are the remaining bridge elements. Hence equation (1) at balance reduces to

    R_3 / (jωC_N) = [R_M / (1 + jωC_M R_M)] · [(1 + jωC_4 R_4) / (jωC_4)]    (2)

or, jωR_3 C_4 − ω² C_4 C_M R_3 R_M = jωR_M C_N − ω² C_4 C_N R_4 R_M

Equating the real and imaginary parts of the above equation, we obtain

    ω² C_4 C_M R_3 R_M = ω² C_4 C_N R_4 R_M    (3)

or, C_M = (R_4 / R_3) C_N    (4)

and R_3 C_4 = R_M C_N, or C_4 = (R_M / R_3) C_N    (5)

B. Dissipation Factor Analysis: Capacitors with liquid and solid insulation possess dielectric losses under AC voltage stress: in addition to the capacitive charging current I_C there is a real current I_R, so the total current I_1 leads the voltage by an angle of less than 90°. This real current is due to the residual conductivity of the dielectric. A lossy capacitor is generally modelled by a parallel equivalent circuit of an ideal capacitor and an ohmic resistor. From the phasor diagram shown in Fig. 3, the dissipation factor of the electrical machine, represented by the parallel resistance and capacitance between insulating materials or between the insulating materials and ground when a voltage V is applied, is given by

    tan δ = 1 / (ωC_M R_M)

Figure 3: Phasor diagram of a lossy dielectric material

Now dividing equation (4) by equation (5), C_M / C_4 = R_4 / R_M, i.e. C_M R_M = C_4 R_4. Hence

    tan δ = 1 / (ωC_4 R_4) = 1 / (ωC_M R_M)    (6)

Therefore the effective capacitance between the electrical machine insulating materials, or between the insulating materials and ground, can easily be obtained from equation (4), and tan δ from equation (6).

C. Principle of Operation: From equation (6), since R_4 and C_4 are in series (Fig. 2) and carry the same current I_4,

    tan δ = 1 / (ωC_4 R_4) = X_C4 / R_4 = (I_4 X_C4) / (I_4 R_4) = V_C / V_R

where V_C and V_R are the voltages across C_4 and R_4 respectively. Taking logarithms,

    log(tan δ) = log V_C − log V_R = log V_X

where V_X denotes the output of the log-subtraction stage. Applying the antilog,

    tan δ = Antilog(log V_C − log V_R) = Antilog(log V_X)

or, tan δ = V_X.

This equation shows that tan δ can be measured in terms of a voltage. Fig. 4 represents the block diagram of the proposed circuit.

Figure 4: Block diagram of proposed circuit

III. Experimental Results
A. Simulated Results: Data were taken from a squirrel cage induction motor of 130 kW, 3.3 kV, 3-phase, 50 Hz, EMIC Motors Pvt. Ltd. make, M/C No. DSP/3.3KV/130KW/01/08. At normal condition the capacitance between phase (star connection) and ground is 24 nF and the insulation resistance is 10 MΩ. Using these values as C_M and R_M, the circuit output values were observed for different voltages [1 kV-6 kV] applied at the bridge input terminals. The tan δ values can be read using a multimeter or millivoltmeter; here the CIRCUIT MAKER (Student Version) software is used to obtain the simulated results. The excitation voltage vs. tan δ and excitation voltage vs. δ curves, Fig. 5.1 and Fig. 5.2 respectively, are plotted using MATLAB version 6.5.

Fig. 5.1: Characteristics of excitation voltage vs. tan δ (voltage vs. tan δ curve of a high voltage squirrel cage induction motor)
Fig. 5.2: Characteristics of excitation voltage vs. δ (voltage vs. loss angle δ curve of a high voltage squirrel cage induction motor)

B. Experimental Results: A low voltage transformer of 1 kVA, / V, 1-phase, 50 Hz is used as the test device. At normal condition the capacitance between the high voltage side and the low voltage side is 3.3 nF and the insulation resistance is 2 MΩ. In the experiment the bridge is divided into four different parts; the first arm of the bridge is connected between the high voltage side and the low voltage side.
The first arm of the bridge consists of the sample, the transformer, whose dielectric loss between the high voltage and low voltage windings is measured at different voltages [40 V to 260 V] and a constant frequency of 50 Hz. The other bridge components of the network were a known fixed capacitor, variable capacitors, decade capacitors, decade resistors, and a true RMS digital multimeter as the detector.
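As a check on the derivation above, the balance relations (4) and (5) and the dissipation-factor expression (6) can be verified numerically. This is an illustrative sketch, not the authors' simulation: C_M and R_M are the quoted motor values (24 nF, 10 MΩ at 50 Hz) and C_N = 1 nF is the reference capacitor quoted in the Discussion, while R_3 is an assumed ratio-arm value.

```python
import math

# Numerical check of balance relations (4)-(5) and of eq. (6).
f = 50.0                      # power frequency, Hz
w = 2.0 * math.pi * f         # angular frequency, rad/s
C_M, R_M = 24e-9, 10e6        # machine insulation: parallel capacitance and resistance
C_N = 1e-9                    # standard loss-free capacitor, F (value quoted in the text)
R_3 = 1e3                     # assumed fixed bridge-arm resistance, ohm

# Eq. (4): C_M = (R_4/R_3) C_N  and  eq. (5): C_4 = (R_M/R_3) C_N
R_4 = C_M * R_3 / C_N         # R_4 needed to balance the bridge
C_4 = R_M * C_N / R_3         # C_4 needed to balance the bridge

# Eq. (6): both expressions for the dissipation factor must agree at balance,
# because C_M * R_M = C_4 * R_4 by construction.
tan_delta = 1.0 / (w * C_M * R_M)
assert math.isclose(tan_delta, 1.0 / (w * C_4 * R_4))

print(f"tan(delta) = {tan_delta * 1e3:.2f} x 10^-3")
print(f"loss angle = {math.degrees(math.atan(tan_delta)):.3f} degrees")
```

With the quoted insulation values this gives tan δ of roughly 13 × 10⁻³, consistent with the 10⁻³ scale used on the tan δ axis of Fig. 5.1.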

Fig. 5.3: Experimental characteristics of excitation voltage vs. tan δ (graph of experimental results and actual graph from best-fit curve)
Fig. 5.4: Experimental characteristics of excitation voltage vs. δ (graph of experimental results and actual graph from best-fit curve)
Fig. 5.5: Observation number vs. % of error (tan δ) chart
Fig. 5.6: Observation number vs. % of error (δ) chart

Observing the two graphs of Fig. 5.5 and Fig. 5.6, taking the actual values from the best-fit curve as reference, it is found that the percentage error of the hardware output values is mostly below 1.5%.

IV. Discussion
To obtain optimum sensitivity the bridge arm impedances should be selected to be nearly identical, as in all other bridge networks. Before connecting the ground to the common terminal of the network, care was taken to ensure that the ground wire was at nearly zero potential and that a high ground potential did not damage the ICs. The bridge balance condition was found not to be disturbed by any change of orientation of the lead wires. The following points must be noted when conducting the Schering bridge experiment: An insulation test on the primary side of the machine (shorting all primary windings in the case of a poly-phase winding) must be done using a megohm test; a machine that fails this test, i.e. reads a low impedance < 1 MΩ, must not be used for the experiment. For an HV machine, the winding connection to the HV bushing on the transformer should be checked for continuity using a multimeter.
The value of the standard test reference capacitance (C_N) is 1 nF and must not change during the whole course of the experiment, so as to ensure consistency of the readings. Before the start of every experiment, a test with a known reference capacitor should be done in place of the specimen so as to ensure the reliability of the results; in this experiment a reference capacitance of 4.7 nF, 400 V is used.

V. Conclusion
These measurements can be used to track the deterioration of service-aged insulation of a high voltage electrical machine and also to establish baseline readings on new machine installations. Changes in tan δ measurements can indicate degradation of the insulation, which can be used to make engineering decisions about the service life of the machine. This measurement technique could be used while the machine is in service. During the experiment it was found that the results have very good repeatability in both increasing and decreasing modes, with changes in the orientation of the connecting wires with respect to ground. Hence the results appear to have minimum error due to stray capacitance.
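The error analysis behind Figs. 5.3-5.6, where each reading is compared against a best-fit curve, can be sketched as follows. The readings below are hypothetical numbers invented purely for illustration; they are not the paper's measured data, and a straight line stands in for whatever best-fit curve was actually used.

```python
# Hypothetical tan(delta) readings vs. excitation voltage (NOT the paper's data),
# used only to illustrate the best-fit percentage-error computation.
voltages  = [40.0, 80.0, 120.0, 160.0, 200.0, 240.0]     # excitation voltage, V
tan_delta = [0.480, 0.484, 0.489, 0.492, 0.497, 0.503]   # measured output V_X

n = len(voltages)
sx, sy = sum(voltages), sum(tan_delta)
sxx = sum(v * v for v in voltages)
sxy = sum(v * t for v, t in zip(voltages, tan_delta))

# Least-squares straight line t = a*v + b plays the role of the best-fit curve.
a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
b = (sy - a * sx) / n

# Percentage error of each reading against its best-fit reference value,
# as in Figs. 5.5 and 5.6.
errors = [100.0 * (t - (a * v + b)) / (a * v + b) for v, t in zip(voltages, tan_delta)]
print(f"max |error| = {max(abs(e) for e in errors):.2f} %")
```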

The bridge can also be used effectively to measure a small change in capacitance of a capacitive transducer over a wide range. This is a simple and cost-effective circuit for measuring the loss angle of a high voltage electrical machine.

VI. References
[1] E. Kuffel and W. S. Zaengl, High Voltage Engineering Fundamentals (Book), Robert Maxwell, M.C.
[2] E. W. Golding and F. C. Widdis, Electrical Measurement and Measuring Instruments (Book), LBS and Pitman, 1991.
[3] B. Hague, Alternating Current Bridge Methods, 5th revised edition, Sir Isaac Pitman and Sons Ltd., London.
[4] N. Hagiwara, M. Yanase and T. Saegusa, "A self-balancing type capacitance to D.C. voltage converter for measuring small capacitance," IEEE Transactions on Instrumentation and Measurement, Vol. 36, No. 2, June 1978, pp.
[5] E. Takagishi, "On the balance of an AC Wheatstone bridge," IEEE Transactions on Instrumentation and Measurement, Vol. IM-29, No. 2, June 1980, pp.
[6] S. Huang, R. G. Green, A. Polaskowski and M. S. Beck, "A high frequency stray-immune capacitance transducer based on the charge transfer principle," IEEE Transactions on Instrumentation and Measurement, Vol. 37, No. 3, Sept. 1988, pp.
[7] A. F. P. Van Putten, "Thermal feedback drives sensor bridge simultaneously with constant supply voltage and current," IEEE Transactions on Instrumentation and Measurement, Vol. 39, No. 1, Feb., pp.
[8] D. Marioli, E. Sardini and A. Taroni, "High accuracy measurement techniques for capacitance transducers," IOP Measurement Science and Technology, Vol. 4, 1993, pp.
[9] Alfonso Carlosena, Rafael Cabeza and Luis Serrano, "A new method for low-capacitance probing," IEEE Transactions on Instrumentation and Measurement, Vol. 43, No. 3, June 1993, pp.
[10] A. Baccigalupi, P. Daponte and D. Grimaldi, "On a circuit theory approach to evaluate the stray capacitances of two coupled inductors," IEEE Transactions on Instrumentation and Measurement, Vol. 43, No. 5, Oct. 1994, pp.
[11] D. V. S. Murthy, Transducers and Instrumentation, PHI Pvt. Ltd., New Delhi.
[12] J. P. Bentley, Principles of Measurement Systems, 3rd edition, Longman Singapore Publishers Ltd.
[13] Per Holmberg, "Automatic balancing of linear AC bridge circuits for capacitive sensor elements," IEEE Transactions on Instrumentation and Measurement, Vol. 44, No. 3, June 1995, pp.
[14] C. Kolle and P. O'Leary, "Low cost, high precision measurement system for capacitive sensors," IOP Measurement Science and Technology, Vol. 9, Issue 3, March 1998, pp.
[15] W. Q. Yang and T. A. York, "New AC-based capacitance tomography system," IEE Science, Measurement and Technology, Vol. 146, No. 1, Jan. 1999, pp.
[16] S. C. Bera and S. Chattopadhyay, "A modified Schering bridge for measurement of the dielectric parameters of a material and the capacitance of a capacitive transducer," Measurement, Vol. 33, Issue 1, January 2003, pp.
[17] M. Ahmed, "A simple scheme for loss angle measurement of a capacitor," IEEE Transactions on Energy Conversion, Vol. 19, Issue 1, March 2004, pp.
[18] S. Chattopadhyay, K. B. Mazumder and S. C. Bera, "Simplified method for the measurement of loss angle of a high voltage transformer," Proceedings of the 8th International Conference on Electrical Machines and Systems (ICEMS), 2005, Vol. 3, pp.
[19] Zhi-Niu Xu, Fang-Cheng Lu and Li-Juan Zhao, "Analysis of dielectric loss angle measurement by Hanning windowing interpolation algorithm based on FFT," Automation of Electric Power Systems, Vol. 30, No. 2, January 2006, pp.
[20] S. C. Bera and D. N. Kole, "Study of a modified AC bridge technique for loss angle measurement of a dielectric material," Sensors & Transducers Journal, Vol. 96, Issue 9, September 2008, pp.
[21] S. Chattopadhyay and A. Pal, "Simplified method for the measurement of loss angle of a high voltage electrical machine," Proceedings of the National Conference on Recent Trends in Engineering and Education (RTEE), 28th-29th January 2010, National Institute of Technical Teachers' Training and Research, Kolkata, India.
[22] Schering Bridge for Dielectrics, Type 4030-B, made by M/s H. Tinsley and Co. Ltd., London SE-25.
[23] A. K. Sawhney and P. Sawhney, A Course in Electrical and Electronic Measurements and Instrumentation (Book), Dhanpat Rai & Sons, Delhi.
[24] D. Roy Choudhury and Shail B. Jain, Linear Integrated Circuits (Book), New Age International Publication.
[25] H. S. Kalsi, Electronic Instrumentation (Book), Tata McGraw Hill Publishing Company Limited.

Special Issue: Conference Proceeding of i-con-2016
Global Journal on Advancement in Engineering and Science (GJAES), Vol. 2, Issue 1: March-2016, ISSN (Print):
Original Research Work

Thermal Performance of a Heat Pipe Embedded Evacuated Tube Collector in a Compound Parabolic Concentrator under Different Sky Conditions
Debabrata Pradhan 1, Debrudra Mitra 2, Subhasis Neogi 3
1 Electrical Engineering Department, Global Institute of Management & Technology, Krishnanagar
2,3 School of Energy Studies, Jadavpur University, Jadavpur, Kolkata, West Bengal, India
3 E-mail address: neogi_s@yahoo.com

Abstract: This paper presents the thermal performance of an evacuated tube solar collector (ETC) embedded with a heat pipe in a compound parabolic concentrator (CPC) under different sky conditions in Kolkata. An evacuated tube solar collector embedded with a heat pipe in a CPC has been developed and tested at Jadavpur University in Kolkata, India. The collector has been tested under three sky conditions, namely (i) clear sky, (ii) partly cloudy and (iii) densely cloud covered sky. An experimental set-up involving a single evacuated tube heat pipe solar collector has been used to collect the solar radiation concentrated by the CPC. The condenser of the heat pipe is inserted directly into the insulated storage tank and heats the stored water. The thermal efficiency of the system under the three different sky conditions has also been evaluated in this paper.

Keywords: Evacuated tube solar collector; heat pipe; compound parabolic concentrator; system thermal efficiency.

I. Introduction
Solar thermal energy is the most widely used renewable energy worldwide, as it is the most economical choice among all renewable energies. Flat plate collectors are used for low temperature applications; they are not efficient enough to deliver water for medium temperature applications. The performance of a flat plate collector degrades at low ambient temperature and in high wind conditions.
Evacuated tube collectors (ETCs) have high efficiency as they provide the combined effects of a highly selective surface coating and vacuum insulation. The vacuum envelope reduces convection and conduction losses, so the collectors can operate at higher temperatures than flat plate collectors. Many evacuated-tube designs have been developed and are in use, among which the heat pipe evacuated tube collector is very popular because of its higher heat extraction efficiency and fast response [1]. To attain higher temperature applications, i.e. applications in the medium temperature range between C, the solar radiation must be concentrated. Different types of solar concentrators are used for concentrating solar radiation, such as the compound parabolic concentrator, parabolic trough concentrator, fixed reflector-moving receiver, fixed receiver-moving reflector, etc. [2]. The compound parabolic concentrator (CPC) is considered to be among the collectors with the highest possible concentration ratio and, due to its large aperture area, only intermittent tracking is required. For this reason the CPC is used in applications where medium temperatures at around C are required. A disadvantage of solar energy is that the sun does not shine 24 hours a day, and not all days of the year are equally sunny. In a cloudy atmosphere the sun is shaded by cloud and the total amount of solar radiation on a surface decreases. Solar radiation is partly absorbed, scattered and reflected by molecules, aerosols, water vapour and clouds as it passes through the atmosphere. The solar beam arriving directly at the earth's surface is called direct solar radiation. The total amount of solar radiation falling on a horizontal surface is referred to as global solar radiation. As the CPC is a non-imaging concentrator, it can work with both diffuse and beam radiation.
The heat pipe also works with both diffuse and beam radiation, so it works even in cloudy conditions, although the efficiency decreases because of the lower concentrating capability of the CPC for diffuse radiation. Various parameters such as tilt angle, weather conditions, sky condition, collector dimensions, etc. affect the performance of collectors. D. N. Nkwetta et al. [3] analyzed the performance of an evacuated tube heat pipe collector (ETHPC) compared to a concentrated evacuated tube single sided coated heat pipe absorber (SSACPC) for medium temperature applications and reported that the truncated SSACPC gives a better temperature improvement, with a higher outlet-inlet fluid temperature differential and a lower heat loss coefficient, than the control evacuated tube heat pipe collector. M. Hayek et al. [4] reported that heat pipe collectors have a much better efficiency than water-in-glass collectors. T. T. Chow et al. [5] performed a numerical evaluation of single-phase and two-phase solar water heaters in different climate zones of China and reported that the two-phase closed thermosyphon system is technically advantageous because of the higher thermal efficiency achievable with indirect fluid circulation, and that it has larger climate adaptability than the single-phase system.
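The decomposition of global radiation into beam and diffuse components described above can be illustrated with a small sketch; all irradiance and angle values below are assumed for illustration, not measurements from this experiment.

```python
import math

# Illustrative split of global radiation into beam and diffuse components:
#   global horizontal = DNI * cos(zenith) + diffuse horizontal.
dni = 750.0                     # direct normal irradiance, W/m^2 (assumed clear-sky value)
diffuse = 120.0                 # diffuse horizontal irradiance, W/m^2 (assumed)
zenith = math.radians(30.0)     # solar zenith angle (assumed)

beam_horizontal = dni * math.cos(zenith)        # beam contribution on the horizontal
global_horizontal = beam_horizontal + diffuse   # global horizontal irradiance
beam_fraction = beam_horizontal / global_horizontal

print(f"global = {global_horizontal:.0f} W/m^2, beam fraction = {beam_fraction:.2f}")
```

A CPC concentrates the beam component effectively but only part of the diffuse component, which is why collector efficiency falls as clouds reduce the beam fraction.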

II. Methodology
A. Experimental set-up
In the present work, a pyranometer was set on the CPC plane to measure the global radiation on the CPC aperture plane. A thermocouple was placed inside a Stevenson screen to measure the ambient temperature. One thermocouple was connected to the condenser of the heat pipe to measure its temperature; it was attached with aluminium tape applied so that no air was trapped inside the tape. Another thermocouple was placed at the middle of the stored water to measure the water temperature. The compound parabolic concentrator with the evacuated tube heat pipe solar collector was mounted on the two stands of the system and was movable. The concentrating collector was positioned so that the absorber was aligned in a north-south orientation, facing the sun at a tilt angle of . Intermittent tracking was done to obtain maximum solar radiation at the aperture of the CPC; it tracked the sun from east to west. When the tracking was done properly, the bright spot of the concentrated solar radiation fell on the evacuated tube heat pipe.

Figure 1: Heat Pipe Embedded Evacuated Tube Collector in a Compound Parabolic Concentrator.

B. Test procedure
Three typical weather conditions in Kolkata were used to analyze the daily performance of the system: clear sky, partly cloudy and densely cloud covered sky. The average daily solar radiation on the CPC plane was 872 W/m² on the clear sky day, 666 W/m² on the partly cloudy day and 612 W/m² on the mostly cloudy day.
The average ambient temperature during the test was C on the clear sky day, C on the partly cloudy day and C on the densely cloudy day. Before the start of a test the CPC was covered by a white cover so that no radiation could fall on it before the start of the experiment. Water was poured into the container, which was closed with the top cover and XPS insulation. Then the cover of the CPC was opened, and the temperatures of the heat pipe and ambient were monitored and collected through the data logger system.

C. System thermal efficiency
The system thermal efficiency [6] during heating was calculated using eq. (1):

    η = (M_w C_w + M_c C_c)(T_f − T_i) / (I A_a τ)    (1)

where
η  System thermal efficiency
C_c  Specific heat of the material of the container, J/(kg-K)
C_w  Specific heat of water, J/(kg-K)

M_c  Mass of container, kg
M_w  Mass of water kept during the test, kg
τ  Duration of the test, s
A_a  Aperture area of the CPC, m²
I  Average solar radiation during the interval, W/m²
T_f  Final water temperature, °C
T_i  Initial water temperature, °C
T_w  Water temperature, °C
T_a  Average ambient temperature during the interval, °C

III. Results and Discussion
The variations of the water temperature in the container and of the heat pipe condenser temperature during the tests are plotted in the figures. It was observed that the temperature of the heat pipe changes simultaneously with the variation of the solar radiation, but the temperature of the water in the container does not change simultaneously.

Figure 2: Variations of insolation and temperature of the heat pipe, water and ambient with time (clear sky)

The steady state temperature of the water in the container for the clear sky and partly clouded conditions remains almost the same, i.e. near 97 °C. In the densely cloudy condition the steady state temperature does not reach 97 °C; it is nearly 73 °C. To reach a temperature of 95 °C takes around 75 minutes in the clear sky condition, but around 130 minutes in the partly cloudy condition.

Table I: Performance under different sky conditions
Sky condition | Mass of water (kg) | Initial temperature (°C) | Final temperature (°C) | Average radiation on CPC (W/m²) | Ambient temperature (°C) | Efficiency (%)
Clear | | | | | |
Partly clouded | | | | | |
Densely cloudy | | | | | |

Figure 3: Variations of insolation and temperature of the heat pipe, water and ambient with time (partly clouded)
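Equation (1) can be evaluated directly once the masses, temperatures, radiation and test duration are known. A minimal sketch follows; only the clear-sky radiation (872 W/m²) and the roughly 75-minute heating time come from the text, while the masses, specific heats, temperatures and aperture area are assumed purely for illustration.

```python
# Illustrative evaluation of the system thermal efficiency, eq. (1):
#   eta = (M_w*C_w + M_c*C_c) * (T_f - T_i) / (I * A_a * tau)
M_w, C_w = 5.0, 4186.0      # mass of water, kg; specific heat of water, J/(kg K)
M_c, C_c = 1.5, 900.0       # mass of container, kg; specific heat (aluminium), J/(kg K)
T_i, T_f = 30.0, 95.0       # initial and final water temperatures, deg C
I = 872.0                   # average radiation on the CPC plane, W/m^2 (clear-sky value)
A_a = 0.9                   # aperture area of the CPC, m^2 (assumed)
tau = 75.0 * 60.0           # duration of the heating, s (~75 min, clear sky)

# Useful heat stored in water and container divided by solar energy on the aperture.
eta = (M_w * C_w + M_c * C_c) * (T_f - T_i) / (I * A_a * tau)
print(f"system thermal efficiency = {100.0 * eta:.1f} %")
```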

Figure 4: Variations of insolation and temperature of the heat pipe, water and ambient with time (densely clouded).

It was also found that the efficiency of the system was % in the clear sky condition, % in the partly cloudy condition and % in the mostly cloudy condition. So it was observed that the system

efficiency was higher during the clear sky condition, with highly intense solar radiation and a high percentage of beam radiation. When the cloud in the sky increases, the radiation decreases and becomes diffuse. The higher efficiency in the clear sky condition is due to the high concentrating capability of the CPC for beam radiation; the efficiency is lower in the fully cloudy condition because of the lower concentrating capability for diffused radiation.

IV. Conclusions
In the present study it has been found that the system thermal efficiency was higher in the clear sky condition compared to the partly cloudy and densely cloudy conditions. From the experimental results it has been found that the system could provide the temperature needed for medium temperature water applications in the clear and partly cloudy conditions. The system was not able to provide a sufficient temperature during the densely cloudy condition.

V. References
[1] S. P. Sukhatme, Solar Energy, Tata McGraw-Hill Publishing Company Limited, New Delhi.
[2] J. A. Duffie and W. A. Beckman, Solar Engineering of Thermal Processes, John Wiley & Sons, New York.
[3] D. N. Nkwetta, M. Smyth, A. Zacharopoulos and T. Hyde, "In-door experimental analysis of concentrated and non-concentrated evacuated tube heat pipe collectors for medium temperature applications," Energy and Buildings, 2012, 47.
[4] M. Hayek, J. Assaf and W. Lteif, "Experimental investigation of the performance of evacuated tube solar collectors under Eastern Mediterranean climatic conditions," Energy Procedia, 2011, 6.
[5] T. T. Chow, Y. Bai, Z. Dong and K. F. Fong, "Selection between single-phase and two-phase evacuated tube solar water heaters in different climate zones of China," Solar Energy, 2013, 98.
[6] D. Pradhan, D. Mitra, S.
Neogi, "Thermal Performance of a Heat Pipe Embedded Evacuated Tube Collector in a Compound Parabolic Concentrator," Energy Procedia, in progress.

Special Issue: Conference Proceeding of i-con-2016
Global Journal on Advancement in Engineering and Science (GJAES), Vol. 2, Issue 1: March-2016, ISSN (Print):
Original Research Work

Effect of Green Roof on Heat Flow of a Building: An Experimental Study
Arna Ganguly 1, Subhasis Neogi 2
1,2 School of Energy Studies, Jadavpur University, Kolkata, India
arna.g.90@gmail.com

Abstract: The increase in energy requirement, mainly during summer in tropical humid climates, has gained significant importance given the limited reserves of fossil fuels. Buildings are among the major energy consumers in developed and developing countries. Building envelope elements such as roofs are huge absorbers of solar radiation; they affect the building surface temperature directly, increasing the cooling load during the warm hours of the day. A green vegetated roof is a roof covered with vegetation. This paper examines the effect of green roof vegetation in a warm climate. It was found that the peak surface temperatures of the green roof are much lower compared to the standard roof. The heat flow rate through the roof is also reduced significantly for the green roof compared to the standard roof. This implies that a green roof is suitable for warm climatic conditions to reduce the indoor room temperature and heat flow rate.

Keywords: Green roof, temperature difference, heat flow rate

I. Introduction
The increasing global warming phenomenon, together with the rising development of urban concentrations of roads and buildings, has amplified severe environmental issues. One of the most alarming is the urban heat island effect, which increases the surface and air temperatures of the built environment. Building envelope elements such as roofs are huge absorbers of solar radiation; they affect the building surface temperature directly, increasing the cooling load during the warm hours of the day. The energy efficiency of a building depends directly on its thermal envelope, especially the roof.
A green (vegetated) roof is a roof covered with vegetation. Between the bare or standard roof surface and the growing medium, i.e. the vegetation layer, the system may comprise a number of layers such as a waterproofing membrane, drainage, and other insulation layers [Morau et al., 2012]. According to the thickness of the layers beneath the vegetation, green roofs are classified into two categories: (i) extensive green roofs, which may be established on a very thin layer of soil with minimal maintenance requirements, and (ii) intensive green roofs, which have a soil layer of at least 20 cm [Schweitzer et al., 2014]. A noticeable decrease in roof surface temperature has been observed for an extensive green roof installed in Reunion Island [Morau et al., 2012]. A similar effect was observed in Athens, Greece [Spala et al., 2008]. It was also noted that the lowest surface temperatures of the green roof occurred at places covered by thick, dark green vegetation, while surface temperatures were comparatively higher where the vegetation was thinly distributed [Niachou et al., 2001]. Thermal comfort was recorded in Tel Aviv, with a significant decrease in interior air temperature compared to the reference room, for all four plant species used in the experimental study [Schweitzer et al., 2014]. The heat transfer coefficient (U-value) decreased due to green roof plantation [Morau et al., 2012; Niachou et al., 2001], which improved the thermal insulation of the building. Green roof studies thus indicate the importance of green roofs for the temperature regulation of buildings from a thermal comfort point of view. Moreover, a green roof directly decreases heating and cooling energy use and also helps to mitigate the urban heat island effect. This paper contributes a comparative study of a bare roof and a green vegetated roof.
It reports the decrease in the roof surface and ceiling surface temperatures of the room under the planted roof and, consequently, a reduction in heat gain and heat loss through the green vegetated roof.

II. Experimental Setup
The experimental study was conducted at the School of Energy Studies, Jadavpur University, Kolkata, India, on the top floor of a four-storied building. The green roof was placed over a room of dimensions 3.75 m (length) x 7.13 m (width) x 3.15 m (height). The bare roof of a room of dimensions 7.9 m (length) x 7.13 m (width) x 3.15 m (height) was used as the standard roof. Both roofs are constructed of reinforced concrete of thickness 100 mm.
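As a rough orientation for the magnitudes involved, the one-dimensional steady-state conduction flux through a bare 100 mm concrete slab can be estimated as q = k (T_outer - T_inner) / d. The thermal conductivity and the surface temperatures in this sketch are assumed illustrative values, not measurements reported in the paper:

```python
# Steady-state conduction through the 100 mm reinforced-concrete roof slab.
# The thermal conductivity k is an assumed handbook value (~1.4 W/(m.K) for
# dense concrete); the paper does not report it.

def conduction_flux(k, thickness_m, t_outer, t_inner):
    """One-dimensional steady-state heat flux q = k * (T_outer - T_inner) / d, W/m^2."""
    return k * (t_outer - t_inner) / thickness_m

k_concrete = 1.4     # W/(m.K), assumed
d = 0.100            # m, slab thickness from the paper
# Illustrative outer/inner surface temperatures on a warm afternoon:
q = conduction_flux(k_concrete, d, t_outer=45.0, t_inner=30.0)
print(round(q, 1), "W/m^2 into the room")
```

A vegetated layer lowers q both by shading (lower T_outer) and by adding thermal resistance in series with the slab.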

Figure 1: Schematic of the experimental roof (bare roof: 100 mm roof concrete; green roof: grass and soil layer over roof concrete)

A solar pyranometer was used to measure the intensity of solar radiation. To measure the surface temperatures, three T-type and one K-type thermocouples were used; another K-type thermocouple measured the ambient temperature. All readings were logged using a data logger. The experiment was carried out from 25th February to 28th February.

III. Results and Discussion
Figure 2: Surface temperatures and solar radiation variations with time

For all four days, the peak temperatures of the standard roof surface are much higher than those of the green roof surface. Peak temperatures of the inner (ceiling) surface are also higher for the standard roof than for the green roof. The differences between the maximum and minimum temperatures, for both the ceiling and the roof surface, are lower for the green roof than for the standard roof. The heat flow rate through the green and standard roofs was also calculated. Positive heat flow implies heat gain by the inner surface of the roof from the ambient, whereas negative heat flow implies heat loss by the inner surface of the roof to the ambient.
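With this sign convention, the daily percentage reductions in heat gain and heat loss (Table I) can be computed from the two heat-flux series. The flux values below are illustrative stand-ins, not the measured data:

```python
# Split a heat-flux series into gain and loss totals (positive flux = heat
# gain by the inner surface, negative = heat loss, as in the text), then
# compare green roof against standard roof.

def gain_loss(fluxes):
    """Return (total gain, total loss magnitude) for a heat-flux series in W/m^2."""
    gain = sum(q for q in fluxes if q > 0)
    loss = sum(-q for q in fluxes if q < 0)
    return gain, loss

def percent_reduction(standard, green):
    """Percentage reduction of the green-roof value relative to the standard roof."""
    return 100.0 * (standard - green) / standard

std_gain, std_loss = gain_loss([50, 40, 30, -20, -15])  # standard roof (illustrative)
grn_gain, grn_loss = gain_loss([20, 15, 10, -5, -3])    # green roof (illustrative)
print(round(percent_reduction(std_gain, grn_gain), 1), "% reduction in heat gain")
print(round(percent_reduction(std_loss, grn_loss), 1), "% reduction in heat loss")
```

Applied per day to the logged fluxes, this yields one table row per date.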

Figure 3: Variation of heat flow rate through the roof with time

The daytime heat gain by the standard roof is much higher than that by the green roof. Similarly, at night, the heat loss by the green roof is much lower than that by the standard roof. For the green roof, the heat flow rate varies within -10 W/m2 to 20 W/m2, whereas for the standard roof the maximum heat gain and heat loss through the roof are around 50 W/m2 and 20 W/m2 respectively. The heat flow rate through the green roof is thus reduced significantly.

Table I: Daily percentage reduction in heat gain and heat loss through the roof

Date                     Reduction in heat gain (%)    Reduction in heat loss (%)
25th February
26th February
27th February
28th February
25th to 28th February

For all four days, the percentage reduction in heat gain and heat loss for the green roof is shown in Table I. The effect of the green roof on reducing heat loss is greater than its effect on reducing heat gain through the roof. The reduction in heat loss for the green roof is around 80% for all four days.

IV. Conclusion
The green vegetated roof thus shows much lower roof surface and internal ceiling surface temperatures than the bare or standard roof during the warmest hours of the day. A reduction in heat gain is also observed during the daytime compared to the bare roof, confirming the cooling effect of the planted roof. Similarly, at night, when the ambient temperature comes down, the heat loss through the green vegetated roof is lower than through the bare roof, ensuring a good built-environment temperature within the room under the green roof.

V. References
[1] A. Niachou, K. Papakonstantinou, M. Santamouris, A. Tsangrassoulis, G.
Mihalakakou, Analysis of the Green Roof Thermal Properties and Investigation of its Energy Performance, Energy and Buildings 33 (2001).
[2] A. Spala, H.S. Bagiorgas, M.N. Assimakopoulos, J. Kalavrouziotis, D. Matthopoulos, G. Mihalakakou, On the green roof system. Selection, state of the art and energy potential investigation of a system installed in an office building in Athens, Greece, Renewable Energy 33 (2008).
[3] Dominique Morau, Teddy Libelle, François Garde, Performance Evaluation of Green Roof for Thermal Protection of Buildings in Reunion Island, Energy and Buildings 14 (2012).
[4] Orna Schweitzer, Evyatar Erell, Evaluation of the energy performance and irrigation requirements of extensive green roofs in a water-scarce Mediterranean climate, Energy and Buildings 68 (2014).

Original Research Work

Geo-dependence of Facial Features and Attributes
Nilanjan Mukhopadhyay 1, Rajib Dutta 2 and Dipankar Das 3
Department of ECE 1,3, Department of CSE 2, Global Institute of Management and Technology, Krishnagar, India
nilu.opt@gmail.com 1, rajibdutta2007@gmail.com 2 and das.dipankar675@gmail.com 3

Abstract: In the field of image processing, much work has been done on facial analysis, but the dependence of facial appearance on the geographic location where an image was captured remains to be illuminated. Our analysis is based on a large number of geo-tagged face images and studies the geo-dependence of facial features and attributes, such as ethnicity, gender, or the presence of facial hair, across the globe.

Keywords: Images; geographic location; facial feature; extraction

I. Introduction
The looks of people from Asia and from Europe differ. How do they differ? This question has traditionally been studied manually, by direct observation [1]. There are now other ways to analyze the phenomenon. First, every day a growing number of geo-tagged images are uploaded to social media sites; on one popular social media site [2], geo-tagged photos are uploaded at a rate of around 500 per minute. Second, state-of-the-art algorithms in computer vision have reached a level of accuracy and robustness that allows detailed scene information (e.g., people, objects, background) to be automatically extracted from images [3]. In this paper we analyze facial appearance using publicly available imagery, extracting aligned frontal face patches.
This resulted in a dataset, GeoFaces, of approximately 0.8 million geo-tagged faces, which, to our knowledge, is the largest publicly available dataset of its kind [3]. We also focus on extracted visual attributes such as gender, ethnicity, and facial hair. Based on this dataset, a variety of statistical models are used to explore the location dependence of human face appearance and visual attributes, uncovering the underlying patterns hidden in the data. Our work also provides visualizations that highlight the geo-dependence of visual appearance and facial attributes, and a quantitative analysis of the relation between them.

II. Face Image Analysis
The human face is one of the most intensely studied object types, and its structure varies globally. We give a brief overview of recent work in the following areas: detection [7, 8], pose normalization [7, 9, 10], attribute estimation [10-12], and recognition/verification [13, 14]. A variety of methods have been proposed, including the approach by Shen et al., which uses exemplar-based image retrieval [7], and the approach by Scherbaum et al. [8], which uses a traditional AdaBoost-based technique augmented with novel synthetic training imagery. Approaches for pose normalization [7, 9, 10] use either 2D or 3D warping. For attribute estimation, Kumar et al. developed a method for pairwise face verification by comparing sets of human-describable features and visually descriptive similes [11]. Another approach built generative models for opposing facial attributes (smiling-to-frowning, etc.) [12]. Xiong et al. recently introduced IntraFace, a tool for identifying human facial features [10]. Recent work in face recognition has progressed along two fronts: developing methods for extracting more robust features [13] and using improved learning-based algorithms for classification [14]. In our model we highlight the relation between facial appearance and geographic location. III.
Dependence on Geo-location
To examine the dependence of imagery on geo-location, a number of methods have focused on image localization [12, 13], filtering architectural styles [15], and extracting geo-informative features [14, 16]. Other methods relate scene appearance to geographic location. We extend this line of work to examine the geo-dependence of facial appearance and facial attributes.

IV. Dataset Validation and Cleanup Technique for GeoFaces
A large dataset of geo-located face patches is needed to build GeoFaces; for this reason we downloaded images from the Internet with different types of faces from across the world. For each image, a commercial face detector [18] was used to detect faces and fiducial points. The face detector only retains frontal faces. From each face

patch, the detector provides an estimated pose direction, a detection confidence, and confidences for pre-defined fiducial control points (e.g. nose, mouth, eyes, lips). Each face patch is automatically aligned to a common frame using a similarity transform. To remove false detections (non-faces) and non-frontal faces from the facial image patches, we filter the images. The face detection software provides a confidence value for each detection; we retained images with an estimated pose of zero degrees (directly facing the camera) and a detection confidence greater than 600. The face detector does produce non-face false positives, but this simple thresholding preserved roughly 30% of the initially detected patches, eliminated most of the non-frontal patches, and proved quite reliable. However, the detector's confidence values and pose estimates were often unreliable for small image patches. For additional filtering, we therefore trained a classifier using the detected pose and the correlation of the intensity gradient of the image patch with a set of reference faces as features. Using roughly 100 examples (split evenly between positive and negative front-facing patches), we trained a C-support vector machine (SVM) classifier with a linear kernel (c = 1) [18]. Figure 1 and Figure 2 show representative initial detections and final aligned patches from the dataset.

V. Facial Attribute Extraction Technique
For each face in our dataset, we extracted facial attributes using IntraFace [7]. The software extracts three facial attributes: moustache, gender, and eye (covering any type of glasses over the eyes). The outputs of the facial attributes are binary.
The real-valued output of each attribute reflects the degree of confidence in the selected label; we binarize to a categorical label and discard the confidence values. Face detection, alignment, and attribute extraction took roughly 3 to 5 seconds per image. GeoFaces will continue to collect more images, and the methods for detection, alignment, and filtering will be improved. The full dataset, including face patches and visual attribute values, is freely available online [17]. This work describes the relationship between human appearance and geographic location in terms of facial attributes.

Figure 1: Raw image detections
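The SVM filtering step described above can be sketched as follows. The paper uses LIBSVM with a linear kernel (c = 1) on two features, estimated pose and gradient correlation with reference faces; this sketch substitutes a minimal subgradient-descent linear SVM (no bias term) with made-up training data, so all numbers are illustrative only:

```python
# Minimal Pegasos-style linear SVM as a stand-in for LIBSVM's linear kernel.
# Features per patch: (estimated pose in degrees, gradient correlation with
# reference faces); labels: +1 = frontal, -1 = non-frontal. Training data is
# invented for illustration.

def train_linear_svm(samples, labels, c=1.0, lr=0.01, epochs=500):
    """Train weights w by subgradient descent on the L2-regularized hinge loss."""
    w = [0.0, 0.0]
    n = len(samples)
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            margin = y * (w[0] * x[0] + w[1] * x[1])
            # L2 regularizer shrinks w every step; the hinge term fires only
            # when the margin constraint y * <w, x> >= 1 is violated.
            scale = 1.0 - lr / (c * n)
            w = [w[0] * scale, w[1] * scale]
            if margin < 1:
                w = [w[0] + lr * y * x[0], w[1] + lr * y * x[1]]
    return w

raw = [(0, 0.9), (2, 0.8), (1, 0.85), (30, 0.3), (45, 0.2), (25, 0.4)]
# Normalize pose to a 0..1 scale so both features have comparable ranges.
train_x = [(pose / 45.0, corr) for pose, corr in raw]
train_y = [1, 1, 1, -1, -1, -1]
w = train_linear_svm(train_x, train_y)

def is_frontal(pose_deg, corr):
    """Classify a patch: positive decision value means front-facing."""
    return w[0] * (pose_deg / 45.0) + w[1] * corr > 0

print(is_frontal(1, 0.9), is_frontal(40, 0.25))
```

In practice the positive decision simply gates which patches enter the dataset.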

Figure 2: After filtering and alignment

We also computed the relative distribution of attribute values (e.g. gender, moustache, eye) and plotted relative density histograms (Fig. 3), which reflect the relative density of attribute values.

(a) Moustache (b) Gender (c) Eye
Figure 3: Distribution of three facial attributes

VI. Conclusion
Using a large number of facial images collected from the Internet, the geo-dependence of human facial appearance has been shown here with statistical techniques. We used well-known techniques for face detection, filtering, and appearance normalization. There is considerable scope to improve the accuracy of the system by improving the algorithms used here, which would increase the quality of the facial image patches. This type of analysis has many future applications, and this work may motivate similar studies of other object types, both natural and man-made. The major challenge is to enlarge the sources of GeoFaces, to improve the computer vision tools, and to reduce dataset bias.

VII. References
[1] R.C. Lewontin, W. Freeman, Human Diversity (Scientific American Library, New York, 1982).
[2] Facebook.
[3] Mohammad T. Islam, Connor Greenwell, Richard Souvenir and Nathan Jacobs, Large-scale geo-facial image analysis, EURASIP Journal on Image and Video Processing (2015) 2015:17.
[4] X. Shen, Z. Lin, J. Brandt, Y. Wu, in IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2013).
[5] K. Scherbaum, J. Petterson, in IEEE International Conference on Computer Vision (IEEE, 2013).
[6] O. Rudovic, M. Pantic, in IEEE International Conference on Computer Vision. Shape-constrained Gaussian process regression for facial-point-based head-pose normalization (IEEE, 2011).
[7] X. Xiong, F. De la Torre, in IEEE Conference on Computer Vision and Pattern Recognition. Supervised descent method and its applications to face alignment (IEEE, 2013).
[8] N. Kumar, A.C. Berg, P.N. Belhumeur, S.K. Nayar, in IEEE International Conference on Computer Vision. Attribute and simile classifiers for face verification (IEEE, 2009).
[9] D. Parikh, K. Grauman, in IEEE International Conference on Computer Vision. Relative attributes (IEEE, 2011).
[10] D. Yi, Z. Lei, S.Z. Li, in IEEE Conference on Computer Vision and Pattern Recognition. Towards pose robust face recognition (IEEE, 2013).
[11] X. Cao, D. Wipf, F. Wen, G. Duan, in IEEE International Conference on Computer Vision. A practical transfer learning algorithm for face verification (IEEE, 2013).
[12] J. Hays, A.A. Efros, in IEEE Conference on Computer Vision and Pattern Recognition. IM2GPS: estimating geographic information from a single image (IEEE, 2008).
[13] D.J. Crandall, L. Backstrom, D. Huttenlocher, J. Kleinberg, in International World Wide Web Conference. Mapping the world's photos (ACM, 2009).
[14] E. Kalogerakis, O. Vesselova, J. Hays, A.A. Efros, A. Hertzmann, in IEEE International Conference on Computer Vision. Image sequence geolocation with human travel priors (IEEE, 2009).
[15] C. Doersch, S. Singh, A. Gupta, J. Sivic, A.A. Efros, What Makes Paris Look like Paris? ACM Trans. Graphics (SIGGRAPH) 31(4), 101:1-101:9 (2012).
[16] S. Lee, H. Zhang, D.J. Crandall, in IEEE Winter Conference on Applications of Computer Vision. Predicting geo-informative attributes in large-scale image collections using convolutional neural networks (IEEE, 2015).
[17] C.-C. Chang, C.-J. Lin, LIBSVM: A library for support vector machines. ACM Trans. Intell. Syst. Technol. 2 (2011). Software available at
[18] Detecting Facial Parts. okao03.html. Omron.

Original Research Work

Metadata Based Data Extraction from Industry Data Warehouse
Sukanta Singh 1 and Bhaskar Adak 2
1,2 Department of Computer Science & Engineering, Global Institute of Management & Technology, Krishnagar, Nadia, West Bengal
sukantasingh2008@gmail.com 1, bhaskaradak.edu@gmail.com 2

Abstract: Nowadays the definition of an industrial company is changing: such companies are increasingly regarded as business-oriented organizations. As these industries grow, a conventional DBMS is no longer sufficient to hold all their business-oriented data; what can deal with such large, project-oriented data is a data warehouse. This paper starts with the general concepts of data warehouses and metadata and then presents the extraction of related data using the proposed system. In this process, a number of aggregations and calculations over the data are performed to speed up the performance of the tool. The paper uses rules, expressed in metadata, for calculating project summary data. Metadata can be used to express the star schema and to maintain the warehouse with various drill-downs and slicer conditions with suitable parameters, providing a complete business solution that helps monitor the company's inflow and outflow.

Keywords: Data warehouse architecture, Dimension table, Fact table, Star schema, Metadata

I Introduction
By working with real deployment scenarios, we gain a complete understanding of how to work with the tools. Our goal is to address the full gamut of concerns that a large company would face during a real-world deployment. This paper focuses on the SQL Server Integration Services (SSIS) extraction, transformation, and loading (ETL) design for an industry data warehouse.
The data stored in the DW and OLAP systems is collected, integrated, and centralized from various operational data stores. Recently, organizations have increasingly emphasized applications in which current and historical data are comprehensively analyzed and explored, identifying useful trends and creating summaries of the data in order to support high-level decision making [4]. Organizations now consolidate information from several databases into a data warehouse. Organizational decision making requires a comprehensive view of all aspects of an enterprise, and many organizations have therefore created consolidated data warehouses that contain data drawn from several databases maintained by different business units, together with historical and summary information. The trend toward data warehousing is complemented by an increased emphasis on powerful analysis tools [2]. In this system, metadata plays an important role and provides the foundation for all actions in all stages; it can be considered the glue sticking together all the individual parts of these systems. In this paper, we propose a data warehouse architecture with a new metadata layer and describe the design and implementation of a star schema in a conceptual data model.

II Data warehouse architecture
According to Inmon, a data warehouse is a subject-oriented, integrated, non-volatile, and time-variant collection of data which serves as an infrastructure for management decisions. In order to keep the data warehouse content up to date, it is necessary to establish a technological infrastructure which extracts relevant data from the operational information systems and consolidates these data within a well-documented database system. Consequently, the data in the data warehouse is made up of snapshots of the enterprise's multiple operational databases. The resulting data warehouse architecture is depicted in Fig. 1.
These early tasks of the data warehouse process are executed by ETL (extraction, transformation, loading) tools. These tools provide connectivity to a broad set of different data storage formats (e.g. different database systems like Oracle, DB2 or SQL Server, or different text file formats) [3]. In order to turn warehouse data into decisive information, it must be tailored to the needs of the end users, who are located in different organizational units (e.g. functional departments). Typically, the informational needs of the marketing department differ from those of the accounting department. As a consequence, specific departmental views on the data have to be created. These views, called data marts, can be further customized to comply with the informational needs of single users (e.g. a specific salesperson in a defined region). To get information from these data marts, end users are provided with a set of tools which allow analytical processing. Most common are report generation tools which support simple aggregations (e.g. calculation of

statistical measures like mean values, etc.). In order to provide interactive analysis with user-defined views, OLAP tools are frequently used. While report generation tools and OLAP provide more or less simple analytical operations, data mining tools permit the analysis of complex patterns.

Figure 1: Data Warehouse Architecture

III Dimension Table
Dimension tables contain attributes that describe fact records in the fact table. Some of these attributes provide descriptive information; others are used to specify how fact table data should be summarized to provide useful information to the analyst. Dimension tables contain hierarchies of attributes that aid in summarization. Dimensional modeling produces dimension tables in which each table contains attributes that are independent of those in other dimensions. For example, a customer dimension table contains data about customers, a product dimension table contains information about products, and a store dimension table contains information about stores. Queries use attributes in dimensions to specify a view into the fact information.

IV Fact Table
Each data warehouse or data mart includes one or more fact tables. Central to a star or snowflake schema, a fact table captures the data that measures the organization's business operations. A fact table might contain business sales events such as cash register transactions or the contributions and expenditures of a nonprofit organization. Fact tables usually contain large numbers of rows, sometimes hundreds of millions of records when they contain one or more years of history for a large organization. Each fact table also includes a multipart index that contains, as foreign keys, the primary keys of the related dimension tables, which hold the attributes of the fact records.
Fact tables should not contain descriptive information or any data other than the numerical measurement fields and the index fields that relate the facts to the corresponding entries in the dimension tables.

V Star Schema
The star schema is the simplest data warehouse design. Its main feature is a table at the center, called the fact table, surrounded by dimension tables which allow browsing of specific categories, summarizing, drill-downs, and specifying criteria. Typically, the fact tables in a star schema are in third normal form, while the dimension tables are de-normalized (second normal form). Despite being the simplest data warehouse architecture, the star schema is the most commonly used in data warehouse implementations across the world today (about 90-95% of cases). The star schema consists of a fact table with a single table for each dimension and does not capture hierarchies directly. The primary keys of each of the dimension tables are linked together to form the composite

primary key of the fact table. In a star schema design, there is only one de-normalized table for a given dimension.

Figure 2: Star Schema for Industry Data Warehouse

VI Metadata
When you deal with a data warehouse, the various phases, such as business process modeling, data modeling, ETL, and reporting, are inter-related, and each contains its own metadata. For example, in ETL it would be very difficult to extract, transform, and load source data into a data warehouse if no metadata were available for the source, such as where and how to get the source data. As a conceptual metadata model is expanded into a logical and then a physical data model, this phenomenon will occur many times. Eventually, as the physical data model is normalized to remove redundancy, redundant entities will be consolidated; the consolidated entity will be referenced within each subject area. This highlights the need for consistent entity names: for every instance of a business meaning, use the same entity name. You do not want to discover redundant data after a data warehouse has been implemented, and have someone remark, "Hey, did you know these two tables have different names, but the same data?" The situation can be even worse if the two tables do not have exactly the same rows. The first defense against such confusion is consistent entity names.
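The star schema of Section V, a central fact table whose composite primary key is assembled from the dimension tables' primary keys, can be sketched in a few lines of SQL (here via Python's sqlite3). All table and column names are illustrative, loosely modeled on the geography-by-date sales report of the proposed system, not taken from the actual warehouse:

```python
# Minimal star schema: one fact table, two dimension tables, and a
# drill-down style aggregation query as a report tool would issue it.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE dim_date      (date_id INTEGER PRIMARY KEY, year INTEGER, month INTEGER);
CREATE TABLE dim_geography (geo_id  INTEGER PRIMARY KEY, country TEXT);
CREATE TABLE fact_sales (
    date_id INTEGER REFERENCES dim_date(date_id),
    geo_id  INTEGER REFERENCES dim_geography(geo_id),
    sales_amount REAL,
    PRIMARY KEY (date_id, geo_id)   -- composite key built from dimension keys
);
INSERT INTO dim_date      VALUES (1, 2016, 1), (2, 2016, 2);
INSERT INTO dim_geography VALUES (10, 'India'), (20, 'Germany');
INSERT INTO fact_sales    VALUES (1, 10, 500.0), (2, 10, 700.0), (1, 20, 300.0);
""")

# Summarize the fact measure by a dimension attribute (sales by country).
cur.execute("""
    SELECT g.country, SUM(f.sales_amount)
    FROM fact_sales f JOIN dim_geography g ON f.geo_id = g.geo_id
    GROUP BY g.country ORDER BY g.country
""")
result = cur.fetchall()
print(result)   # [('Germany', 300.0), ('India', 1200.0)]
```

Because each dimension is a single de-normalized table, every report query is one join per dimension used, which is what keeps the star schema fast and simple.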

VII Proposed Model Flow Chart
Figure 3: Flow chart of Proposed Model

VIII Implementation of Proposed System
The primary goal of our work was to expedite the processing of user queries through the data warehouse environment. The proposed model shows a chart view of Internet Sales Amount and Internet Gross Profit by geography and by date, according to industry needs.

Figure 4: Sample report of Proposed Model

IX Conclusion
Data warehouses are still an expensive solution and are typically found in large firms. Data warehousing is the leading and most reliable technology used today by companies for planning, forecasting, and management. After the evolution of the data warehousing concept during the early 90s, it was thought that this technology would grow at a very rapid pace, but unfortunately that is not the reality. A major reason for data warehouse project failures is poor maintenance; without proper maintenance, the desired results are nearly impossible to attain from a data warehouse. The development of a central warehouse is a huge, capital-intensive undertaking with large, potentially unmanageable risks. Unlike operational systems, data warehouses need much more maintenance, and a support team of qualified professionals is needed to take care of the issues that arise after deployment, including data extraction, data loading, network management, training and communication, and query management, among other related tasks.

References
[1] W.H. Inmon, Building the Data Warehouse (Second Edition), John Wiley & Sons, Inc.
[2] Sukanta Singh, Sales Based Data Extraction for Business Intelligence, ACER 2013, CS & IT-CSCP 2013.
[3] S. Singh and R. Dutta, Web Based Business Intelligence Tool for a Financial Organization, Global Journal on Advancement in Engineering and Science, 1(1), March 2015.
[4] R. Dutta and S. Singh, Universal Data Warehouse System Architecture for Health Care Organization, Global Journal on Advancement in Engineering and Science, 1(1), March 2015.
[5] Nilakanta, Sree; Scheibe,
Kevin, Dimensional issues in agricultural data warehouse designs, Computers and Electronics in Agriculture, Vol. 60, No. 2, 2008.
[6] Jan Chmiel, Tadeusz Morzy, Robert Wrembel, Multiversion Join Index for Multiversion Data Warehouse, Information and Software Technology, Vol. 51, No. 1, 2009.
[7] The Computerworld magazine, /story/0,10801,89534,00.html.
[8] Ramez Elmasri and Shamkant B. Navathe, Fundamentals of Database Systems, 4th Edition, Pearson International and Addison Wesley.
[9] Joseph H. Hanson, An Alternative Data Warehouse Structure for Performing Updates, December 1996, UMI Press.
[10] W.J. Labio and H. Garcia-Molina, Efficient Snapshot Differential Algorithms for Data Warehousing, Technical Report, 1996, Stanford Univ.: Palo Alto.
[11] J. McElreath, Data Warehouses: An Architectural Perspective. Perspectives, November. Computer Sciences Corporation, El Segundo, CA, p. 13.
[12] Meta Software Corp., Using Design/IDEF to Simulate Workflow Models with ServiceModel, 1995, Meta Software Corp.: Cambridge, MA.

Original Research Work

Data Warehouse System Architecture for a Typical Health Care Organization
Rajib Dutta 1, Vicky Mondal 2
Department of Computer Science & Engineering, Global Institute of Management & Technology, Krishnanagar, Nadia, West Bengal
rajibdutta2007@gmail.com 1, vmondal@yahoo.com 2

Abstract: In the current scenario, large enterprises depend on database systems to manage their huge volumes of data and information, which are very useful for daily business transactions. Tough competition in the business market has popularized the concept of data mining, in which data are analyzed to derive effective business strategies and discover better ways of carrying out business through decision support systems. To perform data mining, regular databases have to be converted into what are called informational databases, also known as data warehouses. Keeping healthcare data in an ordinary database management system is no longer a proper solution, so the huge volumes of data accumulated in medical organizations are stored in a data warehouse. This paper presents a design of the system architecture for building the data warehouse of a typical health care organization.

Keywords: Health care data warehouse, Data Mining, Extract Transform and Load (ETL), Multidimensional databases (MDDBs), Decision Support System (DSS).

I. Introduction
A Data Warehouse (DW) is defined as a subject-oriented, integrated, time-variant, non-volatile collection of data in support of management's decision-making process [1]. The process of developing a data warehouse starts with identifying and gathering requirements and designing the dimensional model, followed by testing and maintenance. The design phase is the most important activity in the successful building of a data warehouse [2].
Today the health care industry is one of the fastest growing and most information-rich industries in the world. The data stored in a health care organization may include patients' details, patients' habits, disease records, individual pathology reports, physicians' details, physician order entries, physician decision support, medicines and billing. Most health organizations still stand alone: they do not communicate with other health organizations, and they do not share documents such as patients' details, disease records, individual pathology records and previous treatment histories. To overcome this isolation we propose a health care system architecture that works universally, meaning that different health organizations can share the needed documents with one another. It is also useful to patients, who can find the health care organization offering the best doctors and the best treatment for them at the lowest cost.

II. Background
Operational Database: The operational database is the database of record, consisting of system-specific reference data and event data belonging to a transaction-update system. It may also contain system control data such as indicators, flags and counters. The operational database is the source of data for the data warehouse.
Informational Database: An informational database is a special type of database designed to support decision making, based on historical point-in-time and prediction data, for complex queries and data mining applications. A data warehouse is an example of an informational database [5].
Data Warehouse: A data warehouse (DW or DWH), also known as an enterprise data warehouse (EDW), is a system used for reporting and data analysis. DWs are central repositories of integrated data from one or more disparate sources.
They store current and historical data and are used for creating analytical reports for knowledge workers throughout the enterprise. Examples of reports range from annual and quarterly comparisons and trends to detailed daily sales analyses.
Data Mining: Data mining is a knowledge discovery process that uses a blend of statistical, machine learning and artificial intelligence techniques to detect trends and patterns in large data sets, often held in a data warehouse. The purpose of data mining is to discover new facts about the data that are helpful to decision makers [6].
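The contrast between an operational and an informational database can be made concrete with a small, hypothetical example: the same patient-visit data answers a single-record lookup in the operational system and a historical aggregate query in the warehouse. The table and column names below are illustrative only, not taken from the paper's schema.

```python
import sqlite3

# Hypothetical patient-visit data; the schema is illustrative only.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE visit (patient_id INTEGER, disease TEXT, month TEXT, cost REAL)")
con.executemany(
    "INSERT INTO visit VALUES (?, ?, ?, ?)",
    [(1, "Pneumonia", "2016-01", 1200.0),
     (2, "Pneumonia", "2016-01", 950.0),
     (3, "Diabetes",  "2016-01", 400.0),
     (1, "Pneumonia", "2016-02", 300.0)],
)

# Operational-style query: one record, current state of one patient.
row = con.execute("SELECT disease FROM visit WHERE patient_id = 3").fetchone()
print(row[0])

# Informational-style query: a historical aggregate for decision support.
for disease, visits, total_cost in con.execute(
        "SELECT disease, COUNT(*), SUM(cost) FROM visit "
        "GROUP BY disease ORDER BY disease"):
    print(disease, visits, total_cost)
```

The second query is the kind of summary a decision-support system draws from the warehouse, while the first is a typical transaction-system lookup.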

R. Dutta et al., Data Warehouse System Architecture for a Typical Health Care Organization, Global Journal on Advancement in Engineering and Science, 2(1), March 2016, pp.

III. Operational Database Design
Essentially, the operational database used later to derive the data warehouse encompasses fourteen distinct relations, or tables, associated by means of relationships. It is a relational database implemented under MS Access [8]. This database represents the business inside a typical health care organization. It includes a front-end registration system for handling patient registration (patient details, patients' habits, disease records and individual pathology reports), an accounting system for managing patient payments, a departmental management system for managing patients according to their diseases and assigning them to a particular pathology unit and physician, and an assets system for distributing items such as medicine, equipment and bed allocations over the different departments. Figure 1 depicts the conceptual schema of the operational database.

Figure 1: Conceptual Schema of the Operational Database

IV. System Architecture Design
Nowadays both effective health care and financial survival are vital for any health care organization. Data about the accuracy of diagnoses, the effectiveness of treatments, efficient doctors and proper cost are crucial for a health care centre. Different health care centres charge different amounts for the same type of treatment. The health care industry is unique in that it must combine efforts to improve the quality of individuals' health with efforts to cut costs to employers and governments. In India there are now several types of health care organization, and building a health care data warehouse for diagnosis is quite different from doing so in other industries.
Like any software development project, however, a data warehouse is built through a stage-by-stage procedure. Our main aim is to devise a data warehouse architecture through which a patient gets proper treatment at low cost. The stages for building a health care data warehouse are described in this paper as follows:
A. Business Analysis
B. System Architecture Design
C. Data Architecture Design

A. Business Analysis
A data warehouse is an information delivery system for business intelligence. Business intelligence (BI) is a technology-driven process for analyzing data and presenting actionable information to help corporate executives, business managers and other end users make more informed business decisions. It solves users' problems and provides strategic information to the user. In the requirements-definition phase we need to concentrate on what information the users need. With a purely top-down approach, the data warehouse is developed from a relational data model in third normal form, and this relational database forms the data warehouse. The business analysis stage consists of business process analysis and business requirement analysis.

A.1. Business Process Analysis
In this process there are four actors: patient, doctor, pathologist and executive manager. Patients come to the health care centre for treatment. According to a patient's symptoms, the doctors send him or her to pathology for medical tests. From the test report, the doctors determine the type of disease and then start the treatment.

Case 1: Seek Consultation. Patients go to a doctor when they notice certain symptoms. According to the disease symptoms, the health care organization sends them to the doctors.
Case 2: Perform Diagnosis. The doctor and the pathologist together perform a series of tests, such as a blood test (white blood cell differential), chest x-ray, auscultation (to detect abnormal breath sounds) and nasopharyngeal culture, to determine the type of disease the patient has.
Case 3: Propose Treatment. According to the pathology report, the doctors start the treatment. If they need a consultation during treatment, they simply share the medical report with another doctor and then proceed.

A.2. Business Requirement Analysis
Some important requirements for a health care data warehouse supporting the diseases and treatments recommended by the doctors are proposed here:
- A minimum level of dimensional business data about the patient is required, in which the patient's details are stored. The record includes: full name, date of birth, gender, age, marital status, address, contact number, occupation, disease details, treating doctor, etc.
- Every patient record must be identified by a unique ID number declared as a primary key. This prevents data duplication and makes searching easy.
- The medical diagnosis function requires updating the patient's medical history, symptoms and drug interactions before and after treatment.
- The system must be able to display information at both summary and detail levels, so that the user (a doctor) can get a specific idea of the disease and analyze the results.

B.
System Architecture Design for Health Care Organization
Figure 2 shows the complete proposed architecture for the health care organization data warehouse. The architecture is built with the Source Data component on the left, where data coming from multiple sources are moved into the Data Staging area before being integrated. The Data Staging component is the next building block; these two blocks form the Data Acquisition area. In the middle is the Data Storage component, which manages the data warehouse data, together with the Metadata, which keeps track of the data, and the Data Marts. The last component of the architecture is Information Delivery, which covers all the different ways of making the information in the data warehouse available to users for further analysis.

Figure 2: Universal Data Warehouse Architecture for Health Care Organization

B.1. Data Acquisition
The source data here are mainly medical files stored in a Microsoft Access database: patient medical reports, blood test results, x-ray results, auscultation test results, nasopharyngeal culture reports, etc. These data come from multiple sources. In data extraction, data are selected from the sources and all extracted data are moved to the staging area. In the data transformation step, the extracted data are mapped onto the data warehouse's data model; combining pieces of data from different sources is part of this transformation. When the data transformation function ends, the result is a collection of integrated data that is cleaned, standardized and summarized. This stage provides a set of functions and services such as:

Data Extraction
- Select data from the medical files and determine the types of filters to be applied to individual sources.
- Generate automatic extract files from operational systems using replication and other techniques.
- Create intermediary files to store selected data to be merged later.
- Transport extracted files from multiple platforms.
- Provide automated job-control services for creating extract files.
- Generate common application code for data extraction.
- Resolve inconsistencies in common data elements from multiple sources.

Data Transformation
- Map input data to the data warehouse repository.
- Clean data, deduplicate, and merge/purge.
- Denormalize extracted data structures as required by the dimensional model of the disease data warehouse.
- Convert data types; calculate and derive attribute values; check referential integrity; aggregate data as needed.
- Resolve missing values; consolidate and integrate data.

Figure 3: Data Acquisition: Health Care Organization

B.2. Data Storage
In this portion, the data from the staging area are loaded into the data warehouse repository. Medical file data and Microsoft Access data are loaded into the data warehouse on a day-to-day basis. The data repository holds the data structures in highly normalized form for fast and efficient processing, and large amounts of historical patient data are kept in the data warehouse for analysis. Data storage in the warehouse is kept separate for quick retrieval of individual pieces of information; the data warehouse is a read-only data repository.

Figure 4: Data Storage: Health Care Organization

This stage provides a set of functions and services such as:
- Load health care data for full refreshes into the data warehouse tables.
- Perform incremental loads at regular prescribed intervals.
- Load details and summarized levels of patients' data into multiple tables.
- Optimize the data loading process.

B.3. Information Delivery
In this stage the doctors collect information from the data warehouse. The information delivery component makes it easy to access the information directly from the health care data warehouse for decision making. There are different information delivery methods for different users. Ad hoc reports are predefined reports primarily meant for novice and casual users, i.e. the staff of the health care centre. Provision for complex queries, multidimensional analysis and statistical analysis caters to the needs of business analysts and power users; the doctors are this type of user. Information fed into Executive Information Systems is meant for senior executives and high-level managers. The primary data warehouse feeds data to proprietary multidimensional databases (MDDBs), where summarized data are kept as multidimensional

cubes of information. Based on the fact table and multiple dimension tables, the star model is adopted; hierarchies created from the dimension tables are then useful for creating reports. This stage provides a set of functions and services such as:
- Allow the doctors and other decision makers to browse the disease data warehouse content.
- Provide security against unknown users accessing the data.
- Hide the complexities of data storage from users and allow them simply to access the data.
- Reformat queries automatically for optimal execution.
- Provide self-service report generation for users, with a variety of flexible options to create, schedule and run reports.
- Store query and report result sets for future use.
- Provide event triggers to monitor data loading, and multiple levels of data granularity.

Figure 5: Information Delivery: World Health Care Organization Architecture

C. Data Architecture Design
The star schema demonstrating the data layer architecture of the health care data warehouse is shown in Figure 6. The star schema of the health care data warehouse is deliberately de-normalized and therefore contains redundant data; business intelligence techniques and data mining can take advantage of denormalized data.

Figure 6: Health Care Data Warehouse Star Diagram

The fact table that describes the medical report is named Medical_Fact. It consists of Patient_ID, Date_of_Entry, Habit_ID, Disease_ID, Risk_Factor_ID, Treatment, Symptom_ID, Diagnostic_Status, etc. The symptoms of a patient's diagnosed condition and the results are stored in the Symptom table. The dimension tables store the details of each entity of every table; the entities in the Medical_Fact table are Patient_Dimension, Treatment_Dimension, Symptom_Dimension and Disease_Type_Dimension.
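The star schema just described can be sketched as a few SQL table definitions, here through Python's built-in sqlite3. The column lists are abbreviated, and the exact key columns and types are assumptions based only on the table and attribute names given in the text.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
-- Dimension tables (columns abbreviated; illustrative only).
CREATE TABLE Patient_Dimension      (Patient_ID   INTEGER PRIMARY KEY, Name TEXT, Sex TEXT, Age INTEGER);
CREATE TABLE Treatment_Dimension    (Treatment_ID INTEGER PRIMARY KEY, Treatment TEXT);
CREATE TABLE Symptom_Dimension      (Symptom_ID   INTEGER PRIMARY KEY, Symptom TEXT, Normal_Range TEXT);
CREATE TABLE Disease_Type_Dimension (Disease_ID   INTEGER PRIMARY KEY, Disease TEXT, Disease_Type TEXT);

-- Fact table: one row per medical-report entry, keyed to the four dimensions.
CREATE TABLE Medical_Fact (
    Patient_ID        INTEGER REFERENCES Patient_Dimension(Patient_ID),
    Treatment_ID      INTEGER REFERENCES Treatment_Dimension(Treatment_ID),
    Symptom_ID        INTEGER REFERENCES Symptom_Dimension(Symptom_ID),
    Disease_ID        INTEGER REFERENCES Disease_Type_Dimension(Disease_ID),
    Date_of_Entry     TEXT,
    Diagnostic_Status TEXT
);
""")

# The star shape: every dimension joins to the central fact table on one key.
tables = [r[0] for r in con.execute(
    "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name")]
print(tables)
```

Each analytical query then joins Medical_Fact to whichever dimensions the report needs, which is what makes the denormalized star layout convenient for OLAP-style analysis.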

Patient_Dimension: a table that stores patient information such as patient name, sex, age, address, disease, test reports, blood group, etc. These data are used to find out the type of disease.
Treatment_Dimension: a table that stores all possible treatment options.
Symptom_Dimension: a table that stores every symptom, with its normal-condition and abnormal-condition values.
Disease_Type_Dimension: a table that stores all the diseases and the types of the diseases; only disease-related data are stored here.

Figure 7: Data in the Patient Dimension Table

V. Conclusion
Our aim is to design a data warehouse architecture for a health care organization that gives the best treatment at low cost. Developing a typical health care data warehouse places data quality high on the agenda. Health care data warehouses are challenging because definitions of individual items must be clear and unambiguous throughout the organization, while in practice shared data elements have alternative definitions owing to a range of different users with a variety of information needs. The health care industry is fast developing and extremely data-rich, and to take advantage of this we build a data warehouse oriented to a typical health care organization. In this universal health care data warehouse, medical files are integrated with operational data; analysis of patients' medical report data is then made easy by using OLAP cubes. With such multilevel views of the data, anyone can analyze the diseases, the cost of treatment, the death rate for a specific type of disease and the impact of a particular drug.
The proposed system has been implemented using Java (JDK 1.6_16), Microsoft SQL Server 2008, Microsoft Office Access 2007, Microsoft SQL Server Integration Services 2008, Microsoft SQL Server Reporting Services 2008 and Microsoft SQL Server Analysis Services 2008.

VI. References
[1] W. H. Inmon and R. D. Hackathorn, Using the Data Warehouse, Wiley-QED Publishing, Somerset, NJ, USA, 1994.
[2] Rajni Jindal and Shweta Taneja, "Comparative Study of Data Warehouse Design Approaches: A Survey", International Journal of Database Management Systems (IJDMS), Vol. 4, No. 1, February 2012.
[3] William Inmon, Building the Operational Data Store, 2nd ed., John Wiley & Sons.
[4] Matteo Golfarelli and Stefano Rizzi, Data Warehouse Design: Modern Principles and Methodologies, McGraw-Hill Osborne Media.
[5] Youssef Bassil, "A Data Warehouse Design for a Typical University Information System", Journal of Computer Science & Research (JCSCR), Vol. 1, No. 6, December 2012.
[6] Pang-Ning Tan, Michael Steinbach and Vipin Kumar, Introduction to Data Mining, Addison Wesley, 2005.
[7] Robert Laberge, The Data Warehouse Mentor: Practical Data Warehouse and Business Intelligence Insights, McGraw-Hill Osborne Media.
[8] Vineetha Appidi, Syed Umar and Sushma Vallamkonda, "Development of a Data Warehouse for Cancer Diagnosis and Treatment Decision Support", International Journal Engg Techsci, Vol. 5(3), 2014.
[9] Paulraj Ponniah, Data Warehousing Fundamentals: A Comprehensive Guide for IT Professionals, Wiley India Pvt. Ltd.

Special Issue: Conference Proceeding of i-CON-2016 Global Journal on Advancement in Engineering and Science (GJAES) Vol. 2, Issue 1: March-2016, ISSN (Print): Original Research Work

Fenton's Treatment of Tannery Wastewater
Ranajit Basu 1
Department of CE 1, Global Institute of Management and Technology, Krishnagar, India
ranajit123basu@gmail.com 1

Abstract: The Fenton process has been found to be an effective treatment method for industrial effluents. It uses hydrogen peroxide and ferrous sulphate to reduce the concentrations of various pollution parameters. Fenton's treatment brings the COD, chromium (VI), nitrate and total alkalinity of tannery wastewater much nearer to the BIS acceptable limits.

Keywords: COD, chromium (VI), Fenton's treatment, tannery wastewater.

I. Introduction
The leather industry is a highly polluting industry. Tannery wastewater is a major source of environmental pollution, causing serious impacts on water; it has a high oxygen demand and contains toxic chemical constituents. The inorganic pollutants from the tanning industry include chromium, chloride, sulphate, sulphide, ammonia, etc. (Durai and Rajasimman, 2011). Wastewater treatment methods include the activated sludge process, trickling filters, membrane bioreactors, rotating biological contactors, aerated lagoons, upflow anaerobic sludge blankets and advanced oxidation processes, among others. Each has its own advantages and limitations. Advanced oxidation is a comparatively new approach that is widely used nowadays. Advanced oxidation processes (AOPs) are chemical treatment procedures designed to remove organic (and sometimes inorganic) materials from water and wastewater by oxidation through reaction with hydroxyl radicals. The main goal of AOPs is to reduce the contaminants to such an extent that the cleaned wastewater can be reintroduced into receiving streams (Sharma et al., 2011).
The Fenton process consists of the non-selective and highly efficient oxidation of organic compounds by hydroxyl radicals, which are formed in a chain process of hydrogen peroxide decomposition in the presence of bivalent iron salts (Kos et al.). Ferrous iron(II) is oxidized by hydrogen peroxide to ferric iron(III), a hydroxyl radical and a hydroxide anion. Iron(III) is then reduced back to iron(II), with a superoxide radical and a proton, by the same hydrogen peroxide. The net effect is a disproportionation of hydrogen peroxide to create two different oxygen-radical species, with water (H+ + OH-) as a byproduct. The reactions involved are:

Fe2+ + H2O2 → Fe3+ + •OH + OH-
Fe3+ + H2O2 → Fe2+ + •OOH + H+

The free radicals generated by this process then engage in secondary reactions. For example, the hydroxyl radical is a powerful, non-selective oxidant. Oxidation of any organic compound by Fenton's reagent is rapid and exothermic, and results in oxidation of the contaminants primarily to carbon dioxide and water (Sharma et al., 2011).

II. Sampling Location
The sample was collected from a tannery at Kolkata, India.

III. Materials and Methods
The chemicals used for the treatment were potassium dichromate, mercuric sulphate, silver sulphate, ferrous sulphate, brucine sulphate, sulfanilic acid, hydrochloric acid, sulphuric acid, barium chloride, glycerol, anhydrous sodium sulphate, 1,5-diphenylcarbazide, etc.
1. Characterization: The collected sample was characterized for its various physico-chemical parameters such as pH, COD, total alkalinity, nitrate, chloride, chromium (VI) and total hardness, as per Standard Methods (APHA, 1995).
2. Experimental setup: Fenton's treatment was carried out at room temperature. 500 ml of tannery wastewater sample was taken in a beaker and placed on a magnetic stirrer for continuous stirring. The pH of the sample was adjusted with 1 N HCl and 1 N NaOH solution.
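The reagent quantities for a batch follow directly from the stated ratios: H2O2 from the H2O2:COD mass ratio, and FeSO4.7H2O from a chosen H2O2:Fe2+ molar ratio. The sketch below illustrates that arithmetic; the COD value and the molar H2O2:Fe2+ ratio used in the example are hypothetical placeholders, since the paper treats the ratio as an experimental variable and does not report the sample's COD here.

```python
# Hedged sketch: Fenton reagent doses for one wastewater batch.
# The COD value and H2O2:Fe2+ molar ratio below are assumed for illustration.
M_H2O2 = 34.01          # molar mass of H2O2, g/mol
M_FESO4_7H2O = 278.01   # molar mass of FeSO4.7H2O, g/mol

def fenton_doses(volume_l, cod_mg_l, h2o2_cod_mass_ratio=1.0, h2o2_fe_molar_ratio=10.0):
    """Return (grams of H2O2, grams of FeSO4.7H2O) for one batch."""
    # H2O2 dose from the H2O2:COD mass ratio (1:1 in the experiment).
    h2o2_g = cod_mg_l * volume_l * h2o2_cod_mass_ratio / 1000.0
    # Fe2+ dose from the chosen H2O2:Fe2+ molar ratio (assumed 10:1 here).
    fe_mol = (h2o2_g / M_H2O2) / h2o2_fe_molar_ratio
    return h2o2_g, fe_mol * M_FESO4_7H2O

# 500 ml sample with an assumed COD of 2000 mg/L:
h2o2, feso4 = fenton_doses(volume_l=0.5, cod_mg_l=2000.0)
print(round(h2o2, 2), "g H2O2,", round(feso4, 2), "g FeSO4.7H2O")
```

The same function can be re-run for each H2O2:Fe2+ ratio tested, which is how the "calculated amount of FeSO4.7H2O" in the setup would be obtained.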
Depending on the H2O2:Fe2+ ratio, the calculated amount of FeSO4.7H2O was added to the sample as a catalyst. The H2O2:COD ratio was taken as 1:1, and accordingly the required amount of H2O2 was added. At regular intervals of 0, 15, 30, 45, 60, 120 and 180 min and 24 hours, samples were withdrawn from the 500 ml beaker.

R Basu, Fenton's Treatment of Tannery Wastewater, Global Journal on Advancement in Engineering and Science, 2(1), March 2016, pp.

The various above-mentioned parameters were then determined for each withdrawn sample, along with the residual hydrogen peroxide, in order to assess the efficiency of removal of the various parameters by Fenton's treatment.

IV. Conclusion
The results obtained from the above experiment and the analysis of the withdrawn samples indicate that Fenton's treatment is highly efficient at reducing the concentrations of various polluting parameters, especially chemical oxygen demand and chromium, which are extremely harmful to the environment. However, since chemical treatment is not cost-effective compared with biological treatment, a combined scheme of chemical followed by biological treatment may reduce the treatment cost and could be adopted in industrial applications.

References:
[1] Anglada, A., Urtiaga, A. and Ortiz, I., "Contributions of electrochemical oxidation to waste-water treatment: fundamentals and review of applications", J. Chem. Technol. Biot., 84 (12).
[2] Ahn, D.H., Chang, W.S. and Yoon, T.I., 1999, "Dyestuff wastewater treatment using chemical oxidation, physical adsorption and fixed bed biofilm process", Process Biochem., 34.
[3] Apaydin, O., Kurt, U. and Gonullu, M.T., "An investigation on the tannery wastewater by electrocoagulation", Global NEST J., 11.
[4] APHA, AWWA, WPCF, Standard Methods for the Examination of Water and Wastewater, American Public Health Association, Washington, DC.
[5] CPCB, "Recovery of better quality reusable salt from soak liquor of tanneries in solar evaporation pans", Central Pollution Control Board (CPCB), Ministry of Environment & Forests, Control of Urban Pollution Series.
[6] Ganesh, R., Balaji, G. and Ramanujam, R.A., 2006, "Biodegradation of tannery wastewater using sequencing batch reactor - respirometric assessment", Bioresour. Technol., 97.
[7] Genschow, E., Hegemann, W. and Maschke,
C., "Biological sulfate removal from tannery wastewater in a two-stage anaerobic treatment", Water Res., 30.
[8] Goi, A. and Trapido, M., 2002, "Hydrogen peroxide photolysis, Fenton reagent and photo-Fenton for the degradation of nitrophenols: a comparative study", Chemosphere, Kidlington, v. 46.
[9] Kos, L., Michalska, K. and Perkowski, J., 2010, "Textile wastewater treatment by the Fenton method".
[10] Schrank, S.G., Jose, H.J., Moreira, R.F.P.M. and Schroder, H.F., "Elucidation of the behavior of tannery wastewater under advanced oxidation conditions", Chemosphere, 56 (2004).
[11] Szpyrkowicz, L., Kaul, S.N. and Neti, R.N., "Tannery wastewater treatment by electro-oxidation coupled with a biological process", J. Appl. Electrochem., 35 (2005).
[12] Tunay, O., Kabdasli, I., Orhon, D. and Ates, E., "Characterization and pollution profile of leather tanning industry in Turkey", Water Sci. Tech., 32 (1995), 1-9.

Special Issue: Conference Proceeding of i-CON-2016 Global Journal on Advancement in Engineering and Science (GJAES) Vol. 2, Issue 1: March-2016, ISSN (Print): Review Work

Advanced Analysis of a Structure using Staad Pro
Mainak Ghosal 1,2,3,4
1. Research Scholar, Department of Civil Engineering, Indian Institute of Engineering Science & Technology, Shibpur, Howrah.
2. Adjunct Assistant Professor (Civil), JIS College of Engineering
3. Formerly Assistant Engineer (Civil), Public Health Engineering Directorate, Govt. of W. Bengal
4. Registered Chartered Civil Engineer & IT Consultant, CBDT, Govt. of India.
Email id: mainakghosal2010@gmail.com

Abstract: The advent of high-speed computers and software is a boon to civil engineering, as the daunting task of analysis and design is virtually computerized nowadays. Staad Pro Vi8 has become an essential software package in the civil/mechanical (structural) engineering world today; more than 95% of design firms use Staad Pro. It can be used for practically all types of structures, from concrete and steel to aluminium, timber and even piping design. Both static and dynamic analysis, including P-Delta, pushover, time history, response spectrum and buckling analysis, can be performed. The analysis of structural systems, which was manual a few decades ago, can now undergo several iterations over different alternatives with the help of computers. Hence present-day engineers pay more attention to knowing how to use the software, i.e. how to efficiently give input to the computer and how to interpret the results from the computer output. As a result, engineers now depend almost entirely on the software for analysis and design. But a skilled structural engineer is still needed to drive it; otherwise we lose our understanding of the behaviour of the structure and merely work with numerical results, compromising our engineering sense and also the business ethics of our working culture.
Keywords: Concrete, Design, Staad, Structures

I. Introduction
Modern computer software such as Staad Pro has reduced the task of engineers to a great extent. Even the most sophisticated analysis is performed within a very short time, and assumptions and approximations in analysis have been considerably reduced. Every element is scanned at a large number of points to determine the worst stresses developed in it, and is designed accordingly to satisfy the strength requirements; the serviceability requirements are also checked by the software. Without knowledge of Staad it would be practically impossible for the new generation of budding engineers to gain a foothold in the industry. Though there are various other structural design packages such as E-Tabs, SAP, ROBOT and S-Frame, Staad stands apart from the rest for its USP and versatility. It has been 20 years since this software was first introduced in the market by Research Engineers International, USA, which was later bought by Bentley Systems. The word STAAD is an abbreviation of Structural Analysis And Design, and the software complies with the norms and regulations of ISO 9001 certification. It is a vital application built around finite element method (FEM) analysis that allows structural engineers to get their job done with great speed and high accuracy. Some of the top firms in the world, including structural consultants and engineering colleges, use Staad Pro. Imagine how a 10-storied building, which once took at least a month to analyze (leaving aside the design), can now be completed in 10 minutes.

II. Discussion
STAAD.Pro is the structural engineering professional's choice for steel, concrete, timber, aluminium and cold-formed steel design of virtually any structure, including culverts, petrochemical plants, tunnels, bridges, piles and much more.

2.1 STAAD STRUCTURE TYPES
There are four types of structures for the user's choice: 1. Space 2. Plane 3. Floor 4.
Truss

2.2 STAAD STRUCTURAL ELEMENTS
STAAD provides four types of elements: 1. Beams 2. Plates 3. Solids 4. Surface

2.3 STAAD.PRO SEQUENCING OF OPERATIONS

M Ghosal, Advanced Analysis of a Structure using Staad Pro, Global Journal on Advancement in Engineering and Science, 2(1), March 2016, pp.

How to use the software?
1. Create the structure using the Graphical User Interface (GUI). The model can either be drawn on the graph sheet that appears on the computer screen as a planar one, building up the 3D model with the relevant icons (beam, plate or solid element icons), or the nodal coordinates can be entered through repeat commands to get the model.
2. Basic Geometry: start by defining the basic geometry of the structure using beams, columns, plates and/or solid elements.
3. Next, define the Member Properties: the size of the member, depth, width, cross-sectional shape, etc.
4. Specify the Material Constants: are the members made of timber, steel, concrete or aluminium? What are the Poisson's ratio, coefficient of thermal expansion, density, etc.?
5. Define Member Specifications unique to the structure, for instance connections, supports, etc.
6. Assign the Loads: self-weight, dead loads, live loads, wind loads, earthquake loads and various load combinations.
7. Enter Perform Analysis instructions.
8. Enter Run Analysis instructions.
9. Specify Design Commands (for steel, concrete, timber, etc.).

2.4 STAAD COORDINATES
STAAD.Pro uses two types of coordinate systems to define the structure geometry and loading patterns. The global coordinate system is an arbitrary coordinate system in space used to specify the overall geometry and loading pattern of the structure. A local coordinate system is associated with each member (or element) and is used in member end force output or local load specification.

2.5 STAAD MEMBER PROPERTIES
STAAD.Pro uses the following types of member property specification: [a] prismatic property specifications; [b] standard steel shapes from the built-in section library; [c] user-created steel tables; [d] tapered sections; [e] through the Assign command.
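The modelling sequence above (steps 1-9) is mirrored in the plain-text command file that the GUI generates behind the scenes. The fragment below is an illustrative sketch of that ordering for a hypothetical two-column portal frame; the geometry, member sizes and load values are placeholders, not a model from this paper, and the exact command spellings may vary between STAAD.Pro versions.

```text
STAAD SPACE
UNIT METER KN
JOINT COORDINATES
1 0 0 0 ; 2 0 3 0 ; 3 4 3 0 ; 4 4 0 0
MEMBER INCIDENCES
1 1 2 ; 2 2 3 ; 3 3 4
MEMBER PROPERTY
1 3 PRIS YD 0.45 ZD 0.30
2 PRIS YD 0.50 ZD 0.30
CONSTANTS
MATERIAL CONCRETE ALL
SUPPORTS
1 4 FIXED
LOAD 1 DEAD LOAD
SELFWEIGHT Y -1
MEMBER LOAD
2 UNI GY -10
PERFORM ANALYSIS
FINISH
```

Reading the file top to bottom reproduces the sequencing of operations: geometry, member properties, material constants, supports, loads, then the analysis command.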
2.6 STAAD SUPPORTS
STAAD.Pro supports may be pinned, fixed, or fixed with different releases. A pinned support has restraints against all translational movement and none against rotational movement; in other words, a pinned support will have reactions for all forces but will resist no moments. A fixed support has restraints against all directions of movement, and the restraints of a fixed support can also be released in any desired direction.

2.7 STAAD LOADS
Loads in a structure can be defined as joint loads, member loads, temperature loads and fixed-end member loads. STAAD can also generate the self-weight of the structure and use it as a uniformly distributed member load in analysis. Any fraction of this self-weight can also be applied in any desired direction.

Joint Load
Joint loads, both forces and moments, may be applied to any free joint of a structure. These loads act in the global coordinate system of the structure, and positive forces act in the positive coordinate directions. Any number of loads may be applied to a single joint, in which case the loads are additive on that joint.

Member Load
Three types of member load may be applied directly to a member of a structure: uniformly distributed loads, concentrated loads, and linearly varying loads (including trapezoidal). Uniform loads act on the full or partial length of a member. Concentrated loads act at any specified intermediate point. Linearly varying loads act over the full length of a member; trapezoidal linearly varying loads act over the full or partial length of a member. Trapezoidal loads are converted into a uniform load and several concentrated loads.
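The trapezoidal-load conversion just mentioned can be sketched in a few lines: a trapezoidal line load splits into a uniform part plus a triangular remainder, whose resultant acts at the triangle's centroid. This is a generic statics decomposition for illustration, not STAAD's internal algorithm, which the text does not detail.

```python
def split_trapezoidal(w1, w2, length):
    """Split a trapezoidal line load (intensity w1 at x=0, w2 at x=length)
    into a uniform load plus the resultant of the triangular remainder.

    Returns (uniform intensity, triangle resultant force,
             distance of that resultant from the w1 end).
    Units are whatever w1/w2 (force per length) and length carry, e.g. kN/m and m."""
    w_uniform = min(w1, w2)                  # uniform part: the smaller end intensity
    peak = abs(w2 - w1)                      # peak of the triangular remainder
    resultant = 0.5 * peak * length          # area under the triangle = total force
    # Triangle centroid lies 2/3 of the span from its zero-intensity end.
    x_bar = 2.0 * length / 3.0 if w2 > w1 else length / 3.0
    return w_uniform, resultant, x_bar

# Example: a load rising from 2 kN/m to 6 kN/m over a 3 m member
# splits into 2 kN/m uniform plus a 6 kN resultant at 2 m from the w1 end.
print(split_trapezoidal(2.0, 6.0, 3.0))  # (2.0, 6.0, 2.0)
```

In practice a solver would further replace the triangular part by several concentrated loads along the member, as the text describes, rather than a single resultant.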

Any number of loads may be specified to act upon a member in any independent loading condition. Member loads can be specified in the member coordinate system or the global coordinate system. Uniformly distributed member loads provided in the global coordinate system may be specified to act along the full or projected member length.

Area Load: Many times a floor (bounded by the X-Z plane) is subjected to a uniformly distributed load. It could require a lot of work to calculate the member load for individual members in that floor. However, with the Area Load command, the user can specify area loads (unit load per unit square area) for members. The programme will calculate the tributary area for these members and provide the proper member loads. There are also other loads like floor load, hydrostatic load, prestress load, etc. in the loading commands.

(I) Model Generation and Analysis: Node, Beam and Plate are the three main utilities here. Beam and column members are represented using lines. Walls, slabs and panel-type entities are represented using triangular and quadrilateral finite elements. Create a new structure using Space, Plane, Floor or Truss. Choose the length and force units. Construct the model using Add Beams, Add Plates or Add Solids by creating the nodes (Snap Node/Beam), or use the Structural Wizard for alternate readymade structures. From Command, specify the section properties of the members created. From Command, assign the support conditions. From Command, assign the loading conditions, i.e., dead load (incl. self-weight), live load, wind load and seismic load. From Command, click Analysis -> Perform Analysis. From Analyze, click Run Analysis. View the output file. Go to the Post Processing mode to view the structural diagrams/graphs. To vibrate the structure, click Animate in this mode. III.
Conclusions
Last but not the least, Staad is being progressively used in various types of engineering structures, viz. high-rises, bridges, foundations, pilings and even piping. As a result, engineers are now mostly depending entirely on the software for analysis and design. We are losing our understanding of the behaviour of the structure and are merely working with numerical results, thereby compromising proper engineering sense. So it must be clearly understood that even the most sophisticated analysis/design package requires a skilled structural engineer to drive it. Also, STAAD has certain inherent drawbacks. It is concluded that, in spite of its numerous advantages, Staad Pro is silent on certain aspects like soil-structure interaction and high-strength concrete. Staad Pro does not throw light on concrete grades beyond M40 or steel grades beyond Fe500 in its RC Designer part, whereas other software like SAP, ETABS, etc. have provision to change/modify the material properties in this respect. In future, Staad needs to update itself in modifying the material properties part and also in assigning member properties, by highlighting more of the local aspects, like developing more pop-ups at individual local floor levels. Like the recently introduced software Tekla, it should have provisions for investigating any local aspect by simply windowing on the structure itself.

Figure 1: STAAD.Pro opening popup window

Special Issue: Conference Proceeding of i-con-2016
Global Journal on Advancement in Engineering and Science (GJAES) Vol. 2, Issue 1: March-2016, ISSN (Print):
Original Research Work

Improvisation of Locally Available Soil for Economical Foundation
Saroj Kundu 1, Sukanya Basu 2, Pritam Dhar 3
U.G. Student, SDET Brainware Group of Institutions, Civil Engineering Department 1,2; Assistant Professor, SDET Brainware Group of Institutions, Civil Engineering Department 3
kundu.saroj44@gmail.com 1, sukanyabs64@gmail.com 2, pritamdhar21feb@gmail.com 3

Abstract: With the rapid growth of our present-day needs, construction work is also increasing day by day. In older days there was much land available on which to construct buildings, and one could leave space at the front, rear and sides of a building. But at present the availability of large plots for building construction is becoming scarce, so in the available small area one has to construct high-rise buildings, as there is no opportunity to spread horizontally. If the soil below the foundation is very weak, or its in-situ bearing capacity is very low, a pile foundation has to be adopted for constructing the building; but such a foundation is costly. However, if the in-situ soil behaviour can be improved by mixing the soil with some locally available material, a cheaper foundation can be provided. In this paper a method is suggested for such improvement of the in-situ soil behaviour, increasing the load-carrying capacity of the soil so as to cut down the cost of the adopted foundation.

Keywords: Foundation, improvisation, grain size distribution, Standard Proctor test

I. Introduction
In older days, there was much land available on which to construct buildings. At that time one could leave space at the front, rear and sides of a building. But today the availability of large land for building construction is becoming lesser, so in the available small area one has to construct high-rise buildings.
On the other hand, we have no opportunity to choose the land for construction; we have to make the land suitable for construction. If the soil of the area is very weak, or it has low capacity for resisting the surcharge load, then we can improve the soil behaviour by mixing it with some locally available material, and thus the cost of the construction will reduce. This concept is not only valid for buildings; it is also useful for preparing subgrades of pavements and embankment dams.

II. Scope of study
In Kolkata, very few lands are left which can be used for construction purposes. The soil quality of those lands is not good because most of them are not virgin soil. Low-land areas are landfilled by dumping garbage and thus become acceptable for construction, so these lands have very poor bonding strength between soil particles and are not suitable for construction. But if we use some locally available materials like fly ash and lime along with the soil (waste materials commonly available from thermal power plants in Kolkata), the soil quality can become better. Fly ash has good bonding capacity, much like how Portland cement is used to bond aggregate together to make concrete. It controls shrinkage and swelling by cementing the soil grains together, and this bond resists particle movement. The strength of the soil is increased because of the strong bonds between the soil grains and the chemical reactions of fly ash that consume moisture within the soil. The reaction between the binders and water within the soil will absorb moisture, reduce voids and therefore create a denser material. Besides, we also use lime to strengthen the soil; it helps to reduce permeability and controls the OMC, and on mixing with the soil for a long time it enhances the bearing capacity of the soil. Thus fly ash and lime in various proportions can enhance the quality of the soil.

III. Materials
The soil sample is taken from our college campus.
It can be considered a virgin and active soil. Its properties, viz. liquid limit, plastic limit, grain size distribution and other physical properties, are shown in detail in Table 1.

S. Kundu et al., Improvisation of Locally Available Soil for Economical Foundation, Global Journal on Advancement in Engineering and Science, 2(1), March 2016, pp

Table 1: Physical properties of soil
Properties of soil           Values
Liquid limit                 29.25%
Plastic limit (Wp)
Flow index (If)
Plasticity index (Ip)        4.34
Liquidity index (Il)         0.98
Consistency index (Ic)       0.98
Toughness index (IT)         0.21
Shrinkage limit (Ws)
Shrinkage ratio (SR)
Volumetric shrinkage (Vs)    0.24

The grain size distribution of the soil by the sieve analysis method is shown in Figure 1.

Fig. 1: Grain size distribution curve of soil by sieve analysis

From Figure 1, the particle sizes D60, D30 and D10 are found to be 1.35, 0.52 and 0.2 respectively, and this soil exhibits a uniformly graded character. After the sieve analysis, a hydrometer analysis was carried out on the residue particles which pass through the 75 µ sieve; the grain size distribution of the 75 µ passing soil by hydrometer is shown in Figure 2.

Fig. 2: Grain size distribution of soil by hydrometer process

To enhance the quality of the soil, viz. soil strength and resistance to surcharge, we have used fly ash and lime. The fly ash was collected from Titagarh Thermal Power Plant. Here the fly ash is considered a sandy silt and it is used in dried condition. Its grain size distribution curve is shown in Fig. 4.
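From the D-values read off Figure 1 (D60 = 1.35, D30 = 0.52, D10 = 0.2), the usual gradation indices can be computed as a quick check. A minimal sketch using the standard textbook formulas (the function name is our own; the units are as read from the curve):

```python
def gradation_indices(d60, d30, d10):
    """Standard gradation indices from a grain size distribution curve."""
    cu = d60 / d10              # uniformity coefficient, Cu = D60/D10
    cc = d30**2 / (d10 * d60)   # coefficient of curvature, Cc = D30^2/(D10*D60)
    return cu, cc

cu, cc = gradation_indices(1.35, 0.52, 0.2)  # values read from Figure 1
print(round(cu, 2), round(cc, 2))  # Cu = 6.75, Cc = 1.0
```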

Fig. 4: Grain size distribution of fly ash by sieve analysis

All the tests were conducted in our laboratory at Brainware Group of Institutions. The other essential material is lime, which is readily available in the market. It is kept in water for 24 hours to make it hydrated.

Results and Discussion
Some results were obtained from the direct shear test and the Standard Proctor test. The values of c and φ obtained by varying the proportions of lime and fly ash are used to design an embankment with a higher factor of safety than that designed with the c and φ of the normal soil. The paper concludes with a summary of the comparison between the OMCs, c and φ, and factor of safety of the soil samples, also mentioning how economical this is compared with the normal soil. The tests involved in this project are:
a) Standard Proctor test
b) Direct shear test

a. The Standard Proctor test was done to determine the optimum moisture content of the soil. For OMC determination, lime and fly ash were mixed with the normal soil. First we determined the OMC of the normal soil, then of a mix of 95% soil with 1% lime and 4% fly ash, and further of a mix of 95% soil with 2% lime and 3% fly ash; the results are shown in Fig. 5.

Fig. 5: Compaction curve for all soil types

Results of optimum moisture content for the various soil proportions are tabulated in Table 2.

Table 2: OMC of soil mixed with various percentages of fly ash and lime
Soil sample       OMC
Normal soil       15.7%
Soil + 1:4 LFA    16.5%
Soil + 2:3 LFA    17.5%
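Each point on a Proctor compaction curve like Fig. 5 is the dry density computed from the measured bulk density and water content, and the OMC is the water content at the peak. A minimal sketch using the standard relation ρd = ρ/(1 + w); the trial points below are illustrative, not the paper's raw data, though they are chosen to peak at the reported 15.7%:

```python
def dry_density(bulk_density, water_content):
    """Dry density from bulk density and water content (decimal fraction)."""
    return bulk_density / (1.0 + water_content)

# Illustrative compaction trials: (bulk density in g/cc, water content fraction)
trial_points = [(1.85, 0.10), (1.98, 0.14), (2.02, 0.157), (1.99, 0.18)]
curve = [(w, dry_density(rho, w)) for rho, w in trial_points]
# The OMC is the water content at which the dry density peaks
omc = max(curve, key=lambda p: p[1])[0]
print(omc)  # 0.157, i.e. 15.7%
```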

So, we can see that the addition of lime and fly ash tends to increase the optimum moisture content (OMC), while the maximum dry density (MDD) decreases with the increase in lime content. This change is considered an indication of the stabilization of the soil. The reduction in dry density occurs because the agglomerated and flocculated soil particles occupy larger spaces, and the reason for the increasing OMC is that the lime requires more water for the pozzolanic reactions.

b. For determination of direct shear strength: the direct shear test was carried out at different normal loads on the mixed soil samples, and the results are given below.

Fig. 6: Shear stress vs. shear strain curve for the various soils, for a normal load of 0.5 kg/cm2
Fig. 7: Shear stress vs. shear strain curve for the various soils, for a normal load of 1 kg/cm2

From the above graphs we can see that there is an increase in shear stress at the failure points with the increase in lime content in the sample. When lime comes into contact with a substance containing soluble silicates and aluminates (such as clay and silt), it forms hydrated calcium aluminates and calcium silicates, and this gives rise to a pozzolanic reaction. This bonding process brings about improved resistance to frost and a distinct increase in the soil's shearing strength. The cohesion and angle of internal friction of the soil change on mixing with different percentages of the lime and fly ash ratio. For the normal soil, the cohesion and friction angle values are shown in Figure 8. Then a 1:4 ratio of lime and fly ash was mixed with the normal soil, and the shear strength parameter values are shown in Figure 9; next a 2:3 ratio of lime and fly ash was mixed with the normal soil, and the shear strength values are shown in Figure 10.
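The c and φ values plotted in Figures 8 to 10 correspond to fitting the Mohr-Coulomb failure line τ = c + σ·tan φ to the (normal stress, failure shear stress) pairs from the direct shear tests. A minimal sketch of that fit (the two data points are illustrative, not the paper's measurements, though they roughly reproduce the normal-soil values of Table 3):

```python
import math

def mohr_coulomb_fit(points):
    """Least-squares fit of tau = c + sigma*tan(phi) to (sigma, tau) pairs.
    Returns cohesion c (units of tau) and friction angle phi in degrees."""
    n = len(points)
    sx = sum(s for s, _ in points)
    sy = sum(t for _, t in points)
    sxx = sum(s * s for s, _ in points)
    sxy = sum(s * t for s, t in points)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)  # tan(phi)
    c = (sy - slope * sx) / n                          # intercept = cohesion
    return c, math.degrees(math.atan(slope))

# Illustrative failure points: (normal stress, shear stress) in kg/cm2
c, phi = mohr_coulomb_fit([(0.5, 0.215), (1.0, 0.33)])
print(round(c, 3), round(phi, 1))  # 0.1 13.0
```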
For the cases of Figure 9 and Figure 10 we took 95% soil and a 5% lime and fly ash admixture. For normal soil:

Fig. 8: Normal stress vs. shear stress curve of normal soil

For soil with mix 1:4 LFA:
Fig. 9: Normal stress vs. shear stress curve for soil of mix 1:4 LFA

For soil with mix 2:3 LFA:
Fig. 10: Normal stress vs. shear stress curve for soil of mix 2:3 LFA

The shear strength parameter values of the given soil in the normal condition, and with the lime and fly ash mixture added in various percentages, are tabulated in Table 3 and shown in bar chart form.

Table 3: Change of soil shear strength parameters on mixing various percentages of the lime and fly ash ratio

Soil sample    Lime    Fly ash    Cohesion ×10-3    Friction angle
100%           0%      0%         99                13°
95%            1%      4%
95%            2%      3%

Change of shear strength parameters with various percentages of the lime and fly ash ratio:

Fig. 11: Increment of shear strength parameters using lime and fly ash

From the table and bar chart it is clearly shown that the c and φ values of the sample gradually increase with the increase in the lime and fly ash content of the soil sample. From these values it is safe to say that the soil strength also gradually increases with the increase in the lime and fly ash ratio, and that such soil is more effective or suitable for foundation, pavement, road or embankment design purposes than the normal soil.

IV. Conclusion
1. Lime and fly ash are used as excellent soil stabilizing materials for highly active soils which undergo frequent expansion and shrinkage.
2. Lime and fly ash act immediately and improve various properties of the soil, such as the bearing capacity, resistance to shrinkage during moist conditions, increase in OMC value and subsequent increase in compression resistance with the increase in time.
3. The reaction is very quick and stabilization of the soil starts within a few hours.
4. The graphs presented above give a clear idea about the improvement in the properties of the soil after adding lime and fly ash.
5. The factor of safety of a typical finite slope increases with the addition of lime, hence making it more stable.

V. Future Scope of Studies
The experiment which we have done is a short-term procedure. Before the experiment the lime was kept in water only for 24 hours, but the experiment may be done with an increased hydration time of the lime; increasing the hydration time gives more strength to the soil. This experiment can further proceed considering the following techniques. The effect of water is not considered here.
The strength behaviour of the soil may be checked considering surrounding water. Here we used a 5% admixture (1% lime & 4% fly ash, or 2% lime & 3% fly ash) along with 95% soil; the proportions can be altered on the basis of experimental results. We may use a 10% admixture with 90% soil, and so on. Besides, we may use lime or fly ash individually with the soil and check what changes occur.


Special Issue: Conference Proceeding of i-con-2016
Global Journal on Advancement in Engineering and Science (GJAES) Vol. 2, Issue 1: March-2016, ISSN (Print):
Original Research Work

Effect of Porosity of Alumina Wheel in improving Grinding Performance
Sujit Majumdar 1, Ahin Banerjee 2, Santanu Das 3, Samik Chakroborty 4, Debasish Roy 5
1 Mechanical Engineering Department, Global Institute of Management & Technology, Krishnanagar, Nadia, India, sujitmajumdar2010@gmail.com
2 Mechanical Engineering Department, Indian Institute of Technology, Varanasi, India, ahin49banerjee@gmail.com
3 Mechanical Engineering Department, Kalyani Government Engineering College, Kalyani, India, sdas.me@gmail.com
4 Indian Maritime University, Kolkata, India, chakrasamik@gmail.com
5 Mechanical Engineering Department, Jadavpur University, Kolkata, India, debasish_kr@yahoo.co.in

Abstract: In this paper, the effect of the porosity and roughness of the cutting face of an alumina grinding wheel on its grinding performance is reported. An attempt is made to quantify the air pressure and boundary layer thickness due to the presence of porosity and roughness. A rexine-pasted wheel is used to compare the characteristics of the air boundary layer and the grinding performance with those of the plain alumina grinding wheel. The roughness of the porous grinding wheel is found to increase the boundary layer thickness and the air pressure around it. It is also found to give poor performance in wet grinding. A suitable method has been suggested to suppress this air pressure and thereby improve grinding performance.

Key words: Boundary air-layer, porosity, rexine-pasted wheel, grinding performance.

I. Introduction
The process of grinding is associated with high friction between the grits of the wheel and the work-piece. This leads to high heat generation and consequently burn of the work surface, metallographic change of the work material, wheel loading, etc. [1].
Conventional wet grinding has proved to be less effective in removing such thermal-related problems, as the presence of air around the wheel prevents the liquid jet from successfully entering the grinding zone. Guo and Malkin have seen [2] that only 5-30% of the liquid can enter effectively into the wheel-work interface. As a result, grinding suffers from the setback of high friction and heat generation. Therefore, the entry of grinding fluid into the grinding zone essentially needs to be increased, by which the friction in grinding can be controlled. The present paper aims at improving the grinding performance by minimizing the detrimental problem of high friction between wheel and work-piece. Akiyama et al. have observed [3] that grinding performance can be improved with conventional flood cooling when used with a barrier. Morgan has suggested [4, 5] a coherent nozzle design to improve the fluid delivery into the grinding zone.
Apart from these, the use of a scraper board against the grinding wheel in conventional flood cooling can reduce the friction between wheel and work-piece [6]. The use of baffle plates can reduce the effect of the rotating air around the wheel by 35 to 63% [7]. The use of a coolant shoe by Ramesh and others has been found to improve grinding forces, power flux, surface integrity, wheel wear and the microstructure of the ground material [8]. The air boundary layer around a grinding wheel grows with the increase of wheel speed [9]. This boundary layer gradually weakens along the radially outward direction of the wheel [2,3,6,7,10,11]. This paper seeks to show the significance of the pores and roughness of the grinding wheel in increasing the boundary layer pressure. The change in turbulence in the air boundary layer with the presence and absence of pores is also observed. The effect of the pores on grinding performance is also examined experimentally.

S. Majumdar et al., Effect of Porosity of Alumina Wheel in improving Grinding Performance, Global Journal on Advancement in Engineering and Science, 2(1), March 2016, pp

II. Experimental setup
Details of the experimental set-up used are given in tabulated form in Table 1.

Table 1: Details of the experimental set-up
Machine tool: Surface grinding machine. Make: HMT Limited, India, Model 452P. Infeed resolution: 1 µm. Speed: 2880 RPM. Main motor power: 0.18 kW
Wheel: 1. Disc-type alumina grinding wheel; 2. Rexine-pasted grinding wheel. Make: Carborandum Univ. Ltd., India. Specification: AA 46/54 K5 V8. Size: ф ф31.75 mm
Work piece: Low alloy steel. Hardness: 339 BHN
Measuring instruments: Prandtl-type pitot tube (Make: Mitutoyo, Japan); U-tube manometer (manometric fluid: water); 3-axis force dynamometer (Make: Sushma Industries Ltd., Bengaluru, India; Model: SA116; Range: 100 g-100 kg; Resolution: 10 g); load indicator (Make: Sushma Industries Ltd., Bengaluru, India; Model: SA115A)
Experimental conditions: Up grinding. Surface velocity: 30 m/s. Infeed: 20 micron. Table feed: 258 mm/min

III. Results and discussion
As the grinding wheel rotates, the air around it also rotates; the air layer around it has to satisfy the no-slip condition. The centrifugal effect causes air to leave the wheel in the radial direction. This happens to all types of discs which rotate within fluid media. As the wheel rotates, air from the sides also rushes towards the two side faces of the wheel (Fig. 1). This happens due to the formation of a low-pressure zone near the side faces. But the grinding wheel being porous, air may also enter through the wheel pores. To prevent the suction of air through its side faces, rexine is pasted on both faces of the wheel.
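The turbulence comparison made later rests on the Reynolds number of the air dragged around by the wheel periphery. A rough order-of-magnitude estimate can be formed from the surface velocity of Table 1; this is only a hedged sketch, since the wheel radius and air properties below are assumed illustrative values, not figures from the paper:

```python
# Rough Reynolds number of the air layer dragged by the wheel periphery,
# using Re = V * R / nu for a disc rotating in still air.
V = 30.0      # peripheral (surface) velocity, m/s, from Table 1
R = 0.1       # assumed wheel radius, m (illustrative, not from the source)
nu = 1.5e-5   # kinematic viscosity of air at room temperature, m^2/s

Re = V * R / nu
print(f"{Re:.2e}")  # order of 10^5, i.e. well into the turbulent regime
```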
Fig. 1: Phenomenon describing the rushing of air towards the side faces of the wheel when it rotates in a fluid medium [12]

IV. Measurement of air pressure
The results obtained for the manometric pressure of the rotating air and the grinding forces with the two different types of wheel are given below. The rotation of air along with the wheel creates a ring of air that always remains with the wheel. As the peripheral velocity of any circular motion is tangential, the pitot tube employed to measure the air pressure is kept tangential to the wheel, as shown in Fig. 2. As the diameter of the pitot tube was 6.32 mm, its axis is maintained 3.5 mm away from the surface of the wheel to avoid it being ground. The axial suction of air is found to be non-measurable by the pitot tube; however, measuring this axial suction is important in solving the problem of issuing grinding fluid into the grinding zone. Measurements are made across the wheel thickness at 2 mm intervals from one side face of the wheel to the other, these positions of the pitot tube being changed by moving the table across the wheel thickness. Three readings are taken at each position of the pitot tube. There are 11 such positions across the 20 mm wheel width. The averages of the measured pressures are considered for the present investigation. Fig. 3 shows the air pressure around the grinding wheel and the rexine-pasted grinding wheel. The average pressure of air around the grinding wheel is found to be 305 Pa. The rexine-pasted grinding wheel is found to give a lower air pressure of 213 Pa. This may be due to the fact that the suction of air through the side faces of the wheel is restricted by pasting the impermeable rexine, so it cannot reinforce the rotating air around the wheel; in this work, around a 30% reduction of air pressure is observed. Even the degree of turbulence of the rotating air around the wheel decreases with the rexine-pasted wheel. Fig. 4 shows the Reynolds number for both types of wheel used in the experiment. The use of rexine has reduced the exposed pores through which the suction of air can take place. Consequently, the quantity of axially sucked air is reduced, which in turn reduces the turbulence.

Fig. 2: Pitot tube to measure air pressure around the rexine-pasted wheel
Fig. 3: Comparison of air pressure built up around the two types of wheel
Fig. 4: Comparison of Reynolds number of rotating air for the two types of wheel
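The manometric readings above are dynamic pressures, so the tangential air velocity implied by each can be recovered from the standard pitot relation Δp = ½ρv². This is a textbook conversion, not a calculation reported in the paper, and the air density is an assumed room-temperature value:

```python
import math

def pitot_velocity(delta_p, rho_air=1.2):
    """Air velocity from pitot dynamic pressure: v = sqrt(2*dp/rho).
    rho_air is an assumed room-temperature density in kg/m^3."""
    return math.sqrt(2.0 * delta_p / rho_air)

# Average pressures reported: 305 Pa (plain wheel), 213 Pa (rexine-pasted)
for dp in (305.0, 213.0):
    print(round(pitot_velocity(dp), 1))  # roughly 22.5 and 18.8 m/s
```

The implied velocities are comfortably below the 30 m/s wheel surface speed, which is consistent with the pitot axis sitting 3.5 mm off the wheel surface.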

V. Grindability Test
Grinding operations are performed to check the grinding performance with the rexine-pasted wheel. Fig. 5 presents the normal force in dry grinding, wet grinding and wet grinding with the rexine-pasted wheel at the 10th pass.

Fig. 5: Observation of the normal force component under varying environments

A low value of the normal component of the grinding force is observed when the low alloy steel specimen is ground with the rexine-pasted wheel under the conventional grinding fluid application system. The maximum force is observed in the case of wet grinding with the plain grinding wheel. As the obstructing air layer around the wheel becomes thinner, the turbulence of the air around it also decreases after covering the grinding wheel with rexine cloth. Consequently, more coolant may be entering the wheel-work interface, which may reduce the friction between the grits of the wheel and the work piece and cool the area, thereby reducing the grinding force.

VI. Conclusion
From the experimental investigation carried out, the following conclusions may be made.
1. There is a significant effect of the air layer covering a rotating grinding wheel in restricting the supply of grinding fluid into the grinding zone.
2. The rexine-pasted wheel tends to suppress the suction of air through the side faces of the grinding wheel, thereby reducing the air layer pressure. This, in turn, facilitates the supply of a greater quantity of grinding fluid into the grinding zone, reducing the grinding temperature and the grinding force requirement.

References:
1. S. Malkin, Grinding Technology: Theory and Application of Machining with Abrasives, Ellis Horwood, Chichester, U.K.
2. C. Guo and S. Malkin, Analysis of fluid flow through the grinding zone, ASME Journal of Engineering for Industry, vol. 104, 1992.
3. T. Akiyama, J. Shibata and S. Yonetsu, Behaviour of grinding fluid in the gap of the contact area between a grinding wheel and a workpiece, Proc. of 5th Int. Conf. on Prod. Engg., 1984.
4. M. N. Morgan and V. Baines-Jones, On the coherent length of fluid nozzles in grinding, Special Topic Volume: Progress in Abrasive and Grinding Technology, Trans Tech Publications, Switzerland, KEM.
5. M. N. Morgan, A. R. Jackson, H. Wu, V. Baines-Jones, A. Batako and W. B. Rowe, Optimisation of fluid application in grinding, CIRP Annals - Manufacturing Technology, vol. 57, 2008.
6. B. Mandal, S. Majumder, S. Das and S. Banerjee, Formation of a significantly less stiff air layer around a grinding wheel pasted with rexine leather, Int. J. Precision Tech., vol. 2, no. 1, 2011.
7. E. Catai, L. R. D. Silva, E. C. Bianchi, P. R. D. Aguiar, F. M. Zílio, I. D. D. Valarelli and M. H. Salgado, Performance of aerodynamic baffles in cylindrical grinding analyzed on the basis of air layer pressure and speed, J. of the Braz. Soc. of Mech. Sci. & Eng., vol. XXX, no. 1, 2008.
8. K. Ramesh, S. H. Yeo, Z. W. Zhong and K. C. Sim, Coolant shoe development for high efficiency grinding, Journal of Materials Processing Technology, vol. 114, issue 3, August 2001.
9. S. Majumdar, B. Mondal, S. Das and S. Chakroborty, Modeling air layer pressure around a rotating grinding wheel, Global Journal on Advancement in Engineering and Science (GJAES), vol. 1, issue 1, 2015.
10. B. Mandal, S. Majumdar, S. Das and S. Banerjee, Predictive modeling and investigation on the formation of stiff air layer around the grinding wheel, Advanced Materials Research: Advances in Materials and Processing Technologies, 2010.
11. B. Mandal, R. Singh, S. Das and S. Banerjee, Improving grinding performance by controlling air flow around a grinding wheel, International Journal of Machine Tools & Manufacture, vol. 51, 2011.

Special Issue: Conference Proceeding of i-con-2016
Global Journal on Advancement in Engineering and Science (GJAES) Vol. 2, Issue 1: March-2016, ISSN (Print):
Original Research Work

To Study the Impact of Temperature Boundary Conditions for Overall Heat Transfer Coefficient Measurement Suitable for Adaptation in Tropical Climate for Energy Efficient Building
Debrudra Mitra 1, Subhasis Neogi 2
School of Energy Studies, Jadavpur University, India
debrudramitra@yahoo.com 1, 2

Abstract: The building sector is one of the major energy consumers in most of the developed countries. To design energy efficient building systems, the heat transfer through the building components needs to be calculated, which depends on the overall heat transfer coefficient, or U value. A Guarded Hot Box test facility is used to determine the U value for a specified temperature difference. Due to the wide range of weather conditions, more than one set-point temperature is required to measure the U value. Temperature differences of 10 °C and 20 °C are required to determine the U value of a building component in the Guarded Hot Box test facility.

Keywords: U value, Guarded Hot Box, temperature difference, weather condition

I. Introduction
Buildings' energy consumption accounts for 20-40% of total energy use in developed countries, more than the industry and transport sectors in the European Union and the United States of America [L. Pérez-Lombard et al, 2008]. In India, the building sector consumes 33% of the total energy produced nationally [S. Kumar et al, 2010]. Therefore, it is important to develop techniques that can reduce building energy consumption. To design an energy efficient building system, the heat flow through the building components should be minimised. For doing this, the thermal performance of any building material should be evaluated. The amount of heat transfer through a component depends on the overall heat transfer coefficient (U value) of that component.
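The quantity the hot box measures reduces to U = Q / (A·ΔT): the heat flow per unit area per unit temperature difference across the specimen. A minimal sketch of the reduction (the numbers below are illustrative, not measurements from this study):

```python
def u_value(heat_flow_w, area_m2, t_hot, t_cold):
    """Overall heat transfer coefficient, U = Q / (A * dT), in W/m^2.K."""
    return heat_flow_w / (area_m2 * (t_hot - t_cold))

# Illustrative hot-box reading: 84 W through a 1.5 m x 1.5 m sample
# with hot-side air at 38 degC and cold-side air at 18 degC (dT = 20 K)
u = u_value(84.0, 1.5 * 1.5, 38.0, 18.0)
print(round(u, 2))  # 1.87 W/m^2.K
```

The dependence on ΔT in the denominator is why the choice of set-point temperature difference, the subject of this paper, matters for reporting a representative U value.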
Heat transfer through a building component increases as the U value of the material increases. The U value of a building material can be determined by the Guarded Hot Box Test Method [BS 874: Part 3: Section 3.1: 1987]. The method described in the International Standards measures the total amount of heat transferred from one side of the specimen to the other for a given temperature difference. The U value depends upon the temperature difference between the hot and cold sides of the sample [Xiande Fang, 2001]: as the temperature difference across the sample increases, the U value also increases. So, for calculating the U value it is important to specify the proper temperature difference across the sample [BS EN ISO 8990:1996]. The British standard [BS 874: Part 3: Section 3.1: 1987] recommends maintaining a minimum difference of 20 °C between the air temperatures on the hot and cold sides for obtaining accurate thermal transmittance. The American standard ASTM C1058 recommends that the thermal properties of insulation materials be evaluated over a mean temperature range that represents the intended end use; for this, the lowest and greatest mean temperatures need to be within 10 °C of the minimum and maximum mean temperatures of interest. For building envelopes in moderate climates with an anticipated exterior temperature range of 0 °C to 50 °C (30 °F to 120 °F), the recommended mean temperatures are 4 °C, 24 °C and 43 °C (40 °F, 75 °F and 110 °F). The defined mean temperatures for evaluating thermal properties of building envelopes are -4 °C, 4 °C, 10 °C, 24 °C, 38 °C and 43 °C, i.e. 25 °F, 40 °F, 50 °F, 75 °F, 100 °F and 110 °F [ASTM C1058]. However, there are no specified set point temperatures for U value measurement in the Guarded Hot Box Test Facility for tropical countries like India. The objective of this study is to define the temperature differences across the test sample for Indian climatic conditions.
II. British and American Set Point Temperatures
The United Kingdom consists of England, Northern Ireland, Scotland and Wales. In England, summer temperatures range from 20.9 °C down to 11.7 °C (July) and winter temperatures from 7.2 °C down to 1.1 °C (February). The yearly maximum and minimum temperatures are 13.5 °C and 5.9 °C respectively. In Northern Ireland, summer temperatures range from 18.4 °C to 10.6 °C (July) and winter temperatures from 6.7 °C to 1.2 °C (February). The yearly maximum and minimum temperatures are 12.2 °C and 5.2 °C respectively. In Scotland, summer temperatures range from 16.9 °C to 9.3 °C (July) and winter temperatures from 5.0 °C to -0.2 °C (January). The yearly maximum and minimum temperatures are 10.5 °C and 4.0 °C respectively. In Wales, summer temperatures range from 19.1 °C to 10.9 °C (July) and winter temperatures from 6.6 °C to 1.1 °C (February). The yearly maximum and minimum temperatures are 12.3 °C

and 5.5 °C respectively. Relative humidity in the UK varies between 70% and 90%. So the minimum average ambient temperature is found to be around 5 °C (4.9 °C).

Figure 1. (a) Mean summer average temperature and (b) mean winter average temperature of the United Kingdom.

For thermally comfortable conditions inside a room, the temperature is required to be maintained within 22 °C to 27 °C, and relative humidity must be within 40% to 60% [BS EN ISO 7730:2005]. So, sensible heating of the ambient air by around 20 °C is required to maintain comfortable indoor conditions. This is probably the reason a temperature difference of at least 20 °C across the test sample is specified in the BS 874: Part 3: Section 3.1: 1987 standard.

Figure 2. Mean annual temperature in the United States.

In the USA, the maximum temperature occurs in July and is around 95 °F (35 °C), with relative humidity around 55% to 65%. February is the coldest month, with temperatures around 0 °F (-17.8 °C) [Guide to Determine Climate Regions by County, 2010]. The average temperature in the USA in summer is around 80 °F (26.7 °C), whereas it is 0 °F (-17.8 °C) in the winter months. In winter, temperatures between 67 °F (19.5 °C) and 75 °F (24 °C) and relative humidity within 50% to 80% create comfortable indoor conditions in the USA. In summer, the comfort zone is bounded by temperatures of 74 °F

(23.3 °C) to 81 °F (27.2 °C) and 50% to 80% relative humidity. So, in the USA, cooling and dehumidification of the ambient air is needed to maintain comfortable summer conditions, whereas heating and humidification of the ambient air is required to maintain comfortable winter conditions. In the summer months, ambient air at an average of 80 °F (26.7 °C) and 60% relative humidity has to be conditioned to around 70 °F (21.1 °C) and 60% relative humidity for comfortable indoor conditions, so the mean temperature between the ambient air and the indoor comfort condition is around 75 °F (24 °C). Similarly, in the winter months the ambient air temperature is around 0 °F (-17.8 °C), so the mean temperature between the ambient air and the comfort condition is around 40 °F (4.5 °C).

III. Indian Weather Condition
The climate of India comprises a wide range of weather conditions across a vast geographic scale and varied topography, making generalisation difficult. India hosts five major climatic subtypes, ranging from arid desert in the west, to alpine tundra and glaciers in the north, to humid tropical regions supporting rainforests in the south-west and the island territories. Many regions have starkly different microclimates.

Figure 3. Climate zones of India: Hot and Dry, Warm and Humid, Moderate, Cold, and Composite.

Thus the major weather zones, according to their corresponding conditions, are: 1. Hot & Dry climate zone, 2. Warm & Humid climate zone, 3. Moderate climate zone, 4. Cold climate zone and 5. Composite climate zone [Energy Conservation Building Code (ECBC) User Guide, 2009].
1. The Hot & Dry climate includes places in western and central India such as Jodhpur, Jaisalmer and Sholapur. In summer, the maximum temperature is 40 °C to 45 °C during the daytime and 20 °C to 30 °C at night. In winter, the temperature is 5 °C to 25 °C during the daytime and 0 °C to 10 °C at night. Relative humidity is around 55%.
2. The Warm & Humid climate includes the coastal parts of India, such as Mumbai, Chennai and Kolkata. In summer, the maximum temperature is 30 °C to 35 °C during the daytime and 25 °C to 30 °C at night. In winter, the temperature is 25 °C to 30 °C during the daytime and 20 °C to 25 °C at night. Relative humidity is very high in this climatic region.
3. The Moderate climate includes cities like Pune and Bangalore. In summer, the maximum temperature is 30 °C to 34 °C during the daytime and 17 °C to 24 °C at night. In winter, the temperature is 27 °C to 33 °C during the daytime and 16 °C to 18 °C at night.

4. The Cold climate zone includes the northern part of India, that is, regions situated at high altitudes, like Shimla, Shillong and Srinagar. In summer, the maximum temperature is 20 °C to 30 °C during the daytime and 17 °C to 27 °C at night. In winter, the temperature is 4 °C to 8 °C during the daytime and -3 °C to 4 °C at night. Relative humidity is generally around 70% to 80%.
5. The Composite climate includes mainly the central part of India, with cities like Delhi, Kanpur and Allahabad. In summer, the maximum temperature is 32 °C to 43 °C during the daytime and 27 °C to 32 °C at night. In winter, the temperature is 10 °C to 25 °C during the daytime and 4 °C to 10 °C at night. Relative humidity is around 20% to 25% in the dry period and 55% to 95% in the wet period.

In India, the indoor thermal comfort condition ranges from 23 °C to 30 °C in temperature, with relative humidity around 40% to 60%. So, in India, the amount of heat flow through building materials needs to be measured for the different climatic conditions.

IV. Proposed Indian Set Point Temperatures
So, in India, different set point temperatures are needed for the different climate zones when testing and evaluating the thermal properties of building materials in the Guarded Hot Box U value test facility.
1. For the Hot and Dry climate, the maximum temperature is around 45 °C during the daytime in summer and around 5 °C during the night in winter. As the thermal comfort zone for Indian climatic conditions lies between 23 °C and 30 °C, cooling and dehumidification are required during summer, and heating and humidification during winter, to maintain comfortable indoor conditions.
During summer, the maximum temperature difference between the ambient air and the indoor air is around 20 °C (45 °C - 25 °C), and during winter the maximum temperature difference is also around 20 °C (25 °C - 5 °C). A temperature difference of 20 °C across the test sample is therefore required.
2. For the Warm and Humid climate, the maximum temperature rises to 35 °C. Cooling and dehumidification are required to maintain indoor thermal comfort. In the winter months, the ambient air only requires dehumidification to maintain thermal comfort. So, for testing under this type of weather condition, a temperature difference of 10 °C (35 °C - 25 °C = 10 °C) across the test sample is adequate.
3. The scenario for the Moderate climate zone is almost similar to the Warm and Humid climate condition, so cooling and dehumidification are required during summer and heating and humidification in winter. The average temperature difference between the ambient air and the indoor air is 10 °C in the summer months (35 °C - 25 °C = 10 °C) and 10 °C in the winter months (25 °C - 15 °C = 10 °C). So a temperature difference of 10 °C across the test sample is adequate for this climate zone as well.
4. In the Cold climate, the daytime temperature is around 17 °C but the night-time temperature is around 0 °C. During summer, only sensible heating of around 8 °C (25 °C - 17 °C = 8 °C) is required, but in winter heating and humidification are required. The average temperature difference between the outdoor and indoor air conditions in winter is around 20 °C. So a temperature difference of 20 °C is required for testing building materials.
5. In the Composite weather condition, the maximum temperature is around 43 °C and the minimum temperature is around 5 °C. In summer cooling and dehumidification, and in winter heating and humidification, are required. The maximum temperature difference is 43 °C - 25 °C = 18 °C in summer and 25 °C - 5 °C = 20 °C in winter. So a temperature difference of 20 °C across the test sample is required.
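The zone-wise set points proposed above can be collected into a small lookup table. This is only a sketch that summarises the text; the dictionary and function names are chosen for illustration.

```python
# Proposed Guarded Hot Box test temperature differences (degrees C)
# for the five ECBC climate zones discussed above.
PROPOSED_DELTA_T = {
    "Hot & Dry": 20,
    "Warm & Humid": 10,
    "Moderate": 10,
    "Cold": 20,
    "Composite": 20,
}

def required_delta_t(zone):
    """Return the proposed temperature difference across the test sample."""
    return PROPOSED_DELTA_T[zone]

print(required_delta_t("Warm & Humid"))  # 10
print(required_delta_t("Composite"))     # 20
```

A laboratory could use such a table to pick the hot- and cold-side set points for a given test campaign.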
V. Conclusion
To design energy efficient building systems, heat flow through the building components has to be minimised, and for that reason the U value of the building material needs to be calculated. The temperature difference across the sample is an important criterion in determining the U value of building components using the Guarded Hot Box Test Facility. In both the British and the American standards, the temperature difference was specified based on their climatic conditions and indoor thermal comfort conditions. Various factors affect Indian weather and have created regional diversity in climatic conditions; based on these, the Indian climate can be divided into five major weather zones. To accommodate this large variation in weather conditions, more than one set point temperature difference across the test sample must be defined. For the Hot and Dry, Cold and Composite climatic conditions a temperature difference of 20 °C is required, whereas a temperature difference of 10 °C is required for the Warm and Humid and Moderate climatic conditions, for measuring the U value using the Guarded Hot Box Test Facility.

VI. References
[1] ASTM C1058, Standard Practice for Selecting Temperatures for Evaluating and Reporting Thermal Properties of Thermal Insulation.
[2] British Standard BS EN ISO 7730:2005, Ergonomics of the Thermal Environment: Analytical Determination and Interpretation of Thermal Comfort using Calculation of the PMV and PPD Indices and Local Thermal Comfort Criteria.
[3] British Standard BS EN ISO 8990:1996, Thermal Insulation: Determination of Steady-State Thermal Transmission Properties. Calibrated and Guarded Hot Box.
[4] British Standard BS 874: Part 3: Section 3.1: 1987, British Standard Methods for Determining Thermal Insulating Properties, Part 3: Tests for Thermal Transmittance and Conductance, Section 3.1: Guarded Hot-Box Method.
[5] Energy Conservation Building Code (ECBC) User Guide, July 2009.
[6] F. Asdrubali, M. Bonaut, M. Battisti, M. Venegas, Comparative study of energy regulations for buildings in Italy and Spain, Energy and Buildings 40 (2008).
[7] Guide to Determine Climate Regions by County, Building Technologies Program, U.S. Department of Energy, 2010.
[8] L. Pérez-Lombard, J. Ortiz, C. Pout, A review on buildings energy consumption information, Energy and Buildings 40 (3) (2008).
[9] S. Kumar, R. Kapoor, R. Rawal, S. Seth, A. Walia, Developing an Energy Conservation Building Code Implementation Strategy in India, Proceedings of the ACEEE Summer Study on Energy Efficiency in Buildings, Pacific Grove, CA, August.
[10] Xiande Fang, A study of the U-factor of a window with a cloth curtain, Applied Thermal Engineering 21 (2001).

Original Research Work

Consumer Behavior & Brand Preference towards Sonata Wrist Watches: A Study with Reference to Asansol City, West Bengal

Mayukh Thakur
Research Scholar, Ph.D. in Management, JIS University, Kolkata

Abstract: "Consumer is king": the statement carries profound truth in it. Today the success of any firm depends upon the satisfaction of its consumers, and to satisfy consumers the firm should know about their behavior. Understanding the consumer is a very difficult task because of changing technology, innovation and changes in lifestyle. Researchers have conducted many studies in this area, and they offer only a few suggestions; there is no final conclusion. With the inevitability of change looming large over the horizon, Indian companies must learn from their western counterparts, not only to identify the sources, timing and direction of the changes likely to affect India, but also the new competencies and perspectives that will enable them to respond to these changes comprehensively and effectively. Today human beings work with time: the various activities to be performed are generally prescribed on the basis of the time factor, so time is considered an important factor in every walk of life. Nowadays we find no person without a wrist watch and no home without a clock; watches have become almost a necessity for human beings, whichever economic class they belong to. This paper examines consumer behaviour and brand preference in selecting watches, with special reference to Sonata wrist watches in Asansol city, West Bengal.

Keywords: Consumer Behaviour, Brand Preference, Customer Satisfaction
I. Introduction
Consumer behavior is the behavior that consumers display in searching for, purchasing, using, evaluating and disposing of products, services and ideas that they expect will satisfy their needs. The study of consumer behavior is concerned not only with what consumers buy, but also with why, when, how and how often they buy it. It is concerned with learning the specific meanings that products hold for consumers. Consumer research takes place at every phase of the consumption process: before the purchase, during the purchase and after the purchase. According to Philip Kotler, consumer behavior covers all the psychological processes customers go through as they become aware of, evaluate, purchase, consume and tell others about products and services. The scope of consumer behavior includes not only the actual buyer and his act of buying but also the various roles played by different individuals and the influence they exert on the final purchase decision. Individual consumer behavior is influenced by economic, social, cultural, psychological and personal factors.

The growing economy and rising consumerism assert themselves in the Indian watch market as well. The market more than mirrors the radical transformation of consumer markets in India and the promising future held by the strong fundamentals of a robust economy. The Indian Watch Industry, a white paper prepared by the apex time-wear industry association in India, the All India Federation of Horological Industries (AIFHI), along with Technopak-Advisor, released in January this year, reveals that factors like a growing economy, increasing consumerism and favorable demographics hold phenomenal prospects for time-wear products in the Indian consumer markets. The watch and clock industry has been a market with great longevity throughout the years, as these timepieces have always been needed and in demand by consumers.
Though Switzerland is often touted as the global leader in the watch and clock industry, many other countries also produce these timepieces. Several Indian watch manufacturers have global ambitions in today's globalized, modernized, economically stable and strong country. In the 18th and 19th centuries the watch industry flourished only in the Western world, specifically Switzerland, but the second half of the 20th century saw India emerge as an important manufacturer of watches. Sonata, a TATA group brand, has created history in the Indian watch industry by manufacturing and marketing different brands of watches not only in the Indian market but also in the international market. This research study attempts to examine the changing perceptions and preferences of consumers towards wrist watches as the market readies to offer a plethora of opportunities to domestic and international marketers. It outlines consumer likings and purchase patterns as watch vendors evolve product and promotion strategies in dynamic market conditions. The research work is based in Asansol city, West Bengal; as one of the most vibrant markets in Eastern India, it stands out as extremely attractive for any goods and services provider.

II. Literature Review
Consumer behavior has always been of great interest to marketers. Knowledge of consumer behavior helps the marketer to understand how consumers think, feel and select from alternatives such as products and brands, and how consumers are influenced by their environment, reference groups, family, salespersons and so on, as well as by their personal and psychological factors. Most of these factors are uncontrollable and beyond the hands of marketers, but they have to be considered while trying to understand the complex behavior of consumers. In this study, the researcher emphasizes the importance of lifestyle and its impact on buyer behavior.

There are two factors mainly influencing consumers in decision making: risk aversion and innovativeness. Risk aversion is a measure of how much consumers need to be certain and sure of what they are purchasing (Donthu and Gilliland, 1996). Highly risk-averse consumers need to be very certain about what they are buying, whereas less risk-averse consumers can tolerate some risk and uncertainty in their purchases. The second variable, innovativeness, is a global measure which captures the degree to which consumers are willing to take chances and experiment with new ways of doing things (Donthu and Gilliland, 1996). The shopping motivation literature abounds with various measures of individual characteristics (e.g., innovative, venturesome, cosmopolitan, variety seeking); therefore, innovativeness and risk aversion were included in this study to capture several of these traits. Measures by Donthu and Gilliland (1996) were used to measure innovativeness and risk aversion.
Perception is a mental process whereby an individual selects data or information from the environment, organizes it and then draws significance or meaning from it. Product class knowledge is a measure of consumers' perceptions of how much they know about a specific class of products (e.g., cars). This type of measure is consistent with what Brucks (1985) called subjective knowledge, that is, consumers' self-perceptions of their knowledge levels. This is often contrasted with objective knowledge, which is what consumers actually know. Park and Lessig (1981) proposed that subjective knowledge provides a better understanding of consumers' decision-making processes, because it reflects consumers' level of confidence in their search and decision-making behavior, independent of their objective knowledge. Past research indicates that consumers' purchase and channel decisions might be influenced by the type of product being investigated (Cox and Rich 1964; Lumpkin and Hawes 1985; Morrison and Roberts 1998; Papadopoulos 1980; Prasad 1975; Sheth 1983; Thompson 1971). In particular, these authors state that certain products might be more appropriate for one channel or another, which ultimately influences consumers' channel preference and choice.

Packaging establishes a direct link with consumers at the point of purchase, as it can very well change the perceptions they have of a particular brand. A product has to draw the attention of consumers through an outstanding packaging design. Earlier, packaging was considered only a container to put a product in; today, research into the right packaging begins at the product development stage itself. Packaging innovation has been at the heart of this effort, and industry spends large sums annually on packaging research. "We have been laying emphasis on appeal and convenience for the consumer," says Deepak M., a senior market analyst. The greatest challenge faced by companies today is holding and increasing their market share and value.
This is always a strenuous exercise, and one of the tools for it is marketing. There is no specific game rule available for using these marketing tools; the reason is that each promotional tool has its own characteristics. Consumers' familiarity with a channel is a measure of how often they purchase products through specific channels (i.e. catalogue, internet, and bricks-and-mortar retailer). Through frequent use, consumers become accustomed to using the channel, which reduces their apprehension and anxiety in purchasing products through it. According to Rossiter and Percy (1987), brand awareness precedes all other steps in the buying process: a brand attitude cannot be formed unless a consumer is aware of the brand. In memory theory, brand awareness is positioned as a vital first step in building the bundle of associations which are attached to the brand in memory (Stokes, 1985). A family exerts a complex influence on the behaviors of its members. Prior family influence research has focused on inter-generational rather than intra-generational influence in consumer socialization. As has been compellingly demonstrated, parents influence children (Moore, Wilkie, and Lutz 2002; Moschis 1987). Yet consumption domains clearly exist where sibling influence may also be exerted. Shopping motives are defined as consumers' wants and needs as they relate to the outlets at which they shop. Two groups of motives, functional and non-functional, have been proposed by Sheth (1983). Functional motives are associated with time, place and possession needs and refer to rational aspects of channel choice, whereas non-functional motives, related to social and emotional factors, are reasons for patronage. The functional motives include convenience, price comparison and merchandise assortment.

3. Objectives Of The Study:
1. To know the market share of Sonata wrist watches.
2. To know the extent of satisfaction among Sonata wrist watch users.
3. To find the extent of brand loyalty among consumers.
4. To study the factors affecting buyer behavior.
5. To study consumer attitudes towards the pricing policies of Sonata wrist watches.
6. To study consumer attitudes towards the company's promotional activities.

7. To study buyer reactions to after-sales service.
8. To study the marketing strategy adopted by the dealers.
9. To analyze respondents' opinions statistically.
10. To determine consumer demographics.

III. Scope Of The Study:
This study covers the following:
1. The study covers consumer awareness of Sonata wrist watches.
2. The study covers the market share of Sonata wrist watches.
3. The study covers the reasons for buying Sonata wrist watches.
4. The study covers consumer attitudes towards the price of Sonata wrist watches.
5. The study covers the various marketing channels of Sonata wrist watches.
6. The study covers the various problems faced by the company and the dealer.
7. The study is restricted to Asansol City only.

IV. Research Methodology
Research methodology is the process of solving a problem systematically through research using available data. A research design is a detailed blueprint used to guide the research study towards its objectives. Descriptive research can be either quantitative or qualitative. It can involve the collection of quantitative information that can be tabulated along a continuum in numerical form, such as scores on a test or the number of times a person chooses to use a certain feature of a multimedia program, or it can describe categories of information such as gender or patterns of interaction when using technology in a group situation. Descriptive research involves gathering data that describe events and then organizing, tabulating, depicting and describing the data collection (Glass & Hopkins, 1984). It often uses visual aids such as graphs and charts to aid the reader in understanding the data distribution.
Because the human mind cannot extract the full import of a large mass of raw data, descriptive statistics are very important in reducing the data to manageable form. When in-depth, narrative descriptions of small numbers of cases are involved, the research uses description as a tool to organize data into patterns that emerge during analysis. Those patterns aid the mind in comprehending a qualitative study and its implications.

V. Data Collection
In this study both primary and secondary data have been used. The primary data have been collected through a consumer survey; discussions were carried out with the consumers personally with the help of a proper questionnaire. The secondary data have been collected from various published literature (such as textbooks, magazines and newspapers) and the internet. The information regarding the organization has been collected from reports and records provided by the dealers of Sonata wrist watches.

5. Sampling
In order to collect the primary data, 200 consumers were selected by the stratified random sampling method from Asansol city. A structured questionnaire was used to collect information from the sampled consumers contacted. Personal interviews were also held with the respondents to gather unbiased information. The observation method was also used to understand the real feelings of the respondents, so that the study becomes more realistic and viable.

VI. Data Analysis & Data Interpretation
A consumer survey is necessary in any form of market, as behaviour changes day by day. The selection of products by the consumer reflects faith in the products. The buyer's behavior changes accordingly; purchasing always depends on quality and price. The study of consumer satisfaction is necessary to know the opinions of different consumers so as to implement the most effective marketing policy of the firm. To conduct the consumer survey, the questionnaire method was used. The questionnaire is the most common research instrument.
A questionnaire is a set of questions with or without space for recording answers. The questions can secure relevant facts or opinions from informed and interested respondents included in the sample survey. In the following sections, the data obtained from the respondents are analyzed statistically. A convenience sampling technique was used for this survey, and the number of respondents chosen was 200 from Asansol city. The demographic factors pertaining to consumers are mainly age, education, occupation and income.
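The proportional allocation behind the stratified random sampling of the 200 respondents can be sketched as follows. The strata (city areas) and their population sizes are hypothetical, invented purely for illustration; they are not taken from the study.

```python
import random

# Hypothetical strata with assumed population sizes (not from the study)
strata_sizes = {"Area A": 5000, "Area B": 3000, "Area C": 2000}
total_sample = 200
population = sum(strata_sizes.values())

random.seed(0)  # fixed seed so the illustration is reproducible
sample = {}
for area, size in strata_sizes.items():
    # Allocate the sample in proportion to the stratum size...
    n = round(total_sample * size / population)
    # ...then draw n respondents at random within the stratum (IDs 0..size-1)
    sample[area] = random.sample(range(size), n)

print({area: len(ids) for area, ids in sample.items()})
# {'Area A': 100, 'Area B': 60, 'Area C': 40}
```

Proportional allocation keeps each stratum's share of the sample equal to its share of the population, which is what makes stratum-level percentages comparable.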

Table 1: Gender-wise classification of respondents

Sex       No. of respondents   Percentage
Male              160              80
Female             40              20
Total             200             100

As per the table, 80% of the respondents are male and 20% are female.

Table 2: Classification of respondents on the basis of age group

Age group (in years)   No. of respondents   Percentage
10 to 20                       40               20
20 to 30                      120               60
30 to 40                       20               10
Above 40                       20               10
Total                         200              100

The above table indicates that the majority of the respondents (60%) belong to the age group of 20 to 30 years, 20% belong to the age group of 10 to 20 years, and 10% each belong to the age groups of 30 to 40 years and above 40 years.

Table 3: Classification of respondents based on occupation

Occupation                   No. of respondents   Percentage
Student                             120               60
Businessmen / Professional           36               18
Government employee                  16                8
Others                               28               14
Total                               200              100

The above table reveals the occupation of the respondents. Of the total respondents, 60% are students, 18% are businessmen/professionals, 8% are government employees and the remaining 14% are others (like housewives, retired people, etc.).

Table 4: Classification of respondents on the basis of monthly income of family

Monthly income (in Rs.)   No. of respondents   Percentage
Below 5000                        80               40
5000 to                           28               14
 to                               72               36
Above                             20               10
Total                            200              100

This table classifies the respondents based on monthly income. Of the respondents, 40% belong to the income group below Rs. 5000, 36% to the income group of Rs. to Rs. , 14% to the income group of Rs. 5000 to Rs. , and 10% to the above Rs. group.

Table 5: Classification of respondents based on qualification

Qualification     No. of respondents   Percentage
Madhyamik/H.S.            20               10
Graduate                 120               60
Post-graduate              8                4
Others                    52               26
Total                    200              100

From the above table it can be seen that, of the respondents, 10% have studied up to Madhyamik or H.S. level, 60% are graduates, 4% are post-graduates and the remaining 26% are others (like P.U.C., Engineering).

Table 6: Classification of respondents by the brand of watch owned

Brand of watch   No. of respondents   Percentage
Sonata                   80               40
Titan                    48               24
HMT                      32               16
Fast-track               12                6
Others                   28               14
Total                   200              100

The table shows that 40% of respondents owned Sonata brand watches, 24% Titan, 16% HMT, 6% Fast-track, and the remaining 14% owned other brands like Citizen, Maxima, etc.

Table 7: Classification of respondents based on the source of information about Sonata watches

Source of information   No. of respondents   Percentage
Advertisement                   70               35
Relatives                       70               35
Friends                         60               30
Total                          200              100

The above table indicates the sources of information about Sonata wrist watches. Of the 200 respondents, 35% each came to know about the Sonata brand of wrist watches through advertisements and relatives, and 30% through their friends.

Table 8: Classification of respondents on the basis of plans to change their watch

Period (in years)   No. of respondents   Percentage
Less than 1 year            40               20
1 to 3                      68               34
3 to 5                      56               28
Above 5 years               36               18
Total                      200              100

As per the above table, 34% of the respondents want to change their watch within a period of 1 to 3 years, 28% in 3 to 5 years, 20% in less than a year, and 18% after 5 years.

Table 9: Classification of respondents on the basis of opinion about the performance of Sonata wrist watches

Opinion       No. of respondents   Percentage
Excellent             70               35
Good                  80               40
Satisfied             30               15
Unsatisfied           20               10
Total                200              100

The above table indicates the opinions of the respondents regarding the performance of Sonata wrist watches. Of the 200 respondents, 35% consider the performance excellent, 40% good, 15% are satisfied, and the remaining 10% are not satisfied with the performance of Sonata wrist watches.
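The percentage columns in the tables above follow from a simple tabulation over the 200 responses. A minimal sketch, with brand counts reconstructed from the percentages reported for brand ownership (the helper name is chosen for illustration):

```python
def percentage_distribution(counts):
    """Convert raw response counts into percentages of the total."""
    total = sum(counts.values())
    return {option: 100.0 * n / total for option, n in counts.items()}

# Brand ownership counts among the 200 respondents, reconstructed from
# the reported percentages (40%, 24%, 16%, 6%, 14% of 200)
brand_counts = {"Sonata": 80, "Titan": 48, "HMT": 32, "Fast-track": 12, "Others": 28}
print(percentage_distribution(brand_counts))
# {'Sonata': 40.0, 'Titan': 24.0, 'HMT': 16.0, 'Fast-track': 6.0, 'Others': 14.0}
```

The same helper reproduces the percentage column of any of the one-way tables above from its count column.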
Feedback of the Respondents Regarding After Sales Service Of Sonata wrist watch Of the respondents 90% of the satisfied of the after sales service of the dealer and remaining 10% feel dissatisfied with the after sales service of the dealers. S. Mandal (Editor), GJAES 2016 GJAES Page 126

VIII. Findings:
1. The majority of the respondents are male.
2. Most of the respondents belong to the age group of 20 to 30 years.
3. Most of the respondents are students by occupation.
4. Most of the respondents are graduates by qualification.
5. The majority of respondents owned Sonata brand wrist watches.
6. The majority of respondents came to know about the Sonata brand wrist watches through advertisements and relatives.
7. Most of the respondents plan to change their watch within a period of 3 years.
8. Most of the respondents wish for a price range of Rs. …
9. The majority of respondents feel that the price of Sonata wrist watches is high.
10. For most of the respondents, the mode of owning the watch was purchase.
11. The majority of respondents received a watch on the occasion of a birthday.
12. The majority of respondents have watches bought on a birthday.
13. Most of the respondents rate the performance of Sonata wrist watches as good.
14. The majority of respondents feel satisfied with the after-sales service of Sonata.

IX. Recommendations & Discussion:
The survey of consumers has revealed their likes, dislikes and tastes regarding wrist watches and their satisfaction level in relation to Sonata. The consumers have forwarded the following suggestions and recommendations for the consideration of the company and dealers.
1. The respondents feel that the price of Sonata wrist watches is too high. They anticipate a reduction in price, making the watches affordable to the common class of people.
2. The service for new watches should be improved.
3. A service mechanic should be provided by the company at every showroom to ensure that consumers get good service and advice.
4. Some respondents feel that the price of spares of Sonata watches is high and suggest a reduction in prices.
5. More attractive festival offers and gifts should be given on purchases.
6. All varieties of watches should be made available in the showroom to cater to the tastes of customers of different income groups.
7. The quality of the straps (belts) of the watches should be improved.
8. Advertisements in local media should be increased; these may cover rural areas also.
9. Guarantees should be given for costly interior parts of the watch.
10. The dealers have to improve after-sales service to some extent to satisfy the customers.

X. Conclusion:
The Sonata brand of wrist watches is known for quality and performance in the domestic and international markets. The consumers of Sonata brand wrist watches are highly satisfied customers who take pride in owning and wearing a sophisticated, highly reliable and well-performing watch. Sonata brand wrist watches are in great demand not only in India but also abroad, owing to the fact that they come from a Tata group company. The turnover of Sonata brand wrist watches has shown an uptrend from year to year. Timex wrist watches enjoy a lion's share of the domestic watch market. Though there is increasing demand for all varieties of Sonata wrist watches, the suggestions given by the respondents should be considered by the manufacturers of Sonata. The company has to put effort into improving the quality of its watches and introduce new varieties with a changing outlook to appeal to and attract potential customers for its products. The company can also consider a reduction in prices, which may make it the market leader in the years to come. Finally, it can be said that the performance of Sonata wrist watches is highly satisfactory, and the company can achieve further success by acting on the suggestions of the consumers.

XI. Limitations
1. The study has been restricted to users of Sonata wrist watches only.
2. The data and opinions collected are assumed to be objective.
3. The survey is restricted to 200 respondents.
4. There may be a lack of consumer awareness about different watches.
5. The study was subject to time constraints.
6. The sample size is supposed to be representative of the views of the consumers.
7. The study has been restricted to Asansol city only.


137 Special Issue: Conference Proceeding of i-con-2016 Global Journal on Advancement in Engineering and Science (GJAES) Vol. 2, Issue 1: March-2016, ISSN (Print): Original Research Work

EXPERIMENTAL INVESTIGATION ON GRINDABILITY OF TITANIUM GRADE 1 USING SILICON CARBIDE WHEEL UNDER DRY CONDITION

Manish Mukhopadhyay 1, Ayan Banerjee 2, Arnab Kundu 3, Sirsendu Mahata 4, Bijoy Mandal 5 and Santanu Das 6
Department of Mechanical Engineering, Kalyani Government Engineering College, Kalyani, Nadia
1 manishmukhopadhyay@gmail.com, 2 ayan @gmail.com, 3 arnab @gmail.com, 4 mahatasirsendu@gmail.com, 5 bijoymandal@gmail.com, 6 sdas.me@gmail.com

Abstract: Titanium alloys find application in a variety of engineering fields, namely in the aerospace, automotive, petrochemical and biomedical industries, due to properties like high corrosion resistance, low specific gravity, high specific strength, non-magnetic behaviour and biocompatibility. However, this material is hard to grind owing to its low thermal conductivity, high hardness at elevated temperature and high chemical reactivity, resulting in high force requirement, severe wheel loading, low grinding ratio, etc. For these reasons, proper selection of cutting parameters like wheel speed, table feed and infeed (depth of cut) plays a significant role. The present experimental investigation is aimed at finding better grinding parameters by comparing two different infeed values. Grinding forces, surface roughness, grinding chip forms and ground surface morphology are observed for surface grinding of Titanium Grade 1 using a silicon carbide wheel under dry condition. The results suggest that grinding forces as well as surface roughness values increase with increase in infeed.

Keywords: Grinding, Titanium Grade 1, Silicon Carbide Wheel, Grinding Ratio, Surface Roughness.

I. Introduction
Grinding is a material removal process, generally used to shape and finish components made of metals and other materials.
Grinding is a widely used machining process in industry for surface smoothing and finishing. The precision and surface finish obtained through grinding can be up to ten times better than that obtained with either turning or milling. Grinding employs an abrasive tool, usually in the form of a rotating wheel, brought into controlled contact with a work surface [1], [2]. Grinding is one of the most complex manufacturing processes with respect to material removal. Although classified as a conventional machining process, it differs significantly from the more traditional processes like milling, drilling and turning, as the material is removed by undefined cutting edges. Material removal in grinding occurs through a very large number of these undefined cutting edges, which typically present highly negative rake angles and whose shape, orientation and distribution are random owing to the manufacturing process of the grinding wheel. The cutting edges are the protruding geometry of hard abrasive grains immersed in a bond structure that forms the grinding wheel. It is the random nature of these grains and their interactions with the work material that make the process so complex [3]. Progress of science and technology has called for a great variety of materials with diversified properties, and various new materials such as hardened steel, titanium alloy, nickel-based alloy, etc. have been developed and applied continuously. These materials generally have a low machinability rating, and machining them is always a big challenge [4]. Among these materials, titanium and its alloys are widely used in the manufacturing industry. Titanium alloy is a high strength-to-weight ratio material with superior fatigue strength. It is non-magnetic, non-poisonous, corrosion-resistant and heat-resistant. These favourable properties have brought about its wide application in daily life and industry.
However, from the machining viewpoint, titanium alloy is chemically active, and the chips tend to adhere easily to the wheel surface in grinding due to the very high local temperature and pressure at the grinding zone. Machining and grinding of titanium and its alloys are difficult due to their chemical reactivity beyond 350 °C, low thermal conductivity and high hot strength [5], [6]. Unlike grinding of conventional steels, where the heat generated spreads quickly away from the high-temperature grinding zone, grinding heat gets accumulated during grinding of titanium alloys due to their low thermal conductivity. The grinding temperature rises sharply during initial wheel-work contact, attains a quasi-steady state with a long workpiece, and increases further when wheel and work are disengaged [7], [8]. Titanium Grade 1 is commercially pure titanium that is widely used in the aeronautical industry for making airframe components, as well as for components of chemical desalination plants, cryogenic vessels and heat exchanger tubes, and in the biomedical and petroleum industries [9], [10]. During grinding of Titanium Grade 1, problems such as surface damage, surface burn, intense wheel loading, etc. are commonly reported [11], [12], [13]. Apart from that, chip re-deposition might also occur on the job surface. This re-deposition creates progressively
increasing surface damage with the increase in hardness of the wheel [15]. Proper selection of grinding parameters plays a very significant role in this process. Selection of the grinding wheel is also an important consideration: dense wheels are suitable for harder materials, while a less dense structure is better for softer materials. The bonding strength of the grinding wheel is also important to withstand centrifugal forces, to resist shock loading of the wheel and to hold the abrasive grains rigidly [16]. According to Malkin [17] and Rowe [18], silicon carbide wheels are better suited for non-ferrous materials like titanium. The present research work is aimed at finding the infeed value for which better grinding results are observed when comparing two different infeed values. The experimental observations are made for plunge surface grinding of Titanium Grade 1 using a silicon carbide wheel in dry condition. Analysis was done considering parameters such as force requirement, surface roughness, chip forms and ground surface morphology.

II. Experimental Procedures
Workpiece Material: The workpiece material used is Titanium Grade 1 having a hardness of 22 HRC and size 120 mm × 55 mm × 6 mm, whose composition is given in Table 1. It is a widely used grade of titanium in the aerospace and biomedical industries. The material has high impact toughness and is readily weldable. It is capable of deep drawing, and is used for plate, frame, and tube heat exchangers [19].

Table 1: Composition of Titanium Grade 1 alloy
Titanium   Iron   Oxygen   Nitrogen

Experimental setup and measurement: Experiments are carried out on a plunge surface grinding machine of HMT Praga division. Force readings are taken for 20 up-grinding passes at 10 and 20 micron infeed on a Sushma make strain gauge type dynamometer. Grinding chips and ground surface morphology are observed under a toolmakers' microscope. Surface roughness values are measured on a portable surface roughness tester (Mitutoyo make). Details of the experimental conditions and equipment used are provided in Table 2.

Table 2: Experimental conditions and equipment used
Surface Grinding Machine : Make: HMT Praga Division; Model: 452 P; Infeed Resolution: 1 µm; Main Motor Power: 1.5 kW; Maximum Spindle Speed: 2800 rpm
Grinding Wheel : Make: Carborundum Universal Limited; Type: Disc; Size: …; Specification: CGC 60 K 5 V
Workpiece : Material: Titanium Grade 1; Dimension: 120 mm × 55 mm × 6 mm; Hardness: 22 HRC
Environment : Dry
Force Dynamometer : Make: Sushma Grinding Dynamometer, Bengaluru; Model: SA 116; Range: … kg; Resolution: 0.1 kg
Wheel Dresser : Make: Solar, India; Specification: 0.5 carat single point diamond tip; Dressing Infeed: 20 µm
Surface Roughness Tester : Make: Mitutoyo, Japan; Model: Surftest 301; Range: … µm; Resolution: 0.05 µm
Tool Makers' Microscope : Make: Mitutoyo, Japan; Model: TM 510

III. Experimental results and discussion
The following section deals with the results obtained from the different experiments and their possible explanations.

Grinding Forces: Grinding force is one of the most important factors in evaluating the performance of a grinding process. The force in surface grinding has two components: the tangential grinding force and the normal grinding force. Grinding forces were observed for 20 passes in up-grinding operation at 10 micron and 20 micron infeed.

Fig. 1: Variation of grinding forces with number of grinding passes under dry condition at 10 micron and 20 micron infeed

The plot in Fig. 1 depicts the number of passes on the abscissa and the grinding forces on the ordinate. Both tangential and normal forces are shown in the same plot for 10 micron and 20 micron infeed. From the trend it can easily be seen that the normal force is always greater than the tangential force for both infeeds. A general increasing trend is observed up to 8 passes. This may be because, during the first few passes, the grinding wheel is unable to take the given infeed due to the stiffness of the system. After the 8th pass, a general decrease in force is observed. This may be due to auto-sharpening, which becomes inevitable after the wheel loading of the previous passes. Both tangential and normal force values are higher for 20 micron infeed. The 19th and 20th pass values differ from this general trend; this may be due to the high wheel material removal in previous passes, which results in lower penetration of the grinding wheel in the last two passes. Overall, the force values are higher at 20 micron infeed, as is normally expected.
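The observation that the normal component stays above the tangential one can be expressed as a pass-wise force ratio Fn/Ft. The sketch below uses made-up dynamometer readings (the actual pass-wise values appear only graphically in Fig. 1), so all numbers are assumptions:

```python
# Pass-wise force ratio Fn/Ft from hypothetical dynamometer readings (kg).
# The real values are reported only graphically in Fig. 1 of this paper.
tangential_kg = [1.2, 1.5, 1.9, 2.2, 2.4]  # Ft for passes 1..5 (assumed)
normal_kg     = [2.8, 3.4, 4.1, 4.6, 5.0]  # Fn for passes 1..5 (assumed)

ratios = [fn / ft for ft, fn in zip(tangential_kg, normal_kg)]

# A ratio above 1 on every pass reproduces the reported trend: the normal
# force always exceeds the tangential force at both infeed values.
assert all(r > 1.0 for r in ratios)
print([round(r, 2) for r in ratios])
```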
Surface Roughness: Surface roughness, often simply termed roughness, is a component of surface texture. It is quantified by the deviations of a real surface from its ideal form in the direction of the normal. If these deviations are large, the surface is rough; if they are small, the surface is smooth [20].

Fig. 2: Comparison of surface roughness (micron) in the transverse direction after 20 grinding passes
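The definition above corresponds to the standard arithmetic-average roughness Ra: the mean absolute deviation of the measured profile from its mean line. A small sketch of that formula (the profile values are invented for illustration; this is not the tester's internal algorithm):

```python
# Arithmetic-average roughness Ra: mean absolute deviation of profile
# heights from their mean line (the standard textbook definition).
def ra(profile_um):
    mean_line = sum(profile_um) / len(profile_um)
    return sum(abs(z - mean_line) for z in profile_um) / len(profile_um)

# Hypothetical profile heights (µm) sampled along the ground surface.
profile = [0.4, -0.2, 0.1, -0.5, 0.3, -0.1]
print(round(ra(profile), 3))  # 0.267
```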

Surface roughness values are observed on a portable surface roughness tester. The average surface roughness value (Ra) is taken as the average of five roughness values measured at different locations in the transverse direction on the ground surface after 20 passes. From the histogram (Fig. 2), it can be clearly seen that the average surface roughness at 10 micron infeed is much smaller than that at 20 micron infeed. This is because at the higher infeed the force requirement is greater and more heat is generated, resulting in a poorer surface finish.

Grinding Ratio: An important parameter in assessing grinding performance is the grinding ratio (G-ratio). It is defined as the ratio of the volume of work material removed to the volume of wheel material removed. From this definition, it is obvious that a higher G-ratio is desirable. From the calculated values presented in Fig. 3, it can be inferred that grinding at 10 micron infeed is preferable to grinding at 20 micron infeed.

Fig. 3: Comparison of grinding ratio after 20 passes

Chip study and surface morphology: Chip form and ground surface studies play an important role in predicting and analysing a grinding operation. Fig. 4 and Fig. 5 show the observed chip forms and ground surfaces respectively.

Fig. 4: Chip form observed after 18 passes (a) 10 micron; (b) 20 micron
Fig. 5: Surface topography observed after 20 passes (a) 10 micron; (b) 20 micron
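The G-ratio defined above is a plain quotient of removed volumes; the work-side volume can itself be estimated from the ground area and the total infeed taken. A sketch with assumed numbers (the paper reports G only graphically in Fig. 3, so every value here is illustrative):

```python
# Grinding ratio G = volume of work material removed / volume of wheel
# material removed. All numeric values below are assumed for illustration.
def grinding_ratio(work_vol_mm3, wheel_vol_mm3):
    return work_vol_mm3 / wheel_vol_mm3

# Work-side volume estimate: ground length x width x (infeed per pass x passes).
length_mm, width_mm = 120.0, 6.0   # assumed ground face of the workpiece
infeed_mm, passes = 0.010, 20      # 10 micron infeed, 20 passes
work_vol = length_mm * width_mm * infeed_mm * passes   # 144 mm^3

g = grinding_ratio(work_vol, wheel_vol_mm3=12.0)       # assumed wheel wear
print(round(g, 1))  # 12.0
```

A higher G means less wheel wear per unit of stock removed, which is why the lower infeed is judged preferable here.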

Chips are collected after 18 passes. A large number of blocky and fragmented chips are observed, suggesting high wheel loading; very few chips are leafy. The surface form was observed after 20 passes under a toolmakers' microscope. Long and deep lay marks are observed on the surface. Chip re-deposition is also seen in places, which suggests that favourable grinding has not taken place. It is expected that the use of a suitable grinding fluid may improve the chip form and ground surface morphology. Future experimental work will be done in this respect.

IV. Conclusion
Analysing the different parameters obtained during grinding of Titanium Grade 1 using a silicon carbide wheel at 10 and 20 micron infeed, the following conclusions are drawn:
Tangential force values are lower than normal force values in all the cases, as usual.
The force requirement at 20 micron infeed is greater than that at 10 micron for all passes except the 19th and 20th.
Surface finish and grinding ratio are found to be better at 10 micron infeed than at 20 micron infeed.
Further experiments may be done using an appropriate grinding fluid to improve grinding performance while surface grinding Titanium Grade 1.

V. References
1. Accessed on
2. R. B. Kinalkar and M. S. Harne, A Review on Various Cooling Systems Employed in Grinding, International Journal of Innovative Technology and Exploring Engineering, Vol. 4 (2014), pp.
3. P. Govindan, Investigations on the Influence of Processing Conditions on Grinding Process, International Journal of Engineering Science and Research Technology, Vol. 2 (2013), pp.
4. Y. S. Liao, Y. P. Yu and C. H. Chan, Effects of Cutting Fluids with Nano-particles in Grinding of Titanium Alloys, Advanced Materials Research (2010), pp.
5. A. B. Chattopadhyay, Machining and Machine Tools, Wiley India Pvt. Ltd., India.
6. R. D. Palhade, V. B. Tungikar and G. M. Dhole, Application of Different Environments in Grinding of Titanium Alloys (Ti-6Al-4V): Investigations on Precision Brazed Type Monolayered Cubic Boron Nitride (CBN) Grinding Wheel, Institution of Engineers (India) Journal, Production Engineering Division, Vol. 90 (2009), pp.
7. S. Malkin and G. Guo, Thermal Analysis of Grinding, Annals of the CIRP, Vol. 56 (2007), pp.
8. S. Malkin and R. B. Anderson, Thermal Aspects of Grinding, Part I: Energy Partition, Transactions of the ASME, Journal of Engineering for Industry, Vol. 94 (1974), pp.
9. Midhani Product: Super Alloys; Titanium and Titanium Alloys.
10. B. Mandal, D. Biswas, A. Sarkar, S. Das and S. Banerjee, Improving Grindability of Titanium Grade 1 using Pneumatic Barrier, Reason - A Technical Journal, Vol. 12 (2011), pp.
11. M. C. Shaw and A. Vyas, Heat-Affected Zones in Grinding Steel, Annals of the CIRP, Vol. 43 (1994), pp.
12. B. Mandal, S. Majumdar, S. Banerjee and S. Das, Predictive Model and Investigation of the Formation of Stiff Air Layer around the Grinding Wheel, Advanced Materials Research, Vol. 83 (2010), pp.
13. A. Bhattacharya, Metal Cutting Theory and Practice, New Central Book Agency (P) Ltd., Calcutta.
14. D. M. Turley, Factors Affecting Surface Finish when Grinding Titanium and Titanium Alloy (Ti-6Al-4V), Materials Research Laboratories, Defence Science and Technology Organization, Australia, Vol. 104 (1982), pp.
15. Y. Li, W. B. Rowe and B. Mills, Grinding Conditions and Selection Strategy, Journal of Engineering Manufacture, Vol. 213 (1999), pp.
16. K. V. Kumar and M. C. Shaw, Metal Transfer and Wear in Fine Grinding, Wear, Vol. 82 (1982), pp.
17. S. Malkin, Grinding Technology, Industrial Press, New York.
18. W. B. Rowe, Principles of Modern Grinding Technology, William Andrew, New York.
19. Titanium Grade 1: Titanium Alloy, Arcam AB, Mölndal, Sweden (accessed on 2nd Sep 2015).
20. Accessed on

142 Special Issue: Conference Proceeding of i-con-2016 Global Journal on Advancement in Engineering and Science (GJAES) Vol. 2, Issue 1: March-2016, ISSN (Print): Original Research Work

ON THE PERFORMANCE OF DRY GRINDING OF TITANIUM GRADE 1 USING ALUMINA WHEEL

Ayan Banerjee 1, Manish Mukhopadhyay 2, Arnab Kundu 3, Sirsendu Mahata 4, Bijoy Mandal 5 and Santanu Das 6
Department of Mechanical Engineering, Kalyani Government Engineering College, Kalyani, Nadia, INDIA
1 ayan @gmail.com, 2 manishmukhopadhyay@gmail.com, 3 arnab @gmail.com, 4 mahatasirsendu@gmail.com, 5 bijoymandal@gmail.com, 6 sdas.me@gmail.com

Abstract: Titanium and its alloys are considered difficult-to-machine materials due to their poor heat conductivity and high chemical reactivity at elevated temperature. Owing to their excellent properties, such as high strength-to-weight ratio, low density and resistance to corrosion, titanium and its alloys nevertheless find wide application in the automotive, aerospace, shipping and other industries. Hence, the grinding process is adopted to remove material from such exotic metals and their alloys to achieve the desired surface finish of the product. However, wheel loading, wheel material removal and grit wear are some of the major problems encountered during grinding. Selection of a proper wheel with an appropriate combination of process parameters is thus extremely important prior to grinding. In the present work, grinding has been performed on Titanium Grade 1 using an alumina wheel under dry environment. Observations with respect to grinding force, surface roughness, chip forms and workpiece surfaces are taken for two infeeds, and the grinding ratio is also calculated. Results show that relatively better grindability can be achieved while working at an infeed of 10 µm under dry condition.

Keywords: Grinding; Titanium Grade 1; Alumina wheel; Grinding force; Ground surface; Grinding ratio; Ground chips.

I.
Introduction
The advancement of materials science and technology has facilitated the discovery of new elements, metals and alloys having high hardness, strength, ductility and toughness and low thermal conductivity, thereby making them difficult to machine. These metals and alloys not only possess the ability to sustain high temperature but also retain their integrity with minimum environmental impact [1]. Thus, materials like titanium, molybdenum, rhenium, tungsten, cobalt, tantalum, niobium, chromium, Hastelloy, Nimonic, Waspaloy, Udimet, etc. have found profound use in the aerospace, vehicle, engine and gas turbine, nuclear and biomedical industrial sectors [2]. These materials, however, require proper machining and/or grinding before being readied for use in industry. In the present paper, one such material, namely titanium, has been chosen to work on. Past research works have found titanium and its alloys to be difficult-to-machine materials. Titanium is 30% stronger and nearly 50% lighter than steel, while it is 60% heavier than aluminium but twice as strong [3]. With its low density, high strength and excellent resistance to corrosion, titanium is believed to solve many engineering challenges. But titanium is a poor conductor of heat [4]. When it comes to machining titanium, heat generated by the cutting action does not dissipate quickly; rather, it gets concentrated on the cutting edge and the tool face. Titanium also has a strong alloying tendency, i.e. chemical reactivity at high temperature, which may cause galling, welding and smearing along with rapid wear of the cutting tool. These two factors, together with its work-hardening characteristics and low modulus of elasticity, make titanium a difficult-to-machine material [3]. Grinding of titanium is also challenging, as evident from previous works. Grinding at high speed requires large force and generates high heat, which may cause surface burns and re-deposition of chips on the ground surface.
Apart from that, intense wheel loading and wheel material removal are possible adverse phenomena while grinding [5], [6]. But due to its huge demand, grinding of titanium is essential in spite of the difficulties already stated. Hence it should be done by selecting the proper combination of environment, abrasive wheel and grinding process parameters. In the present paper, efforts have been made to compare the grindability at two different infeed values during surface grinding of Titanium Grade 1, using an alumina wheel, under dry condition. Analysis has been done on the basis of the force values, grinding ratio, ground surface and chip forms observed.

II. Experimental Procedure
Workpiece and Wheel Material: Commercially pure Titanium Grade 1 is preferred over titanium alloys for corrosion applications, especially when high strength is not a requirement [7]. Apart from this, it finds distinctive application in surgical implants and prosthetic devices due to its inertness in the human body, that is, its resistance to corrosion by body fluids [7], [8]. The present set of experiments uses a Titanium Grade 1 plate of dimension 120 mm × 64 mm × 6 mm as the workpiece, the composition of which is given in Table 1.

Table 1: Composition of Titanium Grade 1 alloy
Titanium   Iron   Oxygen   Nitrogen

The selection of the grinding wheel is a very important factor in grinding of titanium. At high temperature, titanium has a strong affinity for nitrogen, oxygen and carbon. Reports that nitrogen, oxygen and carbon react with titanium at high temperature and tend to make the material harder, stronger and less ductile can be found in the work of Mandal et al. [9]. Hence the wheel chosen here is an alumina wheel of specification AA 60 K 5 V. Alumina wheels are also cheap and widely used.

Experimental set-up and procedure: Grinding experiments have been performed on a surface grinder of HMT Praga division make. Two infeeds, 10 µm and 20 µm, were selected for the experiments. Each experiment comprised 20 passes in up-grinding mode under dry environment. Wheel dressing is performed with a dressing depth of 20 µm, at a speed of 2.3 m/min, using a single point 0.5 carat diamond dresser. Tangential (Ft) and normal (Fn) force values were obtained using a Sushma make strain gauge type dynamometer. Grinding chips and ground surface morphology are observed under a toolmakers' microscope of Mitutoyo make.
Surface roughness was measured on a portable surface roughness tester of Mitutoyo make. The experimental details are furnished in Table 2.

Table 2: Experimental set-up details
Surface Grinding Machine : Make: HMT Praga Division; Model: 452 P; Infeed Resolution: 1 µm; Main Motor Power: 1.5 kW; Maximum Spindle Speed: 2800 rpm
Grinding Wheel : Make: Carborundum Universal Limited; Type: Disc; Size: …; Specification: AA 60 K 5 V
Workpiece : Material: Titanium Grade 1; Dimension: 120 mm × 55 mm × 6 mm; Hardness: 22 HRC
Working Environment : Dry
Force Dynamometer : Make: Sushma Grinding Dynamometer, Bengaluru; Model: SA 116; Range: … kg; Resolution: 0.1 kg
Wheel Dresser : Make: Solar, India; Specification: 0.5 carat single point diamond tip; Dressing Infeed: 20 µm; Dressing Speed: 2.3 m/min
Surface Roughness Tester : Make: Mitutoyo, Japan; Model: Surftest 301; Range: … µm; Resolution: 0.05 µm
Tool Makers' Microscope : Make: Mitutoyo, Japan; Model: TM 510

III. Experimental results and discussion
Grinding Force: The plot in Fig. 1 shows the variation of grinding force with the number of passes for both the 10 µm and 20 µm infeeds. Grinding at 10 µm infeed showed force values rising sharply at the 4th pass, thereafter rising gradually and falling again. Grinding at 20 µm infeed showed force values with a steep rise at the 6th pass, then falling deeply at the 14th pass. The reason may be dulling of the grits, which results in more friction rather than material removal. Grain pull-out may also have occurred, resulting in an inability of the grinding wheel to cut at the desired infeed; hence the force required was high. Gradually, as fresh grits came out, the force requirement decreased and normal cutting action resumed.

Fig. 1: Variation of grinding forces with number of grinding passes under dry condition at 10 µm and 20 µm infeed

Ground Surface Observed: Images show a better surface finish at 10 µm infeed than at 20 µm. Fig. 2(b), the ground surface for 20 µm infeed, shows deeper grinding marks and traces of temperature-induced deformation. Vibrations were also noticed while grinding at 20 µm infeed. This is a clear indication of heavy wheel loading and glazing, resulting in generation of high grinding-zone temperature [10].

Fig. 2: Ground surface observed after 20 passes at (a) 10 µm infeed, showing lay marks; (b) 20 µm infeed, showing deeper grinding lay marks and a crater

Surface roughness values clearly indicate a better surface in the case of grinding at 10 µm infeed. The normal grinding force (Fn) has an influence upon the surface roughness of the workpiece [11]. The variation of average surface roughness (Ra) values obtained from the ground surfaces with respect to infeed is shown in Fig. 3.

Fig. 3: Variation of surface roughness in transverse direction w.r.t. infeed after 20 grinding passes
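The Ra values compared above are arithmetic-mean deviations of the surface profile from its mean line, which is what a Surftest-type instrument reports. A minimal sketch of that definition, using an illustrative profile rather than measured data:

```python
# Sketch: arithmetic-mean roughness Ra from a sampled surface profile.
# The profile heights below are illustrative, not measured values.

def average_roughness(profile):
    """Ra = mean absolute deviation of profile heights from the mean line
    (result is in the same units as the input heights)."""
    mean_line = sum(profile) / len(profile)
    return sum(abs(z - mean_line) for z in profile) / len(profile)

profile_um = [0.2, -0.1, 0.3, -0.4, 0.0]  # profile heights (µm), illustrative
ra = average_roughness(profile_um)
print(round(ra, 3))  # 0.2
```

A real instrument applies filtering and a standardized sampling length before this averaging step; the sketch shows only the core calculation.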

Here it is seen that roughness increases with increase in infeed. At 10 µm infeed, the grits retained their sharpness longer and facilitated material removal by shearing and fracturing, producing fine striations; the heat generated at 20 µm infeed was higher and hence contributed towards a greater roughness [12].

Grinding Ratio: The grinding ratio is defined as the ratio of the volume of workpiece material removed to the volume of wheel material removed. As evident from the plot in Fig. 4, grinding at 10 µm infeed gives better results in terms of material removal than grinding at 20 µm infeed. The likely reason is less heat generation while grinding at 10 µm, which led to longer retention of grit sharpness and less wheel material removal compared to grinding at 20 µm [13].

Fig. 4: Comparison of grinding ratio after 20 passes at 10 µm and 20 µm infeed

Chip-forms Observed: The chip morphology clearly indicates the mechanism of grinding at the two different infeed conditions. Serrated lamellar or blocky chips are seen while grinding at 10 µm, indicating the presence of high pressure; ribbon-like chips are also obtained at this infeed. Spherical chips are obtained in the case of grinding at 20 µm, which corroborates the phenomenon of high heat generation. Some leafy chips can also be seen in Fig. 5(b), which indicates softening of the ground surface at high temperature.

Fig. 5: Chip forms observed after 20 passes at (a) 10 micron infeed (serrated lamellar and ribbon-like chips); (b) 20 micron infeed (spherical and leafy chips)

IV. Conclusion
The following conclusions may be drawn from the observations made in the experimental work.
Force values recorded for 20 µm infeed show a remarkable rise up to the 6th pass and a sharp drop up to the 14th pass, while those recorded for 10 µm show a gradually increasing trend up to around the 13th pass and then decrease gradually. Surface finish of Titanium Grade 1 is better at 10 µm than at 20 µm under the aforesaid grinding conditions. Since the environment was kept dry, the chip study and ground-surface study indicated generation of high temperature; hence it is necessary to use proper grinding fluids in order to achieve better grinding performance.

V. References
[1] H.K.D.H. Bhadeshia, Materials Science & Metallurgy, Alloys, Part II, Course C9, pp
[2] A. Shokrani, V. Dhokia and S.T. Newman, Environmentally Conscious Machining of Difficult-to-machine Materials with Regard to Cutting Fluids, International Journal of Machine Tools and Manufacture, vol. 57, 2012, pp

[3] Machining Titanium, Cimcool Technical Report, Milacron Marketing Co., Global Industrial Fluids, Cincinnati, Ohio, vol. 3, pp. 1-3, Date of Accession
[4] Titanium Alloy Guide, RMI Titanium, An RTI International Metals, Inc. Company, Date of Accession: 02/09/2015.
[5] S. Malkin and C. Guo, Thermal Analysis of Grinding, Annals of the CIRP, vol. 56, 2007, pp
[6] S. Malkin and R.B. Anderson, Thermal Aspects of Grinding, Part I: Energy Partition, Transactions of the ASME, Journal of Engineering for Industry, vol. 94, 1974, pp
[7] J.D. Destefani, Properties and Selection: Nonferrous Alloys and Special Purpose Materials, ASM Handbook, vol. 2, 1992, pp
[8] Midhani Product: Super Alloys; Titanium and Titanium Alloys.
[9] B. Mandal, D. Biswas, A. Sarkar, S. Das and S. Banerjee, Improving Grindability of Titanium Grade 1 using a Pneumatic Barrier, Reason: A Technical Journal, vol. XII, 2013, pp
[10] D. Biswas, A. Sarkar, B. Mandal and S. Das, Exploring Grindability of Titanium Grade 1 using Silicon Carbide Wheel, vol. XI, 2012, pp
[11] M.H. Sadeghi, M.J. Haddad, T. Tawakoli and M. Emami, Minimal Quantity Lubrication (MQL) in Grinding of Ti-6Al-4V Titanium Alloy, International Journal of Advanced Manufacturing Technology, vol. 44, 2009, pp
[12] W.B. Rowe, M.N. Morgan, S.C.E. Black and B. Mills, A Simplified Approach to Control Thermal Damage in Grinding, CIRP Annals: Manufacturing Technology, vol. 45, 1996, pp
[13] M. Das, B. Mandal and S. Das, An Experimental Investigation on Grindability of Titanium Grade 1 under Different Environmental Conditions, Manufacturing Technology Today, vol. 14, issue 2, 2015, pp

Special Issue: Conference Proceeding of i-con-2016 Global Journal on Advancement in Engineering and Science (GJAES) Vol. 2, Issue 1: March-2016, ISSN (Print): Original Research Work

Investigating Milling Burr Formation under Varying Tool Exit Angle
Arijit Patra 1, Arijit Hawladar 2, Sanjay Samanta 3, Santanu Das 4*
1 JIS College of Engineering, Kalyani; 2,3,4 Department of Mechanical Engineering, Kalyani Government Engineering College, Kalyani, Dist. Nadia, West Bengal, India
1 arijit.patra9@gmail.com, 2 arijit.hawlader@gmail.com, 3 sanju.eklaghar_kgec@rediffmail.com, 4 sdas.me@gmail.com

Abstract: A burr, the unwanted projection generated at the edge of a job after any material removal process, creates difficulties during assembly of different parts and is also harmful to workers at the time of handling. Removing a burr from an edge requires an additional deburring process, which increases the cost of the product and decreases productivity. One therefore has to explore machining parameters that produce fewer burrs at an edge, or a totally burr-free edge. The aim of this work is to investigate the influence of tool exit angle on burr formation during slot milling of an aluminium block at a constant depth of cut of 3 mm and feed of 0.1 mm/tooth under dry condition, seeking a machining condition at which quite less burr is produced. It is observed that at a 60° tool exit angle, quite less burr is formed under this machining condition; hence, this angle may be recommended, as it calls for only a short deburring cycle.

Keywords: Machining; milling; burr; tool exit angle; burr height; burr reduction.

I. Introduction
A burr is formed at the edge of a workpiece during any material removal process and requires an additional deburring process to eliminate it, increasing the cost of manufacture [1]-[2]. It also creates difficulties during assembly and often results in product malfunction during operation. It is harmful to workers handling components.
Burr formation can be controlled at different stages of manufacturing, namely design, process planning and tool-path planning, and also by improving material properties, tool engagement condition, tool geometry and cutting parameters such as feed, speed and depth of cut [3]-[5]. Aurich et al. [6] stated that burr formation could not be prevented fully; rather, it could be minimized, with remaining burrs removed by an additional deburring operation. According to Narayanaswami et al. [7], however, additional deburring may also damage the object. Gillespie et al. [8] observed that burrs could be minimized by changing tool geometry and cutting parameters. Pratim and Das [9] investigated the effect of bevel angle on burr formation in an aluminium alloy; it was found that beveling the edge reduced burr size, and that feed rate had more effect on burr formation than cutting velocity. Silva et al. [10] studied the effect of cutting conditions such as cutting speed, feed per tooth and depth of cut on burr minimization in face milling of mould steel; they found that fewer burrs were produced at a cutting speed of 100 m/min, a feed per tooth of mm/tooth and a depth of cut of 0.3 mm. Lin [11] found that a larger feed rate produced larger burr height in face milling of stainless steel, whereas a high cutting speed produced smaller burrs. Saha and Das [12] researched exit burr formation in medium carbon steel during face milling with exit edge beveling; they concluded that at a 15° exit edge bevel angle, along with high cutting speed and feed rate, very small burrs could be formed. Wyen et al. [13] performed an experiment to observe the effect of tool geometry on burr formation: in up milling, burr formation increased with increasing cutting edge radius, but the radius had little influence in down milling. Besides the different cutting parameters, Heisel et al. [14] investigated the influence of cutting fluid on burr formation.
They saw that, as the feed per tooth was varied, the exit burr size at the lateral face was larger in dry machining than with minimum quantity lubrication. The aim of this work is to investigate the influence of tool exit angle on burr formation during slot milling of an aluminium block at a constant depth of cut of 3 mm, feed of 0.1 mm/rev (mm/tooth) and cutting velocity of 100 m/min under dry condition.

148 Arijit Patra et al., Investigating Milling Burr Formation under Varying Tool Exit Angle, Global Journal on Advancement in Engineering and Science, 2(1), March 2016, pp

II. Experimental Investigation
Slot milling is done in the present work on an aluminium block by an end milling cutter carrying a coated carbide insert, on a vertical-axis CNC milling machine. Only one insert is fitted in the cutter for machining. Experiments are performed in dry environment at a constant velocity of 100 m/min. Details of the experimental set-up are tabulated in Table 1.

Table 1: Experimental set-up
Machine Tool: Vertical-axis CNC milling machine, Make: Bharat Fritz Werner, Bangalore, India, Type: Akshara VF30 CNC; Positioning accuracy: ±10 microns, Repeatability: ±5 microns
Cutting Tool: End milling cutter, Diameter: 16 mm (Sandvik, India make), TiN-coated carbide insert
Job Material: Aluminium block, Size: 160 mm × 108 mm × 72 mm
Cutting Conditions: Cutting velocity: 100 m/min; Environment: Dry; Feed rate: 0.1 mm/rev; Depth of cut: 3 mm; Tool exit angle, ψ: 60°, 70°, 90°, 105°, 115°

Experiments are done at a depth of cut of 3 mm, feed of 0.1 mm/rev and cutting velocity of 100 m/min by varying the tool exit angle (ψ) from 60° to 115°. ψ is the angle between the cutting velocity vector and the feed direction at the point of tool exit from the workpiece, as defined in Fig. 1. The tool exit angles chosen are 60°, 70°, 90°, 105° and 115°. At each tool exit angle, five successive measurements of burr height are made along the machined exit edge of the specimen.

Fig. 1: Tool exit angle (ψ) definition [OA indicates feed direction, and OB is the direction of cutting velocity at the point of tool exit].

III. Results and Discussion
The burr height data and their graphical representation in the form of a bar chart are shown in Table 2 and Fig. 2, respectively. It is observed that the average burr height is quite small (0.054 mm) at ψ = 60°, and the highest average, of mm, is found at a 105° tool exit angle.
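The exit angle ψ defined above is simply the angle between the feed direction OA and the cutting-velocity direction OB at the exit point. A minimal sketch of that geometric definition (the two direction vectors below are illustrative, not taken from the experiment):

```python
import math

# Sketch: tool exit angle psi as the angle between the feed direction (OA)
# and the cutting-velocity direction (OB) at the point of tool exit.
# The direction vectors below are illustrative.

def exit_angle_deg(feed_dir, velocity_dir):
    """Angle in degrees between two 2-D direction vectors."""
    ax, ay = feed_dir
    bx, by = velocity_dir
    dot = ax * bx + ay * by
    norm = math.hypot(ax, ay) * math.hypot(bx, by)
    return math.degrees(math.acos(dot / norm))

psi = exit_angle_deg((1.0, 0.0), (0.5, math.sqrt(3) / 2))  # vectors 60 degrees apart
print(round(psi, 1))  # 60.0
```

In slot milling, OB is tangent to the cutter path where it leaves the workpiece, so ψ can be varied by changing the orientation of the exit edge relative to the feed.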
Fig. 3 shows microscopic views of the burr at different tool exit angles. At a low tool exit angle, lower burr height is expected, as the exit velocity of the cutter becomes close to the cutting edge, which gets some support from inside the edge of the specimen [15].

Table 2: Height of burr with different tool exit angles under 3 mm depth of cut and feed rate of 0.1 mm/rev
Sl. No. | Tool exit angle ψ (degree) | Burr height at different portions of exit edge (mm) | Average height of burr (mm)
1 | 60 | , 0.06, 0.04, 0.03, | 0.054
2 | 70 | , 0.10, 0.18, 0.08, |
3 | 90 | , 0.16, 0.03, 0.04, |
4 | 105 | , 0.31, 0.49, 0.36, |
5 | 115 | , 0.11, 0.07, 0.29, |
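The selection made from Table 2 is a simple reduction: average the five readings per angle and take the angle with the smallest average. A minimal sketch, using hypothetical complete reading sets (some entries of Table 2 were lost in transcription; the 60° set here is chosen only so that its mean reproduces the reported 0.054 mm):

```python
# Sketch: averaging burr-height readings per tool exit angle and picking the
# angle with the smallest average burr. Reading sets are hypothetical stand-ins,
# not the measured data of Table 2.

readings = {
    60:  [0.05, 0.06, 0.04, 0.03, 0.09],
    70:  [0.12, 0.10, 0.18, 0.08, 0.11],
    90:  [0.09, 0.16, 0.03, 0.04, 0.10],
    105: [0.30, 0.31, 0.49, 0.36, 0.33],
    115: [0.20, 0.11, 0.07, 0.29, 0.15],
}

averages = {angle: sum(h) / len(h) for angle, h in readings.items()}
best_angle = min(averages, key=averages.get)
print(best_angle)  # 60
```

On these placeholder sets, as in the experiment, the 60° exit angle gives the smallest average burr height.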

Fig. 2: Variation of height of burr with different tool exit angles under a constant depth of cut of 3 mm and feed rate of 0.1 mm/rev.

Fig. 3a: Microscopic views (X20) of the exit edge of the workpiece at a depth of cut of 3 mm and tool exit angles of (a-b) 60° and (c-d) 70°.

Fig. 3b: Microscopic views (X20) of the exit edge of the workpiece at a depth of cut of 3 mm and tool exit angles of (e-f) 90°, (g-h) 105° and (i-j) 115°.

IV. Conclusion
From the experimental observations on slot milling of an aluminium block using a coated carbide tool (single-insert cutter), at a cutting velocity of 100 m/min, depth of cut of 3 mm and feed of 0.1 mm/rev, it is concluded that at a low tool exit angle of 60°, quite low burr height is observed, as expected, because the exit velocity of the cutter becomes close to the cutting edge, which gets some support from inside the edge of the specimen.

V. Acknowledgements
The help of Mr. Ashis Kumar Bhattacharya during the investigation at the Manufacturing Technology Laboratory of the Mechanical Engineering Department, Kalyani Government Engineering College, Kalyani, is thankfully acknowledged.

VI. References
1. L.K. Gillespie, Burr Down, Cutting Tool Engineering Magazine, vol. 58(12), August, pp
2. P.P. Saha, D. Das and S. Das, Effect of Edge Beveling on Burr Formation in Face Milling, 35th International MATADOR Conference, Taiwan, Book Part 10, 2007, pp, doi: / _
3. S. Tripathi and D. Dornfeld, Review of Geometric Solutions for Milling Burr Prediction and Minimization, Proceedings 7th International Conference, Berkeley, CA, vol. 220, June 2005, pp, doi: / X
4. S.H. Lee and D. Dornfeld, Prediction of Burr Formation during Face Milling Using an Artificial Neural Network with Optimized Cutting Conditions, Proceedings of the Institution of Mechanical Engineers, Part B: Journal of Engineering Manufacture, vol. 221, March 2007, pp, doi: / JEM
5. D. Dornfeld, Strategies for Preventing and Minimizing Burr Formation, Laboratory for Manufacturing and Sustainability, Consortium on Deburring and Edge Finishing, University of California, Berkeley.
6. J.C. Aurich, D. Dornfeld, P.J.
Arrazola, V. Franke, L. Leitz and S. Min, Burrs: Analysis, Control and Removal, CIRP Annals - Manufacturing Technology, vol. 58, October 2009, pp, doi: /j.cirp
7. R. Narayanaswami and D. Dornfeld, Burr Minimization in Face Milling: A Geometrical Approach, Transactions of the American Society of Mechanical Engineers, Journal of Engineering for Industry, vol. 119, no. 2, May 1997, pp, doi: /

8. L. Gillespie and P. Blotter, Formation and Properties of Machining Burrs, Transactions of the American Society of Mechanical Engineers, Journal of Engineering for Industry, vol. 98(1), February 1976, pp, doi: /
9. S.P. Pratim and S. Das, Burr Minimization in Face Milling: An Edge Beveling Approach, Proceedings of the Institution of Mechanical Engineers, Part B: Journal of Engineering Manufacture, vol. 225, May 2011, pp, doi: /
10. J.D. Silva, S.F.P. Saramago and A.R. Machado, Optimization of the Cutting Conditions (Vc, fz and doc) for Burr Minimization in Face Milling of Mould Steel, Journal of the Brazilian Society of Mechanical Sciences and Engineering, Rio de Janeiro, vol. 31, pp, April-June.
11. T.R. Lin, Experimental Study of Burr Formation and Tool Chipping in the Face Milling of Stainless Steel, Journal of Materials Processing Technology, vol. 108, December 2000, pp. 12-20, doi: /S (00)
12. P.P. Saha and S. Das, Minimization of Exit Burr in Face Milling of Medium Carbon Steel by Exit Edge Beveling, Production Engineering Research and Development, vol. 8, August 2014, pp, doi: /s
13. C.F. Wyen, D. Jaeger and K. Wegener, Influence of Cutting Edge Radius on Surface Integrity and Burr Formation in Milling Titanium, International Journal of Advanced Manufacturing Technology, vol. 67, July 2012, pp, doi: /s
14. U. Heisel, M. Schaal and G. Wolf, Burr Formation in Milling with Minimum Quantity Lubrication, Production Engineering: Research and Development, vol. 3, November 2008, pp, doi: /s
15. P.P. Saha, A. Das and S. Das, On Reduction of Formation of Burr in Face Milling of 45C8 Steels, Journal of Materials and Manufacturing Processes, vol. 28, May 2013, pp, doi: /

Special Issue: Conference Proceeding of i-con-2016 Global Journal on Advancement in Engineering and Science (GJAES) Vol. 2, Issue 1: March-2016, ISSN (Print): Original Research Work

Escape Velocity of a Particle on a Riverbank with Partially Saturated Soil under Cohesion
Sanchayan Mukherjee
Department of Mechanical Engineering, Kalyani Government Engineering College, Kalyani, West Bengal, India
sanchayan02@yahoo.com

Abstract: Numerous factors influence river formation; they are quite complex and interrelated. One of these factors is the amount and rate of supply of water and sediment into stream systems. The force of cohesion, by which the particles on the riverbank are bound to one another, is mainly responsible here, and the small volume of water entrapped between the particles together with the inter-particle distance has a major role to play in it. It has been shown that the behaviour of the particles is actually a function of the inter-particle distance, the radii of the particles and the volume of water entrapped between them. A recent analytical model, the Truncated Pyramid Model, has been used in the present work for determination of the escape velocity of a particle on a riverbank under several degrees of freedom. Comparison has been made between the values obtained from previously published research and the values obtained from this model, considering the same mean diameter and a ±1 percent variation of it. The escape velocity is related to other parameters like entrainment rate and volumetric rate of bank erosion.

Keywords: Cohesion; escape velocity; inter-particle distance; Truncated Pyramid Model; volume of the liquid bridge

I. Introduction
Erosion may take place in riverbanks for many reasons, causing bank instability, even when the riverbed, in dynamic equilibrium, neither degrades nor aggrades. Reference [1] made a stability analysis of a steep cohesive riverbank using a suitable computational technique.
Reference [2] introduced a new model for simulating stream flow, sediment transport, and the interactions of sediment with other parameters related to water quality. Reference [3] showed that the escape velocity of a sediment particle can be obtained from force analysis considering dynamic equilibrium, the predominant forces acting on the particle being the lift force, the submerged weight of the particle and the cohesive force between the particles; the cohesive force was shown to be a function of a number of parameters related to the bank material. Reference [4] coupled bed deformation and bank erosion models by discriminating bed material and bed-material load fractions via a mixed-size sediment transport function. Reference [5] calculated the escape velocity in the Truncated Pyramid Model for varying inter-particle distance, showing results for a constant volume of the water bridge formed between a pair of particles and for a fixed mean radius, and proposed a general equation for determination of the impending acceleration and, in turn, the escape velocity.

II. Cohesive Force between Two Particles
At low water content, the macroscopic cohesion is influenced by the presence of pendular liquid bridges between particles. Reference [6] expressed the capillary force as an explicit function of local geometrical and physical parameters. They suggested a relation between the geometric parameters of two particles of unequal radii R1 and R2, separated by the inter-particle (surface-to-surface) distance D, and the force acting between them. The equation proposed by them is

F = π γ √(R1 R2) [ c + exp( a D/R + b ) ]   (1)

where the coefficients a, b and c are functions of the volume V of the liquid bridge, γ is the surface tension, θ the contact angle and R = max(R1, R2). Also,

a = −1.1 (V/R³)^(−0.53)   (2a)
b = (−0.148 ln(V/R³) − 0.96) θ² − 0.0082 ln(V/R³) + 0.48   (2b)
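The capillary-force relation of [6] is straightforward to evaluate numerically. A minimal sketch, assuming the coefficient fits a, b and c as commonly quoted from that reference (including the constant c given next); all numerical inputs are illustrative, not values taken from this paper:

```python
import math

# Sketch: capillary cohesion force between two unequal spheres after the fit
# of Soulie et al. [6]. The coefficient expressions are assumed from that
# reference; the numerical inputs below are illustrative.

def capillary_force(r1, r2, d, v, gamma, theta=0.0):
    """Capillary force (N) between spheres of radii r1, r2 (m), with surface
    gap d (m), liquid-bridge volume v (m^3), surface tension gamma (N/m) and
    contact angle theta (rad)."""
    r = max(r1, r2)
    vstar = v / r**3  # dimensionless bridge volume
    a = -1.1 * vstar ** (-0.53)
    b = (-0.148 * math.log(vstar) - 0.96) * theta**2 \
        - 0.0082 * math.log(vstar) + 0.48
    c = 0.0018 * math.log(vstar) + 0.078
    return math.pi * gamma * math.sqrt(r1 * r2) * (c + math.exp(a * d / r + b))

r = 0.4e-3      # particle radius: 0.4 mm
v = 10e-12      # bridge volume: 10 nl expressed in m^3
gamma = 0.073   # surface tension (N/m), assumed value for water

f_near = capillary_force(r, r, 1e-6, v, gamma)
f_far = capillary_force(r, r, 20e-6, v, gamma)
print(f_near > f_far)  # True
```

Since a is negative, the force decays exponentially as the gap D widens, which is the behaviour underlying the escape-velocity trends reported later in this paper.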

153 Sanchayan Mukherjee, Escape Velocity of a Particle on a Riverbank with Partially Saturated Soil under Cohesion, Global Journal on Advancement in Engineering and Science, 2(1), March 2016, pp

c = 0.0018 ln(V/R³) + 0.078   (2c)

These equations can be used to express the capillary cohesion between two soil particles.

III. General Equations of Acceleration of Particles
In the Truncated Pyramid Model, particles are assumed to be spherical and materially homogeneous; therefore, Eqs. (1)-(2c) hold good for this analysis, and the value of the surface tension for a particular application remains the same for all particles. The size of the particles increases slightly row-wise and column-wise, because larger particles are likely to be further away from the bank and towards the bottom. Since the increase in particle size is assumed to be very slight, the mean radius can be considered to remain constant throughout the domain. The position of a particle is indicated by the suffixes i and j, where i and j denote the row number and the column number, respectively. Frictional force between the particles and water is neglected, and angles are expressed in terms of approximate trigonometric relations. The analysis of the force components requires the calculation of the angular positions of the particles; as the model itself defines the geometry of arrangement, the angles can be found in terms of the radii of three adjacent particles.
The general equation of impending acceleration as given by [5] can be written for the x and y directions in a slightly modified form as follows:

ẍ_ij = [3 / (4 π R_ij³ ρ_s)] [ F_1x(R_ij, R_{i+1,j+1}) + F_2x(R_{i-1,j}, R_ij) + F_3(R_ij, R_{i,j+1}) + F_4x(R_ij, R_{i+1,j}) + F_5x(R_{i-1,j-1}, R_ij) + F_6(R_{i,j-1}, R_ij) ]   (3)

Here F_1x, F_2x, F_4x and F_5x are the x-components of the capillary force of Eq. (1) between particle ij and its diagonal neighbours i+1,j+1; i-1,j; i+1,j and i-1,j-1, respectively, while F_3 and F_6 are the forces between particle ij and its same-column neighbours i,j+1 and i,j-1, which act wholly along x. Each of these terms, Eqs. (4a)-(4f), is the force of Eq. (1) evaluated with the coefficients a, b, c of the particle pair concerned and resolved along x through direction cosines fixed, in terms of the radii of adjacent particles, by the pyramid geometry of [5].

ÿ_ij = (1 − ρ/ρ_s) g + [3 / (4 π R_ij³ ρ_s)] [ F_1y(R_ij, R_{i+1,j}) + F_2y(R_ij, R_{i+1,j+1}) + F_3y(R_{i-1,j-1}, R_ij) + F_4y(R_{i-1,j}, R_ij) ]   (5)

S. Mandal (Editor), i-con 2016 GJAES Page 145

Similarly, F_1y, F_2y, F_3y and F_4y, Eqs. (6a)-(6d), are the y-components of the capillary force of Eq. (1) between particle ij and particles i+1,j; i+1,j+1; i-1,j-1 and i-1,j, respectively, resolved through direction cosines fixed by the model geometry of [5]. Here ρ and ρ_s are the densities of water and of the sediment particles, respectively, and g is the acceleration due to gravity. The resultant impending acceleration can be calculated as

f_ij = √( ẍ_ij² + ÿ_ij² )   (7)

From the momentum law, the escape velocity of the particle ij would be

V_s,ij = √( 2 R_ij f_ij )   (8)

IV. Calculation of Sediment Particle Escape Velocity from Bank
As per the calculation made by [3], the escape velocity, normal to the bank surface, of a sediment particle of mean diameter 0.8 mm comes out to be V_sn = m s⁻¹. In this paper, at first, all particles are assumed to have a diameter equal to the mean diameter assumed by Duan, i.e., 0.8 mm. Also, escape velocities of only the particles on the bank surface (i.e. with j = 1) have been calculated. The calculations are repeated for a ±1 percent variation of this value of particle radius to note the effect of a small variation of average particle size on the escape velocity.
The equation given by [6] for the cohesive force between two particles with entrapped water between them has been used in this model, with the following values as suggested there:

Volume of the liquid bridge, V: 10 nl and 20 nl
Surface tension, γ: N/m
Contact angle, θ: 0°

The values of the coefficients a_ij, b_ij and c_ij in the equation of cohesive force between two particles are functions of their radii and of the volume of the liquid bridge. Here, values of a_ij, b_ij and c_ij have been calculated for mean radii of 0.4 mm, 0.396 mm and 0.404 mm, for liquid-bridge volumes of 10 nl and 20 nl; in total, six sets of values of a_ij, b_ij and c_ij are thus obtained. Based on these values, the impending acceleration and, therefore, the escape velocity can be calculated using the equations proposed in the Truncated Pyramid Model.

V. Results and Discussions
Figs. 1 and 2 show the escape velocities of the particles of radii 0.396 mm, 0.4 mm and 0.404 mm for different inter-particle distances, corresponding to liquid-bridge volumes of 10 nl and 20 nl, respectively.
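Once the resultant impending acceleration f_ij has been obtained from the force balance, the final step of the model is a single line, assuming the momentum-law form V = √(2 R f) of Eq. (8). A minimal sketch; the acceleration values below are hypothetical placeholders used only to show the decreasing trend with inter-particle distance, not outputs of the model:

```python
import math

# Sketch: escape velocity from the resultant impending acceleration, as in the
# final step of the Truncated Pyramid Model. Acceleration values below are
# hypothetical placeholders, not computed model outputs.

def escape_velocity(radius_m, impending_accel):
    """Escape velocity (m/s) for a particle of given radius (m) and
    resultant impending acceleration (m/s^2)."""
    return math.sqrt(2.0 * radius_m * impending_accel)

r = 0.4e-3  # mean radius 0.4 mm, as used in Section IV

# Hypothetical impending accelerations for increasing inter-particle distance:
# a wider gap weakens cohesion, lowering the acceleration needed to escape.
accelerations = [50.0, 30.0, 12.0, 4.0]  # m/s^2
velocities = [escape_velocity(r, f) for f in accelerations]

print(all(v1 > v2 for v1, v2 in zip(velocities, velocities[1:])))  # True
```

Because the square root is monotonic, any decrease of the impending acceleration with inter-particle distance carries over directly to the escape velocity, which is the trend seen in Figs. 1 and 2.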

Figure 1: Escape velocity vs. inter-particle distance for liquid-bridge volume of 10 nl
Figure 2: Escape velocity vs. inter-particle distance for liquid-bridge volume of 20 nl

Fig. 3 shows, for each mean particle radius (0.396 mm, 0.4 mm and 0.404 mm), the inter-particle distance at which the escape velocity has zero deviation from the value obtained by [3], for a liquid-bridge volume of 10 nl. Fig. 4 shows similar results for a liquid-bridge volume of 20 nl.

Figure 3: Zero-deviation inter-particle distance for liquid-bridge volume of 10 nl

Figure 4: Zero-deviation inter-particle distance for liquid-bridge volume of 20 nl

From the results shown in Figs. 1 and 2 it is clear that, for a particular volume of the liquid bridge, the escape velocity of the particles increases as the particle radius decreases. This is because smaller particles are attracted with a greater magnitude of force by the surrounding particles; more force is therefore required to separate them, and consequently they need a greater velocity to escape from the bank. The values of the escape velocity also differ for different volumes of the liquid bridge. Figs. 3 and 4 indicate that the value of the escape velocity having zero deviation from that obtained by [3] changes with the volume of the liquid bridge, and that the range of these zero-deviation values of the inter-particle distance increases as the volume of the liquid bridge increases.

VI. Conclusion
The results indicate that the escape velocity of the particles is greatly influenced by the particle radii, the inter-particle distances and the volumes of the liquid bridge between the particles. For a fixed volume of the liquid bridge, the escape velocity increases as the mean particle radius decreases: smaller particles experience a greater magnitude of cohesive force, and the velocity required to separate them from the surface is correspondingly higher. The escape velocity decreases, however, as the inter-particle distance increases. The volume of the liquid bridge also has a significant role to play: the range of the zero-deviation values of the inter-particle distance increases with increase in the volume of the liquid bridge.
The Truncated Pyramid Model also yields a good solution for determination of the escape velocity of a particle, as well as of the other relevant parameters, in different circumstances, as it has the potential to take into account the variations likely to occur in different practical situations. In addition, this method helps to estimate the parameters for individual particles and their behaviour. Being quite general and flexible in nature, it could contribute to analyzing the force system to which a particle on a riverbank is subjected. In a nutshell, the present method based on the Truncated Pyramid Model not only takes into account the influence of all the neighbouring particles at the micro-level, but also captures the influence of the variability of the water-table in terms of the entrapped liquid bridge.

References
[1] S.E. Darby, D. Gessler and C.R. Thorne, Computer Program for Stability Analysis of Steep, Cohesive Riverbanks, Earth Surface Processes and Landforms, vol. 25, pp
[2] W. Zeng and M.B. Beck, STAND, a Dynamic Model for Sediment Transport and Water Quality, Journal of Hydrology, vol. 277, no. 1-2, pp
[3] J.G. Duan, Analytical Approach to Calculate Rate of Bank Erosion, Journal of Hydraulic Engineering, vol. 131, no. 11, pp
[4] E. Amiri-Tokaldany, S.E. Darby and P. Tosswell, Coupling Bank Stability and Bed Deformation Models to Predict Equilibrium Bed Topography in River Beds, Journal of Hydraulic Engineering, vol. 133, no. 10, pp
[5] S. Mukherjee and A. Mazumdar, Study of Effect of the Variation of Inter-particle Distance on the Erodibility of a Riverbank under Cohesion with a New Model, Journal of Hydro-Environment Research, vol. 4, no. 3, pp
[6] F. Soulie, M.S. El Youssoufi, F. Cherblanc and C.Y. Saix, Capillary Cohesion and Mechanical Strength of Polydisperse Granular Materials, European Physical Journal, vol. 21, pp

157 Special Issue: Conference Proceeding of i-con-2016 Global Journal on Advancement in Engineering and Science (GJAES) Vol. 2, Issue 1 : March-2016, ISSN (Print): Original Research Work

AN EXPERIMENTAL INVESTIGATION ON THE GRINDABILITY OF INCONEL USING ALUMINA WHEEL UNDER DRY CONDITION

Arnab Kundu 1, Ayan Banerjee 2, Manish Mukhopadhyay 3, Sirsendu Mahata 4, Bijoy Mandal 5 and Santanu Das 6
Department of Mechanical Engineering, Kalyani Government Engineering College, Kalyani, Nadia, West Bengal , INDIA
1 arnab @gmail.com, 2 ayan @gmail.com, 3 manishmukhopadhyay@gmail.com, 4 maha_200431@rediffmail.com, 5 bijoymandal@gmail.com, 6 sdas.me@gmail.com

Abstract: In the ever-changing world of the manufacturing industries, constant research and development has led to extensive use of Inconel alloys, which are nickel-base superalloys. These alloys are widely used in gas turbine blades, seals and combustors, as well as turbocharger rotors and seals, high-temperature fasteners, chemical processing and pressure vessels, heat exchanger tubing, steam generators, etc. Certain properties of these alloys, viz. high strength and high resistance to temperature and corrosion, make them commercially attractive, but the same properties also make Inconel a difficult-to-grind material, mainly due to intense wheel loading, workpiece surface deterioration and high heat generation. A proper wheel has to be selected to minimize cutting forces and to reduce wheel wear as well as cutting temperature, particularly during dry grinding. In the present investigation, experiments have been performed to make a comparative study of the grindability of Inconel 600 alloy under two different infeed values. It has been observed that the grindability of Inconel 600 at 10 μm infeed is better than at 20 μm infeed in dry grinding, with respect to grinding forces, surface roughness, grinding ratio and the observed chip forms.

Keywords: Grinding, Inconel, wheel loading, grinding forces, surface roughness.

I.
Introduction

Grinding is a well-known abrasive machining process that employs a grinding wheel as the cutting tool. Excess workpiece material is removed in the form of microscopic chips by the grinding wheel, which is composed of a large number of cutting edges constituted by the hard and sharp abrasive grits held strongly in the wheel by a suitable bond material. Average surface roughness (Ra) as low as 0.1 μm is obtainable through grinding, which is up to ten times better than with either turning or milling [1]. Advanced grinding processes find major applications in the aerospace, energy and transport industries, where the surface and subsurface quality of the components manufactured is of prime importance, as the components fail mainly by fatigue, creep and stress. A major aerospace alloy is Inconel, a nickel-chromium superalloy. During the last 20 years, the use of these superalloys has increased significantly in various industries due to their excellent properties. These alloys are widely used in gas turbine blades, seals and combustors, as well as turbocharger rotors and seals, high-temperature fasteners, chemical processing and pressure vessels, heat exchanger tubing, steam generators, etc. Inconel 600 is a nickel-chromium alloy having high creep-rupture strength at high temperatures up to about 700 °C (1290 °F). The versatility of Inconel 600 has led to its use in a variety of applications involving temperatures from cryogenic to above 2000 °F [2]. High hardness, high hot strength and low thermal conductivity make it a difficult-to-machine material. High cutting forces and the heat generated during grinding of Inconel lead to poor surface quality and shorten wheel life. Hence, proper selection of grinding parameters has to be made. As far as the grinding of Inconel is concerned, few studies have been reported to date. These include investigations of the grinding mechanism and of surface integrity. Vijey et al.
[3] stated that oxide scales developed on the alloy under purely oxidizing conditions stick to the surface of Inconel 600 except at a temperature of 750 °C; the oxide formation follows a parabolic law. Tso [4] investigated the grindability of Inconel 718 with green carbide (GC), alumina (WA) and cubic boron nitride (cBN) grinding wheels. The cBN wheel produced a better surface finish than the others. Surface roughness increases with decreasing wheel speed and with increasing table speed and infeed. The GC wheel is unsuitable when grinding fluid is used, due to a large increase in grinding forces, while high wheel wear is observed with the WA wheel. Dry grinding is better for the GC wheel, while the cBN wheel has the longest wheel life under proper cutting conditions. The cBN wheel is the most suitable for grinding Inconel 718, although it is quite costly. The role of monolayer cBN wheels in the High Efficiency Deep Grinding (HEDG) of Inconel 718 was found by Patil et al. [5] to be highly favourable in terms of grinding forces, observed chip forms and specific energy requirement. There exists an optimum range of dressing depth for minimum grinding forces. Sinha et al. [6] conducted several experiments on Inconel 718 to identify the optimum dressing parameters: for minimum specific grinding forces the optimum dressing depth range is 30 to 40 μm, and specific grinding forces vary inversely with dressing lead. Worn surfaces of cutting tools have been analysed to study the wear mechanism of cemented carbide tools in turning Inconel 718 superalloys [7]. SEM analysis indicated that the wear of the carbide was caused by diffusion of elements (Ni or Fe) in the workpiece into the tool's binder (Co) by a

grain boundary diffusion mechanism, so diffusion wear is dominant. A longer tool life was obtained with a low cBN content (45-60%), small grain size and a ceramic binder. Anderson et al. [8] used a 1.5 kW CO2 laser to preheat the surface of Inconel 718 superalloy. Specific cutting energy decreased significantly during Laser Assisted Machining (LAM) compared to conventional machining, and surface finish improved two-fold as the temperature increased from room temperature to 540 °C. The process is economically beneficial, as large savings in cost are achieved. Mandal et al. [9], [10] and Singh et al. [11] compared the grindability of Inconel 600 under dry conditions, flood cooling and wet grinding with a pneumatic barrier setup. It was reported that force requirements, wheel wear and surface roughness were reduced by using the pneumatic barrier setup as compared to the other systems. In the present experimental work, the grindability of Inconel 600 at two different infeeds of 10 μm and 20 μm has been compared in terms of grinding forces, surface roughness, G-ratio and the type of chips observed. Surface grinding has been performed on a horizontal axis grinding machine using an alumina wheel in a dry environment.

II. EXPERIMENTAL DETAILS

The workpiece material used is a rectangular plate of Inconel 600 alloy having dimensions of 120 mm x 60 mm x 6 mm and a hardness of 90 HRB. Before grinding, a Rockwell hardness test was conducted to measure the hardness of the workpiece. The chemical composition of the Inconel 600 alloy used in this experiment is given in Table 1.

Table 1: Chemical composition of Inconel 600
Units Nickel Chromium Iron Manganese Carbon %

Grinding has been performed on a horizontal axis surface grinding machine.
The complete specifications and other equipment used are detailed in Table 2. Up grinding has been performed for 20 passes at 10 and 20 μm infeed. A constant wheel speed of 30 m/s and a table feed of 14 m/min are maintained throughout the whole experimental investigation. Force values have been measured by a Sushma make strain gauge type dynamometer. During each pass, both the tangential (Ft) and normal (Fn) components of force have been measured and recorded. Surface roughness has been measured by a portable surface roughness tester of Mitutoyo make.

Table 2: Experimental and equipment details
Surface Grinding Machine - Make: HMT Praga Division; Model: 452 P; Infeed Resolution: 1 μm; Main Motor Power: 1.5 kW; Maximum Spindle Speed: 2800 rpm
Grinding Wheel - Make: Carborundum Universal Limited; Type: Disc; Dimensions: ; Specification: AA60K5V
Workpiece - Material: Inconel 600; Dimensions: 120 mm x 60 mm x 6 mm; Hardness: 90 HRB
Force Dynamometer - Make: Sushma Grinding Dynamometer, Bengaluru; Model: SA 116; Range: kg; Resolution: 0.1 kg
Wheel Dresser - Make: Solar, India; Specification: 0.5 carat single point diamond tip; Dressing Infeed: 20 μm
Surface Roughness Tester - Make: Mitutoyo, Japan; Model: Surftest 301; Range: μm; Resolution: 0.05 μm
Tool Makers Microscope - Make: Mitutoyo, Japan; Model: TM 510
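As a quick consistency check on the machine settings above, the wheel surface speed follows from the spindle speed and wheel diameter via v = πDN/60. The wheel diameter is missing from the extracted table, so the 200 mm used below is only an assumed, plausible value:

```python
import math

SPINDLE_RPM = 2800          # maximum spindle speed from Table 2
WHEEL_DIAMETER_M = 0.200    # assumed wheel diameter (not given in the source)

# Peripheral (surface) speed of the wheel, m/s
surface_speed = math.pi * WHEEL_DIAMETER_M * SPINDLE_RPM / 60.0
```

With these assumed numbers the surface speed comes to about 29.3 m/s, consistent with the reported constant wheel speed of 30 m/s.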

III. RESULTS AND DISCUSSION

Fig. 2 represents the variation of tangential and normal forces with the number of passes at 10 μm infeed, while Fig. 3 represents the variation of the same forces at 20 μm infeed.

Fig. 2: Variation of grinding forces with number of passes at 10 μm infeed
Fig. 3: Variation of grinding forces with number of passes at 20 μm infeed
Fig. 4: Comparison of surface roughness (Ra) for 10 μm and 20 μm infeed

The plots make it quite clear that the normal force (Fn) is higher than the tangential force (Ft) in both cases. Fig. 2 shows a gradually increasing trend of forces, due to rapid dulling of the wheel grits and wheel loading as the passes proceed. Fig. 3 depicts a rising trend of forces up to the 13th pass, after which the forces gradually decrease. This may be due to auto-sharpening of the wheel, where dull grits get dislodged, bringing fresh grits to the wheel surface, thus improving the cutting action and decreasing the force values.
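The force trends discussed here can also be expressed as specific grinding energy, u = Ft·vs/(vw·a·b), the energy spent per unit volume of material removed. The sketch below uses the process settings of this experiment (wheel speed 30 m/s, table feed 14 m/min, 10 μm infeed, 6 mm workpiece width) but a hypothetical tangential force of 20 N, since individual force readings are not tabulated in the source:

```python
def specific_energy(ft_n, vs, vw, a, b):
    """Specific grinding energy u = Ft*vs / (vw*a*b).
    Ft in N, speeds vs and vw in m/s, infeed a and width b in m -> u in J/m^3."""
    return ft_n * vs / (vw * a * b)

# Hypothetical Ft = 20 N; remaining values from the experimental conditions.
u = specific_energy(ft_n=20.0, vs=30.0, vw=14.0 / 60.0, a=10e-6, b=6e-3)
u_j_per_mm3 = u / 1e9   # convert J/m^3 to J/mm^3
```

With these inputs u comes to roughly 43 J/mm^3, a magnitude typical of fine grinding, where most of the energy is dissipated as heat at the wheel-work interface.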

Average surface roughness values (Ra) have been measured on a portable surface roughness tester, Surftest 301 of Mitutoyo, Japan. An average of three roughness values observed at different locations in the transverse direction on the ground surface of the workpiece after 20 passes is taken. From Fig. 4, it can be clearly seen that the surface roughness (Ra) at 10 μm infeed is lower than that at 20 μm infeed. This can be attributed to the low thermal conductivity of the workpiece, which leads to more heat generation at 20 μm infeed. Strong adhesion between the wheel and workpiece can also be responsible for higher roughness values. The grinding ratio (G-ratio) is the ratio of the volume of workpiece material removed to the volume of wheel material removed. It is an important parameter in judging grindability. A higher G-ratio usually indicates good grindability, but not always; for instance, the wheel may be too hard for the workpiece material, which can increase the forces and lead to a poor surface texture.

Fig. 5: Comparison of G-ratio for both 10 and 20 μm infeeds

From Fig. 5, it is clearly seen that the G-ratio is higher at 10 μm infeed than at 20 μm infeed, indicating that better grindability is achieved at 10 μm infeed. The chips obtained and the ground surface have been observed under a toolmaker's microscope after 20 passes. Fig. 6 shows the chip morphology after 20 passes for the 10 and 20 μm infeeds.

Fig. 6: Chip morphology in case of (a) 10 μm, and (b) 20 μm
Fig. 7: Ground surface morphology in case of (a) 10 μm, and (b) 20 μm

Chips are collected from the 17th pass onwards. Fig. 6(a) shows curled chips, both continuous and discontinuous, indicating favourable grinding, while Fig. 6(b) shows mainly blocky and fragmented chips along with pulled-out grains, indicating high wheel wear and high wheel loading.
The surface topography shows chip redeposition, as evident from Fig. 7(a). Chip redeposition occurs on account of chips adhering to the extremely heated surface of the workpiece.
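The G-ratio compared in Fig. 5 reduces to a simple volume calculation. The wheel-wear volume below is a hypothetical figure chosen only to show the arithmetic; the paper does not report the measured volumes:

```python
def grinding_ratio(work_volume_mm3, wheel_wear_volume_mm3):
    """G-ratio: volume of workpiece material removed per unit volume
    of wheel material lost. Higher generally means better grindability."""
    return work_volume_mm3 / wheel_wear_volume_mm3

# Hypothetical illustration: 20 passes at 10 um infeed over the
# 120 mm x 6 mm workpiece face used in this experiment.
passes, infeed_mm, length_mm, width_mm = 20, 0.010, 120.0, 6.0
work_volume = passes * infeed_mm * length_mm * width_mm   # 144 mm^3
g = grinding_ratio(work_volume, wheel_wear_volume_mm3=12.0)  # wear value assumed
```

A radial-wear measurement on the wheel (e.g. by a razor-blade imprint before and after grinding) would supply the wheel-wear volume in practice.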

IV. Conclusion

In the present work, the effect of infeed in grinding Inconel 600 with an alumina wheel has been studied experimentally. The main results obtained are summarized as follows:
The normal force component (Fn) is higher than the tangential force component (Ft) in all the cases. Both tangential and normal forces at 20 μm infeed are higher than those at 10 μm infeed.
Surface roughness values are lower at 10 μm infeed, indicating a better surface finish.
The grinding ratio (G-ratio) is higher at 10 μm infeed.
The observed chip images reveal more shear-type chip formation at 10 μm infeed.
Chip redeposition is found on the surface of the workpiece, indicating very high heat generation.
On the whole, the grindability of Inconel 600 under dry conditions at an infeed of 10 μm is found to be better than that at 20 μm infeed.

References
[1] manufacturing.stanford.edu/processes/grinding.pdf, accessed on 14/08/2015.
[2] accessed on 14/08/2015.
[3] T. A. Vijey and V. Surianarayanan, Studies on oxidation behavior of Inconel based superalloy (Inconel 600), International Journal of Engineering Sciences & Research Technology, Vol. 2 (2013), pp
[4] P. L. Tso, Study on the grinding of Inconel 718, Journal of Materials Processing Technology, Vol. 55 (1995), pp
[5] D. V. Patil, S. Ghosh, A. Ghosh and A. B. Chattopadhyay, On grindability of Inconel 718 under high efficiency deep grinding by monolayer cBN wheel, International Journal of Abrasive Technology, Vol. 1 (2007), pp
[6] M. K. Sinha, D. Setti, S. Ghosh and P. V. Rao, An investigation into selection of optimum dressing parameters based on grinding wheel grit size, Proceedings of the 5th International & 26th All India Manufacturing Technology, Design and Research Conference, Guwahati,
[7] Y. S. Liao and R. H.
Shiue, Carbide tool wear mechanism in turning of Inconel 718, Wear, Vol. 193 (1996), pp
[8] M. Anderson, R. Patwa and Y. C. Shin, Laser-assisted machining of Inconel 718 with an economic analysis, International Journal of Machine Tools & Manufacture, Vol. 46 (2006), pp
[9] B. Mandal, A. Sarkar, D. Biswas, S. Das and S. Banerjee, An effective grinding fluid delivery technique to improve grindability of Inconel-600, Proceedings of the 5th International & 26th All India Manufacturing Technology, Design and Research Conference, Guwahati,
[10] B. Mandal, D. Biswas, A. Sarkar, S. Das and S. Banerjee, Improving grindability of Inconel 600 using alumina wheel through pneumatic barrier assisted fluid application, Advanced Materials Research, Vol (2013), pp
[11] S. K. Singh, S. R. Dutta and R. Ranjan, Grindability of Inconel-600 under different environmental conditions, International Journal of Advanced Technology in Engineering and Science, Vol. 2 (2014), pp

162 Special Issue: Conference Proceeding of i-con-2016 Global Journal on Advancement in Engineering and Science (GJAES) Vol. 2, Issue 1 : March-2016, ISSN (Print): Original Research Work

EXPERIMENTAL INVESTIGATION ON GRINDABILITY OF LOW ALLOY STEEL USING ALUMINA WHEEL UNDER DRY CONDITION

Pinaki Das 1, Sujit Majumdar 2
1 Department of Mechanical Engineering, Kalyani Government Engineering College, Kalyani, Nadia, India, 1992pinakidas2014@gmail.com
2 Department of Mechanical Engineering, Global Institute of Management & Technology, Krishnanagar, Nadia, India, sujitmajumdar2010@gmail.com

Abstract: Low alloy steel has a wide range of applications, such as vehicle parts, construction equipment, pressure vessels, piping and structural steel, because of properties like good machinability, good weldability, high ductility and notch toughness. Due to its high ductility and softness, however, low alloy steel is hard to grind because of very high wheel loading, which affects the grinding forces (tangential and normal), surface roughness, workpiece quality, chip quality and grinding ratio. These problems can be minimised by selecting appropriate values of machining parameters such as wheel speed, table feed and depth of cut (infeed). The aim of the present experiment is to find the variation of grinding parameters at three different infeeds. Grinding parameters such as grinding forces, surface roughness, G-ratio and chip quality are observed to find the better grinding condition (infeed) for dry grinding of low alloy steel using an alumina wheel. The results suggest that the grinding forces, surface roughness and G-ratio all increase with infeed.

Keywords: grinding, low alloy steel, alumina wheel, wheel loading, grinding forces, surface roughness, G-ratio, chip quality.

I.
Introduction

Grinding is a material removal process where, instead of a single or a few uniformly spaced and oriented cutting edges of identical and well-defined geometry, a very large number of randomly distributed, quite hard and stable abrasives of widely varying size, shape and geometry accomplish the material removal in the form of microscopic chips. Grinding is therefore a very complex machining process, owing to the random nature of the grits and their interaction with the workpiece. Generally, machining is done for bulk material removal, while grinding is done for finishing, to obtain high dimensional accuracy and good surface finish [1]. The grains at the surface of the wheel that actually perform the cutting operation are called active grains. In peripheral grinding, each active grain removes a short chip of gradually increasing thickness in a way that is similar to the action of a tooth on a slab milling cutter [2]. Low alloy steel has a very large number of engineering applications because, through the addition of particular alloying elements, low-alloy steels possess precise chemical compositions and provide better mechanical properties than many conventional mild or carbon steels [3]. Applications for low-alloy steels range from military vehicles, earthmoving and construction equipment, and ships to cross-country pipelines, pressure vessels and piping, oil drilling platforms and structural steel [4]. Surface quality is a very important factor for these applications, so grinding is required to obtain a better surface finish. Though the machinability of low alloy steel is good, grinding it is difficult because of high wheel loading [5]. To improve the grinding performance of low alloy steel, many researchers have performed various experiments. Engineer et al. [6] measured that during grinding only 5-30% of the grinding fluid can penetrate the wheel-workpiece contact zone at high cutting speed, due to the presence of a stiff air layer around the wheel and the centrifugal force acting on the wheel.
Most of the fluid cannot penetrate the grinding zone because of the centrifugal force acting on the wheel and the high-pressure contact between workpiece and wheel [7]. Yossifon et al. [8] observed that, for a grinding wheel, if the hardness grade increases and the grit size decreases, then the grinding force components increase. Ghosh et al. [9] reported, after single grit grinding tests on an aluminium workpiece, that specific energy consumption decreases with the increase of infeed because of lower ploughing at high infeed. Nakayama et al. [10], while researching the Z-Z method of cooling, found that much more effective control of grinding temperature could be achieved than with flood cooling, in which the centrifugal force acting on the fluid causes spin-off and wastage. Shaji et al. [11], in an attempt to avoid environmentally harmful cutting fluids, investigated the application of CaF2 as a solid lubricating medium in surface grinding. If proper application of the solid lubricant to the grinding zone could be ensured, with means for substituting the flushing function of the coolant, it would be an effective alternative to conventional flood coolants. The present experiment is

to check the grindability of low alloy steel under different infeeds. Grindability is assessed by the experimental study of grinding forces, surface roughness, G-ratio and chip quality.

II. Experimental Details

Workpiece Material: The workpiece material used is low alloy steel having a hardness of 170 HV and a size of 100 mm x 50 mm x 10 mm, whose composition is given in Table 1.

Table 1: Composition of low alloy steel
Iron Carbon Manganese Phosphorus Silicon Sulphur

Experimental setup and measurement: Experiments are done on a horizontal surface grinding machine of HMT Praga Division. Force readings (both tangential and normal) are taken for 20 up-grinding passes at 5, 10 and 15 micron infeed on a Sushma make strain gauge type dynamometer. Grinding chip and ground surface morphology are observed under a toolmaker's microscope. Surface roughness values are measured on a portable Mitutoyo make surface roughness tester. Experimental details and equipment used are provided in Table 2.

Table 2: Experimental details and equipment used
Surface Grinding Machine - Make: HMT Praga Division; Model: 452 P; Infeed Resolution: 1 µm; Main Motor Power: 1.5 kW; Maximum Spindle Speed: 2800 rpm
Grinding Wheel - Make: Carborundum Universal Limited; Type: Disc; Size: mm; Specification: AA46/54 K5V8
Workpiece - Material: Low Alloy Steel; Dimensions: 100 mm x 50 mm x 10 mm; Hardness: 22 HRC
Environment - Dry
Force Dynamometer - Make: Sushma Grinding Dynamometer, Bengaluru; Model: SA 116; Range: kg; Resolution: 0.1 kg
Wheel Dresser - Make: Solar, India; Specification: 0.5 carat Single Point Diamond Tip; Dressing Infeed: 20 µm
Surface Roughness Tester - Make: Mitutoyo, Japan; Model: Surftest 301; Range: µm; Resolution: 0.05 µm
Tool Makers Microscope - Make: Mitutoyo, Japan; Model: TM 510

III.
Experimental Results and Discussion

The following results were observed during the experiments, and the possible reasons behind them are explained in this section.

Grinding Forces: Grinding force is a very important parameter by which grindability can be determined. There are two components of grinding force: the tangential force and the normal force. In the present experiment, the grinding forces at 5, 10 and 15 micron infeed are observed.
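Given a pair of measured components, the resultant force and the force ratio (often read as a rough measure of cutting efficiency) follow directly. The force values below are hypothetical placeholders, not readings from the experiment:

```python
import math

def resultant(ft, fn):
    """Resultant grinding force from tangential and normal components, N."""
    return math.hypot(ft, fn)

def force_ratio(ft, fn):
    """Ft/Fn; a falling ratio over passes typically signals grit dulling
    and wheel loading, as more energy goes into rubbing than cutting."""
    return ft / fn

ft, fn = 12.0, 30.0          # hypothetical readings, N
r = resultant(ft, fn)        # ~32.3 N
ratio = force_ratio(ft, fn)  # 0.4
```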

Fig. 1: Plot of grinding forces vs. number of grinding passes under dry condition at 5, 10 and 15 micron infeed

From the plot it is clear that the normal force is greater than the tangential force at all the infeeds. The force requirement increases with the increase of infeed. A roughly linear increasing trend of grinding force is observed up to 6 passes; the possible reason may be the auto-sharpening effect of the wheel. Thereafter the curves become highly non-linear in nature, which may be due to the effects of high wheel loading and grit wear. This trend is found to differ in the last two grinding passes.

Surface Roughness: Grinding is mainly a surface finishing operation, so surface roughness is a very important parameter for measuring grindability. In the present experiment, surface roughness was measured after 20 grinding passes by a portable surface roughness tester. A very high surface roughness indicates a very low surface finish, while a low surface roughness value indicates a good surface finish. Average surface roughness values (Ra) are observed on the ground surface in the transverse direction, and the average of five Ra values is plotted as a histogram.

Fig. 2: Comparison of surface roughness after 20 grinding passes

It is clear from the histogram (Fig. 2) that the average surface roughness is lowest for 5 micron infeed and highest for 15 micron infeed. So a better surface finish can be obtained by grinding with 5 micron infeed. Grinding with 10 micron infeed has better surface finishing ability than grinding with 15 micron infeed.
The reason may be the higher force requirement, higher heat generation and lower rubbing action between workpiece and grinding wheel at higher infeed.

Grinding Ratio: The grinding ratio is defined as the ratio of the volume of work material removed to the volume of wheel material removed. It indicates the grindability of a material. The grinding ratio is calculated after 20 passes.

Fig. 3: Comparison of grinding ratio after 20 passes

From Fig. 3 it is clear that the value of the grinding ratio is maximum for grinding with 15 micron infeed and minimum for grinding with 5 micron infeed. The G-ratio is found to increase with the increase of infeed. The reason may be that, with higher force generation, the penetration of the grinding wheel into the workpiece is greater, so the cutting action may be better at higher infeed and the workpiece removal rate is higher.

Chip Quality: Grindability can also be determined by observing the quality of the chips formed in the grinding process. In the present experiment, chips were collected after 18 grinding passes and observed under a toolmaker's microscope.

Fig. 4: Chip forms observed after 18 passes: (a) 5 micron; (b) 10 micron; (c) 15 micron

In grinding with 5 micron infeed, long leafy chips and a very small number of spherical chips were observed, which indicates a low force requirement in grinding. In grinding with 10 micron infeed, the leafy chips are shorter than with 5 micron infeed and a large number of spherical chips were observed, which indicates higher force values and higher wheel loading. In grinding with 15 micron infeed, the leafy chips are the shortest among the three infeeds, and some blocky chips and some abrasive grits were observed, which denotes a higher force requirement, higher wheel loading and higher wheel material removal.

IV. Conclusions

Analysing the different parameters obtained during grinding of low alloy steel using an alumina wheel at 5, 10 and 15 micron infeed, the following conclusions can be made:
Force requirement is maximum for grinding with 15 micron infeed and minimum for grinding with 5 micron infeed.
The force values at 15 micron infeed are greater than those at 10 micron infeed, which in turn are greater than those at 5 micron infeed.
The average surface roughness is lowest for 5 micron infeed and highest for 15 micron infeed, so a better surface finish can be obtained by grinding with 5 micron infeed; grinding with 10 micron infeed gives a better surface finish than 15 micron infeed.
The grinding ratio is maximum for 15 micron infeed and minimum for 5 micron infeed; the value for 10 micron infeed lies between the two.
Chip quality is also better for grinding with 5 micron infeed and worse for grinding with 15 micron infeed.

V. References
[1] S. Malkin, Grinding Technology: Theory and Application of Machining with Abrasives, Ellis Horwood, UK,
[2] Accessed on
[3] P. Maynier, B. Jungmann and J. Dollet, Creusot-Loire system for the prediction of the mechanical properties of low alloy steel products, Hardenability Concepts with Applications to Steel, 2(1), 1977,
[4] Accessed on
[5] P. N. Rao, Metal Cutting and Machine Tools, Tata McGraw-Hill Publishing Company Limited, New Delhi,
[6] F. Engineer, C. Guo and S. Malkin, Experimental measurement of fluid flow through the grinding zone, ASME Journal of Engineering for Industry, 114(1), 1992,
[7] U. Baheti, C. Guo and S. Malkin, Environmentally conscious cooling and lubrication for grinding, Proceedings of the International Seminar on Improving Machine Tool Performance, San Sebastian, Spain, 2, 1998,
[8] S. Yossifon and S. Rubenstein, The grinding of workpiece materials exhibiting high adhesion. Part 2: Forces, ASME, 103(2), 1981,
[9] S. Ghosh, A. B. Chattopadhyay and S. Paul, Study of grinding mechanics by single grit grinding test, International Journal of Precision Technology, 1(3), 2010,
[10] K. Nakayama, J. Takagi and T. Abe, Grinding wheel with helical groove: an attempt to improve the grinding performance, Annals of the CIRP, 25(1), 1977,
[11] S. Shaji and V. Radhakrishnan, A study on calcium fluoride as a solid lubricant in grinding, International Journal of Environmentally Conscious Design & Manufacturing, 11(1), 2003,

167 Special Issue: Conference Proceeding of i-con-2016 Global Journal on Advancement in Engineering and Science (GJAES) Vol. 2, Issue 1 : March-2016, ISSN (Print) : Original Research Work

Uncertainty of Mathematical Modeling for River Water Quality

Sujit Kumar Dey
Department of Applied Science and Humanities, Global Institute of Management and Technology, NH34, Palpara More, Krishnagar, Nadia, West Bengal, India
sujit_d25@rediffmail.com

Abstract: The case is made for attention to the evaluation of uncertainty in water quality modeling, in the context of new demands for assessment of risk to water quality status and the typical paucity of supporting data. A framework for the modeling of water quality is outlined and presented as a potentially valuable component of broader risk assessment methodologies, and potentially useful methods of numerical uncertainty analysis are reviewed and demonstrated. In this paper the important sources of uncertainty, their propagation and the cause analysis of uncertainty are discussed.

Keywords: uncertainty, water quality, modeling, risk assessment, propagation.

I. Introduction

Mathematical modeling is a process of representing real-world problems in mathematical terms in an attempt to find solutions to those problems. A mathematical model can be considered a simplification or abstraction of a real-world problem or situation into a mathematical form, thereby converting the real-world problem into a mathematical problem. The mathematical problem can then be solved using known techniques to obtain a mathematical solution, which is then interpreted and translated into real terms. The case is made for attention to the evaluation of uncertainty in water quality modeling, in the context of new demands for assessment of risk to water quality status and the typical paucity of supporting data.
A framework for the modelling of water quality is outlined and presented as a potentially valuable component of broader risk assessment methodologies, and potentially useful methods of numerical uncertainty analysis are reviewed and demonstrated. A selective library of dynamic models and numerical tools for model solving and uncertainty analysis is compiled into novel software for model uncertainty analysis and risk-based decision support. This software is applied to a series of case studies in an exploration of the underlying numerical problems and their relevance to modelling and management objectives using relatively sparse data sets. Issues examined in some detail are the importance of reconciling numerical solution tolerances with overall model precision; the relative effects of numerical approximations and of data and model structural biases on the optimal design of field experiments and on prediction reliability; and the value and limitations of extending established methods of uncertainty analysis to decision support. These investigations lead to discussions about priorities for the water quality modelling research community in the face of contemporary and emerging numerical, technological and management problems. The main conclusion is that the current generation of modelling software can make only a very limited contribution to risk-based decision support, due to the general absence of formal uncertainty analysis capabilities.

II. Model Uncertainty

Water quality measurements are frequently applied for river quality management purposes, e.g. for the assessment of the current and historical state of surface waters, for water quality risk assessment, and for the calibration and validation of river quality simulation models based on available water quality data. The measurements are, however, subject to errors, which may be considerable in some cases or for some variables. These errors cause uncertainties in the assessments and in model calibration and validation.
A level of uncertainty applies to all models, and the application of any model should include testing and sensitivity analysis. Sensitivity analysis shows how variation of a single factor affects model outputs. Uncertainty affects data collection and all stages of the modeling process, and tends to increase both with the number of processes that feed into the model along the DPSIR (Driver-Pressure-State-Impact-Response) chain, and with complexity within the relevant model domain. In predictive models, uncertainty arises from inherent variability in natural processes, model uncertainty and parameter uncertainty. While the importance of uncertainty analysis is well recognised [8], it is usually not
S. Mandal (Editor), GJAES 2016, GJAES Page 159

included in pollutant transport models. This is a serious omission, because if the variability of the input variables is large, so too will be the variability of the output predictions [3]. If within-ecosystem variability is large, many samples need to be analysed to provide a given, defined level of certainty in a mean value. Combined spatial, temporal and analytical uncertainty may be particularly high for measurements of some of the most important chemical ecosystem drivers, e.g. total phosphorus. This has profound implications for the reliability of simple models that predict ecosystem response from, e.g., nutrient loadings. Model uncertainty is clearly of importance in the conceptualization of the process for which predictions are required. For example, a one-dimensional hydrological model would be expected to have greater predictive power than an ecological food-web systems model for lakes. Investigations into, e.g., nutrient response models suggest that prediction errors in both empirical and mechanistic models are unlikely to be under ±30% and can be more than ±100% [8]. However, it is possible that the impact on modelling of individual error terms may be overestimated compared with the combined effect of pairs of related parameters. It is also clear that error estimation is often neglected when it should not be. Increasingly, however, techniques such as Monte Carlo simulation are applied to predict frequency distributions of variables, especially for sparse data sets [9].

III. Sources of Uncertainty and their Propagation
A definition of uncertainty analysis is: the means of calculating and representing the certainty with which the model results represent reality.
The difference between a deterministic model result and reality will arise from:
(a) model parameter error;
(b) model structure error (where the model structure is the set of numerical equations which define the uncalibrated model);
(c) numerical errors (truncation errors, rounding errors and typographical mistakes in the numerical implementation);
(d) boundary condition uncertainties.
As reality can only be approximated by field data, data error analysis is a fundamental part of the uncertainty analysis. Data errors arise from:
(a) sampling errors (i.e. the data not representing the required spatial and temporal averages);
(b) measurement errors (e.g. due to methods of handling and laboratory analysis);
(c) human reliability.
Realising that an error-free model would equate to error-free observations, the relationship between the actual model result M and the actual observations O can be summarized by

M − ε_1 − ε_2 − ε_3 − ε_4 = O − ε_5 − ε_6 − ε_7,

where ε_1 to ε_4 represent the model errors arising from the four sources in the order listed above, and ε_5 to ε_7 represent the data errors arising from the three sources listed above. Representing the overall error on either side of the above equation is not generally a simple task of adding the error variances together, as might be implied by the equation. This is because the errors may be unknown, and/or not of a random nature (see below), and/or the model output may depend on the various sources of error in a manner that precludes their simple addition. It is the goal of the modeller to achieve, to within an arbitrary tolerance, an error-free model by removal of ε_1 to ε_4. However, the modeller is generally in control of neither the model structure errors ε_2, nor the numerical errors ε_3, nor the boundary condition errors ε_4. Commonly, only the values of the model parameters are under the direct control of the modeller.
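The point that error variances do not simply add can be illustrated numerically. In the sketch below (plain Python; the two artificially correlated error series standing in for ε_2 and ε_4, and all numbers, are illustrative assumptions), the variance of a sum of correlated errors exceeds the naive sum of the individual variances by twice their covariance:

```python
import random
import statistics

rng = random.Random(0)

# Two error series sharing a common driver (a toy assumption), so they are
# correlated rather than independent; think of them as stand-ins for the
# structural and boundary-condition errors discussed above.
common = [rng.gauss(0.0, 1.0) for _ in range(50_000)]
e2 = [0.8 * c + rng.gauss(0.0, 0.6) for c in common]
e4 = [0.8 * c + rng.gauss(0.0, 0.6) for c in common]

var_sum = statistics.pvariance([a + b for a, b in zip(e2, e4)])
naive = statistics.pvariance(e2) + statistics.pvariance(e4)

# Var(e2 + e4) = Var(e2) + Var(e4) + 2*Cov(e2, e4): the covariance term
# is exactly what the naive sum of variances misses.
print(f"Var(e2 + e4) = {var_sum:.2f}  vs  naive sum = {naive:.2f}")
```

Only for independent error sources do the two printed numbers coincide, which is why the equation above cannot, in general, be evaluated by adding variances.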
The aim would then become one of compensating as far as possible for ε_2 to ε_4 by identification of optimum effective parameter values. Central to this paper is the argument that there is always some ambiguity in the optimum effective parameter values, caused by the unknown natures of, and inseparability of, ε_2 to ε_7, and that this ambiguity can be represented by parametric uncertainty. As such, the model parameters are used as error-handling variables, and are identified according to their ability to mathematically explain ε_2 to ε_7. The difficult task of identifying parameter uncertainty is generally approached using methods of calibration which derive, from the pre-calibration (a priori) parameter distributions, calibrated (a posteriori) distributions. In hydrological modelling, due to lack of prior knowledge, the a priori distributions are often taken as uniform and independent [7]. On the other hand, the a posteriori distributions, constrained by the data, may be multimodal and non-linearly interdependent [10]. Inter-dependency arises when the model result is simultaneously significantly affected by two or more parameters, such that the distribution of each parameter must be regarded as conditional on the values of all interdependent parameters. Therefore, it is necessary to refer to the joint

parameter distribution, which is defined by a continuous function of all the parameters, and to sampled parameter sets rather than individual parameter values.

Fig 1: Experimental Frame Model relationship

IV. Cause Analysis of Uncertainty
Uncertainty in a water quality simulation model is inevitable due to the difficulty of identifying a single model which can accurately represent the water quality under all required model tasks. Although we have extensive knowledge about water quality processes from laboratory experiments, extrapolation of this knowledge to models of the real environment has consistently proven to be difficult. This is partly because the modelling scale is different from the laboratory scale, and the diversity of species and heterogeneity found in natural environments must be modeled approximately using lumped state variables. This means that formulations and parameter values identified at laboratory scale can only be used as a starting point for model design, rather than as a definitive end result. Nor is there yet any basis for the regionalization of water quality models; therefore, models identified for one case study cannot be used with any confidence for another. The literature describing established formulations and parameter values is evidence of the wide range of models which are equally justified prior to observing a system's behaviour in detail, and of the fact that the uncertainty associated with modelling water quality on the basis of prior knowledge is extremely large. Given that it is desirable to evaluate the performance of models with respect to observed water quality data, the accuracy, frequency and relevance of the available data dictate the attainable degree of certainty in the model.
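The narrowing from a priori to a posteriori parameter distributions described above can be sketched with a simple rejection rule in the spirit of behavioural (GLUE-like) calibration. Everything below is a toy assumption: a one-parameter decay model, synthetic noisy observations, and an arbitrary acceptance threshold.

```python
import math
import random

def model(k: float, t: float) -> float:
    """Toy simulator: first-order decay from a fixed initial value (assumed)."""
    return 10.0 * math.exp(-k * t)

rng = random.Random(1)

# Synthetic "observations" generated with a true k of 0.3 plus noise.
obs = [(t, model(0.3, t) + rng.gauss(0.0, 0.2)) for t in (1.0, 2.0, 4.0)]

def rms_error(k: float) -> float:
    return math.sqrt(sum((model(k, t) - y) ** 2 for t, y in obs) / len(obs))

# A priori: uniform and independent, as is common practice.
prior = [rng.uniform(0.0, 1.0) for _ in range(20_000)]

# A posteriori: keep only "behavioural" parameter sets with small misfit.
posterior = [k for k in prior if rms_error(k) < 0.5]

print(f"prior span 0.00-1.00 -> posterior span "
      f"{min(posterior):.2f}-{max(posterior):.2f} "
      f"({len(posterior)} of {len(prior)} samples retained)")
```

The surviving samples form the constrained (a posteriori) distribution; with several interacting parameters, they would be retained as joint parameter sets rather than as independent values.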
Unfortunately, water quality data can be expensive to collect and analyse, often requiring special handling and analysis in laboratories. This means that the data to support model identification are generally sparse, often coming from sampling programmes which are fixed in frequency and location for regulation purposes, rather than designed to capture the system's dynamic responses as required for successful model identification. Also, water quality data are susceptible to noise and bias due to sampling, handling and measurement procedures. In addition, information about model boundary conditions, such as sources of pollution, often suffers from the same shortcomings, especially for distributed variables which are difficult to measure. In summary, lack of good quality data to support model identification is a major cause of model uncertainty. Closely related to the issue of data quality is model equifinality, whereby different models appear equally justified at the model design stage, but may give widely different realisations of the future. The choice of uncertainty analysis method partly depends on the description of the parameter uncertainty, and partly on the computational resources, with Monte Carlo methods generally (but not always) being more reliable and more computationally demanding.
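A minimal Monte Carlo propagation of input uncertainty, of the kind referred to above, might look as follows. The decay model, the parameter distributions and the units are all illustrative assumptions, not taken from the paper:

```python
import math
import random
import statistics

def bod_decay(c0: float, k: float, t: float) -> float:
    """Toy first-order pollutant decay: C(t) = C0 * exp(-k*t)."""
    return c0 * math.exp(-k * t)

def monte_carlo(n: int = 10_000, t: float = 2.0, seed: int = 42):
    """Propagate input uncertainty through the model by random sampling."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        c0 = rng.gauss(10.0, 1.0)  # initial concentration, mg/L (assumed)
        k = rng.gauss(0.35, 0.05)  # decay rate, 1/day (assumed)
        samples.append(bod_decay(c0, k, t))
    return statistics.mean(samples), statistics.stdev(samples)

mean, sd = monte_carlo()
print(f"predicted concentration after 2 days: {mean:.2f} +/- {sd:.2f} mg/L")
```

The spread of the output distribution, not just its mean, is what a risk-based assessment would carry forward.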

V. Conclusion
In this paper, the sources of uncertainty, their propagation, and a cause analysis of uncertainty have been discussed for mathematical models of water quality. In most environmental modelling problems, significant bias in one or more of these errors will inevitably lead to biased parameter estimates. While the ideal solution would be to eliminate bias, for example by compensatory adjustments to data or by model structure refinement, such measures are often not practical and never comprehensive. In recognition of this, the potential importance of biased model calibration was illustrated in this paper, and significant attention is given to methods of uncertainty analysis which aim to deliver some robustness to bias.

VI. References
[1] Adams, B. and Reckhow, K.H. An examination of the scientific basis for mechanisms and parameters in water quality models. Unpublished paper, available from the second author at www2.ncsu.edu/ncsu/cil/wrri/adamsreckhow.pdf
[2] Beck, M.B. and Reda, A. (1994). Identification and application of a dynamic model for operational management of water quality. Wat. Sci. Tech., 30(2).
[3] Beck, M.B. Uncertainty in water quality models. Water Resources Research, 23(8).
[4] Berthouex, P.M. and Brown, L.C. Statistics for Environmental Engineers, CRC Press.
[5] Cox, B.A. (2003). A review of dissolved oxygen modelling techniques for lowland rivers. Sci. Total Environ.
[6] Gupta, M. and Sharma, S., Acta Ciencia Indica, Vol. XVII(M), No. 2 (1991).
[7] Hornberger, G.M. and Spear, R.C. Eutrophication in Peel Inlet, 1. Problem-defining behaviour and a mathematical model for the phosphorus scenario. Water Research, 14.
[8] Reckhow, K.H. Water quality simulation modeling and uncertainty analysis for risk assessment and decision making.
Ecological Modelling, 72(1-2).
[9] Shanahan, P., Henze, M., Koncsos, L., Rauch, W., Reichert, P., Somlyódy, L. and Vanrolleghem, P. (1998). River water quality modelling: II. Problems of the art. Wat. Sci. Tech., 38(11).
[10] Sorooshian, S. and Gupta, V.K. Model calibration. In: Computer Models of Watershed Hydrology, Singh, V.P. (Ed.), Water Resources Publications.

Special Issue: Conference Proceeding of i-con-2016
Global Journal on Advancement in Engineering and Science (GJAES), Vol. 2, Issue 1, March-2016, ISSN (Print)

Review Work

Mathematical Modeling: An Overview

Sujit Kumar Dey
Department of Applied Sciences and Humanities, GIMT, Krishnagar, Nadia, India
sujit_d25@rediffmail.com

Abstract: The process of developing a model, i.e. making a simplified abstract representation of real-world events, activities or systems by capturing their behavioral characteristics, is known as modeling. Model building helps scientists, engineers and researchers to better understand a system. It better explains the past behavior of the system and predicts its future behavior. It also helps in better and more efficient planning, and in evaluating policies and strategies for controlling the system in a desired fashion without disturbing it. In this paper, the attempt to understand system behavior by using mathematical modeling is discussed.

Keywords: abstract representation; model; planning; system behavior

I. Introduction
Mathematical modeling is a fast-developing area of research and development which has tremendous scope with respect to environmental planning and conservation [1]. Mathematical modelling is a process of representing real world problems in mathematical terms in an attempt to find solutions to the problems. It is in human nature to want to understand dynamic systems, control them, and above all predict their future behavior. During the last century, this desire has led to inter-disciplinary research into modeling and simulation, bringing together results from mathematics, computer science, cognitive sciences, and a variety of application domain-specific research. A mathematical model can be considered as a simplification or abstraction of a (complex) real world problem or situation into a mathematical form, thereby converting the real world problem into a mathematical problem.
The mathematical problem can then be solved using whatever techniques are known to obtain a mathematical solution. This solution is then interpreted and translated into real terms. Figure 1 shows a simplified view of the process of mathematical modeling. A mathematical model is a quantitative representation of a system based on mathematical relationships. Models are built in order to: understand the functioning and internal structure of a system; plan measuring campaigns or position sensors; monitor the system; and control interventions. The model methodology can be described as follows.

II. Model Methodology
Figure 1: Flowchart of model methodology

III. Physical System Model Building
The following mathematical tools are most frequently used in the development of models.
1. Set theory and transformations: mostly used to represent any kind of model; employed in the development of state and change-of-state models.
2. Matrix algebra: concerned with the description and manipulation of lists and tables of numbers.
3. Difference and differential equations: used to develop models that describe quantitatively the way systems change over time.
Building a model involves a number of steps, but it is not a straightforward procedure, as illustrated in Figure 2. Where more than one model is possible, a judicious choice of a particular model, based on physical simulation of the system, is very important.

Figure 2: Building a model

The models of any physical system can be classified into two classes, viz. deterministic models and probabilistic models [2]. Deterministic models are those in which all parameters and functional relationships are known with certainty. In probabilistic models, at least one parameter or decision variable is a random variable. These models reflect to some extent the complexity of the real world and the uncertainty surrounding it [8]. Static models are time-independent, so they are useful to represent average values (e.g. energy balances in ecosystems). The relationships between inputs, disturbances and outputs are instantaneous, since they are algebraic (y = f(u, d)), and they are based on static mass and energy balances and on static equilibrium equations. Dynamic models describe the behavior of time-varying quantities, and they are necessary to study the impact of any internal structure and/or external time-varying stimuli (e.g. population dynamics, energy fluxes in ecosystems, etc.). These models are based on differential equations.
Generally, such models are based on differential equations of the following two types:
1. Ordinary Differential Equations (ODE), e.g. continuous population dynamics;
2. Partial Differential Equations (PDE), e.g. pollutant diffusion [4].
Numerical models can be statistical, stochastic or deterministic in nature. A statistical model doesn't try to explain causal connections or the internal dynamics of the system; it just traces the overall characteristics of the available data sets [7]. However, we can make qualitative deductions about the phenomena that generated the data, their statistical properties, and the recognition of anomalous data. A stochastic model reproduces the temporal progress of data without the claim of understanding this progress. It is useful for predictive intents but doesn't improve knowledge of the system, and its use must be preceded by a structural analysis in order to establish causal input/output correspondences; what's more, for its calibration it needs a huge amount of data even if its structure is very simple. It is advantageous when we want to obtain an operative instrument that reproduces the observed output as well as possible [6]. Deterministic models try to explain the internal mechanism of the process. The complexity of the model depends both on the available knowledge and on the use we have for the model. A sound knowledge of the fundamental laws that rule the system is necessary.
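As a sketch of the ODE class mentioned above, the logistic population model dN/dt = r*N*(1 - N/K) can be integrated with an explicit Euler scheme. The parameter values below are illustrative assumptions:

```python
def logistic_step(n: float, r: float, k: float, dt: float) -> float:
    """One explicit-Euler step of the logistic ODE dN/dt = r*N*(1 - N/K)."""
    return n + dt * r * n * (1.0 - n / k)

def simulate(n0: float = 10.0, r: float = 0.5, k: float = 100.0,
             dt: float = 0.1, steps: int = 200) -> float:
    """Integrate from N(0) = n0 over steps*dt time units."""
    n = n0
    for _ in range(steps):
        n = logistic_step(n, r, k, dt)
    return n

# Over 20 time units the population approaches the carrying capacity K = 100.
final = simulate()
print(f"N(20) is approximately {final:.1f}")
```

A PDE model such as pollutant diffusion would add spatial derivatives and require a grid in space as well as time, but the time-stepping idea is the same.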

Actually, a model gives an accurate description of a system within the context of a given experimental frame. The term "accurate description" needs to be defined precisely: usually, certain properties of the system's structure and/or behaviour must be reflected by the model within a certain range of accuracy.

Figure 3: System versus Experimental Frame

IV. Modeling and Simulation
Modeling covers the understanding and representation of structure and behavior at an abstract level, whereas simulation produces behavior as a function of time based on an abstract model and initial conditions. Model validation is the process of comparing experimental measurements with simulation results within the context of a certain experimental frame. When the comparison shows differences, the formal model built may not correspond to the real system. A large number of matching measurements and simulation results generates confidence, but does not always prove the validity of the model [3].

Figure 4: Modeling Simulation Morphism

Various kinds of validation can be identified, e.g. conceptual model validation, structural validation, and behavioural validation [5]. Conceptual validation is the evaluation of a conceptual model with respect to the system, where the objective is primarily to evaluate the realism of the conceptual model with respect to the goals of the study. Structural validation is the evaluation of the structure of a simulation model with respect to the perceived structure of the system. Behavioural validation is the evaluation of the simulation model's behaviour. An overview of verification and validation activities is shown in Figure 4. It is noted that the correspondence in generated behavior between a system and a model will only hold within the limited context of the experimental frame.
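One common ingredient of behavioural validation is a scalar misfit measure between measurements and simulation results, such as the root-mean-square error. The sketch below uses hypothetical data:

```python
import math

def rmse(measured, simulated):
    """Root-mean-square error between experimental and simulated values."""
    assert len(measured) == len(simulated)
    return math.sqrt(sum((m - s) ** 2 for m, s in zip(measured, simulated))
                     / len(measured))

measured = [1.0, 1.9, 3.2, 3.9]   # hypothetical field data
simulated = [1.1, 2.0, 3.0, 4.1]  # hypothetical model output

score = rmse(measured, simulated)
print(f"RMSE = {score:.3f}")  # a small RMSE builds confidence, not proof
```

As the text notes, a good score within one experimental frame generates confidence but does not prove validity outside that frame.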
V. Conclusion
In this paper, the relevance of teaching mathematical modelling as an important part of engineering science was discussed, and model building, methodology and validation were presented. Mathematical modelling is a process of representing real world problems in mathematical terms in an attempt to find solutions to the problems, and it can be considered as a simplification or abstraction of a (complex) real world problem or

situation into a mathematical form, thereby converting the real world problem into a mathematical problem. The mathematical problem can then be solved using whatever known tools or techniques are available to obtain a mathematical solution, which is then interpreted and translated into real terms. Here the attempt was to define mathematical modeling, and further improvement can be made in future work on this basis.

VI. References
1. Adrian, D.D. and Sanders, T.G., Oxygen Sag Equation for Half Order BOD Kinetics, Journal of Environmental Systems, Vol. 22, No. 4.
2. Basmadjian, Diran (2003). Mathematical Modeling of Physical Systems: An Introduction, Oxford University Press, New York.
3. Bender, Edward A. (1978). An Introduction to Mathematical Modeling, John Wiley & Sons.
4. Box, G.E.P. and Jenkins, G.E. Time Series Analysis: Forecasting and Control, Holden-Day.
5. Garratt, M. (1975). Statistical techniques for validating computer simulation models. Technical Report No. 286, Colorado State University, Fort Collins, CO, USA.
6. Harr, M.E. Probabilistic estimates for multi-variate analyses. Applied Mathematical Modeling, 13(5).
7. Kapur, J.N. (1979). Mathematical Modelling, a New Identity for Applied Mathematics, Bull. Math. Ass. Ind., 11.
8. Dey, Sujit Kumar (2008). Mathematical modeling for river water quality: A case study. PhD thesis, University of North Bengal, p. 87 (ch. 1).

Special Issue: Conference Proceeding of i-con-2016
Global Journal on Advancement in Engineering and Science (GJAES), Vol. 2, Issue 1, March-2016, ISSN (Print)

Original Research Work

L(3,2,1)-Labelling of Cycles

Nasreen Khan (Department of Mathematics) and Kaskshan Khan (Department of Computer Science Engineering)
Global Institute of Management and Technology, NH-34, Palpara More, Krishnagar, Nadia, West Bengal, INDIA
nasreen.khan10@gmail.com and kak786kaskshan@gmail.com

Abstract: Radio signal interference can be modeled using distance labelling, where the labels assigned to each vertex depend on the distance between vertices and the strength of the radio signal. An L(3,2,1)-labeling is a simplified model for the channel assignment problem. An L(3,2,1)-labeling of a graph G is a function f from the vertex set V(G) to the set of positive integers such that for any two vertices x, y: if d(x,y) = 1, then |f(x) − f(y)| ≥ 3; if d(x,y) = 2, then |f(x) − f(y)| ≥ 2; and if d(x,y) = 3, then |f(x) − f(y)| ≥ 1. The L(3,2,1)-labeling number k(G) of G is the smallest positive integer k such that G has an L(3,2,1)-labeling with k as the maximum label. In this paper we determine the L(3,2,1)-labeling number for cycles of finite length and for bouquets of cycles (of finite lengths) joined at a vertex. We also present upper and lower bounds for k(G) in terms of the maximum degree of G. There are many real-life applications of graph theory, among them network communication systems, radio communication systems, computer scheduling, and searching of files and folders in a computer.

Keywords: distance labelling; radio labelling; graph colouring; lambda-labelling; L(h,k)-labelling; L(d,1,1)-labelling

I.
Introduction
The channel assignment problem is an engineering problem in which the task is to assign a channel (a nonnegative integer) to each FM radio station in a set of given stations such that there is no interference between stations and the span of the assigned channels is minimized. The level of interference between any two FM radio stations correlates with the geographic locations of the stations: closer stations have a stronger interference, and thus there must be a greater difference between their assigned channels. In 1980, Hale introduced a graph theory model of the channel assignment problem in which the problem was represented using the idea of vertex coloring [1]. Vertices of the graph correspond to the radio stations, and the edges show the proximity of the stations. In 1991, Roberts proposed a variation of the channel assignment problem in which the FM radio stations were considered either close or very close. Close stations were vertices of distance two apart on the graph and were assigned channels that differed by two; stations that were considered very close were adjacent vertices on the graph and were assigned distinct channels [2].
More precisely, Griggs and Yeh defined an L(2,1)-labeling of a graph as a function f which assigns to every vertex a label from the set of positive integers such that the following conditions are satisfied: |f(v_x) − f(v_y)| ≥ 2 if the distance D(v_x, v_y) between v_x and v_y is 1, and |f(v_x) − f(v_y)| ≥ 1 if D(v_x, v_y) = 2 [3]. L(2,1)-labeling has been studied extensively in recent years. In 2001, Chartrand et al. introduced the radio-labeling of graphs; this was motivated by the regulations for channel assignments in the channel assignment problem [4]. Radio-labeling takes into consideration the

diameter of the graph, and as a result, every vertex is related. Practically, interference among channels may go beyond two levels. L(3,2,1)-labeling naturally extends L(2,1)-labeling by taking into consideration vertices which are within a distance of three apart, but it remains less difficult than radio-labeling. Clipperton et al. [5] determined the L(d,2,1)-labeling number for paths, cycles, complete graphs and complete bipartite graphs, and their results are used as a basis here. [Introduction adapted slightly from that of Clipperton et al. [5].] In multi-hop radio networks, one of the problems that has been studied extensively is the radio-frequency assignment problem. Each station and its neighbors are assigned frequencies so as to avoid signal collisions. This is equivalent to a graph coloring problem, where vertices are stations and edges represent interference between stations. The type of graph coloring problem varies depending on the kinds of frequency collisions that are to be avoided. If the only requirement is to avoid direct collisions between two neighbors, then this coincides with the normal graph coloring problem and its associated chromatic number; we call this the L(1)-labeling problem of a graph G. Should it be desired that each station and all of its neighbors have distinct frequencies, we have the L(1,1)-labeling problem. This is also known as distance-two coloring of a graph, or coloring of the square of the graph, and has been well studied. In practice, the distances in some wireless networks can be quite close (for example, in a cellular network). Thus it may be necessary that not only stations of distance two apart have distinct frequencies, but perhaps distance three or more. Let G = (V, E) be a graph and let f be a mapping f : V → N.
The distance between two vertices v_x and v_y is denoted by D(v_x, v_y), and the mapping f is an L(3,2,1)-labelling of G if, for all vertices v_x, v_y ∈ V,

|f(v_x) − f(v_y)| ≥ 3, if D(v_x, v_y) = 1;
|f(v_x) − f(v_y)| ≥ 2, if D(v_x, v_y) = 2;
|f(v_x) − f(v_y)| ≥ 1, if D(v_x, v_y) = 3.

The L(3,2,1)-labeling number k(G) of G is the smallest positive integer k such that G has an L(3,2,1)-labeling with k as the maximum label. In this paper we focus only on the L(3,2,1)-labeling of a single cycle and of a bouquet of cycles joined at a common cut vertex, and we find upper and lower bounds for the L(3,2,1)-labeling number k(G).

II. L(3,2,1)-labelling of a cycle
Lemma 1: Let C_n be a cycle of finite length n. Then k(C_n) lies between Δ + 4 and Δ + 10, where Δ = 2 is the degree of the cycle.
Proof: Let C_n be a cycle of length n. We classify C_n into five groups, viz. C_3, C_{4k}, C_{4k+1}, C_{4k+2} and C_{4k+3}. Let v_0, v_1, v_2, ..., v_{n−1} be the vertices of C_n. The L(3,2,1)-labelling procedure is given as follows.
Case 1: For n = 3.
f(v_i) = 0, if i = 0; 3, if i = 1; 6, if i = 2.
Then k(C_3) = 6 = Δ + 4.
Case 2: For n = 4k ≡ 0 (mod 4). We label the vertices v_0, v_1, v_2, ..., v_{4k−1} as follows:
f(v_i) = 0, if i ≡ 0 (mod 4); 3, if i ≡ 1 (mod 4); 6, if i ≡ 2 (mod 4); 9, if i ≡ 3 (mod 4).
Here k(C_{4k}) = 9 = Δ + 7.
Case 3: For n = 4k + 1 ≡ 1 (mod 4). Here we first label the vertices v_0, v_1, v_2, ..., v_{4k−1} using the procedure developed in Case 2, and then we label the remaining vertex v_{4k} as f(v_{4k}) = 12.

So, k(C_{4k+1}) = 12 = Δ + 10.
Case 4: For n = 4k + 2 ≡ 2 (mod 4). We label the vertices v_0, v_1, v_2, ..., v_{4k−1} using the procedure given in Case 2, and then we label the remaining vertices v_{4k} and v_{4k+1} as f(v_{4k}) = 4 and f(v_{4k+1}) = 7. So, k(C_{4k+2}) = 9 = Δ + 7.
Case 5: For n = 4k + 3 ≡ 3 (mod 4). We label the vertices v_0, v_1, v_2, ..., v_{4k−1} using the procedure given in Case 2, and then we label the remaining vertices v_{4k}, v_{4k+1} and v_{4k+2} as f(v_{4k}) = 1, f(v_{4k+1}) = 4 and f(v_{4k+2}) = 7. Here k(C_{4k+3}) = 9 = Δ + 7.
From all the above cases we see that the L(3,2,1)-labelling number k(C_n) lies between Δ + 4 and Δ + 10, i.e., Δ + 4 ≤ k(C_n) ≤ Δ + 10.

III. L(3,2,1)-labelling of two cycles joining at a common cut vertex
Lemma 2: Let C_n and C_m be two cycles of finite lengths n and m respectively, joined at a common cut vertex v_0 (we write C_n ∨ C_m for this bouquet). Then the value of k(C_n ∨ C_m) lies between Δ + 5 and Δ + 10, where Δ = 4 is the degree of v_0.
Proof: Let v_0, v_1, v_2, ..., v_{n−1} be the vertices of C_n and v_0, v'_1, v'_2, ..., v'_{m−1} be the vertices of C_m.
Case 1: For n = 3 and m = 3. We label the vertices of the first C_3 by the same procedure as in Case 1 of Lemma 1. Then we label the vertices of the second C_3 as f(v'_1) = 8 and f(v'_2) = 11. Here k(C_3 ∨ C_3) = 11 = Δ + 7.
Case 2: For n = 4k + i and m = 3, for i = 0, 1, 2, 3.
Case 2.1: For n = 4k ≡ 0 (mod 4) and m = 3. The labelling of C_n is the same as in Case 2 of Lemma 1; the other cycle is labelled f(v'_1) = 5 and f(v'_2) = 11. So, k(C_{4k} ∨ C_3) = 11 = Δ + 7.
Case 2.2: For n = 4k + 1 ≡ 1 (mod 4) and m = 3. The labelling of C_n is as in Lemma 1; the other cycle is labelled f(v'_1) = 5 and f(v'_2) = 8. So, k(C_{4k+1} ∨ C_3) = 12 = Δ + 8.
Case 2.3: For n = 4k + 2 ≡ 2 (mod 4) and m = 3.
The labelling of C_n is as in Lemma 1; the other cycle is labelled f(v'_1) = 5 and f(v'_2) = 9. Here, k(C_{4k+2} ∨ C_3) = 9 = Δ + 5.
Case 2.4: For n = 4k + 3 ≡ 3 (mod 4) and m = 3. The labelling of C_n is as in Lemma 1; the other cycle is labelled f(v'_1) = 5 and f(v'_2) = 9. Here, k(C_{4k+3} ∨ C_3) = 9 = Δ + 5.
Case 3: For n = 4k ≡ 0 (mod 4) and m = 4k + i, for i = 0, 1, 2, 3.
Case 3.1: For n = 4k ≡ 0 (mod 4) and m = 4k ≡ 0 (mod 4). Here we label the vertices of C_n by the rule given in Case 2 of Lemma 1. Then we label the vertices of C_m as given below: f(v'_1) = 5, f(v'_2) = 2, f(v'_3) = 7, and for the remaining vertices j = 4, 5, 6, ..., m − 1 = 4k − 1 the labelling procedure is as follows:

f(v'_j) = 0, if j ≡ 0 (mod 4); 5, if j ≡ 1 (mod 4); 2, if j ≡ 2 (mod 4); 7, if j ≡ 3 (mod 4).

So, we can see that k(C_n v_0 C_m) = 9 = Δ + 5.

Case 3.2: For n = 4k ≡ 0 (mod 4) and m = 4k + 1 ≡ 1 (mod 4). Here we label the vertices of C_n by the same rule as given in case 2 of lemma 1, and we label the vertices v'_j; j = 1, 2, 3, ..., 4k - 1 by the same procedure as developed in case 3.1 of this lemma. Then we label the last vertex v'_{4k} as f(v'_{4k}) = 11. Here, k(C_n v_0 C_m) = 11 = Δ + 7.

Case 3.3: For n = 4k ≡ 0 (mod 4) and m = 4k + 2 ≡ 2 (mod 4). Here we label the vertices of C_n by the same rule as given in case 2 of lemma 1, and we label the vertices v'_j; j = 1, 2, 3, ..., 4k - 1 by the same procedure as developed in case 3.1 of this lemma. Then we label the remaining vertices as f(v'_{4k}) = 4 and f(v'_{4k+1}) = 11. Here, k(C_n v_0 C_m) = 11 = Δ + 7.

Case 3.4: For n = 4k ≡ 0 (mod 4) and m = 4k + 3 ≡ 3 (mod 4). Here we label the vertices of C_n by the same rule as given in case 2 of lemma 1, and we label the vertices v'_j; j = 1, 2, 3, ..., 4k - 1 by the same procedure as developed in case 3.1 of this lemma. Then we label the remaining vertices as f(v'_{4k}) = 4, f(v'_{4k+1}) = 9 and f(v'_{4k+2}) = 14. Here, k(C_n v_0 C_m) = 14 = Δ + 10.

Case 4: For n = 4k + 1 ≡ 1 (mod 4) and m = 4k + i, for i = 1, 2, 3.

Case 4.1: For n = 4k + 1 ≡ 1 (mod 4) and m = 4k + 1 ≡ 1 (mod 4). Here we label the vertices of C_n by the same rule as given in case 3 of lemma 1. Also we label the vertices v'_j; j = 1, 2, 3, ..., 4k - 1 using the same procedure as given in case 3.1 of this lemma. Then we label the last vertex v'_{4k} as f(v'_{4k}) = 10. Here, k(C_n v_0 C_m) = 10 = Δ + 6.

Case 4.2: For n = 4k + 1 ≡ 1 (mod 4) and m = 4k + 2 ≡ 2 (mod 4). Here we label the vertices of C_n by the same rule as given in case 3 of lemma 1. Also we label the vertices v'_j; j = 1, 2, 3, ..., 4k - 1 using the same procedure as given in case 3.1 of this lemma.
Then we label the vertices v'_{4k} and v'_{4k+1} as f(v'_{4k}) = 4 and f(v'_{4k+1}) = 10. Here, k(C_n v_0 C_m) = 12 = Δ + 8.

Case 4.3: For n = 4k + 1 ≡ 1 (mod 4) and m = 4k + 3 ≡ 3 (mod 4). First we label the vertices of C_n by the same rule as given in case 3 of lemma 1, and then we label the vertices of C_m using the same procedure as developed in case 3.4 of this lemma. Here, k(C_n v_0 C_m) = 14 = Δ + 10.

Case 5: For n = 4k + 2 ≡ 2 (mod 4) and m = 4k + i, for i = 2, 3.

Case 5.1: For n = 4k + 2 ≡ 2 (mod 4) and m = 4k + 2 ≡ 2 (mod 4). First we label the vertices of C_n by the same rule as given in case 4 of lemma 1, and then we label the vertices of C_m using the same procedure as developed in case 4.2 of this lemma. So we get k(C_n v_0 C_m) = 10 = Δ + 6.

Case 5.2: For n = 4k + 2 ≡ 2 (mod 4) and m = 4k + 3 ≡ 3 (mod 4). First we label the vertices of C_n by the same rule as given in case 4 of lemma 1. Now we label the first 4k + 2 vertices v'_j; j = 1, 2, 3, ..., 4k + 1 of C_m using the same procedure as given in case 4.2 of this lemma. For the remaining vertex v'_{4k+2}, we label it as f(v'_{4k+2}) = 12. So, k(C_n v_0 C_m) = 12 = Δ + 8.

Case 6: For n = 4k + 3 ≡ 3 (mod 4) and m = 4k + 3 ≡ 3 (mod 4). We label the vertices of C_n by the same rule as given in case 5 of lemma 1. Then we label the vertices of the second cycle C_m according to the procedure given in case 5.2 of this lemma. Here, k(C_n v_0 C_m) = 12 = Δ + 8.

So, from all the above cases, we can conclude that the L(3,2,1)-labelling number k lies between Δ + 5 and Δ + 10. That is, Δ + 5 ≤ k(C_n v_0 C_m) ≤ Δ + 10.
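As an illustration (not part of the original paper), the L(3,2,1) condition used throughout these cases can be checked mechanically: vertices at distance d (for d = 1, 2, 3) must receive labels differing by at least 4 − d. The short Python sketch below verifies this for the repeating pattern 5, 2, 7 from Case 3.1; the function names are mine, and the first entry of that pattern (illegible in the print) is assumed to be 0, the only small label consistent with the neighbouring labels 7, 5 and 2.

```python
from itertools import combinations

def cycle_distance(i, j, n):
    """Graph distance between vertices i and j on the cycle C_n."""
    d = abs(i - j)
    return min(d, n - d)

def is_l321(labels):
    """Check the L(3,2,1) condition on a cycle: labels of vertices at
    distance d (for d = 1, 2, 3) must differ by at least 4 - d."""
    n = len(labels)
    for i, j in combinations(range(n), 2):
        d = cycle_distance(i, j, n)
        if d <= 3 and abs(labels[i] - labels[j]) < 4 - d:
            return False
    return True

# Repeating pattern 0, 5, 2, 7 on C_{4k}; the leading 0 is an assumption,
# since that entry is illegible in the scanned text.
for k in (1, 2, 3):
    labels = [0, 5, 2, 7] * k
    print(f"C_{4 * k}: valid = {is_l321(labels)}, largest label = {max(labels)}")
```

Running this confirms the pattern satisfies the L(3,2,1) condition on C_4, C_8 and C_12 with largest label 7; the slightly larger spans quoted in the cases above also account for the labels used on the companion cycle.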

IV. L(3,2,1)-labelling of a finite number of cycles joined at a common cutvertex

Using the results of lemma 1 and lemma 2, we can conclude the result for a graph which consists of a finite number of cycles of finite lengths joined at a common cutvertex. The result is given below.

Lemma 3: If G is a graph in which a finite number of cycles of finite lengths are joined at a common cutvertex (a bouquet of cycles), then the range of the L(3,2,1)-labelling number k(G) is Δ + 5 ≤ k(G) ≤ Δ + 10, where Δ is the degree of the cutvertex.

V. Conclusion

In this paper we provide very close upper and lower bounds on the L(3,2,1)-labelling number for a cycle and for a bouquet of cycles joined at a common cutvertex. Using the results in this paper, we shall develop an algorithm for labelling the vertices. We shall also label the vertices of a large cactus graph and find out the lower and upper bounds of its L(3,2,1)-chromatic number.

VI. References

[1] W. K. Hale, Frequency assignment: theory and application, Proc. IEEE, 68 (1980).
[2] F. S. Roberts, T-colorings of graphs: recent results and open problems, Discrete Math., 93 (1991).
[3] G. Chartrand, D. Erwin, F. Harary, and P. Zhang, Radio labeling of graphs, Bull. Inst. Combin. Appl., 33 (2001).
[4] J. R. Griggs and R. K. Yeh, Labeling graphs with a condition at distance two, SIAM J. Discrete Math., 5 (1992).
[5] K. Jonas, Graph coloring analogues with a condition at distance two: L(2,1)-labellings and list labellings, Ph.D. thesis, University of South Carolina (1993), 8-9.
[6] L. Jia-zhuang and S. Zhen-dong, The L(3,2,1)-labeling problem on graphs, Mathematica Applicata, 17 (4) (2004).
[7] R. K. Yeh, A survey on labeling graphs with a condition at distance two, Discrete Math., 306 (2006).
[8] R. K. Yeh, Labeling Graphs with a Condition at Distance Two, Ph.D. thesis, University of South Carolina.
[9] D. D. Liu and X.
Zhu, Multilevel distance labelings for paths and cycles, SIAM J. Discrete Math., 19 (2005).

VII. Acknowledgments

At the very onset, I express my sincerest gratitude and indebtedness to my honourable supervisor, Professor Madhumangal Pal, for his good direction, guidance and continuous encouragement during each and every stage of the completion of this work over the years. His truly scientific intuition has made him a constant oasis of ideas and passions in different branches of mathematics, which has exceptionally inspired and enriched me as a researcher. His involvement with originality has triggered and nourished an intellectual maturity that I will benefit from for a long time to come. Honestly, I am indebted to him more than he knows.

I want to give special thanks to Dr. Shamim Khan, my husband cum best friend and guide of my life, without whom I could not have done this work properly. He always encourages me and gives me positive thoughts so that I can face every challenge of life. I acknowledge my deep gratitude and indebtedness to my parents and all my family members. Collective and individual acknowledgements with sincere appreciation are also owed to all my colleagues of Global Institute of Management & Technology, Krishnagar. I also want to give special thanks to Miss Kaskshan Khan, without whom I could not have done the work properly. And I express my apology that I could not mention everyone personally one by one.

Special Issue: Conference Proceeding of i-con-2016, Global Journal on Advancement in Engineering and Science (GJAES), Vol. 2, Issue 1, March 2016, ISSN (Print).

Review Work

ESL Speakers having Poorer Speaking Skills than Writing Skills

Prasenjit Bhattacharjee
Applied Science and Humanities Department, Global Institute of Management and Technology, NH-34, Palpara More, Krishnagar, Nadia, INDIA
english.prasenjit@gmail.com

Abstract: English is the most coveted language in India and in many other countries that use English as their second language. In a multi-lingual country like India, we cannot get on successfully without English in our administrative, academic, professional, commercial or personal life. That is why English enjoys the status of associate official language in India and, with due respect to all the regional languages, it is given the status of the second language. Considering the importance of this language, the vernacular medium schools and most of the technological universities in India have included English Language and Communication in their syllabi. But in spite of all these efforts, most of the students fail to use English effectively in the communication process, particularly in speaking, although they have sound knowledge of the grammar. Most of the students claim that they have no problem in writing English, but that they have all sorts of problems in speaking in English. So it becomes a threat for such students to speak in English. My effort here is to explore why speakers of English as a second language are poorer in speaking English than in writing it. There seem to be many factors at work behind this problem, but the greatest reason is the teaching method of English. Although there are many methods of teaching English, such as the Grammar Translation Method, the Direct Method, Palmer's Method, Dr.
West's New Method, etc., the Grammar Translation Method is the oldest, most conventional and most widely accepted method of teaching English even now, particularly in the vernacular schools and colleges. My effort is to bring out how and why this method of teaching English causes hindrance to the speaking skill of the students while not retarding their writing skill.

Key words: English language, Communication, speaking, writing.

I. Introduction

The position of a language in a country means its place in the national life, especially in relation to the other languages used there. In fact, the position of a language cannot be anything static. With the change in the social and political life of a nation, the position of a language changes. For example, in countries like India, which were formerly British colonies, there is a marked shift in the position of English after independence. In India, English serves as a link language. Not only in India, but in the whole world, English is now a link language. That is why English is considered to be the global language.

Almost all the countries of the world are multilingual. People living in different parts of the world use different languages. This gives rise to a great problem: it is not possible for anyone to know and to tackle all the languages. In such a situation, a common language for communication is needed. In this respect, English is in an advantageous position. It is a language used and understood all over the world. Thus English removes the language barriers between different parts of the world and moves the different countries of the world not as separate units but as a united whole. That is why even the countries where English is not the mother language or first language use and teach the English language as their second language.
The linguistic aims of English in schools in such countries [ESL countries / English as Second Language countries] are [i] to understand English when spoken, [ii] to speak English, [iii] to understand it when written and [iv] to write it. In short, to be more precise, the linguistic aim of such countries is to enable the students to have command over the Listening, Speaking, Reading and Writing [LSRW] skills of the English language. But it has been observed that ESL speakers generally have poorer speaking skills than writing and other skills. The cause of this discrepancy lies mainly in the English teaching methods in ESL countries.

II. Objectives of Teaching English to ESL students

Objective in education refers to the end towards which an educational institution-sponsored activity is directed. It means the desired changes in the pupils' behaviour at the end of any particular activity. Now the basic aim of teaching any language is to develop in the students the four skills of listening, speaking, reading and writing. Of these, listening and reading are considered to be the Receptive skills, and speaking and writing are considered to be the Productive skills. Of these four skills, the receptive ones are easier, as the students are required just to decode

and interpret what others say or write. But for the productive skills, they are to produce something on their own. This involves the complex process of decoding and encoding.

III. Methods and Approaches of Teaching English

A method is a comprehensive term, an umbrella term, that includes within its periphery theoretical principles of language teaching or learning, material design and actual classroom practice. Edward M. Anthony says, "Method is an overall plan for the orderly presentation of language material, no part of which contradicts, and all of which is based upon, the selected approach." On the other hand, an approach is axiomatic; an approach is an assumption. The methods and approaches that are generally followed in English teaching and learning are as follows:

a. Grammar Translation Method
b. Direct Method
c. Dr. West's New Method
d. Palmer's Method
e. Structural Approach to Teaching English
f. The Functional-Communicative Approach
g. The Substitution Method

A. Grammar Translation Method

The Grammar Translation Method is the oldest method of teaching English. Thompson and Wyatt have laid down the following fundamental principles on which this method is based:

a. Translation interprets foreign phraseology best.
b. In the process of interpretation, the foreign phraseology is assimilated.
c. The structure of a foreign language is best learnt when compared and contrasted with that of the mother tongue.
d. This method lays stress on reading.
e. It makes no attempt at training in speech.
f. Grammar occupies an important place in this method, in that the linguistic material is graded on a grammatical plan. Hence it is also called the Translation-Grammar Method.
g. H.
Champion says, "Under the Translation Method, the meaning of English words, phrases and sentences was taught by means of word-for-word translation into the mother-tongue."

B. The Direct Method

According to this method, English is taught "by establishing a direct association between experience and expression, between the English word, phrase or idiom and its meaning" [H. Champion]. This method aims at teaching English in the same way that a pupil follows in learning his mother tongue. It has its root in the earlier methods which attempted to teach foreign languages by creating a direct bond between experience and expression. The Direct Method was introduced in India at the beginning of the 20th century. It came as a challenge to the traditional Translation Method, with its insistence on formal grammar, and rapidly gained ground. The aim of the Direct Method is to enable the pupil to think in English and to express himself in it. It also aims at creating an English atmosphere inside the class and takes every care that when a pupil learns English, his mother tongue may not intervene.

C. Palmer's Method

In his book The Principles of Language Study, Palmer has laid down the four-fold aims of his method: [a] understanding English when spoken, [b] understanding English when written, [c] the speaking of English, and [d] the writing of it. According to him, before a child is able to use the language actively, a considerable time should be given to him to learn the language passively. The vocabulary must be chosen judiciously, so that only those words which have a wide range of frequency in normal reading material are selected, and the vocabulary must be kept to a minimum. The teacher should try to develop this in the mind of the pupil while teaching the minimum essential vocabulary and the matter contained therein through illustration and vivid description. The materials for study should be graded properly so that the pupils may proceed from the known to the unknown step by step.
Any sentence used in speaking or writing by anybody has been learnt either as a whole or by parts, and this enables him to construct independent sentences. So stress should be laid on pupils learning the patterns of sentences by heart. The meaning of words can be conveyed to the pupils in four different ways: i. by material association, ii. by translation into the vernacular, iii. by definition and iv. by inference from the context.

D. Dr. West's New Method

Dr. West carried out extensive research and experiments on the problems of teaching English. The New Method is the outcome of his research. Dr. West found that the Direct Method lays much stress on the formation of speech habits in children. But he believed that the bilingual child does not so much need to speak his second language as to read it. He lays stress not only on oral reading but on purposeful silent reading. In order to develop purposeful silent reading in children, he provides us with a new type of reading book containing interesting reading matter and a specially selected vocabulary, the size of which is as small as possible. He also made provision for some oral work, mainly in the form of reading aloud before silent reading begins. In his method, Dr. West also gave some scope for training in speech to make his method complete. In this context, he made a distinction between speech vocabulary and reading vocabulary. According to him, in speaking, only a small number of carefully selected words can adequately express our ordinary ideas.

E. Structural Approach to Teaching English

The teachers and language experts were not satisfied with any of the earlier methods of teaching English, as all of them had limitations and failed to attain all the aims of teaching English. So experiments and extensive research went on in the field of teaching English as a foreign or second language. The Structural Approach was the outcome of such research by the language experts. As the very name implies, emphasis on the teaching of structures or patterns is crucial to the Structural Approach. According to Prof.
Menon and Patel, the Structural Approach "is based on the belief that in the learning of a foreign language mastery of structures is more important than the acquisition of vocabulary." Structures mean the different arrangements or patterns of words. In fact, a huge vocabulary would be of no avail to the learners unless they learn to arrange the words in the meaningful and correct order which we call structures. Structures may be complete sentences or utterances, or they may form a part of a complete sentence.

F. Functional-Communicative Approach

The students who were taught English through the Structural Approach were said to be structurally competent but communicatively incompetent. They failed to use the structures they had learnt in their day-to-day needs of communication. So the language experts were in search of a method in which communicative acts and the functions of the forms got greater importance than the forms themselves. The Functional-Communicative Approach brought about the desired shift from language forms to communicative acts. The beliefs that worked behind the Approach are:

a. Language is an instrument for fluent and confident communication and interaction.
b. Functions of forms or structures are more important than the forms themselves or the rules.
c. Language learning is not merely imitative.
d. Functions of language in real-life situations make language learning meaningful and interesting.

G. The Substitution Method

The Substitution Method is a method which helps to form correct habits of English speech in children. In learning a second language, the formation of correct speech habits is considered very important. This method is based on the fact that since pupils learn their mother tongue through the formation of a great number of correct speech habits, they will also learn the second language in the same way. It puts this principle into practice by preparing a series of tables which are called Substitution Tables.
In this method the sentence is regarded as the unit of the language. A good number of model sentences are chosen from the reader and a table is framed for each of the model sentences. Through repeated practice and drill, the pupils not only master the model sentences but learn how to use each model in various new substitutions.

IV. Discussion

Of all the above-mentioned methods and approaches of English teaching and learning, there is no denying the fact that the Grammar Translation Method is the most widely accepted in most ESL countries like India. Several generations of Indians, before Independence, studied English through this method. It has been dominating the field of teaching English since then, and even now teachers often prefer this method to other, more modern methods of teaching English. And it is mainly because of this method that ESL speakers have poorer speaking skills than writing skills. In spite of the many merits of the method, its demerits cannot be ignored either. In fact, this method may be useful for teaching and learning English for ESL students. But it has failed to achieve the main objective of teaching the English language, which is to give the students equal command over all the LSRW skills. The method is not at all interesting to the students, as they are only passive learners. For the students, it is very tedious to memorize long lists of words and grammatical rules. It neglects

the training in speech and listening comprehension. Here the mother tongue intervenes continuously, so fluent self-expression in the target language is never possible. It is also not psychologically sound, because a child learns a language through imitation and not through the logical application of grammatical rules. Therefore, because of too much stress on grammar and translation, when a student is now asked to speak something in English, they first form the idea or sentence in their mind in their mother language. Then they translate it, maintaining all the grammatical rules accurately. Hence there comes a gap between their formation of sentences and their expression in speech, and as a result they never acquire the desired fluency level. But while writing, they get adequate time for thinking, translating and writing. So ESL learners have poorer speaking skills than writing skills.

V. Conclusion

Now it is high time that, without any delay, an effective device or method or approach be devised to help ESL students gain the desired accuracy and confident fluency in speaking skills. That is why some conscientious language experts and English teachers are adopting new and innovative ways of teaching English. In the present situation, the use of audio-visual aids seems to be the most appropriate and effective one. A teacher of English must have the capability of showing a scene or communicating an experience to the class through appropriate gestures, mime and facial expressions. The pupils may be asked to imitate the teacher and give linguistic expression to their actions. Apart from this method, some effective devices which have been invented and successfully applied by English teachers are Role Playing, the Look and Say Method, the Do and Say Method, the Situational Method, the Playway Method, etc.
VI. References

[1] Asoke Gupta, A Handbook of Teaching English. Central Library.
[2] Anupama Gangopadhyay, Teaching of English. Rita Book Agency.
[3] Dr. K. Alex, Soft Skills: Know Yourself & Know the World. New Delhi: S. Chand.
[4]
[5]

Special Issue: Conference Proceeding of i-con-2016, Global Journal on Advancement in Engineering and Science (GJAES), Vol. 2, Issue 1, March 2016, ISSN (Print).

Review Work

Self-Discovery through SWOT Analysis

Ashish Kumar Singh 1 and Kaushal Bindal 2
Global Institute of Management and Technology, NH-34, Palpara More, Krishnagar, Nadia, INDIA
Billiondollars440@gmail.com 1, Kaushalkumar @gmail.com 2

Abstract: Personal development is an essential step in making oneself more appealing to employers and customers, but it also helps boost one's self-image. People apply many different tactics to stand apart in this area. They want to secure the top position in life, but it is not as easy as it sounds. Individuals often conduct a SWOT analysis. Even though SWOT was originally used for business, it can help assess a person's strengths, weaknesses, opportunities and threats. This kind of simple analysis structure provides guidance; it looks at internal and external factors. Now this self-analysis or self-discovery is not easy at all. It is perhaps one of the most complicated things, and it plays a very significant role in personal progress. So it is a very important step towards finding life and career direction.

Key words: Development, Discovery, Analysis, Strength, Weakness, Opportunities, Threats.

I. Introduction

"Know thyself" is a maxim associated with the great Greek philosopher Socrates, meaning "know yourself". Knowing oneself is a lifelong process. To know oneself is to know one's true identity. How we manage our life, guide others, take charge, perform and behave in relationships really depends on how effectively we use our strengths and identify our weaknesses, which may be discovered only when we know ourselves. We therefore need to discover ourselves and become our own true person, not what others perceive us to be. In corporate life also, we need to know ourselves and apply our hidden qualities to cope with the new challenges that appear now and then in our professional life.
In our professional life, we need to work with people of different religions, languages and cultures. There, to know others, we need to know ourselves, as it is impossible to understand others unless one has understood oneself. People apply many different tactics to stand apart in their area. They want to secure the top position in life, but it is not as easy as it sounds. Individuals often conduct a SWOT analysis. Even though SWOT was originally used for business, it can help assess a person's strengths, weaknesses, opportunities and threats.

II. Importance of knowing oneself

Knowing oneself is of immense importance. If we know ourselves, we will be able to know our strengths and weaknesses, and subsequently we will be able to work on our weak points. We must know ourselves in order to be useful to ourselves and others. Knowing ourselves helps us in the following ways:

i. It helps to control emotions
ii. It helps to reach our goal
iii. It helps to make better decisions
iv. It helps to improve relationships
v. It helps to realize and improve our full potential
vi. It helps to experience happiness and joy

III. Process of knowing oneself

There are many ways to know ourselves. Some remarkable ways to know ourselves are:

i. Maintain a personal diary
ii. Practice meditation
iii. Exercise regularly
iv. Go for a walk regularly
v. Do some riding or driving
vi. Go on some outings
vii. Develop some hobbies
viii. Develop new interests
ix. SWOT Analysis

Maintaining a personal diary helps us in learning what we are, our likes and dislikes, our passions and what we want to be in life. Meditation helps us to observe ourselves in the present moment. Meditation is not a way of

emptying the mind, but of emptying ourselves of anxiety, worry, excitement and so on. Again, it has been proved time and again that exercise helps a person remain physically and mentally sound. Exercise is also a kind of meditation. If exercising is not possible, we can opt for walking, because walking is a moving meditation. Regarding riding or driving, it has proved to be a good process, particularly when one finds it hard to locate a quiet place. The practice of going out for an outing also allows one to remain with one's own self. Developing new hobbies and new interests makes us creative and keeps us continuously exploring. The ultimate way of knowing oneself is going for a SWOT Analysis.

IV. SWOT Analysis

SWOT Analysis is a useful technique for understanding our Strengths and Weaknesses, and for identifying both the Opportunities open to us and the Threats we face. Used in a business context, it helps us carve a sustainable niche in our market. Used in a personal context, it helps us develop our career in a way that takes the best advantage of our talents, abilities and opportunities.

Fig.1: SWOT Analysis

V. Benefits of SWOT Analysis

The benefits of SWOT Analysis are many-fold. It:

- Helps to focus on our strengths.
- Helps to minimize weaknesses.
- Helps to take the greatest possible advantage of the opportunities available.
- Helps to eliminate threats that would otherwise put one in difficulties.
- Is helpful for team work.

In the business context also, SWOT Analysis is of great benefit. It is simple for the participants. It is less expensive. It can be done internally, provided the internal facilitator has the experience to manage it. And ultimately, it is inclusive: it allows the participation of the team. Since it utilizes the whole team, the results are more likely to represent the real environment.

VI.
SWOT Analysis Grid

A SWOT Analysis is typically created in a grid format, with the strengths and opportunities listed on the left, and the weaknesses and threats on the right.

Fig.2: SWOT Analysis Grid
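As a side illustration (not part of the original paper), the grid layout described above can be sketched as a small data structure; the four category names follow the paper, while the entries below are hypothetical examples:

```python
# A personal SWOT grid in the layout described above: strengths and
# opportunities on the left, weaknesses and threats on the right.
swot = {
    "Strengths":     ["understand written English well"],
    "Weaknesses":    ["hesitant while speaking English"],
    "Opportunities": ["conversation club at college"],
    "Threats":       ["upcoming campus interviews"],
}

# Print the grid two quadrants at a time, side by side.
for left, right in [("Strengths", "Weaknesses"), ("Opportunities", "Threats")]:
    print(f"{left:<34}| {right}")
    rows = max(len(swot[left]), len(swot[right]))
    for i in range(rows):
        l = "- " + swot[left][i] if i < len(swot[left]) else ""
        r = "- " + swot[right][i] if i < len(swot[right]) else ""
        print(f"{l:<34}| {r}")
```

Filling in the lists under a teacher's guidance, as suggested below for ESL learners, gives a concrete record of the analysis that can be revisited over time.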

VII. Questions to Complete the Grid

STRENGTHS:
- What do you do well?
- What unique resources can you draw on?
- What do others see as your strengths?

WEAKNESSES:
- What could you improve?
- Where do you have fewer resources than others?
- What are others likely to see as weaknesses?

OPPORTUNITIES:
- What good opportunities are open to you?
- What trends could you take advantage of?
- How can you turn your strengths into opportunities?

THREATS:
- What trends could harm you?
- What is your competition doing?

VIII. SWOT Analysis in Learning English as a Second Language

English is the most coveted language in India and in many other countries that use English as their second language. In a multi-lingual country like India, we cannot get on successfully without English in our administrative, academic, professional, commercial or personal life. That is why English enjoys the status of associate official language in India and, with due respect to all the regional languages, it is given the status of the second language. Considering the importance of this language, the vernacular medium schools and most of the technological universities in India have included English Language and Communication in their syllabi. But in spite of all these efforts, most of the students fail to use English effectively in the communication process. Most of the students are afraid of learning this language as a whole. In such situations, the students may also go for a SWOT Analysis. The teachers of English should stand by them during the analysis. Under the guidance of the teachers, the students may strive to find out their strengths, opportunities, weaknesses and threats.

STRENGTHS:
- What do you do well in the English language?
- Do you understand English when spoken to?
- Do you understand English when written?

WEAKNESSES:
- What could you improve?
- Are you good at spelling?
- Are you good at vocabulary?
- Can you communicate confidently in English?

OPPORTUNITIES:
- What good opportunities are open to you?
- What trends could you take advantage of?
- How can you turn your strengths into opportunities?

THREATS:
- What trends could harm you?
- Do you find English grammar to be interesting?

IX. Conclusion

Therefore, self-discovery through SWOT Analysis is of immense importance, not only for improving one's career but also for overcoming any kind of obstacle in personal life. If we can identify our strengths, with those strengths we can minimize our weaknesses; we can transform our weaknesses into strengths. Again, if we become positive enough about the opportunities that we may avail ourselves of in our life, we may be least concerned about the threats that arise in our life. Thus, SWOT Analysis can help us to explore the endless possibilities that are there within ourselves.

X. References
[1] Dr. K. Alex, Soft Skills: Know Yourself & Know the World, New Delhi: S. Chand.
[2]
[3]

XI. Acknowledgments

At the very outset, we express our sincerest gratitude and indebtedness to our honourable teacher Mr. Prasenjit Bhattacharjee for his able direction, guidance and continuous encouragement during each and every stage of the completion of this work. His truly scientific intuition has made him a constant oasis of ideas and passion in different branches of study, which has exceptionally inspired and enriched us as researchers. His insistence on originality has triggered and nourished our intellectual maturity, from which we will benefit for a long time to come. Finally, we would like to thank everybody who was important to the successful realization of this monograph, as well as express our apology to those we could not mention personally.

Special Issue: Conference Proceeding of i-con-2016
Global Journal on Advancement in Engineering and Science (GJAES) Vol. 2, Issue 1: March-2016, ISSN (Print):

Review Work
English: Influenced by German
Labanya Ray Mukherjee
Applied Science and Humanities Department, Global Institute of Management and Technology, NH 34 Palpara More, Krishnagar, Nadia, INDIA
E-mail: labanya.kly@gmail.com

Abstract: English is like a flowing river; it has been changing through the ages. In fact, this nature of accepting and imbibing changes is one of the leading reasons why English is the global language today. The history of the English language starts, as far as is known, from the Anglo-Saxon age. The English of that period was rather obscure. Even if we do not take the English of that period into consideration, and start instead from Shakespearean English and come to today's English, it has changed from "thou" to "you" to "u". English is thus evolving down the ages, thanks to the technological revolution. As English words lose letters and form new spellings that reflect only the sounds, we can see a large influence of the German language: German letters have an exact pronunciation, no matter in which word and in which position you put them. My paper is about how English is becoming Germanic with the help of SMS lingo.

Keywords: changing, obscure, evolving, influence, SMS lingo

I. Introduction

Language, it is said, is like a flowing river. It is ever changing, and the changes are always unpredictable. Like all other living languages, the language of the Angles, Saxons and Jutes of the bygone days has been changing over the years, but the degree of change taking place in recent years has surpassed all previous records. Today, technology has taken English to a new level of brevity. People often shorten the language due to space and time limitations, thanks to instant messaging services like SMS or e-mail.
The SMS language tends to create a novel language, which has become an integral part of the multilingual world. It pursues a simple sentence structure for communication. We now often see people clipping words and making new words out of them, which convey the exact meaning of the originals. I find that this growing trend of SMS lingo has paved the way for the English language to resemble the German language.

II. Origin of the two European languages

Before I move on to the resemblance of the two European languages concerned, let me first move back to their origin. Both English and German are West Germanic languages. Nevertheless, they are quite different. English and its ancestor Old English are members of the Anglo-Frisian group, together with Frisian and its earlier form, Old Frisian, whereas German belongs to the Proto-German group, which is also called the Netherlandic-German group. The German language is subdivided into High and Low German. While High German is the official language of Germany, Low German is regarded as a dialect of it. Though Low German has more similarities with the English language, I will take High German into consideration to prove the resemblance.

III. Resemblance of the two languages

All languages have an alphabet, a lexicon, and a set of rules establishing how sentences are constructed. While there are differences in the alphabets and lexicons of the various languages, the most important differences involving syntax are those found in the construction rules. Because of their shared root, English and German have many similarities, but over time they have developed substantial differences. Most of the similarity between the two languages stems from the fact that much of the vocabulary has common roots. The words may have evolved differently in the two languages, but in some cases there is little difference in spelling and pronunciation. You may well be able to recognize some words and phrases, even if you don't speak German.
For example, "Mein Name ist Labanya" should be easily recognizable as "My name is Labanya".

IV. Familiar words

If you are an English speaker unfamiliar with German, you may be surprised to learn that English and German share many words that are very similar. This is particularly true for everyday words in English that are Anglo-Saxon in origin. For example:

L. R. Mukherjee, English: Influenced by German, Global Journal on Advancement in Engineering and Science, 2(1), March 2016

Arm - der Arm
Book - das Buch
Father - der Vater
House - das Haus
Man - der Mann
Mother - die Mutter
Mouse - die Maus
Name - der Name
Son - der Sohn
Lamp - die Lampe
Kindergarten - der Kindergarten

V. Similar origin with different meaning

Some German words have the same origin as their English counterparts, but the meaning has changed. For example:

Kind - das Kind (child)
Actual - aktuell (current)
Brave - brav (obedient)
Become - bekommen (to get)
Conservative - konservativ (preservative)
Gift - das Gift (poison)
Handy - das Handy (mobile)
Ordinary - ordinär (vulgar)

VI. English: Influenced by German

There are some more similarities and differences, but the most striking point in German, according to me, is that though it has more letters than English, each of its letters has its own distinctive sound. German letters, unlike English ones, are thus phonetically consistent throughout. The SMS lingo which, as mentioned earlier, has paved the way for the English language to resemble the German language is possible only because of this quality of German letters being phonetically consistent. The German language has more phonemes than English. The English language is a big mess when it comes to spelling: there are few rules, and even the ones that exist have too many exceptions. A good example would be words that contain the letters "ough": "ought", "though", "through", "rough", "bough" and "thorough" are all pronounced differently. But in German it is always /ʊ/ for u, /f/ for v, /aː/ for a, /e/ for e, and so on. Unlike English, where "put" is pronounced as /pʊt/ and "but" as /bʌt/, u in German is always pronounced as u, be it in "unter" or "gut".

VII. IPA representing German language pronunciation

The chart below shows the way in which the International Phonetic Alphabet (IPA) represents German-language pronunciations. Let us have a look at the German phonology for a more thorough look at the sounds of German.
Consonants (IPA - German examples - English approximation):
b - bei - ball
ç - ich, durch; China (DE) - hue
d - dann - done
f - für, von - fuss
ɡ - gut - guest
h - hat - hut
j - Jahr - yard
k - kann, Tag - cold
l - Leben - last
l̩ - Mantel - bottle

Vowels, monophthongs (IPA - German examples - English approximation):
a - alles - art
aː - aber, sah - father
ɛ - Ende, hätte - bet
ɛː - spät, wählen - there (Modern RP)
eː - eben, gehen - face (Scottish English)
ɪ - ist, bitte - sit
iː - viel, Berlin - feel
ɔ - Osten, kommen - lot (RP and Australian)

Consonants (continued):
m - Mann - must
m̩ - Atem - rhythm
n - Name - not
n̩ - beiden - suddenly
ŋ - lang - long
ŋ̍ - wenigen - "take an interest"
p - Person, ab - puck
pf - Pfeffer - cupful
r (ʁ) - reden - DE: French rouge; AT, CH: far (Scottish English)
s - lassen, Haus, groß - fast
ʃ - schon, Stadt - shall
t - Tag, und - tall
ts - Zeit, Platz - cats
tʃ - Matsch - match
v - was - vanish
x - nach - loch (no lock-loch merger)
z - Sie, diese - hose
ʔ - beamtet ([bəˈʔamtət]) - the glottal stops in "uh-oh!"

Non-native consonants:
dʒ - Dschungel - jungle
ʒ - Genie - pleasure

Stress:
ˈ ˌ - Bahnhofstraße ([ˈbaːnhoːfˌʃtʁaːsə]) - as in battleship /ˈbætəlˌʃɪp/

Vowels, monophthongs (continued):
oː - oder, hohe - law (RP and Australian)
œ - öffnen - roughly like hurt
øː - Österreich - roughly like herd
ʊ - und - took
uː - Hut - pool
ʏ - müssen - roughly like shoe, but shorter
yː - über - roughly like shoe

Diphthongs:
aɪ - ein - tie
aʊ - auf - how
ɔʏ (ɔɪ) - Euro, Häuser - boy

Reduced vowels:
ɐ (-er) - immer - DE, AT: roughly like fun; CH: butter (Scottish English)
ə - Name - comma

Semivowels:
ɐ̯ - Uhr - DE, AT: ear; CH: far (Scottish English)
i̯ - Studie - yard
u̯ - aktuell - would

Non-native vowels:
e - Element (short [eː])
i - Italien - city (short [iː])
o - originell (short [oː])
ø - Ökonom (short [øː])
u - Universität (short [uː])
y - Psychologie (short [yː])

VIII. SMS lingo: the Yardstick of German Influence

In SMS lingo, the first step that we generally take is to replace the troublesome "ph" with "f"; thereby words like photograph, photo and epitaph become fotograf, foto and epitaf. This makes the words shorter as well as Germanic. We often use words in which we can easily replace the hard "c" with "k". For example, "camera" can be well understood if written "kamera". "Subject" can also be understood if written "subjekt". Both "Subjekt" and "Kamera" are German words.
If we actually replace the English "c" with "k", we can have words which may "klear up konfusions" regarding the standard pronunciation to be used, and keyboards "kan" have one less letter too. Omission of double letters is well accepted in SMS lingo; users often feel that double letters are a deterrent to "akurate speling". In German, vowels are not used unnecessarily as they are in the English language. For example, the word "house" in English is pronounced as /haʊs/; the silent "e" serves no use and can easily be dropped. The German word "Haus", on the other hand, has a similar pronunciation with no extra vowels attached to it. While using SMS language, if we write "peopl" instead of "people", the reader will not face any problem making out the word. We can also write "ppl" by dropping all the vowels, as we do in SMS language, which can also be well understood.
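The respelling habits described in this section (replacing "ph" with "f", hard "c" with "k", and dropping a silent final "e") can be sketched as a toy transformation. This is only an illustration: the rule list and the function name smsify are inventions for this sketch, not any codified SMS standard.

```python
import re

# Toy sketch of the SMS-style respellings discussed above.
# The rule list is illustrative, not an established standard.
RULES = [
    (r"ph", "f"),          # photograph -> fotograf
    (r"c(?=[aou])", "k"),  # hard c before a/o/u: camera -> kamera
    (r"(?<=\w)e\b", ""),   # drop a word-final silent e: people -> peopl
]

def smsify(text: str) -> str:
    """Apply the respelling rules in order (lowercase input assumed)."""
    for pattern, replacement in RULES:
        text = re.sub(pattern, replacement, text)
    return text

print(smsify("photograph"))  # fotograf
print(smsify("camera"))      # kamera
print(smsify("people"))      # peopl
```

A real respeller would of course need far more rules (and exceptions), which is precisely the paper's point about English spelling.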

As I said earlier, contractions and clippings are essential stylistic features of the language of the SMS. The German language is phonetically so strong that most of its characters clearly follow the IPA. It hardly needs any clipping, and as such, English when clipped resembles German closely. Another point that I would like to make concerns the German number system. In German there are hardly any numbers that can replace a whole word, whereas in English we have many. For example:

2n8 stands for "tonight"
2de stands for "today"
4 stands for "for"

The letters in German also cannot replace a whole word, which again English can do. For example:

C for see/sea
B for be/bee
R for are
U for you

IX. Conclusion

If this SMS language, which has grabbed a lot of people by the neck, does not seem to leave now, it will definitely evolve as the new English language. Though many people panic at the thought of SMS lingo getting standardized in the near future, I feel this language will be phonetically stronger, like the present German language. There are some languages, like Croatian and Serbian, that have words comprising only consonants; the reader there knows how to pronounce the word from his oral learning of that word. It is basically the vowels in a language that make the pronunciation complicated. German, being a language with a distinctive sound for each letter, including the vowels, is phonetically very strong. We can also look forward to having a phonetically stronger English, as the SMS lingo insists on dropping vowels. Once, George Bernard Shaw wanted the English alphabet to be revised so that each sound had its own character. He famously argued that "ghoti" could be pronounced "fish" in current English: the "gh" as in "enough", the "o" as in "women" and the "ti" as in "station". Not surprisingly, however, his proposed Shavian alphabet of some forty or more letters was never taken seriously.
His dream of revising the language, so that each sound has its own character, may not come true in the near future. However, the SMS language, with its efficiency in respect of the time and volume of information it conveys at a time, may bring about a day when we really start using this short orthographic representation of SMS English, which truly resembles German in many ways.

X. References
[1] Roland Schapers, Renate Luscher & Manfred Gluck, Herr Biedermann und die Brandstifter, Verlag für Deutsch (1998), Max Hueber.
[2] T. Balasubramanian, A Textbook of English Phonetics for Indian Students, Paperback (2009).
[3]

Special Issue: Conference Proceeding of i-con-2016
Global Journal on Advancement in Engineering and Science (GJAES) Vol. 2, Issue 1: March-2016, ISSN (Print):

Review Work
Learning the Four Skills of Language through Note Taking
Labanya Ray Mukherjee 1, Soumyadeep Guha 2
Global Institute of Management & Technology, NH 34, Palpara More, Krishnagar, Nadia, INDIA
E-mails: labanya.kly@gmail.com 1, soumyadeepguha97@gmail.com 2

Abstract: We basically learn a language to communicate. We take the help of words, signs, symbols and other non-verbal means to communicate. Communication generally involves the LSRW skills, i.e. listening, speaking, reading and writing skills. We usually do not stick to one skill at a time. To utilize the time provided to learn a language, we should look out for a way that would allow us to learn all the skills at the same time, rather than honing one skill at a time. English language teachers of engineering colleges take recourse to task-based instruction and content-based instruction, which are very traditional ways of teaching. This paper looks out for some other method which would be beneficial for students, and it is felt that note-taking would be the best method to learn all the skills in a very short span of time. This paper focuses on how note-taking helps in honing all four skills of language learning.

Keywords: content-based instruction, task-based instruction, communication, skills of language learning

I. Introduction

In today's world, English has become the medium of communication across the globe; therefore it has become almost mandatory for aspiring students and professionals to learn the English language to communicate. The job scenario has changed over the years, and with rapid globalization the need to learn the English language is accelerating. In India, English is treated as a second language. The reactions of the learners also vary with their needs.
In towns and cities, English learning is being received widely by the masses. With the greater understanding of the fact that without English one probably stands nowhere in today's competitive market, the urge to learn English has increased. The educational boards have therefore revised their syllabi and given greater emphasis to English. Technical universities, however, emphasize both English and personality development, as they play a pivotal role in assisting the students with a job.

II. Integrated Language Learning

The classroom is an ideal venue for any student and teacher to interact with each other and to share and exchange ideas and thoughts. Therefore, to learn a language, an ideal classroom should be a place of interest for the learner as well as the teacher. Besides these four strands (teacher, learner, setting and language), four skills are equally important to learn a language. These four skills are commonly known as the LSRW skills: listening, speaking, reading and writing skills. A teacher usually uses one of the skills, or maybe two skills together, to teach the language, but if all four language skills are integrated instead of being segregated, one can learn the language better. Teachers prefer to instruct or teach taking the segregated-skills approach, i.e. separating writing from speaking or listening from reading, probably because they feel it is difficult for the students to concentrate on all the skills at a time. For engineering or technical students, language learning is not only about mastering one skill; it is about one's performance in all the skills. As per the syllabus prescribed by the universities, one hardly gets enough time to learn a language through an approach of segregated-skill learning. Six months is not enough to learn all four skills, one at a time. Therefore, one should come up with an idea that would integrate all the skills of language learning together.
Integrated-skill learning falls under two categories: content-based instruction (CBI) and task-based instruction (TBI). In content-based instruction one learns the language through the content, and in task-based instruction we do tasks that require communicative language use. In CBI a student is required to practice all the language skills in an integrated manner while learning some content, say, for example, a chapter of history. The content may differ with the varying backgrounds and the level of proficiency. A beginner is often given content that involves only basic social and interpersonal communication skills. With an increase in the proficiency level, one may be given a topic that is more academic and complex. CBI may be classified into three types: theme-based, adjunct and sheltered. In the theme-based model an interesting theme is given to the students with the aim of letting them communicate about the theme, where all the skills may be practiced. This is the most widely accepted form of CBI practiced in ESL

classrooms. In the adjunct model, there is proper coordination of the language and the content, though they are taught separately. And in the sheltered model, the content is taught in simplified English, keeping in consideration the proficiency level of the student. In TBI, students are often given tasks that increase student interaction and collaboration. In TBI students work together to play roles, to enact scenes from a play, or maybe to prepare an article for a magazine. It is one of the best and most interesting methods for both the teacher and the learner to develop the proficiency level. Oxford observes, "The integrated-skill approach, as contrasted with the purely segregated approach, exposes English language learners to authentic language and challenges them to interact naturally in the language. Learners rapidly gain a true picture of the richness and complexity of the English language as employed for communication." This further stresses the fact that English is not only a subject that requires pass marks in the examination but also a real means of interaction among people. The integrated-skill approach is highly motivating for any learner of any age and any background.

III. Note-taking

Note-taking is one of the effective ways in which both CBI and TBI are used for enhancing the skills of the language. Note-taking is widely used to enhance the listening and writing skills, along with reading skills too. When one reads or listens to a piece of information, taking notes helps one concentrate on the matter and write down every word one hears or reads. It has been widely used over the years for teaching and learning a language, though down the years it has changed drastically. An ESL teacher can definitely practice note-taking to develop the proficiency of a learner through an integrated-skill approach.

IV. Dictogloss

The Australian Ruth Wajnryb coined the term "dictogloss", which refers to a form of dictation in which the students hear and reconstruct the whole text rather than doing so line by line (Wajnryb 1990). It is a classic teaching technique where one is required to reconstruct a text by listening to it minutely and noting down the key words. It is basically a task-based activity which combines all four skills to help students make their own text while learning a language. Dictogloss serves as a useful tool in the ESL classroom, as it involves active involvement of the students in the class. Here the students are required to use their knowledge of vocabulary, grammar and lexical accuracy. In traditional dictation, learners reproduce the exact text, but dictogloss involves reconstruction of the text.

V. Stages of the activity

Note-taking, if used like dictogloss, can be used effectively to teach or learn a language. Learners here use the listening skill and the writing skill to complete the task. The teacher reads out a text to the students at a normal speed while they take notes. Students can thereafter work in groups to make a summary of the notes taken, using the correct grammatical structures. Students are then requested to present their work in front of the class, which requires active reading skills. Finally, the students can discuss among themselves, or can speak in front of the class regarding any points to be added or any mistakes to be clarified, which involves the speaking skill too. Let us take this step by step.

1. Preparation of the text: This basically involves the teacher's activity; the teacher has to choose a text keeping in mind the learners' needs. The text has to be interesting enough to keep the learners hooked to it. It must be of appropriate length so that the whole task can be completed in the allotted time. Selection of the topic plays a major role in motivating the learner.
It is recommended to read out news of recent events, or maybe brief biographies of eminent personalities.

2. Introduction: Teachers and learners both play a role in this stage. Here the teacher introduces the task to the learners, shares the rules of the activity and then dictates the topic. The learners note down the keywords. The listening skills of the learners are tested here.

3. Group formation: The teacher now divides the class into groups and asks them to reconstruct the text with the help of the notes taken earlier. Here the writing skill is practiced.

4. Discussion: Discussion is another important stage of this activity, where active reading skills and speaking skills are practiced. This involves active interaction of the learners, where each team is asked to compare their version of the notes with the versions of the other teams. Here one or two teams come up in front of the class to read out their version, keeping in mind the time allotted. If time permits, all the teams can read out their versions.

5. Motivation: This is the final stage, where the teacher presents the original text one last time and informs the class which team's reconstruction is the closest to the original. To motivate the learners, the teacher can arrange for small prizes too.

As seen from the steps of note-taking with the help of dictogloss, all four language skills, namely listening, speaking, reading and writing, are integrated in this classroom activity.

VI. Conclusion

Thus this TBI involves active listening, note-taking, reconstructing the text and then discussing with the peer groups. No wonder one benefits a lot by practicing such integrated skills. The learners not only develop their proficiency level in the four skills but also learn various aspects of the language, which helps one in real-life situations. It is also beneficial for the teachers, as they can teach the students in a very short period of time, inculcating all the skills at once.

VII. References
[1] Gibbons, Pauline (2009). English Learners, Academic Literacy, and Thinking. Portsmouth, NH: Heinemann.
[2] Wajnryb, Ruth (1990). Resource Books for Teachers: Grammar Dictation. Oxford: Oxford University Press.
[3] Oxford, Rebecca (2001). Integrated Skills in the ESL/EFL Classroom.
[4] Richards, Jack C.; Schmidt, Richard, eds. (2009). "Dictogloss". Longman Dictionary of Language Teaching and Applied Linguistics. New York: Longman.
[5] Scarcella, R., & Oxford, R. (1992). The Tapestry of Language Learning: The Individual in the Communicative Classroom. Boston: Heinle & Heinle.

Special Issue: Conference Proceeding of i-con-2016
Global Journal on Advancement in Engineering and Science (GJAES) Vol. 2, Issue 1: March-2016, ISSN (Print):

Review Work
History of Atoms and Idea of Atomic Structure
Nirmal Paul 1, Atreyi Das 2
Department of Applied Science & Humanities 1, Department of Mechanical Engineering 2
Global Institute of Management and Technology, Krishnagar, Nadia, INDIA
E-mails: mail2nirmalpaul@gmail.com 1, mail2atreyidas@gmail.com 2

Abstract: In 460 BC Democritus developed the idea of atoms. He pounded up materials in his pestle and mortar until he had reduced them to smaller and smaller particles, which he called "atoma". In 1808, John Dalton suggested that all matter was made up of tiny spheres that were able to bounce around with perfect elasticity, and called them atoms. In 1910, Ernest Rutherford oversaw Geiger and Marsden carrying out his famous experiment: they fired helium nuclei at a piece of gold foil which was only a few atoms thick, and found that although most of them passed through, about 1 in 10,000 hit. In 1913, Niels Bohr, who had studied under Rutherford at the Victoria University of Manchester, refined Rutherford's idea by adding that the electrons were in orbits, rather like planets orbiting the sun, with each orbit able to contain only a set number of electrons. But Bohr's theory also has some limitations: measured against the real atomic structure, Bohr's theory is acceptable, but not as a whole. This paper explores the history of atoms as well as the idea of atomic structure.

Keywords: particle, atom, experiment, orbital, limitation

I. Introduction

The ancient Greek philosophers Leucippus and Democritus believed that atoms existed, but they had no idea as to their nature. Centuries later, in 1803, the English chemist John Dalton, guided by the experimental fact that chemical elements cannot be decomposed chemically, was led to formulate his atomic theory.
Dalton's atomic theory was based on the assumption that atoms are tiny indivisible entities, with each chemical element consisting of its own characteristic atoms. The atom is now known to consist of three primary particles: protons, neutrons, and electrons, which make up the atoms of all matter. A series of experimental facts established the validity of this model. Radioactivity played an important part: Marie Curie suggested, in 1899, that when atoms disintegrate, this contradicts Dalton's idea that atoms are indivisible; there must then be something smaller than the atom (subatomic particles) of which atoms are composed. Long before that, Michael Faraday's electrolysis experiments and laws had suggested that, just as an atom is the fundamental particle of an element, a fundamental particle for electricity must exist. The "particle" of electricity was given the name electron. Experiments with cathode-ray tubes, conducted by the British physicist Joseph John Thomson, proved the existence of the electron and yielded its charge-to-mass ratio. These experiments suggested that electrons are present in all kinds of matter and that they presumably exist in all atoms of all elements. Efforts were then turned to measuring the charge on the electron, and these were eventually brought to success by the American physicist Robert Andrews Millikan through the famous oil-drop experiment. [1] The study of the so-called canal rays by the German physicist Eugen Goldstein, observed in a special cathode-ray tube with a perforated cathode, led to the recognition in 1902 that these rays were positively charged particles (protons). Finally, years later, in 1932, the British physicist James Chadwick discovered another particle in the nucleus that had no charge and was for this reason named the neutron. [2] Joseph John Thomson had supposed that an atom was a uniform sphere of positively charged matter within which electrons were circulating (the "plum-pudding" model).
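The paragraph above pairs Thomson's charge-to-mass measurement with Millikan's charge measurement; combining the two gives the mass of the electron. A quick numerical check, using modern rounded values (the paper itself does not quote them):

```python
# Electron mass from e (Millikan) and e/m (Thomson): m = e / (e/m).
e = 1.602e-19        # elementary charge, in coulombs (modern value)
e_over_m = 1.759e11  # electron charge-to-mass ratio, in C/kg (modern value)

m_electron = e / e_over_m
print(f"electron mass ~ {m_electron:.3e} kg")  # about 9.1e-31 kg
```

This is why neither experiment alone determined the electron's mass: Thomson's cathode-ray deflections fixed only the ratio, and Millikan's oil drops fixed only the charge.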
Then, around the year 1910, Ernest Rutherford (who had discovered earlier that alpha rays consisted of positively charged particles having the mass of helium atoms) was led to the following model for the atom: protons and neutrons exist in a very small nucleus, which means that the tiny nucleus contains all the positive charge and most of the mass of the atom, while negatively charged electrons surround the nucleus and occupy most of the volume of the atom. [3] In formulating his model, Rutherford was assisted by Hans Geiger and Ernest Marsden, who found that when alpha particles hit a thin gold foil, almost all passed straight through, but very few (only 1 in 20,000) were deflected at large angles, with some coming straight back. Rutherford remarked later that it was as if you fired a 15-inch artillery shell at a sheet of paper and it bounced back and hit you. The deflected particles suggested that the atom has a very tiny nucleus that is extremely dense and positive in charge. Also working with Rutherford was Henry G. J. Moseley, who, in 1913, performed an important experiment: when various metals were bombarded with electrons in a cathode-ray tube, they emitted X-rays, the wavelengths of which were related to the nuclear charge of the metal atoms. This led to the law of chemical periodicity, which provided a refinement of the periodic table introduced by Mendeleev. According to

this law, all atoms of an element have the same number of protons in the nucleus. This number is called the atomic number and is given the symbol Z. Hydrogen is the simplest element and has Z = 1.

II. Bohr Model of the Atom

Through Rutherford's work it was known that electrons are arranged in the space surrounding the atomic nucleus. A planetary model of the atom, with the electrons moving in circular orbits around the nucleus, seemed an acceptable model. However, such a "dynamic model" violated the laws of classical electrodynamics, according to which a charged particle, such as an electron, moving in the positive electric field of the nucleus should lose energy by radiation and eventually spiral into the nucleus. To solve this contradiction, in 1913 the Danish physicist Niels Bohr (then studying under Rutherford) postulated that the electron orbiting the nucleus could move only in certain orbits, having in each a certain "quantized" energy. It turns out that the colors in fireworks would help prove him right. [6]

III. Atomic Spectra

The colorful lights of fireworks are emitted by "excited" atoms, that is, by atoms that have absorbed extra energy. Light consists of electromagnetic waves, each (monochromatic) color with a characteristic wavelength λ and frequency ν. Frequency is related to energy E through the famous Planck equation, E = hν, where h is Planck's constant (6.626 × 10⁻³⁴ J s). Note that white light, such as sunlight, is a mixture of light of all colors, so it does not have a characteristic wavelength. For this reason we say that white light has a "continuous spectrum". On the other hand, excited atoms emit a "line spectrum" consisting of a set of monochromatic visible radiations. Each element has a characteristic line spectrum that can be used to identify the element.
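The Planck relation E = hν (with ν = c/λ for light of wavelength λ) can be evaluated directly. A minimal sketch; the 589 nm wavelength of sodium's yellow line is chosen here only as an illustration:

```python
# Photon energy from the Planck relation: E = h*nu = h*c/lambda.
h = 6.626e-34  # Planck's constant, J*s
c = 2.998e8    # speed of light, m/s

def photon_energy(wavelength_m: float) -> float:
    """Energy (J) of one photon of the given wavelength (m)."""
    return h * c / wavelength_m

# Sodium's yellow emission line, ~589 nm
E = photon_energy(589e-9)
print(f"{E:.3e} J")  # about 3.4e-19 J per photon
```

Because E is inversely proportional to λ, blue-green light (copper salts) carries more energy per photon than red light (lithium salts).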
Note that line emission spectra can also be obtained by heating a salt of a metal in a flame. For instance, common salt (sodium chloride) gives a strong yellow light to the flame, coming from excited sodium, while copper salts emit a blue-green light and lithium salts a red light. The colors of fireworks are due to this phenomenon. Scientists in the late nineteenth century tried to quantify the line spectra of the elements. In 1885 the Swiss schoolteacher Johann Balmer discovered a series of lines in the visible spectrum of hydrogen, the wavelengths of which could be related by a simple equation:

λ = k b² / (b² − a²)

in which λ is wavelength, k is a constant, a = 2, and b = 3, 4, 5, ... This group of lines was called the Balmer series. For the red line b = 3, for the green line b = 4, and for the blue line b = 5. Similar series were later discovered: in the infrared region, the Paschen series (with a = 3 and b = 4, 5, ... in the above equation), and much later in the ultraviolet region, the Lyman series (with a = 1 and b = 2, 3, ...). In 1896 the Swedish spectroscopist Johannes Rydberg developed a general equation that allowed the calculation of the wavelength of the red, green, and blue lines in the atomic spectrum of hydrogen:

1/λ = R (1/n_L² − 1/n_H²)

where n_L is the number of the lower energy level to which an electron falls and n_H is the number of the higher energy level from which it falls. R is called the Rydberg constant (1.097 × 10⁷ m⁻¹). R was later shown to be 2π²me⁴Z²/h³c, where m is the mass of the electron, e is its charge, Z is the atomic number, h is Planck's constant, and c is the speed of light.

IV. Bohr's Quantum Model
As noted earlier, Bohr had suggested the quantization of Rutherford's model of the atom. Although he was not aware of the work of Balmer and Paschen when he wrote the first version of his 1913 article, he had incorporated Planck's constant h into his model, which turned out to be an important decision.
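The Rydberg equation for hydrogen can be checked numerically; a minimal sketch:

```python
# Hydrogen line wavelengths from the Rydberg equation
#   1/lambda = R * (1/n_L**2 - 1/n_H**2)
# The Balmer series has n_L = 2; the Lyman and Paschen series have n_L = 1 and 3.
R = 1.097e7  # Rydberg constant, m^-1

def wavelength_nm(n_low, n_high):
    """Wavelength in nm of the line emitted for the jump n_high -> n_low."""
    inv_lambda = R * (1.0 / n_low**2 - 1.0 / n_high**2)  # 1/lambda, in m^-1
    return 1e9 / inv_lambda

# Balmer lines: red (3 -> 2, ~656 nm), green (4 -> 2, ~486 nm), blue (5 -> 2, ~434 nm)
balmer = [wavelength_nm(2, b) for b in (3, 4, 5)]
```

Running this reproduces the visible Balmer wavelengths the text describes, all falling between about 400 and 700 nm.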
Bohr assumed that the absorption or emission of radiation can occur only by "jumps" of the electron from one stationary orbit to another. The energy differences between two such allowed orbits then provided the characteristic frequencies of the emitted light:

ΔE = E_n1 − E_n2 = hν

Planck's constant h was named by Bohr the "quantum of action." Bohr's theory was in close agreement with many experimental facts regarding one-electron atoms (the hydrogen atom and hydrogen-like atoms, such as He⁺ and Li²⁺), but it could not explain the "fine structure" of the spectral lines; that is, the fact that certain lines were actually a set of closely spaced lines. In 1915 and 1916 respectively, W. Wilson and A. Sommerfeld refined Bohr's theory by admitting elliptical orbits. However, it became evident to many physicists, including Bohr himself, that it was time for a scientific revolution. 9
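Bohr's jump condition ΔE = hν can be illustrated with the standard textbook result for hydrogen's quantized levels, E_n = −13.6 eV/n² (a well-known consequence of Bohr's model, though the formula itself is not quoted above):

```python
# Bohr's jump condition dE = E_n1 - E_n2 = h*nu, illustrated with the standard
# textbook energy levels of hydrogen, E_n = -13.6 eV / n**2.
EV = 1.602e-19  # joules per electron-volt
H = 6.626e-34   # Planck's constant, J s

def level_energy_ev(n):
    """Energy of the n-th Bohr orbit of hydrogen, in eV."""
    return -13.6 / n**2

def emitted_frequency_hz(n_high, n_low):
    """Frequency of the photon emitted when the electron jumps n_high -> n_low."""
    delta_e_joules = (level_energy_ev(n_high) - level_energy_ev(n_low)) * EV
    return delta_e_joules / H  # nu = dE / h

# The 3 -> 2 jump gives the red Balmer line, near 4.57e14 Hz (~656 nm).
nu = emitted_frequency_hz(3, 2)
```

The frequency obtained this way agrees with the wavelength of the red Balmer line computed from the Rydberg equation, which is exactly the agreement that made Bohr's model convincing.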

V. Wave Character of Matter
To explain the photoelectric effect (the flow of electric current from a metal cathode when illuminated with visible or ultraviolet light of suitable frequency), Albert Einstein attributed particulate (material) properties to light. Thus, besides being an electromagnetic wave, light could be accounted for in terms of particles called photons. This dual property of light led the French physicist Louis Victor de Broglie to propose, in 1924, that matter should have a dual character too, exhibiting both particulate and wave properties. De Broglie's ingenious idea was verified by experiment soon after (in 1927).

VI. The Schrödinger Equation
The Schrödinger equation is the foundation of quantum mechanics. It can be solved exactly for very few simple systems. In chemistry it is solvable without any approximation only for the hydrogen atom or hydrogen-like atoms (monoelectronic atomic cations). The mathematical solutions are called hydrogen orbitals; in general, an orbital is defined as a "one-electron wave function that obeys certain mathematical restrictions." Hydrogen orbitals depend on the values of the three quantum numbers n (principal), l (angular momentum or "azimuthal"), and m_l (magnetic). The principal quantum number, n, identifies an electron's main shell, or energy level, and assumes integer values (1, 2, 3, ...). The azimuthal (or angular momentum) quantum number, l, describes the subshell, or sub-level, occupied by the electron and has values that depend on n, taking values from 0 to n − 1. For s orbitals l = 0; for p orbitals l = 1; for d orbitals l = 2; and for the more complex f orbitals l = 3. Finally, the magnetic quantum number, m_l, identifies the particular orbital an electron is in and has values that depend on l, taking integer values from −l to +l.
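The rules for the three quantum numbers can be enumerated directly; a short sketch:

```python
# Allowed quantum-number combinations for a main shell, per the rules above:
# l runs from 0 to n-1, and m_l runs over the integers from -l to +l.
def orbitals_in_shell(n):
    """List of (l, m_l) pairs, one per orbital in main shell n."""
    return [(l, m) for l in range(n) for m in range(-l, l + 1)]

# Shell n=2: one s orbital (l=0) plus three p orbitals (l=1) -> 4 orbitals.
# In general a shell holds n**2 orbitals, and so at most 2*n**2 electrons.
shell2 = orbitals_in_shell(2)
```

Counting the pairs for n = 1, 2, 3 gives 1, 4, and 9 orbitals, matching the familiar shell capacities of 2, 8, and 18 electrons.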
For a given value of n, there can be only one s orbital, but there are three kinds of p orbitals, five kinds of d orbitals, and seven kinds of f orbitals. Although it does not follow from the Schrödinger equation, there is a fourth quantum number, m_s, that describes the spin of the electron. It can assume two values, +1/2 and −1/2. According to the Pauli exclusion principle, no two electrons in an atom can have the same set of four quantum numbers. If two electrons have the same values for n (main shell), l (sub-shell), and m_l (orbital), they must differ in spin. Each orbital in an atom can hold no more than two electrons, and they must be opposed in spin. Such a couple of electrons, opposite in spin, constitutes an electron pair. For practical reasons, various graphical representations of atomic orbitals are used. The most useful are boundary surfaces, such as those shown in Figure 2. These enclose regions of space where the electron described by the corresponding wave function (orbital) can be found with high probability (e.g., 99%); s orbitals are spherical, p orbitals are dumb-bell shaped, d orbitals have a four-leaf-clover shape, while f orbitals have complex shapes.

VII. Classical and Quantum Physics
A fundamental difference between classical and quantum physics is that, while in classical physics the dynamic variables can be represented by ordinary algebraic variables, in quantum physics they are represented by "operators," which are expressed by mathematical matrices. This is a consequence of the fact that, while in classical physics any disturbance caused by the action of observation or measurement can, in principle, be calculated, in the submicroscopic world the very action of observation or measurement of a dynamic variable disturbs the system. This is equivalent to the famous "uncertainty principle" of Heisenberg. The distinction between quantum (very small) and classical systems is generally made in units of h, Planck's constant.
The size of h (6.626 × 10⁻³⁴ J s) is extremely small for the macroscopic world, but for the submicroscopic world of atoms, ions, molecules, etc., h is not small. Thus quantum mechanics is radically different from classical mechanics. For many-electron atoms, no exact solutions to the corresponding Schrödinger equation exist because of the electron-electron repulsions. However, various approximations can be used to locate the electrons in these atoms. The common procedure for predicting where electrons are located in larger atoms is the Aufbau (building up) principle.

VIII. The Aufbau Principle
The arrangement of electrons in electron shells (K, L, M, N) is important for explaining both the chemical behavior of the elements and their placement in the periodic table. The first shell is called K (n = 1), the second L (n = 2), the third M (n = 3), etc. Knowing the atomic number of an element, one places that number of electrons, one after another, into the various atomic orbitals, building up the atom until all the electrons have been added. Three basic principles are followed: the principle of least energy (electrons seek the lowest available energy level), the Pauli exclusion principle (no more than two electrons per orbital), and Hund's rule (electrons of the same energy spread out before pairing up). The principle of least energy alone would dictate that all electrons be located in the lowest energy K shell, in the 1s orbital. However, the Pauli principle forbids this by requiring that no two electrons in an atom be described by the same set of four quantum numbers. This leads to the restriction that an orbital cannot accommodate more than two electrons, and they must be of opposite spin. In this way, for a given value of n, the s orbital can accommodate no more than two electrons, the three p orbitals up to six electrons, the five d orbitals up to ten electrons, and the seven f orbitals up to fourteen electrons. Hund's rule introduces one final restriction: electrons in degenerate (same energy) orbitals should spread out to fill as many orbitals as possible before pairing up. The seven electrons in the nitrogen atom would be placed in the 1s, 2s, and 2p sublevels as shown below. (Electrons are shown as up-pointing arrows with spin = +½, or down-pointing arrows with spin = −½.)
The lowest energy 1s orbital fills first, then the 2s orbital, then the last three electrons go into the three higher energy 2p orbitals. In a hydrogen atom all orbitals within the same main shell have the same energy, but this is not true for atoms with many electrons because of the interactions among the electrons. Within a given main shell of a large atom, the s orbital is the lowest in energy, followed by the p orbitals, then the d orbitals, and finally the f orbitals. The electron configurations of atoms are more commonly shown as follows:

Nitrogen (Z = 7): 1s² 2s² 2p³

This shows that the nitrogen atom has a nuclear charge of +7, and it therefore has seven electrons. Two electrons are in the first main shell in an s orbital, and the other five are in the second main shell, two in the s orbital and three in the p_x, p_y, and p_z orbitals. Each shell can have as many kinds of orbitals (subshells) as the shell number. The first shell has one (s), the second has two (s and p), the third has three (s, p, and d), and the fourth has four (s, p, d, and f). The fifth would have five, if there were any atoms big enough to have a full fifth shell. As atoms get larger, the order of filling electrons into orbitals gets more complicated. In the element scandium (Sc), for example, the 4s orbital is filled before the 3d orbitals begin to fill. This may be explained in terms of the difference in shielding of the nucleus by the s and d electrons, as well as of interelectronic repulsion effects. It thus appears as if the 4s orbital is lower in energy than the 3d orbitals. (See Figure 3.) The general order of filling of the various subshells is:

1s < 2s < 2p < 3s < 3p < 4s < 3d < 4p < 5s < 4d < 5p < 6s < 4f < 5d < 6p < 7s < 5f < 6d < 7p

The d electrons always come in one shell late, and the f electrons two shells late. This can be demonstrated with the lead (Pb) atom.
Using the Aufbau procedure to show the order of filling, the electron configuration for the Pb atom is:

Lead (Pb, Z = 82): 1s² 2s² 2p⁶ 3s² 3p⁶ 4s² 3d¹⁰ 4p⁶ 5s² 4d¹⁰ 5p⁶ 6s² 4f¹⁴ 5d¹⁰ 6p²

Perhaps the easiest way to determine the correct filling order is to use the periodic table. The square for each element represents the most recently added electron. In the first shell there are two s electrons; in the second there are two s and six p electrons; and in the third there are two s and six p electrons, and then ten more fill up the 3d orbitals after the fourth shell has begun. The transition elements result from electrons filling in the d orbitals, and the lanthanide and actinide elements from electrons filling in the f orbitals. Electron configurations for the various elements in group 5A of the periodic table (but not indicating the order of filling) are shown below:

Nitrogen (N, Z = 7): 1s² 2s² 2p³
Phosphorus (P, Z = 15): 1s² 2s² 2p⁶ 3s² 3p³
Arsenic (As, Z = 33): 1s² 2s² 2p⁶ 3s² 3p⁶ 3d¹⁰ 4s² 4p³
Antimony (Sb, Z = 51): 1s² 2s² 2p⁶ 3s² 3p⁶ 3d¹⁰ 4s² 4p⁶ 4d¹⁰ 5s² 5p³
Bismuth (Bi, Z = 83): 1s² 2s² 2p⁶ 3s² 3p⁶ 3d¹⁰ 4s² 4p⁶ 4d¹⁰ 4f¹⁴ 5s² 5p⁶ 5d¹⁰ 6s² 6p³

Note that the electron configurations for the larger atoms can get rather cumbersome, but they can be readily shortened by using the noble gas core convention.

Nitrogen: [He] 2s² 2p³
Phosphorus: [Ne] 3s² 3p³
Arsenic: [Ar] 3d¹⁰ 4s² 4p³
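The subshell filling order used above can be generated programmatically with the familiar n + l (Madelung) rule; a sketch:

```python
# Reproduce the Aufbau filling order with the n + l (Madelung) rule:
# subshells fill in order of increasing n + l; ties go to the smaller n.
L_LETTER = "spdf"

def filling_order(max_n=7):
    """Subshell labels (s, p, d, f only) sorted into Aufbau filling order."""
    subshells = [(n, l) for n in range(1, max_n + 1) for l in range(min(n, 4))]
    subshells.sort(key=lambda nl: (nl[0] + nl[1], nl[0]))
    return [f"{n}{L_LETTER[l]}" for n, l in subshells]

order = filling_order()
# order begins: 1s, 2s, 2p, 3s, 3p, 4s, 3d, 4p, 5s, 4d, 5p, 6s, 4f, 5d, 6p, ...
```

The first nineteen entries reproduce the sequence quoted in the text, including the "late" arrival of the 3d subshell after 4s and of 4f after 6s.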

Antimony: [Kr] 4d¹⁰ 5s² 5p³
Bismuth: [Xe] 4f¹⁴ 5d¹⁰ 6s² 6p³

IX. Limitations of Bohr's Theory
There are some drawbacks or limitations of Bohr's theory, which are given below. It could not explain the line spectrum of multi-electron atoms. The model failed to explain the effect of a magnetic field on the spectra of atoms (Zeeman effect). The effect of an electric field on the spectra could not be explained by Bohr's model (Stark effect). The shapes of molecules arising out of directional bonding could not be explained. The dual nature of electrons (both as wave and particle) and the assumed motion of the electron in well-defined orbits were not correct.

X. Conclusion
This paper has discussed the history of the atom and how the idea of atomic structure was introduced by various scientists. In particular, the conclusions of Niels Bohr have been emphasized. Bohr gave an almost complete picture of atomic structure, with various remarkable corrections to the conclusions of earlier scientists. His model also has its drawbacks, but from Bohr's work we can obtain a broad understanding of atomic structure.

XI. References
[1] Atkins, Peter, and Jones, Loretta (2002). Chemical Principles: The Quest for Insight, second edition. New York: W. H. Freeman.
[2] Chang, Raymond (2002). Chemistry, seventh edition. Boston: McGraw-Hill.
[3] Fong, P. (1962). Elementary Quantum Mechanics. Reading, MA: Addison-Wesley.
[4] Fricke, M. (1976). "Quantum Mechanics." In Method and Appraisal in the Physical Sciences: The Critical Background to Modern Science, ed. C. Howson. New York: Cambridge University Press.
[5] Lakhtakia, Akhlesh (Ed.); Salpeter, Edwin E. (1996). "Models and Modelers of Hydrogen." American Journal of Physics (World Scientific) 65 (9): 933.
[6] Niels Bohr (1913).
"On the Constitution of Atoms and Molecules, Part I" (PDF). Philosophical Magazine 26 (151).
[7] Olsen and McDonald 2005.
[8] "CK12 Chemistry Flexbook Second Edition: The Bohr Model of the Atom." Retrieved 30 September.
[9] Niels Bohr (1913). "On the Constitution of Atoms and Molecules, Part II: Systems Containing Only a Single Nucleus" (PDF). Philosophical Magazine 26 (153).
[10] "Revealing the hidden connection between pi and Bohr's hydrogen model." Physics World (November 17, 2015).
[11] M.A.B. Whitaker (1999). "The Bohr-Moseley synthesis and a simple model for atomic x-ray energies." European Journal of Physics 20 (3).
[12] Smith, Brian. "Quantum Ideas: Week 2." Lecture Notes, p. 17. University of Oxford. Retrieved Jan. 23.
[13] A. Sommerfeld (1916). "Zur Quantentheorie der Spektrallinien." Annalen der Physik 51 (17): 1.
[14] W. Wilson (1915). "The quantum theory of radiation and line spectra." Philosophical Magazine 29 (174).

XII. Acknowledgments
At the very outset, I express my sincerest gratitude and indebtedness to my honourable teacher Mr. Nirmal Paul for his direction, guidance and continuous encouragement during each and every stage of the completion of this work. His truly scientific intuition has been a constant oasis of ideas and passions, which has exceptionally inspired and enriched me as a researcher. His concern with originality has triggered and nourished an intellectual maturity that I will benefit from for a long time to come. Finally, I would like to thank everybody who was important to the successful realization of this work, along with my apology to those I could not mention personally.

200 Special Issue: Conference Proceeding of i-CON-2016, Global Journal on Advancement in Engineering and Science (GJAES), Vol. 2, Issue 1: March-2016, ISSN (Print)

Review Work
DIFFERENT PHYSICOCHEMICAL STRATEGIES FOR THE REMOVAL OF HEXAVALENT CHROMIUM
Aniruddha Roy, Department of Applied Science, Global Institute of Management & Technology, Krishnagar, West Bengal, India, aniruddha.rick@yahoo.com

Abstract: Some metals, as micronutrients, have a major role in the life and growth processes of plants and animals. However, certain forms of some metals may also act as toxic materials, even in relatively small quantities. Chromium is such a metal, whose concentration above a certain limit may cause serious problems to the health of living organisms. The environmental concentration of chromium is known to increase due to industrial development. Two ionic states of chromium, Cr (III) and Cr (VI), are present in various forms in soil, water and the biota. Chromium and its compounds originate in the environment mainly from anthropogenic activities. Further, in plants, soil and water, chemical equilibrium between different chromium forms may exist. The atmosphere has become a major pathway for long-range transfer of chromium to different ecosystems. The routes of exposure of chromium (VI) for human beings are thus the different atmospheric segments as well as food. In biological systems only Cr (III) and Cr (VI) are significant. Of these two states, trivalent chromium (Cr III) is considered an essential component, while severe and often deadly pathological changes are associated with excessive intake of Cr (VI) compounds. Cr (VI) has major toxic effects on biological systems. It has been found that occupational exposure to hexavalent chromium compounds leads to a variety of clinical problems, even some specific forms of cancer.
This paper intends to present the adverse effects of Cr (VI) on the environment as well as on human beings, and also tries to find a way to solve the problem by a newly developed, efficient and cost-effective technique.

Keywords: Heavy Metals, Chromium, Trivalent Chromium, Hexavalent Chromium, Toxicological Effects, Carcinogenic Effects, Cr (VI) Reduction

I. Introduction
In the environment metals occur naturally in varying concentrations and are present in rock, soil, water, and even in plants and animals. If the concentration levels of required metals in living organisms rise above certain limits, there must be some negative impact on them, because in that case metals may easily be accumulated in the food chain of the biosphere. Cadmium, mercury, lead, copper, zinc and chromium are the heavy metals which have received special attention in ecotoxicology in recent years, even though some of these metals are necessary for the biological function of organisms. These metals may occur in different forms, such as ions in water, vapor in air, or salts in metal rock, sand and soil. They may be bound by organic or inorganic molecules or attached to particles present in the air. Both natural and anthropogenic sources emit metals into the environment [1]. Once emitted, metals may reside in the environment for hundreds of years or more. Human activities have drastically changed the biogeochemical cycles and balance of some heavy metals in the environment. Therefore, a tendency towards their accumulation in soil, sea water, fresh water and sediments is observed. During the last three decades considerable attention has been given to the problems created by the adverse effects of some heavy metals on various ecosystems in different environmental compartments. Numerous field observations indicate a significant increase of heavy metal concentrations in agricultural and forest soil as well as in marine and inland water sediments.
This increase is frequently observed in remote areas, even those thousands of kilometers away from the major anthropogenic sources. This is mainly due to the spreading of these heavy metals by the flow of underground water along the hydraulic gradient, the flow of air, or other transboundary atmospheric long-range transport systems. To assess the ecological conditions and health risks associated with atmospheric fluxes of heavy metals, it is required to understand the relationship between the sources of emission of these metals to the atmosphere and the levels of their concentrations in the surrounding air and precipitate. In order to estimate the risk caused by metal pollution correctly, it is important to know the bioavailability of the different chemical forms of the metals. Several heavy metals are more available when present in organometallic complexes (e.g. dimethyl mercury, tetraethyl lead etc.) than as inorganic ions [2]. Chromium is one of the heavy metals whose concentration in the environment is still increasing. Chromium (Cr) is most commonly found in the trivalent state in nature. Hexavalent chromium compounds are also found in small quantities. Chromite (Cr₂O₃·FeO) is the only ore which contains a significant amount of chromium. It has been detected that Cr (III) is 100 times less toxic and 1000 times less

201 A. Roy et al., Different Physicochemical Strategies for the Removal of Hexavalent Chromium, Global Journal on Advancement in Engineering and Science, 2(1), March 2016

mutagenic than Cr (VI). There has been an increasing demand for chromite ore in recent years. In the opencast mining process and in the chromite ore processing industries, such as ferrochrome plants and chromite ore concentrating plants, chromite ore processing waste as well as waste rock materials are dumped on open ground without considering the environmental impact. This results in oxidation of Cr (III) to Cr (VI), which causes danger to the topography of the area. It also results in leaching of chromium (VI) and other impurities into the ground water as well as surface water bodies. Therefore, contamination of Cr in ground water, surface water and soil in the vicinity of mines and chromite ore processing industries is expected. The most significant impact of chrome contamination is observed in the hydrosphere, and it is increasing day by day. Chromium concentrations in rivers and fresh water lakes range commonly between 1.0 and 10.0 µg/L, and in ocean water between 0.1 and 5.0 µg/L. It has been estimated that about 6.7 × 10⁶ kg of chromium flows annually into the sea with industrial waste effluents [3].

II. Sources and Uses of Chromium
Chromite ore (Cr₂O₃·FeO) is considered the main source of chromium; it contains significant amounts of chromium in the trivalent state. Hexavalent chromium is also found in a very small quantity in this ore. The ore is obtained from opencast mines; it is not found in pure form, and its highest grade contains about 55% chromic oxide. Chromium and its compounds are useful in common life. Ferrochrome, the main product of chromite ore, is an alloy used to produce steel.
Potassium chromate, sodium chromate and dichromate are among the other most important chromium products. They are mainly used for manufacturing chromic acid and chromium pigments for the paint, ink and textile industries, in leather tanning, and for corrosion control. Chromite ore is also used in the refractory industry to make bricks, mortar etc., as chromite has the ability to enhance the thermal shock resistance, volume stability and strength of the material [4]. Other applications of chromium compounds include the production of medicines, chemicals for laboratory use, etc.

III. Routes of Exposure
The environmental concentration of chromium is known to increase due to industrial development. Two ionic forms of chromium, Cr (III) and Cr (VI), are present in various forms in soil, water and the biota. Chromium and its compounds originate in the environment mainly from anthropogenic activities. Further, in plants, soil and water, chemical equilibrium between different chromium forms may exist. The atmosphere has become a major pathway for long-range transfer of chromium to different ecosystems [5]. The routes of exposure of chromium (VI) for human beings are thus the different atmospheric segments as well as food.

A. Air
The bronchial tree is the primary target organ for the carcinogenic effects of chromium (VI). Inhalation of chromium-containing aerosols is therefore a major concern with respect to exposure to chromium compounds. The retention of chromium compounds from inhalation, based on a 24-hour respiratory volume of 20 m³ in urban areas with an average chromium concentration of 50 mg/m³, is about mg. Individual uptake may vary depending on other relevant factors, e.g. tobacco smoking, and on the distribution of particle sizes in the inhaled aerosol. Chromium has been detected as a component of cigarette tobacco, its concentration varying from mg/kg [6]. However, no clear information is available about the fraction of chromium that appears in mainstream tobacco smoke.

B.
Drinking Water
The efficient absorption of metals by soil tends to limit the effects of atmospheric input of chromium. The dumping of industrial waste materials significantly increases chromium concentration in soil and is usually accompanied by surface and underground water contamination. Hexavalent chromium is known as the most mobile chromium form in soil as well as in water systems. Chromium (III) is generally not transported over great distances because of its low solubility and also its tendency to be adsorbed by solid particles in the appropriate pH range. Redox conversion of Cr (III) to Cr (VI) may also happen in the presence of oxygen in air, which increases chromium (VI) dislocation from the soil into water systems. Thus, the concentration of chromium in water varies according to the type of surrounding industrial sources and the nature of the underlying soil. It is a natural consequence that an increase of Cr (VI) concentration in water systems also increases Cr (VI) intake within the biosphere.

C. Food
The daily chromium intake from food is difficult to assess because studies have used methods that are not easily comparable. The chromium intake also depends on the diet of the area or person concerned [7]. Levels of daily chromium intake from different routes of exposure are shown in Table-1.

Table-1: Levels of daily chromium intake from different routes of exposure.

Routes of exposure    Daily intake    Absorption
Food stuff            <200 µg         <10 µg
Drinking water        µg              <1 µg
Ambient air           <1000 ng        <5 ng

IV. Effects of Hexavalent Chromium on Humans
A. Toxicological Effects
Severe and often deadly pathological changes are associated with excessive intake of Cr (VI) compounds. Cr (VI) has major toxic effects on biological systems. It has been found that occupational exposure to hexavalent chromium compounds leads to a variety of clinical problems. Inhalation and retention of materials containing Cr (VI) can cause various health problems. Ulcers, perforation of the nasal septum, acute irritating dermatitis, asthma, bronchitis, and inflammation of the larynx and liver have been recorded due to exposure to Cr (VI) compounds. Skin contact with hexavalent chromium compounds can induce skin allergies. Cr (VI) compounds are irritating and corrosive when allowed to come in contact with the skin, digestive system or lungs [8].

B. Mutagenic and Carcinogenic Effects
The mutagenic and carcinogenic nature of the Cr (VI) ion was established long ago. Cancer was documented in a chrome-exposed worker 100 years ago. The mechanism of cancer formation caused by Cr (VI) is not known for certain. However, it has been postulated that Cr (VI) binds to double-stranded deoxyribonucleic acid (DNA), thus altering the gene replication, repair and duplication processes, which ultimately tend towards cancer. Workers exposed to Cr (VI) compounds in stainless steel welding, pigment production and other industrial occupations may suffer from various forms of cancer.

V. Analytical Method: Spectrophotometric Measurement of Chromium (VI)
Spectrophotometric measurement of Cr (VI) may be conducted by the following method.
This method involves the steps stated below.

1. At first, a stock solution of 1000 ppm Cr (VI) is prepared by dissolving 2.8 g of K₂Cr₂O₇ in distilled water to 1000 ml volume. A set of four standard solutions of Cr (VI) of **appropriate concentrations is prepared by diluting the stock solution.

2. To estimate Cr (VI) in a water sample, the sample is taken directly. To estimate total Cr (VI) in any type of solid waste (COPSW/chromite ore/soil), alkaline digestion of the sample is required to solubilize both the water-soluble and the water-insoluble Cr (VI) present in that sample. In this method 2.5 ± 0.1 g of powdered sample ( mm size) is digested using 50 ml of alkaline solution (0.28 M Na₂CO₃ and 0.5 M NaOH), 400 mg MgCl₂ and 0.5 ml of 1.0 M phosphate buffer solution (0.5 M K₂HPO₄ + KH₂PO₄) at C for 1 hour. The mass is then cooled to room temperature and filtered through a 0.45 µm membrane filter pad into a 250 ml volumetric flask with proper washing. Finally, the pH of the filtrate is adjusted to 7.5 ± 0.5 by dropwise addition of 5.0 M HNO₃ solution. The volume is made up to the mark.

3. Next, 50 ml of the filtrate is taken in a 100 ml volumetric flask. 2 ml of 0.2 (N) H₂SO₄ and 2 ml of 1,5-diphenylcarbazide solution are added one by one. The volume is then made up to the mark and the solution is mixed thoroughly. The solution is kept for 10 minutes for the appearance of a red-violet colour. Colour development for the set of four standard solutions is performed in the same manner.

4. The absorbance of all solutions is measured at 540 nm wavelength by spectrophotometer. A calibration curve is then constructed by plotting the absorbance values against the concentrations of the standard solutions. The unknown concentration of the sample solution is obtained by putting its absorbance value into the calibration curve. However, in the case of a modern instrument, the calibration curve is constructed automatically and the concentration of the unknown solution is provided by the instrument directly.

5.
Finally, the concentration of Cr (VI) in the parent sample (without dilution) is calculated by applying the dilution factor. [Concentration of Cr (VI) in the parent sample = Result obtained × Dilution factor.]

** It is to be noted that the concentrations of Cr (VI) in the standard solutions are to be chosen in such a way that the Cr (VI) concentration in the sample solution after final dilution lies between those of the standards.

One set of experimental data is shown in Table-2. The calibration curve for Cr (VI) is shown in Fig-2, and the estimation of Cr (VI) in different types of samples is shown in Fig-1 in the form of a flowchart.
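Steps 4 and 5 amount to a linear (Beer's law) calibration followed by a dilution correction. A minimal sketch of that arithmetic, using made-up standard concentrations and absorbances purely for illustration (not the experimental values of this study):

```python
# Sketch of the calibration arithmetic in steps 4-5: fit a straight line
# (Beer's law) through the standards, invert it for the unknown, then apply
# the dilution factor. All numbers below are illustrative, not measured data.

def fit_line(xs, ys):
    """Least-squares slope and intercept for y = slope*x + intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
            / sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

def sample_concentration(absorbance, slope, intercept, dilution_factor=1.0):
    """Concentration of the parent sample from a measured absorbance."""
    return (absorbance - intercept) / slope * dilution_factor

# Four hypothetical standards (mg/L) and their absorbances at 540 nm:
std_conc = [0.2, 0.4, 0.6, 0.8]
std_abs = [0.11, 0.22, 0.33, 0.44]
m, c = fit_line(std_conc, std_abs)
# A sample read at A = 0.275 after a 1:2 dilution (dilution factor = 2):
unknown = sample_concentration(0.275, m, c, dilution_factor=2.0)
```

Note that, exactly as the footnote in the procedure requires, the sample absorbance (0.275) falls inside the range spanned by the standards, so the result is an interpolation rather than an extrapolation.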

Table-2: Absorbance values of Cr (VI) solutions with different concentrations [Concentration (mg/lit) vs. Absorbance; numerical entries not recoverable from the source].

Fig-1: Flow Chart Diagram for estimation of Cr (VI) in different types of samples

Fig-2: Calibration curve for Cr (VI) (absorbance vs. concentration, mg/lit)

From the calibration curve [Fig-2] the concentration of Cr (VI) in the sample is found to be mg/lit.

VI. Removal of Hexavalent Chromium by a New Technique
A. Removal of Hexavalent Chromium from Solid Waste
To remove hexavalent chromium from its main sources, such as chromite ore, chromite ore processing solid waste, and surface or underground water, many research works have been carried out in recent years. One new technique in this regard has been developed by the author and his coworkers in the GEO-CHEM LABORATORY, Bhubaneswar. This method is based on the reduction of hexavalent chromium to trivalent chromium by a reductant solution. This promising technique appears to be more effective and less expensive than other conventional physicochemical methods. The process may relieve the toxicity of chromium (VI) acting on living organisms by converting it to chromium (III), as it has been proved that Cr (III) is 100 times less toxic and 1000 times less mutagenic than Cr (VI), as mentioned previously. The main sources of Cr (VI) are chromite ore processing solid waste (COPSW) and chromite ore, which are obtained from ferrochrome (an alloy used to produce steel) production plants and chromite ore mines respectively. This process is mainly based on the removal of Cr (VI) from COPSW, consisting of slag, conditioning tower sludge and electrostatic precipitator dust from ferrochrome production plants. It has been estimated that chromite ore processing solid waste contains Cr (VI) in a very high concentration. In COPSW, the amount of Cr (VI) in the solid phase is found to be in the range of 200 to 600 mg/kg, and that in the dissolved phase in the range of 10 to 55 mg/kg.
Chromium (VI) present in the dissolved phase can be removed by washing the COPSW thoroughly with ordinary water; Cr (VI) in the solid phase, however, cannot be removed by any physical treatment. The new method removes Cr (VI) present in COPSW in both the solid and dissolved phases. In this method, a solution of ferrous sulphate (FeSO4) and sodium dithionite (Na2S2O4) is injected into the source to promote the reduction of Cr (VI) to Cr (III); the Na2S2O4 in combination with FeSO4 also inhibits the oxidation and precipitation of the ferrous ion. Laboratory batch tests on COPSW using a 0.06 M FeSO4 + Na2S2O4 solution indicate effective reduction of Cr (VI) in both the solid and dissolved phases. It has been found, by trial and error, that 100 ml of reductant solution is sufficient to completely remove (reduce) the hexavalent chromium in 2 kg of COPSW. A field (site) test was performed in which 1600 lit of 0.08 M FeSO4 + Na2S2O4 solution was injected (by spraying) into a COPSW stack containing 30,000 kg of material, after which the material was mixed thoroughly. Estimation of Cr (VI) in a COPSW sample collected uniformly from the stack (as per the IS sample-collection procedure) showed an effective reduction of Cr (VI) in the stack: only traces (3 to 5 ppm) were detected.
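The arithmetic behind scaling the laboratory dose (100 ml of reductant per 2 kg of COPSW) up to the 30,000 kg field stack can be checked in a few lines; `field_dose_litres` is a hypothetical helper name, and the comparison with the 1600 lit actually applied is taken from the figures above:

```python
# Scaling the laboratory dose to the field trial described in the text.
# Lab batch result: 100 ml of reductant solution fully treats 2 kg of COPSW.
LAB_VOLUME_ML_PER_KG = 100 / 2          # 50 ml per kg

def field_dose_litres(stack_mass_kg):
    """Minimum reductant volume (lit) implied by the laboratory ratio."""
    return stack_mass_kg * LAB_VOLUME_ML_PER_KG / 1000.0

stack_kg = 30_000                        # COPSW stack treated in the site test
print(field_dose_litres(stack_kg))      # 1500.0 lit; the trial applied 1600 lit
```

The laboratory ratio thus implies a minimum of 1500 lit for the stack, consistent with the 1600 lit used in the site test (a small excess over the minimum).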

Table-3: Experimental results of laboratory batch tests on COPSW. Columns: sample batch no.; weight of sample taken for treatment (kg); initial concentration of total Cr (VI) before treatment (mg/kg); dose of reductant solution (volume, ml; concentration, M); final concentration of total Cr (VI) after treatment (mg/kg).
*It has been observed that at least 100 ml of reductant solution is required to moisten a 2 kg sample properly.

Fig-3: Plot of residual Cr (VI) concentration vs. concentration of reductant solution.
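The optimum-dose logic implied by these batch tests, where residual Cr (VI) falls with increasing reductant concentration until it plateaus, can be sketched as follows. The dose/residual pairs are purely illustrative, not the Table-3 measurements, and `optimum_dose` is a hypothetical helper name:

```python
# Sketch of the "optimum dose" idea from the batch tests: residual Cr(VI)
# decreases with reductant concentration until it plateaus; take the first
# dose after which no further drop larger than `tol` is observed.
# Dose/residual pairs below are illustrative, NOT recovered Table-3 values.

def optimum_dose(doses, residuals, tol=1.0):
    """Return the smallest dose (mol/lit) at which residual Cr(VI) (mg/kg)
    has effectively stopped decreasing."""
    for i in range(len(doses) - 1):
        if residuals[i] - residuals[i + 1] <= tol:
            return doses[i]          # plateau reached at this dose
    return doses[-1]                 # still decreasing: highest tested dose

doses = [0.02, 0.04, 0.06, 0.08, 0.10]        # mol/lit (hypothetical)
residuals = [380.0, 120.0, 6.0, 5.0, 5.0]     # mg/kg after treatment (hypothetical)
print(optimum_dose(doses, residuals))
```

On real batch data this reproduces, numerically, the knee that Fig-3 shows graphically; dosing beyond that point wastes reductant without lowering residual Cr (VI).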

Fig-4: Distribution of residual Cr (VI) in samples after treatment with reductant solutions of different concentrations.

B. Removal of Hexavalent Chromium from Waste Water

Although this method was developed for the removal of Cr (VI) from chromite ore processing solid waste, it may also be applicable to other sources, such as underground and surface water. Cr (VI) can be removed from chromite ore in the same way as described for COPSW. For the treatment of chromium (VI)-contaminated water (typically found in areas located near chromite ore mines, or where the soil has a high percentage of Cr2O3 and FeO), the precipitate formed after reduction (Cr2O3.xH2O) is removed by filtration. In all cases, the dose (concentration and quantity) of reductant solution is to be adjusted according to the concentration range of Cr (VI) in the source. Some experimental results are shown in the table below.

Table-4: Experimental results of laboratory batch tests on waste water. Columns: sample batch no.; volume of waste-water sample taken for treatment (L); initial concentration of total Cr (VI) before treatment (mg/L); dose of reductant solution (volume, ml; concentration, M); residual concentration of total Cr (VI) after treatment and filtration (mg/L).

VII. Discussion

The experimental data show that, in both cases, the residual Cr (VI) concentration decreases with increasing dose of reductant solution. Beyond a certain reductant concentration, however, the residual Cr (VI) concentration no longer decreases; that dose may be considered the optimum dose for a sample containing a given concentration of Cr (VI). Thus, to remove Cr (VI) from any source, the optimum dose of reductant solution should first be determined experimentally by trial and error, and may then be applied on a large scale to remove hexavalent chromium from different sources.

VIII. Conclusion

It may be concluded that hexavalent chromium above certain limits in mine discharge, soil or ground water has an adverse effect on the environment and on living beings, whereas trivalent chromium is thought to be necessary for the normal functioning of living organisms. The necessity of chromium (III), however, remains a controversial subject: some laboratory studies have shown that trivalent chromium may cause allergy, some Cr (III) compounds are toxic, and even genotoxic, for humans, and oxidation of Cr (III) back to Cr (VI) by aerial oxidation is a common process. More research is therefore needed to reveal the actual impact of trivalent chromium on the biosphere, and mainly on human beings.

IX. Acknowledgement

The authors gratefully acknowledge the support and assistance provided by GEO-CHEM LABORATORY, Bhubaneswar, Orissa during this study. Mr. Asit Kr. Roy, Ex-Assistant Professor in Chemistry, GIMT, Krishnagar, and Dr. Rupak Bhattacharyya, HOD, AS & H Department, GIMT, Krishnagar, deserve special thanks from the authors for their valuable suggestions during the development of this paper.


PI-Controller Tuning For Heat Exchanger with Bypass and Sensor

PI-Controller Tuning For Heat Exchanger with Bypass and Sensor International Journal of Electrical Engineering. ISSN 0974-2158 Volume 5, Number 6 (2012), pp. 679-689 International Research Publication House http://www.irphouse.com PI-Controller Tuning For Heat Exchanger

More information

Three Element Boiler Drum Level Control using Cascade Controller

Three Element Boiler Drum Level Control using Cascade Controller Three Element Boiler Drum Level Control using Cascade Controller A. Amarnath Kumaran 1, M. Ponni Bala 2, V. Sivaraman 3 1 PG Scholar, 2 Associate Professor, 3 Instrumentation Manager, Department of Electronics

More information

Adaptive Cruise Control for vechile modelling using MATLAB

Adaptive Cruise Control for vechile modelling using MATLAB IOSR Journal of Electrical and Electronics Engineering (IOSR-JEEE) e-issn: 2278-1676,p-ISSN: 2320-3331, Volume 12, Issue 2 Ver. II (Mar. Apr. 2017), PP 82-88 www.iosrjournals.org Adaptive Cruise Control

More information

A Research Reactor Simulator for Operators Training and Teaching. Abstract

A Research Reactor Simulator for Operators Training and Teaching. Abstract Organized and hosted by the Canadian Nuclear Society. Vancouver, BC, Canada. 2006 September 10-14 A Research Reactor Simulator for Operators Training and Teaching Ricardo Pinto de Carvalho and José Rubens

More information

Solar Flat Plate Thermal Collector

Solar Flat Plate Thermal Collector Solar Flat Plate Thermal Collector 1 OBJECTIVE: Performance Study of Solar Flat Plate Thermal Collector Operation with Variation in Mass Flow Rate and Level of Radiation INTRODUCTION: Solar water heater

More information

HVAC INTEGRATED CONTROL FOR ENERGY SAVING AND COMFORT ENHANCEMENT vahid Vakiloroaya

HVAC INTEGRATED CONTROL FOR ENERGY SAVING AND COMFORT ENHANCEMENT vahid Vakiloroaya HVAC INTEGRATED CONTROL FOR ENERGY SAVING AND COMFORT ENHANCEMENT vahid Vakiloroaya (vahid.vakiloroaya@engineer.com) ABSTRACT: The overall attainable reduction in energy consumption and enhancement of

More information

MODELLING AND SIMULATION OF BUILDING ENERGY SYSTEMS USING INTELLIGENT TECHNIQUES

MODELLING AND SIMULATION OF BUILDING ENERGY SYSTEMS USING INTELLIGENT TECHNIQUES MODELLING AND SIMULATION OF BUILDING ENERGY SYSTEMS USING INTELLIGENT TECHNIQUES Ph.D. THESIS by V. S. K. V. HARISH ALTERNATE HYDRO ENERGY CENTRE INDIAN INSTITUTE OF TECHNOLOGY ROORKEE ROORKEE-247667 (INDIA)

More information

Thermal comfort assessment of Danish occupants exposed to warm environments and preferred local air movement

Thermal comfort assessment of Danish occupants exposed to warm environments and preferred local air movement Downloaded from orbit.dtu.dk on: Mar 08, 2019 Thermal comfort assessment of Danish occupants exposed to warm environments and preferred local air movement Simone, Angela; Yu, Juan ; Levorato, Gabriele

More information

ENERGY CONSERVATION IN BUILDINGS AND COMMUNITY SYSTEMS. Technical Report. P. Michel & M. El Mankibi ENTPE DGCB LASH France

ENERGY CONSERVATION IN BUILDINGS AND COMMUNITY SYSTEMS. Technical Report. P. Michel & M. El Mankibi ENTPE DGCB LASH France IEA INTERNATIONAL ENERGY AGENCY ENERGY CONSERVATION IN BUILDINGS AND COMMUNITY SYSTEMS Technical Report ADVANCED CONTROL STRATEGY P. Michel & M. El Mankibi ENTPE DGCB LASH France pierre.michel@entpe.fr

More information

Optimum Return Period of an Overhead Line Considering Reliability, Security and Availability with Respect to Extreme Icing Events

Optimum Return Period of an Overhead Line Considering Reliability, Security and Availability with Respect to Extreme Icing Events IWAIS XIV, China, May 0 Optimum Return Period of an Overhead Line Considering Reliability, Security and Availability with Respect to Extreme Icing Events Asim Haldar, Ph.D, P.Eng. ahaldar@nalcorenergy.com

More information

ISSN Vol.07,Issue.16, November-2015, Pages:

ISSN Vol.07,Issue.16, November-2015, Pages: ISSN 2348 2370 Vol.07,Issue.16, November-2015, Pages:3181-3185 www.ijatir.org Improvement of Power Quality in A Grid Connected Induction Generator Based Wind Farm using Static Compensator K. YOSHMA 1,

More information

Controller Tuning Of A Biological Process Using Optimization Techniques

Controller Tuning Of A Biological Process Using Optimization Techniques International Journal of ChemTech Research CODEN( USA): IJCRGG ISSN : 0974-4290 Vol.4, No.4, pp 1417-1422, Oct-Dec 2012 Controller Tuning Of A Biological Process Using Optimization Techniques S.Srinivasan

More information

Energy and indoor temperature consequences of adaptive thermal comfort standards

Energy and indoor temperature consequences of adaptive thermal comfort standards Energy and indoor temperature consequences of adaptive thermal comfort standards L. Centnerova and J.L.M. Hensen Czech Technical University in Prague, Czech Republic (lada@tzb.fsv.cvut.cz) Technische Universiteit

More information

Evaluation methods for indoor environmental quality assessment according to EN15251

Evaluation methods for indoor environmental quality assessment according to EN15251 Summary of this article was published in the REHVA European HVAC Journal Vol 49, Issue 4 (August), 2012, pages 14-19, available at http://www.rehva.eu/en/rehva-european-hvac-journal. Evaluation methods

More information

Performance and Reliability Analysis of a Mobile Robot Using Cara Fault Tree

Performance and Reliability Analysis of a Mobile Robot Using Cara Fault Tree Performance and Reliability Analysis of a Mobile Robot Using Cara Fault Tree Manjish Adhikari [1], Vishal Mandal [2] U.G. Student, Department of Electronics and Communication Engineering, UCEK, Jawaharlal

More information

Thermal comfort under transient seasonal conditions of a bioclimatic building in Greece

Thermal comfort under transient seasonal conditions of a bioclimatic building in Greece 54 2nd PALENC Conference and 28th AIVC Conference on Building Low Energy Cooling and Thermal comfort under transient seasonal conditions of a bioclimatic building in Greece A. Androutsopoulos Centre for

More information

A MODEL-BASED METHOD FOR THE INTEGRATION OF NATURAL VENTILATION IN INDOOR CLIMATE SYSTEMS OPERATION

A MODEL-BASED METHOD FOR THE INTEGRATION OF NATURAL VENTILATION IN INDOOR CLIMATE SYSTEMS OPERATION Ninth International IBPSA Conference Montréal, Canada August 15-18, 2005 A MODEL-BASED METHOD FOR THE INTEGRATION OF NATURAL VENTILATION IN INDOOR CLIMATE SYSTEMS OPERATION Ardeshir Mahdavi and Claus Pröglhöf

More information

Tuning of Proportional Derivative Control Parameters Base Particle Swarm Optimization for Automatic Brake System on Small Scale Wind Turbine Prototype

Tuning of Proportional Derivative Control Parameters Base Particle Swarm Optimization for Automatic Brake System on Small Scale Wind Turbine Prototype Modern Applied Science; Vol. 9, No. 2; 2015 ISSN 1913-1844 E-ISSN 1913-1852 Published by Canadian Center of Science and Education Tuning of Proportional Derivative Control Parameters Base Particle Swarm

More information

Reliability Assessment of Standalone Hybrid Energy System for Remote Telecom Tower

Reliability Assessment of Standalone Hybrid Energy System for Remote Telecom Tower pp. 27 37 Reliability Assessment of Standalone Hybrid Energy System for Remote Telecom Tower Kabindra Awale 1 *, Nava Raj Karki 2 1,2 Department of Electrical Engineering, Pulchowk Campus, Institute of

More information

Load Frequency Control of Power Systems Using FLC and ANN Controllers

Load Frequency Control of Power Systems Using FLC and ANN Controllers Load Frequency Control of Power Systems Using FLC and ANN Controllers Mandru Harish Babu PG Scholar, Department of Electrical and Electronics Engineering, GITAM Institute of Technology, Rushikonda-530045,

More information

Company Introduction HEATERS. Joint Venture: TIME FOR A COOL CHANGE

Company Introduction HEATERS. Joint Venture: TIME FOR A COOL CHANGE Agenda 1. Introduction 2. Definition & Purpose of Control 3. Process Control Terms 4. Feedback & Feedforward Control 5. HEI Requirements for C&I 6. Turbine Back Pressure Control 7. When to Use 1-Speed,

More information

Transfer Function Modelled Isolated Hybrid Power Generation System

Transfer Function Modelled Isolated Hybrid Power Generation System The International Journal Of Engineering And Science (IJES) Volume 4 Issue 12 Pages PP -44-49 215 ISSN (e): 2319 1813 ISSN (p): 2319 185 Transfer Function Modelled Isolated Hybrid Power Generation System

More information

Available online at ScienceDirect. Procedia Engineering 169 (2016 )

Available online at   ScienceDirect. Procedia Engineering 169 (2016 ) Available online at www.sciencedirect.com ScienceDirect Procedia Engineering 169 (2016 ) 158 165 4th International Conference on Countermeasures to Urban Heat Island (UHI) 2016 Indoor Thermal Comfort Assessment

More information

COMPARATIVE SUMMER THERMAL AND COOLING LOAD PERFORMANCE OF NATURAL VENTILATION OF CAVITY ROOF UNDER THREE DIFFERENT CLIMATE ZONES

COMPARATIVE SUMMER THERMAL AND COOLING LOAD PERFORMANCE OF NATURAL VENTILATION OF CAVITY ROOF UNDER THREE DIFFERENT CLIMATE ZONES COMPARATIVE SUMMER THERMAL AND COOLING LOAD PERFORMANCE OF NATURAL VENTILATION OF CAVITY ROOF UNDER THREE DIFFERENT CLIMATE ZONES Lusi Susanti 1, Hiroshi Matsumoto 2, and Hiroshi Homma 2 1 Department of

More information

Potentials of the new design concepts of district heating and cooling toward integration with renewable energy sources

Potentials of the new design concepts of district heating and cooling toward integration with renewable energy sources Potentials of the new design concepts of district heating and cooling toward integration with renewable energy sources Julio Efrain Vaillant Rebollar 1, Arnold Janssens 1, Eline Himpe 1, 1 Ghent University,

More information

Study of Supervisory Control Implementation in A Small Scale Variable Speed Wind Turbine

Study of Supervisory Control Implementation in A Small Scale Variable Speed Wind Turbine Study of Supervisory Control Implementation in A Small Scale Variable Speed Wind Turbine Katherin Indriawati 1,*, Ali Musyafa 1, Bambang L. Widjiantoro 1, and Anna Milatul Ummah 1 1,2,,4 Institute of Technology

More information

Simulation Before Design? A New Software Program for Introductory Design Studios

Simulation Before Design? A New Software Program for Introductory Design Studios SIMULATION BEFORE DESIGN? 1 Simulation Before Design? A New Software Program for Introductory Design Studios TROY NOLAN PETERS California Polytechnic State University INTRODUCTION The 2010 Imperative states:

More information

Potential of passive design strategies using the free-running temperature

Potential of passive design strategies using the free-running temperature 850 2nd PALENC Conference and 28th AIVC Conference on Building Low Energy Cooling and Potential of passive design strategies using the free-running temperature L. Rosales, M. E. Hobaica Universidad Central

More information

RELIABILITY AND SECURITY ISSUES OF MODERN ELECTRIC POWER SYSTEMS WITH HIGH PENETRATION OF RENEWABLE ENERGY SOURCES

RELIABILITY AND SECURITY ISSUES OF MODERN ELECTRIC POWER SYSTEMS WITH HIGH PENETRATION OF RENEWABLE ENERGY SOURCES RELIABILITY AND SECURITY ISSUES OF MODERN ELECTRIC POWER SYSTEMS WITH HIGH PENETRATION OF RENEWABLE ENERGY SOURCES Evangelos Dialynas Professor in the National Technical University of Athens Greece dialynas@power.ece.ntua.gr

More information

Analysis of DG Influences on System Losses in Distribution Network

Analysis of DG Influences on System Losses in Distribution Network Vol. 8, No.5, (015, pp.141-15 http://dx.doi.org/10.1457/ijgdc.015.8.5.14 Analysis of Influences on System osses in Distribution Network Shipeng Du, Qianzhi Shao and Gang Wang Shenyang Institute of Engineering,

More information

CONTROL OF BIO REACTOR PROCESSES USING A NEW CDM PI P CONTROL STRATEGY

CONTROL OF BIO REACTOR PROCESSES USING A NEW CDM PI P CONTROL STRATEGY Journal of Engineering Science and Technology Vol. 5, No. 2 (2010) 213-222 School of Engineering, Taylor s University College CONTROL OF BIO REACTOR PROCESSES USING A NEW CDM PI P CONTROL STRATEGY S. SOMASUNDARAM

More information

2014 Grid of the Future Symposium

2014 Grid of the Future Symposium 21, rue d Artois, F-75008 PARIS CIGRE US National Committee http : //www.cigre.org 2014 Grid of the Future Symposium Concepts and Practice Using Stochastic Programs for Determining Reserve Requirements

More information

Dynamic simulation of buildings: Problems and solutions Università degli Studi di Trento

Dynamic simulation of buildings: Problems and solutions Università degli Studi di Trento Dynamic simulation of buildings: Problems and solutions Università degli Studi di Trento Paolo BAGGIO The basic problem To design (and operate) energy efficient buildings, accurate modeling tools are needed.

More information

INTERNATIONAL JOURNAL OF ELECTRICAL ENGINEERING & TECHNOLOGY (IJEET)

INTERNATIONAL JOURNAL OF ELECTRICAL ENGINEERING & TECHNOLOGY (IJEET) INTERNATIONAL JOURNAL OF ELECTRICAL ENGINEERING & TECHNOLOGY (IJEET) International Journal of Electrical Engineering and Technology (IJEET), ISSN 0976 6545(Print), ISSN 0976 6545(Print) ISSN 0976 6553(Online)

More information

Prediction of Thermal Comfort. mech14.weebly.com

Prediction of Thermal Comfort. mech14.weebly.com Prediction of Thermal Comfort Thermal Sensation Scale (Rohles & Nevins, 1974) Fanger s Thermal Comfort Model (1982) Steady state model, applicable for M 3 met and a large group of people Fanger s Thermal

More information

Comsol Multiphysics for building energy simulation (BES) using BESTEST criteria Jacobs, P.M.; van Schijndel, A.W.M.

Comsol Multiphysics for building energy simulation (BES) using BESTEST criteria Jacobs, P.M.; van Schijndel, A.W.M. Comsol Multiphysics for building energy simulation (BES) using criteria Jacobs, P.M.; van Schijndel, A.W.M. Published in: Comsol Conference 21, October 14-16, 21, Grenoble, France Published: 1/1/21 Document

More information

ISSN Vol.09,Issue.04, March-2017, Pages:

ISSN Vol.09,Issue.04, March-2017, Pages: ISSN 2348 2370 Vol.09,Issue.04, March-2017, Pages:0558-0562 www.ijatir.org Simulation Exhibition of 220KW Wind Power Generation with PMSG Using Matlab Simulink K. INDRANI 1, DR. K. RAVI 2, S. SENTHIL 3

More information

THERMAL COMFORT OF A COURTYARD IN GUANGZHOU IN SUMMER

THERMAL COMFORT OF A COURTYARD IN GUANGZHOU IN SUMMER THERMAL COMFORT OF A COURTYARD IN GUANGZHOU IN SUMMER L Jin 1,*, QL Meng 1 and LH Zhao 1 1 Building Environment Energy Laboratory, South China University of Technology, Guangzhou 510640, China Engineering

More information

Earth, Wind & Fire Natural Airconditioning [1] Research objectives and Methods

Earth, Wind & Fire Natural Airconditioning [1] Research objectives and Methods Earth, Wind & Fire Natural Airconditioning [1] Research objectives and Methods Ben Bronsema Department of Architectural Engineering, Faculty of Architecture, Delft University of Technology, The Netherlands

More information

RESEARCH OF SOLAR WATER HEATERS MODEL

RESEARCH OF SOLAR WATER HEATERS MODEL RESEARCH OF SOLAR WATER HEATERS MODEL Donatas Dervinis Šiauliai University, Šiauliai State College, Lithuania Dainius Balbonas Šiauliai University, Šiauliai State College, Lithuania Annotation This paper

More information

Implementation of Wireless Sensor Network for Real Time Monitoring and controlling of Agriculture Parameter

Implementation of Wireless Sensor Network for Real Time Monitoring and controlling of Agriculture Parameter Implementation of Wireless Sensor Network for Real Time Monitoring and controlling of Agriculture Parameter Nikhil S Naik 1, Prof.R.J.Shelke 2 P.G. Student, Department of Electronics Walchand Institute

More information

Research Co-design Activity

Research Co-design Activity Research Co-design Activity A. Purpose of Co-design: The ultimate goals of this co-design activity are to: Directly involve all members of a group to make decisions together that would affect their daily

More information

Bio-climatic Chart for Different Climatic Zones of Northeast India

Bio-climatic Chart for Different Climatic Zones of Northeast India Proceedings of 3 rd International Conference on Solar Radiation and Day Lighting (SOLARIS 27) February 7-9, 27, New Delhi, India Copyright 27, Anamaya Publishers, New Delhi, India Bio-climatic Chart for

More information

BUILDING DESIGN FOR HOT AND HUMID CLIMATES IMPLICATIONS ON THERMAL COMFORT AND ENERGY EFFICIENCY. Dr Mirek Piechowski 1, Adrian Rowe 1

BUILDING DESIGN FOR HOT AND HUMID CLIMATES IMPLICATIONS ON THERMAL COMFORT AND ENERGY EFFICIENCY. Dr Mirek Piechowski 1, Adrian Rowe 1 BUILDING DESIGN FOR HOT AND HUMID CLIMATES IMPLICATIONS ON THERMAL COMFORT AND ENERGY EFFICIENCY Dr Mirek Piechowski 1, Adrian Rowe 1 Meinhardt Building Science Group, Meinhardt Australia 1 Level 12, 501

More information

The human body as its own sensor for thermal comfort

The human body as its own sensor for thermal comfort The human body as its own sensor for thermal comfort Vesely, M.; Zeiler, W.; Boxem, G.; Vissers, D.R. Published in: Proceedings of the International Conference on Cleantech for Smart Cities and Buildings

More information

Thermal Environment evaluation in commercial kitchens

Thermal Environment evaluation in commercial kitchens Downloaded from orbit.dtu.dk on: Nov 11, 2018 Thermal Environment evaluation in commercial kitchens Simone, Angela; Olesen, Bjarne W. Publication date: 2013 Link back to DTU Orbit Citation (APA): Simone,

More information

REAL-TIME CONTROL OF OCCUPANTS THERMAL COMFORT IN BUILDINGS. Galway, Ireland

REAL-TIME CONTROL OF OCCUPANTS THERMAL COMFORT IN BUILDINGS. Galway, Ireland REAL-TIME CONTROL OF OCCUPANTS THERMAL COMFORT IN BUILDINGS Magdalena Hajdukiewicz 1,2,3, Padraig O Connor 1, Colin O Neill 1, Daniel Coakley 1,2,3, Marcus M. Keane 1,2,3, Eoghan Clifford 1,2,3 1 Department

More information

FLOTATION CONTROL & OPTIMISATION

FLOTATION CONTROL & OPTIMISATION FLOTATION CONTROL & OPTIMISATION A global leader in mineral and metallurgical innovation FLOATSTAR OVERVIEW Flotation is a complex process that is affected by a multitude of factors. These factors may

More information

Allowing for Thermal Comfort in Free-running Buildings in the New European Standard EN15251

Allowing for Thermal Comfort in Free-running Buildings in the New European Standard EN15251 Allowing for Thermal Comfort in Free-running Buildings in the New European Standard EN15251 Fergus Nicol, Low Energy Architecture Research Unit (LEARN), School of Architecture, London Metropolitan University,

More information

Power Balancing Control of Hybrid Energy Sources Using Storage System

Power Balancing Control of Hybrid Energy Sources Using Storage System www.ijaceeonline.com ISSN: 2456-3935 Power Balancing Control of Hybrid Energy Sources Using Storage System Shanmugaraj Subramaniam PG Scholar, Department of Electrical and Electronics Engineering, Anna

More information

Mahangade Sayali, Mahangade Sejal, International Journal of Advance Research, Ideas and Innovations in Technology.

Mahangade Sayali, Mahangade Sejal, International Journal of Advance Research, Ideas and Innovations in Technology. ISSN: 2454-132X Impact factor: 4.295 (Volume3, Issue1) Available online at: www.ijariit.com Hybrid Wind-Pv System Connected To Grid Used For Automatic Irrigation Sayali Mahangade Electrical Power System,

More information

Performance evaluation of hybrid solar parabolic trough concentrator systems in Hong Kong

Performance evaluation of hybrid solar parabolic trough concentrator systems in Hong Kong Performance evaluation of hybrid solar parabolic trough concentrator systems in Hong Kong Huey Pang* 1, Edward W.C. Lo 1, TS Chung 1 and Josie Close 2 * 1 Department of Electrical Engineering, The Hong

More information

Performance Improvement on Water-cooled Cold-Plate

Performance Improvement on Water-cooled Cold-Plate Proceedings of the 4th WSEAS International Conference on Heat and Mass Transfer, Gold Coast, Queensland, Australia, January 17-19, 2007 104 Performance Improvement on Water-cooled Cold-Plate SHYAN-FU CHOU,

More information

Modelling and Fuzzy Logic Control of the Pitch of a Wind Turbine

Modelling and Fuzzy Logic Control of the Pitch of a Wind Turbine Modelling and Fuzzy Logic Control of the Pitch of a Wind Turbine Silpa Baburajan 1, Dr. Abdulla Ismail 2 1Graduate Student, Dept. of Electrical Engineering, Rochester Institute of Technology, Dubai, UAE

More information

Wind Turbine Power Limitation using Power Loop: Comparison between Proportional-Integral and Pole Placement Method

Wind Turbine Power Limitation using Power Loop: Comparison between Proportional-Integral and Pole Placement Method International Journal of Education and Research Vol. 1 No.11 November 2013 Wind Turbine Power Limitation using Power Loop: Comparison between Proportional-Integral and Pole Placement Method 1* NorzanahRosmin,

More information

Assessment of Marginal and Long-term Surplus Power in Orissa A Case Study

Assessment of Marginal and Long-term Surplus Power in Orissa A Case Study 1 Chandra 16th NATIONAL POWER SYSTEMS CONFERENCE, 15th-17th DECEMBER, 2010 103 Assessment of Marginal and Long-term in Orissa A Case Study Chandra Shekhar Reddy Atla, A.C. Mallik, Dr. Balaraman K and Dr.

More information

Control System Design for HVAC System and Lighting System using PID and MPC Controller

Control System Design for HVAC System and Lighting System using PID and MPC Controller Journal of Engineering and Science Research 1 (2): 66-72, e-issn: RMP Publications, DOI: Control System Design for HVAC System and Lighting System using PID and MPC Controller Nur Azizah Amir and Harutoshi

More information

Modelling Analysis of Thermal Performance of Internal Shading Devices for a Commercial Atrium Building in Tropical Climates

Modelling Analysis of Thermal Performance of Internal Shading Devices for a Commercial Atrium Building in Tropical Climates Modelling Analysis of Thermal Performance of Internal Shading Devices for a Commercial Atrium Building in Tropical Climates Kittitach Pichatwatana, and Fan Wang Abstract This paper examines the TAS computer

More information

SIMALATION AND CONTROL SYSTEM DESIGN OF THERMAL CONDITIONS IN BUILDING USING ACTIVE AND PASSIVE RESOURCES

SIMALATION AND CONTROL SYSTEM DESIGN OF THERMAL CONDITIONS IN BUILDING USING ACTIVE AND PASSIVE RESOURCES SIMALATION AND CONTROL SYSTEM DESIGN OF THERMAL CONDITIONS IN BUILDING USING ACTIVE AND PASSIVE RESOURCES Borut Zupančič, Igor Škrjanc, Aleš Krainer 2, Maja Atanasijević-Kunc Faculty of Electrical Engineering,

More information

Impact of Location of Distributed Generation On Reliability of Distribution System

Impact of Location of Distributed Generation On Reliability of Distribution System International Journal of Engineering Research and Development e-issn: 2278-067X, p-issn: 2278-800X, www.ijerd.com Volume 9, Issue 5 (December 2013), PP. 01-08 Impact of of Distributed Generation On Reliability

More information

Optimization of Heat Gain by Air Exchange through the Window of Cold Storage Using S/N Ratio and ANOVA Analysis

Optimization of Heat Gain by Air Exchange through the Window of Cold Storage Using S/N Ratio and ANOVA Analysis RESEARCH ARTICLE OPEN ACCESS Optimization of Heat Gain by Air Exchange through the of Cold Storage Using S/N Ratio and ANOVA Analysis Dr. Nimai Mukhopadhyay *, Aniket Deb Roy ** ( * Assistant professor,

More information

Thermal Modeling for Buildings. Karla Vega University of California, Berkeley Fall 2009

Thermal Modeling for Buildings. Karla Vega University of California, Berkeley Fall 2009 Thermal Modeling for Buildings Karla Vega University of California, Berkeley Fall 2009 Overview Motivation Problem Statement Related Work Heat Transfer Basics Proposed Approach Model SimMechanics Matlab

More information

Making APC Perform 2006 ExperTune, Inc. George Buckbee, P.E. ExperTune, Inc.

Making APC Perform 2006 ExperTune, Inc. George Buckbee, P.E. ExperTune, Inc. Making APC Perform 2006 ExperTune, Inc. George Buckbee, P.E. ExperTune, Inc. Summary Advanced Process Control (APC) promises to deliver optimal plant performance by optimizing setpoints, decoupling interactions,

More information

Voltage Stability Assessment of a Power System Incorporating Wind Turbine Using Power System Analysis Toolbox (Psat)

Voltage Stability Assessment of a Power System Incorporating Wind Turbine Using Power System Analysis Toolbox (Psat) IOSR Journal of Electrical and Electronics Engineering (IOSR-JEEE) e-issn: 2278-1676,p-ISSN: 2320-3331, Volume 9, Issue 2 Ver. VI (Mar Apr. 2014), PP 19-25 Voltage Stability Assessment of a Power System


Comparison of Different Controllers for Equitable Water Supply in Water Networks. G R Anjana, M S Mohan Kumar and Bharadwaj Amrutur, Department of Civil Engineering, IFCWS,


Innovative Operation Strategies for Improving Energy Saving in a Cooling Tower System 58 China Steel Technical Report, Innovative No. 28, Operation pp.58-62, Strategies (2015) for Improving Energy Saving in a Cooling Tower System Innovative Operation Strategies for Improving Energy Saving


PID Controller for Longitudinal Parameter Control of Automatic Guided Vehicle. International Research Journal of Engineering and Technology (IRJET), e-ISSN: 2395-0056, p-ISSN: 2395-0072, Volume: 3, Issue: 6, June 2016, www.irjet.net.


Microgrid Energy Management System Using Fuzzy Logic Control. Lydie Roiné, Kambiz Therani, Yashar Sahraei Manjili, Mo Jamshidi. Department of Electrical Engineering, Esigelec, Rouen, France; Irseem,


JOURNAL OF APPLIED SCIENCES RESEARCH. Copyright 2015, American-Eurasian Network for Scientific Information publisher. ISSN: 1819-544X, EISSN: 1816-157X. Journal home page: http://www.aensiweb.com/jasr


HUMAN-BEHAVIOR ORIENTED CONTROL STRATEGIES FOR NATURAL VENTILATION IN OFFICE BUILDINGS. Haojie Wang, Qingyan Chen. School of Mechanical Engineering, Purdue University, West Lafayette, IN 47907.


ENERGIZED CANOPY. Compactness ratio: 1.2 (thermal envelope / inhabitable area); openings ratio: 26%. Smaller is better (1.9 = bad, < 0.8 = good). A good compactness ratio means less material and energy are needed to construct the building


Simulation and Optimization of a Solar Absorption Cooling System Using Evacuated Tube Collectors. Jean Philippe Praene, Alain Bastide, Franck Lucas, François Garde and Harry Boyer, Université de la Réunion,


A NOVEL OF PRIMARY DISTRIBUTION NETWORKS WITH MULTIPLE DISTRIBUTED GENERATOR PLACEMENT FOR LOSS REDUCTION. Terli Jahnavi, M.Tech (PE), CRR College of Engineering, Eluru; N Pavan Kumar, Assistant Professor


Bulk Power System Integration of Variable Generation - Program 173

Bulk Power System Integration of Variable Generation - Program 173 Program Description Program Overview Environmentally driven regulations such as state-mandated renewable energy standards and federal air and water standards, along with improved economic viability for


Simulation Analytics Simulation Analytics Powerful Techniques for Generating Additional Insights Mark Peco, CBIP mark.peco@gmail.com Objectives Basic capabilities of computer simulation Categories of simulation techniques


COMPARISON OF THE STANDARDIZED REQUIREMENTS FOR INDOOR CLIMATE IN OFFICE BUILDINGS Kazderko Mikhail COMPARISON OF THE STANDARDIZED REQUIREMENTS FOR INDOOR CLIMATE IN OFFICE BUILDINGS Bachelor s Thesis Building Services Engineering December 2012 DESCRIPTION Date of the bachelor's thesis


DEVELOPMENT OF A SOLAR COLLECTOR/SOLAR WATER HEATING SYSTEM TEST CENTER IN IRAN. Farzad Jafarkazemi, Hossein Abdi, Arash Asadzadeh Zargar and Abdollah Hassani. Solar Energy Research Group, Islamic


Shifting Comfort Zone for Hot-Humid Environments PLEA6 - The rd Conference on Passive and Low Energy Architecture, Geneva, Switzerland, 6-8 September 6 Shifting Comfort Zone for Hot-Humid Environments Kitchai Jitkhajornwanich Faculty of Architecture,


International Journal of Mechanical Civil and Control Engineering, Vol. 1, Issue 3, June 2015. ISSN (Online):

International Journal of Mechanical Civil and Control Engineering. Vol. 1, Issue. 3, June 2015 ISSN (Online): Evaluation of efficiency and collector time constant of a solar flat plate collector at various intensities of light and constant wind speed by using forced mode circulation of water Abhijit Devaraj 1


Research of Load Leveling Strategy for Electric Arc Furnace in Iron and Steel Enterprises Yuanchao Wang1, a*, Zongxi Xie2, b and Zhihan Yang1, c International Conference on Mechanics, Materials and Structural Engineering (ICMMSE 2016) Research of Load Leveling Strategy for Electric Arc Furnace in Iron and Steel Enterprises Yuanchao Wang1, a*, Zongxi


Thermal Comfort Zone for Thai People Engineering, 013, 5, 55-59 http://dx.doi.org/10.436/eng.013.5506 Published Online May 013 (http://www.scirp.org/journal/eng) Thermal Comfort Zone for Thai People Juntakan Taweekun *, Ar-U-Wat Tantiwichien


ISO 7730 INTERNATIONAL STANDARD INTERNATIONAL STANDARD ISO 7730 Third edition 2005-11-15 Ergonomics of the thermal environment Analytical determination and interpretation of thermal comfort using calculation of the PMV and PPD indices


COST-EFFICIENT ENVIRONMENTALLY-FRIENDLY CONTROL OF MICRO- GRIDS USING INTELLIGENT DECISION-MAKING FOR STORAGE ENERGY MANAGEMENT Intelligent Automation and Soft Computing, Vol. 1X, No. X, pp. 1-26, 20XX Copyright 20XX, TSI Press Printed in the USA. All rights reserved COST-EFFICIENT ENVIRONMENTALLY-FRIENDLY CONTROL OF MICRO- GRIDS


PRESENTING A METHOD FOR THE ANALYSIS OF WIND POWER PLANT IMPACT ON RELIABILITY INDEXES AS WELL AS FORECASTING THEM BASED ON WIND REPETITION PATTERNS. Salman Shensa, Sareh Sanei, Hadi Zayandehroodi


Garg Vishakha, International Journal of Advance Research, Ideas and Innovations in Technology.

Garg Vishakha, International Journal of Advance Research, Ideas and Innovations in Technology. ISSN: 2454-132X Impact factor: 4.295 (Volume3, Issue3) Available online at www.ijariit.com Effect of Environmental Parameters on Solar PV Performance with MPPT Techniques on Induction Motor Driven Water


Navigating an Auto Guided Vehicle using Rotary Encoders and Proportional Controller International Journal of Integrated Engineering, Vol. 9 No. 2 (2017) p. 71-77 Navigating an Auto Guided Vehicle using Rotary Encoders and Proportional Controller Sung How Lee 1, Kim Seng Chia 1,* 1 Faculty


Designing Air-Distribution Systems To Maximize Comfort. By David A. John, P.E., Member ASHRAE. An air-distribution system that provides occupant thermal comfort can be a complicated system to predict and


Evaluation of the performance of Aggregated Demand Response by the use of Load and Communication Technologies Models Institute for Energy Engineering (IIE-UPV) & Research Network REDYD-2050 Curso de formación en Mercados Eléctricos Evaluation of the performance of Aggregated Demand Response by the use of Load and Communication


Minimizing Makespan for Machine Scheduling and Worker Assignment Problem in Identical Parallel Machine Models Using GA , June 30 - July 2, 2010, London, U.K. Minimizing Makespan for Machine Scheduling and Worker Assignment Problem in Identical Parallel Machine Models Using GA Imran Ali Chaudhry, Sultan Mahmood and Riaz


Energy Consumption Measurement of Energy-saving Buildings and Analysis of Simulation Example Sensors & Transducers 203 by IFSA http://www.sensorsportal.com Energy Consumption Measurement of Energy-saving Buildings and Analysis of Simulation Example Ying Li, 2 Tiegang Kang Zhejiang College of Construction,


Numerical Study on the Effect of Insulation Materials on the Single Zone building Performance

Numerical Study on the Effect of Insulation Materials on the Single Zone building Performance International Journal of Current Engineering and Technology E-ISSN 2277 4106, P-ISSN 2347 5161 2016 INPRESSCO, All Rights Reserved Available at http://inpressco.com/category/ijcet Research Article Numerical


EVAPORATIVE COOLING FOR THERMAL COMFORT IN BUILDINGS. By Dev Anand Hindoliya. Submitted in fulfillment of the requirements of the degree of Doctor of Philosophy to the Centre for Energy Studies, Indian Institute


Ubiquitous Sensor Network System

Ubiquitous Sensor Network System TOMIOKA Katsumi, KONDO Kenji Abstract A ubiquitous sensor network is a means for realizing the collection and utilization of real-time information any time and anywhere. Features include easy implementation


Understanding Extrusion Chris Rauwendaal Understanding Extrusion 2nd Edition Sample Chapter 2: Instrumentation and Control ISBNs 978-1-56990-453-4 1-56990-453-7 HANSER Hanser Publishers, Munich Hanser Publications, Cincinnati


Assessing thermal comfort of dwellings in summer using EnergyPlus. Irina Bliuc, Rodica Rotberg and Laura Dumitrescu. Gh. Asachi Technical University of Iasi, Romania. Corresponding email: irina_bliuc@yahoo.com


ENERGY EFFICIENCY IN BUILDINGS AND COMMUNITIES. Stefano Paolo Corgnati, Department of Energy, Politecnico di Torino. E-mail: stefano.corgnati@polito.it. Interactions between users and


Thermal comfort evaluation of natural ventilation mode: case study of a high-rise residential building

Thermal comfort evaluation of natural ventilation mode: case study of a high-rise residential building J. Zuo, L. Daniel, V. Soebarto (eds.), Fifty years later: Revisiting the role of architectural science in design and practice: 50 th International Conference of the Architectural Science Association 2016,


Design and Simulink of Intelligent Solar Energy Improvement with PV Module

Design and Simulink of Intelligent Solar Energy Improvement with PV Module International Journal of Information & Computation Technology. ISSN 0974-2239 Volume 4, Number 6 (2014), pp. 619-628 International Research Publications House http://www. irphouse.com Design and Simulink


An Assessment of Thermal Comfort in Hot and Dry Season (A Case Study of 4 Theaters at Bayero University Kano) International Journal of Multidisciplinary and Current Research Research Article ISSN: - Available at: http://ijmcr.com An Assessment of Thermal Comfort in Hot and Dry Season (A Case Study of Theaters


A Reliability Model of Large Wind Farms for Power System Adequacy Studies. Ahmad Salehi Dobakhshari, Student Member, IEEE, and. IEEE Transactions on Energy Conversion, Vol. 24, No. 3, September 2009, p. 792. Wind has been shown to be the fastest growing source
