Choosing a Rational Sample Size for the Underwater Inspection of Marine Structures

Valery M. Buslov,* Ronald E. Heffron,** and Armen Martirossyan***

*Ph.D., P.E., Project Manager, Han-Padron Associates, LLP, Eleven Penn Plaza, Suite 925, New York, NY, vbuslov@han-padron.com
**P.E., Regional Manager/Partner, Han-Padron Associates, LLP, 100 Oceangate, Suite 650, Long Beach, CA, rheffron@han-padron.com
***Ph.D., Structural Engineer, Han-Padron Associates, LLP, 100 Oceangate, Suite 650, Long Beach, CA, armenm@han-padron.com

Abstract

Underwater condition surveys have become common practice for marine structures as owners strive to ensure safety and protect their infrastructure investment. Due to limited visibility and the need to remove marine growth, it is impractical to inspect 100 percent of each component underwater. For this reason, it has long been common practice on pile-supported structures to remove marine growth in three bands (waterline, mudline, and midpoint) on a representative sample of piles. A sample size of 10 percent is commonly used, although other guidelines exist as well. The problem with this current industry practice is that the sample size has no reliable statistical basis. This has become a particular issue with large structures supported by several thousand piles, where the current industry practice may result in far more piles being inspected than are necessary to reliably assess the condition of the structure. The cost impact in such cases can be significant.

In order to improve the current industry practice, a methodology has been developed for the selection of sample size based on a statistical approach. It is recognized that any such method could become complex and onerous if it attempted to distinguish between levels of deterioration and the possible structural consequences for various structure types and construction materials. For this reason, a simplified approach was chosen which focuses on reportable defects. Using this method, a pile either exhibits a reportable defect or it does not, with corresponding numerical representations of 1 and 0, respectively. A reportable defect is defined as any defect significant enough to justify making a note of it in the inspection notes. Using this simplified approach, two statistical parameters must be established in order to determine an appropriate sample size:

Mean Value Range shows the accuracy with which the calculated mean from the inspection of the sample describes the condition of the structure. Specific values are recommended in the paper.

Confidence Interval shows the degree of reliability for the range of mean values calculated from the inspection of the sample. Again, specific values are recommended in the paper.

The procedure may involve a large number of iterations, and it has therefore been coded into a computer program. The population of piles may be varied by the user so that each row or certain areas of the structure may be treated separately to determine the appropriate sample size. A step-by-step approach is outlined to facilitate ease of use of the methodology. Several typical examples, including graphs illustrating the interdependence of the inspection results and the inspection requirements, are also included. The result is a methodology that will improve the cost-effectiveness of underwater inspection on large structures without sacrificing reliability.

State-of-the-Art Approach

The most common and conventionally used procedure for the inspection of marine structures in general, and for underwater inspections in particular, combines a general visual survey with a more detailed inspection of a sample of components. A general visual inspection (typically referred to as a Level I effort) of 100 percent of components is always required in order to identify obvious signs of major and/or typical damage. In addition, a more detailed inspection involving removal of marine growth, testing, and measurements (typically referred to as a Level II effort) is usually necessary in order to determine the causes of damage and its typical pattern and magnitude. The required measurements, testing, or sampling constitute an effort much greater than that required for a strictly visual survey, and consequently they are typically performed only on a limited number of components. Since the degree of reliability of the resulting conclusions depends on the number of observations (the size of the sample), the scope of work should define the minimum number of elements to be inspected in detail. Frequently, these requirements are expressed as a percentage of the total number of elements of a certain type in the structure. Values of 5 percent and 10 percent are most common. However, other criteria are used as well, for instance at least one pile in each bent, or every 10th sheet pile, or a mean of three adjacent rebars, etc.

Some of the fallacies of this approach are obvious without any statistical analysis. The major factor involved is the total number of components within the section of the structure with uniform construction. Ten percent of the 200 piles supporting a concrete wharf is 20, which is less than the minimum usually considered for statistical evaluation. Ten percent of the piles on a pier supported by 3,000 timber piles is 300, which is well in excess of what is required for evaluation of any imaginable damage parameter. However, the main problem with these mandatory requirements is that they have no reliable basis. In some cases an insufficient number of detailed observations are made, while in others substantial funds are wasted on unnecessary survey work. A very interesting and convincing example is described in ref. [1]. The authors performed a statistical evaluation of timber piles damaged by Limnoria using a 100 percent sample size for the survey. The evaluated parameter was pile diameter, and the survey covered several thousand piles. When accuracy requirements for the survey were established, the authors found that at the set reliability level, the surveyed parameter could have been established through a sample survey of only a handful of piles (a maximum of 24).
Statistical Approach

Statistical science (at the textbook level) contains all the information required to determine the minimum number of elements to be surveyed. However, whenever the subject of reliability is brought up, the immediate question is: which parameters constitute the required level of reliability, and what should their numerical values be? Statistically, it does not make any sense to establish specific requirements for the number of elements to be inspected without setting reliability goals for the parameters to be determined during the course of the inspection. It is quite possible that one of the major reasons a statistical approach is not currently used for underwater surveys of marine structures is a lack of readiness to properly formulate the accuracy requirements.

The following simple example illustrates the problem: The inspection has the goal of establishing the percentage of steel H-piles affected by complete perforation of the webs underwater. How many piles are to be included in the inspection sample? In order to answer this question using a statistical approach, a minimum of two decisions need to be made (specific values are for example purposes only):

The percentage of piles with web perforation should be established within a margin of ±10 percent (because this is the contingency established in the cost estimate of the planned repair project), and

The confidence level for the findings shall be 95 percent (because this is what is required by the structural engineer who will design the repairs).

Inspectors, owners, and structural engineers have heretofore been ill-equipped to specify survey requirements in this format.

Accuracy Parameters

During the preparation of this paper, the authors distributed a questionnaire among their colleagues involved in the planning and execution of underwater surveys, asking them to nominate an acceptable accuracy range and confidence level for several types of damaged components most frequently encountered in underwater inspections. It is obvious that a large number of factors need to be considered (for instance, a geometrical parameter of the cross section which enters the structural capacity equation to the fourth power should be determined much more accurately than one used as a simple multiplier). However, it was also important to learn the order of magnitude of these requirements based on the general understanding of the problem by professionals involved in the field. It is interesting to note that most of the respondents considered a 95 percent confidence level to be both sufficient and the minimum required. With regard to the accuracy range, a wider variety of answers was recorded. It appears, though, that a 10-15 percent tolerance for the findings is considered acceptable when the surveyed parameters are to be used for the evaluation of structural capacity. For parameters used in the evaluation of durability and service life, a 25 percent accuracy appears to be acceptable. In ref. [1] mentioned above, a 10 percent accuracy range was considered necessary for evaluation of the measured remaining timber pile diameter, with a 90 percent confidence level for this range. One of the most interesting and practically important lines of reasoning expressed by the respondents was that there should be a direct link between the required accuracy of the survey and the contingency level for the associated corrective/repair work.

Field Data Limitations

Dispersion of Survey Data. Unfortunately, having a good idea of the requirements for the accuracy of the data is insufficient to establish the number of tests. This number also depends on the dispersion of the data itself.

When the data clearly shows a definite trend (a narrow distribution with small values of standard deviation), a smaller sample may be required to satisfy the tolerance requirements set as a reliability goal. The distribution parameters of the data to be collected constitute a required part of the input for the calculation of the minimum number of observations; therefore, they must be established. There are two options for determining the necessary values: perform an analysis of similar data from previous inspections (preferably on the same structure), or conduct a sample (pilot) investigation in the field. Both options reflect the fact that determination of the minimum number of observations to satisfy the set accuracy requirements is an iterative process. It should be recognized that when no data on the statistical parameters of the field data is available, the scope of work for the survey project (with set accuracy requirements) cannot be determined prior to the beginning of the fieldwork. This may create fairly obvious administrative/contractual problems, and some innovative approach will be necessary to overcome them. Possible solutions include a flexible unit-cost pricing strategy or a separate agreement for the up-front preliminary survey and statistical evaluation.

Type of Distribution. Any statistical analysis which involves the determination of the required minimum number of tests is possible only if the type of data distribution is established. Experience with surveys on marine structures shows that a normal distribution may be assumed in most cases, provided that: the number of components is sufficiently large (hundreds); and the type of damage surveyed is not associated with accidental (i.e., strictly local) events, such as damage by ships, impact overloading on decks, etc.

Grouping by Service Conditions. Both the survey to be conducted and the sample (pilot) inspection to establish the distribution parameters should be performed for groups of elements with similar service conditions. For example, the concrete piles in the back rows of a marginal wharf usually have much more extensive corrosion damage than the front-row piles (because of the reflected wave splash generated by the stone slope and retaining wall). These piles also have a higher probability of structural damage due to lateral overloading (because of their shorter unsupported length). The outside piles of steel sheet pile pairs typically have more extensive corrosion damage than the inside sheets. In these and other similar cases, the analysis should be performed separately for each group of components which may be considered to have distinctly different service conditions.

Reportable Defects Concept

The general statistical approach for determining the minimum number of observations to satisfy the set accuracy requirements can be applied to any inspection task, including reliable determination of quantitative parameters such as the remaining thickness of steel components, the remaining reinforcing bar diameter in concrete structures affected by corrosion, the diameter of timber piles affected by borers, etc. However, when quantitative parameters are surveyed, the procedure for statistical evaluation involves a series of additional factors such as the accuracy of the measuring tools, individual inspectors' errors, the mode and format in which the parameter is used in the capacity formula, and particularly the pattern of the field data distribution.
One possible approach to determining a structurally representative sample size could be to base the sample size on the type of damage observed. An example would be a concrete structure with components exhibiting various stages of corrosion distress, chemical attack, freeze-thaw scaling, etc. Establishing a sample size that addresses the required reliability for each type of damage would become quite complex and impractical. For this reason, the methodology presented in this paper focuses on reportable defects.

In underwater inspections of most structures, with a relatively short time allocated for the work and limited visibility, the most frequent and efficient mode of investigation is one in which the main task is to determine the quantity of elements affected by a certain type of defect (broken piles, piles affected by corrosion or infested by borers, sheet piles with perforations, broken or dislocated armor units on breakwaters, cracked and/or scoured supports, etc.). These reportable defects can be recorded using a simple scale, which allows the condition to be described numerically. For this application, a scale of "damaged" and "not damaged", with numerical representations of 1 and 0, respectively, is used. As mentioned above, two additional necessary conditions are applied: Whenever significant differences exist in the structural configuration, geometry, loading, or exposure conditions, the structure should be conditionally divided into separate groups (populations) of uniform components. For a specific defect, a normal distribution of the sample mean throughout the population is assumed within each group of uniform components.

Procedure for Determining Sample Size

The procedure for determining the sample size which will satisfy the set accuracy parameters is summarized in the flow chart shown in Figure 1. Several basic statistical definitions used in the derivation are presented below:

Sample: x = (x_1, x_2, x_3, ..., x_n)   (1)

Sample Mean: μ = (1/n) Σ x_i   (2)

Sample Standard Deviation: σ = [1/(n−1) Σ (x_i − μ)²]^(1/2)   (3)

Thereafter, three main parameters are used as a basis for evaluating whether the data from previous inspections, or from a pilot inspection conducted prior to the main survey, describes the condition of the whole facility with sufficient accuracy. They are briefly explained below:

Confidence Interval is employed for the evaluation of the mean value obtained from previous inspection data or from a small-scale random pilot inspection conducted prior to the main inspection project. With this parameter, we need to achieve a probability of 1−α (in other words, a confidence level of 1−α) that the mean value calculated from the inspection of the sample is located within the range given by the equation below:

μ − t σ/√n ≤ μ_pop ≤ μ + t σ/√n   (4)

where μ_pop is the true mean for the population, σ is the sample standard deviation, n is the sample size, and t is the t-distribution coefficient (see Table 1).
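As a minimal illustration of Equations 1 through 4, the Python sketch below computes the sample mean, sample standard deviation, and the resulting half-width of the confidence interval for a hypothetical 0/1 damage record. The function name, the example data, and the use of scipy for the t-coefficient are assumptions made here for illustration; this is not the authors' program.

```python
import math
from scipy import stats

def mean_value_range(sample, confidence=0.95):
    """Return the sample mean, standard deviation, and the half-width of the
    confidence interval for the population mean (Equations 2, 3, and 4)."""
    n = len(sample)
    mean = sum(sample) / n                                               # Equation 2
    std_dev = math.sqrt(sum((x - mean) ** 2 for x in sample) / (n - 1))  # Equation 3
    t = stats.t.ppf(1 - (1 - confidence) / 2, df=n - 1)                  # t-coefficient (Table 1)
    return mean, std_dev, t * std_dev / math.sqrt(n)                     # Equation 4 half-width

# Hypothetical pilot record: 1 = reportable defect observed, 0 = none observed.
pilot = [1, 0, 0, 1, 0, 1, 1, 0, 0, 1]
mu, sigma, half_width = mean_value_range(pilot)
print(f"mean = {mu:.2f}, std dev = {sigma:.2f}, "
      f"mean value range = ±{half_width:.2f} (±{100 * half_width / mu:.0f}% of the mean)")
```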

[Figure 1 is a flow chart of the procedure: define the sample population (entire structure or a portion of it); if inspection data is available, use it, otherwise conduct a pilot inspection of 30 randomly chosen components; define the calculation parameters (damaged units = 1, undamaged units = 0, accuracy level in percent, confidence level in percent); input the calculation parameters; calculate the sample mean, standard deviation, mean value range, and required number of inspections; and iterate the number of inspections until the mean value range satisfies the set accuracy criteria.]

Figure 1. Procedure for Determining Sample Size

For example, to obtain a 95 percent confidence interval, α = 0.05 should be used, and for a confidence level of 90 percent, α = 0.10. Values for the coefficient t are determined from t-distribution tables (see Table 1).
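The loop described by Figure 1 can be sketched compactly in code. The following Python fragment is one possible reading of that flow chart, assuming a pilot inspection summarized as counts of damaged and undamaged components; the function name, the default values, and the scipy dependency are illustrative assumptions rather than the authors' actual program.

```python
import math
from scipy import stats

def required_inspections(damaged, undamaged, accuracy_pct, confidence=0.95, n_max=10_000):
    """Grow the sample size n until the mean value range (Equation 4,
    expressed as a percentage of the mean) meets the accuracy target."""
    n0 = damaged + undamaged
    mean = damaged / n0                                       # Equation 2 on the pilot data
    std_dev = math.sqrt((damaged * (1 - mean) ** 2
                         + undamaged * mean ** 2) / (n0 - 1))  # Equation 3 on the pilot data
    alpha = 1 - confidence
    for n in range(2, n_max + 1):
        t = stats.t.ppf(1 - alpha / 2, df=n - 1)              # t-coefficient (Table 1)
        range_pct = 100 * t * std_dev / (math.sqrt(n) * mean)  # Equation 4 half-width, % of mean
        if range_pct <= accuracy_pct:
            return n
    raise ValueError("accuracy target not reached within n_max inspections")

# Hypothetical pilot of 30 components: 9 with a reportable defect, 21 without.
# Target: mean value range within ±25 percent at a 95 percent confidence level.
print(required_inspections(damaged=9, undamaged=21, accuracy_pct=25))
```

For the illustrative pilot shown, the loop simply grows n until the ±25 percent target is met at 95 percent confidence.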

Table 1. Quantiles of the t-distribution for p = 1 − α/2 and degrees of freedom ν = n − 1. (Source: Jay L. Devore and Nicholas R. Farnum, Applied Statistics for Engineers and Scientists.)

Mean Value Range (accuracy level) shows the accuracy with which the calculated mean describes the condition of the population. As shown in Equation 4, it depends on several variables and is expressed as the mean value ±X%.
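Where a printed table is not at hand, t-coefficients of the kind listed in Table 1 can be generated numerically. The short sketch below does so with scipy, which is an assumption of this illustration rather than a tool referenced by the paper.

```python
# Reproduce t-coefficients of the kind listed in Table 1
# (p = 1 - alpha/2, degrees of freedom nu = n - 1).
from scipy import stats

for nu in (5, 9, 19, 29, 100):
    t_95 = stats.t.ppf(0.975, df=nu)   # for a 95 percent confidence level
    t_90 = stats.t.ppf(0.950, df=nu)   # for a 90 percent confidence level
    print(f"nu = {nu:>3}:  t(0.975) = {t_95:.3f}   t(0.950) = {t_90:.3f}")
```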

Number of Inspections, n, is the parameter which is the main subject of this discussion. It should be determined from Equation 4 prior to beginning the main inspection project, with the other variables calculated or established as follows: Confidence Level and Mean Value Range, from the considerations discussed above; Mean and Standard Deviation, from the available (previous) survey data or from a random small-scale pilot inspection. If the second option is used, a pilot of 30 components is recommended for the initial inspection. It should also be pointed out that the procedure might involve a large number of iterations to arrive at a solution; however, it can be simplified with the introduction of a small computer program. As input, the program needs only the number of damaged and not damaged piles, available from previously performed inspections or from the random small-scale pilot inspection mentioned earlier, as well as the accuracy criteria set for the specific inspection project. Alternatively, a set of charts can be created. Using these charts, the inspector can graphically pick the number of inspection locations that will satisfy the set confidence level and mean value range criteria. An illustration of creating such charts is demonstrated in the following example.

In each specific case, the engineering personnel responsible for the planning and execution of the survey should make their own decisions on the acceptable accuracy range and the minimum confidence level, depending on the type of damage and the goal of the inspection. However, the following general guidelines are recommended for the underwater inspection of port structures: Mean Value Range: a) used for structural evaluation, ±15 percent; b) used for assessment of durability problems, preventive maintenance, and service life predictions, ±25 percent. Confidence Level: 95 percent.

Numerical Example

The following example clarifies the methodology for determining the sample size for the inspection. The example does not directly follow the methodology, but rather illustrates different scenarios to make the methodology clearer. The number of randomly inspected piles is 10. The results of the inspection for a specific defect are: Damaged: 7 (denoted 1); Not damaged: 3 (denoted 0).

Mean Value: μ = (1/10) Σ x_i = (7×1 + 3×0)/10 = 0.7

Standard Deviation (from Equation 3): σ ≈ 0.26

For a 95% confidence level, α = 0.05, which means that the t-distribution coefficient t₉(0.975) = 2.262 should be used in Equation 4:

0.7 − t₉(0.975) σ/√10 ≤ μ_pop ≤ 0.7 + t₉(0.975) σ/√10

After numerical simplification, the following mean value range was obtained: 0.51 ≤ μ_pop ≤ 0.89. Expressed as a percentage of the mean, the variation is (0.51 − 0.7)/0.7 × 100 = −27% and (0.89 − 0.7)/0.7 × 100 = +27%. Thus, in this example, the mean value of 0.7 varies ±27% with a confidence level of 95%.

If this range is too large compared to the acceptable parameter, then a new sample size should be determined by solving Equation 4. Assuming that at this stage the mean and standard deviation can be taken as constant for the whole population, and using the already calculated mean and standard deviation, Equation 4 is solved for n using several iteration cycles. Figure 2 shows the variation of the mean value range with increasing sample size for the current numerical example. Three confidence levels of 67%, 95%, and 99.7% were chosen for examination. As another exercise, for four mean value ranges of 10%, 15%, 20%, and 30%, the variation of the confidence level versus the number of inspections is shown in Figure 3.

[Figure 2 plots the mean value range (percent) against the number of piles for confidence levels of 67%, 95%, and 99.7%.]

Figure 2. Mean Value Range Variation
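The data behind charts such as Figures 2 and 3 can be tabulated directly from Equation 4. The sketch below does this for the three confidence levels examined, using a mean of 0.7 and a standard deviation of about 0.26 consistent with the worked example; the constants, the names, and the scipy usage are assumptions made for illustration only, not values or code from the paper.

```python
# Tabulate Figure 2-style data: mean value range (percent of the mean) versus
# sample size for three confidence levels. MEAN and STD_DEV follow the worked
# example above and are assumptions of this sketch, not measured values.
import math
from scipy import stats

MEAN, STD_DEV = 0.7, 0.26
for confidence in (0.67, 0.95, 0.997):
    ranges = {}
    for n in range(10, 101, 10):
        t = stats.t.ppf(1 - (1 - confidence) / 2, df=n - 1)
        ranges[n] = round(100 * t * STD_DEV / (math.sqrt(n) * MEAN), 1)
    print(f"confidence level {confidence:.1%}: {ranges}")
```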

[Figure 3 plots the confidence level against the number of piles for mean value ranges of 10%, 15%, 20%, and 30%.]

Figure 3. Confidence Interval Variation

It should be pointed out that Figure 2 and Figure 3 are valid only for this numerical example, or more specifically, only for the calculated mean and standard deviation values. For different values, the curves on these charts will be different. If a computer is used in the field, these charts are unnecessary. However, a set of such charts can be created to assist in making decisions in the field regarding approximate sample size.

Conclusions

A reasonably practical method has been presented for the determination of a statistically representative sample size. Applying this method has the potential to reduce the cost of underwater inspections, particularly for larger structures, without reducing reliability. It should be recognized, however, that this method is recommended only for determining the number of locations for Level II inspection efforts. It is recommended that a Level I inspection effort (swim-by), which can typically be accomplished fairly economically since no removal of marine growth is necessary, be performed on 100 percent of the accessible underwater components.

References

1. Barquett, Ronald L., and Kenneth M. Childs, Jr. Evaluation of Publications Analysis Techniques for Evaluation of the Condition of Waterfront Structures. Non-Destructive Engineering Annual Meeting, University of California at Santa Cruz.

2. Devore, Jay L., and Nicholas R. Farnum. Applied Statistics for Engineers and Scientists. International Thomson Publishing, January.