Knowledge Sourcing Effectiveness. Electronic Companion Document


Peter H. Gray
Assistant Professor
Katz Graduate School of Business
University of Pittsburgh
244 Mervis Hall, Pittsburgh, PA

Darren B. Meister
Assistant Professor
Richard Ivey School of Business
University of Western Ontario
London, Ontario N6A 3K7, Canada

In this Electronic Companion Document, we provide additional details regarding five aspects of the above-noted study, namely (a) the relationship between knowledge sourcing and similar constructs, (b) the utilization of formative constructs, (c) psychometric tests performed to assess the representativeness of the sample, the likelihood of non-response bias, and the likelihood of mono-method bias, (d) our item trimming process, and (e) additional data regarding item-construct discriminant validity. These ideas and analyses are included here to expand on discussions in the main paper.

Knowledge Sourcing and Related Constructs

A fundamental problem facing researchers who wish to employ terms that accurately and precisely describe behaviors that surround communication exchanges between individuals is the lack of agreement in published research on the meaning of the words information and knowledge. At the extreme, information is any stimulus that reaches an individual's sensory system (e.g., Miller 1969), and knowledge is any cognition formed in a human brain (e.g., Nonaka 1994); following this logic, all human communication acts are simply articulations of human knowledge. Further, many researchers employ these two terms interchangeably (e.g., Johnson 1996), leave them undefined (e.g., Borgatti and Cross 2003), or both (e.g., Sussman and Siegal 2003). A reliance on

definitions therefore will not help distinguish knowledge sourcing as reported in this manuscript from related behaviors. To explicate these differences, this section examines characteristics of the typical phenomena studied by researchers who employ the terms information seeking, newcomer information seeking, help seeking, environmental scanning, and information search, in order to demonstrate that, rhetoric aside, these literatures differ in the practical and theoretical issues they address. The term knowledge sourcing was introduced in this manuscript to clearly and precisely describe the phenomena of interest in this study.

A first issue is differentiating knowledge sourcing from the more general term information seeking. Some authors (e.g., Borgatti and Cross 2003; Sussman and Siegal 2003) have described the behavior of individuals who are asking colleagues for their advice and expertise as information seeking. This is not surprising; information seeking is a sufficiently large concept (e.g., Johnson 1996) that it can be applied to virtually all human behaviors that involve efforts to locate useful sensory inputs. However, we believe that there is an important difference between asking others for their advice (based on their professional expertise and experience) and looking for sensory input and/or factual data. The former is a product of an individual's reasoning ability and beliefs about cause-and-effect connections, while the latter merely corresponds to some state of reality. To lump all human behaviors relating to both advice and facts into the same category ignores the important differences between the two; one augments recipients' causal maps and helps them understand and predict future events, while the other contains no inherent causal inferences. Knowledge sourcing therefore clearly and unambiguously describes a behavior that some researchers treat as part of information seeking, and others do not.
Other researchers have similarly developed more precise conceptualizations of specific subsets of the more generalized information seeking term. For example, feedback seeking (Ashford and Cummings 1983) focuses on how individuals obtain cues about the acceptability of their past job

performance, both by monitoring their environment and through explicit verbal requests. Feedback seeking therefore deals with a subset of employees and a subset of topic matters, including active requests for feedback and more passive scanning-like behaviors. Newcomer information seeking (Morrison 1993) is similar to feedback seeking, but is carried out by employees who are still learning about the expectations, appropriate attitudes, and behaviors associated with their new jobs; it therefore considers an even smaller subset of employees. As part of the newcomer socialization literature (e.g., Miller and Jablin 1991), such behaviors are of primary interest as tools for enhancing person-job fit. Help seeking (e.g., Lee 1997) deals with requests for actual assistance, which may or may not involve access to others' knowledge. Environmental scanning refers generally to the acquisition of information from an organization's external environment (Agarwal 1967); it involves sweeping the external information environment broadly and systematically in order to monitor developments that could impact an organization (Choo 1998, p. 93). The focus is on the acquisition of primarily factual data regarding a subset of possible topic matters (strategically important issues). Information search (e.g., Vandenbosch and Higgins 1996) is seen as a more directed form of enquiry than environmental scanning, but again does not distinguish between factual data and expertise-based knowledge. In summary, past research has both investigated phenomena that are related to knowledge sourcing but conceptually distinct, and considered behaviors that are sufficiently vaguely defined so as to include knowledge sourcing and a host of related behaviors. We employ the term knowledge sourcing to get away from more general theories that could apply both to knowledge as interpretations of causal structures in the work environment and to factual information known to people.
We focus specifically on knowledge and do not automatically generalize our findings to factual information. As demonstrated by the findings of this study, this separation improves the precision of the theory that can be developed and tested. For example, one would not

expect that individuals with a higher level of intellectual demands would benefit from obtaining larger amounts of factual data, but they clearly do benefit from sourcing more knowledge. Truly understanding knowledge-related phenomena in organizations requires the development of knowledge-specific constructs and theories, rather than simply assuming that theories about factual data automatically apply to knowledge. Future researchers are urged to employ this more precise term when studying individuals' behavior in locating and accessing others' expertise, experiences, opinions, and insights as a way of developing an identifiable body of knowledge-specific research.

Formative vs. Reflective Constructs

As noted in the main paper, when items used to measure a construct do not necessarily covary, the appropriate relationship between items and construct is formative (Bollen 1984). Formative constructs are caused by their indicators, which precede the construct (Cohen et al. 1990). Following the logic specified by Cohen et al. (1990), formative indicators are not assumed to be correlated, as they do not measure the same underlying phenomenon. Such constructs are specified as summative indices (Barclay et al. 1995), in contrast to reflective constructs, whose indicators covary by definition and are analyzed through techniques such as principal components or factor analysis to identify underlying dimensions. The test for whether any set of indicators should be viewed as formative or reflective is to question whether a change in the underlying construct will necessarily result in similar changes in all the indicators. Thus, it is entirely conceivable that an individual could (for example) engage in higher levels of published knowledge sourcing without changing his or her level of dyadic or group sourcing. The same logic applies to the intellectual demands and learning outcomes constructs.
An individual could, for example, be assigned work that is more complex without that work necessarily

being more interdependent and non-routine as well. Similarly, if an individual learns more from organizational best practices, he or she does not by definition also become more adaptive and innovative. We therefore treated (a) measures of published, dyadic, and group knowledge sourcing as formative indicators of knowledge sourcing, (b) measures of cognitive replication, adaptation, and innovation as formative indicators of learning outcomes, and (c) measures of non-routineness, complexity, and interdependence as formative indicators of intellectual demands. This approach avoids the complexity of a second-order factor model while ensuring that the constructs are generalizable across different contexts where the relative proportions of each type of knowledge sourcing, intellectual demand, and learning outcome may differ.

Reliability is not a meaningful concept when applied to formative constructs (Cohen et al. 1990), as there is no assumption that formative indicators will covary (Bollen and Lennox 1991; Chin 1998). This is because formative constructs and reflective constructs have different epistemic relationships with their respective indicators (Bagozzi 1984). Reflective constructs feature indicators that are caused by the construct, and thus covary by definition. In contrast, formative constructs are defined by their indicators in a causal fashion, such that formative constructs are completely determined by a linear combination of indicators (Hulland 1999). Examining correlations or internal consistency measures for formative constructs is thus inappropriate, illogical (Bollen 1984), and meaningless (Chin 1998). However, because each formative construct in this study (intellectual demands, knowledge sourcing, and learning outcomes) comprised three distinct dimensions, it was possible to assess the reliability of each of these dimensions separately.
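As an illustrative sketch of this measurement logic (hypothetical data; the simulated indicators and function names are our own, not the study's instruments), reliability can be computed separately for each reflective dimension, while the formative construct is built as a summative index of the standardized dimension scores:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a respondents-by-items matrix measuring
    one reflective dimension (items are assumed to covary)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(0)
# Hypothetical responses: 100 respondents, 3 items per dimension.
latent = rng.normal(size=(100, 1))
non_routineness = latent + 0.5 * rng.normal(size=(100, 3))  # items covary
complexity      = rng.normal(size=(100, 3))                 # placeholder data
interdependence = rng.normal(size=(100, 3))                 # placeholder data

# Reliability is assessed per reflective dimension...
alpha = cronbach_alpha(non_routineness)

# ...while the formative construct is a summative index of the
# (standardized) dimension scores; no covariance among the three
# dimensions is assumed or required.
def dimension_score(items: np.ndarray) -> np.ndarray:
    s = items.mean(axis=1)
    return (s - s.mean()) / s.std(ddof=1)

intellectual_demands = (dimension_score(non_routineness)
                        + dimension_score(complexity)
                        + dimension_score(interdependence))
```

The point of the sketch is that alpha is meaningful only within each dimension; the three dimension scores are simply summed, with no internal-consistency requirement across them.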
Thus, these reflective dimensions were also tested to demonstrate the reliability of their measures. For example, intellectual demands was formed using three subsets of items that measure non-routineness, complexity, and

interdependence. While these three dimensions themselves do not necessarily covary, the items used to measure each dimension were expected to covary.

Psychometric Tests

This section reports additional details regarding tests for the representativeness of the sample, the likelihood of non-response bias, and the likelihood of mono-method bias. The sample's representativeness was supported, as no significant demographic differences were found between the sample and population figures supplied by TechCo's manufacturing engineering HR department. Respondents did not differ from other employees on age (p=0.13), tenure (p=0.78), or gender (p=0.54). Using an extrapolation technique (Armstrong and Overton 1977), non-response bias was found to be unlikely. We assessed non-response bias by testing for differences between early and late responders (the first 10% and last 10% of responses), on the basis that late responders would be most similar to non-respondents (Armstrong and Overton 1977). No significant (p<0.05) differences were found, suggesting that non-response bias was unlikely. Following Podsakoff and Dalton (1987), mono-method bias was assessed by testing for a common method influence across all responses; none was found. A factor analysis of all items reported here revealed twelve factors explaining 47.0% of the variance, with no single factor featuring significant (p<0.10) loadings for all items.

Item Trimming

Items were trimmed following an SPSS analysis that examined internal consistency measures, item loadings, and composite reliability. Results are provided in Table 1 below. Trimmed items included one item from each of non-routineness (loading 0.55), complexity (0.54), interdependence

(0.68), cognitive replication (0.57), reciprocation wariness (0.67), and risk aversion (0.67). One item was also trimmed from each of the three knowledge sourcing scales for reasons of content validity, as these items referred to knowledge sourcing that occurred in response to difficult problems and thus could be conflated with intellectual demands. Two items were removed from the cognitive adaptation scale and one from the cognitive innovation scale to balance the number of indicators for each dimension of the learning outcomes formative construct (leaving two items for each dimension). Balancing the number of indicators is important to ensure that no dimension of a formative construct overpowers another purely by having more indicators.
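As an illustration of this kind of loading-based screen (hypothetical data and cut-off; a corrected item-total correlation stands in for the SPSS loadings actually examined), the following sketch flags items that relate too weakly to the rest of their scale:

```python
import numpy as np

def corrected_item_total(items: np.ndarray) -> np.ndarray:
    """Correlation of each item with the sum of the remaining items —
    a simple stand-in for the item loadings examined in SPSS."""
    k = items.shape[1]
    out = np.empty(k)
    for j in range(k):
        rest = np.delete(items, j, axis=1).sum(axis=1)
        out[j] = np.corrcoef(items[:, j], rest)[0, 1]
    return out

def trim_items(items: np.ndarray, cutoff: float = 0.6) -> list:
    """Return indices of items whose corrected item-total
    correlation meets the cutoff; the rest would be trimmed."""
    r = corrected_item_total(items)
    return [j for j in range(items.shape[1]) if r[j] >= cutoff]

rng = np.random.default_rng(1)
latent = rng.normal(size=(200, 1))
good = latent + 0.4 * rng.normal(size=(200, 3))  # three well-behaved items
bad = rng.normal(size=(200, 1))                  # one weak, unrelated item
scale = np.hstack([good, bad])

kept = trim_items(scale)  # the unrelated fourth item should be dropped
```

In practice the trimming decision also weighed composite reliability and content validity, as described above; the sketch covers only the loading criterion.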

Table 1: Items Trimmed Following SPSS Analysis

Construct/Dimension      Item    Item Text
Non-Routineness          ROU_    I frequently deal with unusual, one-of-a-kind things at work.
Complexity               COM_    I must balance many different goals and objectives at work.
Interdependence          INT_    I can carry out my work without consulting other employees.
Published Sourcing       PKS_    When I'm working on a tough problem I often read documents that were written by TechCo people who may have encountered similar problems.
Dyadic Sourcing          DKS_    When I'm working on a difficult issue, I often communicate one-on-one with individual employees who may have encountered similar issues.
Group Sourcing           GKS_    When I am working on a challenging problem, I often bring it up for discussion with a group of employees who may have encountered similar problems.
Cognitive Replication    CR_     I have learned a lot from TechCo's Best Practices over the past year.
Cognitive Adaptation     CA_     My understanding of my job has evolved over this past year in response to our changing circumstances.
Cognitive Adaptation     CA_     I have spent lots of time this past year learning new things to keep up with our changing circumstances.
Cognitive Innovation     CI_     I have come up with many original ideas about how I could improve my work this past year.
Risk Aversion            RA_     I am very willing to take risks when choosing a job or project to work on.
Reciprocation Wariness   RW_     Trading favors often benefits one person more than the other.
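The early-versus-late responder comparison described in the Psychometric Tests section can be sketched as follows (hypothetical data; a Welch t-test stands in for the specific test statistics actually computed):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Hypothetical survey: one study variable per respondent, with
# responses ordered by return date.
n = 300
responses = rng.normal(loc=4.0, scale=1.0, size=n)

# Compare the first and last deciles of responders; late responders
# are taken as proxies for non-respondents (Armstrong and Overton 1977).
decile = n // 10
early, late = responses[:decile], responses[-decile:]
t_stat, p_value = stats.ttest_ind(early, late, equal_var=False)

# A significant difference would flag possible non-response bias.
biased = p_value < 0.05
```

The same comparison would be repeated for each study variable; only if none differs significantly is non-response bias judged unlikely.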

Discriminant Analysis

An exploratory factor analysis using SPSS 11.5 was used to identify any items that cross-loaded on other dimensions, with results (loadings < 0.4 suppressed) shown in Table 2 below. Because two items from the complexity dimension did not load cleanly, all complexity items were removed from the remaining analyses.

Table 2: Exploratory Factor Analysis (loadings, with loadings < 0.4 suppressed, for the ROU, INT, COM, PKS, DKS, GKS, CR, CA, and CI items)

Table 3 below provides a table of correlations between items and constructs employed in the

PLS analysis, which provides further evidence of discriminant validity in that all items correlated most strongly with their intended construct.

Table 3: Item Discriminant Analysis (correlations of the ROU, INT, PKS, DKS, GKS, CR, CA, CI, LO, RA, and RW items with the non-routineness, interdependence, published sourcing, dyadic sourcing, group sourcing, cognitive replication, cognitive adaptation, cognitive innovation, learning orientation, risk aversion, and reciprocation wariness constructs)

References

Agarwal, R. 1967. Scanning the Business Environment. Macmillan, New York.
Armstrong, J. S., Overton, T. S. 1977. Estimating nonresponse bias in mail surveys. Journal of Marketing Research.
Ashford, S., Cummings, L. 1983. Feedback as an individual resource: Personal strategies for creating information. Organizational Behavior and Human Performance.
Bagozzi, R. P. 1984. A prospectus for theory construction in marketing. Journal of Marketing 48(Winter).
Barclay, D., Higgins, C., Thompson, R. 1995. The Partial Least Squares (PLS) approach to causal modeling: Personal computer adoption and use as an illustration. Technology Studies 2(2).
Bollen, K. A. 1984. Multiple indicators: Internal consistency or no necessary relationship? Quality & Quantity.
Bollen, K. A., Lennox, R. 1991. Conventional wisdom on measurement: A structural equation perspective. Psychological Bulletin 110(2).
Borgatti, S. P., Cross, R. 2003. A relational view of information seeking and learning in social networks. Management Science 49(4).
Chin, W. W. 1998. The partial least squares approach for structural equation modelling. In G. A. Marcoulides (Ed.), Modern Methods for Business Research. Lawrence Erlbaum.
Choo, C. W. 1998. The Knowing Organization. Oxford University Press, New York.
Cohen, P., Cohen, J., Teresi, J., Marchi, M., Velez, C. N. 1990. Problems in the measurement of latent variables in structural equations causal models. Applied Psychological Measurement.
Hulland, J. 1999. Use of partial least squares (PLS) in strategic management research: A review of four studies. Strategic Management Journal.
Johnson, J. D. 1996. Information Seeking: An Organizational Dilemma. Quorum Books, Westport, CT.
Lee, F. 1997. When the going gets tough, do the tough ask for help? Help seeking and power motivation in organizations. Organizational Behavior and Human Decision Processes 72(3).
Miller, G. R. 1969. Human information processing: Some research guidelines. In R. J. Kibler, L. L. Barker (Eds.), Conceptual Frontiers in Speech-Communication. Speech Communication Association, New York.
Miller, V. D., Jablin, F. M. 1991. Information seeking during organizational entry: Influences, tactics, and a model of the process. Academy of Management Review 16(1).
Morrison, E. W. 1993. Newcomer information seeking: Exploring types, modes, sources, and outcomes. Academy of Management Journal 36(3).
Nonaka, I. 1994. A dynamic theory of organizational knowledge creation. Organization Science 5(1).
Podsakoff, P. M., Dalton, D. R. 1987. Research methodology in organizational studies. Journal of Management.
Sussman, S., Siegal, W. 2003. Informational influence in organizations: An integrated approach to knowledge adoption. Information Systems Research 14(1).
Vandenbosch, B., Higgins, C. 1996. Information acquisition and mental models: An investigation into the relationship between behavior and learning. Information Systems Research 7(2).