Facilitating Competitive Intelligence: The Next Step in Internet-Based Research

Willie Tejada

USING THE INTERNET TO GATHER STRATEGIC COMPETITIVE INTELLIGENCE is becoming a reality: a mission-critical process that professional organizations can no longer ignore. Until recently, competitive intelligence was confined to monitoring competitors' activities, such as how much they were spending on advertising, what they were doing in research and development, and who they were hiring. Today, however, the vast amount of information readily available through the Internet has the potential to greatly expand the scope of competitive intelligence and catapult it to new levels of importance, embracing market changes, customer expectations, and developing global trends. Internet-based research for competitive intelligence holds the potential to discern important forces, such as technology adoption, buying habits, and other behaviors, that are often buried in seemingly unrelated, unstructured information sources: articles, press releases, financial statements, and the like. Many tools currently exist to make it easier to spot trends in structured information such as profits, revenues, or sales figures, but there are no tools to aid in analyzing unstructured information over time. As a result, Internet-based research remains a mostly manual, ad hoc process that demands too much time and costs too much money to yield truly useful results and become an intrinsic part of business life.

To stay competitive in the information age, executives know that they must improve methods to acquire and manage unstructured information from the Internet, and convert it into useful, pertinent knowledge to share with others. Research firm Dataquest Inc. estimates that by 1999 corporations will spend $4.5 billion to better leverage their knowledge resources. Of the top applications of knowledge management identified by the Gartner Group, competitive intelligence is the only one that requires the collection of external information, with the Internet emerging as the primary source.

Knowledge derived from information in the vast ocean of the Internet, at best, is random and elusive. Until now, using the Internet has required extensive manual searching, ad hoc analysis, and cumbersome sharing. In most cases, the process has proven so time-consuming and so costly that it winds up being performed poorly or not at all. Yet the failure to track competitive intelligence through organized research can be disastrous. Business history is rife with examples of firms that missed important trends, as shown in Exhibit 4.1.

Exhibit 4.1. Companies that Failed to Monitor CI Trends.

    Company            Driver       Mistake
    Wang               Technology   Failure to understand technology trends in product development
    IBM PC Division    Business     Failure to predict commoditization of industry
    Apple Computing    Marketing    Failure to market superior technology and product
    Pan Am             Societal     Failure to understand societal trends contained in financial analysis

Without new technologies designed from the ground up to implement revolutionary changes in the way research is collected, analyzed, and shared, organizations have little chance of improving competitive intelligence through Internet-based research. The broad field of knowledge management is still emerging as a way to facilitate the process of locating, organizing, transferring, and more efficiently using the information and expertise within an organization.

Although substantial funds are being expended on information and knowledge management solutions, until now there has been an absence of efficient Internet-based research systems that specifically target the day-to-day needs of those involved in competitive intelligence. This chapter explores the extent and limitations of today's technologies and offers a new way of implementing an effective Internet-based research system.

INTERNET-BASED RESEARCH

The Internet makes information that can be used for competitive intelligence more readily available. Today, public or publishable business information, from patent filings to bankruptcy notices, is available on the Internet. When asked if they do Internet research, almost anyone in a modern corporation would probably answer, "Yes, I use a search engine." But searching for information is merely the first step in a much more extensive process. In general, Internet-based research follows a process with three distinct steps:

1. Information collecting and sourcing.
2. Information discovery and analysis.
3. Knowledge sharing and transfer.

This core process is consistent across many industries and jobs, notably high technology, market research, financial services, and biotechnology. While the content, the sources of information, and in some cases the methodology will change from instance to instance, the core process (collecting and sourcing, discovery and analysis, and sharing and transferring) remains the same.

MORE THAN A SEARCH ENGINE

Business people performing research start with an objective: something they are looking to prove or learn, such as a market statistic or information on a technology. They then begin collecting information, an essentially manual process. Assuming they know where to look, they use a search engine, or more likely several search facilities, type in keywords and phrases, sort out the useful results, and gather a collection of research materials. If the search and retrieval has to be interrupted and continued at a later time, it cannot simply be picked up at the same point; search engines do not work that way. The same keywords and phrases have to be reentered and the search started all over again. Useful information is later stored and organized in a way that makes it accessible when needed, perhaps as separate documents or cut and pasted into a master document.
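To make concrete what it would mean to pick up an interrupted search at the same point, consider the minimal Python sketch below of a persistent research set that remembers queries and collected sources between sessions. It is purely illustrative; the class and field names are invented for this example and do not describe any particular product.

    import json
    from dataclasses import dataclass, field, asdict

    @dataclass
    class ResearchSet:
        # A saved collection of queries and sources that can be resumed later.
        objective: str
        queries: list = field(default_factory=list)
        items: list = field(default_factory=list)

        def add_query(self, query: str) -> None:
            # Remember each query so an interrupted session need not be retyped.
            if query not in self.queries:
                self.queries.append(query)

        def add_item(self, url: str, title: str, notes: str = "") -> None:
            # Store a source together with the researcher's own notes.
            self.items.append({"url": url, "title": title, "notes": notes})

        def save(self, path: str) -> None:
            with open(path, "w") as f:
                json.dump(asdict(self), f, indent=2)

        @classmethod
        def load(cls, path: str) -> "ResearchSet":
            with open(path) as f:
                return cls(**json.load(f))

    # Build part of a research set, save it, and resume later at the same point.
    rs = ResearchSet(objective="Technology adoption in financial services")
    rs.add_query("push technology banking adoption")
    rs.add_item("http://example.com/review", "Push technology review",
                notes="Supports the adoption-trend argument")
    rs.save("research_set.json")
    resumed = ResearchSet.load("research_set.json")  # queries and items intact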

Before the Internet, this phase of the process was called building a research set, and it was accomplished by collecting a pile of clippings, papers, and reports.

The second phase, discovery and analysis, begins after the information has been collected and saved. The collected materials are explored by opening documents and reading them to find out what has been discovered. Knowledge is gained by making connections between the discrete items of information, mentally linking the pieces that will speak to a conclusion. Typically, this is an iterative process of discovering and analyzing, and potentially stepping back to add new information to the research set.

After searching, collecting, discovering, and analyzing all the information, conclusions are shared with others in some form of research summary. Typically, high-impact statements, key statistics, and charts and graphs are extracted from the source materials and placed into the report. The report contains not only facts and figures, or explicit knowledge, but also the methodology and thought processes employed to reach the conclusion. This tacit knowledge is the value the individual brings to the research process. This last phase of the process can be quite tedious and time consuming, because much of the background information needed to support statements and conclusions exists in a format that cannot simply be imported into the report. These nuggets of information have been extracted from a variety of sources and must be recreated in a form suitable for inclusion in the final report.

THE PROBLEM WITH POINT SOLUTIONS

A number of technologies are being used today to aid people performing Internet-based research. From search engines and push technology to market research sites, information aggregators, and desktop software, many point solutions are available to address specific, individual parts of the research process. These technologies were never designed to work together as a system to facilitate the entire Internet-based research process. None of them enables information to be stored and viewed over time or collected as sets. Even if it were possible to combine these existing point solutions, they would not provide a way to capture the methodologies: the thought processes that determine what information to look for and how to use it.

Search engines are a valuable technology, providing access to vast amounts of information. But search engines cast too broad a net, often returning excessive information in no particular order. Users must waste time rejecting junk in order to organize the data. What's more, the quality of the information returned is often disappointing because the searches

cannot be tailored to the more focused and narrow, but much deeper, needs of researchers in specific industries. Today's search engines are similar to the Yellow Pages. They cover a broad spectrum of information on a wide variety of topics, but lack the precision and depth required for specific information. Auto parts dealers, for example, do not use the Yellow Pages to locate parts for customers. Instead, they use a special, vertical directory of auto parts manufacturers.

Push technologies try to deal with the limitations of search engines by profiling a user's information needs, watching for that information, and then delivering it to the user automatically. But as attractive as push technology was when first introduced, for conducting research it is no better than passively watching television. While search and push technologies bring information to people, market research sites and information aggregator sites collect huge databases of information and act as clearinghouses. They provide useful indexes and some organization of the information, but they are essentially large databases, with no automated collection and fairly static information.

A needle and thread may be useful for sewing on a single button or mending one hole, but a professional tailor requires the speed and accuracy of a sewing machine to make enough suits every day to support a business. Similarly, search engines and other technologies may do a good job for the masses, but they require too much time and are too cumbersome to be useful for performing mission-critical Internet-based research on a daily basis.

THE SYSTEMS APPROACH TO INTERNET RESEARCH

Professional researchers (those who approach Internet research with clear objectives, collect information, analyze it, and communicate it to others) require an integrated, systems-based approach that facilitates the entire research process. They need a system that automates many of the repetitive and time-consuming tasks while providing a framework to help organize, manage, and communicate the collected information. To be truly useful, such a research system should be targeted to the special needs of the individual practitioner working within a particular vertical industry. While many point technologies play a role in aiding the research process, current implementations outside the context of a system are not sufficient for the next-generation Internet research product. A database is a highly useful technology, but it is not an accounting system. Similarly, a search engine is a highly useful technology, but it is not an Internet research system. Only a systems approach can address all phases of the research process within the context of the overriding objectives, the practitioner who is setting those objectives, and the industry within which the practitioner works.

Such a system should provide:

• An industry- and practitioner-specific information map, or catalog, to guide searching for and collecting information
• An object store to manage information and let it be viewed over time
• Conversion capabilities for extracting and publishing information
• A containment model for capturing and articulating methodologies

The sheer volume of information that makes the Internet so attractive also hampers research, because no map or catalog of this information exists to help guide people in their search. Therefore, the next-generation Internet research system must provide a highly qualified catalog of sources for collecting information that is relevant to the research objectives. The best catalogs are narrow in scope yet deep in detail, focusing on thorough classification of specific industries. To be truly useful, a catalog should include technology for both classifying and collecting the information. The catalog should consider not only the industry (high technology, finance, biotechnology, etc.), but also the type of practitioner doing the research (marketing researcher, financial broker, public relations account executive), because two people within the same industry can have different information needs depending on their roles. By displaying information sources that are highly relevant and qualified for a given domain, the catalog streamlines the collection process by letting the researcher zero in on useful information. The way in which the catalog sources are organized is also important, because good organization can greatly enhance the ability to discover new information while browsing the catalog. The system can then automate the actual collection process, allowing the researcher to skip this tedious task.

To help quickly determine which items of the collected information might be useful during the discovery and analysis phase, an Internet research system should provide the facility to store information about the information, or metadata. Metadata describes various aspects of the content and allows a detailed, structured analysis to be performed. Like electronic Post-It notes, metadata makes it easier to catalog and retrieve the unstructured information. Using metadata, reports can be created to determine, for example, how many documents in a collection were written by a particular author or what events occurred on a particular date; a brief sketch of this kind of query appears below. Some Internet sites already add metadata to the information they publish, including, for example, the type of article (such as a product review) as well as identifiers for the product, the company, and the author. Even this minimal level of metadata provides very useful information.

Both the catalog and the metadata should be customizable by the user to reflect individual needs. This makes it possible for a user to organize the catalog information in such a way that others using the catalog can achieve the same insight.
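As a rough illustration of the metadata-driven reporting just described, the short Python sketch below tags a few collected documents with descriptive fields and answers the two example questions from the text: how many documents each author wrote, and what was published on a given date. The field names and values are assumptions made up for this example, not a schema from any actual site or product.

    from collections import Counter

    # Each collected document carries metadata: "Post-It notes" about the content.
    documents = [
        {"title": "Q3 product review",  "author": "J. Smith",
         "date": "1998-10-02", "type": "product review", "company": "Acme"},
        {"title": "Acme press release", "author": "Acme PR",
         "date": "1998-10-02", "type": "press release", "company": "Acme"},
        {"title": "Market outlook",     "author": "J. Smith",
         "date": "1998-11-15", "type": "analysis",      "company": "Acme"},
    ]

    # How many documents in the collection were written by each author?
    by_author = Counter(doc["author"] for doc in documents)
    print(by_author)   # Counter({'J. Smith': 2, 'Acme PR': 1})

    # What events occurred on a particular date?
    on_date = [doc["title"] for doc in documents if doc["date"] == "1998-10-02"]
    print(on_date)     # ['Q3 product review', 'Acme press release']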

The metadata also makes possible another byproduct: information appreciation. Just as data mining can provide insight into structured information, knowledge mining can provide insight into unstructured information enhanced with metadata. Over time, as users collect information that is highly relevant to their business and save it in the system's object store, the ability to mine this information for additional insights will become a tremendous information asset. As yet, no point solution exists for performing this kind of trend analysis on unstructured information. Today's point solutions use information once and then discard it. They provide no means of storing information over time, thus precluding useful trend analysis; a brief sketch of such analysis appears at the end of this chapter.

To make the extracted information usable for analysis and to facilitate sharing and communication, format conversions are required to move the information from its published state to one compatible with whatever output application is being employed (e.g., a word processor). Today this is essentially a manual process that mostly entails re-creating the information in another application.

Finally, the ability to customize a system and make it work the way the individual wants is extremely important. A research system needs a way to tune its constituent technologies to solve very specific problems. It is not possible simply to take a search engine, a database, a word processor, format converters, and other utilities, integrate them through APIs, and have a useful research system. Business logic for each specific industry and practitioner must drive the operation of the entire system. Business logic is what turns a general-purpose, horizontal tool into a precision instrument designed for a specific research domain. It converts the generic research process into one especially designed to collect, discover, and share information for a specific vertical niche.

Internet-based research is now being performed frequently enough to consider competitive analysis a mission-critical business process, one that will be extremely important for achieving a competitive advantage in the next millennium. The time and money spent on Internet-based research can be greatly reduced by automation accomplished through a systems approach, one that supports all phases of the research process, from knowledge acquisition and discovery to sharing both explicit and tacit knowledge, and that provides the ability to discern trends before it is too late.
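As a closing illustration of the trend analysis described above, the Python sketch below groups documents retained in an object store by quarter, using their metadata, to expose a simple trend over time. The storage format, topics, and dates are invented for illustration; the point is only that this kind of analysis requires keeping information over time rather than discarding it after one use.

    from collections import defaultdict

    # Documents accumulated in the object store over time, each with metadata.
    store = [
        {"date": "1998-01-20", "topic": "push technology"},
        {"date": "1998-02-11", "topic": "push technology"},
        {"date": "1998-07-03", "topic": "push technology"},
        {"date": "1998-08-19", "topic": "knowledge management"},
        {"date": "1998-09-30", "topic": "knowledge management"},
    ]

    def quarter(date: str) -> str:
        # Convert an ISO date such as "1998-07-03" into a quarter label.
        year, month, _ = date.split("-")
        return f"{year}-Q{(int(month) - 1) // 3 + 1}"

    # Count topic mentions per quarter: a simple form of trend analysis
    # that is impossible when information is used once and discarded.
    trend = defaultdict(lambda: defaultdict(int))
    for doc in store:
        trend[doc["topic"]][quarter(doc["date"])] += 1

    for topic, counts in trend.items():
        print(topic, dict(sorted(counts.items())))
    # push technology {'1998-Q1': 2, '1998-Q3': 1}
    # knowledge management {'1998-Q3': 2}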

Author's Bio

Willie Tejada is the vice president of marketing for Aeneid Corporation. Tejada has an extensive background in networking and collaboration, bringing to Aeneid his experience in the roots of knowledge management. In management positions at Novell, he contributed to the explosive growth of the local area networking (LAN) market, the precursor to today's Internet. He then brought his understanding and experience of LANs to Novell's groupware efforts, where he served as vice president of marketing for the groupware division. Prior to cofounding Aeneid, Tejada held the position of vice president of product marketing for NetManage.