Sunday, September 05, 2004

Competitor Intelligence: Part II
Part 1 is here.

The initial post in this series looked at some of the practical reasons competitor intelligence (CI) efforts fail. (By one count, 90% of new CI initiatives do not survive three years.) In this post, I want to look at the systemic problems growing out of managerial mindset and habits.

One difficulty is that even the best CI does not look like the data and information managers are accustomed to using. Internal MIS can produce reams of numbers, variances, and reports. They are carefully reconciled, and percentages are calculated to the second decimal place. Compared to this, good CI will seem skimpy and weak. It simply is not possible to get the depth of detail about competitors that we can retrieve about our own activities.

Unfortunately, the well-established internal information architecture is often the standard by which the CI product gets measured. This tendency is exacerbated by the case study method prevalent in MBA programs.

In this paper (.pdf file) outlining his epistemological hierarchy, Stephen Haeckel writes that intelligence "in the CIA sense of the word, is produced by the application of inference to information." This reliance on inference presents a problem in a corporate environment where the information paradigm grows out of cost accounting and financial theory. Information is supposed to be analyzed in spreadsheets and complex statistical models. Inference, which often relies on nearly incommunicable wisdom arising from innate talent, individual experience, and wide reading, just does not seem concrete enough. Frequently, the intelligence estimate goes through a process akin to being nibbled to death by ducks as conventional analysts in various departments raise objections and "concerns" based on their SOP methods of attacking questions.

A related problem is that analysts using traditional methods focus on elements of detail complexity while competitor assessment is a matter where dynamic complexity plays a large role. (See here for more.)

I sometimes think of this as the Cassandra problem: the assessment of the situation can be correct but the person making the assessment is unable to convince peers and policy makers of its value.

This problem becomes particularly acute when CI produces bad news or casts current corporate performance in a bad light. When a benchmarking study shows customer service performance languishing in the third quartile, it is much, much easier to attack the methods of the study than to face the problem and take remedial action.

Another systemic weakness grows out of the underdeveloped nature of strategic management at most corporations. Good intelligence work is usually associated with a clear strategy and sound doctrine. These provide the conceptual filters that let intelligence analysts concentrate their efforts on the key questions and deliver a product that is actionable.

World War Two's Battle of Midway provides a good example of this. The code-breaking and intelligence assessment by Adm. Nimitz's staff is one of the great triumphs of modern military intelligence work. However, Layton and Rochefort were aided by Nimitz's clear leadership. Despite the carnage on Battleship Row, the Pacific Fleet was looking for a fight from the beginning of 1942. The US Navy possessed a clear understanding of modern warfare: it had been working out carrier tactics and operations for over two decades. In the dozens of wargames played at the Naval War College in the 1930s, the enemy was nearly always the Japanese Fleet.
With clear direction, useful doctrine, and a deep understanding of their enemy, the naval analysts homed in on tracking the big fleet carriers of Adm. Yamamoto. When Tokyo launched the Midway operation, Nimitz was given ample warning and could set his trap.

The Japanese, in contrast, simply assumed that they would surprise the Americans, had no clear strategy, and had a doctrine which contained more wishful thinking than hard-headed analysis. No surprise, then, that the fog of war hung more heavily around the Japanese commanders.

One quandary that corporate intelligence professionals face is the penchant of senior executives to put more faith in their own informal intelligence gathering than in the product of their own CI groups. Executives meet their peers at conferences, and often they have worked together at previous employers. In short, they have sources not available to the CI grunts. When their own informal assessments conflict with the formal estimates, it is tempting to dismiss the latter and rely on industry gossip, experience, and gut feelings.
