Part II Spatial Interdependence


Introduction to Part II

Among the most challenging problems in interdependence theory and modelling are those that arise in determining optimal locations for firms, consumers and public facilities in two-dimensional space. Discontinuities are rife: in some cases the agent should move to a distant point or should not, marginal changes being of no help; distance metrics give rise to complicated extremum conditions which are difficult to solve by closed analytic methods; problems may involve sequences of movements with large numbers of potential destinations at every choice node, all of which must be examined by combinatorial methods; and optimal moves may be complicated by all the uncertainties of oligopolistic rivalry, since location is a major form of non-price competition. Movements through time are generally one-directional, one-dimensional and continuous; movements in space, time's twin dimension, offer none of these simplifications. Small wonder, therefore, that two characteristics are frequent in locational analysis. On the one hand, it has seen pioneering applications of mathematical elegance in economics, including the calculus of variations, the theory of functional equations, spatial differential equations, and real and topological analysis. On the other hand, it must frequently rely upon heuristics and complete enumeration algorithms for solutions.

The five papers in this Part - two of them co-authored with mathematicians, without whose skills it would have been impossible to write them - are concerned with spatial problems in two dimensions. The first four are operational in character, seeking exact or approximate solutions to difficult problems of location or movement in space. The last is a theoretical effort to apply Poisson probability distributions to spatial markets where demands for goods or services are only occasional for individual buyers (e.g. repair or emergency services).
Chapter 9 was co-authored with Harold W. Kuhn and addresses the 'generalized Weber problem', which remained without direct solution techniques for about fifty years. The problem can be stated simply: suppose a firm must supply products to n noncollinear markets with possibly different amounts demanded. Given transport rates per unit of weight and distance, and transport costs proportionate to weight and distance, where is the minimum transport-cost point for the firm to locate? The problem as stated by Weber and attacked by other analysts contained n = 3 points, so that the convex hull of those points was a triangle. Various geometrical solution methods were worked out for this case, but they are not applicable to cases where n > 3; hence, the 'generalized' problem is the one with n > 3.

The generalized problem has a long history of mathematical interest - a fact of which we were unaware when we began this paper. It was made relevant to spatial economists by Alfred Weber in 1909 in its triangular form. The only known methods for its solution were the geometrical techniques referred to above or a physical analogue, the latter using Varignon's frame to find the resultant of forces set in train by suspending weights (proportionate to the relative importance of the markets) attached to strings over pulleys. The algorithm presented here - a type of gradient method - was rediscovered independently by two other researchers, W. Miehle in 1958 and Leon Cooper in 1963. However, upon further search, it had been discussed in one form or another by three other persons much earlier, with results published in journals not readily accessible to economists. Those sources are listed in the paper. This article has had the greatest role in publicizing the algorithm to spatial economists and also contains the most thorough proofs of existence and convergence.

Chapter 10, co-authored with Richard M. Soland, is an extension of the problem solved in Chapter 9, but a simple extension it is not. Suppose within the convex hull of the markets (generalized to 'sinks') one wished to locate m > 1 production points ('sources') in such manner as to allocate sinks to sources and locate sources so as to minimize transport costs.
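Before turning to the multisource case, the single-source iteration of Chapter 9 is compact enough to sketch. The following is a minimal Weiszfeld-type fixed-point iteration for the generalized Weber problem under Euclidean distance; the update rule is the standard one, but the function and variable names are mine, not the paper's, and a full treatment would test the optimality condition when an iterate lands on a market point.

```python
import math

def weiszfeld(points, weights, start, iters=200, eps=1e-12):
    # Fixed-point iteration for the generalized Weber problem:
    # minimize sum_i w_i * ||x - p_i|| over locations x in the plane.
    x, y = start
    for _ in range(iters):
        num_x = num_y = den = 0.0
        for (px, py), w in zip(points, weights):
            d = math.hypot(x - px, y - py)
            if d < eps:
                # Iterate has landed on a market point; a full treatment
                # would test the optimality condition at this vertex.
                return (px, py)
            num_x += w * px / d
            num_y += w * py / d
            den += w / d
        x, y = num_x / den, num_y / den
    return (x, y)
```

Each step moves the trial location to a weighted average of the markets, each market weighted by demand over current distance; Chapter 9 supplies the existence and convergence proofs the sketch glosses over.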
The distinctive feature of the 'multisource Weber problem', therefore, as compared with the 'single source' problem of Chapter 9, is the number of sources, which is specified parametrically. The multisource problem is vastly more complicated. Where, in the single source problem, the objective function was strictly convex over the interior of the convex hull and piecewise linear over the edges, hence everywhere convex, the multisource objective function meets none of the criteria for well-behaved functions in minimization problems. Where, in the single source problem, the iterative solutions rolled downhill speedily to the optimum from arbitrary starting points, in the new problem they are likely to be captured by local minima.
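The flavour of heuristics for the multisource problem, and of their capture by local minima, can be conveyed by a Cooper-style alternating scheme: allocate each sink to its nearest source, relocate each source at the Weber point of its allocation, and repeat. This is only in the spirit of the chapter's heuristic; the Euclidean metric, names and structure here are my own illustration, not the chapter's code.

```python
import math

def weber_point(pts, wts, iters=100):
    # Weiszfeld fixed-point iteration: locate one source at the
    # transport-cost-minimizing point for the sinks in pts.
    x = sum(p[0] for p in pts) / len(pts)
    y = sum(p[1] for p in pts) / len(pts)
    for _ in range(iters):
        nx = ny = den = 0.0
        for (px, py), w in zip(pts, wts):
            d = math.hypot(x - px, y - py) or 1e-12  # guard against d == 0
            nx += w * px / d
            ny += w * py / d
            den += w / d
        x, y = nx / den, ny / den
    return x, y

def locate_allocate(sinks, weights, sources, rounds=20):
    # Alternate (1) allocating each sink to its nearest source and
    # (2) relocating each source at the Weber point of its sinks.
    # Stops at a fixed point: a local, not necessarily global, minimum.
    for _ in range(rounds):
        groups = [[] for _ in sources]
        for s, w in zip(sinks, weights):
            k = min(range(len(sources)),
                    key=lambda i: math.hypot(s[0] - sources[i][0],
                                             s[1] - sources[i][1]))
            groups[k].append((s, w))
        new = [weber_point([s for s, _ in g], [w for _, w in g]) if g else src
               for g, src in zip(groups, sources)]
        if new == sources:
            break
        sources = new
    return sources
```

Different starting sources generally yield different local minima, which is precisely why Chapter 10 turns to branch-and-bound for the global optimum.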

Since transport costs are linear in weight and distance and no capacity constraints on shipments are placed upon the sources, all of a sink's shipments will originate from a single source in the optimal solution. Hence, assigning shipments from sources to sinks is a 0,1 problem, which suggests a combinatorial method of solution. MULTIWEB, a branch-and-bound algorithm, is designed to obtain the global minimum. Because MULTIWEB requires large amounts of computer storage (at least in terms of computer capacity at the time the paper was written), it may not be feasible to use for large problems. An approximative heuristic method that yields local minima is also presented: CROSSCUT. The latter has been suggested by others, and we have had good success with it. Three different metrics are presented for distance measurement in both algorithms: Euclidean, approximate great circle, and metropolitan. The branch-and-bound algorithm is presented in detail, computational experience is detailed, and the use of MULTIWEB and CROSSCUT in tandem to obtain increasingly better answers for large problems is discussed. With the storage capacity of today's computers, no doubt much larger problems can be solved than was true when the article was written. Even when the optimum solution for large problems was not obtained, MULTIWEB exited with very good approximations to it. It remains, to the best of my knowledge, the only known method of obtaining optimal solutions to the multisource Weber problem.

Chapters 11 and 12 also form a related pair. Both approach a 'dynamic' (a better term would be 'space-sequenced') problem involving the definition of routes by delivery vehicles located at given sources through a set of sinks in a minimum-transport-cost manner.
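The 'generate feasible solutions and keep the best found' idea used for such routing problems can be sketched for a single vehicle. The segment-reversal neighbourhood and all names below are my own illustration, not the chapters' methods.

```python
import random

def route_cost(route, dist):
    # Total distance of a route given a square distance matrix.
    return sum(dist[a][b] for a, b in zip(route, route[1:]))

def improve(route, dist, budget=2000, seed=0):
    # Keep-the-best search: start from a feasible route, repeatedly
    # generate a neighbouring feasible route by reversing a random
    # interior segment (endpoints stay fixed), and adopt it whenever
    # it beats the best solution found so far.  Stop when the trial
    # budget is spent.
    rng = random.Random(seed)
    best, best_cost = route[:], route_cost(route, dist)
    for _ in range(budget):
        i, j = sorted(rng.sample(range(1, len(route) - 1), 2))
        cand = best[:i] + best[i:j + 1][::-1] + best[j + 1:]
        cand_cost = route_cost(cand, dist)
        if cand_cost < best_cost:
            best, best_cost = cand, cand_cost
    return best, best_cost
```

In practice the loop is stopped either by the budget, as here, or when successive improvements become acceptably small.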
The routing problem is one of extreme complexity, and in Chapter 11 I urge that good approximative solutions be sought rather than exact solutions, which were infeasible because of the demands on computer time or, indeed, the effort required to specify the problem for solution. The method I employ consists in finding a feasible solution to the problem, then generating further feasible solutions efficiently and revising the solution at each step of the sequence as improved feasible solutions emerge. The movement from one best feasible solution found to another is continued until a given amount of computer time is expended or until the successive improvements between such solutions become acceptably small. An approximate solution to a 7-vehicle, 13-source problem - which in terms of complexity is very large, as will be seen in Chapter 12 - is found by use of the method. Although not known at the time Chapter 11 was written, the approximate solution is probably the optimum, as revealed by the different techniques of Chapter 12. In the course of the paper I suggested that branch-and-bound methods be used to generate feasible solutions more efficiently.

I wrote Chapter 12 four years later, by which time I had become more knowledgeable about the analytical power of branch-and-bound combinatorial techniques, largely through my work with Richard M. Soland in Chapter 10. This resulted in the design of DYNCOM for the exact solution of such space-sequenced problems as those dealt with in Chapter 11, at least for moderately sized problems, and good approximate solutions for large problems. The paper also served to introduce branch-and-bound programming to economists and regional scientists, as at that time (and indeed presently) it was not a well-known technique in either field. The problems of Chapter 11 are revisited with DYNCOM, and new problems are devised to demonstrate its effectiveness and limitations. DYNCOM also includes options not available in the earlier method, such as recycling vehicles to their original starting sources after completion of their travels (although no conceptual problem prevents inclusion of this requirement in the earlier method) and the opportunity to use the three different metrics mentioned in the introduction to Chapter 10, or simply to read in the matrix of distances between sources and sinks as well as inter-sink distances. A series of five problems is addressed with DYNCOM, including the 7-source, 13-sink problem addressed in Chapter 11. Node storage is a limiting factor that binds rapidly as problem size increases; after 60 minutes of IBM/91 computer time (a near-paleontological time span) and extensive node storage, the algorithm had found no better solution than that arrived at by the heuristic of Chapter 11.
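The spirit of such a branch-and-bound search - a relaxation-style lower bound, an incumbent that improves as better feasible solutions appear, and pruning of branches that cannot beat it - can be shown on a toy single-vehicle routing problem. Everything here (the bound, the names, the tolerance handling) is my own miniature, not DYNCOM itself.

```python
import math

def branch_and_bound_route(dist, start, tol=0.0):
    # Depth-first branch and bound over visiting orders.  `best` holds
    # the incumbent: the cheapest complete route found so far.  A
    # partial route is pruned when its cost plus a simple lower bound
    # (each unvisited node must still be entered by at least its
    # cheapest incident edge) cannot undercut the incumbent by more
    # than the fractional tolerance `tol`.
    n = len(dist)
    best = {'route': None, 'cost': math.inf}
    cheapest = [min(dist[i][j] for j in range(n) if j != i) for i in range(n)]

    def expand(route, cost):
        if len(route) == n:
            if cost < best['cost']:
                best['route'], best['cost'] = route[:], cost
            return
        bound = cost + sum(cheapest[j] for j in range(n) if j not in route)
        if bound >= best['cost'] * (1.0 - tol):
            return  # this branch cannot improve enough: prune it
        for j in range(n):
            if j not in route:
                expand(route + [j], cost + dist[route[-1]][j])

    expand([start], 0)
    return best['route'], best['cost']
```

With tol = 0 the search is exact after finitely many operations; setting tol > 0 accepts any solution provably within that fraction of the optimum, trading accuracy for time and node storage.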
Hence the conjecture that the Chapter 11 heuristic solution is in fact optimal. This experience highlights an important observation first made by Leon Cooper in dealing with such spatial problems: the objective function tends to be shallow-bottomed, so that the algorithm rolls down to a good solution quickly but then spends huge amounts of time finding feasible solutions that provide very little improvement. DYNCOM offers a more systematic manner of searching for feasible solutions than the earlier method and, of course, has the advantage of providing a proof that the optimal solution will be reached after a finite (possibly very large) number of operations. It also prints out successively better feasible solutions as it moves down an inverted tree graph. Also, the ability to specify a percentage tolerance factor - the acceptable distance from the optimal solution - can reduce computation time, and with it the necessary node storage, substantially. Present-day computer speed and memory capacity should provide much-improved performance of DYNCOM in solving large problems. Further, in conformance with my stubborn views about the need for economists to strive for the operational and practical, and to abandon an excessive concern for the 'optimal' when the 'very good' is perfectly acceptable, I urge that such methods be used. This is especially advisable in light of the probable accuracy of the data they analyze. Thus in both chapters I have stressed the explicit complexities that bedevil such spatial problems, to demonstrate the frequent futility and needlessness of searching for the 'very best'.

Chapter 13 addresses a spatial problem of a quite different sort. It analyzes a demand regime in two-dimensional space which follows a Poisson distribution. The product or service sold in this environment is demanded when a consumer's normal activity is disturbed at long intervals and restoration of the status quo requires immediate satisfaction of that demand, although some price elasticity exists. The demand is, therefore, for any single consumer a 'rare event', falling under the Poisson rule in time and, for a group of consumers spaced over a market area, under the same rule over space for any fixed time period. Demands for durable household goods repair or for emergency health services are good examples, as noted earlier in this introduction. The analysis seeks the relationship between firms' spatial behaviour and their price-output decisions.
Monopolistic and competitive market structures are examined, and both fixed and variable market areas are studied, all firms being assumed to have rising costs. In such an environment the monopolist's price will rise with the size of market area, but by far less than proportionately to the increase in market size. That is, a firm which increases its market area will raise price to reduce the density of demand. When transportation costs rise, the monopolist will lower price to increase demand density in the area, although the price reduction is proportionately smaller than the reduction in service area. Hence the analysis leads us to expect that a monopolist selling such services, in the face of a large price increase in petrol, would reduce the radius of its market area and lower the price of the service simultaneously in order to increase demand intensity.
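The 'rare event' structure is easy to make concrete: with demands forming a spatial Poisson process, the number of calls arising from a circular market in a fixed period is Poisson with mean equal to density times area. The small sketch below shows only that mechanical relationship; the parameter names are mine, and it ignores the chapter's central feature that demand density itself depends on price.

```python
import math

def poisson_pmf(k, lam):
    # P(N = k) for a Poisson variate with mean lam.
    return math.exp(-lam) * lam ** k / math.factorial(k)

def expected_calls(radius, density):
    # Mean number of calls per period from a circular market area:
    # demands form a spatial Poisson process of `density` calls per
    # unit area per period, so the mean is density * pi * radius**2.
    return density * math.pi * radius ** 2

def prob_at_least(k, radius, density):
    # Probability that at least k calls arise in one period.
    lam = expected_calls(radius, density)
    return 1.0 - sum(poisson_pmf(i, lam) for i in range(k))
```

A firm that enlarges its radius thus raises its mean demand with the square of the radius, while a price change works through the density term; the interplay of the two is the subject of the chapter.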

In competition, as firms enter to service a common fixed market area, each firm's demand density falls initially, but each is led to reduce price to increase density, and by more than density would be for the same market area under monopoly. Interesting results are obtained when only transportation costs are raised in this competitive environment. Market areas increase as a function of the number of firms over an initial domain, but then begin to fall as more firms enter. The analysis can be adapted to situations of this sort in which transportation costs are borne by the customer, e.g. when physicians compete in a fixed market area for patients who pay the costs of transportation from their homes to physicians' offices. A rise in such costs should reduce demand density for a fixed number of physicians and lead them to reduce the prices of their services. An increase in the number of physicians should have the same effect. If this does not occur - as some studies suggest - one is led to conjecture about the existence of rivalrous consonance in an oligopoly structure.

The spatial analysis of Part II serves well, I believe, to evoke the extreme complexity of interdependence in this frequently neglected dimension. Problems whose statements possess a surface simplicity are frequently seen to be incapable of analytical solution and must be solved by iterative algorithms or investigated with heuristics. Other areas of microeconomics may equal spatial economics in these regards, but I know of none that surpasses it. It offers the theorist a range of challenges of elemental realistic importance, and deserves a great deal more attention than it receives.