Monday, May 9, 2011

Algorithmic complexity in financial markets

A fascinating essay by Donald MacKenzie in the London Review of Books is a good introduction to the software used to conduct certain kinds of automatic trading on today's exchanges. The programs attempt to spot significant leading market indicators like atypical buy or sell orders, or unusual share price fluctuations, and to respond to them before the rest of the market can do so. Of course, would-be buyers and sellers are both using software, so computers essentially are vying with other computers for market advantage on behalf of their human or corporate masters.
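
To make that concrete, here is a minimal, purely illustrative sketch in Python of the kind of check such a program might run. The window size, threshold, and names are my own assumptions for the sake of example; nothing here comes from MacKenzie's essay, and real systems use far richer signals and react in microseconds.

    # Illustrative only: flag orders that look unusually large compared with
    # recent history. Every number and name here is a made-up placeholder.
    from collections import deque
    from statistics import mean, stdev

    class OrderFlowWatcher:
        def __init__(self, window=500, z_threshold=4.0):
            self.recent = deque(maxlen=window)   # rolling window of recent order sizes
            self.z_threshold = z_threshold       # how many standard deviations counts as "atypical"

        def is_atypical(self, size):
            """Return True if this order size is an outlier versus the recent window."""
            atypical = False
            if len(self.recent) >= 30:           # need some history before judging
                mu, sigma = mean(self.recent), stdev(self.recent)
                atypical = sigma > 0 and (size - mu) / sigma > self.z_threshold
            self.recent.append(size)
            return atypical                      # a trading program would now race to respond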

The prospect of making large amounts of money in this way has led, rather naturally, to an arms race: software on both sides attempts to gain advantage either by ever more clever concealment of buying or selling so as not to perturb the current share price unduly, or by ever faster recognition and response.

What the computers are doing is what human traders always have tried to do: take advantage of market inefficiencies to benefit themselves or their clients. Nothing here is illegal: no one is acting from a privileged position, e.g., as an insider with non-public information. However, the interaction between programs can have unforeseen consequences, such as the "flash crash" on 6 May 2010 that caused overall U.S. share prices to fall by some six percent in less than five minutes.

An investigation of the crash concluded that an innocent attempt to carry out a large, but not unprecedentedly large, sale of futures contracts triggered a feedback loop in which trading software, behaving exactly as designed, drove down the price of those futures at alarming speed. Automatic "brakes" (enacted via software, of course) were applied by the exchange on which the futures were traded, temporarily suspending trading. The safeguard was designed to allow human investigation of, and intervention in, such atypical occurrences. The pause worked, in the end, but not before the market tripped over more automated behavior: some of the software was, by design, unable to cease trading entirely.
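
For the flavor of such a "brake," here is a toy sketch of exchange-side stop logic: if the price falls more than a set fraction of its recent peak within a short window, matching is paused for a few seconds. The five-percent drop, sixty-second window, and five-second pause below are illustrative assumptions of mine, not the actual rules of any exchange.

    # Toy circuit breaker: halt trading briefly when the price falls too far,
    # too fast. All thresholds are invented for illustration only.
    import time
    from collections import deque

    class CircuitBreaker:
        def __init__(self, max_drop=0.05, window_s=60.0, pause_s=5.0):
            self.max_drop = max_drop        # e.g. a 5% fall from the recent peak...
            self.window_s = window_s        # ...within the last 60 seconds...
            self.pause_s = pause_s          # ...triggers a 5-second trading pause
            self.history = deque()          # (timestamp, price) pairs
            self.halted_until = 0.0

        def on_trade(self, price, now=None):
            now = time.monotonic() if now is None else now
            if now < self.halted_until:
                return "halted"             # the matching engine stops executing trades
            self.history.append((now, price))
            while self.history and now - self.history[0][0] > self.window_s:
                self.history.popleft()      # discard observations older than the window
            peak = max(p for _, p in self.history)
            if peak > 0 and (peak - price) / peak >= self.max_drop:
                self.halted_until = now + self.pause_s
                return "halted"
            return "trading"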

I'm glossing over details in MacKenzie's account (and MacKenzie's account itself is undoubtedly a gloss on the actual details), so be sure to read it for yourself. The point to take away, though, is that the safeguards built into all this software are limited. The trading pause imposed by the exchange, for instance, was five seconds. That pause was not intended for computers, remember: it was designed to give human beings time to investigate possible computerized misbehavior.
As MacKenzie puts it:
This is a situation that in the terminology of the organisational sociologist Charles Perrow is one of ‘tight coupling’: there is very little ‘slack’, ‘give’ or ‘buffer’, and decisions need to be taken in what is, on any ordinary human scale, a very limited period of time. It takes me five seconds to blow my nose.
To quote one description of Perrow's concept of tight coupling,
Tightly coupled systems are highly centralized and rigid. Output is closely monitored within specified tolerances. Subsystems are interdependent. Change causes massive ramifications throughout the system. Tightly controlled time schedules with little slack are sensitive to delays. Production sequences must be strictly followed. Substitutions are not easily accomplished and equipment breakdowns can bring the entire system to a halt. Safety features must be designed into the system because human intervention is not easily accommodated. Emergency override features may be built-in, but systems design makes on-the-spot, field expedient solutions difficult.
Moreover, "the market" sometimes consists of a set of "trading venues," each with its own set of software controls, the whole coordinated nowhere by anyone. This kind of market undoubtedly is complex beyond the ability of any person to understand. And here we come to the heart of the danger MacKenzie sees:
Systems that are both tightly coupled and highly complex, Perrow argues in Normal Accidents (1984), are inherently dangerous. Crudely put, high complexity in a system means that if something goes wrong it takes time to work out what has happened and to act appropriately. Tight coupling means that one doesn’t have that time. Moreover, he suggests, a tightly coupled system needs centralised management, but a highly complex system can’t be managed effectively in a centralised way because we simply don’t understand it well enough; therefore its organisation must be decentralised. Systems that combine tight coupling with high complexity are an organisational contradiction, Perrow argues: they are ‘a kind of Pushmepullyou out of the Doctor Dolittle stories (a beast with heads at both ends that wanted to go in both directions at once)’.
Like our computers themselves, whose software environments long ago exceeded any human being's ability to understand them in toto, the software-mediated trading environment we have created is a system we simply do not understand. Until or unless humans develop a new science of "accident avoidance" that doesn't require full understanding of complex systems, there will continue to be unforeseen consequences of innocent activities in our markets.

(Thanks to The Browser for the link to MacKenzie's essay.)
