Editor's note: Michael Wellman is a professor of computer science and engineering at the University of Michigan College of Engineering.
At 1:07 p.m. on April 23, a hijacked AP Twitter account falsely reported an attack on the White House. Seconds later, major U.S. stock indexes started to fall. They were down 1 percent by the time the tweet was publicly identified as bogus three minutes later. And in another three minutes, the markets had recovered to pre-tweet levels.
The quick plunge appears to have been triggered or exacerbated by computer algorithms that automatically trade based on monitoring and analyzing social media streams. Indeed, a growing number of firms and startups claim they can tease emotion, meaning and context out of social media chatter and identify financially relevant information. Augify, SNTMNT, and Lucky Sort (acquired just last month by Twitter) are a few examples. Although we don't know how any particular product handled the fake AP tweet, the incident illustrates a potential pitfall of automated trading on news feeds in real time.
But perhaps more worrisome are other types of trading algorithms that work even faster. Those meriting the label "high-frequency trading," for example, can act on data in milliseconds or less. That's a great deal faster than humans can process even simple bits of information.
High-frequency trading algorithms actually focus on rapid response to detailed events inside financial trading networks, rather than on anything happening in the economy or world affairs. While some algorithms may be hanging out and listening to social media, far more spend their effort gleaning specific information about trade order activity and its transmission through the complex network of brokers and exchanges.
Typical retail investors may be surprised to learn what happens when they enter a trade to buy some stock at their favorite online brokerage. There are dozens of exchanges authorized to execute trades in U.S. equities. It is generally up to the broker to determine how to route a client's order. Before an exchange can execute a trade against an order on its books, it must check whether a better offer is available at some other exchange, as reflected on the "public ticker." This public price is compiled from information continually provided by the exchanges, and it is what the investor sees when looking up the current bid-ask prices for a security. Presumably, investors do not care where their orders go, as long as they get the best available prices.
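To make that best-price check concrete, here is a toy sketch in Python. The exchange names and quotes are invented, and real order routing under rules like Regulation NMS involves far more than this simple comparison.

```python
# A toy sketch of the best-price check described above. Exchange names and
# quotes are hypothetical; real routing involves many more rules and feeds.

# Each exchange publishes its best bid (highest buy) and best ask (lowest sell).
quotes = {
    "EXCHANGE_A": {"bid": 100.01, "ask": 100.05},
    "EXCHANGE_B": {"bid": 100.02, "ask": 100.04},
    "EXCHANGE_C": {"bid": 100.00, "ask": 100.06},
}

def best_venue(quotes, side):
    """Pick the exchange offering the best available price.

    A buy order should go to the lowest ask; a sell order to the highest bid.
    """
    if side == "buy":
        return min(quotes, key=lambda ex: quotes[ex]["ask"])
    return max(quotes, key=lambda ex: quotes[ex]["bid"])

print(best_venue(quotes, "buy"))   # EXCHANGE_B (lowest ask: 100.04)
print(best_venue(quotes, "sell"))  # EXCHANGE_B (highest bid: 100.02)
```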
The catch is that the public ticker is always just a little bit out of date. There's an unavoidable lag, or latency: it takes time for the exchanges to communicate updated price information and for those revisions to be merged and published. The delay may be fairly short, measured in milliseconds. But any disparity in time opens the door for somebody with faster connections and computers to learn the true state of the market before it becomes public.
Trading on price disparities across markets is called arbitrage, and if the latency edge of a high-frequency trader is large enough, the trader can in some cases obtain a risk-free profit. Other high-frequency trading strategies exploit latency advantages using statistical predictions rather than strict arbitrage. Whether the gains are sure or not, such profit opportunities have naturally set off a latency arms race. Trading firms routinely invest in high-speed hardware, dedicated computer lines, proximity to exchange servers, and any other measure that could shave mere microseconds from their latency and gain advantage over other traders.
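Here is a stylized example of that latency edge, with made-up prices and only two exchanges: the fast trader's direct feeds reveal a momentary price discrepancy that the stale public ticker does not yet show.

```python
# A toy illustration (not any firm's actual strategy) of latency arbitrage.
# The fast trader's direct feeds are a few milliseconds ahead of the ticker.

fast_view = {"ask_on_A": 100.03, "bid_on_B": 100.06}      # current quotes
public_ticker = {"best_bid": 100.02, "best_ask": 100.05}  # slightly stale

# Watching only the public ticker, slower traders see nothing unusual:
# best_bid (100.02) is below best_ask (100.05), so no opportunity is visible.

# The fast trader, however, sees that exchange A is now selling below where
# exchange B is buying: a momentarily "crossed" market.
if fast_view["ask_on_A"] < fast_view["bid_on_B"]:
    # Buy on A at 100.03 and simultaneously sell on B at 100.06.
    profit = fast_view["bid_on_B"] - fast_view["ask_on_A"]
    print(f"Risk-free profit: ${profit:.2f} per share")  # $0.03
```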
These practices can affect regular investors and market performance overall. The topic is large and controversial, and the extent of the effects depends on which trading strategies are involved.
In a recent study, University of Michigan doctoral student Elaine Wah and I developed a simple model of latency arbitrage across two exchanges with a delayed public ticker. We found that the presence of high-frequency trading in this model takes profits away from regular investors and, moreover, reduces the overall profit of the system. This doesn't even count the direct costs of the latency arms race: all the resources dedicated to trimming microseconds. Our model can't estimate the extent of the damage, but it does show that latency arbitrage degrades market performance.
Here's why: High-frequency trading aggressively clears trades across exchanges, whereas the market sometimes performs better by letting orders accumulate for a while to better judge which ones should trade.
It might be in the nation's best economic interest to eliminate this latency arms race. One way to do this would be to move to a system where all trades happen at discrete points in time, say once per second, rather than continuously. The interval is short enough that markets can accurately track real-world events, but long enough that shaving off tiny fractions of a second provides no significant advantage.
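As a rough sketch of how such discrete-time clearing could work, here is a simple call auction in Python: orders accumulate during the interval, then everything clears at once at a single uniform price, so a microsecond speed edge buys nothing. The matching and pricing rules shown are one common textbook choice, not necessarily the exact mechanism in our study.

```python
# A minimal call-auction sketch (assumptions mine): orders collected over one
# interval all clear together at a single uniform price.

def clear_batch(buys, sells):
    """Match accumulated orders; return (number of trades, clearing price).

    buys:  bid prices (each buyer pays up to this much)
    sells: ask prices (each seller accepts at least this much)
    """
    buys = sorted(buys, reverse=True)   # most aggressive buyers first
    sells = sorted(sells)               # most aggressive sellers first
    matched = [(b, s) for b, s in zip(buys, sells) if b >= s]
    if not matched:
        return 0, None
    # One uniform price for the whole batch, here the midpoint of the
    # marginal (last) matched pair.
    last_bid, last_ask = matched[-1]
    return len(matched), (last_bid + last_ask) / 2

# Orders that arrived during one interval, in whatever sequence:
n, price = clear_batch(buys=[100.07, 100.05, 100.02],
                       sells=[100.01, 100.04, 100.06])
print(n, price)  # 2 trades at a uniform price of 100.045
```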
Our study showed that this market organization performs far better than continuous trading. By placing a lower limit on the time differences that matter for trading, we could create markets that keep up with even the fastest-moving world events, but no longer drive us to trade faster than reality itself.