A few years ago, a company called Knight Capital launched new algorithmic trading software and put it to work. But there was a problem. The software was losing roughly $10 million every minute, and by the time it was shut down, the company had lost $440 million.
A similar event in India during a recent Diwali muhurat trading session (a short trading session on the festival day, which is otherwise a holiday) saw prices in certain derivatives plunge nearly 20%. This, it was later discovered, was due to faulty algorithmic trading software at a broker. All trades on that day were annulled.
In both cases, the problem was a rogue algorithm. And these instances are the visible manifestations of a phenomenon that has been taking hold for a while now: machines have been quietly easing humans out of modern stock markets.
India allowed algorithmic trading only in 2008, but algos control nearly a third of all trades already. In the West, where the phenomenon first surfaced nearly three decades ago, algos were running about 50% of US equity trading volumes by 2012. In the foreign exchange markets, which run 23 hours a day, algorithms account for 80% of the trading volumes.
In simple terms, algorithmic trading is the use of technology to automatically buy or sell securities. Normally, humans do this: they analyze companies, sectors, businesses, prices, and other data, and decide when to buy or sell. But a computer can analyze this data automatically as well, and if you program an algorithm right, it can place orders automatically and, more importantly, quickly.
In the early days, algorithmic trading just did the simplest tasks faster, like buying something on one exchange and selling it on another, capturing the difference as an arbitrage profit. Then the algorithms got a bit more sophisticated, using futures or options to create profitable trades that enter and exit in milliseconds.
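That early arbitrage logic is simple enough to sketch in a few lines. Here is a toy Python version (the venue names, prices, and cost figure are made up for illustration, not real market data): trade only when the cross-exchange price gap more than covers transaction costs.

```python
def arbitrage_signal(price_nse: float, price_bse: float, cost: float):
    """Toy cross-exchange arbitrage check (illustrative only):
    buy on the cheaper venue and sell on the dearer one, but only
    if the price gap more than covers transaction costs."""
    gap = price_bse - price_nse
    if gap > cost:
        return ("BUY_NSE", "SELL_BSE", gap - cost)
    if -gap > cost:
        return ("BUY_BSE", "SELL_NSE", -gap - cost)
    return None  # spread too small to be worth trading

# A stock at 100.00 on one venue and 100.30 on the other, with 0.10
# in costs, leaves roughly 0.20 per share after costs.
print(arbitrage_signal(100.00, 100.30, 0.10))
```

The real race, of course, is not the arithmetic but being fast enough that the gap still exists when your orders arrive.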
Some algorithms in the pipeline are even more complex, using deep concepts from physics and the insights of Ph.D. holders to build programs that profit from analyzing news, announcements, tweets, or price movements around the world, often with the help of artificial intelligence and machine learning. Renaissance Technologies, a very large algorithm-based hedge fund, earned its founder, Jim Simons, $1.6 billion in profits and a place as the highest-paid hedge fund manager in 2019.
In India, algorithmic trading still accounts for less than 50% of volumes, and firms are relatively small in size. A significant amount of algo-trading volume is pure arbitrage (trading between the National Stock Exchange, or NSE, and the BSE, for instance).
But complex algos will, at some point, take over the Indian stock market. What happens when the math goes wrong? Or when the robotic nature wreaks havoc? Besides, there is the eternal question of fairness. Is it fair for a few well-heeled market participants to secure undue advantages with the help of technology? And how much of the task of price discovery, which is the basic function of a market, can be outsourced to computers?
The “hi-tech” world loves to invest in indices. Just buy the stocks in the Nifty, or the US S&P 500, and you’ll do well, they say. This has been proven right for a long time, mostly because the fees for following an index are much lower. No intelligence is needed: just buy the same stocks, with the same weightage. So you don’t need a highly paid fund manager, and the costs are lower. And with lower costs, your actual returns are a little bit better.
The costs are also lower because you can programmatically “track” an index or a model. Want to buy stocks starting with “M” on Mondays? A small algorithm can be built in minutes that will even buy stocks when you’re asleep. Make such a model into an exchange-traded fund (ETF) and, suddenly, everyone can buy into your “M on Mondays” model, a low-cost “smart beta” alternative investment that might just have worked by accident in the last few years.
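As a sketch of how quickly such a rule can be coded, here is a toy Python version of the “M on Mondays” model. The ticker list is invented for illustration; a real system would pull a live universe and route actual orders.

```python
from datetime import date

# Hypothetical universe of tickers (illustrative only)
UNIVERSE = ["MARUTI", "MRF", "TCS", "INFY", "M&M", "HDFCBANK"]

def m_on_mondays(today: date, universe: list) -> list:
    """Return the tickers the toy 'M on Mondays' rule would buy today."""
    if today.weekday() != 0:          # weekday() == 0 means Monday
        return []                     # the rule is idle on other days
    return [t for t in universe if t.startswith("M")]

# Monday, 6 January 2025: the rule picks the three 'M' stocks
print(m_on_mondays(date(2025, 1, 6), UNIVERSE))
```

The point is not that the rule is sensible, but that the barrier to packaging any rule, sensible or not, into an automated product is now trivially low.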
But because of the low costs and apparent success, more money has come into the index and ETF bandwagon. The allure has got so out of hand that the Bank of Japan started to print money and buy its own country’s ETFs. The central bank’s market presence is so big that it is now the dominant shareholder in 23 of the top 100 companies.
A recent report by Morningstar, a global financial services firm, pointed out that in 2019, for the first time, the total funds managed by “passive” strategies overtook traditional actively managed pools.
Even in India, the Employees’ Provident Fund Organisation (EPFO) has been buying into Nifty ETFs to the tune of as much as ₹2,500 crore a month. This has massively distorted the mutual fund landscape. The Nifty ETF from the State Bank of India, which the EPFO buys, is now the largest equity mutual fund in the country, with ₹65,000 crore in assets.
What effectively happens as a result of these massive inflows from big entities is that more money goes into the top 50 stocks, where the top 10 stocks have more than 60% weightage. If you take out the top 10, you might find that the rest of the market has actually fallen this year, versus being up 10% when you include them.
The distortion comes from the ability to blindly follow the index, using technology to replicate the index easily. If index ETF investing becomes the only way people invest, then it won’t matter if a company has good results or bad—it will be bought as long as its weight continues to be the same in the index. Think about what that does to a company’s incentives to report a good set of results.
Reflexivity and insurance
Beyond the skewed incentives, things also go wrong with algorithms every once in a while. In 1987, the popular use of technology in the US was for “portfolio insurance”. If the market fell by more than a certain amount, an algorithm would sell stocks automatically, basically restricting losses to an extent. Or that’s what people thought.
Until a certain “Black Monday” in October 1987, when stocks started to fall heavily. Insurance algorithms then automatically sold shares, taking prices further down. This triggered more insurance algorithm sales until the market closed more than 22% down in a single day.
The problem wasn’t the algorithm. It did exactly what it was supposed to. But the human thinking that technology could save the day was exactly what ruined the day. Too many people blindly relied on technology that didn’t know how to deal with the compounding effect of its own actions.
What works at a limited size can, thus, be a problem at scale. If a few investors ran portfolio insurance through algorithms, it might have worked well. A little bit of selling hardly causes the market to topple. But when nearly everyone is using them, they become a self-fulfilling prophecy. Sell when the market falls, and the market falls when everyone sells.
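The feedback loop can be illustrated with a toy simulation. All parameters here are invented for illustration, not calibrated to 1987: each round the price sits below the trigger, insured portfolios sell, and that selling pushes the price down by an amount that scales with how much money follows the rule.

```python
def insurance_cascade(price: float, trigger: float, insured_share: float,
                      impact: float, rounds: int = 10) -> list:
    """Toy model of a portfolio-insurance feedback loop (illustrative
    parameters only): whenever the price is below the trigger, insured
    money sells, and the selling pushes the price down further."""
    path = [price]
    for _ in range(rounds):
        if price < trigger:
            # selling pressure scales with the share of money on the rule
            price *= 1 - impact * insured_share
        path.append(price)
    return path

# Few insured investors: the fall stays mild. Near-universal adoption:
# the same rule snowballs into a crash.
small = insurance_cascade(price=95.0, trigger=100.0, insured_share=0.05, impact=0.1)
large = insurance_cascade(price=95.0, trigger=100.0, insured_share=0.60, impact=0.1)
print(round(small[-1], 2), round(large[-1], 2))
```

Same rule, same starting price; the only thing that changed is how many people are running it.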
Models are fairly easy to construct. If this happens, do that. The concept works great on a spreadsheet, using back-tested data of the past few years or decades. However, when you’re actually in the game, the game itself will change if you’re a player of some size. This is what George Soros, a veteran market player, calls “Reflexivity”.
Reflexivity is when the very act of observing or measuring something changes it. If you’re influential and say the market is bullish, it makes people buy, and that buying makes it more bullish. Technology amplifies reflexivity, especially when the creators of the technology have no clue about the reflexive nature of markets.
Imagine that you programmatically buy stocks based on your algorithm, and you’re so profitable that you get a lot of money from investors. Now you’re the biggest entity in town, and everyone knows. If your stocks fall, say 30%, and your algorithm wants to sell, it simply cannot because everyone will sell ahead of you. Your mathematical model assumed that the price on the screen is what you’ll get if you buy or sell, but the reality is different because of your size and the impact you make on the markets.
The biggest example here is Long-Term Capital Management (LTCM), a hedge fund that counted Nobel laureates Robert Merton and Myron Scholes among its partners and had more than $140 billion in assets at one point. The fund used technology and algorithms to trade automatically across markets, from bonds to foreign exchange. It became the “big whale”, a term for the largest participant, in the markets it played in.
When the markets turned, first with a weak Asian economy and then with Russia’s 1998 default, LTCM was crushed and its model simply collapsed. As it later turned out, competitors knew exactly how big LTCM was and exited before the hedge fund could, turning a brilliant technology to dust.
The dangerous peek
And then, there’s another side of the technology curtain that people find scary: the angle of privacy, of algorithms analyzing personal data.
In the US, brokers can sell “order flow”. Your buy order at your broker can be handed to a large market participant, which “peeks” at the order and then either fills it itself or lets it pass through to the exchanges. In practice, technology can be used to derive an advantage: if that market participant is faster than you, it can buy from another exchange at a lower price and then sell to you, since your order hasn’t even reached the other exchange yet.
Why is this an abuse? Because without that market participant peeking at your order, you’d have got the shares at the other exchange, at the lower price, a little while later. (You can’t sell order flow in India at all. Yet.)
Order flow is popular because brokers get paid to allow other market participants to see their customers’ orders. That can be so big a payment that a broker can, as a result, give you “free” brokerage—that is, you don’t pay anything to trade, as long as you let your broker show the order flow elsewhere.
Earlier, you’d have to tell a broker what to buy, and if he misused that information, he could be charged with fraud. With a computer looking at this information, you quite likely won’t even know if it has been abused, and even if it has, there’s no one to blame because the machine did it. Order peeking may not technically be abuse, but it crosses a line somewhere.
In India, the NSE co-location scam crossed the line by a fair bit. The stock exchange provided data to brokers in a round-robin mechanism: whoever connected to its trading system first received the data first, then the second broker, and so on.
Whoever connected first got the data first, and with technology, even a split second’s head start was enough. Computers could learn enough from this “early” feed to place orders ahead of everyone else, even within the algorithmic trading universe. Profits were made, and after an investigation, even clawed back from those responsible.
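The mechanism is easy to visualize with a toy sketch (the broker names and tick labels are made up): when ticks are pushed out sequentially in connection order, the first connector sees every tick before everyone else.

```python
def disseminate(ticks: list, brokers: list) -> list:
    """Toy sketch of sequential tick dissemination (illustrative):
    each tick is delivered to brokers in fixed connection order, so
    position 0 always gets the data first."""
    deliveries = []
    for tick in ticks:
        for position, broker in enumerate(brokers):
            deliveries.append((tick, broker, position))
    return deliveries

brokers = ["first_to_connect", "second_broker", "third_broker"]
for delivery in disseminate(["tick-1", "tick-2"], brokers):
    print(delivery)  # the same broker is always at position 0
```

A split-second, repeatable head start on every tick is exactly the kind of edge an algorithm can compound thousands of times a day.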
The issue was so complex that without details from a whistle-blower, it would have been nearly impossible to detect. Even now, there could be many such scams going on, and we won’t know until we know.
The algo wars
While “fintech” is a buzzword nowadays, it is just as easy to abuse technology: to take things to an extreme, to make invalid assumptions, to “fake” disclosures, or even to set off something called predatory algorithms.
Predatory algorithms simply prey on other algorithms. Some will detect that an algorithm is trying to sell a large number of shares in small bits and pieces, and then take advantage by running ahead of it. Some will simply create artificial price signals to distort another algorithm’s design, like a trader in London did by spoofing over 150,000 orders that were quickly canceled. This contributed to a massive “flash” crash in which algorithms lost their footing, earning the spoofing trader around $40 million.
Technology is an opportunity, and in finance, it provides a direct link to profit as well. You can use it for good: to get more people in, to reduce costs, and to save time. But you can clearly use it for evil as well, and if you wear a dark suit, you might just get away with it.