How should we monitor trading bots?
One interesting thing about flash crashes is that they should no longer occur. The rules to protect markets from algorithm-generated disorder are established, extensive and very demanding.
The problem is that these rules don’t seem to be enforced.
Sometimes the consequences are obvious, as they were last month when European markets sold off after a Citigroup trader in London allegedly added an extra zero to an order. The cross-market ripple effects of that session strongly suggest that several firms’ algos were not adjusting for lower-than-usual volumes and were adding to the turbulence. This in turn raises awkward questions about whether the same algos could rock the boat in less obviously stressed conditions.
A general rule of securities law is that market abuse is market abuse whether committed by a human or a machine. What matters is the behavior. An individual or company can expect trouble if they threaten to undermine market integrity, destabilize an order book, send a misleading signal, or commit myriad other vaguely defined offenses. The mechanism is largely irrelevant.
Importantly, an algorithm that misbehaves when pitted against another company’s manipulative or failed trading strategy is also committing market abuse. Going haywire under pressure is no more an alibi for a robot than it is for a human.
For this reason, trading bots should be tested before deployment. Firms need to ensure not only that they will work in all weathers, but also that they will not be fooled by fat-finger mistakes or popular attack strategies such as momentum ignition. The intent is to protect against cascading failures such as the “hot potato” effect that contributed to the flash crash of 2010, when algos failed to recognise a shortage of liquidity as they traded rapidly among themselves.
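To make that concrete, here is a minimal sketch of what one such pre-deployment check might look like: replay a calm tape, inject an oversized erroneous order on thin volume, and flag a bot that sells into the shock. The bot class and the on_market_update interface are invented for illustration; real certification testing is far more extensive than this.

```python
# Illustrative only: a minimal pre-deployment stress test, assuming a
# hypothetical bot exposing on_market_update(price, volume) -> list of orders.
# None of these names come from any real library or from Mifid II itself.
from dataclasses import dataclass

@dataclass
class Order:
    side: str      # "buy" or "sell"
    price: float
    size: int

class NaiveMomentumBot:
    """Stand-in bot that (badly) chases price moves -- the behaviour a test should catch."""
    def __init__(self, max_order_size: int = 100):
        self.last_price = None
        self.max_order_size = max_order_size

    def on_market_update(self, price: float, volume: int) -> list:
        orders = []
        if self.last_price is not None and price < self.last_price * 0.99:
            # Sells into the drop without checking liquidity -- amplifies a fat finger.
            orders.append(Order("sell", price, self.max_order_size))
        self.last_price = price
        return orders

def fat_finger_scenario(bot, base_price=100.0, normal_volume=1_000):
    """Replay a calm tape, then inject an oversized erroneous sell on thin
    volume and measure how much extra selling the bot adds."""
    for _ in range(10):                                  # calm pre-trade tape
        bot.on_market_update(base_price, normal_volume)
    shocked_price, thin_volume = base_price * 0.92, normal_volume // 10
    orders = bot.on_market_update(shocked_price, thin_volume)
    return sum(o.size for o in orders if o.side == "sell")

if __name__ == "__main__":
    amplification = fat_finger_scenario(NaiveMomentumBot())
    # The pass criterion is up to the firm; here we flag any selling into the shock.
    print("PASS" if amplification == 0 else f"FAIL: bot added {amplification} lots of selling")
```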
Mifid II (effective from 2018) applies a very wide Voight–Kampff test. Investment firms using European platforms are required to ensure that any algorithm will not contribute to disorder and will continue to operate effectively “in stressed market conditions”. The burden of policing falls partly on trading venues, which are expected to require members to certify before each deployment or upgrade that bots have been fully tested in “real market conditions”.
But what this means in practice quickly becomes complicated, because for the details you have to delve into Mifid II’s regulatory technical standards (RTS).
RTS 6 sets out the basic self-assessment framework for investment firms to certify that their bots will not create disorderly markets. Its sequel, RTS 7, takes a separate and quite different approach to whether bots will contribute to market disorder. In short, an RTS 7-compliant firm must certify that none of its systems will amplify market convulsions, and must include an explanation of how those tests were performed.
RTS 6 is well understood, but how many trading firms meet the RTS 7 criteria? According to Nick Idelson, technical director of the consulting firm TraderServe, it is likely that fewer than half have tested their algo strategies to the required standard. The scale and complexity of the work suggests that even this estimate may be optimistic.
Mifid II’s definition of an algo excludes automated order routing and captures just about everything else. If there is “limited or no human intervention” required when generating a quote, it is an algo. If predetermined parameters control the price, order size or timing, it is an algo. If there is a post-submission strategy beyond simple execution, it is an algo. Stress testing of these systems must prove that everything will work as expected both individually and in combination.
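For illustration, those tests can be read as a simple checklist, roughly as sketched below. The field names are invented for the example; the point is how little it takes to fall inside the definition once pure order routing is excluded.

```python
# Illustrative only: the algo tests described above, expressed as a checklist.
# Field names are invented for this example and are not regulatory terms.
from dataclasses import dataclass

@dataclass
class TradingSystem:
    human_intervention: str               # "full", "limited" or "none" when quotes are generated
    params_set_price_size_timing: bool    # predetermined parameters control price, size or timing
    post_submission_strategy: bool        # does anything beyond simple execution after submission
    pure_order_routing: bool              # only routes orders, makes no trading decisions

def looks_like_an_algo(system: TradingSystem) -> bool:
    if system.pure_order_routing:
        return False                      # automated order routing alone is carved out
    return (
        system.human_intervention in ("limited", "none")
        or system.params_set_price_size_timing
        or system.post_submission_strategy
    )

# A plain order router escapes; almost anything that decides price, size or timing does not.
print(looks_like_an_algo(TradingSystem("full", True, False, False)))   # True
print(looks_like_an_algo(TradingSystem("full", False, False, True)))   # False
```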
The scope of the regulation is equally broad, applying to all financial instruments defined by Mifid II on any venue that allows or enables algorithmic trading. The “or enables” bit brings into scope venues that do not themselves prohibit automated trading strategies, as well as those without auto-matching trading systems. (See question 31 of ESMA’s 2021 Q&A.) On a strict interpretation of the rules, it is nearly impossible for a trade to meet best-execution obligations without also being defined as automated.
The penalties for non-compliance are significant: up to 15 million euros or 15 per cent of turnover for companies, and up to four years in prison for individuals. The picture is similar internationally, with IOSCO’s market integrity principles providing a framework for cross-border enforcement.
But unlike the United States (where JPMorgan Chase landed a $920 million settlement in 2020 for spoofing precious metals futures) and Hong Kong (where Instinet Pacific and Bank of America were fined for bot management failures), the approach to policing algos in the UK and Europe has been softly-softly. As the FCA noted in its May 2021 Market Watch bulletin:
Our internal monitoring algorithms identified trading by an algorithmic trading company, which raised potential concerns about the impact that the algorithms responsible for executing the company’s various trading strategies were having on the market. As a result of our investigations, the company has adjusted the relevant algorithm and its control framework to prevent the company’s activity from having an undue influence on the market.
One of the hurdles regulators face relates to definitions, as it is difficult to pin down exactly what it means to stress test an algo for its potential to contribute to market disorder.
Is it enough for companies to run historical market data through bots in a sandbox? Or does such an approach risk missing the feedback loops created when a fleet of bots interacts with responsive markets? TraderServe has worked with regulators on best practices using live market simulations, but according to Idelson, it remains impossible for an outsider to know whether a company’s testing approach was comprehensive, cursory or non-existent. For this reason, it would be useful to establish public precedents.
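The gap between the two approaches is easy to show with a toy example. Below, the same deliberately fragile price-chasing bot is run against a fixed historical tape and then against a “market” where its own selling moves the next price it sees. Everything here is invented for the sketch; it implies nothing about any real simulation framework or any firm’s actual testing.

```python
# Illustrative only: why replaying historical data can miss feedback loops.
# All names and numbers are made up for this example.

class ChaserBot:
    """Sells 100 lots whenever the price drops more than 1% -- deliberately fragile."""
    def __init__(self):
        self.last_price = None

    def on_update(self, price: float) -> int:
        sold = 100 if self.last_price and price < self.last_price * 0.99 else 0
        self.last_price = price
        return sold

def run_replay(tape):
    """Historical replay: the tape never reacts to the bot's own selling."""
    bot = ChaserBot()
    for price in tape:
        bot.on_update(price)
    return tape[-1]

def run_responsive(tape, impact_per_lot=0.0005):
    """Toy responsive market: each lot sold pushes the next observed price down."""
    bot, price = ChaserBot(), tape[0]
    for ref in tape:
        price = min(price, ref)
        sold = bot.on_update(price)
        price -= sold * impact_per_lot * price   # the feedback loop a static replay never shows
    return price

tape = [100.0] * 5 + [92.0] * 10                 # a calm stretch, then a shock
print("replay ends at:    ", run_replay(tape))
print("responsive ends at:", round(run_responsive(tape), 2))
```

In the replay the price ends where history says it ended; in the responsive run the bot’s own selling compounds the fall, which is exactly the behaviour a sandbox test on historical data cannot surface.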
Judging by the weaknesses exposed by Citi’s flash crash, the European approach to bot regulation appears to be insufficient. But the far-reaching obligations set out in Mifid II make more proactive forms of policing difficult to sustain. If non-compliance is as widespread as it sounds, the most effective form of enforcement available to regulators may be a good old-fashioned show trial.
Further reading:
AI Trading and the Limits of EU Law Enforcement to Deter Market Manipulation — Computer Law and Security Review (PDF)