The technological transformation of financial markets began way back in the 1970s. The first efforts focused on streamlining market access, facilitating orders with routing and matching programs. Algorithmic trading began to take off in the 1980s, and then, in the 1990s, came the internet.
When we talk about financial market efficiency, we’re really talking about information and access. If information flows freely and people can act on it via a relatively frictionless trading platform, then the price of goods, stocks, commodities, etc. is a meaningful reflection of what’s known about the world. The internet fundamentally transformed both information flows and access. News was incorporated into the market faster than ever before. Anyone with a modem could trade. Technology eliminated the gatekeepers: human order-routers (brokers) and human matching engines (known as ‘specialists’ in finance parlance) were no longer needed. The transition from “pits to bits” led to exchange consolidation; the storied NYSE acquired an electronic upstart to remain competitive.
Facilitation turned into automation, and now computers monitor the market and decide what to trade. They route orders, globally, with no need for human involvement beyond initial configuration and occasional check-ins. News breaks everywhere, all at once and in machine-readable formats, and vast quantities of price and tick data are instantly accessible. The result is that spreads are tighter, and prices are consistent even across exchanges and geographical boundaries.
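That cross-venue consistency is largely the work of arbitrage algorithms. Here is a minimal, illustrative Python sketch of the underlying check (the venue names and quotes are invented for this example): whenever one venue's best bid exceeds another's best ask, buying on the cheap venue and selling on the rich one locks in a profit, and that very trade pushes the two prices back together.

```python
# Illustrative sketch of the arbitrage check that keeps prices aligned
# across venues. The quotes are invented for this example; a real
# system would read them from live market-data feeds.

quotes = {
    # venue: (best_bid, best_ask) for the same instrument
    "exchange_a": (100.02, 100.04),
    "exchange_b": (100.06, 100.08),
}

for buy_venue, (_, ask) in quotes.items():
    for sell_venue, (bid, _) in quotes.items():
        # A bid on one venue above an ask on another is free money,
        # so algorithms trade the gap away almost instantly.
        if buy_venue != sell_venue and bid > ask:
            profit = bid - ask
            print(f"Buy on {buy_venue} at {ask}, "
                  f"sell on {sell_venue} at {bid}: "
                  f"+{profit:.2f} per share")
```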
Technology transformed financial markets, increasing efficiency and making things better for everyone.
Except when it didn’t.
For decades we’ve known that algorithmic trading can result in things going spectacularly off the rails. Black Monday, in 1987, is perhaps the most famous example: programmatic sell orders triggered other programmatic sell orders, which triggered still more sell orders, leading to a drop of more than 20% in the market. And that happened in the pre-internet era. Since then, we’ve seen unanticipated feedback loops, bad code, and strange algorithmic interactions lead to steep dives or spikes in stock prices. The Knight Capital fiasco is one recent example: a stale test strategy was inadvertently pushed live, sending a flood of erroneous orders into the market and producing thousands of rapid trades and price swings unreflective of the fundamentals of the underlying companies. Crashes (flash crashes, now) send shockwaves through the market globally, impacting all asset types across all exchanges; the entire system is thrown into chaos while people try to sort out what’s going on.
So, while automation has been a net positive for the market, that side effect — fragility — erodes trust in the health of the entire system. Regular people read the news, or look at their E*TRADE account, and begin to feel like financial markets are dangerous or rigged, which makes them both wary and angry. Media and analysts, meanwhile, simplify the story to make a very complex issue more accessible, creating a boogeyman in the process: high-frequency trading (HFT).
The trouble is that “high-frequency trading” is about as precise as “fake news.”
HFT is a catch-all for a collection of strategies that share several traits: extremely rapid order placement, high order volume, and very short holding periods. Some HFT strategies, such as market making and arbitrage, are net beneficial because they increase liquidity and improve price discovery. But others are very harmful. The nefarious ones involve deliberate, brazen market manipulation, carried out by bad actors gaming the system for profit.
One example is quote stuffing, which involves flooding specific instruments (like a particular stock) with thousands and thousands of orders and cancellations at rates that exceed bandwidth capabilities. The goal is to increase latency and cause confusion among other participants in the market. Another example is spoofing: placing bids and offers with the intent to cancel rather than execute. Its advanced form, layering, does this at several pricing tiers to create the illusion of a fuller order book (in other words, faking supply and/or demand). The goal of these strategies is to entice other market participants — including other algorithms — to respond in a way that benefits the person running the manipulation strategy. People are creative. And in the early days of HFT, slimy people could do bad things with relative ease.
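To make the mechanics concrete, here is a toy Python sketch of layering against a simplified order book. Everything here is invented for illustration (the OrderBook class is not a real exchange interface), but it shows the shape of the trick: stack bids you never intend to fill, let the apparent depth mislead other participants, then pull the layers.

```python
from collections import defaultdict

class OrderBook:
    """A toy limit order book: price -> total resting bid size."""
    def __init__(self):
        self.bids = defaultdict(int)

    def place_bid(self, price, size):
        self.bids[price] += size

    def cancel_bid(self, price, size):
        self.bids[price] -= size
        if self.bids[price] <= 0:
            del self.bids[price]

    def apparent_demand(self):
        return sum(self.bids.values())

book = OrderBook()

# Genuine interest: one real bid the manipulator actually wants filled.
book.place_bid(price=99.90, size=100)

# Layering: stack spoof bids at several tiers below the best bid to
# create the illusion of deep buying interest. The intent is to
# cancel, never to execute.
spoof_layers = [(99.85, 500), (99.80, 500), (99.75, 500), (99.70, 500)]
for price, size in spoof_layers:
    book.place_bid(price, size)

print("Apparent demand while spoofing:", book.apparent_demand())  # 2100

# Other participants (human or algorithmic) see a thick book, infer
# buying pressure, and bid the price up. The manipulator trades into
# that move, then pulls the fake layers.
for price, size in spoof_layers:
    book.cancel_bid(price, size)

print("Real demand after cancels:", book.apparent_demand())  # 100
```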
Technology brought us faster information flows and decreased barriers to access. But it also brought us increased fragility. A few bad actors in a gameable system can have a profound negative impact on participant trust, and on overall market resilience. The same thing is now happening with the marketplace of ideas in the era of social networks. (...)
Social networks enable malicious actors to operate at platform scale, because they were designed for fast information flows and virality. Bots and sockpuppets can be used to manipulate conversations, or to create the illusion of a mass groundswell of grassroots activity, with minimal effort. It’s incredibly easy to deploy bots into a hashtag to spread or to disrupt a message — quote stuffing the conversation the way a malicious HFT algorithm quote stuffs the order book of a stock. It’s easy to manipulate ratings or recommendation engines, to create networks of sockpuppets with the goal of subtly shaping opinions, preying on proximity bias and confirmation bias.
This would be a more manageable situation if the content remained on one platform. But the goal of a disinformation campaign is to ensure the greatest audience penetration, and achieving that involves spreading content across all of the popular social exchanges simultaneously. At a systems level, the social web is phenomenally easy to game because the big social platforms all have the same business model: eyes and ads. Since they directly compete with each other for dollars, they have had little incentive to cooperate on big issues. Each platform takes its own approach to troll-bashing and bot detection, with varying degrees of commitment; there’s no cross-platform policing of malicious actors happening at any kind of meaningful level.
In fact, until a very notable event in November 2016, there was no public acknowledgement by Twitter, Facebook, or Google that there even was a problem. Prior to the U.S. presidential election, tech companies managed to move fast and break things in pursuit of user satisfaction and revenue, but then fell back on slippery-slope arguments to explain why it was too difficult to rein in propaganda campaigns, harassment, bots, etc. They chose to pretend that algorithmic manipulation was a nonissue, so that they bore no responsibility for the downstream effects. Technology platforms, the argument went, are simply hosts of the content; they don’t create it. But as malicious actors get more sophisticated, and it becomes increasingly difficult for regular people to determine who or what they’re communicating with, there will be a profound erosion of trust in social networks.
Markets can’t function without trust.
by Renee DiResta, Ribbonfarm