
How Granularity in Model Monitoring Saves Quants from Costly Mistakes

If you’ve ever been responsible for monitoring models at a quant hedge fund, you know how tricky it can be. One minute everything looks fine; the next, you’re in a full-blown fire drill over a performance issue you didn’t catch in time. In other cases, you learn far too late that a simple change would have improved your results significantly, or that some specific area was being overlooked entirely. Why does this happen? The problem often comes down to granularity, or rather the lack of it.

Automated Trading Systems Operate in a Highly Complex Environment

Automated trading systems are complex, and the model is only one piece. There are feature-building pipelines, third-party data sources that need cleaning and transformation before they can be used, downstream processes from which actual feedback and results are derived, and many more wheels in motion. Each of these parts may have different versions in operation and in development, different production environments, and so on.

But the system isn’t just complex; the business environment in which it operates can be even more complicated. The system might trade many different assets spanning various exchanges in different geographies, assets tied to industries going through disruptions, different asset types, different time horizons for which the system is predicting prices, and many more business dimensions that could matter.

Where Do the Problems Start?

The most common mistake when considering how to monitor these systems is that people don’t appreciate how subtly an issue can start, and how gains are to be found in pockets of the data. In the vast majority of cases, it will not be some results metric that suddenly crashes in a highly visible way. What will actually happen is that some change in the world or in your system will cause underperformance in some specific segment of the data.

Some examples: a delay in data coming from a specific exchange causes underperformance only for assets traded on that exchange. A disruption in a specific industry causes your models to go stale, gradually but quickly enough to matter, bringing worse results for assets in that industry, perhaps at first only for longer time horizons. An economic shift in a specific country can do the same for your models for assets tied to that country. The list goes on.

Oftentimes these issues remain hidden because we look at results broadly and on average. Healthy segments of the data, together with the natural volatility of the market and of the system’s performance, make it very hard to tell actual issues from noise.
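To make the dilution concrete, here is a toy calculation (every number below is invented for illustration): a delayed feed drags down the hit rate on one small exchange, yet the aggregate barely moves.

```python
# Toy illustration: a real issue in a small segment barely dents the aggregate.
# Segment name: (share of trades, hit rate); all numbers are made up.
segments = {
    "NYSE": (0.45, 0.54),
    "LSE":  (0.35, 0.54),
    "TSE":  (0.15, 0.54),
    "BVMF": (0.05, 0.42),  # a delayed feed hurts only this exchange
}

overall = sum(share * hit for share, hit in segments.values())
print(f"Overall hit rate: {overall:.3f}")  # 0.534 vs. a healthy 0.540
```

The affected exchange lost 12 points of hit rate, yet the overall number dropped by only 0.6 points, well within day-to-day noise.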

Granular Monitoring to the Rescue?

The way to resolve this challenge, in which problems can exist in many places yet remain hidden, is to get granular across all of these technical and business dimensions: check your performance for every asset class, industry, time horizon, exchange, and so on. And indeed, that’s what many teams try.
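As a minimal sketch of what such a naive granular check might look like, here is one way to slice recent results by each dimension and flag underperforming segments. The dimensions, column names, and thresholds below are assumptions for illustration, not a real schema:

```python
import pandas as pd

# A sketch of the naive granular check: slice recent results by every business
# dimension and flag segments that underperform their own baseline.
DIMENSIONS = ["exchange", "industry", "asset_class", "horizon"]
MIN_TRADES = 200   # skip segments too small to judge
THRESHOLD = 0.9    # alert if a segment earns < 90% of its baseline

def flag_underperforming_segments(recent: pd.DataFrame,
                                  baseline: pd.DataFrame) -> list[tuple]:
    alerts = []
    for dim in DIMENSIONS:
        recent_perf = recent.groupby(dim)["pnl"].agg(["mean", "count"])
        baseline_perf = baseline.groupby(dim)["pnl"].mean()
        for segment, row in recent_perf.iterrows():
            base = baseline_perf.get(segment)
            # Only compare segments with enough trades and a positive baseline.
            if base is None or base <= 0 or row["count"] < MIN_TRADES:
                continue
            if row["mean"] < THRESHOLD * base:
                alerts.append((dim, segment, row["mean"], base))
    return alerts
```

In practice you would also want to check intersections of dimensions (say, one industry on one exchange), which is exactly where the number of segments, and of potential alerts, explodes.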

But once they try it out, they learn another difficult truth—things just get way too noisy to handle. Many think that this is just the nature of the beast—there's no good way to automatically test all these different segments across all these different dimensions without drowning in noise. There is a way.

One real issue in the system can manifest itself in many ways, sometimes dozens or hundreds. Say your model is indeed gradually going stale for a specific industry. That could affect many different segments of your data: countries heavily involved in that industry, specific exchanges where assets from that industry are traded, and certain asset classes more than others.

So this one underlying issue may cause dozens of alerts in a “naive” system that just tracks performance “granularly” across all these dimensions.

Avoiding Alert Fatigue: Get Alerted on Issues and Not Symptoms

Anyone with experience in monitoring knows its true nemesis is alert fatigue, caused by a poor signal-to-noise ratio in alerts; the other big enemy is manual work.

To have a healthy monitoring operation you need an automated process, but you also have to make sure it won’t spit out alerts at an unsustainable pace.

From what we learned above, what’s needed is an automated system that finds for you the specific segments showing poor performance anywhere in the system, but that is smart enough to detect when several (sometimes many) alerts stem from the same underlying issue, and alerts you once per issue instead of once per symptom.
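How might issue-level grouping work? One simple approach, sketched below under the assumption that each alert carries the set of trade ids in its affected segment, is to merge alerts whose trade sets overlap heavily, since heavy overlap suggests a shared root cause. This is a naive illustration, not how any particular platform implements it:

```python
# Naive symptom-to-issue grouping: alerts whose affected trades overlap heavily
# are assumed to share a root cause and are reported as a single issue.
def jaccard(a: set, b: set) -> float:
    union = a | b
    return len(a & b) / len(union) if union else 0.0

def group_alerts_into_issues(alerts: dict[str, set],
                             min_overlap: float = 0.5) -> list[list[str]]:
    """alerts maps an alert name to the set of trade ids it covers."""
    issues: list[list[str]] = []   # groups of alert names
    covered: list[set] = []        # trades covered by each group so far
    # Start from the broadest alerts so they seed the groups.
    for name, trades in sorted(alerts.items(), key=lambda kv: -len(kv[1])):
        for i, seen in enumerate(covered):
            if jaccard(trades, seen) >= min_overlap:
                issues[i].append(name)
                covered[i] = seen | trades
                break
        else:  # no existing group matches: this alert starts a new issue
            issues.append([name])
            covered.append(set(trades))
    return issues
```

With this kind of grouping, the stale-industry example above would surface as one issue listing its country, exchange, and asset-class symptoms, rather than dozens of independent alerts.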

Ready to Catch Issues Early Without the Noise? Let’s Talk.

If you’re ready to stop playing catch-up with model performance issues, let’s talk about how granular monitoring can work for you. At Mona, we’ve built a platform that provides you with the granular insights you need while ensuring that you’re not overwhelmed by false alerts or unnecessary noise. The result is a clearer, more actionable picture of your model’s performance—and better decision-making for your quant team.