Recent posts by Mona

Be the first to know about top trends in the AI/ML monitoring industry through Mona's blog. Read about our company and product updates.

Posts by Itai Bar Sinai, Co-founder and CPO:

Beyond Backtests: Bridging the Gap Between Simulation and Real-Time Trading


Backtesting is the backbone of quantitative finance, enabling quants to simulate strategies and assess performance in a controlled environment. But as any seasoned quant will tell you, much like a model is only as good as its representation of the real world, a backtest is only as good as its alignment with real-time trading. The leap from simulated strategies to live markets often reveals discrepancies that can erode profits, undermine confidence, and even jeopardize entire strategies.

Why do these gaps between backtesting and real-time trading occur? And more importantly, how can they be addressed? In this blog, we explore the common pitfalls that create these discrepancies, the role of intelligent monitoring in closing the gap, and how Mona helps quants detect and address issues before they impact the bottom line.

How Granularity in Model Monitoring Saves Quants from Costly Mistakes

If you’ve ever been responsible for monitoring models at a quant hedge fund, you know how tricky it can be. One minute everything looks fine; the next, you’re in a full-blown fire drill because of a performance issue you didn’t catch in time. In other cases, you learn way too late that there was something simple you could have changed that would have improved your results significantly, or some specific area that was completely overlooked. But why does this happen? The problem often comes down to granularity—or rather, the lack of it.

Everything You Need to Know About Model Hallucinations

If you’ve worked with LLMs at all, you’ve probably heard the term model hallucinations tossed around. So what does it mean? Is your model ingesting psychedelic substances? Or are you the one hallucinating a model that doesn’t actually exist? Luckily, the term points to a problem that is less serious than it sounds. However, model hallucinations are something that every LLM user will encounter, and they can cause problems for your AI-based systems if not properly dealt with. Read on to learn what model hallucinations are, how you can detect them, and steps you can take to remediate them when they inevitably arise.

Overcome Cultural Shifts from Data Science to Prompt Engineering

The widespread use of large language models such as ChatGPT, LLaMa, and LaMDA has the tech world wondering whether data science and software engineering jobs will at some point be replaced by prompt engineering roles, rendering existing teams obsolete. While the complete obsolescence of data science and software engineering seems unlikely anytime soon, there’s no denying that prompt engineering is becoming an important role in its own right. Prompt engineering blends the skills of data science, such as knowledge of LLMs and their unique quirks, with the creativity of artistic positions. Prompt engineers are tasked with devising prompts for LLMs that elicit a desired response. In doing so, they rely on some techniques used by data scientists, such as A/B testing and data cleaning, yet must also have a finely developed aesthetic sense for what constitutes a “good” LLM response. Furthermore, they need the ability to make iterative tweaks to a prompt in order to nudge a model in the right direction. Integrating prompt engineers into an existing data science and engineering org therefore requires some distinct shifts in culture and mindset. Read on to find out how the prompt engineering role can be integrated into existing teams and how organizations can better make the shift toward a prompt engineering mindset.