Recent posts by Mona

Be the first to know about top trends within the AI / ML monitoring industry through Mona's blog. Read about our company and product updates.

The Need for Specialized Monitoring in Quantitative Trading Models

In the world of automated trading, quantitative (quant) models are at the core of decision-making. These models analyze vast datasets to execute trades at speeds and volumes that far exceed human capability. However, ensuring that these models consistently perform well requires effective monitoring. Traditional approaches to performance management, which focus on broad financial metrics, IT infrastructure, and standard machine learning (ML) monitoring, are often insufficient. Specialized, deep monitoring is necessary to truly understand how these models behave and to maintain their effectiveness over time.

Case Study: Best Practices for Monitoring GPT-Based Applications
 
This is a guest post by Hyro - a Mona customer. 

What we learned at Hyro about our production GPT usage after using Mona, a free GPT monitoring platform

At Hyro, we’re building the world’s best conversational AI platform, one that enables businesses to handle extremely high call volumes, provide end-to-end resolution without a human involved, deal with staff shortages in the call center, and mine analytical insights from conversational data, all at the push of a button. We’re bringing automation to customer support at a scale that’s never been seen before, and that brings with it a truly unique set of challenges.

We recently partnered with Mona, an AI monitoring company, and used their free GPT monitoring platform to better understand our integration of OpenAI’s GPT into our own services. Because Hyro operates in highly regulated spaces, including the healthcare industry, it is essential that we ensure control, explainability, and compliance in all our product deployments. We can’t risk LLM hallucinations, privacy leaks, or other GPT failure modes that could compromise the integrity of our applications. We also needed a way to monitor token usage and the latency of the OpenAI service in order to keep costs down and deliver the best possible experience to our customers.
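
To make the token-usage and latency measurement concrete, here is a minimal sketch of what capturing those two metrics around a single OpenAI chat call can look like. The `monitored_chat` helper and the model name are illustrative assumptions, not part of Hyro's or Mona's actual code, and forwarding the metrics to a monitoring platform is only indicated in a comment.

```python
import time
from openai import OpenAI  # assumes the openai Python SDK (v1.x) is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def monitored_chat(messages, model="gpt-4o-mini"):
    """Call the chat completions API and capture latency and token usage."""
    start = time.perf_counter()
    response = client.chat.completions.create(model=model, messages=messages)
    latency_sec = time.perf_counter() - start

    # Token counts reported by the API for this specific call
    usage = response.usage
    metrics = {
        "model": model,
        "latency_sec": latency_sec,
        "prompt_tokens": usage.prompt_tokens,
        "completion_tokens": usage.completion_tokens,
        "total_tokens": usage.total_tokens,
    }
    # In production these metrics would be exported to a monitoring
    # platform (e.g. Mona); here we simply return them with the answer.
    return response.choices[0].message.content, metrics


if __name__ == "__main__":
    answer, metrics = monitored_chat(
        [{"role": "user", "content": "Summarize our call-center policy in one sentence."}]
    )
    print(answer)
    print(metrics)
```

Tracking these per-call numbers over time is what makes it possible to spot cost spikes and latency regressions before they affect customers.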

Everything You Need to Know About Model Hallucinations

If you’ve worked with LLMs at all, you’ve probably heard the term model hallucinations tossed around. So what does it mean? Is your model ingesting psychedelic substances? Or are you the one hallucinating a model that doesn’t actually exist? Luckily, the term points to a problem that is less serious than it sounds. However, model hallucinations are something every LLM user will encounter, and they can cause problems for your AI-based systems if not properly dealt with. Read on to learn what model hallucinations are, how you can detect them, and what steps you can take to remediate them when they inevitably arise.