Recent posts by Mona

Be the first to know about top trends in the AI/ML monitoring industry through Mona's blog. Read about our company and product updates.

Posts by Itai Bar Sinai, Co-founder and CPO:

Is your LLM application ready for the public?

Large language models (LLMs) are becoming the bread and butter of modern NLP applications and have, in many ways, replaced a variety of more specialized tools such as named entity recognition models, question-answering models, and text classifiers. As such, it’s difficult to imagine an NLP product that doesn’t use an LLM in at least some fashion. While LLMs bring a host of benefits, such as increased personalization and creative dialogue generation, it’s important to understand their pitfalls and how to address them when integrating these models into a software product that serves end users. As it turns out, monitoring is well-positioned to address many of these challenges and is an essential part of the toolbox for any business working with LLMs.

The challenges of specificity in monitoring AI

Monitoring is often billed by SaaS companies as a general solution that can be commoditized and distributed en masse to any end user. At Mona, our experience has been far different. Working with AI and ML customers across a variety of industries, and with all different types of data, we have come to understand that specificity is at the core of competent monitoring. Business leaders inherently understand this. One of the most common concerns voiced by potential customers is that a general monitoring platform simply won’t work for their specific use case. This is what often spurs organizations to attempt to build monitoring solutions on their own, an undertaking they usually later regret. Yet their concerns are valid, as monitoring is quite sensitive to the intricacies of specific use cases. True monitoring goes far beyond generic concepts such as “drift detection”; the real challenge lies in developing a monitoring plan that fits an organization’s specific use cases, environment, and goals. Here are just a few of our experiences bringing monitoring down to the level of the highly specific for our customers.

Best practices for setting up monitoring operations for your AI team

In recent years, the term MLOps has become a buzzword in the world of AI, often discussed in the context of tools and technology. However, while much attention is given to the technical aspects of MLOps, the importance of the operations themselves is often overlooked. There is little discussion of the operations needed for machine learning (ML) in production, and for monitoring specifically. Things like accountability for AI performance, timely alerts for relevant stakeholders, and the establishment of the processes needed to resolve issues are often set aside in favor of discussions about specific tools and tech stacks.

Introducing automated exploratory data analysis powered by Mona

In today’s data-driven world, organizations increasingly rely on data to inform their decision-making, creating a need for efficient and accurate data analysis tools. Over the last two decades, a plethora of analytics, data science, and BI tools have been created to meet this need. However, one basic problem in data analysis has remained unsolved: automating multivariate exploratory analysis in a way that is clear and free of noise.