Recent posts by Mona

Be the first to know about top trends within the AI / ML monitoring industry through Mona's blog. Read about our company and product updates.

The challenges of specificity in monitoring AI

Monitoring is often billed by SaaS companies as a general solution that can be commoditized and distributed en masse to any end user. At Mona, our experience has been far different. Working with AI and ML customers across a variety of industries, and with all different types of data, we have come to understand that specificity is at the core of competent monitoring. Business leaders inherently understand this. One of the most common concerns we hear from potential customers is that there’s no way a general monitoring platform will work for their specific use case. This is what often spurs organizations to attempt to build monitoring solutions on their own, an undertaking they usually later regret. Yet their concerns are valid, as monitoring is quite sensitive to the intricacies of specific use cases. True monitoring goes far beyond generic concepts such as “drift detection,” and the real challenge lies in developing a monitoring plan that fits an organization’s specific use cases, environment, and goals. Here are just a few of our experiences in bringing monitoring down to the level of the highly specific for our customers.

GPT models are changing businesses. What's next?

Large language models (e.g., GPT-4) seem poised to revolutionize the business world. It’s only a matter of time before many professions are transformed in some way by AI, as GPT can already generate functional code, review and draft legal documents, give tax advice, and turn hand-sketched diagrams into fully functioning websites. Among the roles most likely to be affected by GPT are those involving sales, marketing, customer support, and media, although it’s almost impossible to imagine a domain that won’t be affected in some way. While certain tasks invariably demand a human touch, it’s likely that the focus of many roles will shift toward these key human endeavors and away from those that can be automated. With all this in mind, it’s pertinent to ask what challenges organizations are likely to encounter as they begin to invest in advanced AI, and which roadblocks developers are likely to run up against as they work to incorporate GPT APIs into software products. While it is still too early to anticipate every hurdle teams using GPT are likely to experience, our understanding of AI and large language models suggests at least a few that will be particularly prominent.

The fundamentals of responsible AI

More than ever before, people around the world are impacted by advancements in AI. AI is becoming ubiquitous; it can be seen in healthcare, retail, finance, government, and practically anywhere else imaginable. We use it to improve our lives in many ways, such as automating our driving, detecting diseases more accurately, improving our understanding of the world, and even creating art. Lately, AI has become even more available and “democratized” with the rise of accessible generative AI such as ChatGPT.

Best practices for setting up monitoring operations for your AI team

In recent years, the term MLOps has become a buzzword in the world of AI, often discussed in the context of tools and technology. However, while much attention is given to the technical aspects of MLOps, the operations side is frequently overlooked. There is little discussion of the operations needed to run machine learning (ML) in production, and monitoring in particular. Things like accountability for AI performance, timely alerts for the relevant stakeholders, and the processes needed to resolve issues are often set aside in favor of discussions about specific tools and tech stacks.