RUMvision v2: marking releases with annotations, pipeline live within two days
RUMvision has just launched v2, and at Elgentos, as a customer from day one, we’ve been looking forward to this release. Not because of a new color scheme or a polished dashboard, but for one specific reason: v2 comes with an API. That opens the door to the kind of automation we standardized years ago with our other monitoring tools.
There’s also an extra reason why we’re following this launch with a bit of pride. Just like us, RUMvision is based in Groningen. Two companies from the north of the Netherlands working on the performance of Dutch and international webshops. Straightforward, no-nonsense engineering. It’s collaborations like these that show you don’t need to look to Amsterdam, Berlin, or San Francisco for world-class tooling.
Why annotations are the first thing we tackled
If you’re running a Magento or Hyvä shop, there’s one thing you want to avoid: not knowing where a performance degradation comes from. Your graph drops on a random Wednesday afternoon. Was it a release? A third-party script that quietly became slower? An A/B test that just went live?
Annotations solve this by placing every deploy as a clear marker in your monitoring timeline. If you see your LCP spike at a specific moment and there’s a release marker at that exact time, you immediately know where to look.
Our other monitoring tools, Tideways and Sentry, have supported this way of working for a while. The Elgentos deployment pipeline already sends an annotation to both tools with every release, including commit hash, environment, and release name. RUMvision was the last missing piece, the one place where we still had to correlate things manually. That ends now.
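Conceptually, such an annotation is little more than a small payload carrying the release metadata mentioned above. A minimal sketch of what our pipeline bundles per deploy; the field names here are illustrative, not RUMvision's actual API schema:

```python
def build_annotation(release: str, commit: str, environment: str) -> dict:
    """Bundle the release metadata the pipeline already sends to
    Tideways and Sentry. Field names are illustrative placeholders,
    not RUMvision's actual API schema."""
    return {
        "title": f"Release {release}",
        "commit_hash": commit,
        "environment": environment,
    }
```

The same payload can then be serialized once and posted to each monitoring tool in turn.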
Pipeline live within two days, rollout in progress
We were prepared. As soon as v2 was released, we added the RUMvision endpoint to the same pipeline step that already calls Tideways and Sentry. It took a small piece of code, one secret in our CI, and a rollout plan we’ve executed many times before.
Within two days, the CI/CD integration was live. Every new release that goes through the pipeline now automatically sends an annotation to RUMvision. Rolling this out to all existing client environments happens gradually, following the normal release cycle of each shop. There are no rushed deployments and no extra tickets for clients; everything simply moves along with the next planned release.
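The CI step itself boils down to one authenticated POST per deploy, with the API token read from the single CI secret mentioned above. A sketch using only the standard library; the endpoint URL, header names, and RUMVISION_TOKEN variable are assumptions for illustration, not RUMvision's real API:

```python
import json
import os
import urllib.request

# Placeholder endpoint: RUMvision's real API path and auth scheme may differ.
ANNOTATION_URL = "https://api.example.com/annotations"

def annotation_request(payload: dict) -> urllib.request.Request:
    """Build the HTTP request the CI step fires on deploy.
    The API token comes from one CI secret (hypothetical name)."""
    return urllib.request.Request(
        ANNOTATION_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ.get('RUMVISION_TOKEN', '')}",
        },
        method="POST",
    )
```

In the pipeline this would be sent with `urllib.request.urlopen(...)` right after the calls to Tideways and Sentry, so one step fans out to all three tools.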
That’s exactly why we value a centrally managed monitoring stack as an agency: build the integration once, and every client benefits automatically.
What you see in practice now
In practice, the impact is immediate. When Core Web Vitals drop, you can instantly see whether a release happened just before. If a degradation is tied to a specific release, rolling back becomes a straightforward decision. And for post-mortems, annotations give us a reliable timeline instead of having to reconstruct events afterward.
What we’re building next
Annotations are just step one. The next step is where things become really interesting. We want to bring all monitoring data together into a single timeline—RUMvision RUM data, Tideways APM traces, Sentry errors, deploy events, and even business KPIs—and have AI analyze it.
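Mechanically, the first half of that vision is just merging per-source event streams into one chronological list before any analysis happens. A sketch under the assumption that each source yields already-sorted (timestamp, source, description) events; the source names here are illustrative:

```python
import heapq
from datetime import datetime

Event = tuple[datetime, str, str]  # (timestamp, source, description)

def merged_timeline(*sources: list[Event]) -> list[Event]:
    """Merge already-sorted event streams from multiple monitoring
    tools into one chronological timeline."""
    return list(heapq.merge(*sources))
```

That merged timeline, not any single dashboard, is what an analysis layer would consume.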
Not to make dashboards look better, but to answer a question that still costs engineers time today: something is wrong with a shop, so what changed, and what is the most likely cause?
An LLM with access to annotations, performance metrics, error rates, and conversion data can generate a strong hypothesis in seconds—something that would normally take an engineer much longer. That’s the point where monitoring shifts from reactive to proactive.
A RUMvision MCP would be a big step forward
To move faster in that direction, an MCP server from RUMvision would be a major step forward. MCP, or Model Context Protocol, introduced by Anthropic and now supported by major AI platforms, would allow AI agents to securely access RUM data, correlate it with other sources, and draw conclusions without every company needing to build its own integration.
RUMvision has the data, and we—and others—have the use cases. An MCP server would be the missing link between those worlds. If that becomes reality, we’d be more than happy to contribute as a launching partner.