From Chaos to Confidence: How We Dropped Manual Regression Testing and Boosted Velocity at Cambri
Three years of steady investment in test automation made our manual regression tests obsolete and our releases safer than ever. When the ROI on manual regression turned negative, we knew it was time to let it go.
When I started at Cambri, the team’s development velocity was dropping, production issues were piling up, and firefighting had become part of our daily routine. Fast forward three years — we went from fragile releases and almost no regression testing to fully automated quality checks, built-in testing from the start, and a high-velocity development process.
This is the story of how we made that transformation.
The Starting Point: Rapid Growth, Slowing Momentum
I joined Cambri in April 2021. By the summer, it was clear that the combination of rapidly growing technical debt and the absence of automated testing had brought development to a near halt. Critical issues were being reported almost daily, forcing the team to drop everything to stabilize production. The constant firefighting drained focus and morale, leaving little room for meaningful progress or innovation.
Stabilizing the product and rebuilding deployment confidence became our first priority.
Step One: Investing in Test Automation
Our starting point for test coverage was grim. Backend coverage was below 1%, the frontend had none, and our only safety net was a handful of Cypress end-to-end tests covering a few happy paths.
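For readers who haven't seen one, a happy-path end-to-end test in Cypress looks roughly like the sketch below. The route, selectors, and credentials are invented for illustration; they are not from our actual suite.

```typescript
// login.cy.ts - a hypothetical happy-path test in the style of our early suite.
// The route and data-testid selectors are made up for this example.
describe('login happy path', () => {
  it('lets a known user sign in and reach the dashboard', () => {
    cy.visit('/login');
    cy.get('[data-testid="email"]').type('user@example.com');
    cy.get('[data-testid="password"]').type('a-valid-password');
    cy.get('[data-testid="submit"]').click();
    // Verifies only the sunny-day flow; error states and edge cases go untested.
    cy.contains('Dashboard');
  });
});
```

Tests like these confirm that the main flow works end to end, but they say nothing about edge cases or failure modes, which is why they made such a thin safety net.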
We aimed to release confidently every two weeks, so we set out to strengthen our testing foundation. During the summer of 2021, we introduced coverage metrics — total coverage and delta (new code coverage per PR) — for both backend and frontend.
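As a concrete illustration, a total-coverage gate can be as simple as a threshold in the test runner's configuration. The sketch below assumes a Jest-based setup; the numbers and options are illustrative, not our actual configuration.

```typescript
// jest.config.ts - a minimal coverage gate, assuming Jest. Values are examples.
import type { Config } from 'jest';

const config: Config = {
  collectCoverage: true,
  coverageReporters: ['text-summary', 'lcov'],
  // Fail CI if total coverage drops below the current baseline.
  // Ratchet these numbers upward as the suite grows.
  coverageThreshold: {
    global: {
      statements: 30,
      branches: 25,
      functions: 30,
      lines: 30,
    },
  },
};

export default config;
```

The per-PR delta metric needs an extra step on top of this, typically comparing the coverage of the changed lines against the base branch in CI; the exact tooling varies, so I'll leave that part abstract.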
Developers began writing more tests. In the beginning, most frontend tests were snapshots — quick to produce but shallow in value. On the backend, we introduced integration tests that validated code against live environments, including real database interactions. This was a significant improvement. It allowed us to test end-to-end functionality without extensive mocking, which was critical given our limited coverage.
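To show what I mean, here is a minimal sketch of such an integration test, assuming a Node/TypeScript backend with Jest and a Postgres staging database. The table, schema, and environment variable are hypothetical.

```typescript
// users.integration.test.ts - illustrative only; the real backend stack and
// schema are not shown here. Assumes Jest and the 'pg' client.
import { Pool } from 'pg';

// Connect to the shared staging database, mirroring the live-environment
// setup described above. STAGING_DATABASE_URL is a hypothetical variable.
const pool = new Pool({ connectionString: process.env.STAGING_DATABASE_URL });

afterAll(() => pool.end());

test('creating a user persists it to the real database', async () => {
  const email = `it-${Date.now()}@example.com`; // unique row per run

  // Exercise the real write path: no mocked repository layer.
  await pool.query('INSERT INTO users (email) VALUES ($1)', [email]);

  const { rows } = await pool.query(
    'SELECT email FROM users WHERE email = $1',
    [email],
  );
  expect(rows).toHaveLength(1);

  // Clean up so the shared environment stays usable for the next run.
  await pool.query('DELETE FROM users WHERE email = $1', [email]);
});
```

Because tests like this run against shared state, each one has to create and clean up its own data; that same shared state is also where the schema-change headaches described next came from.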
By the end of 2021, both backend and frontend coverage reached around 30%. The number of critical production issues began to drop. However, using a shared staging environment for integration testing brought new headaches — schema changes would often break everyone’s tests. Still, we were finally moving in the right direction.
My takeaway from this phase is simple: start measuring quality early. The approach doesn’t have to be perfect. What matters is progress and accountability. Improving coverage by even a fraction of a percent at a time builds momentum and discipline. It’s true that coverage alone doesn’t guarantee quality; one can have 100% coverage and still lack a reliable safety net. But building solid test automation is a long-term effort. It takes both technical investment and a cultural shift in how the team thinks about quality.
Step Two: Introducing QA and Manual Regression Testing
In early 2022, we hired our first full-time QA engineer and introduced a manual regression testing plan to run every sprint. This gave the team renewed confidence in releases — we could finally validate stability in a structured, repeatable way.
In parallel, we invested in dockerizing the development environment, ensuring consistent setups across local development and continuous integration. By mid-2022, test coverage reached around 50%, and releases were fully automated through a continuous delivery pipeline. The team could now deliver updates bi-weekly with far fewer surprises.
Within a year, we’d moved from fragile, manual releases to a predictable and stable release rhythm — a milestone many teams would consider a success.
Yet cracks were beginning to show: as the product grew, manual regression testing was turning into a bottleneck.
Step Three: The Regression Bottleneck
By mid-2024, manual regression testing had become a major bottleneck. Our QA team was overloaded, spending half their time running repetitive test scenarios. The product had grown, the engineering team was faster, and manual regression simply couldn’t keep up.
We analyzed the value of those regression runs and found a striking pattern:
They rarely uncovered new or critical issues.
High-severity bugs were almost always found during exploratory testing instead.
After three years of building a solid automated test suite, manual regression had hit diminishing returns. We were spending significant effort for minimal gain.
The Turning Point: Dropping Manual Regression
It was a bold decision: we chose to eliminate manual regression testing altogether. To manage the risk, we closely monitored production metrics and agreed that if quality dropped, we could always roll back.
I still remember the hesitation in the team. We were going against a best practice that had been drilled into us over years in the industry. The data was clear — the return on investment for manual regression was extremely low — but it still felt uncomfortable to let go of something that had always been part of “how things are done.”
The first few weeks were quiet. We watched production closely, ready to react if needed — but everything kept running smoothly. After two weeks without major issues, then a month, then two, we knew the experiment had worked. Our QA engineers now had time for higher-impact work: exploratory testing, reviewing user stories, and improving test automation. Developers, too, started taking more ownership of quality, embedding testing earlier in their workflows.
For me, this was one of the most rewarding moments at Cambri. In every company I'd worked at before, manual regression testing was treated as a sign of a mature process. Introducing it once felt like progress — but being able to move past it showed that we'd grown up as a team.
Today
Today, the team deploys with confidence several times a week. Some releases deliver small improvements or fixes; others introduce major, high-impact features. Regressions still happen, but now they’re caught early, during exploratory testing or while developers run test cases defined by the QA team.
The cultural transformation played as big a role in getting us there as the technical one. We moved from a mindset of testing after development to one of building quality in from the start. Quality and velocity go hand in hand: without sustainable development practices and quality built into the product, long-term development becomes very challenging. In a world of lightning-fast innovation, losing momentum can be fatal for a young company. Cambri avoided that fate by learning to balance speed with quality.
Lessons Learned
As I was writing this, I couldn’t help wondering — could we have reached this point faster? Maybe. But each stage revealed new challenges and possibilities we couldn’t have foreseen at the start. We became more ambitious as we progressed: in the beginning, our dream was simply to release bi-weekly with a single click. When we finally achieved that, we realized we could aim even higher.
Over this journey, a few key lessons stood out:
Measure progress, not perfection. Coverage metrics helped us track improvement and motivated developers to write more tests, even when the early ones weren’t ideal.
Automate relentlessly — but intelligently. Integration and unit tests gave us the safety net to move faster, but automation only adds value when it truly reduces risk.
Embrace change. Manual regression testing was essential at one stage, but knowing when to evolve beyond it was just as important.
Make quality everyone’s job. When QA shifted from repetitive testing to collaboration and exploration, the whole team leveled up.
Looking back, every step was necessary. Each challenge forced us to build the confidence and discipline that now define how we work.