Case study · Platform Engineering

Rolling out agentic orchestration across core workflows.

In ninety days we moved five dev-owned workflows onto a flow-based agentic orchestration experience that runs on the same runtime as our nine existing BPMN processes. Same engine, new front door. Here is what shipped and what we measured.

Business unit
Platform Engineering
Scope
5 flow processes
Outcome
4x dev iteration
Deployed
Q1 2026

Accrual has been running orchestration on a BPMN-style designer since 2022. That designer is the right tool for the workflows it was designed for. It was the wrong tool for our developer-owned workflows, where the audience is engineers, not business analysts, and the build-ship cycle is days, not quarters. Rather than pick a second orchestration product, we picked a second authoring experience on the same runtime. This is what that took.

The signal we were responding to

Throughout 2024 and into 2025, our platform team noticed a pattern. Developer teams building automation-adjacent workflows were hand-rolling orchestration logic inside individual bots, rather than using our approved orchestration platform. The reason was never "we do not like the platform". The reason was always "the authoring experience does not match our build cycle".

Business analysts using BPMN typically spend weeks modelling a workflow with stakeholders before shipping it. Developer teams building, say, a fraud triage helper needed to ship a first version in days, iterate weekly, and version-control everything. The BPMN designer was a bad fit for that cadence and a bad fit for the code-adjacent mental model those developers had.

The hand-rolled orchestration inside bots was producing observability gaps, duplicated primitives, and incidents. It was a real problem. It was also a predictable one, because we had built the paved road for a different persona.

The decision

We had three options. Tell developers to use BPMN and deal with it. Let them keep hand-rolling orchestration inside bots and accept the cost. Give them a different authoring experience on the same runtime.

We chose the third. The vendor we had picked for BPMN offered a flow-based authoring experience that shipped artifacts onto the same orchestration engine. It was not a second product with a second runtime. It was a second IDE for the engine we already ran.

We did not commit to a big rollout. We committed to moving five pilot workflows over ninety days, measuring the results, and deciding from there.

The five pilot workflows

Fraud triage helper. A workflow that receives suspected fraud events from the transaction stream, enriches them with context from three internal systems, scores them with a shared model, and routes the ones that need a human into a task queue. Previously hand-rolled inside a single "mega-bot" that had grown to 3,400 lines of orchestration code.

Credit bureau pull coordinator. Coordinates pulls from three credit bureaus, reconciles the responses, and writes the merged view to our credit analysis pipeline. Previously a cron job and a Slack channel.

Document classification pipeline. Ingests loan and onboarding documents, classifies them, routes them to the right downstream processor. Previously a sequence of bots linked by shared state in a database.

Sanctions rescreening flow. Runs nightly rescreening of our entire commercial book against updated watchlists and flags cases for review. Previously a monolithic bot.

Agent co-pilot pipeline. Powers the relationship banker co-pilot by coordinating retrieval, reranking, and generation calls. New build, not a migration.
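To make the pilot concrete, here is a minimal sketch of what a flow artifact for the fraud triage helper could look like. The `Flow` class, the step names, and the 0.8 review threshold are illustrative assumptions for this write-up, not the vendor's actual DSL or our production values.

```python
from dataclasses import dataclass, field
from typing import Callable

# Hypothetical flow sketch: enrich -> score -> route, mirroring the
# fraud triage helper described above. Names and values are illustrative.

@dataclass
class Flow:
    name: str
    steps: list = field(default_factory=list)

    def step(self, fn: Callable) -> Callable:
        """Register a step; steps run in registration order."""
        self.steps.append(fn)
        return fn

    def run(self, event: dict) -> dict:
        for fn in self.steps:
            event = fn(event)
        return event

fraud_triage = Flow("fraud-triage-helper")

@fraud_triage.step
def enrich(event: dict) -> dict:
    # In production this would call the three internal context systems.
    event["context"] = {"account_age_days": 412, "prior_flags": 0}
    return event

@fraud_triage.step
def score(event: dict) -> dict:
    # Stand-in for the shared scoring model.
    event["score"] = 0.91 if event["amount"] > 10_000 else 0.12
    return event

@fraud_triage.step
def route(event: dict) -> dict:
    # Events above the review threshold go to the human task queue.
    event["queue"] = "human-review" if event["score"] >= 0.8 else "auto-close"
    return event

result = fraud_triage.run({"txn_id": "t-123", "amount": 25_000})
print(result["queue"])  # human-review
```

The point of the sketch is that the orchestration is a small, reviewable artifact: each step is a named unit, and the routing policy is one line, not buried in a 3,400-line bot.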

What changed in the developer experience

The fraud triage helper is the most instructive example. In the old model, making a change required modifying a bot that was also responsible for its own orchestration, testing the bot end to end, and deploying via our bot pipeline. Average change cycle was seven business days, most of which was spent on testing, because a small change to the orchestration logic could break anything else in the bot.

In the new model, the orchestration lives in a flow artifact. The bot does one thing. A change to the orchestration is an edit to the flow. A change to the bot is an edit to the bot. Both go through CI separately. Average change cycle is now around 36 hours, including code review and a staged rollout.

The 4x iteration speed is not magic. It is what you get when orchestration is a first-class artifact separate from the bots that execute within it.
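The split can be sketched in a few lines. The function names and the file layout are hypothetical; the shape is what matters: the bot exposes one narrow function, the flow owns the wiring, and each file can change and go through CI on its own.

```python
# Hypothetical illustration of the split authoring model.

# bot.py -- owned by the bot team, tested and deployed on its own.
def classify_transaction(txn: dict) -> str:
    """The bot's single responsibility: classify one transaction."""
    return "suspect" if txn["amount"] > 10_000 else "clear"

# flow.py -- the flow artifact, edited and shipped separately.
def triage_flow(txns: list[dict]) -> dict[str, list[dict]]:
    """Orchestration only: fan transactions out to the bot and
    group the results. A routing change edits this file alone."""
    routed: dict[str, list[dict]] = {"suspect": [], "clear": []}
    for txn in txns:
        routed[classify_transaction(txn)].append(txn)
    return routed

batch = [{"id": 1, "amount": 50}, {"id": 2, "amount": 20_000}]
print(len(triage_flow(batch)["suspect"]))  # 1
```

Because the flow never reaches inside the bot, a test of the flow only has to exercise the wiring, which is what collapsed the change cycle from days to hours.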

The numbers

5
Flow processes live in 90 days
4x
Faster change cycle vs prior approach
0
Incidents caused by the split authoring model

The incident number is the one we watched most carefully, because the risk of a split authoring model is that BPMN and flow authors drift into different conventions and an incident sits in the boundary. After ninety days, zero incidents have been attributable to the split. We credit our style guide, shared code review, and the fact that both experiences ship onto the same runtime so there is nowhere for drift to hide operationally.

What our coding agent contributed

Our code-generating agent, which we already used for bot authoring, was extended to draft flow artifacts. For the fraud triage rebuild, the agent produced a workable first draft of the flow from a written brief. The engineer who owned the rebuild reported that roughly 60 percent of the agent's draft made it to the shipped version, with the rest rewritten to match our style and to handle the long tail of cases the brief did not cover.

This is not as flashy as "the agent wrote the flow". It is more valuable. Getting a workable first draft in minutes instead of hours lets the engineer spend attention on the interesting parts. We do not ship anything the agent generates without human review. We ship more, faster, because the tedious part is faster.

What we are watching

Drift between authoring styles. Zero incidents so far, but style drift is the most likely source of future ones. Our weekly platform review agenda now includes a check on flow-authored workflows.

Vendor concentration. Running BPMN, flow, and case management on the same runtime reduces our fragmentation risk on the authoring side but increases our exposure to a single runtime decision by the vendor. We do our annual "what are our options" exercise on this in Q4.

Audit coverage. We have not yet had a regulatory audit of a flow-authored workflow. Our internal audit team is comfortable with the model. Our first external audit touching this will happen in Q3. We will update this case study after that engagement.

Related reading

For the thinking behind this rollout, see Same product, new front door. For the broader platform model that hosts BPMN, flow, and case management on one runtime, see the platform overview.

Case study · Orchestration · Developer experience

Interested in how we run our platform?

Read the engineering blog, or see the full platform overview.