Over the past year, everyone on our support team started using AI tools like ChatGPT and Claude.

Helpful output is not the same as operationally useful output.

When someone needed help investigating an issue, they still had to manually copy context into chat, and every conversation restarted from zero because the model could not see the systems we were actually working in.

What if the model could query our operational systems directly?

The Real Problem: Context Fragmentation

While storefront platforms like Shopify handle the customer-facing side of commerce, fulfillment networks like Cahoot operate the infrastructure that moves orders through warehouses, carriers, and shipping systems.

Modern third-party logistics (3PL) platforms are complex operational systems.

Issue diagnosis breaks when context is split across tools.

A single merchant order spans storefronts, fulfillment orchestration, warehouse partners, carrier APIs, and inventory systems. In practice, investigations required stitching context across Zendesk history, fulfillment data, tracking events, warehouse inventory, help center docs, and Jira tickets.

That workflow does not scale. It is exactly the type of problem large language models should help with, but only if they have structured access to the right systems.

A Lightweight Approach

Instead of building a full AI platform, I created a lightweight MCP toolkit that connects LLM platforms directly to the systems our team already uses.

We did not need a new app; we needed a thin connection layer.

The idea was simple: if an LLM can call tools, it can query operational systems directly.

In our case this only required three things:

  1. An LLM platform that supports MCP connections
  2. APIs for the systems we want to query
  3. Lightweight MCP servers exposing those APIs as tools
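To make the third piece concrete, here is a simplified sketch of what one of those lightweight MCP servers does at its core: register functions as named tools and dispatch incoming tool-call requests to them. A real server would use the official MCP SDK and speak JSON-RPC over stdio; the tool name, arguments, and stubbed response below are illustrative, not our actual API.

```python
import json

# Registry mapping tool names to their handlers and descriptions.
TOOLS = {}

def tool(name, description):
    """Decorator that registers a function as a callable tool."""
    def register(fn):
        TOOLS[name] = {"fn": fn, "description": description}
        return fn
    return register

@tool("get_order_status", "Look up an order's fulfillment status")
def get_order_status(order_id: str) -> dict:
    # A real server would call the platform API here; stubbed for the sketch.
    return {"order_id": order_id, "status": "in_transit"}

def handle_tool_call(request_json: str) -> str:
    """Dispatch a tools/call-style request to the registered tool."""
    req = json.loads(request_json)
    fn = TOOLS[req["name"]]["fn"]
    result = fn(**req["arguments"])
    return json.dumps({"result": result})
```

Each operational API endpoint becomes one small, well-described tool; the LLM client decides when to call it.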

Our environment already had the first two pieces: an MCP-capable LLM platform, plus the help desk API, the help center API, and the Cahoot platform API.

Because of that, the only missing piece was a small MCP layer connecting those systems.

The toolkit works with MCP-enabled LLM clients.

Instead of building a new application, the goal was to extend the AI tools the team was already using.

What the MCP Toolkit Unlocks

Once the model had structured access to operational systems, the workflow changed immediately.

Instead of jumping between dashboards, the team could investigate issues directly through a single AI conversation.

The conversation became the investigation workspace.

For example, an agent could ask: "Have we seen this issue before for this merchant?"

The system could then pull related tickets, order and inventory context, and relevant documentation inside the same conversation.

Because the model now had structured access to operational systems, it could assemble investigation context automatically. In many cases the model could even draft the explanation for the support agent.
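The fan-out the model performs can be sketched as follows: query each system, then merge the results into a single investigation context. The client functions here are hypothetical stubs standing in for real MCP tool calls; the names and sample records are invented for illustration.

```python
# Stub clients standing in for tool calls into the help desk,
# the fulfillment platform, and the help center.
def search_tickets(merchant_id, query):
    return [{"id": "ZD-412", "subject": "Duplicate tracking numbers"}]

def get_recent_orders(merchant_id):
    return [{"order_id": "A-100", "status": "exception"}]

def search_docs(query):
    return [{"title": "Carrier tracking sync"}]

def build_investigation_context(merchant_id, query):
    """Assemble cross-system context for one investigation."""
    return {
        "prior_tickets": search_tickets(merchant_id, query),
        "recent_orders": get_recent_orders(merchant_id),
        "related_docs": search_docs(query),
    }
```

In practice the model performs this assembly itself, choosing which tools to call based on the question; the sketch only shows the shape of the merged context it ends up reasoning over.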

Another useful discovery was that many AI tools already include MCP connectors for common systems. For example, Claude includes a prebuilt Jira MCP connector that can be installed with a single step.

Investigation and escalation can live in one flow.

With that connector alongside the custom MCP tools, the same workflow could investigate operational issues, gather context from multiple systems, draft internal explanations, and escalate confirmed problems directly to Jira.
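The escalation step amounts to turning the assembled context into a Jira issue. A minimal sketch, assuming the payload shape of Jira's REST create-issue endpoint (POST /rest/api/2/issue); the project key, issue type, and context fields are example values, not our real configuration.

```python
import json

def build_jira_issue_payload(project_key, summary, context):
    """Build the request body for Jira's create-issue endpoint,
    embedding the investigation context in the description."""
    description = "\n".join(
        f"* {key}: {json.dumps(value)}" for key, value in context.items()
    )
    return {
        "fields": {
            "project": {"key": project_key},
            "summary": summary,
            "description": description,
            "issuetype": {"name": "Bug"},
        }
    }
```

With a prebuilt Jira connector, the model fills in this payload itself; the value of the custom tools is that the context it escalates is pulled from live systems rather than pasted in by hand.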

Deployment and Team Adoption

Deployment was intentionally simple.

Because the MCP servers run locally, I created lightweight installer scripts so setup took only a couple of clicks.
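The core of such an installer is small: add one entry to the LLM client's local MCP config. A hedged sketch, assuming a Claude-Desktop-style config file with an "mcpServers" mapping; the file path, server name, and command below are examples, not the actual scripts.

```python
import json
import os

def install_server(config_path, name, command, args):
    """Add (or update) one MCP server entry in the client's config file,
    preserving any servers already registered there."""
    config = {}
    if os.path.exists(config_path):
        with open(config_path) as f:
            config = json.load(f)
    config.setdefault("mcpServers", {})[name] = {
        "command": command,
        "args": args,
    }
    with open(config_path, "w") as f:
        json.dump(config, f, indent=2)

# Example: register a hypothetical local platform server.
# install_server(config_path, "cahoot-platform",
#                "python", ["cahoot_mcp_server.py"])
```

Restarting the client then picks up the new server, which is what keeps setup to a couple of clicks.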

Lightweight deployment removed adoption friction.

The team used a dedicated support investigation project workspace, where the model accumulated prior investigations, common merchant issues, and troubleshooting notes over time.

The system did not require a dedicated AI platform, just structured access to existing APIs and the right workflow integration.

Product Thinking: The Real Adoption Barrier

One important lesson from this experiment was that the biggest barrier to AI adoption inside operational teams is not model capability. It is workflow integration.

If the system requires people to change how they work, adoption drops quickly.

Adoption follows workflow fit, not model sophistication.

The goal was not to introduce a new tool. The goal was to enhance the tools the team was already using.

By connecting LLM clients directly to operational APIs through MCP, the model became a natural extension of the investigation workflow rather than another system agents had to learn.

Why This Matters for Fulfillment Platforms

3PL platforms are fundamentally integration systems.

Most failures happen between systems, not inside one screen.

Orders flow through storefronts, integration layers, fulfillment networks, carrier APIs, and tracking systems. Failures often occur at those boundaries, where traditional support tooling struggles to reason across systems.

Once AI systems gain structured access to platform APIs, they can help diagnose real operational issues instead of answering abstract questions.

What Comes Next

Building the MCP toolkit made something very clear: once an LLM has structured access to operational systems, the possibilities expand quickly.

The toolkit started as a lightweight investigation assistant for internal teams, but it also revealed a path toward a more complete AI-native support system.

I am currently working on the next step: a production RAG-based support system designed to power a customer-facing assistant. The system uses vector embeddings to retrieve knowledge base content while also integrating operational data from the platform itself.
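The retrieval step at the heart of that system can be sketched in a few lines: rank knowledge base chunks by cosine similarity to the query embedding. The toy two-dimensional vectors below are placeholders; the production system uses a real embedding model and a vector store.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def retrieve(query_vec, knowledge_base, top_k=2):
    """Return the top_k knowledge base entries most similar to the query."""
    ranked = sorted(
        knowledge_base,
        key=lambda doc: cosine(query_vec, doc["embedding"]),
        reverse=True,
    )
    return ranked[:top_k]
```

The retrieved chunks are then combined with operational data from the platform before the model drafts its answer, which is where the MCP work carries over directly.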

The assistant needs to dynamically tailor responses based on user role and request context.

It is a larger system that requires deeper integration across teams, but the MCP toolkit proved something important.

The fastest path to the right architecture often starts with a small working system.

Robert Klouda