Syncing Stories at Scale: Helping a Global Sportswear Leader Modernize Its Archive

We helped a global brand’s archive evolve, keeping millions of stories moving.
Client: Global sportswear leader
Industry: Sportswear
Services Provided: CMS Development, 3rd Party Integration, Rails App Maintenance, Ruby on Rails Development, Data Management, Modernize Your Rails App, Rails Consulting

The Situation

An unsupported sync tool reached its limit

Our client came to us with a truly impressive archive: millions of records representing decades of products, exhibits, and creative history. This archive wasn’t just a data set. It was a living resource used by storytellers, editors, archivists, warehouse teams, and museum partners.

Behind the scenes, though, their tools were stuck in the past. The collections platform that had supported their archive for twenty years was no longer up to the task of modern workflows. Updates were slow, data was hard to manage, and connecting it all to the website was becoming increasingly unreliable.

Their team had already done extensive work to clean and improve the data. What they needed next wasn’t a complete restart. They needed a way to carry that work forward and keep it in sync across platforms without losing the value of everything they had already built.

The Trigger

The system was straining and workarounds weren’t helping

Before transitioning to MuseumPlus, the archive lived in a legacy system running on a remote desktop server. It was difficult to access, required technical knowledge to use, and made even basic updates feel risky. Over time, they developed creative workarounds to keep things moving, but as the data grew, those workarounds created more problems than they solved.

Because the legacy system was hard to access and operate, only a small group of specialists could safely make updates. As the archive grew, more and more of the work landed on their shoulders, creating a bottleneck for the rest of their internal team and slowing the pace at which new stories could enter the system.

Migrating to MuseumPlus, a modern collections platform, was the first big step toward a second act for the archive. Things started to improve, but not without side effects. Many of the existing connections to their website were built around the quirks of the old system. Once the data was cleaned up and updated in MuseumPlus, those connections began to break, especially around tags and categories.

To make things more challenging, a previous development agency had tried to use an in‑house, no‑longer‑supported tool to sync product data from the new collections platform to the site. In practice, it was too brittle and cumbersome for a modern API environment, and it couldn’t reliably support where the archive needed to go next.

The Challenge

Modernizing the sync without disrupting the archive

The challenge was replacing an aging sync tool that could no longer support the scale or direction of the work.

Leaving it in place meant living with fragile integrations and growing bottlenecks. Replacing it meant handling millions of records and deeply embedded workflows with extreme care (and some risk).

The client chose to modernize the connection between systems rather than disturb the systems themselves. By replacing the sync tool itself, they could move forward without undoing years of thoughtful work.

That’s where we came in.

Our Approach

Giving the system a purpose-built connection

Before making any technical decisions, our first step was to understand the full scope of the archive and the true scale of the job. The client was managing over 2 million records, many of which had hundreds of unique data fields. And they weren’t just being stored. They were being curated, edited, and actively used in live storytelling across platforms.

We collaborated closely with the client’s internal teams and their MuseumPlus vendor to get a clear picture of how the new platform structured its data and how people depended on it day to day. From there, we made a deliberate choice: instead of forcing the old integration to do one more trick, we would give the system a purpose-built connection that could grow with it.

That’s where Hatchet became useful. Hatchet is a workflow engine we used to make syncing predictable at scale, with retries, logging, guardrails, and monitoring built in. It gave us the structure to move millions of records from MuseumPlus to the internal CMS in a way that was accurate, observable, and resilient to change.

We built in validation steps, fallback rules, and structured logging, not just for traceability, but to power alerts and retries that could resolve many issues automatically. This helped reduce manual cleanup, minimize data drift, and protect downstream users from silent failures.
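As a rough illustration only (not the production workflow, and not Hatchet's SDK, which handles queuing, retries, and monitoring itself), one sync step along these lines combines validation, structured logging, and retry-with-backoff. The field names, the Cms stand-in, and the retry limits below are all hypothetical:

    # Illustrative only: the shape of one validate-log-retry step.
    # Field names, the Cms stand-in, and retry limits are hypothetical.
    require "json"
    require "logger"
    require "time"

    module Cms
      def self.upsert(record); end # stand-in for the real CMS client
    end

    LOGGER = Logger.new($stdout)
    LOGGER.formatter = proc do |severity, time, _progname, msg|
      JSON.generate({ level: severity, at: time.utc.iso8601 }.merge(msg)) + "\n"
    end

    REQUIRED_FIELDS = %w[object_id title status].freeze

    def sync_record(record, attempts: 3)
      missing = REQUIRED_FIELDS.reject { |f| record.key?(f) }
      if missing.any?
        LOGGER.error({ event: "validation_failed", missing: missing })
        return false
      end

      attempts.times do |attempt|
        begin
          Cms.upsert(record)
          LOGGER.info({ event: "record_synced", id: record["object_id"] })
          return true
        rescue StandardError => e
          LOGGER.warn({ event: "sync_retry", id: record["object_id"],
                        attempt: attempt + 1, error: e.message })
          sleep(2**attempt) # simple backoff; the workflow engine does this for real
        end
      end

      LOGGER.error({ event: "sync_failed", id: record["object_id"] })
      false
    end

    sync_record({ "object_id" => "ABC-123", "title" => "Prototype", "status" => "published" })

In the real system these steps run inside Hatchet workflows, so retries and alerts come from the engine rather than hand-rolled loops.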

We also created workflows to manage digital assets, syncing originals, thumbnails, and related media from AWS S3, and ensured those assets displayed consistently across the web experience.
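A minimal sketch of the asset-side idea, using the aws-sdk-s3 gem: before a record is surfaced on the site, confirm its original and thumbnail actually exist in S3 so pages never point at missing media. The bucket name, key layout, and region are assumptions for illustration:

    # Illustrative only: bucket name, key layout, and region are hypothetical.
    require "aws-sdk-s3"

    s3 = Aws::S3::Resource.new(region: "us-east-1")
    bucket = s3.bucket("archive-media")

    def asset_keys_for(record_id)
      { original: "originals/#{record_id}.tif", thumbnail: "thumbs/#{record_id}.jpg" }
    end

    def media_complete?(bucket, record_id)
      asset_keys_for(record_id).values.all? { |key| bucket.object(key).exists? }
    end

    puts media_complete?(bucket, "ABC-123") ? "ready to publish" : "hold back: media missing"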

Implementation and Iteration

Designing for uncertainty

Working with the MuseumPlus API meant accepting real constraints. Not every field or status change was exposed in a way we could rely on, and assuming perfect data would have put the archive at risk.

Instead of pushing the API beyond what it could guarantee, we designed the Hatchet implementation to anticipate uncertainty. Each sync became a moment to verify intent, catch inconsistencies early, and prevent quiet failures from reaching editors or users.
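One way to picture "verify intent": a record only leaves the site when the source explicitly says it should, and anything ambiguous is routed to a review queue instead of being silently deleted. The status values and record shape below are hypothetical, not the MuseumPlus API:

    # Illustrative only: statuses, record shape, and the review queue are
    # hypothetical. A record is only removed when the source explicitly says
    # so; anything ambiguous is flagged for a human instead of deleted.
    EXPLICIT_REMOVALS = %w[deleted withdrawn].freeze

    def reconcile(source_record, cms_record, review_queue)
      status = source_record && source_record["status"]
      if status.nil?
        # Missing from the source API: could be deleted, could be a transient
        # gap. Flag it rather than guessing.
        review_queue << { id: cms_record[:id], reason: "missing from source" }
      elsif EXPLICIT_REMOVALS.include?(status)
        cms_record[:published] = false # unpublish rather than hard-delete
      else
        cms_record[:title] = source_record["title"]
      end
      cms_record
    end

    queue = []
    reconcile(nil, { id: 42, published: true }, queue)
    p queue # => [{:id=>42, :reason=>"missing from source"}]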

We tested destructive updates in staging, planned for unexpected field changes, and built tools that let the content team evolve taxonomy without looping in engineers each time.
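The taxonomy tooling followed the same spirit. As a hedged sketch, a mapping the content team can edit (shown inline here, though it could just as well live in a config file or an admin screen) translates source tags into site categories during the sync, so renames never require a deploy. The tag names and categories are made up:

    # Illustrative only: tags and categories are hypothetical.
    require "yaml"

    mapping = YAML.safe_load(<<~YAML)
      "Footwear - Running": running-shoes
      "Apparel - Outerwear": outerwear
    YAML

    def site_categories(source_tags, mapping)
      source_tags.filter_map { |tag| mapping[tag] }.uniq
    end

    p site_categories(["Footwear - Running", "Not Yet Mapped"], mapping)
    # => ["running-shoes"]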

The Outcome

A platform they could build with

With the new sync system in place, the archive entered its next chapter. Content now moved from MuseumPlus to the CMS on a dependable hourly cadence, with clear, predictable logic behind every update and removal.

Editors saw their changes reflected faster. Assets stayed organized and consistent. And because the system no longer relied on a few specialists to shepherd content through, the archive became a truly multi-user platform, supporting everyone from warehouse teams and collection managers to editors and digital storytellers.

The result was more than efficiency. The archive regained momentum and accuracy. Most importantly, the archive felt alive again. Instead of working around the system, the internal team could finally work with it confidently.

The Second Act

Strengthening the link between systems

This project reinforced a simple idea: not every system needs to be reinvented to move forward. In this case, the archive itself still held deep value. The work was in modernizing the connection around it.

By replacing a discontinued sync tool with an integration built for scale and visibility, the team was able to keep what mattered while removing the friction that had slowed them down. The result was a new chapter that respected the history embedded in the archive and made it easier to use with confidence.

Browse More Case Studies

  • Pressing on! Helping Newspaper Club Through a Sudden Paper Outage

    Newspaper printing service

    Navigating Through a Sudden Paper Outage

    What do you do when one of your key products is suddenly unavailable? Here's how we helped with minimal disruption for customers.

    Read Case Study

  • Walkenhorst’s

    Package and money delivery services

    Building a Custom Tool to Replace Warehouse Scanning Software

    We helped Walkenhorst's build an application that updates orders in real-time and generates accurate reports on inventory.

    Read Case Study

  • Pac Global

    Insurance

    A Transparent, Time Saving Workflow

    A redesigned insurance claims process that increased customer transparency while reducing dependency on email.

    Read Case Study

Want a Free Action Plan? Book a Call