Building a future-ready content integration layer with MACH
Many organizations are pursuing a headless strategy for enterprise content management. A common term to describe this approach is MACH: microservices, API-first, cloud-native, headless. This architecture consists of many services, deployed in the cloud, that communicate and expose data via APIs. In MACH architectures, it's good practice to have an integration layer that synthesizes these sources of data before providing them to a front-end.
CMS content is a common integration point, as it's close to the front-end and benefits from a view of all systems whose data ends up exposed to the end user. Recognizing this, many CMS platforms have a marketplace for integrations with third-party services. However, for various reasons you may want to implement your own integration layer.
One reason to build your own integration layer is gaps in the integration marketplace. While these marketplaces are mature and offer integrations with many services, they generally depend on the service provider to build and maintain the integration. As such, an integration may be available in one CMS platform but not another. Additionally, because these integrations must cater to all users of the CMS they are typically designed for the most standard use cases and may not work for your situation. Similarly, if your software architecture includes custom-built services that must be synthesized with managed content, you'll need to build the integration yourself.
You may also want to skip pre-made integrations to avoid vendor lock-in. Building your own integration layer allows you (but doesn't require you) to separate the CMS platform from the synthesis of content with external data. This kind of separation can be beneficial if you want to replace your current CMS platform, or if you're adopting a new one but see risks that may later necessitate a change.
In this post I'll detail how we built a performant and scalable system of custom integrations that empowered editors to conduct A/B tests and other experiments without depending on developers.
What we built
Our retail client brought us in to replace the homegrown CMS behind their React Native app with Optimizely, an enterprise platform. They wanted to integrate personalization, experimentation, and a product information management (PIM) system. In addition, to save on platform costs and boost performance, they needed thorough server-side caching.
We delivered on these requirements while also providing front-end access to all integrated data in a single network call. A GraphQL Backend-for-Frontend microservice was built to orchestrate the various systems and serve enriched content.
Configurable Redis and in-memory caching were also added. But we wanted to knock it out of the park with performance and leave their lean engineering team with a way to quickly build new integrations.
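As an illustration of the layered caching idea, here's a minimal read-through sketch with a fast in-memory layer in front of a slower shared layer. The `CacheLayer` interface and names are assumptions for this post, not the client's actual API, and the "Redis" tier is stood in for by a second in-memory implementation:

```typescript
// Hypothetical cache abstraction: any layer (in-memory, Redis, etc.)
// just needs synchronous-style get/set for this sketch.
interface CacheLayer {
  get(key: string): string | undefined;
  set(key: string, value: string): void;
}

class InMemoryCache implements CacheLayer {
  private store = new Map<string, string>();
  get(key: string) { return this.store.get(key); }
  set(key: string, value: string) { this.store.set(key, value); }
}

// Check each layer in order; on a hit, backfill the faster layers
// so subsequent reads short-circuit earlier.
function readThrough(
  layers: CacheLayer[],
  key: string,
  origin: () => string, // e.g. a call to the CMS delivery API
): string {
  for (let i = 0; i < layers.length; i++) {
    const hit = layers[i].get(key);
    if (hit !== undefined) {
      for (let j = 0; j < i; j++) layers[j].set(key, hit);
      return hit;
    }
  }
  const value = origin(); // miss everywhere: go to the origin
  for (const layer of layers) layer.set(key, value);
  return value;
}
```

A production version would add TTLs and async Redis calls, but the read-through-with-backfill shape is the same.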
To accomplish both of these goals, we designed and implemented a unique content enrichment module.
The content enrichment module
We had three goals in mind when designing the module:
- To ensure world-class user experience, execute as efficiently as possible by
- minimizing time waiting for network calls to external services, and
- avoiding redundant iteration of content nodes
- To support the client's lean engineering team, allow developers to add new integrations without needing to manipulate the content tree
- To avoid vendor lock-in, don't couple it to any particular CMS platform or tech stack
Content enrichment is a hot path—mobile screens make a content request each time they're loaded—so we needed to go beyond standard caching. Knowing that integration performance is primarily constrained by network requests, we decided to put some limitations on how new integrations are built:
- To update a content block, developers must first define which external data they need. Then, they can define how the data changes the block.
- All external data must be retrieved by implementing a special interface and allowing the module to control how and when data is requested.
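To make those two rules concrete, here's a sketch of what the interfaces could look like in TypeScript. The names and shapes are illustrative assumptions, not the module's actual API:

```typescript
// Each block declares up front which external data it needs.
interface EnrichmentRequirement {
  source: string; // which external system, e.g. "pim" or "experiments"
  keys: string[]; // identifiers to fetch from that system
}

// An enricher handles one block type: it declares its requirements,
// then applies the fetched data to the block. It never touches the
// content tree itself; the module handles that.
interface BlockEnricher<TBlock, TData> {
  blockType: string;
  requirementsFor(block: TBlock): EnrichmentRequirement;
  apply(block: TBlock, data: Map<string, TData>): TBlock;
}

// Hypothetical example: a product card that pulls live pricing from a PIM.
interface ProductCard { type: "productCard"; sku: string; price?: number }

const productCardEnricher: BlockEnricher<ProductCard, number> = {
  blockType: "productCard",
  requirementsFor: (block) => ({ source: "pim", keys: [block.sku] }),
  apply: (block, data) => ({ ...block, price: data.get(block.sku) }),
};
```

Note that `apply` is a pure function of the block and its data, which is what lets the module schedule and test enrichment independently of any one integration.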
These rules limit developer flexibility when building new integrations. However, in exchange, we get some really nice benefits.
First, by requiring developers to define enrichment data requirements and entrusting the module to make requests, we can ensure we never request duplicate data and that all data of the same type are requested at once.
Second, development of a new integration is simple and therefore fast—just implementing a couple of interfaces—and carries low risk of defects.
Finally, because application of changes is defined on a block-by-block basis and the module handles the complex (and thoroughly tested) task of mutating the content tree, developers are spared from understanding the plumbing and can focus on the important business logic.
Taken together, this means that when developers add new integrations, the most efficient implementation is actually the easiest to build! This module enabled us to quickly and efficiently build out powerful capabilities for content editors, including no-code feature flags, no-code experiments, and real-time native mobile preview.
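The first benefit, deduplication and batching of requests, can be sketched as follows. Again, names and shapes here are assumptions for illustration:

```typescript
// The module gathers every block's declared requirements, dedupes keys
// per source, and issues one batched request per external system.
type Requirement = { source: string; keys: string[] };
type BatchFetcher = (keys: string[]) => Map<string, unknown>;

function collectAndFetch(
  requirements: Requirement[],
  fetchers: Record<string, BatchFetcher>,
): Map<string, Map<string, unknown>> {
  // Group keys by source, deduplicating as we go.
  const keysBySource = new Map<string, Set<string>>();
  for (const req of requirements) {
    const keys = keysBySource.get(req.source) ?? new Set<string>();
    for (const k of req.keys) keys.add(k);
    keysBySource.set(req.source, keys);
  }
  // One request per source, covering every block's needs at once.
  const results = new Map<string, Map<string, unknown>>();
  for (const [source, keys] of keysBySource) {
    results.set(source, fetchers[source]([...keys]));
  }
  return results;
}
```

This is the same idea popularized by GraphQL's DataLoader pattern: because blocks only declare what they need, the module is free to collapse N naive requests into one per backend.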
The best code is no code
When implementing a content architecture it's crucial to consider the editor experience. Editors are responsible for meeting and adapting to business goals, and technology is what enables them to do so. However, while technology serves the editorial experience, it can also get in the way. The slow pace of change requests and software releases can severely limit editors' ability to adapt to changing business requirements.
Our client wanted to roll out a full suite of experiments with their new system. For the system to keep up, editors couldn't be made to depend on engineering changes for each experiment. To empower editors, we designed special meta-blocks which allowed A/B tests to be fully implemented from within the CMS—no tickets or PRs required! These blocks connect multiple bits of content to an experiment key and variation flag, which allows the orchestration service to ask the experimentation platform which content to serve to the user. In a similar way, meta-blocks also allow editors to feature-flag their content blocks so they can build while front-end components are still being developed. And, because it's all implemented with our enrichment module, editors can add as many experiments as they need without impacting performance.
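Here's one hypothetical shape such an experiment meta-block could take, along with how an orchestration service might resolve it. The experimentation-platform lookup is stubbed out, and all names are illustrative:

```typescript
// A generic content block, as served by the CMS.
type ContentBlock = { type: string; [key: string]: unknown };

// The editor-authored meta-block: an experiment key plus the
// candidate content for each variation flag.
interface ExperimentBlock {
  type: "experiment";
  experimentKey: string;
  variations: Record<string, ContentBlock>; // variation flag -> content
  fallback: ContentBlock; // served if the experiment is off or unknown
}

// decideVariation stands in for the experimentation platform's SDK:
// given an experiment key and a user, it returns a variation flag.
function resolveExperiment(
  block: ExperimentBlock,
  decideVariation: (experimentKey: string, userId: string) => string,
  userId: string,
): ContentBlock {
  const variation = decideVariation(block.experimentKey, userId);
  return block.variations[variation] ?? block.fallback;
}
```

Because the meta-block carries everything the resolver needs, an editor can launch a new experiment purely by authoring content, with no code change on the orchestration side.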
Setting a foundation
To summarize, we built a content integration microservice that uses flexible caching and a unique enrichment module to serve personalized content. In addition to superior customer experience, we helped content editors run A/B tests and feature-flag content without waiting for an engineer to code it. Finally, we improved developer velocity by freeing them from the nitty-gritty of integration code to focus on the big picture. The client team loved what we built, and are now using the design as a foundation for their enterprise content architecture.