Experts Blog

Headless at EnergieSchweiz: Where Everything Interacts

A headless content management system alone is not yet a website. A Kubernetes cluster cannot deploy anything on its own. It is only when architecture, code, tools, processes and people are allowed to interact that the adjustments made to the content in the CMS trigger an update in the statically generated website. Below, we use the example of EnergieSchweiz to explain how this works.

Dedicated Cloud Strategy

In relaunching its website, EnergieSchweiz is actively moving towards cloud-centred solutions. This approach was behind the decision to use Contentful as a headless CMS, Algolia as a search service and the Kubernetes cluster operated by Begasoft to deploy the website. The aim was to decouple the systems and ensure the infrastructure is stable and fail-safe. Static site generators (SSGs) are ideally suited to decoupling the headless CMS from deployment:

  • When the website is “baked”, the system validates whether everything expected by the frontend is included in the content. This means that any errors in the content model or the frontend can be quickly identified and rectified without affecting the most recent “baked” version.
  • System interruptions and failures – outside of the website itself – can be kept to a minimum as the content is part of the “recipe”.
  • The complex logic for the frontend can be tested on all systems (local, DEV, UAT, PROD). Data packages for testing can be easily assembled and versioned and the result can be validated quickly.
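The first point above can be sketched as a simple build-time check: before the site is "baked", every entry is validated against the fields the frontend expects, and the build aborts before anything is deployed if the check fails. The entry shape and required fields below are illustrative assumptions, not the actual EnergieSchweiz content model.

```typescript
// Minimal sketch of build-time content validation, as an SSG build might run it.
interface Entry {
  id: string;
  title?: string;
  slug?: string;
  body?: string;
}

// Collect one error per missing field so authors see all problems at once,
// instead of fixing them one failed build at a time.
function validateEntries(entries: Entry[]): string[] {
  const errors: string[] = [];
  for (const e of entries) {
    for (const field of ["title", "slug", "body"] as const) {
      if (!e[field]) {
        errors.push(`Entry ${e.id}: missing required field "${field}"`);
      }
    }
  }
  return errors;
}
```

If `validateEntries` returns a non-empty list, the pipeline stops before deployment, so the most recent successfully "baked" version of the website stays untouched.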

An Interconnected System Landscape

While the technological design for the new EnergieSchweiz website was a greenfield project, some external systems had to be connected.

Algolia is a SaaS solution for intelligent search. It allows content to be structured specifically for search and made searchable in the way best suited to the website.

Contentful acts as a central content hub for the EnergieSchweiz website. The authors only work with the CMS. External content is automatically imported into Contentful.

The publication database (PubDB) is the interface for publications issued by the Swiss Federal Office of Energy. As some publications are relevant for EnergieSchweiz, the website must be capable of creating automatic and editorial links to these publications.

Content from all these elements is gathered in one place, processed and sent to the browser in use as quickly as possible. The image below is a simplified representation of the interfaces that were identified from the very start and which provide visitors with content.

EnergieSchweiz: The system landscape

The Heart of the Architecture

The headless CMS Contentful is the main workspace for the authors of EnergieSchweiz. This is where they work on texts, upload images, interlink content and populate metadata for SEO. From an architectural point of view, however, the action takes place elsewhere, in an area that has often seemed insignificant in the past: the continuous integration and continuous delivery (CI/CD) pipeline. This pipeline is the linchpin connecting source code, content, configuration for the various environments, defined processes and people. In a nutshell: every time something changes, a new version of the website is generated, automatically tested and rolled out. The image below is a greatly simplified representation of the way content and source changes interact.

We used Gitlab as the main tool for managing source code and pipelines. Thanks to Contentful’s webhook capability and the automated triggers when the source code is changed, a new version is built and tested on the right environment and rolled out automatically.
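As a rough sketch, the webhook side of this can be reduced to building a call against GitLab's pipeline trigger API (`POST /projects/:id/trigger/pipeline` with a `token` and `ref`). The project ID, token and branch below are placeholders, not the real EnergieSchweiz configuration.

```typescript
// Sketch: turn a Contentful publish webhook into a GitLab pipeline trigger.
interface TriggerRequest {
  url: string;
  body: URLSearchParams;
}

function buildPipelineTrigger(
  gitlabBase: string,
  projectId: number,
  token: string,
  ref: string
): TriggerRequest {
  const body = new URLSearchParams();
  body.set("token", token);
  body.set("ref", ref); // the branch decides which environment gets built
  return {
    url: `${gitlabBase}/api/v4/projects/${projectId}/trigger/pipeline`,
    body,
  };
}

// A webhook handler would then POST this request, e.g.:
// const req = buildPipelineTrigger("https://gitlab.example.com", 42, "secret", "main");
// await fetch(req.url, { method: "POST", body: req.body });
```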

Architecture: Development View

CI/CD Pipeline – The Art of Integration

Every conventional CMS boasts integration capabilities. A system that starts off lean tends to soon become bulky due to plugins or modules. Every additional plugin or module makes the system more difficult to maintain.

We use CI/CD pipelines to decouple the integration of external systems and ensure data quality in the CMS. By taking this approach, we can maintain tools independently from one another. This can be best explained using the scenarios below, which all took place at EnergieSchweiz.

Scenario 1: Updating a publication from the PubDB

Every day, an automated job on Gitlab extracts the list of publications and extended metadata, which includes data such as the download figures, from the publication database. Each publication is then automatically added to Contentful or updated. Should an error occur in importing the publication, the team members responsible for rectifying the error are informed. As the error messages are clear and comprehensible, they can examine each individual case closely, find the cause, fix the problem at the data source and trigger the import manually.

Changes can be made to the interface, interval or data extensions without affecting other parts of the website thanks to a dedicated pipeline and dedicated tooling adjustments. The whole thing can be tested independently.

Moreover, website updates do not have to be triggered manually because whenever content is changed and published, an update is triggered automatically.
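The daily import step can be illustrated with a simple upsert plan: compare the PubDB extract against what is already in the CMS and decide, per publication, whether to create, update or skip. The field names below are assumptions for illustration, not the actual PubDB schema.

```typescript
// Illustrative upsert decision for the daily publication import.
interface Publication {
  pubId: string;
  title: string;
  downloads: number; // part of the extended metadata, e.g. download figures
}

type Action = { kind: "create" | "update" | "skip"; pubId: string };

function planImport(
  incoming: Publication[],
  existing: Map<string, Publication>
): Action[] {
  return incoming.map((pub) => {
    const current = existing.get(pub.pubId);
    if (!current) return { kind: "create", pubId: pub.pubId };
    const changed =
      current.title !== pub.title || current.downloads !== pub.downloads;
    return { kind: changed ? "update" : "skip", pubId: pub.pubId };
  });
}
```

Keeping the decision logic pure like this is also what makes the import pipeline easy to test independently of Contentful and the PubDB.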

Scenario 2: Consistent data in search and display

The underlying problem for any search is keeping the current display and the current search index consistent. Always remember:

  • It should not be possible to find unpublished content.
  • It must be possible to display any content that can be found.
  • It must be possible to find any content that can be displayed.
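These three invariants boil down to one selection rule at index time: only content that is both published and successfully built may be indexed. A minimal sketch, with an assumed record shape:

```typescript
// Sketch of the consistency rule for the search index.
interface ContentRecord {
  slug: string;
  published: boolean;
  builtOk: boolean; // the page for this record was generated without errors
}

// Everything findable is displayable; nothing unpublished is findable.
function selectIndexable(records: ContentRecord[]): string[] {
  return records.filter((r) => r.published && r.builtOk).map((r) => r.slug);
}
```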

Although Contentful and Algolia can be integrated, we deliberately chose not to do so because this may mean the index is updated too early or, in the worst-case scenario, filled with content that cannot be built.

We update the index on Algolia while the frontend is “baking”. As soon as all pages are built and automatically tested, the website is essentially ready to be rolled out. Immediately afterwards, the index on Algolia is updated with the content for the website that has been prepared in exactly the same way.

This approach keeps the time delta between updating the index and updating the website to an acceptable minimum.

Scenario 3: Independent microsites for dedicated campaigns

The basic setup allows dedicated microsites to be created for campaigns. The authors can use existing content to do so or can create new content. They can do all this without compromising on decoupling, reusability or stability.

Thanks to the headless CMS Contentful, more independent microsites can be built without having to duplicate data unnecessarily in third-party systems. Using Gatsby and React, we can reuse the components we developed and adapt them for the campaigns.

One major advantage is that the microsites validate our data model again and show the project team how and where we can still adjust the content model. We are not bound by the existing structure.

The Architecture at a Glance

Given the scenarios described above and the options available to us, the system architecture explained below seemed best suited to EnergieSchweiz. The image below shows a simplified view of the three user groups and how they interact with the overall system, the components the system is made up of and which parts are coupled to each other.

  1. We used Contentful as our headless CMS. The authors primarily work on this system. Contentful is loosely coupled to Gitlab via a webhook.
  2. Gitlab is used as a tool both to manage the source code and for the diverse build pipelines (BP).
  3. The frontend BP builds the website. This build pipeline is started by a trigger. The pipeline pre-generates the EnergieSchweiz website using Gatsby (SSG). Once the website has been generated, Algolia’s search index is updated and the Nginx container is built. Automatic deployment to the relevant environment is then triggered.
  4. The PubDB importer (scenario 1) obtains the list of publications daily and updates the modified content in Contentful directly.
  5. The Kubernetes cluster is operated by Begasoft. Each deployment configuration is automatically updated and the completed containers are rolled out.
  6. External HTTP requests are routed through the K8s Ingress to the right pods.
  7. To ensure the pods can be found externally, Begasoft maintains DNS entries on its own DNS server.
Architecture at a glance

The Key to Success: A Shared Objective

A good headless CMS does not specify in advance what the data structure should be and which technologies should be used. Instead, it offers a platform to discuss and define content, how it interacts with other content and how it can be reused.

The tools and technologies used on the EnergieSchweiz website all factor in reusability. Publishing content and code changes is part of a standardised process.

Ultimately, it is communication that is the key to success. As long as everyone involved sees eye to eye and shares the same objective, a project like this is sure to be successful.
