Case study: How a cookie consent monster ate 22% of our visibility


The author’s views are entirely his or her own (excluding the unlikely event of hypnosis) and may not always reflect the views of Moz.

Last year, the team at Homeday – one of Germany’s leading real estate technology companies – decided to move to a new content management system (CMS). The goals of the move included increasing page speed and building a state-of-the-art, future-proof website with all the necessary features. One of the main motivations for migrating was to let content editors create pages more freely, without help from developers.

After evaluating several CMS options, we opted for Contentful because of its state-of-the-art technology stack and its superior experience for both editors and developers. From a technical point of view, Contentful, as a headless CMS, lets us choose which rendering strategy to use.

We are carrying out the migration in multiple phases, or waves, to reduce the risk of problems with far-reaching negative consequences. During the first wave, we ran into an issue with our cookie consent banner that caused a visibility loss of almost 22% in five days. In this article, I will describe the problems we faced during this first migration wave and how we solved them.

Setting up the first test wave

For the first test wave, we selected 10 SEO pages with high traffic but low conversion rates. We set up an infrastructure for reporting and monitoring these 10 pages:

  • Rank tracking for the most relevant keywords

  • An SEO dashboard (Data Studio, Moz Pro, Semrush, Search Console, Google Analytics)

  • Regular crawls

Following the planning and testing phase, we migrated the first 10 SEO pages to the new CMS in December 2021. Although several challenges arose during the testing phase (extended loading times, a larger HTML Document Object Model, etc.), we decided to publish anyway, as we saw no major blockers and wanted to complete the first test wave before Christmas.

First performance review

Excited about this first step of the migration, we checked the performance of the migrated pages the next day.

What we saw didn’t exactly cheer us up.

Overnight, the visibility of the tracked keywords for the migrated pages dropped from 62.35% to 53.59% – we lost 8.76 percentage points of visibility in a single day!

Because of this steep drop in rankings, we conducted another extensive round of testing. Among other things, we checked for coverage and indexing issues and verified that all meta tags, structured data, internal links, page speed, and mobile friendliness were intact.

Second performance review

All articles had a post-migration cache date, and the content was fully indexed and readable by Google. In addition, several migration risk factors (changes to URLs, content, meta tags, layouts, etc.) could be ruled out as sources of error, as nothing had changed there.

The visibility of our tracked keywords suffered another drop to 40.60% over the next few days – down a total of almost 22 percentage points in five days. The drop was also clearly visible in comparison with the competition for the tracked keywords (here, “estimated traffic”), which showed a similar pattern to visibility.

Since other migration risk factors and Google updates had been ruled out as sources of error, it had to be a technical issue. Possible causes included too much JavaScript, poor Core Web Vitals scores, or a larger, more complex Document Object Model (DOM). The DOM represents the page as objects and nodes, so that programming languages such as JavaScript can interact with the page and change, for example, its style, structure, and content.

Following the cookie crumbs

We had to identify the problem as quickly as possible to limit the negative effects and the decline in traffic. We finally got the first real hint of the technical cause when one of our tools showed that both the number of pages with a high number of external links and the number of pages exceeding the maximum content size had increased. It is important that pages do not exceed the maximum content size, as pages with very large bodies of content may not be fully indexed. Regarding external links, it is important that all of them are trustworthy and relevant for users. It was suspicious that the number of external links had increased out of nowhere.

Increase in URLs with a high number of external links (over 10)
Increase in URLs above the maximum content size (51,200 bytes)

Both metrics were disproportionately high compared to the number of pages we had migrated. But why?
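As a side note, the two signals the tool flagged can be reproduced with a short script. The sketch below is illustrative Python, not our actual tooling: the hostnames in the sample markup are placeholders, and only the thresholds (10 external links, 51,200 bytes) come from the reports above.

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class ExternalLinkCounter(HTMLParser):
    """Counts <a href> links pointing outside the given host."""
    def __init__(self, own_host):
        super().__init__()
        self.own_host = own_host
        self.external_links = 0

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        href = dict(attrs).get("href", "")
        host = urlparse(href).netloc
        # Only absolute links to a different host count as external.
        if host and host != self.own_host:
            self.external_links += 1

def audit_page(html, own_host, max_links=10, max_bytes=51_200):
    """Flag a page that exceeds the external-link or content-size limits."""
    parser = ExternalLinkCounter(own_host)
    parser.feed(html)
    size = len(html.encode("utf-8"))
    return {
        "external_links": parser.external_links,
        "content_bytes": size,
        "too_many_links": parser.external_links > max_links,
        "too_large": size > max_bytes,
    }

# One internal link and two external links (all URLs are made up).
sample = (
    '<a href="https://www.homeday.de/a">in</a>'
    '<a href="https://example.org/">out</a>'
    '<a href="https://partner.example.com/">out</a>'
)
report = audit_page(sample, "www.homeday.de")
```

Run against a crawl of rendered pages, a check like this makes a sudden jump in either metric easy to spot.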

When we checked which external links had been added to the migrated pages, we saw that Google was reading and indexing the cookie consent banner for all migrated pages. A site search for the content of the cookie consent banner confirmed our theory:

A site search confirmed that Google had indexed the cookie consent content

This caused several problems:

  1. Because the cookie consent banner was indexed, tons of duplicate content was created for each page.

  2. The content size of the migrated pages increased drastically. This is a problem because pages with very large bodies of content may not be fully indexed.

  3. The number of external outbound links increased dramatically.

  4. Our snippets suddenly showed a date on the SERPs. This suggests a blog or news article, while most articles on Homeday are evergreen content. In addition, the meta description was truncated to make room for the date.

But why did this happen? According to Cookiebot, our cookie consent provider, search engine crawlers access pages simulating full consent. This gives them access to all content, and copies of the cookie consent banner are not indexed.

So why wasn’t that the case for the migrated pages? We crawled and rendered the pages with various user agents, but still couldn’t find any trace of the Cookiebot in the source code.
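This is the classic gap between the raw HTML source and the rendered DOM: content injected by JavaScript never shows up in a source-only check. The sketch below illustrates the difference in Python; the markup strings and the banner phrase are invented for the example.

```python
def banner_in_markup(markup, banner_phrase="We use cookies"):
    """Case-insensitive check for cookie banner text in a markup string."""
    return banner_phrase.lower() in markup.lower()

# Raw HTML source, as served: the banner is injected later by
# JavaScript, so a source-only check finds nothing.
raw_source = "<html><body><div id='app'></div></body></html>"

# Rendered DOM snapshot (e.g. the rendered HTML shown by the
# URL Inspection tool): the injected banner text is now present.
rendered_dom = (
    "<html><body><div id='app'></div>"
    "<div id='cookie-banner'>We use cookies to improve your experience."
    "</div></body></html>"
)

source_hit = banner_in_markup(raw_source)    # False: nothing in the source
rendered_hit = banner_in_markup(rendered_dom)  # True: visible after rendering
```

This is why checking the rendered DOM, not just the served HTML, was the step that finally exposed the problem.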

Exploring the Google-rendered DOM and finding a solution

The migrated pages are rendered with dynamic data coming from Contentful and plugins. The plugins contain only JavaScript code and sometimes come from a partner. One of these plugins belonged to our cookie consent partner and injects the cookie consent HTML from outside our codebase. That is why we found no trace of the cookie consent HTML in the HTML source files. We did see a larger DOM, but attributed it to the default, more complex, larger DOM of Nuxt, the JavaScript framework we work with.

We used the URL Inspection tool in Google Search Console to verify that Google was reading a copy of the cookie consent banner. We compared the DOM of a migrated page with the DOM of a non-migrated page. Within the DOM of the migrated page, we finally found the cookie consent content:

We found the cookie consent content within the DOM of the migrated page

Something else that caught our attention was the set of JavaScript files loaded on our old pages compared to those loaded on our migrated pages. Our website uses two third-party scripts for the consent banner: one for displaying the banner and processing consent (uc) and one for importing the banner content (cd).

  • The only script loaded on our old pages was uc.js, which is responsible for the cookie consent banner. It is the only script we need on every page to process user consent. It displays the cookie consent banner without indexing its content and saves the user’s decision (whether they accept or decline cookies).

  • On the migrated pages, in addition to uc.js, a cd.js file was also loaded. cd.js is only needed on a page where we want to show the user more information about our cookies and have the cookie data indexed. We had assumed the two files depended on each other, which is not correct: uc.js can run on its own. The cd.js file was the reason the cookie banner content was rendered and indexed.

It took us a while to find this because we thought the second file was simply a prerequisite for the first. The solution was to simply stop loading the cd.js file.
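A regression check like the one below can help make sure no page template accidentally starts loading cd.js again. This is an illustrative Python sketch, not part of our stack; the hostname is a placeholder, and only the uc.js/cd.js filenames come from the setup described above.

```python
from html.parser import HTMLParser

class ScriptScanner(HTMLParser):
    """Collects the src attribute of every <script> tag on a page."""
    def __init__(self):
        super().__init__()
        self.scripts = []

    def handle_starttag(self, tag, attrs):
        if tag == "script":
            src = dict(attrs).get("src")
            if src:
                self.scripts.append(src)

def consent_scripts(html):
    """Report which of the two consent scripts a page loads."""
    scanner = ScriptScanner()
    scanner.feed(html)
    return {
        # uc.js: displays the banner and processes consent
        "uc": any("uc.js" in s for s in scanner.scripts),
        # cd.js: renders indexable banner content (cookie policy page only)
        "cd": any("cd.js" in s for s in scanner.scripts),
    }

# A normal page should load only uc.js (hostname is made up).
page = '<script src="https://consent.example.com/uc.js"></script>'
loaded = consent_scripts(page)
```

Asserting `cd` is False for every page except the cookie policy page would catch this class of regression in CI before it reaches Google again.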

Review of operation after the implementation of the solution

On the day we removed the file, the visibility of our tracked keywords was at 41.70%, still 21 percentage points lower than before the migration.

However, the day after removing the file, visibility increased to 50.77%, and the day after that it almost returned to normal at 60.11%. Estimated traffic behaved similarly. What a relief!

Soon after implementing the solution, organic traffic returned to pre-migration levels


I can imagine that many SEOs have dealt with small problems like this. It seems trivial, but it caused a significant drop in visibility and traffic during the migration. That is why I recommend migrating in waves and blocking enough time to investigate technical errors before and after each migration. In addition, keeping a close eye on site performance in the weeks following the migration is crucial. These are definitely my key takeaways from this migration wave. We completed the second migration wave in early May 2022, and I can say that no major bugs have appeared so far. Two more waves remain, and the migration will hopefully be successfully completed by the end of June 2022.

The performance of the migrated pages is almost back to normal, and we will continue with the next wave.

