How ConsentManager Handles Search Engine Crawlers

In order to give search engines an adequate picture of the website, we do not block the consent layer from search engine crawlers. This allows search engines to verify that the website uses measures to protect visitors from data processing and cookies until they have given consent.

However, in order to avoid counting search engine robots in our reports, we have configured a robots.txt file that forbids search engines from accessing our counting and consent-measurement pixels:

User-agent: *
Disallow: /delivery/consent.php
Disallow: /delivery/pixel.php

In addition, we set the data-nosnippet HTML attribute on the consent layer in order to signal to search engines that its content should not be displayed in search result snippets.
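As a rough sketch, the attribute is simply placed on the element that wraps the consent layer. The element names, IDs, and text below are illustrative, not ConsentManager's actual markup:

```html
<!-- Illustrative example: data-nosnippet is a boolean attribute
     recognized by Google on div, span, and section elements.
     Text inside this element is excluded from search snippets. -->
<div id="consent-layer" data-nosnippet>
  <p>We use cookies and similar technologies on this website.</p>
  <button>Accept</button>
  <button>Decline</button>
</div>
```

Because the attribute only suppresses snippet display, the page itself remains crawlable and indexable, which is consistent with not blocking the consent layer from crawlers in the first place.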
