- HTML-based site – Googlebot indexed all pages at all levels.
- JS-based site – in most cases the robot didn’t even reach the second level.
There are two options that can make the indexing process go faster:
- Provide Googlebot with a pre-rendered HTML document
- Server-side rendering
1. Provide Googlebot with a pre-rendered HTML document
We serve the robot a pre-rendered HTML document by setting up detection of its visits (checking the User-Agent header). When the robot visits your site, you simply give it HTML copies of the pages (they shouldn’t contain any JS code). These copies are served only to bots; ordinary users receive the versions of the pages equipped with JS features. This method allows all pages of the site to be indexed quickly.
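The detection step above can be sketched as follows. This is a minimal, framework-agnostic illustration, assuming a hypothetical `./prerendered` directory of static HTML copies and an `app-shell.html` for regular users; the bot pattern is a common convention, not an exhaustive list.

```javascript
// Match the most common search-engine crawlers by User-Agent (assumed list).
const BOT_PATTERN = /googlebot|bingbot|yandex|baiduspider/i;

function isBot(userAgent) {
  return BOT_PATTERN.test(userAgent || '');
}

// Decide which document to serve: bots get the static HTML copy,
// regular users get the normal JS-powered app shell.
function resolveDocument(userAgent, path) {
  return isBot(userAgent)
    ? `./prerendered${path === '/' ? '/index' : path}.html`
    : './app-shell.html';
}
```

In a real server you would plug `resolveDocument` into your request handler and stream the chosen file back to the client.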
At the same time, you can inspect the HTML code as Googlebot renders it, along with any JS exceptions, in Google Search Console.
With this approach, both Googlebot and the user get all the necessary data on the first page load. The JS scripts loaded afterwards then work with this pre-loaded data, which suits both users and search engines. What do you need to do this? You can learn the JS essentials and do it yourself, or hire dedicated developers and save your time.
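The "data first, scripts second" pattern described above is often implemented by embedding the initial state directly into the HTML so the client script can hydrate from it. A minimal sketch; the `window.__INITIAL_DATA__` global is a widespread convention, not a standard API, and `/app.js` is a placeholder:

```javascript
// Server-side: produce HTML that carries both the rendered body and the raw
// data the client-side JS will pick up later.
function renderWithData(bodyHtml, data) {
  // Escape '<' so the JSON cannot break out of the <script> tag.
  const payload = JSON.stringify(data).replace(/</g, '\\u003c');
  return [
    '<!DOCTYPE html><html><body>',
    bodyHtml,
    `<script>window.__INITIAL_DATA__ = ${payload};</script>`,
    '<script src="/app.js" defer></script>',
    '</body></html>',
  ].join('\n');
}
```

The deferred `/app.js` then reads `window.__INITIAL_DATA__` instead of fetching the same data again.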
2. Server-side rendering
With Server-Side Rendering (SSR), both the robot and the user get fast page-by-page navigation through the site. Avoid functions that directly manipulate the DOM (Document Object Model); if interaction with the browser’s DOM is necessary, use an abstraction such as Angular’s Renderer2.
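The reason direct DOM access is dangerous under SSR is that `document` and `window` simply do not exist on the server, so such calls crash the render. A plain-JS sketch of the guard that framework abstractions (like Angular's Renderer2) perform for you:

```javascript
// Mutate the DOM only when a browser DOM actually exists.
// On the server (no `document` global) the call becomes a safe no-op.
function setPageTitle(title) {
  if (typeof document !== 'undefined') {
    document.title = title; // browser only
    return true;
  }
  return false; // server render: skip the DOM mutation
}
```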
For dynamic rendering, you can use tools recommended by Google, such as Puppeteer and Rendertron. As a result, the search robot receives a full-fledged page in which the JS has already been executed.
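The core of such a setup is a renderer plus a cache, so each URL is rendered headlessly only once. In the sketch below the actual renderer is injected as `renderFn` (an assumption made here so the same wrapper works with Puppeteer — e.g. `page.goto(url, { waitUntil: 'networkidle0' })` followed by `page.content()` — with Rendertron, or with a stub in tests):

```javascript
// Wrap any headless renderer with a per-URL cache.
// `renderFn(url)` must return a Promise resolving to the rendered HTML.
function makeDynamicRenderer(renderFn) {
  const cache = new Map();
  return async function render(url) {
    if (!cache.has(url)) {
      cache.set(url, await renderFn(url)); // headless render happens here, once
    }
    return cache.get(url);
  };
}
```

In production you would also expire cache entries so bots do not receive stale pages.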
Server-side rendering is recommended for websites with:
- frequently updated content
- heavy JS code
- blocks of external resources (YouTube videos, social signal counters, online chats)
But SSR has a number of drawbacks:
- on a slow Internet connection, pages load more slowly for the user
- load speed also depends on the server’s location and on how many users are using the application simultaneously
The robot scans and renders pages without saving state; the following are not supported:
- service workers (the script is launched by the browser in the background separately from the page)
- local storage (data storage between user sessions)
- cookies, Cache API
What does this mean? Googlebot renders site pages without saving personal preferences or user settings.
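Because of this statelessness, any content that depends on `localStorage` or cookies needs a sensible default, or the bot (and first-time visitors) will see a broken page. A minimal sketch of a safe preference reader; the `prefs` key and the default values are assumptions for illustration:

```javascript
// Defaults used whenever no saved state is available (stateless bot render,
// first visit, storage disabled, or corrupted data).
const DEFAULT_PREFS = { theme: 'light', lang: 'en' };

function getPreferences() {
  try {
    if (typeof localStorage !== 'undefined') {
      const saved = localStorage.getItem('prefs');
      if (saved) return { ...DEFAULT_PREFS, ...JSON.parse(saved) };
    }
  } catch (e) {
    // storage unavailable or unparsable — fall through to defaults
  }
  return DEFAULT_PREFS;
}
```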
It is worth noting that Googlebot no longer crawls URLs with a hash (the part of the link after the # sign). An example of this kind of link is site.by/#backlinks.
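Content reachable only through such #-routes is therefore invisible to Googlebot; the usual fix is to move to real paths served via the History API (`history.pushState`). A sketch of mapping old hash URLs onto crawlable paths (the mapping convention itself is an assumption — your router must actually serve the resulting paths):

```javascript
// Convert a hash-based URL into a plain-path URL that a crawler can request.
function hashToPath(url) {
  const i = url.indexOf('#');
  if (i === -1) return url;                       // no fragment: already crawlable
  const base = url.slice(0, i).replace(/\/$/, ''); // drop trailing slash before '#'
  const fragment = url.slice(i + 1).replace(/^\/?/, '');
  return fragment ? `${base}/${fragment}` : base;
}
```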
As for images:
- Google does not index images linked from CSS
- If the site uses lazy image loading, add a noscript tag around the image tag to make sure Googlebot scans the images
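The noscript fallback can be generated alongside the lazy-loading markup. A sketch; the `data-src` attribute and `lazy` class are common lazy-loading conventions assumed here, not a standard:

```javascript
// Emit a lazily loaded <img> (activated by JS via data-src) plus a <noscript>
// fallback with a plain <img> that crawlers and no-JS users can always see.
function lazyImage(src, alt) {
  return [
    `<img data-src="${src}" alt="${alt}" class="lazy">`,
    `<noscript><img src="${src}" alt="${alt}"></noscript>`,
  ].join('\n');
}
```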
The choice of the most appropriate option is up to you. Consider the site’s specifics and the tasks you want the UX to solve; each option has its pros and cons. If SEO comes first, rendering the app on the server side can help you avoid the so-called empty-page problem.