Googlebot queues pages for both crawling and rendering, and it is not always obvious whether a given page is waiting to be crawled or to be rendered. When Googlebot fetches a URL from the crawl queue via an HTTP request, the first thing it checks is whether you allow crawling: it reads the robots.txt file, and if that file marks the URL as disallowed, Googlebot skips the URL without making an HTTP request to it.
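As a minimal sketch of how that gate works (the path here is a hypothetical example), a robots.txt rule like the following would cause Googlebot to skip matching URLs without ever requesting them:

```
# Googlebot reads this before making any HTTP request
User-agent: Googlebot
Disallow: /private/
```

Any URL under /private/ would be dropped from the crawl at this stage, so its content never reaches rendering or indexing.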
How HTTP requests work
Avoid making search engines render your pages
Search engines must be able to understand your pages’ content, and your crawling and indexing directives, from the initial HTML response alone. If they can’t, you will struggle to get your pages to perform well.
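As an illustrative sketch (the URL, title, and copy are placeholders), a server-rendered response can carry both the indexable content and the directives without requiring any JavaScript execution:

```html
<!DOCTYPE html>
<html>
  <head>
    <title>Example Product Page</title>
    <!-- Indexing directives readable straight from the initial HTML -->
    <meta name="robots" content="index, follow">
    <link rel="canonical" href="https://example.com/products/widget">
  </head>
  <body>
    <!-- Primary content present in the response, not injected later -->
    <h1>Widget</h1>
    <p>Product description that search engines can read immediately.</p>
  </body>
</html>
```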
Include necessary information in the initial HTML response
Every page should have its own URL
Every page on your site needs to have a unique URL. Without one, Google will have a tremendously difficult time navigating your site and determining which keywords your pages should rank for.
Avoid using fragment URLs to load new pages, as Google generally ignores the fragment. Visitors may be able to reach your “About Us” page at https://example.com#about-us, but search engines typically drop everything after the #, so they will never learn that the URL exists.
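A minimal sketch of the difference (the link targets are hypothetical):

```html
<!-- Avoid: the fragment is dropped, so Google only sees https://example.com/ -->
<a href="#about-us">About Us</a>

<!-- Prefer: a unique, crawlable path for every page -->
<a href="/about-us">About Us</a>
```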
Your initial HTML response should contain navigational elements
The initial HTML response should contain every navigational component. Your main navigation is the obvious candidate, but don’t overlook your sidebar and footer, which provide crucial contextual links.
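For example (the structure and link targets are hypothetical), all three link groups can ship in the server-rendered response rather than being injected by JavaScript:

```html
<body>
  <!-- Main navigation present in the initial response -->
  <nav>
    <a href="/">Home</a>
    <a href="/blog">Blog</a>
    <a href="/about-us">About Us</a>
  </nav>

  <!-- Sidebar with contextual links -->
  <aside>
    <a href="/blog/related-post">Related post</a>
  </aside>

  <main>…</main>

  <!-- Footer links get crawled too -->
  <footer>
    <a href="/privacy">Privacy</a>
    <a href="/contact">Contact</a>
  </footer>
</body>
```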