Sometimes you might want to stop search engines and other crawlers from accessing or indexing parts of your website. A robots.txt file, placed at the root of your site, lets you tell crawlers which paths they may and may not visit. Note that robots.txt is advisory: well-behaved bots follow it, but it is not an access control mechanism.
Here are some examples of rules you can add to your robots.txt file to achieve different things.

Block Googlebot from crawling anything under /blog (e.g., https://my-domain-name.com/blog) on your website:

```
User-agent: Googlebot
Disallow: /blog/
```

Allow all bots to access any part of your website:

```
User-agent: *
Allow: /
```

Tell bots where your sitemap is:

```
Sitemap: https://example.com/sitemap.xml
```
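A single robots.txt file can hold several rule groups at once, separated by blank lines. As a sketch, the examples above could be combined like this (using the same placeholder domains as before):

```
# Block Googlebot from the blog section
User-agent: Googlebot
Disallow: /blog/

# Every other bot may crawl everything
User-agent: *
Allow: /

# Sitemap location (must be an absolute URL)
Sitemap: https://example.com/sitemap.xml
```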
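If you want to check that your rules do what you expect before deploying them, Python's standard-library urllib.robotparser can evaluate a robots.txt against sample URLs. A minimal sketch, assuming the rules above (the URLs are the illustrative placeholders from earlier, not real endpoints):

```python
from urllib.robotparser import RobotFileParser

# The robots.txt content from the examples above.
robots_txt = """\
User-agent: Googlebot
Disallow: /blog/

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# Googlebot is blocked from anything under /blog/ ...
print(parser.can_fetch("Googlebot", "https://my-domain-name.com/blog/post"))  # False

# ... but other bots fall through to the wildcard group and are allowed.
print(parser.can_fetch("SomeOtherBot", "https://my-domain-name.com/blog/post"))  # True
```

Running this locally is a quick way to catch a typo in a Disallow path before it goes live.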