Indexing & Crawling
How search engines discover, crawl, and index programmatic pages at scale.
Crawl Budget
The number of pages search engines will crawl on your site within a given timeframe.
Crawl Rate
The speed at which search engine bots request pages from your site.
Robots.txt
A file instructing search engine crawlers which pages or sections to access or avoid.
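A minimal robots.txt sketch; the paths and domain below are hypothetical examples, not recommended defaults:

```text
# Hypothetical robots.txt for a programmatic site
User-agent: *
Disallow: /search/
Disallow: /*?sort=
Allow: /

Sitemap: https://example.com/sitemap-index.xml
```

Note that robots.txt controls crawling, not indexing: a blocked URL can still be indexed if other sites link to it.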
XML Sitemaps
Files listing URLs to help search engines discover and prioritize pages for crawling.
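A sketch of the standard sitemap format defined at sitemaps.org (URLs are hypothetical). Each file may list up to 50,000 URLs and must stay under 50 MB uncompressed:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://example.com/widgets/blue-widget</loc>
    <lastmod>2024-01-15</lastmod>
  </url>
</urlset>
```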
Sitemap Index
A file referencing multiple sitemaps, essential for large programmatic sites.
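When a programmatic site exceeds the per-file limits, child sitemaps are grouped under an index like this sketch (filenames are hypothetical):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <sitemap>
    <loc>https://example.com/sitemap-products-1.xml</loc>
  </sitemap>
  <sitemap>
    <loc>https://example.com/sitemap-products-2.xml</loc>
  </sitemap>
</sitemapindex>
```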
Internal Linking
Links between pages on the same domain, distributing authority and guiding crawlers.
Link Equity
The ranking value passed through links from one page to another.
PageRank
Google's algorithm measuring page importance based on quantity and quality of links.
Crawl Depth
The number of clicks from the homepage required to reach a page, affecting crawl priority.
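Click depth can be computed as a breadth-first search over the internal-link graph. A minimal sketch, assuming you already have the link graph as a dict (the site structure below is hypothetical):

```python
from collections import deque

def crawl_depths(links, home):
    """BFS from the homepage; depth = minimum clicks to reach each page.
    `links` maps a page to the pages it links to (hypothetical site graph)."""
    depths = {home: 0}
    queue = deque([home])
    while queue:
        page = queue.popleft()
        for target in links.get(page, []):
            if target not in depths:
                depths[target] = depths[page] + 1
                queue.append(target)
    return depths

# Hypothetical internal-link graph
site = {
    "/": ["/category", "/about"],
    "/category": ["/product-a", "/product-b"],
    "/product-a": ["/product-b"],
}
print(crawl_depths(site, "/"))
# → {'/': 0, '/category': 1, '/about': 1, '/product-a': 2, '/product-b': 2}
```

Pages deeper than three or four clicks are commonly crawled less often, which is why flat architectures matter for large programmatic sites.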
Orphan Pages
Pages without internal links pointing to them, making discovery difficult.
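Orphans can be detected by diffing the sitemap's URL list against the set of internally linked URLs. A sketch under the assumption that both datasets are already extracted (all URLs are hypothetical):

```python
def find_orphans(sitemap_urls, link_graph, home="/"):
    """Pages listed in the sitemap that no internal link points to.
    `link_graph` maps each page to the pages it links to; the homepage
    is excluded since it needs no inbound internal link."""
    linked = {t for targets in link_graph.values() for t in targets}
    return sorted(set(sitemap_urls) - linked - {home})

sitemap = ["/", "/a", "/b", "/lonely"]
links = {"/": ["/a", "/b"], "/a": ["/b"]}
print(find_orphans(sitemap, links))  # → ['/lonely']
```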
Index Bloat
Having too many low-quality pages indexed, wasting crawl budget and diluting overall site quality.
Noindex Tags
Meta directives preventing search engines from indexing specific pages.
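The standard form is a robots meta tag in the page head; "follow" keeps the page's links crawlable even though the page itself stays out of the index:

```html
<meta name="robots" content="noindex, follow">
```

For non-HTML resources, the same directive can be sent as an X-Robots-Tag HTTP response header.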
Nofollow Links
Links with rel='nofollow' that don't pass ranking signals to the destination.
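An example link (destination is hypothetical); Google also recognizes the more specific rel="sponsored" and rel="ugc" attributes:

```html
<a href="https://example.com/untrusted" rel="nofollow">link</a>
```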
Google Search Console
Google's free tool for monitoring search performance and indexing status.
Indexing API
APIs allowing programmatic submission of URLs for faster indexing.
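Google's Indexing API accepts a POST to its publish endpoint with a small JSON body; note that Google officially limits it to pages with JobPosting or BroadcastEvent markup, and a real call needs an OAuth 2.0 bearer token (not shown). A sketch that only builds the notification body (the page URL is hypothetical):

```python
import json

# Publish endpoint per Google's Indexing API (v3) documentation
ENDPOINT = "https://indexing.googleapis.com/v3/urlNotifications:publish"

def build_notification(url, updated=True):
    """JSON body telling Google a URL was updated or removed."""
    return json.dumps({
        "url": url,
        "type": "URL_UPDATED" if updated else "URL_DELETED",
    })

body = build_notification("https://example.com/jobs/widget-engineer")
print(body)
```

Bing and other engines support the separate IndexNow protocol for the same purpose.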
URL Inspection
Search Console feature for checking how Google sees and indexes specific pages.
Pagination SEO
Managing paginated series to avoid duplicate content and wasted crawl budget.
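Since Google stopped using rel="next"/rel="prev" as an indexing signal in 2019, the common approach is to let each page in the series canonicalize to itself rather than to page one. A sketch (URL is hypothetical):

```html
<!-- Page 2 of a paginated category; each page declares its own canonical -->
<link rel="canonical" href="https://example.com/widgets?page=2">
```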
Faceted Navigation
Filter systems creating numerous URL combinations requiring careful crawl management.
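One common mitigation is blocking crawl of filter-parameter URLs in robots.txt while leaving the base category pages crawlable; the parameter names below are hypothetical:

```text
# Hypothetical rules keeping crawlers out of faceted filter combinations
User-agent: *
Disallow: /*?color=
Disallow: /*?price=
Disallow: /*&sort=
```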