
Google Indexing for Next.js and React: A Developer's SEO Guide

12 min read · Rampify Team

Tags: google-indexing, nextjs-seo, react-seo, google-search-console, spec-driven-development

Next.js gives you the tools to build indexable sites. But having the tools and using them correctly are different things. A surprising number of Next.js applications have indexing problems — dynamic routes missing from sitemaps, client-side navigation that doesn't update metadata, Suspense boundaries that get indexed as loading states.

This guide covers how Google actually crawls JavaScript applications, which Next.js rendering strategies work best for SEO, and the specific configuration patterns that prevent indexing problems. If you're building with React and want your pages in Google's index, this is the technical reference.

How Google Crawls JavaScript: Two-Phase Indexing#

Google doesn't just fetch your HTML and call it done. For JavaScript applications, indexing happens in two phases:

Phase 1: HTML Crawl. Googlebot fetches the initial HTML response — exactly what curl would return. For server-rendered pages (SSG/SSR), this HTML contains your content. For client-side rendered (CSR) pages, this HTML is often an empty <div id="root"> with script tags.

Phase 2: JavaScript Rendering. Google places your page in a rendering queue and runs it through a headless Chromium instance (WRS — Web Rendering Service). This executes your JavaScript, waits for the page to settle, and captures the final DOM.

The Rendering Queue Has a Delay

The gap between Phase 1 and Phase 2 can be seconds to days. Google prioritizes rendering based on perceived page importance. New pages on low-authority sites may wait in the rendering queue for an extended period. During this time, only the initial HTML is evaluated for indexing decisions.

This two-phase process is why rendering strategy matters for SEO. If your critical content is in the initial HTML (SSG/SSR), Google can evaluate it immediately. If it requires JavaScript execution (CSR), Google has to wait for Phase 2 — and might make a preliminary indexing decision based on the empty Phase 1 HTML.
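You can sanity-check Phase 1 yourself: fetch a page's raw HTML (what curl returns) and test whether meaningful text is present before any JavaScript runs. A minimal heuristic sketch (the tag-stripping regexes and the 50-character threshold are rough assumptions, not a robust HTML parser):

```typescript
// Heuristic: does this HTML look like a CSR empty shell?
// i.e. roughly what Googlebot evaluates in Phase 1, before rendering.
function looksLikeEmptyShell(html: string): boolean {
  const visibleText = html
    .replace(/<script[\s\S]*?<\/script>/gi, ' ') // drop inline JS
    .replace(/<style[\s\S]*?<\/style>/gi, ' ')   // drop inline CSS
    .replace(/<[^>]+>/g, ' ')                    // drop remaining tags
    .replace(/\s+/g, ' ')
    .trim();
  return visibleText.length < 50; // arbitrary threshold
}

// A CSR shell fails the check; a server-rendered page passes
const csrShell =
  '<html><body><div id="root"></div><script src="/app.js"></script></body></html>';
const ssrPage =
  '<html><body><article><h1>Google Indexing for Next.js</h1>' +
  '<p>Next.js gives you the tools to build indexable sites, and this page ships them in HTML.</p></article></body></html>';
```

Run it against your production URLs: if the server response trips the check, that page depends entirely on Phase 2.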

Rendering Strategies and SEO Impact#

| | Client-Side Rendering (CSR) | SSG / SSR (Next.js default) |
| --- | --- | --- |
| Initial HTML | empty shell | full content |
| Content visible to Google | after Phase 2 | immediately |
| Metadata in initial response | often missing | always present |
| Indexing reliability | lowest | highest |
| Time to index | slowest | fastest |
| Verdict | avoid for SEO pages | best for SEO pages |

Static Site Generation (SSG) — Best for SEO#

Pages generated at build time. The HTML is complete before any user (or bot) requests it. Google gets the full page on the first fetch.

// app/blog/[slug]/page.tsx — SSG with generateStaticParams
export async function generateStaticParams() {
  const posts = await getAllPosts();
  return posts.map(post => ({ slug: post.slug }));
}
 
export default async function BlogPost({ params }: { params: Promise<{ slug: string }> }) {
  const { slug } = await params;
  const post = await getPost(slug);
  return <article>{/* full content in HTML */}</article>;
}

SEO advantage: The HTML contains everything. No rendering delay. No JavaScript dependency. Google indexes what it fetches.

Server-Side Rendering (SSR) — Good for SEO#

Pages rendered on each request. Same SEO benefit as SSG (full HTML), but with a response time cost. Use for pages with frequently changing data.

// app/dashboard/page.tsx — SSR (dynamic by default when using cookies/headers)
export const dynamic = 'force-dynamic';
 
export default async function Dashboard() {
  const data = await fetchLatestData();
  return <div>{/* server-rendered content */}</div>;
}

SEO advantage: Same as SSG for indexing purposes. The trade-off is server cost and TTFB. For pages where freshness matters more than performance (stock prices, inventory counts), SSR is the right choice.

Client-Side Rendering (CSR) — Worst for SEO#

Content loaded via JavaScript after the initial HTML. Requires Google's Phase 2 rendering to see any content.

// This pattern is problematic for SEO
'use client';
import { useEffect, useState } from 'react';
 
export default function ProductPage() {
  const [product, setProduct] = useState(null);
 
  useEffect(() => {
    fetch('/api/product/123').then(r => r.json()).then(setProduct);
  }, []);
 
  if (!product) return <div>Loading...</div>;
  return <div>{product.name}</div>;
  // Google's Phase 1 sees: <div>Loading...</div>
  // Google's Phase 2 sees: <div>Product Name</div>
  // If Phase 2 is delayed, Google may index the loading state
}

CSR Is Not Unfindable, Just Unreliable

Google can render JavaScript. CSR pages do get indexed. But they're indexed less reliably and less quickly than server-rendered pages. For pages where search visibility matters — blog posts, product pages, landing pages — never rely on CSR alone.

Next.js App Router SEO Checklist#

1. Metadata API#

Next.js 13+ provides a built-in Metadata API. Use it for every page that should be indexed.

// app/blog/[slug]/page.tsx
import { Metadata } from 'next';
 
export async function generateMetadata(
  { params }: { params: Promise<{ slug: string }> }
): Promise<Metadata> {
  const { slug } = await params;
  const post = await getPost(slug);
 
  return {
    title: post.title,
    description: post.description,
    alternates: {
      canonical: `https://example.com/blog/${slug}`
    },
    openGraph: {
      title: post.title,
      description: post.description,
      type: 'article',
      publishedTime: post.date,
      url: `https://example.com/blog/${slug}`,
      images: [{ url: post.ogImage, width: 1200, height: 630 }]
    },
    twitter: {
      card: 'summary_large_image',
      title: post.title,
      description: post.description,
      images: [post.twitterImage]
    }
  };
}

2. sitemap.ts#

Dynamic sitemap generation ensures every published page is discoverable:

// app/sitemap.ts
import { MetadataRoute } from 'next';
 
export default async function sitemap(): Promise<MetadataRoute.Sitemap> {
  const posts = await getPublishedPosts();
  const docs = await getDocPages();
 
  return [
    { url: 'https://example.com', lastModified: new Date(), priority: 1.0 },
    { url: 'https://example.com/pricing', lastModified: new Date(), priority: 0.9 },
    ...posts.map(post => ({
      url: `https://example.com/blog/${post.slug}`,
      lastModified: new Date(post.updatedAt),
      priority: 0.7
    })),
    ...docs.map(doc => ({
      url: `https://example.com/docs/${doc.slug}`,
      lastModified: new Date(doc.updatedAt),
      priority: 0.6
    }))
  ];
}

3. robots.ts#

Control what Google can and can't crawl:

// app/robots.ts
import { MetadataRoute } from 'next';
 
export default function robots(): MetadataRoute.Robots {
  return {
    rules: [
      {
        userAgent: '*',
        allow: '/',
        disallow: ['/api/', '/dashboard/', '/settings/']
      }
    ],
    sitemap: 'https://example.com/sitemap.xml'
  };
}

4. Canonical URLs#

Every indexable page needs a canonical URL. In Next.js, use the alternates field in Metadata:

// This prevents duplicate content issues from:
// - www vs non-www
// - Trailing slash variations
// - Query parameter variations
// - HTTP vs HTTPS
 
export const metadata: Metadata = {
  alternates: {
    canonical: 'https://example.com/blog/my-post' // always absolute, always HTTPS
  }
};
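If canonical URLs are assembled in more than one place (metadata, sitemap, JSON-LD), a single helper keeps them from drifting apart. A sketch of the normalization this section describes; the non-www preference is an assumption, so canonicalize to whichever host you actually serve:

```typescript
// Normalize any variant to one canonical form: HTTPS, non-www,
// no query string, no fragment, no trailing slash.
function canonicalUrl(input: string): string {
  const url = new URL(input);
  const host = url.hostname.replace(/^www\./, ''); // assumption: non-www is canonical
  const path = url.pathname.replace(/\/+$/, '');   // strip trailing slashes
  return `https://${host}${path}`;
}
```

Calling it on every emitted URL collapses the four duplicate-content sources above into one form, e.g. `canonicalUrl('http://www.example.com/blog/my-post/?ref=x')` yields `https://example.com/blog/my-post`.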

5. JSON-LD Structured Data#

Add schema markup for rich results in Google Search:

// app/blog/[slug]/page.tsx
export default async function BlogPost({ params }: { params: Promise<{ slug: string }> }) {
  const { slug } = await params;
  const post = await getPost(slug);
 
  const jsonLd = {
    '@context': 'https://schema.org',
    '@type': 'Article',
    headline: post.title,
    description: post.description,
    datePublished: post.date,
    dateModified: post.updatedAt,
    author: { '@type': 'Organization', name: 'Your Team' },
    publisher: {
      '@type': 'Organization',
      name: 'Your Company',
      logo: { '@type': 'ImageObject', url: 'https://example.com/logo.png' }
    },
    image: `https://example.com${post.ogImage}`,
    mainEntityOfPage: `https://example.com/blog/${slug}`
  };
 
  return (
    <>
      <script
        type="application/ld+json"
        dangerouslySetInnerHTML={{ __html: JSON.stringify(jsonLd) }}
      />
      <article>
        <h1>{post.title}</h1>
        {/* content */}
      </article>
    </>
  );
}

Common Next.js Indexing Problems#

Problem 1: Dynamic Routes Not in Sitemap#

If you use dynamic routes ([slug], [id]) but don't implement generateStaticParams or include them in your sitemap, Google has to discover these URLs through crawling alone. Newly added pages may take weeks to be found.

Fix: Always pair dynamic routes with either generateStaticParams (for SSG) or explicit sitemap inclusion:

// If you can't use generateStaticParams (too many pages, data changes too often),
// at least include all URLs in sitemap.ts
export default async function sitemap(): Promise<MetadataRoute.Sitemap> {
  const allProducts = await getAllProductSlugs(); // even dynamic ones
  return allProducts.map(slug => ({
    url: `https://example.com/products/${slug}`,
    lastModified: new Date()
  }));
}

Problem 2: Client-Side Navigation Without Metadata Updates#

Next.js client-side navigation (via <Link>) doesn't trigger a full page load. If your metadata relies on client-side state rather than the Metadata API, Google won't see updated titles and descriptions when crawling.

Fix: Always use the server-side Metadata API (generateMetadata or metadata export). Never rely on useEffect to set document.title for pages that need to be indexed.

// BAD: Client-side title update
'use client';
useEffect(() => { document.title = product.name; }, [product]);
 
// GOOD: Server-side metadata
export async function generateMetadata({ params }) {
  const { id } = await params;
  const product = await getProduct(id);
  return { title: product.name, description: product.description };
}

Problem 3: Suspense Loading States Getting Indexed#

Next.js Suspense boundaries render a fallback while data loads. If Google's renderer encounters a Suspense boundary that hasn't resolved, it may index the fallback content instead of the actual content.

// RISKY: If Google renders before Suspense resolves
<Suspense fallback={<div>Loading product details...</div>}>
  <ProductDetails id={productId} />
</Suspense>
 
// SAFER: Move the data fetch to the page level (server component)
export default async function ProductPage({ params }) {
  const { id } = await params;
  const product = await getProduct(id); // resolves before HTML is sent
  return <ProductDetails product={product} />;
}

Check Google's Rendered Version

Use URL Inspection in Google Search Console to see exactly what Google rendered. Click "View Crawled Page" then "Screenshot" to see Google's view. If you see loading spinners or skeleton screens, your Suspense boundaries are being indexed before resolving.

Problem 4: Redirect Chains from Trailing Slashes#

Next.js can be configured to add or remove trailing slashes. If your configuration doesn't match your canonical URLs and internal links, you get redirect chains that waste crawl budget.

// next.config.ts
const nextConfig = {
  trailingSlash: false, // or true — pick one and be consistent
 
  // If trailingSlash: false, ensure:
  // - Canonical URLs don't have trailing slashes
  // - Internal <Link> hrefs don't have trailing slashes
  // - Sitemap URLs don't have trailing slashes
};

The redirect chain problem:

/blog/my-post/ → 308 → /blog/my-post → 200

Google follows the redirect, but each hop costs crawl budget. If you have thousands of pages with trailing slash mismatches, the wasted budget adds up.
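With trailingSlash: false, a small helper applied to internal hrefs and sitemap URLs guarantees no link ever triggers that 308 hop. A sketch, assuming the non-trailing-slash policy above:

```typescript
// Enforce the trailingSlash: false policy before hrefs reach <Link>
function normalizeHref(href: string): string {
  if (href === '/') return href;    // the root path keeps its slash
  return href.replace(/\/+$/, '');  // '/blog/my-post/' -> '/blog/my-post'
}
```

The inverse policy (trailingSlash: true) needs the mirror-image helper; either way, pick one and run every internal URL through it.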

Problem 5: Missing Image Alt Text and Dimensions#

Images without alt text miss an indexing signal, and images without width/height cause layout shift (hurting Core Web Vitals):

// BAD
<img src="/hero.jpg" />
 
// GOOD — Next.js Image component handles this
import Image from 'next/image';
<Image
  src="/hero.jpg"
  alt="Dashboard showing Google Search Console data in the code editor"
  width={1200}
  height={630}
  priority // for above-the-fold images
/>

React SPAs: Hard Mode#

If you're building a pure React SPA (Create React App, Vite, etc.) without a server-rendering framework, SEO is significantly harder.

The core problem: Your initial HTML is an empty shell. Google must render JavaScript to see any content. While Google can do this, it's slower and less reliable than server-rendered HTML.

Options for React SPAs:

  1. Migrate to Next.js — The most reliable solution. Next.js gives you SSG/SSR with minimal changes to your React components.
  2. Pre-rendering service — Tools like Prerender.io serve a cached, rendered version of your page to bots. Adds latency and a dependency.
  3. Static pre-rendering at build time — Tools like react-snap crawl your SPA at build time and save the rendered HTML. Works for content that doesn't change per-user.
  4. Accept the limitation — For internal dashboards and authenticated apps, SEO doesn't matter. Use CSR without guilt.

When CSR Is Fine

Not every page needs to be indexed. Authenticated dashboards, admin panels, user settings — these should be behind a login and excluded from indexing via noindex or robots.txt. Don't add SSR complexity for pages that search engines shouldn't see.
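For those authenticated routes, the same Metadata API emits the noindex signal. A minimal sketch, placed in a layout so every page under the route inherits it:

```typescript
// app/dashboard/layout.tsx — keep authenticated pages out of the index
import type { Metadata } from 'next';

export const metadata: Metadata = {
  robots: { index: false, follow: false }
};
```

Pair this with the robots.ts disallow rules shown earlier: robots.txt stops crawling, while the noindex meta tag stops indexing of any URL Google already knows about.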

Monitoring with Rampify#

Rampify's MCP tools are built for exactly this workflow — catching Next.js indexing problems from your editor:

# In Claude Code or Cursor
 
"Scan my Next.js site for SEO issues"
 
# AI calls get_issues() → returns:
# - Pages missing metadata
# - Missing canonical URLs
# - Pages not in sitemap
# - Broken internal links
# - Missing JSON-LD schema
# - Missing image alt text
 
"Check my GSC data for indexing problems"
 
# AI calls get_gsc_insights() → returns:
# - Pages crawled but not indexed
# - Rendering issues detected
# - Pages with declining impressions
# - Opportunities based on search data

The MCP server knows your codebase structure. When it identifies a missing generateMetadata export, it can tell you the exact file path and show you the implementation pattern. No context-switching to a dashboard. No guessing which files to change.

For the complete MCP setup, see Connecting GSC to Your AI Coding Tools via MCP. For understanding the broader indexing landscape, start with our Google Indexing developer guide.

Try Spec-Driven Development with Rampify

Scan your site for SEO issues, pull GSC data into your editor, and create structured specs — all from your AI coding tools. No dashboard tab required.

Get Started Free