Pillar guide · 15 min read

Modern SEO and Performance Guide

What still works in classical SEO, what changed, and how to build a Next.js site that ranks and stays ranked.

Format: Pillar guide · Updated: Apr 15, 2026 · Read time: 15 min read

TL;DR

Modern SEO in 2026 is technical SEO done right (crawl, index, schema, Core Web Vitals), paired with content engineered for both classical ranking and AI citation. The technical foundation has not changed much (it is still server-rendered HTML, clean schema, fast pages) but the targets have risen: INP under 150ms, LCP under 1.8s, schema validated in CI. Next.js 16 App Router, used carefully, gives a clean platform. Replatforms succeed or fail on URL parity and redirect maps. FPWS treats SEO as engineering work, not content marketing.

01

Technical SEO baseline, 2026 edition

Technical SEO in 2026 is the engineering foundation under everything else: clean crawlability (robots.txt, sitemap.xml, IndexNow), correct indexability (canonical, hreflang, noindex where appropriate), security (HTTPS, HSTS, CSP, security headers), URL structure (lowercase, hyphenated, no trailing slash, no parameters in canonicals), JavaScript rendering (server-side or static, never CSR-only), and crawl efficiency (no orphan pages, no infinite parameter spaces, no duplicate content from filter combinations).

The fundamentals have not moved much. Crawl, index, render, rank. The targets have risen: Google's render budget is tighter, Bing's IndexNow protocol is now table stakes (ping it on every article publish), and security headers are increasingly weighted as a quality signal alongside HTTPS itself.
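The IndexNow ping itself is a small POST to the shared endpoint. A minimal sketch, assuming the key file is hosted at the site root per the protocol (host, key, and URLs below are placeholders):

```typescript
// Build the JSON payload the IndexNow endpoint expects.
export function buildIndexNowPayload(host: string, key: string, urls: string[]) {
  return {
    host,
    key,
    // Per the protocol, the key file must be reachable at this location.
    keyLocation: `https://${host}/${key}.txt`,
    urlList: urls,
  };
}

// Fire the ping on publish; 200/202 mean the submission was accepted.
export async function pingIndexNow(host: string, key: string, urls: string[]) {
  const res = await fetch("https://api.indexnow.org/indexnow", {
    method: "POST",
    headers: { "Content-Type": "application/json; charset=utf-8" },
    body: JSON.stringify(buildIndexNowPayload(host, key, urls)),
  });
  return res.status;
}
```

Wire `pingIndexNow` into the publish hook of the CMS or deploy pipeline so every new article is submitted immediately rather than waiting on crawl.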

The single most common technical failure we audit is JavaScript-rendered primary content. Frameworks default to server-side rendering for a reason. If a page's main content only resolves after a client-side fetch, Googlebot may render it (eventually, on a delay) but other crawlers (Bingbot, AI bots, Facebook's link preview) often will not. Server-render the main content always. Use client-side hydration for interactivity, not for the words.

FPWS ships every site with a typed schema module, a sitemap generated at build time from the routing tree (with lastmod from git history), IndexNow pinging on every article publish, and security headers configured in next.config.js. None of this is exotic. All of it is too rarely done.
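The security-header setup is a short config block. A representative baseline, shown as next.config.ts (Next.js also accepts next.config.js); the exact header values are illustrative, not FPWS's production config:

```typescript
// next.config.ts — apply a baseline set of security headers to every route.
import type { NextConfig } from "next";

const securityHeaders = [
  // HSTS: force HTTPS for two years, including subdomains.
  { key: "Strict-Transport-Security", value: "max-age=63072000; includeSubDomains; preload" },
  // Stop MIME-type sniffing.
  { key: "X-Content-Type-Options", value: "nosniff" },
  // Send origin only on cross-origin navigation.
  { key: "Referrer-Policy", value: "strict-origin-when-cross-origin" },
  // Lock down powerful browser features this site does not use.
  { key: "Permissions-Policy", value: "camera=(), microphone=(), geolocation=()" },
];

const nextConfig: NextConfig = {
  async headers() {
    // ":path*" matches every route, so the headers ship site-wide.
    return [{ source: "/:path*", headers: securityHeaders }];
  },
};

export default nextConfig;
```

A Content-Security-Policy belongs in the same list once its sources are audited; shipping a permissive CSP is worse than shipping none.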

02

Schema architecture, single source of truth

Schema architecture in 2026 means a single typed module exporting builder functions for each schema type, with stable @id URIs, cross-page entity references, and validation in CI. Hand-written JSON-LD per page is an anti-pattern. The schema-dts npm package gives full TypeScript typing for Schema.org, structured-data-testing-tool validates in CI, and Google's Rich Results test serves as the post-deploy checkpoint. Schema is engineering, not metadata sprinkled in by hand.

The pattern we ship: a single schema.ts module that exports buildOrganization(), buildPerson(), buildArticle(), buildLocalBusiness(), buildService(), buildBreadcrumb(), buildFAQ(), and so on. Each builder returns a typed object (using schema-dts types). Each entity has a stable @id like https://domain.com/#organization. Pages compose schemas by calling the builders and dropping the result into a single JSON-LD script tag.
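A sketch of that schema.ts shape, using plain objects for brevity (in practice the builders return schema-dts types such as WithContext&lt;Organization&gt;; the domain and names below are placeholders):

```typescript
// schema.ts (sketch) — builders returning JSON-LD with stable @id URIs.
const SITE = "https://example.com"; // placeholder domain

export function buildOrganization() {
  return {
    "@context": "https://schema.org",
    "@type": "Organization",
    "@id": `${SITE}/#organization`, // stable @id other entities reference
    name: "Example Co",
    url: SITE,
  };
}

export function buildArticle(opts: { slug: string; headline: string; datePublished: string }) {
  return {
    "@context": "https://schema.org",
    "@type": "Article",
    "@id": `${SITE}/${opts.slug}/#article`,
    headline: opts.headline,
    datePublished: opts.datePublished,
    // Cross-page entity reference by @id rather than duplicating the object.
    publisher: { "@id": `${SITE}/#organization` },
  };
}
```

A page composes its graph by calling the builders it needs and serializing the array into one JSON-LD script tag; because references go through `@id`, renaming the organization touches one builder, not every page.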

The benefits: types catch typos at compile time, refactors update every page that references the entity, the schema graph stays internally consistent, and post-deploy validation finds drift fast. The cost: an afternoon of upfront work per project. Pays for itself within a sprint.

Schema types worth shipping by default: Organization, WebSite (with potentialAction SearchAction), Person (founder), BreadcrumbList (every page), Article (every long-form piece), FAQPage (where Q&A appears), Service (each service page), and the relevant LocalBusiness subtype if applicable. Resist adding schema for the sake of it. Recipe schema on a service page is noise.

03

Core Web Vitals in 2026, what still matters

Core Web Vitals in 2026 are LCP (Largest Contentful Paint), INP (Interaction to Next Paint, which replaced FID in 2024), and CLS (Cumulative Layout Shift). FPWS targets LCP under 1.8s, INP under 150ms, and CLS under 0.05 on mid-tier mobile, measured via real user monitoring (RUM) not lab-only Lighthouse. The thresholds Google calls 'Good' are looser; we hold tighter targets because real-world performance compounds into rankings, conversion, and AI-citation eligibility.

INP is the metric that bites most teams. It measures the worst interaction delay on the page (input to next paint), and most React apps have at least one slow interaction lurking (a search filter, a modal open, a tab switch) that pushes INP past 200ms. The fix is usually breaking up long tasks with scheduler.yield(), deferring non-critical work, and pruning the JavaScript bundle.
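Breaking up a long task can be sketched like this: process work in chunks and yield to the main thread between them so pending input handlers can run. scheduler.yield() is still Chromium-only, so the helper falls back to setTimeout; the chunk size of 100 is an arbitrary starting point to tune:

```typescript
// Yield to the main thread between chunks so input events can be handled.
const yieldToMain = (): Promise<void> =>
  typeof (globalThis as any).scheduler?.yield === "function"
    ? (globalThis as any).scheduler.yield()
    : new Promise((resolve) => setTimeout(resolve, 0));

// Split an array into fixed-size chunks.
export function chunk<T>(items: T[], size: number): T[][] {
  const out: T[][] = [];
  for (let i = 0; i < items.length; i += size) out.push(items.slice(i, i + size));
  return out;
}

// Example long task (filtering a big list) rewritten to yield between chunks,
// so no single task blocks the main thread past the INP budget.
export async function filterInChunks<T>(
  items: T[],
  predicate: (item: T) => boolean,
  chunkSize = 100
): Promise<T[]> {
  const result: T[] = [];
  for (const part of chunk(items, chunkSize)) {
    for (const item of part) if (predicate(item)) result.push(item);
    await yieldToMain(); // let queued input handlers and paints run
  }
  return result;
}
```

The same shape applies to modal opens and tab switches: do the minimum synchronously, paint, then finish the rest after a yield.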

LCP is mostly an asset problem in 2026. Use Next/Image with explicit width and height, AVIF first then WebP, responsive sizes attribute, and a blur placeholder for the hero image only. Self-host fonts, subset to the characters the page actually uses, preload only above-fold weights, and font-display: swap. Hero LCP under 1.8s on 4G mid-tier mobile is achievable with discipline, not heroics.

CLS is the easiest to fix and the most often missed. Reserve space for every image, ad, and embed. Avoid injecting content above existing content after page load. Use CSS aspect-ratio for media containers. Most CLS regressions we audit come from late-injected cookie banners or chat widgets pushing content down.

The measurement that matters is RUM, not Lighthouse. Lighthouse is useful for catching obvious regressions in CI; Vercel Analytics or Search Console's Core Web Vitals report tells you what real users on real devices actually experience. The two often disagree by 30 percent or more. Trust RUM.

  • LCP under 1.8s on mid-tier mobile (target tighter than Google's 2.5s 'Good' threshold)
  • INP under 150ms (Google's 'Good' is 200ms; we hold tighter)
  • CLS under 0.05
  • Measured via RUM (Vercel Analytics, Search Console), not lab Lighthouse alone
  • JavaScript budget under 120KB gzipped on home, under 80KB on article pages

04

Next.js 16 App Router SEO, the patterns that work

Next.js 16 App Router gives a clean SEO foundation when used correctly: server components by default, generateMetadata for per-page title and description, generateStaticParams for known dynamic routes, generateSitemaps for large sitemaps, and site-wide metadata defaults exported from the root layout. Server-side rendering is the default. Avoid 'use client' for content components. Use revalidateTag and updateTag for fresh content without sacrificing static generation.

The App Router patterns that materially help SEO: generateMetadata in every page.tsx for typed per-page metadata (title, description, canonical via alternates.canonical, openGraph, twitter), site-wide defaults exported from the root layout that pages can extend, and the sitemap.ts file convention for sitemaps generated at build time from the route tree.
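A minimal generateMetadata sketch for a dynamic route. getPost is a stand-in for a real data layer, the domain is a placeholder, and the return value is typed inline here to stay self-contained (in a real project it would be Next's Metadata type):

```typescript
// app/blog/[slug]/page.tsx (sketch)
type Post = { slug: string; title: string; summary: string };

// Placeholder lookup; a CMS or database query in practice.
async function getPost(slug: string): Promise<Post> {
  return { slug, title: `Post: ${slug}`, summary: "Summary for " + slug };
}

export async function generateMetadata({
  params,
}: {
  params: Promise<{ slug: string }>; // params is a Promise in recent Next.js
}) {
  const { slug } = await params;
  const post = await getPost(slug);
  return {
    title: post.title,
    description: post.summary,
    alternates: { canonical: `https://example.com/blog/${slug}` },
    openGraph: { title: post.title, description: post.summary, type: "article" },
  };
}
```

Because generateMetadata runs on the server at render time, the title, description, and canonical land in the initial HTML, not after hydration.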

Cache Components in Next.js 16 (the use cache directive, cacheLife, cacheTag) let you ship pages that are statically rendered for crawlers but revalidate cleanly when content changes. Pair it with updateTag on the API route or server action that mutates the underlying data. The result: SEO-friendly static rendering with no stale-content lag.

Avoid putting primary content inside client components. 'use client' has its place (form interactions, motion, anything stateful), but the words on the page should render server-side. The pattern we ship: server component owns the layout and content; client components are leaf nodes for interactivity only.

On dynamic routes, use generateStaticParams to pre-render the known set at build time, and let the rest fall through to dynamic rendering with caching. Do not let crawlers hit a slow database query on every request.
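That pre-render-the-known-set pattern is a few lines. getAllSlugs is a hypothetical stand-in for your data layer:

```typescript
// app/blog/[slug]/page.tsx (sketch) — getAllSlugs stands in for a CMS query.
async function getAllSlugs(): Promise<string[]> {
  return ["core-web-vitals", "schema-architecture"];
}

// Pre-render every known slug at build time.
export async function generateStaticParams() {
  const slugs = await getAllSlugs();
  return slugs.map((slug) => ({ slug }));
}

// Slugs not returned above fall through to dynamic rendering (then cache),
// so crawlers hitting an unknown-but-valid URL still get a page.
export const dynamicParams = true;
```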

05

Internal linking, the underrated lever

Internal linking is the discipline of routing PageRank, crawl budget, and topical authority through the site's link graph. Strong internal linking uses descriptive anchor text (not 'click here'), surfaces related content contextually inside articles, builds hub-and-spoke topic clusters around pillar pages, and resolves orphan pages systematically. It is the cheapest, highest-leverage SEO work most sites neglect because it is invisible until it compounds.

The hub-and-spoke pattern is the workhorse: a pillar page covers a topic broadly (3,000 to 4,000 words), and 6 to 12 supporting articles each go deep on a sub-topic and link back to the pillar with descriptive anchor text. The pillar links out to each supporting article. The pattern routes authority efficiently, signals topical depth to search engines, and gives AI engines a clean entity-and-topic graph to attach citations to.

Anchor text matters more than most teams think. Generic 'learn more' or 'read more' anchors give crawlers nothing. Descriptive anchors that name the destination concept ("see our guide on Core Web Vitals in 2026") pass topical signals along the link. Aim for descriptive, varied (not exact-match-keyword on every link, which looks manipulative), and contextually placed inside paragraphs rather than as standalone link lists.

Orphan pages (pages with no internal links pointing to them) are the most common internal-linking failure. They get crawled rarely, ranked weakly, and cited never. Run a monthly audit: for every indexed page, count internal links pointing to it. Anything below three deserves either more links or removal.
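The audit reduces to counting inbound edges in the link graph. A sketch, assuming a crawl export shaped as page-to-outbound-links (the function name and threshold default are illustrative):

```typescript
// Given a crawl export of { fromUrl: [outbound internal links] }, count
// inbound links per page and flag anything below the threshold.
export function findUnderLinkedPages(
  linkGraph: Record<string, string[]>,
  minInbound = 3
): string[] {
  const inbound = new Map<string, number>();
  // Seed every crawled page at zero so true orphans (zero inbound) appear.
  for (const page of Object.keys(linkGraph)) inbound.set(page, 0);
  for (const targets of Object.values(linkGraph))
    for (const target of targets)
      inbound.set(target, (inbound.get(target) ?? 0) + 1);
  return [...inbound.entries()]
    .filter(([, count]) => count < minInbound)
    .map(([page]) => page)
    .sort();
}
```

Run it monthly against a Screaming Frog or sitemap-driven crawl export; every flagged page either earns more contextual links or gets removed.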

06

Content for the citation era

Content for the citation era is content engineered for two simultaneous targets: classical search ranking and AI assistant citation. The structural changes that serve both: a 60 to 80 word TL;DR at the top of long-form pieces, every H2 followed within 200 pixels by a self-contained answer block, FAQ sections with FAQPage schema, named-author bylines linked to Person schema, and depth where depth earns its place rather than to hit a word count.

The era of "write 3,000 words on every topic so Google ranks you" is over. Google now penalizes thin or padded content explicitly (Helpful Content Update, March 2024 spam update, December 2025 quality refinement that extended E-E-A-T weighting to all competitive query types). AI engines reward concision in the citable passage and depth in the surrounding context. The right answer is to write as long as the topic earns and to engineer the structure for both targets.

The pattern we ship for every long-form piece: TL;DR (60 to 80 words, citable, answers the headline query), 5 to 9 H2 sections each with its own citable answer block plus 2 to 4 paragraphs of body, supporting bullet lists where they aid scannability, an FAQ section of 5 to 8 real questions, and a related-content block linking to the pillar and 2 to 3 sibling articles.

Author bylines are quietly important. Articles signed by a named human, with a Person schema linking to LinkedIn and a real bio, get cited more reliably than unsigned content. AI engines and classical search both weight author entity signals more than they did two years ago. Use one named author per article. Link consistently.

07

Replatform without ranking loss

Replatforming (WordPress to Next.js, Wix to Webflow, Shopify to headless) succeeds or fails on URL parity and the redirect map. The required artifacts: a CSV mapping every old URL to its new URL, 301 redirects for every changed path, canonical preservation, schema and metadata parity (title, description, OG, schema), Search Console change-of-address submission, and 90 days of post-launch monitoring. Skip any of these and rankings drop. Do all of them and traffic is preserved or grows.

The pattern that works: before writing a line of code on the new site, audit the existing site's full URL inventory (Search Console, sitemap, server logs, crawler tools). Score each URL by traffic and rankings. Decide for each: keep at the same path, move to a new path with a 301, or remove and 410. Build the redirect map as a CSV in the repo, kept under version control, deployed as a redirect config in next.config.js or middleware.
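Turning that CSV into the redirect config can be sketched as a small parser whose output matches the shape Next.js's redirects() hook expects. Assumes a two-column "old_path,new_path" file with optional `#` comment lines:

```typescript
// Parse "old_path,new_path" lines into Next.js redirect entries.
export function parseRedirectMap(csv: string) {
  return csv
    .split("\n")
    .map((line) => line.trim())
    .filter((line) => line && !line.startsWith("#")) // skip blanks and comments
    .map((line) => {
      const [source, destination] = line.split(",").map((s) => s.trim());
      return { source, destination, permanent: true }; // permanent: true => 301
    });
}
```

In next.config.js, `async redirects() { return parseRedirectMap(await fs.readFile("redirects.csv", "utf8")); }` keeps the map version-controlled and deployed atomically with the site.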

Before launch: stage the new site at a preview URL, validate every key page renders correctly, schema is preserved, metadata matches, internal links resolve. Crawl the staging site with Screaming Frog or equivalent. Fix anything that comes up.

At launch: lower DNS TTLs 24 to 48 hours in advance, deploy at a low-traffic time, monitor server logs for crawler hits, watch Search Console for any indexation drops in the first 72 hours. Submit a sitemap to GSC and BWT immediately. File a change-of-address request in Search Console if the domain itself changed.

For 90 days post-launch: weekly crawl audits, weekly Search Console review for indexation and performance, monthly ranking comparison against pre-launch baseline. Most ranking issues from a replatform surface within 2 to 6 weeks of launch. Catch them early, fix them fast, and the migration ends up neutral or positive.

08

Measurement, what to actually watch

SEO measurement in 2026 spans Search Console (impressions, clicks, position by query and page), Bing Webmaster Tools (Bing and Copilot visibility), DataForSEO (rank tracking, backlink profile, AI-citation monitoring), real-user CWV via Vercel Analytics, and conversion attribution back to organic landing. The reporting cadence that works: weekly check on traffic and CWV trend, monthly deep dive on ranking movement, AI-citation count, and lead-quality scoring. Vanity metrics (impressions alone, pageviews alone) without conversion context are noise.

Search Console is the source of truth for query data and indexation. Bing Webmaster Tools is the source of truth for Bing and Copilot. Anyone who tells you to skip BWT because "Bing doesn't matter" is wrong: Bing's index powers ChatGPT search, DuckDuckGo, and a chunk of Apple Intelligence's web layer. It matters.

DataForSEO covers the layers Google does not give you directly: live SERP data for arbitrary queries, backlink profile changes, competitor rank tracking at scale, and AI-citation monitoring for ChatGPT, Perplexity, and Google AI Overviews. We pull DataForSEO into a monthly dashboard alongside Search Console exports.

The metric that matters most for revenue is conversion attribution, not traffic. Use UTM parameters consistently, fire conversion events in Vercel Analytics (or GA4) with clean source attribution, and tie back to CRM data for lead quality scoring. The goal is not 100,000 monthly visits. The goal is the right 5,000 monthly visits, converting at the rates the business needs.
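Consistency is easiest to enforce with one helper every outbound campaign link goes through. A sketch using the standard UTM parameter names (the function name is illustrative):

```typescript
// Append the standard UTM parameters so attribution survives into
// analytics and the CRM with one consistent naming scheme.
export function withUtm(
  url: string,
  utm: { source: string; medium: string; campaign: string }
): string {
  const u = new URL(url);
  u.searchParams.set("utm_source", utm.source);
  u.searchParams.set("utm_medium", utm.medium);
  u.searchParams.set("utm_campaign", utm.campaign);
  return u.toString();
}
```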

Questions

What Core Web Vitals thresholds should you target?

Google's official 'Good' thresholds are LCP under 2.5s, INP under 200ms, and CLS under 0.1. FPWS holds tighter targets: LCP under 1.8s, INP under 150ms, CLS under 0.05, all measured via real user monitoring on mid-tier mobile devices.

Want this work done for you?

Let's talk.