Static Site Search with Fuse.js

How to add full-text search to a static site without a server

Published December 28, 2025 ET

Static sites are great. They're fast, cheap to host, and simple to deploy. But search is tricky—there's no server to query, no database to search. So how do you add search to a static site?

This is how I built the search feature on this blog.

The Approach

The solution is straightforward:

  1. At build time, generate a JSON file containing all searchable content
  2. When a user visits the search page, load that JSON file
  3. Use Fuse.js to perform fuzzy searching entirely in the browser

No server required. The search happens client-side.

Part 1: Generate the Search Index

First, I needed a build script that reads all my markdown files and outputs a JSON index. Here's the script I use:

// scripts/generate-search-index.js
const fs = require("fs");
const path = require("path");
const matter = require("gray-matter");
const { marked } = require("marked");

const contentDirectory = path.join(process.cwd(), "content");
const outputPath = path.join(process.cwd(), "public", "search-index.json");

function stripHtml(html) {
  return html
    .replace(/<[^>]*>/g, " ")
    .replace(/\s+/g, " ")
    .trim();
}

function getPostsWithContent(section) {
  const sectionPath = path.join(contentDirectory, section);

  if (!fs.existsSync(sectionPath)) {
    return [];
  }

  const files = fs.readdirSync(sectionPath).filter(f => f.endsWith(".md"));

  return files.map(filename => {
    const slug = filename.replace(".md", "");
    const filePath = path.join(sectionPath, filename);
    const fileContent = fs.readFileSync(filePath, "utf-8");
    const { data, content } = matter(fileContent);
    const htmlContent = marked.parse(content);
    const plainText = stripHtml(htmlContent);

    return {
      slug,
      section,
      title: data.title || slug,
      description: data.description || "",
      tags: data.tags || [],
      content: plainText,
      url: `/${section}/${slug}`,
    };
  });
}

function generateSearchIndex() {
  const thoughts = getPostsWithContent("thoughts");
  const knowledge = getPostsWithContent("knowledge");
  // Add more sections as needed

  return [...thoughts, ...knowledge];
}

const searchIndex = generateSearchIndex();
fs.writeFileSync(outputPath, JSON.stringify(searchIndex));
console.log(`Search index generated: ${searchIndex.length} posts indexed`);

The key steps:

  1. Read markdown files from each content directory
  2. Parse frontmatter using gray-matter to extract title, description, and tags
  3. Convert markdown to plain text by first rendering to HTML with marked, then stripping all HTML tags
  4. Output a JSON array with everything needed for search and display (see the sample entry below)
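
To make the output concrete, a single entry in the generated search-index.json might look like this (the values are invented for illustration):

[
  {
    "slug": "static-site-search",
    "section": "thoughts",
    "title": "Static Site Search with Fuse.js",
    "description": "Client-side full-text search without a server",
    "tags": ["search", "javascript"],
    "content": "Static sites are great. They're fast, cheap to host...",
    "url": "/thoughts/static-site-search"
  }
]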

Wiring it up

In package.json, I hook the script into the build using npm's pre-script convention, where a script named prebuild runs automatically before build:

{
  "scripts": {
    "prebuild": "node scripts/generate-search-index.js",
    "build": "next build"
  }
}

Now whenever I run npm run build, the search index regenerates first.
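
With only the prebuild hook in place, nothing regenerates the index while the dev server is running, so to get up-to-date search results locally the script can also be run directly:

node scripts/generate-search-index.js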

Part 2: The Search Page

The search page loads the index and uses Fuse.js for fuzzy matching. Here's the core of the React component:

import { useState, useEffect, useMemo } from "react";
import Fuse from "fuse.js";

const fuseOptions = {
  keys: [
    { name: "title", weight: 2 },
    { name: "description", weight: 1.5 },
    { name: "tags", weight: 1.5 },
    { name: "content", weight: 1 },
  ],
  threshold: 0.3,
  includeMatches: true,
  minMatchCharLength: 2,
};

export default function SearchPage() {
  const [query, setQuery] = useState("");
  const [searchIndex, setSearchIndex] = useState([]);

  // Load search index on mount
  useEffect(() => {
    fetch("/search-index.json")
      .then(res => res.json())
      .then(data => setSearchIndex(data));
  }, []);

  const fuse = useMemo(
    () => searchIndex.length > 0 ? new Fuse(searchIndex, fuseOptions) : null,
    [searchIndex]
  );

  const results = useMemo(() => {
    if (!fuse || !query.trim()) return [];
    return fuse.search(query).slice(0, 50);
  }, [fuse, query]);

  // ... render search input and results
}
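
The render is omitted above. A minimal sketch of what it could look like (markup and placeholder text here are illustrative, not the actual component); note that each Fuse result wraps the original post object as item, which is why the map destructures it:

// Inside SearchPage, in place of the render placeholder above
return (
  <div>
    <input
      type="search"
      value={query}
      onChange={e => setQuery(e.target.value)}
      placeholder="Search posts..."
    />
    <ul>
      {results.map(({ item }) => (
        <li key={item.url}>
          <a href={item.url}>{item.title}</a>
          {item.description && <p>{item.description}</p>}
        </li>
      ))}
    </ul>
  </div>
);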

Fuse.js Configuration

A few things to note about the configuration:

  • Weighted keys: Title matches are weighted 2x higher than body content. This means searching for "split" will rank an article titled "Mastering the Split" above an article that merely mentions the word.
  • Threshold of 0.3: This controls how fuzzy the matching is. Lower values require closer matches (0.0 requires a perfect match, 1.0 matches almost anything); 0.3 is a reasonable balance between finding relevant results and avoiding noise.
  • minMatchCharLength: 2: Ignores single-character matches, which are usually not useful.
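
One option the list above skips is includeMatches: true, which is there so results can be highlighted. With it enabled, each entry returned by fuse.search() looks roughly like this (values invented for illustration):

{
  item: {
    title: "Mastering the Split",
    url: "/thoughts/mastering-the-split",
    // ...the rest of the indexed fields
  },
  refIndex: 12,
  matches: [
    { key: "title", value: "Mastering the Split", indices: [[14, 18]] }
  ]
}

The indices pairs are inclusive character ranges of the matched text, which is enough to wrap hits in <mark> tags when rendering.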

Part 3: CI/CD Integration

Here's where I hit a gotcha that cost me some debugging time.

My GitHub Actions workflow was running next build directly:

- name: Build with Next.js
  run: npx next build  # This bypasses prebuild!

The problem: npm's pre hooks only run when you use npm run <script>. Running next build directly skips the prebuild step entirely.

The fix:

- name: Build with Next.js
  run: npm run build  # This runs prebuild first

Now every deploy automatically regenerates the search index.

Trade-offs

This approach has some trade-offs worth considering:

Pros:

  • No server infrastructure needed
  • Search works offline once the index is loaded
  • Fast searches (everything happens in-browser)
  • No API calls, no rate limits

Cons:

  • Index size grows with content. My ~100 posts produce a ~150KB index.
  • Users must download the entire index before searching
  • Not suitable for very large sites (thousands of posts)

For a personal blog, this works well. For a larger site, you'd want something like Algolia or a self-hosted search server.

Dependencies

For reference, here are the npm packages this setup requires:

npm install fuse.js gray-matter marked

That's it. Three dependencies, no server, full-text search.