
How Small Performance Fixes Add Up to 85% Faster

By Ian Strang · February 16, 2026

The dashboard was taking 15 seconds to load. I opened the browser's network tab and watched the requests scroll by. And scroll. And scroll.

300+ network requests. The same API endpoint called 23 times. Zero caching. Every component fetching data independently, unaware that five other components needed the same data.

It was October 2025, and Capo had a performance problem.

The Audit

The AI analyzed the network traffic and produced a damning report:

Dashboard page load:
- Total requests: 312
- Unique endpoints: 28
- /api/players called: 23 times
- /api/seasons called: 18 times
- /api/matches called: 15 times
- Caching: None
- Request deduplication: None

The architecture was straightforward but wasteful. Each React component fetched its own data:

// PlayerList component
const [players, setPlayers] = useState([]);
useEffect(() => {
  fetch('/api/players').then(r => r.json()).then(setPlayers);
}, []);

// PlayerStats component (renders at the same time)
const [players, setPlayers] = useState([]);
useEffect(() => {
  fetch('/api/players').then(r => r.json()).then(setPlayers);
}, []);

// PlayerDropdown component (also renders at the same time)
const [players, setPlayers] = useState([]);
useEffect(() => {
  fetch('/api/players').then(r => r.json()).then(setPlayers);
}, []);

Three components, three identical requests, three separate state management systems. Multiply this across the entire application, and you get 300+ requests for 28 unique endpoints.
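The core fix for this pattern is request deduplication: components asking for the same resource share a single in-flight promise. The mechanism can be sketched in a few lines of plain TypeScript (a simplified illustration of the idea, not Capo's code or React Query internals):

```typescript
// Naive in-flight request deduplication: callers asking for the same
// key while a request is pending all receive the same promise.
const inFlight = new Map<string, Promise<unknown>>();

function dedupedFetch<T>(key: string, fetcher: () => Promise<T>): Promise<T> {
  const existing = inFlight.get(key);
  if (existing) return existing as Promise<T>; // reuse the pending request

  const request = fetcher().finally(() => inFlight.delete(key));
  inFlight.set(key, request);
  return request;
}
```

Three components calling `dedupedFetch('players', ...)` in the same render pass trigger one network request instead of three.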

The Solution: React Query

React Query (now TanStack Query) solves this with automatic request deduplication and caching. Multiple components requesting the same data share a single request and cached result.

The transformation:

// Before: Each component manages its own state
const [players, setPlayers] = useState([]);
useEffect(() => {
  fetch('/api/players').then(r => r.json()).then(setPlayers);
}, []);

// After: Shared hook with automatic caching
const { data: players, isLoading } = usePlayers();

When three components call usePlayers() simultaneously, React Query makes one request and shares the result. Subsequent calls within the stale time return cached data instantly.

The Query Key Pattern

Cache isolation was critical. In a multi-tenant application, cached data from one tenant must never appear for another tenant.

The solution: include tenant ID in every query key.

export const queryKeys = {
  players: (tenantId: string | null) => ['players', tenantId] as const,
  matches: (tenantId: string | null) => ['matches', tenantId] as const,
  seasons: (tenantId: string | null) => ['seasons', tenantId] as const,
  // ... every query key includes tenantId
};

Switching tenants means new query keys, which means fresh data. No risk of showing Club A's players to Club B.

Once I explained the security implication of shared caches in a multi-tenant system, the AI applied the pattern consistently, building tenant-awareness into every hook.

The Implementation Scale

The refactoring touched most of the application:

  • 28 custom hooks created (usePlayers, useMatches, useSeasons, etc.)
  • 50+ components refactored to use hooks instead of direct fetching
  • Query key factory ensuring consistent, tenant-aware cache keys
  • Mutation patterns for create/update/delete with automatic cache invalidation

The AI handled the bulk transformation. I reviewed each change, checking for components with special requirements that the standard pattern wouldn't handle.

The Results

Screen           Before   After   Improvement
Dashboard        15.0s    1.59s   89%
Player Profiles  96.0s    5.10s   95%
Tables           10.0s    1.90s   81%
Admin Matches    6.45s    2.41s   63%
Match Control    66.0s    2-6s    96%

Average improvement: 85%

The request count dropped from 300+ to about 30. The same data, fetched once, shared everywhere.

Why Small Fixes Compound

The 85% improvement didn't come from one optimization. It came from several:

  1. Request deduplication: Multiple components share one request
  2. Caching: Subsequent page visits use cached data
  3. Stale-while-revalidate: Show cached data immediately, refresh in background
  4. Optimistic updates: UI updates before server confirms

Each optimization provides modest improvement alone. Together, they compound.
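Stale-while-revalidate is the least obvious of these, so it is worth making concrete. A minimal sketch with an injectable clock (all names illustrative; this only loosely mirrors React Query's staleTime behavior):

```typescript
// Stale-while-revalidate: always serve cached data immediately;
// if the entry is older than staleTimeMs, refresh it in the background.
type Entry<T> = { value: T; fetchedAt: number };

function makeSwrCache<T>(staleTimeMs: number, now: () => number) {
  const entries = new Map<string, Entry<T>>();
  let backgroundRefreshes = 0;
  return {
    write(key: string, value: T): void {
      entries.set(key, { value, fetchedAt: now() });
    },
    read(key: string, refetch: () => T): T | undefined {
      const entry = entries.get(key);
      if (!entry) return undefined;
      if (now() - entry.fetchedAt > staleTimeMs) {
        backgroundRefreshes += 1;
        entries.set(key, { value: refetch(), fetchedAt: now() });
      }
      return entry.value; // the caller never waits on the refresh
    },
    refreshCount: () => backgroundRefreshes,
  };
}
```

The user sees cached data instantly; the refresh happens after the read returns.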

Consider a user navigating the app:

  • First visit: Data fetched once (not 23 times)
  • Navigation to another page: Shared data already cached
  • Return to first page: Instant load from cache
  • Data mutation: Optimistic update shows immediately

The user experiences a fast, responsive application. The server handles a fraction of the previous load.

The Mutation Pattern

Before React Query, updating data meant manual cache management:

const handleSave = async () => {
  await fetch('/api/players', { method: 'POST', body: ... });
  // How do we update the UI?
  window.location.reload();  // The lazy solution
};

Full page reloads destroy the user experience. But manually updating every component that displays player data is error-prone.

React Query mutations handle this:

const mutation = useCreatePlayer();

const handleSave = () => {
  mutation.mutate(playerData, {
    onSuccess: () => {
      // React Query invalidates relevant caches
      // All components showing player data refresh automatically
    }
  });
};

The mutation knows which query keys to invalidate. Components re-render with fresh data. No page reload. No manual cache management.
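The invalidation mechanics can be sketched without React Query itself: a mutation writes to the server, then drops every cached query under a matching key prefix so readers refetch on their next render (illustrative names, simplified string keys):

```typescript
// Simplified query cache keyed by strings like 'players:club-a'.
const queryCache = new Map<string, unknown>();

// Drop every cached entry under a key prefix, e.g. all 'players' queries.
function invalidateQueries(prefix: string): void {
  for (const key of [...queryCache.keys()]) {
    if (key.startsWith(prefix)) queryCache.delete(key);
  }
}

// A mutation: persist the change, then invalidate so components
// watching player data refetch. No reload, no manual UI updates.
async function createPlayer(
  player: { name: string },
  save: (p: { name: string }) => Promise<void>,
): Promise<void> {
  await save(player);
  invalidateQueries('players');
}
```

Unrelated caches (matches, seasons) are untouched; only queries under the invalidated prefix refetch.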

The Background Processing Connection

Client-side caching was half the solution. The worker rewrite handled the server side.

Before: User clicks "Complete Match" → waits 45 seconds → sees updated stats.

After: User clicks "Complete Match" → sees success immediately → stats update in background → cache invalidation callback triggers React Query refresh → UI updates automatically.

The worker processes data asynchronously. When finished, it calls back to invalidate caches. React Query detects the invalidation and refetches. The user sees updated data without waiting for processing.

What the AI Got Wrong

The AI initially didn't understand tenant-aware cache keys. The first implementation used simple keys like ['players'], which would leak data between tenants. I had to explain the security implication explicitly.

The AI also suggested using enabled: !!tenantId to prevent queries from running without a tenant. This caused race conditions — queries would be permanently disabled if tenant context loaded slowly. The fix was to always run queries but handle null tenant gracefully in the fetch function.
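The resulting pattern can be sketched as a query function that always runs but resolves to an empty result when tenant context isn't ready yet (the `fetchJson` parameter and `Player` type are hypothetical, introduced here for illustration):

```typescript
type Player = { id: string; name: string };

// Always-run query function that tolerates a missing tenant: instead of
// disabling the query (and risking it never re-enabling), resolve empty.
async function fetchPlayers(
  tenantId: string | null,
  fetchJson: (url: string) => Promise<Player[]>,
): Promise<Player[]> {
  if (!tenantId) return []; // tenant context not loaded yet
  return fetchJson(`/api/players?tenant=${encodeURIComponent(tenantId)}`);
}
```

When the tenant context resolves, the tenant-scoped query key changes, and the query naturally refetches with real data.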

The Broader Pattern

I talk more about the overall approach in How I Actually Vibe Code. Performance optimization illustrates a key principle: architecture decisions compound.

The genetic algorithm for team balancing benefits from fast data loading. Players see balanced teams instantly because the underlying data is cached.

Good foundations make later features faster. The React Query infrastructure, built in October 2025, has accelerated every feature since.

The Lesson

Performance problems often aren't about slow code. They're about wasteful architecture. The API endpoints were fast. The database queries were optimized. The problem was making 300 requests when 30 would suffice.

The fix wasn't optimization in the traditional sense. It was eliminating unnecessary work. Request deduplication. Caching. Sharing data between components.

15 seconds to 1.59 seconds. Not through clever algorithms, but through not doing the same work 23 times.
