Cache
Cached event handlers
To cache an event handler, you simply need to use the defineCachedEventHandler method. It works like defineEventHandler but accepts an additional second parameter for the cache options.
// Cache an API handler
export default defineCachedEventHandler((event) => {
  // My event handler
}, { maxAge: 60 * 60 /* 1 hour */ });
With this example, the response will be cached for 1 hour and a stale value will be sent to the client while the cache is being updated in the background. If you want to immediately return the updated response, set swr: false.
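For instance, here is a minimal sketch of the same handler with swr disabled, so a client hitting an expired entry waits for the fresh response instead of receiving a stale one:
export default defineCachedEventHandler((event) => {
  // My event handler
}, {
  maxAge: 60 * 60, // 1 hour
  swr: false       // wait for revalidation instead of serving a stale value
});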
When using the varies option, only the specified headers will be considered when caching and serving the responses. See the options section for more details about the available options.
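As a sketch of the varies option (the header names are the same multi-tenancy headers mentioned in the options section below):
export default defineCachedEventHandler((event) => {
  // My event handler
}, {
  maxAge: 60 * 60,
  // Cache responses separately per host so tenants do not share entries
  varies: ['host', 'x-forwarded-host']
});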
You can also use the cachedEventHandler method as an alias of defineCachedEventHandler.
Cached functions
You can also cache a function using the defineCachedFunction function. This is useful for caching the result of a function that is not an event handler but is part of one, and reusing it in multiple handlers.
For example, you might want to cache the result of an API call for one hour:
export const cachedGHStars = defineCachedFunction(async (repo: string) => {
  const data: any = await $fetch(`https://api.github.com/repos/${repo}`)
  return data.stargazers_count
}, {
  maxAge: 60 * 60,
  name: 'ghStars',
  getKey: (repo: string) => repo
})
export default defineEventHandler(async (event) => {
  const repo = event.context.params.repo
  const stars = await cachedGHStars(repo).catch(() => 0)
  return { repo, stars }
})
In development, the stars will be cached inside .nitro/cache/functions/ghStars/<owner>/<repo>.json, with value being the number of stars.
{"expires":1677851092249,"value":43991,"mtime":1677847492540,"integrity":"ZUHcsxCWEH"}
You can also use the cachedFunction method as an alias of defineCachedFunction.
Edge workers
In edge workers, the instance is destroyed after each request. Nitro automatically uses event.waitUntil to keep the instance alive while the cache is being updated in the background, even after the response has been sent to the client.
To ensure that your cached functions work as expected in edge workers, you should always pass the event as the first argument to the function when using defineCachedFunction.
import type { H3Event } from 'h3'

export const cachedGHStars = defineCachedFunction(async (event: H3Event, repo: string) => {
  const data: any = await $fetch(`https://api.github.com/repos/${repo}`)
  return data.stargazers_count
}, {
  maxAge: 60 * 60,
  name: 'ghStars',
  getKey: (event: H3Event, repo: string) => repo
})

export default defineEventHandler(async (event) => {
  const repo = event.context.params.repo
  const stars = await cachedGHStars(event, repo).catch(() => 0)
  return { repo, stars }
})
This way, the function will be able to keep the instance alive while the cache is being updated without slowing down the response to the client.
Caching route rules
This feature enables you to add caching to routes based on a glob pattern directly in the main configuration file. This is especially useful for applying a global cache strategy to a part of your application.
Cache all the blog routes for 1 hour with stale-while-revalidate behavior:
// nitro.config.ts
export default defineNitroConfig({
  routeRules: {
    "/blog/**": { cache: { maxAge: 60 * 60 } },
  },
});

// nuxt.config.ts
export default defineNuxtConfig({
  routeRules: {
    "/blog/**": { cache: { maxAge: 60 * 60 } },
  },
});
If we want to use a custom storage mount point, we can use the base option.
// nitro.config.ts
export default defineNitroConfig({
  storage: {
    redis: {
      driver: "redis",
      url: "redis://localhost:6379",
    },
  },
  routeRules: {
    "/blog/**": { cache: { maxAge: 60 * 60, base: "redis" } },
  },
});

// nuxt.config.ts
export default defineNuxtConfig({
  nitro: {
    storage: {
      redis: {
        driver: "redis",
        url: "redis://localhost:6379",
      },
    },
  },
  routeRules: {
    "/blog/**": { cache: { maxAge: 60 * 60, base: "redis" } },
  },
});
Customize cache storage
Nitro stores the data in the cache: mount point.
- In production, it will use the memory driver by default.
- In development, it will use the filesystem driver, writing to a temporary dir.
To overwrite the production storage, set the cache mount point using the storage option:
// nitro.config.ts
export default defineNitroConfig({
  storage: {
    cache: {
      driver: 'redis',
      /* redis connector options */
    }
  }
})

// nuxt.config.ts
export default defineNuxtConfig({
  nitro: {
    storage: {
      cache: {
        driver: 'redis',
        /* redis connector options */
      }
    }
  }
})
In development, you can also overwrite the cache mount point using the devStorage option:
// nitro.config.ts
export default defineNitroConfig({
  devStorage: {
    cache: {
      driver: 'redis',
      /* redis connector options */
    }
  }
})

// nuxt.config.ts
export default defineNuxtConfig({
  nitro: {
    devStorage: {
      cache: {
        driver: 'redis',
        /* redis connector options */
      }
    }
  }
})
Options
The cachedEventHandler and cachedFunction functions accept the following options:
- base: Name of the storage mount point to use for caching. Defaults to cache.
- name: Guessed from the function name if not provided, falling back to '_' otherwise.
- group: Defaults to 'nitro/handlers' for handlers and 'nitro/functions' for functions.
- getKey: A function that accepts the same arguments as the original function and returns a cache key (String). If not provided, a built-in hash function will be used to generate a key based on the function arguments.
- integrity: A value that invalidates the cache when changed. By default, it is computed from the function code and used in development to invalidate the cache when the function code changes.
- maxAge: Maximum age that the cache is valid, in seconds. Defaults to 1 (second).
- staleMaxAge: Maximum age that a stale cache is valid, in seconds. If set to -1, a stale value will still be sent to the client while the cache updates in the background. Defaults to 0 (disabled).
- swr: Enable stale-while-revalidate behavior to serve a stale cached response while asynchronously revalidating it. Defaults to true.
- shouldInvalidateCache: A function that returns a boolean to invalidate the current cache and create a new one.
- shouldBypassCache: A function that returns a boolean to bypass the current cache without invalidating the existing entry.
- varies: An array of request headers to be considered for the cache. In a multi-tenant environment, you may want to pass ['host', 'x-forwarded-host'] to ensure these headers are not discarded and that the cache is unique per tenant.
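For example, here is a sketch of a handler combining several of these options (the handler name and the bypass header are illustrative assumptions, not part of Nitro's API):
export default defineCachedEventHandler((event) => {
  // My event handler
}, {
  name: 'blogIndex',                    // illustrative name (assumption)
  maxAge: 60 * 60,                      // fresh for 1 hour
  staleMaxAge: -1,                      // a stale value may always be served while revalidating
  swr: true,                            // stale-while-revalidate behavior (default)
  varies: ['host', 'x-forwarded-host'], // cache per tenant
  // Illustrative condition (assumption): bypass the cache when an x-no-cache request header is present
  shouldBypassCache: (event) => !!getHeader(event, 'x-no-cache')
});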
Cache keys and invalidation
When using the defineCachedFunction or defineCachedEventHandler functions, the cache key is generated using the following pattern:
`${options.group}:${options.name}:${options.getKey(...args)}.json`
For example, the following function:
const getAccessToken = defineCachedFunction(() => {
  return String(Date.now())
}, {
  maxAge: 10,
  name: 'getAccessToken',
  getKey: () => 'default'
})
Will generate the following cache key:
nitro:functions:getAccessToken:default.json
You can invalidate the cached function entry with:
await useStorage('cache').removeItem('nitro:functions:getAccessToken:default.json')
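As a usage sketch, this invalidation could be exposed from a hypothetical maintenance route (the path and the lack of authentication are assumptions for illustration only):
// server/api/_cache/invalidate-token.ts (hypothetical route)
export default defineEventHandler(async () => {
  // Remove the cached getAccessToken entry so the next call recomputes it
  await useStorage('cache').removeItem('nitro:functions:getAccessToken:default.json')
  return { invalidated: true }
})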
Normalizing Cache Keys
Cache keys generated by getKey are normalized: special characters such as / and - are stripped from the key segments. This behavior helps ensure compatibility across various storage backends (e.g., file systems, key-value stores) that might have restrictions on characters in keys, and also prevents potential path traversal vulnerabilities.
For example:
getKey: () => '/api/products/sale-items'
Would generate a key like:
api/productssaleitems.json
This behavior may result in keys that look different from the original route or identifier.
To reproduce this normalization when building keys yourself, you can use the escapeKey utility function provided below:
function escapeKey(key: string | string[]) {
  return String(key).replace(/\W/g, "");
}
It's recommended to use escapeKey() when invalidating manually using route paths or identifiers to ensure consistency with Nitro's internal key generation.
For example, if your getKey function is:
getKey: (id: string) => `product/${id}/details`
And you want to invalidate product/123/details, you would do:
const normalizedKey = escapeKey('product/123/details')
await useStorage('cache').removeItem(`nitro:functions:getProductDetails:${normalizedKey}.json`)