How to create an API from markdown files with Next.js
I've previously talked about how to read markdown files from Next.js' API routes. However, it was only recently, while improving some of my blog's logic, that I realised the true value of this technique. So here's a simple guide on how to create an incredibly powerful content API from your Markdown files and double your build speed while you're at it.
The setup
Before you can actually get started turning your markdown files into an API, you'll need these files in an easily readable and parsable format. I would recommend following my explanation on how to transform your markdown to JSON at build time but really any way to get your markdown into JSON will probably work.
You'll need your different categories of content each in a JSON file with each element being part of a larger array. For example I had:
posts.json
authors.json
categories.json
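As a reference, here's a hypothetical shape for a single entry in posts.json. The exact fields depend on your front-matter, but a `slug` and a `data.date` are assumed here because they're what the endpoints in this guide filter and sort on:

```js
// Hypothetical shape of one entry in posts.json. The slug and data.date
// fields are assumptions — they're what the API endpoints below rely on.
const examplePost = {
  slug: 'my-first-post',
  data: {
    title: 'My First Post',
    date: '2021-06-01',
  },
  content: '# My First Post\n\nHello world!',
}

console.log(examplePost.slug) // my-first-post
```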
Creating your API endpoints
Let's create endpoints to fetch our posts as an example.
```js
// pages/api/posts/index.js

// You'll need to specify the absolute URL to fetch your file
const url = `http://example.com`

/**
 * Fetches and returns all posts from the JSON cache
 */
export const getAllPosts = async () => {
  const data = await fetch(`${url}/cache/posts.json`).then((res) => res.json())
  return data
}

/**
 * Returns the current page from the query string. Defaults to 1.
 */
export const pagination = (req) => {
  const { page = '1' } = req.query
  const actualPage = parseInt(page) <= 0 ? 1 : parseInt(page)
  return actualPage
}

/**
 * Returns a list of paginated posts
 */
const posts = async (req, res) => {
  const page = pagination(req)
  const data = (await getAllPosts())
    // sort posts by publish date
    .sort((post1, post2) => (post1.data.date > post2.data.date ? -1 : 1))
    // only take 10 depending on page value
    .slice((page - 1) * 10, page * 10)
  res.status(200).json(data)
}

export default posts
```
It's fairly straightforward since we don't need to do a whole lot apart from return our entire JSON file. Nonetheless, we can implement small UX improvements.
- We sort our content by publishing date
- We paginate all of our content, so we don't have to repeat the code in our pages
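The pagination logic can also be sketched in isolation. Here's a minimal version using hypothetical in-memory data instead of the JSON cache, assuming the same page size of 10 as the endpoint above:

```js
// Minimal sketch of the pagination slice used by the endpoint,
// with hypothetical sample data in place of the JSON cache.
const PAGE_SIZE = 10
const allPosts = Array.from({ length: 25 }, (_, i) => ({ slug: `post-${i}` }))

const paginate = (items, page) => {
  // clamp the page to at least 1, mirroring the pagination() helper
  const actualPage = page <= 0 ? 1 : page
  return items.slice((actualPage - 1) * PAGE_SIZE, actualPage * PAGE_SIZE)
}

console.log(paginate(allPosts, 1).length) // 10
console.log(paginate(allPosts, 3).length) // 5
```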
You can check out how Ironeko's endpoint looks as an example.
The second piece of the puzzle is to then create an endpoint used just to return one singular piece of content. We'll use it to actually create our pages.
```js
// pages/api/posts/[slug].js

// You'll need to specify the absolute URL to fetch your file
const url = `http://example.com`

/**
 * Fetches and returns all posts from the JSON cache
 */
export const getAllPosts = async () => {
  const data = await fetch(`${url}/cache/posts.json`).then((res) => res.json())
  return data
}

/**
 * Finds and returns a single post
 */
const post = async (req, res) => {
  const { slug } = req.query
  const data = (await getAllPosts()).find((p) => p.slug === slug)
  if (data) {
    res.status(200).json(data)
  } else {
    res.status(404)
    res.end()
  }
}

export default post
```
The above is very similar to our previous endpoint, with some minor differences:

- We filter our data to find a post that matches the `slug` provided in the API URL
- A slug is a unique piece of text that identifies our article. For example, you can see the slug for this page in the URL: how-to-create-an-api-from-markdown-files-with-next-js
- If we couldn't find a match in our data we return a 404 error, which we'll then handle when trying to display the page
If you've done it right it'll look something like the endpoint we use here on Ironeko.
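To see the lookup behaviour on its own, here's a small sketch against a hypothetical in-memory cache. A missing slug yields `undefined`, which the endpoint translates into a 404 response:

```js
// Standalone sketch of the slug lookup from the [slug].js endpoint,
// run against a hypothetical in-memory cache.
const cache = [
  { slug: 'hello-world', data: { title: 'Hello World' } },
  { slug: 'second-post', data: { title: 'Second Post' } },
]

const findBySlug = (slug) => cache.find((p) => p.slug === slug)

console.log(findBySlug('hello-world').data.title) // Hello World
console.log(findBySlug('does-not-exist')) // undefined — the endpoint maps this to a 404
```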
Combining your new endpoints with Next.js' ISR to improve your build time
If you've managed a static site before you'll no doubt have noticed how each time a new piece of content is added your builds take a little bit longer. This blog has just over 100 articles, and the build time has slowly crawled to around 3 minutes... Every. Time. I. Edit. Something.
You may have heard of Incremental Static Regeneration before. (If not, Smashing Magazine published the fantastic A Complete Guide To Incremental Static Regeneration (ISR) With Next.js which is an excellent explanation of the feature.)
There's a lot of magic related to how Incremental Static Regeneration works, but it's a fantastic feature and you don't necessarily need to understand how it functions to use it to its full potential.
The point of ISR is that rather than telling Next.js how many pages you want to generate at build time, you'll just have to tell it where to get the information to generate them. The pages will then be generated upon request.
Thanks to your new endpoint you can replace your `getStaticProps` and `getStaticPaths` with something much simpler:
```js
// pages/posts/[slug].js

// You'll need to specify the absolute URL to fetch your file
const url = `http://example.com`

export const getStaticPaths = async () => {
  return {
    paths: [],
    fallback: 'blocking',
  }
}

export const getStaticProps = async ({ params }) => {
  const { slug } = params
  const post = await fetch(`${url}/api/posts/${slug}`)
    .then((res) => res.json())
    .catch(() => null)
  if (!post) {
    return {
      notFound: true,
    }
  }
  return {
    props: {
      post,
    },
    revalidate: 60,
  }
}
```
You should see your build time drop dramatically, depending on how much content you have!
Bonus: Serialize your markdown (or mdx) on build
Ideally with these changes you'll want to keep your serverless function execution time as short as possible (especially if you're on a Vercel Hobby plan). So there's a couple more corners you can cut.
If you're using markdown or mdx you can actually serialize your markdown into a usable string outside of your `getStaticProps`. When generating your caches, rather than adding your markdown to your cache you can convert it directly to HTML or JSX so you won't have to convert it later while generating your page!
```js
// use this wherever you transform your markdown into JSON
import matter from 'gray-matter'
import { serialize } from 'next-mdx-remote/serialize'
import { remark } from 'remark'
import remarkHtml from 'remark-html'

/**
 * Convert a markdown file to JSON
 */
const serializeMarkdown = async (markdownFile) => {
  // extract the front-matter info from our markdown
  const matterResult = matter(markdownFile)
  // convert the markdown body into an HTML string
  const html = (await remark().use(remarkHtml).process(matterResult.content)).toString()
  // or generate a serialized MDX result instead!
  const mdx = await serialize(matterResult.content)
  return {
    html,
    mdx,
    ...matterResult,
  }
}
```
With these small changes Ironeko's build time dropped to around 40 seconds. This has improved my quality of life considerably, so you should definitely give it a try if you're falling out of love with static generation.