Here at Industrie & Co we’ve recently completed work on The Trade Barriers Register, our first project using Sanity.io’s headless content management system. This article outlines a few things we learned by doing this.
Our client’s needs come first
Our client, Food Innovation Australia Limited, in collaboration with The Export Council of Australia, requested a website that could display customisable information so that their clients (Australian exporters, importers, and domestic traders) could access news, case studies, and a range of other information to help them resolve barriers to trade.
A key feature of the Trade Barriers Register is the ability for parties to describe the trade barriers they have experienced, and be put in touch with appropriate agencies that can help them overcome those barriers.
Our client required that its staff be able to make their own edits to most of the site content, as well as to the questionnaire their clients would complete, and they needed to be able to generate comprehensive and timely reports about the use of the site in order to justify the expense of building it.
There was neither the time nor the budget to build a custom API server and dedicated admin site to allow our client’s staff to administer the content, and we felt that, due to the complexity of the content, a traditional content management system would be of only limited use.
What is a headless CMS?
Traditional monolithic content management systems combine the site templates, content storage, content modelling, administration, and data storage into a single system. As a content publisher you can extend this with a range of plugins, modify the templates, and make configuration changes, but only within the limits of what the system will allow.
In contrast, a headless CMS does away with site templates and instead exposes an API that your front-end, be it a website, app, or even another API can consume. A headless CMS will typically also offer a document store of some kind, a UI for administering content, and a range of plugins for extending its behaviour. The distinguishing feature of a headless CMS is simply its lack of a head.
Monolithic CMSs are, in general, great for building blogs or blog-like websites, but they become problematic when the front-end isn’t a website, or is a modern single-page app, or is meant to display more structured and interrelated content than just some posts and comments. Monolithic CMSs quickly become overwhelming to use when the content being managed becomes complex.
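To make that concrete, consuming a headless CMS from a front-end is little more than fetching JSON and rendering it yourself. The endpoint shape and the `heading`/`body` fields below are purely illustrative, not Sanity’s actual API:

```javascript
// Illustrative only: a front-end "head" consuming a headless CMS over HTTP.
// The URL shape and the `heading`/`body` fields are made up for this sketch.
const renderArticle = article =>
  `<article><h1>${article.heading}</h1><p>${article.body}</p></article>`

// In a real app you'd fetch from the CMS's content API, e.g.:
// fetch('https://cms.example.com/v1/articles/42')
//   .then(res => res.json())
//   .then(article => { document.body.innerHTML = renderArticle(article) })

console.log(renderArticle({ heading: 'Hello', body: 'World' }))
```

The CMS has no say in the markup; the “head” is entirely yours.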
Why Sanity?
Most of the other sites we’ve built for this client tend to feature a public-facing React front-end and a private admin-facing front-end, both running on Netlify, and a custom API server on the back-end, running on Heroku. In this case, however, our client wanted a lot more control over their content, and their budget and schedule did not allow time to build a custom API and admin interface. A designer friend had used Sanity for a project I’d been quite impressed with, so I decided to take a look.
I spent about 15 minutes playing with Sanity before deciding that this was a suitable tool for the job. The admin UI is slick, it’s highly customisable and, being headless, it’s easy for the front-end to talk to it.
While most CMSs (headless or not) will also host and control the admin environment, the Sanity Studio (their admin UI) runs as a stand-alone React-based single-page app that can be run locally by a developer and deployed anywhere. The code itself is open source and available on GitHub.
High level architecture
Overall, the site architecture ties together the services detailed below.
Implementation details
On the Sanity Slack (a fantastic resource if you are building a Sanity project; the devs are super helpful) a few people asked me to describe how we approached the project and how we solved some of the issues we faced.
Overall
- All code hosted in the client’s GitHub account.
- CircleCI used to run tests and report code coverage to codecov.io.
- Greenkeeper.io used to keep Node dependencies up-to-date.
- Jest used to run unit tests.
- Eslint and prettier used to keep code consistent.
Sanity
- Two datasets, `test` and `production`.
- Customised structure with custom icons, customised hierarchies, and the client’s logo.
- Document and field-level validation rules, document icons, custom previews, and sort orders.
- A range of one-off scripts to automate migrations and clean up orphaned data and assets.
- Customised deployment scripts to generate the correct `sanity.json` depending on whether we deploy to `localhost` or to `sanity.studio`.
- A plugin (React component) that allows admins to select which news sources to import from.
User Web
- Written using React and hosted on Netlify (the `develop` branch deploys to staging, the `master` branch deploys to production).
- The staging site, and local development sites, interact with the `test` dataset; the production site interacts with the `production` dataset.
- Read-only access to the CMS API. User input is sent via Netlify Forms, Lambda functions, or Survey Monkey.
Lambda Functions
- Hosted on AWS, as we needed longer time-outs than Netlify would allow.
- Separate deploys for staging and production, with staging writing to the `test` dataset and production writing to the `production` dataset.
Survey Monkey
- The client’s SurveyMonkey account, integrated with the client’s Zapier account, allows the client’s staff to maintain their own Trade Barrier survey questions.
Zapier
- Used to transmit survey data to the CMS via a lambda function, and to Google Studio for reporting.
Google Studio
- Combines user activity reporting with high-level data from Survey Monkey to give client a customised and coherent set of reports.
Sanity, datasets, and staged releases
Sanity offers the following core concepts:
- Projects
- Datasets
- Schema
- Structure
- Queries
A *project* can have multiple *datasets*, incorporates various *documents* and *objects* into a *schema*, controls how the *schema* is presented to admins via a *structure*, and controls how content is delivered to the front-ends with *queries*.
Typically when building an API for a client we’d deploy a `staging` and a `production` version of the API, but Sanity does not really support this. We could have multiple projects, but there is no out-of-the-box way to deploy `staging` and `production` projects off different branches of the same codebase. For the sake of expediency we chose to have a single deployed project with two datasets, one called `test` and one called `production`.
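The environment-to-dataset mapping described above can be expressed as a one-liner. `datasetFor` is a helper name invented for this sketch, not part of Sanity’s API:

```javascript
// Maps a branch or environment name to the Sanity dataset it should use:
// production/master get the `production` dataset, everything else gets `test`.
const datasetFor = context =>
  context === 'production' || context === 'master' ? 'production' : 'test'

console.log(datasetFor('master'))      // 'production'
console.log(datasetFor('develop'))     // 'test'
console.log(datasetFor('development')) // 'test'
```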
A normal Sanity project looks a bit like this:
```
config/
  @sanity/
  .checksums
schemas/
  ...various documents and objects.js
  schema.js
package.json
sanity.json
structure.js
```
We modified this slightly to allow different configurations based on whether we wanted to run the site locally or deploy it to `sanity.studio`.
```
bin/
  fixSanity
config/
  @sanity/
  .checksums
  sanity.json.template
schemas/
  ...various documents and objects.js
  schema.js
package.json
structure.js
```

Note that `sanity.json` is no longer committed; it is generated by `bin/fixSanity`.
Our `sanity.json.template` file looks like this:
```json
{
  "root": true,
  "project": {
    "name": "project-cms"
  },
  "parts": [
    {
      "name": "part:@sanity/base/schema",
      "path": "./schemas/schema.js"
    },
    {
      "name": "part:@sanity/desk-tool/structure",
      "path": "./structure.js"
    }
  ],
  "env": {
    "production": {
      "api": {
        "projectId": "our-project-id",
        "dataset": "production"
      },
      "plugins": [
        "@sanity/base",
        "@sanity/components",
        "@sanity/default-layout",
        "@sanity/default-login",
        "@sanity/desk-tool",
        "logo",
        "article-importer"
      ]
    },
    "development": {
      "api": {
        "projectId": "our-project-id",
        "dataset": "test"
      },
      "plugins": [
        "@sanity/base",
        "@sanity/components",
        "@sanity/default-layout",
        "@sanity/default-login",
        "@sanity/desk-tool",
        "logo",
        "article-importer",
        "@sanity/vision",
        "@sanity/storybook"
      ],
      "__experimental_spaces": [
        {
          "name": "test",
          "title": "Test (public)",
          "default": true,
          "api": {
            "dataset": "test"
          }
        },
        {
          "name": "production",
          "title": "Production (private)",
          "api": {
            "dataset": "production"
          }
        }
      ]
    },
    "test": {
      "api": {
        "projectId": "our-project-id",
        "dataset": "test"
      },
      "plugins": [
        "@sanity/base",
        "@sanity/components",
        "@sanity/default-layout",
        "@sanity/default-login",
        "@sanity/desk-tool",
        "logo",
        "article-importer"
      ]
    }
  }
}
```
The script in `bin/fixSanity` simply reads the above template, takes whatever is in the `env` block according to the current `NODE_ENV`, and generates a `sanity.json` file at the root of the project. It’s a bit hacky, but it works so long as you remember to `yarn deploy` and not `sanity deploy` the project.
```js
#!/usr/bin/env node

const { readFileSync, writeFileSync } = require('fs')
const path = require('path')

const sanityEnv = process.env.NODE_ENV || 'development'

const sanityTemplate = path.join(
  __dirname,
  '..',
  'config',
  'sanity.json.template'
)

const outputFile = path.join(__dirname, '..', 'sanity.json')

const sanityConfig = readFileSync(sanityTemplate)
const { env, ...config } = JSON.parse(sanityConfig)

if (!env[sanityEnv])
  throw new Error(`No config for environment '${sanityEnv}'`)

const sanityJson = {
  ...config,
  ...env[sanityEnv]
}

const output = JSON.stringify(sanityJson, null, 2)
writeFileSync(outputFile, output)
```
In the `scripts` section of `package.json` we added the following:
```json
"clean": "rm -f ./sanity.json",
"start": "./bin/fixSanity && sanity start",
"test": "./bin/fixSanity && sanity check",
"deploy": "NODE_ENV=production ./bin/fixSanity && sanity deploy",
"posttest": "yarn clean",
"postdeploy": "yarn clean",
```
In this way, when we `yarn start` to run locally, our local admin UI gets the `vision` and `storybook` developer plugins and a switch that lets us flip between the `test` and `production` datasets; when we `yarn deploy`, the live version is configured without those developer plugins and only talks to the `production` dataset. This stops our client from getting confused and gives developers a lot of flexibility.
Honestly, it would be preferable if `sanity deploy` simply pushed the code to Sanity’s servers and the build step happened there, customisable with environment variables. This can be achieved, however, by deploying the Studio to Netlify instead of Sanity’s own servers.
Continuous Deployment
Ideally we’d configure everything to deploy automatically.
Front-end
Netlify lets us link projects directly to a GitHub repo and branch so, once CircleCI has green-lit the merge, the front-end code just deploys.
Lambdas
Initially we deployed our lambda functions to Netlify along with the front-end, but Netlify’s 10-second timeout policy was unsuitable for our situation, so we migrated them over to AWS. Deploying the lambda functions to AWS was a little less automagical than deploying to Netlify, and we resorted to adding the following run step to our `.circleci/config.yml`.
```yaml
- run:
    name: Deploy
    command: |
      if [ "${CIRCLE_BRANCH}" == "develop" ]; then
        export NODE_ENV=development
        export DATASET=test
        npm run deploy:staging
      elif [ "${CIRCLE_BRANCH}" == "master" ]; then
        export NODE_ENV=production
        export DATASET=production
        npm run deploy:production
      else
        echo 'No deployment necessary'
      fi
```
Sanity Studio
Because there was only the single Sanity project we decided to handle deployment manually, but in retrospect it would have been easy enough to add a similar run step to `.circleci/config.yml` to auto-deploy `staging` and `production` projects, each with their own datasets.
Migrations
Being an agile team we are always happy when clients change their minds. In the course of developing this project our client changed field names, document names, validation rules, and front-end layouts any number of times.
Because Sanity is a headless CMS the front-end changes were completely isolated from the back end, and work could progress on the front-end using mocked data when no equivalent data was available from the back-end.
Sanity does not really support the concept of migrations in the way that, say, a project backed by a traditional database does. Instead you need to write scripts that are run via `sanity exec someScript --with-user-token`.
An example of this is as follows. I wrote a lambda function that imports data from a range of RSS feeds, normalises it, and injects the articles into the CMS. Part of doing this involved, in some cases, extracting the articles’ headings from DOM elements using `cheerio`, trimming them, and generating article slugs using `slugify`.
Initially I used `slugify` quite naively and forgot to remove characters like `!`, `?`, `,`, and `:` from the headings before generating the slugs. I also forgot to convert the headings to lower case, so I was getting slugs that just looked wrong.
So I fixed up my `makeSlug` function:

```js
import slugify from 'slugify'

const remove = /[*+~.()'"!:@,?]/g

const makeSlug = text => slugify(text, { lower: true, remove })

export default makeSlug
```
but by then I’d already imported thousands of articles.
To fix this I wrote a small utility function to repair the slugs:
```js
import client from 'part:@sanity/base/client'
import makeSlug from './utils/makeSlug'
import errorHandler from './utils/errorHandler'

const query = '*[_type == "article"] { _id, heading, source, slug }'

const run = async () => {
  const articles = await client.fetch(query)

  const fixedArticles = articles.reduce((acc, article) => {
    const slug = makeSlug(article.heading)
    if (slug !== article.slug.current)
      acc[article._id] = [
        article,
        { slug: { ...article.slug, current: slug } }
      ]
    return acc
  }, {})

  const ids = Object.keys(fixedArticles)

  return Promise.all(
    ids.map(id => {
      const [original, { slug }] = fixedArticles[id]
      const txn = client.patch(id)
      if (slug.current !== original.slug.current) txn.set({ slug })
      return txn.commit()
    })
  )
}

run()
  .then(() => {
    console.log('done')
  })
  .catch(errorHandler)
```
Sanity enforces API rate limits, so the first time I ran this script it processed a hundred or so articles and then the Sanity client failed with a `429` error. Rather than bother to deal with that properly, I just ran the script over and over again until it had fixed all the slugs.
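A more patient approach would have been to wrap each commit in a retry helper that backs off whenever the client reports a `429`. This is only a sketch; the `statusCode` property is an assumption about the shape of the error the Sanity client throws:

```javascript
// Retry a promise-returning function on HTTP 429, with exponential back-off.
// Assumes rate-limit errors carry a `statusCode` property (an assumption
// about the Sanity client's error objects, not a documented guarantee).
const sleep = ms => new Promise(resolve => setTimeout(resolve, ms))

const withRetry = async (fn, { retries = 5, delay = 500 } = {}) => {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn()
    } catch (err) {
      if (err.statusCode !== 429 || attempt >= retries) throw err
      await sleep(delay * 2 ** attempt) // 500ms, 1s, 2s, ...
    }
  }
}
```

Each `txn.commit()` in the script above could then be wrapped as `withRetry(() => txn.commit())`.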
I’ve logged an issue with the Sanity project on GitHub to have rate limiting handled generically by the Sanity client rather than forcing individual developers to have to do it. Go give that issue a 👍 if you agree with me.
Removing orphaned documents
Another example of a handy script is one that deletes document types that are left over in the system but are no longer used. Our client wanted a few document types renamed, and one removed, over the course of development, but we already had data in the system associated with the old document names. Being newbies, it simply hadn’t occurred to us that this data would still be there, and it kept showing up in queries.
The following script nukes those orphans:
```js
import client from 'part:@sanity/base/client'
import errorHandler from './utils/errorHandler'
import allMyDocumentTypes from './utils/allMyDocumentTypes'

// Build a GROQ list of every type we know about, including the system types.
const knownTypes = [
  ...allMyDocumentTypes,
  'sanity.imageAsset',
  'system.group',
  'system.retention',
  'system.listener'
]
  .map(type => `'${type}'`)
  .join(', ')

const query = `*[!(_type in [${knownTypes}])]`

client
  .fetch(query)
  .then(items => {
    if (!items.length) return true

    return items
      .reduce((trx, item) => trx.delete(item._id), client.transaction())
      .commit()
      .then(() => console.log('Done!'))
  })
  .catch(errorHandler)
```
Note the crafty way we use a `reduce` to add the deletes to the transaction and then seal it off with a single `commit`. No issues with rate-limiting there.
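The mechanics of that trick are easy to see in isolation: because `delete` returns the transaction, `reduce` threads one transaction through the whole list and a single `commit` sends one request. Here is the pattern with a stand-in transaction object (a fake built for this sketch, not Sanity’s client):

```javascript
// Mechanics of the reduce-into-one-transaction trick, using a fake
// transaction object in place of Sanity's client.transaction().
const fakeTransaction = () => {
  const ops = []
  return {
    delete(id) {
      ops.push({ delete: { id } })
      return this // chainable, like the real transaction
    },
    commit() {
      return Promise.resolve(ops) // one "request" carrying all queued ops
    }
  }
}

const items = [{ _id: 'a' }, { _id: 'b' }, { _id: 'c' }]

items
  .reduce((trx, item) => trx.delete(item._id), fakeTransaction())
  .commit()
  .then(ops => console.log(`committed ${ops.length} deletes in one request`))
```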
You can use similar logic to mutate field names when your client decides to change `title` to `headline`, or to change `description` from a `blockContent` field to a `text` field.
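In miniature, the rename step of such a migration is a pure data transformation. `renameField` is our own illustrative helper, not a Sanity API:

```javascript
// Pure helper for the rename step of a migration: copy `from` into `to`
// and drop `from`. `renameField` is a name invented for this sketch.
const renameField = (doc, from, to) => {
  if (!(from in doc)) return doc
  const { [from]: value, ...rest } = doc
  return { ...rest, [to]: value }
}

console.log(renameField({ _id: 'a1', title: 'Tariffs rise' }, 'title', 'headline'))
// { _id: 'a1', headline: 'Tariffs rise' }
```

Against a real dataset you would fetch the matching documents and commit the change per document with `client.patch(doc._id).set({ headline: doc.title }).unset(['title']).commit()`.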
Speaking of blockContent vs text fields
Default Sanity projects come with a `blockContent` structure (more formally known as Portable Text) that allows your admins to enter rich text via a built-in WYSIWYG editor. It’s neat, but when you want to render a text-only preview of that content you need to strip out everything but the text.
In our project our FAQs show a preview that includes the FAQ’s `answer` if there is one. The `answer` is a piece of `blockContent`, but the preview can only display text. To convert `blockContent` to text we use this handy utility:
```js
const blocksToText = (blocks = []) =>
  blocks
    .map(block =>
      block._type !== 'block' || !block.children
        ? ''
        : block.children.map(child => child.text).join(' ')
    )
    .join('\n\n')

export default blocksToText
```
Another issue with `blockContent` (being fixed as I write this) is that if you enter some content and later delete it, the value of the field is not returned to `''` or `undefined`; it still has some internal structure hanging around.

To get around this we wrote a utility function, `isBlockEmpty`.
```js
const isBlockEmpty = block =>
  !block ||
  !block[0] ||
  !block[0].children ||
  !block[0].children[0] ||
  !block[0].children[0].text

export default isBlockEmpty
```
These are then used in the `FAQ`’s `preview.prepare` function as follows:
```js
prepare({ question, answer }) {
  return {
    title: `Q: ${question}?`,
    subtitle: isBlockEmpty(answer)
      ? 'unanswered'
      : blocksToText(answer)
  }
}
```
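Exercised against hand-made data (self-contained copies of the two utilities, plus a made-up FAQ document), the preview logic behaves like this:

```javascript
// Self-contained demo of the preview logic above, with hypothetical FAQ data.
const blocksToText = (blocks = []) =>
  blocks
    .map(block =>
      block._type !== 'block' || !block.children
        ? ''
        : block.children.map(child => child.text).join(' ')
    )
    .join('\n\n')

const isBlockEmpty = block =>
  !block ||
  !block[0] ||
  !block[0].children ||
  !block[0].children[0] ||
  !block[0].children[0].text

const prepare = ({ question, answer }) => ({
  title: `Q: ${question}?`,
  subtitle: isBlockEmpty(answer) ? 'unanswered' : blocksToText(answer)
})

const answered = {
  question: 'Can I appeal a tariff ruling',
  answer: [
    { _type: 'block', children: [{ text: 'Yes, via the relevant agency.' }] }
  ]
}

console.log(prepare(answered).subtitle)            // 'Yes, via the relevant agency.'
console.log(prepare({ question: 'TBD' }).subtitle) // 'unanswered'
```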
Conclusion
All in all, I enjoyed my first dive into using Sanity instead of writing a more traditional custom API. I’ve never really been a fan of monolithic CMSs and avoid using them wherever I can, so they were never going to be an option.
The Sanity product is not without its quirks, to be sure, but it’s a relatively new product and evolving rapidly.
The development team, based in Norway, are very easy to interact with via their Slack channel, and were very generous with their time helping me overcome some newbie issues.
In the last few days the team has rolled out a beta of their GraphQL API, which is layered on top of their custom query language, GROQ. That’s a welcome development, but it came too late for this particular project.
I’m certainly keen to use Sanity again on future projects.
Disclaimer
I have no commercial interest in Sanity.io and am simply writing this from the perspective of an interested developer. I have received no remuneration for writing this article.
Links
Sanity
- sanity.io
- slack.sanity.io
- github.com/sanity-io/sanity
- www.sanity.io/docs/data-store/graphql
- www.portabletext.org