Pining for the fnords (photo by the author, then enhanced with a bunch of silly filters)

Fun with Sanity

Our client’s needs come first

Our client, Food Innovation Australia Limited, working in collaboration with The Export Council of Australia, asked for a website that could present customisable information so that their own clients (Australian exporters, importers, and domestic traders) could access news, case studies, and a range of other resources to help them resolve barriers to trade.

The Trade Barriers Register public facing site

What is a headless CMS?

Traditional monolithic content management systems combine the site templates, content modelling, administration, and data storage into a single system. As a content publisher you can extend such a system with plugins, modify its templates, and make whatever configuration changes the system allows, but you are always working within its constraints. A headless CMS, by contrast, handles only the content: modelling, editing, and storage sit behind an API, and the presentation is left entirely to whatever front end you choose to build.
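
In practice that means any front end can pull content straight from the CMS's API. Here's a minimal sketch using Sanity's JavaScript client; the project id, dataset, and query are made up for illustration, not taken from this project:

const sanityClient = require('@sanity/client')

// read-only client; useCdn is fine for public content
const client = sanityClient({
  projectId: 'abc123', // hypothetical project id
  dataset: 'production',
  useCdn: true
})

// GROQ query: the ten most recent articles, newest first
const query = '*[_type == "article"] | order(_createdAt desc) [0...10] { _id, heading, slug }'

client
  .fetch(query)
  .then(articles => console.log(articles))
  .catch(console.error)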

Simen Svale Skogsrud, Sanity’s CTO, explains this in a much more entertaining way

Why Sanity?

Most of the other sites we’ve built for this client feature a public-facing React front end and a private admin-facing front end, both running on Netlify, with a custom API server on the back end, running on Heroku. In this case, however, our client wanted a lot more control over their content, and their budget and schedule did not allow time to build a custom API and admin interface. A designer friend had used Sanity for a project I’d been quite impressed with, so I decided to take a look.

High level architecture

Overall the site architecture looks like this:

Implementation details

On the Sanity Slack (a fantastic resource if you are building a Sanity project; the devs are super helpful) a few people asked me to describe how we approached the project and how we solved some of the issues we faced.

Overall

Sanity

  • Two datasets, test and production.
  • Customised structure with custom icons, customised hierarchies, and client logo.
  • Document and field-level validation rules, document icons, custom previews, and sort orders (a sketch of a typical schema follows this list).
  • A range of one-off scripts to automate migrations and clean up orphaned data and assets.
  • Customised deployment scripts to generate the correct sanity.json depending on whether we deploy to localhost or to sanity.studio.
  • Plugin (React component) that allows admin to select which news-sources to import from.
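
To give a flavour of what that looks like in practice, here is a hedged sketch of a document schema with field-level validation, a custom sort order, and a preview. The caseStudy type and its fields are made up for illustration; they are not the project's actual schema:

export default {
  name: 'caseStudy', // hypothetical document type
  title: 'Case Study',
  type: 'document',
  fields: [
    {
      name: 'heading',
      title: 'Heading',
      type: 'string',
      // field-level validation: required, and no longer than 120 characters
      validation: Rule => Rule.required().max(120)
    },
    {
      name: 'summary',
      title: 'Summary',
      type: 'text'
    }
  ],
  orderings: [
    {
      name: 'headingAsc',
      title: 'Heading, A-Z',
      by: [{ field: 'heading', direction: 'asc' }]
    }
  ],
  preview: {
    select: { title: 'heading', subtitle: 'summary' }
  }
}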

User Web

  • Written using React and hosted on Netlify (develop branch deploys to staging, master branch deploys to production).
  • The staging site and local development sites interact with the test dataset, while the production site interacts with the production dataset (see the sketch after this list).
  • Read-only access to the CMS API; user input is sent via Netlify Forms, Lambda functions, or Survey Monkey.
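
A sketch of how that split might look in the front end's Sanity client; the environment variable name and project id are placeholders rather than the project's real configuration:

import sanityClient from '@sanity/client'

const client = sanityClient({
  projectId: 'our-project-id', // placeholder
  // staging and local builds point at 'test'; the production build sets this to 'production'
  dataset: process.env.REACT_APP_SANITY_DATASET || 'test',
  useCdn: true // read-only, cached access; no write token ever ships to the browser
})

export default client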

Lambda Functions

  • Hosted on AWS as we needed longer time-outs than Netlify would allow.
  • Separate deploys for staging and production, with staging writing to the test dataset and production writing to the production dataset (a sketch of such a function follows this list).
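
A rough sketch of what one of these might look like: an AWS Lambda handler that writes a submission into the CMS using a server-side write token. The document type, field names, and environment variables here are assumptions for illustration only:

const sanityClient = require('@sanity/client')

const client = sanityClient({
  projectId: 'our-project-id', // placeholder
  dataset: process.env.DATASET, // 'test' for staging, 'production' for production
  token: process.env.SANITY_WRITE_TOKEN, // write token, kept server-side only
  useCdn: false
})

exports.handler = async event => {
  const submission = JSON.parse(event.body)
  // create a new document in the dataset from the submitted payload
  const created = await client.create({
    _type: 'surveyResponse', // hypothetical document type
    ...submission
  })
  return {
    statusCode: 200,
    body: JSON.stringify({ id: created._id })
  }
}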

Survey Monkey

  • Client’s SurveyMonkey account, integrated with client’s Zapier account, allowing client’s staff to maintain their own Trade Barrier survey questions.

Zapier

  • Used to transmit survey data to the CMS via a lambda function, and to Google Studio for reporting.

Google Studio

  • Combines user activity reporting with high-level data from Survey Monkey to give the client a customised and coherent set of reports.

Sanity, datasets, and staged releases

Sanity offers the following core concepts:

  • Projects
  • Datasets
  • Schema
  • Structure
  • Queries

Our Sanity Studio project originally looked something like this:

config/
  @sanity/
  .checksums
schemas/
  ...various documents and objects.js
  schema.js
package.json
sanity.json
structure.js

We added a bin/fixSanity script and replaced the checked-in sanity.json with a config/sanity.json.template, so the project now looks like this:

bin/
  fixSanity
config/
  @sanity/
  .checksums
  sanity.json.template
schemas/
  ...various documents and objects.js
  schema.js
package.json
sanity.json (removed; generated on demand by bin/fixSanity)
structure.js

The template holds the shared project configuration plus one block per environment:

{
  "root": true,
  "project": {
    "name": "project-cms"
  },
  "parts": [
    {
      "name": "part:@sanity/base/schema",
      "path": "./schemas/schema.js"
    },
    {
      "name": "part:@sanity/desk-tool/structure",
      "path": "./structure.js"
    }
  ],
  "env": {
    "production": {
      "api": {
        "projectId": "our-project-id",
        "dataset": "production"
      },
      "plugins": [
        "@sanity/base",
        "@sanity/components",
        "@sanity/default-layout",
        "@sanity/default-login",
        "@sanity/desk-tool",
        "logo",
        "article-importer"
      ]
    },
    "development": {
      "api": {
        "projectId": "our-project-id",
        "dataset": "test"
      },
      "plugins": [
        "@sanity/base",
        "@sanity/components",
        "@sanity/default-layout",
        "@sanity/default-login",
        "@sanity/desk-tool",
        "logo",
        "article-importer",
        "@sanity/vision",
        "@sanity/storybook"
      ],
      "__experimental_spaces": [
        {
          "name": "test",
          "title": "Test (public)",
          "default": true,
          "api": {
            "dataset": "test"
          }
        },
        {
          "name": "production",
          "title": "Production (private)",
          "api": {
            "dataset": "production"
          }
        }
      ]
    },
    "test": {
      "api": {
        "projectId": "our-project-id",
        "dataset": "test"
      },
      "plugins": [
        "@sanity/base",
        "@sanity/components",
        "@sanity/default-layout",
        "@sanity/default-login",
        "@sanity/desk-tool",
        "logo",
        "article-importer"
      ]
    }
  }
}

bin/fixSanity reads the template, picks out the block for the current NODE_ENV, and writes the merged result to sanity.json:

#!/usr/bin/env node
const { readFileSync, writeFileSync } = require('fs')
const path = require('path')

// default to the development environment unless NODE_ENV says otherwise
const sanityEnv = process.env.NODE_ENV || 'development'

const sanityTemplate = path.join(__dirname, '..', 'config', 'sanity.json.template')
const outputFile = path.join(__dirname, '..', 'sanity.json')

// split the template into its shared config and its per-environment blocks
const sanityConfig = readFileSync(sanityTemplate)
const { env, ...config } = JSON.parse(sanityConfig)

if (!env[sanityEnv])
  throw new Error(`No config for environment '${sanityEnv}'`)

// merge the chosen environment's settings over the shared config and write sanity.json
const sanityJson = {
  ...config,
  ...env[sanityEnv]
}
const output = JSON.stringify(sanityJson, null, 2)
writeFileSync(outputFile, output)
"clean": "rm -f ./sanity.json",
"start": "./bin/fixSanity && sanity start",
"test": "./bin/fixSanity && sanity check",
"deploy": "NODE_ENV=production ./bin/fixSanity && sanity deploy",
"posttest": "yarn clean",
"postdeploy": "yarn clean",

Continuous Deployment

Ideally we’d configure everything to deploy automatically.

Front-end

Netlify lets us link projects directly to a GitHub repo and branch, so once CircleCI has green-lit the merge the front-end code simply deploys.

Lambdas

Initially we deployed the lambda functions to Netlify along with the front end, but Netlify’s 10 second timeout was too short for our needs, so we migrated them over to AWS. Deploying the lambda functions to AWS was a little less automagical than deploying to Netlify, so we added the following run step to our .circleci/config.yml.

- run:
    name: Deploy
    command: |
      if [ "${CIRCLE_BRANCH}" == "develop" ]; then
        export NODE_ENV=development
        export DATASET=test
        npm run deploy:staging
      elif [ "${CIRCLE_BRANCH}" == "master" ]; then
        export NODE_ENV=production
        export DATASET=production
        npm run deploy:production
      else
        echo 'No deployment necessary'
      fi

Sanity Studio

Because there was only the single Sanity project we decided to handle deployment manually, but in retrospect it would have been easy enough to add a similar run step to .circleci/config.yml to auto-deploy staging and production projects, each with their own datasets.
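
Such a step might have looked something like the following sketch, assuming hypothetical deploy:staging and deploy:production scripts in the studio's package.json and a SANITY_AUTH_TOKEN configured in CircleCI so the CLI can authenticate non-interactively:

- run:
    name: Deploy Sanity Studio
    command: |
      if [ "${CIRCLE_BRANCH}" == "develop" ]; then
        # e.g. "deploy:staging": "NODE_ENV=development ./bin/fixSanity && sanity deploy"
        yarn deploy:staging
      elif [ "${CIRCLE_BRANCH}" == "master" ]; then
        # e.g. "deploy:production": "NODE_ENV=production ./bin/fixSanity && sanity deploy"
        yarn deploy:production
      else
        echo 'No studio deployment necessary'
      fi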

Migrations

Being an agile team we are always happy when clients change their minds. In the course of developing this project our client changed field names, document names, validation rules, and front-end layouts any number of times.

One example: a script to make sure every article’s slug matches its heading. First, a small helper to build slugs consistently:

import slugify from 'slugify'

// characters to strip out of headings before slugifying
const remove = /[*+~.()'"!:@,?]/g

const makeSlug = text => slugify(text, { lower: true, remove })

export default makeSlug

Then the migration itself, which fetches every article, recalculates its slug from its heading, and patches any article whose slug no longer matches:

import client from 'part:@sanity/base/client'
import makeSlug from './utils/makeSlug'
import errorHandler from './utils/errorHandler'

const query = '*[_type == "article"] { _id, heading, source, slug }'

const run = async () => {
  const articles = await client.fetch(query)

  // collect only the articles whose slug needs to change, keyed by _id
  const fixedArticles = articles.reduce((acc, article) => {
    const slug = makeSlug(article.heading)
    if (slug !== article.slug.current)
      acc[article._id] = [article, {
        slug: { ...article.slug, current: slug }
      }]
    return acc
  }, {})

  const ids = Object.keys(fixedArticles)

  // patch each changed article and commit the patches
  return Promise.all(
    ids.map(id => {
      const [original, { slug }] = fixedArticles[id]
      const txn = client.patch(id)
      if (slug.current !== original.slug.current) txn.set({ slug })
      return txn.commit()
    })
  )
}

run()
  .then(() => {
    console.log('done')
  })
  .catch(errorHandler)

Removing orphaned documents

Another handy script is one that deletes documents whose types are left over in the system but no longer used. Our client wanted a few document types renamed and one removed over the course of development, but we already had data in the system associated with the old document names. Being newbies, it simply hadn’t occurred to us that this data would still be there, yet it kept showing up in queries.

import client from 'part:@sanity/base/client'
import errorHandler from './utils/errorHandler'
import allMyDocumentTypes from './utils/allMyDocumentTypes'

// keep anything whose type is one of ours, or one of the system types; delete the rest
const keepTypes = [
  ...allMyDocumentTypes,
  'sanity.imageAsset',
  'system.group',
  'system.retention',
  'system.listener'
]

const query = `*[!(_type in [${keepTypes.map(type => `'${type}'`).join(', ')}])]`

client
  .fetch(query)
  .then(items => {
    if (!items.length) return true
    // delete every orphaned document in a single transaction
    return items
      .reduce(
        (trx, item) => trx.delete(item._id),
        client.transaction()
      )
      .commit()
      .then(() => console.log('Done!'))
  })
  .catch(errorHandler)

Speaking of blockContent vs text fields

Default Sanity projects come with a blockContent structure (more formally known as Portable Text) that allows your admins to enter rich text via a built-in WYSIWYG editor. It’s neat, but when you want to render a text-only preview of that content you need to strip out everything but the text.

const blocksToText = (blocks = []) =>
  blocks
    .map(block =>
      block._type !== 'block' || !block.children
        ? ''
        : block.children.map(child => child.text).join(' ')
    )
    .join('\n\n')

export default blocksToText

And a companion helper to check whether a block field is effectively empty:

const isBlockEmpty = block =>
  !block ||
  !block[0] ||
  !block[0].children ||
  !block[0].children[0] ||
  !block[0].children[0].text

export default isBlockEmpty

These helpers come together in a document type’s preview, for example:

prepare({ question, answer }) {
  return {
    title: `Q: ${question}?`,
    subtitle: isBlockEmpty(answer)
      ? 'unanswered'
      : blocksToText(answer)
  }
}

Conclusion

All in all I enjoyed my first dive into using Sanity instead of writing a more traditional custom API. I’ve never really been a fan of monolithic CMSs and avoid using them wherever I can, so that was never going to be an option.

Espen Hovlandsdal discusses GROQ and GraphQL at Sanity

Disclaimer

I have no commercial interest in Sanity.io and am simply writing this from the perspective of an interested developer. I have received no remuneration for writing this article.

