Thomas Brasington
Opinion · Design Systems

Leave design token organisation to the machine

July 22, 2022

Naming tokens for any system is complex: the level of specificity, the order of the parts, and the consistency across the whole system all need deciding.

A humanoid robot assembling and stacking boxes on top of each other.
Image courtesy of DALL-E 2

As Nathan Curtis' article on naming highlights, there are several ways to do it, and the depth and length of token names can grow quickly.

Why do we do this? Well, it helps us self-document our decisions; what a token does is in its name.

When we need to make new UI elements, we know what tokens to use for a background, text, corner radius, etc.

And from a multi-brand and theming point of view, it provides a hook for managing the swapping of values, which speeds up testing different token values.

If you have ever managed tokens in a spreadsheet, a JSON file, or even your design tool's menu system — finding the one you are looking for can be difficult, particularly if you start adding tokens for every component variant and state.

The reason for the length and verbosity is that we need namespaces to avoid collisions in the software that compiles and interprets the tokens.

We become machine-like so that a machine can turn our intent into visuals.

And then there are the hours spent working out how to name everything. By trying to be efficient, scalable, and organised, we end up spending a lot of time working out which one we need to use.

I don't think we can get away from the low-level setup, as tokens are inherently low-level, but perhaps there are methods and tooling to help us humans.

The design tokens working group is tackling this with the idea of grouping and nesting.

Being able to group and nest tokens is vital because those decisions live outside any particular tool, but it has also led me to think that these groupings can be organised around the idea of the design system's API surface.

Not every token needs to be known by every user. As tokens are grouped and nested, they only need to follow the naming convention within their own grouping.

With that in mind, here are the principles guiding my work:

  • think about the API surface layer of a token: who and what needs to consume, use, or read it?
  • let a machine compile the tokens
  • use technology like interfaces in TypeScript to help with naming consistency at a group and composite level (see the sketch after this list)
  • isolate token naming to each component in smaller objects to keep it understandable
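
As a rough sketch of the interface idea from that list (the names here are hypothetical, not taken from any published library), a shared shape can keep every component's interactive colour group consistent:

// Hypothetical shape: every interactive surface exposes the same colour slots,
// so component token groups cannot drift in naming or completeness.
interface InteractiveSurfaceTokens {
  backgroundColor: string;
  color: string;
  borderColor: string;
}

// A composite group is then checked by the compiler rather than by convention
// alone; a missing or misnamed slot becomes a type error.
interface ButtonColorTokens {
  default: InteractiveSurfaceTokens;
  hover: InteractiveSurfaceTokens;
  disabled: InteractiveSurfaceTokens;
}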

When I think about users, I think of these four roles in determining the API surface layer:

  1. Creating and managing tokens at their smallest level (think radius, spacing)
  2. Creating tokens that map to component variants and states
  3. Creating experiences using those components
  4. The software presenting all of the above to the end user

Decisions made by role 1 represent and abstract the brand's visual identity, providing the building blocks for any components designed by role 2.
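
For illustration, role 1's output might be a small primitive palette plus a helper along the lines of colorToVariable, which the component examples below lean on. This is a minimal sketch under my own assumptions; the hex values and CSS variable scheme are placeholders, not the real implementation:

// Hypothetical primitive palette owned by role 1; the values are placeholders.
const primitives = {
  blue1: "#1d4ed8",
  blue2: "#3b82f6",
  white0: "#ffffff",
  grey4: "#9ca3af",
  grey6: "#4b5563",
  grey8: "#1f2937",
} as const;

// Resolve a primitive name to a CSS custom property reference, so a theme can
// swap the underlying value without touching any component tokens.
const colorToVariable = (name: keyof typeof primitives): string =>
  `var(--color-${name}, ${primitives[name]})`;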

// my button tokens can follow standard properties
const ButtonBaseTokens = {
  default: {
    backgroundColor: colorToVariable("blue1"),
    color: colorToVariable("white0"),
    borderColor: colorToVariable("blue1"),
  },
  ...
}

// while my container is a bit more abstract and provides directional guides 
export const ContainerTokens = {
  base: {
    text: colorToVariable("blue1"),
    textSecondary: colorToVariable("blue2"),
    textTertiary: colorToVariable("grey4"),
    background: colorToVariable("grey8"),
    divider: colorToVariable("grey6"),
    outline: colorToVariable("grey6"),
  },
  ...
}

With role 2, tokens begin to be assembled into the states, variants, and colour schemes.
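
A sketch of what that assembly can look like for colour schemes (the scheme names and structure are my own assumptions, reusing the colorToVariable helper from earlier):

// Hypothetical colour schemes sharing one shape: only the values change,
// never the structure or the names.
type ContainerColorScheme = {
  text: string;
  background: string;
  divider: string;
};

const containerSchemes: Record<"light" | "dark", ContainerColorScheme> = {
  light: {
    text: colorToVariable("blue1"),
    background: colorToVariable("white0"),
    divider: colorToVariable("grey6"),
  },
  dark: {
    text: colorToVariable("white0"),
    background: colorToVariable("grey8"),
    divider: colorToVariable("grey6"),
  },
};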

I can focus on the organisation and naming that makes sense for the component, while the interfaces and machine compilation help keep consistency and traceability.

I only need to look up the definitions within the isolated token components and assemble them appropriately in whichever tool I am working in.

So far, I have found this particularly helpful when using design tools like Figma.

There is no need to pollute the style palette with hundreds of colour tokens for the different variants and states.

I define that the hover state of a button uses blue.2, so in Figma, I can set the background colour of that component state to be blue.2. It does not need a further abstraction.

Then for the code/token dictionary object, I can follow suit for any future component or application that needs to consume them.
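
A sketch of how that might be mirrored in the dictionary (the object and state names are assumptions, reusing colorToVariable from earlier):

// Hypothetical: the hover state records the same blue2 decision made in Figma,
// with no extra layer of abstraction in between.
const ButtonStateTokens = {
  default: { backgroundColor: colorToVariable("blue1") },
  hover: { backgroundColor: colorToVariable("blue2") },
};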

While this is in TypeScript, I can write a generator to output it to JSON following the Design Token specification.
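
A minimal generator sketch, assuming the draft specification's $value and $type fields and a colour-only tree; the function and type names here are mine, not from an existing tool:

// Hypothetical generator: nested groups become spec groups, string leaves
// become colour tokens carrying $value and $type.
type TokenTree = { [key: string]: string | TokenTree };
type SpecNode = { $value: string; $type: "color" } | { [key: string]: SpecNode };

const toSpec = (tree: TokenTree): { [key: string]: SpecNode } => {
  const out: { [key: string]: SpecNode } = {};
  for (const [key, value] of Object.entries(tree)) {
    out[key] =
      typeof value === "string" ? { $value: value, $type: "color" } : toSpec(value);
  }
  return out;
};

// e.g. JSON.stringify(toSpec(ContainerTokens), null, 2) could be written out
// to a tokens.json file for other tools to consume.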

Then, as a creator of experiences (role 3), I no longer need to worry about tokens, because I am using components that have limited the API surface layer to a few fundamental properties affecting what end users see. Ideally, anything that needs a token has already been set in the components I use.

Finally, the API implementation can expose all of the tokens; to the machine, those long names don't matter (we, as creators of the API, just need to ensure collisions are avoided and lookups work).
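
For example, a compile step along these lines (a sketch; the separator and prefixing convention are assumptions) can generate the long, collision-free names so nobody has to type or memorise them:

// Hypothetical flattening step: nested component tokens become long,
// namespaced names that only the machine ever reads or compares.
const flatten = (
  tree: Record<string, unknown>,
  prefix: string[] = [],
): Record<string, string> =>
  Object.entries(tree).reduce<Record<string, string>>((acc, [key, value]) => {
    const path = [...prefix, key];
    return typeof value === "string"
      ? { ...acc, [path.join("-")]: value }
      : { ...acc, ...flatten(value as Record<string, unknown>, path) };
  }, {});

// flatten({ button: ButtonBaseTokens }) yields keys such as
// "button-default-backgroundColor"; checking for duplicate keys is where
// collision handling would live.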

There are some pitfalls to this approach; I expect to find more:

  • Convention is still required in role 2. Even though there is freedom in the structure, some consistency helps when different people are working on it. Colour remains challenging, especially when factoring in that components may have different colour modes on top of themes.
  • It requires code to compile the tokens and is somewhat locked to the framework's components, so additional work is needed to create appropriate generators for other languages.

However, I am hoping these trade-offs are OK: by leaning on the machine to do the heavy parsing of naming structures, we as creators can spend our time delivering products for our users.

Get in touch

mail@tbrasington.com