Elm uses decoders to translate JSON into Elm data structures. There are two common ways of writing decoders: classical and pipeline. How do the two relate to each other?
The setup
We have an Elm record that models a user in our system with a name and age. Pretty boring.
type alias User =
    { name : String
    , age : Int
    }
The API we’re integrating with returns the same data as JSON, but in a somewhat different shape: the key names are different and the age is nested.
{
  "first_name": "Joël",
  "age": {
    "sun_cycles": 42,
    "moon_cycles": 547
  }
}
No problem. Elm’s JSON decoders are built to deal with this.
Classical decoders
We want to extract two pieces of information out of the JSON and use them to construct a User record.
We can use Json.Decode.field to read a field at the root of the current object and Json.Decode.at to read a nested field. These helpers allow us to write individual decoders for the name and age.
-- name
Json.Decode.field "first_name" Json.Decode.string
-- age
Json.Decode.at ["age", "sun_cycles"] Json.Decode.int
Now we need a way to combine multiple decoders together. The Json.Decode.map2 helper is exactly what we need: we give it a 2-argument function that it can apply to the values decoded by 2 decoders.
userDecoder : Decoder User
userDecoder =
    Json.Decode.map2 (\name age -> { name = name, age = age })
        (Json.Decode.field "first_name" Json.Decode.string)
        (Json.Decode.at ["age", "sun_cycles"] Json.Decode.int)
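We can sanity-check this decoder in the Elm REPL with Json.Decode.decodeString (the sample JSON below is the one from the setup, collapsed into a triple-quoted string):

Json.Decode.decodeString userDecoder
    """{ "first_name": "Joël", "age": { "sun_cycles": 42, "moon_cycles": 547 } }"""
--> Ok { name = "Joël", age = 42 }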
There’s a map3 function if we wanted to apply a 3-argument function to 3 decoders, a map4 for 4-argument functions, and so on. This is the classical approach to writing decoders.
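For reference, here are the signatures of the first two mapN functions in Json.Decode; each one takes an N-argument function followed by N decoders:

map2 : (a -> b -> value) -> Decoder a -> Decoder b -> Decoder value
map3 : (a -> b -> c -> value) -> Decoder a -> Decoder b -> Decoder c -> Decoder value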
Pipeline decoding
Eventually, however, you might want to combine more decoders than the largest mapN provided by the elm/json library (which stops at map8). What can we do when we run out of maps?
One solution is to define a special helper like this:
andMap : Decoder a -> Decoder (a -> b) -> Decoder b
andMap =
    Json.Decode.map2 (|>)
We can refactor our decoder from earlier to use this helper. We’ve just invented pipeline decoding.
userDecoder : Decoder User
userDecoder =
    Json.Decode.succeed (\name age -> { name = name, age = age })
        |> andMap (Json.Decode.field "first_name" Json.Decode.string)
        |> andMap (Json.Decode.at ["age", "sun_cycles"] Json.Decode.int)
This will scale up as large as we need. Are we combining 100 decoders? We can do that! We would pass a 100-argument function to succeed and follow that with 100 pipes to andMap.
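It can help to trace the types through the pipeline to see why this works. Each pipe to andMap feeds one decoded value to the function sitting inside the decoder:

-- Decoder (String -> Int -> User)
Json.Decode.succeed (\name age -> { name = name, age = age })
    -- filling in the name leaves: Decoder (Int -> User)
    |> andMap (Json.Decode.field "first_name" Json.Decode.string)
    -- filling in the age leaves: Decoder User
    |> andMap (Json.Decode.at ["age", "sun_cycles"] Json.Decode.int)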
Nicer pipelines
As we start thinking about scaling the pipeline approach, we may notice that the combination of andMap with field or at shows up all the time. Let’s extract those combinations into nicely named helpers:
required : String -> Decoder a -> Decoder (a -> b) -> Decoder b
required fieldName decoder =
    andMap (Json.Decode.field fieldName decoder)

requiredAt : List String -> Decoder a -> Decoder (a -> b) -> Decoder b
requiredAt path decoder =
    andMap (Json.Decode.at path decoder)
Armed with these helpers, we can refactor our decoder one more time. This version looks much cleaner, with fewer parentheses and nested function calls.
userDecoder : Decoder User
userDecoder =
    Json.Decode.succeed (\name age -> { name = name, age = age })
        |> required "first_name" Json.Decode.string
        |> requiredAt ["age", "sun_cycles"] Json.Decode.int
This pattern is exactly what the NoRedInk/elm-json-decode-pipeline library provides, along with a few extra helpers in that style. When you hear people say they are using “pipeline style” decoders, they usually mean decoders written with this library.
Note: If you dig around the source of that library, you’ll find that their equivalent to the andMap I’ve shown here is called custom.
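Written with the library’s own helpers, our decoder is nearly identical; here we also pass the User constructor generated by the type alias to succeed, instead of writing a lambda:

import Json.Decode exposing (Decoder, int, string, succeed)
import Json.Decode.Pipeline exposing (required, requiredAt)

userDecoder : Decoder User
userDecoder =
    succeed User
        |> required "first_name" string
        |> requiredAt ["age", "sun_cycles"] int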
Pipeline vs Classical
A big advantage of the pipeline approach is that your decoders can keep growing as large as you need them to. They tend to be terser than their classical equivalents. Because they sit at a higher level of abstraction, they can be easier for a newcomer to start using once they’ve seen the pattern once or twice. However, this also means they can be harder for newcomers to use when doing less common operations. It’s also much harder to learn how they work under the hood (hence the need for this article and the one on andMap).
On the other hand, you tend to get more concrete error messages when using the various mapN functions, and it’s easier to learn how they work.
Pipeline decoders are not a replacement for the whole elm/json library. They mostly replace the pattern of combining multiple decoders with mapN functions. The other helpers are still commonly used in combination with pipeline decoders, particularly Json.Decode.andThen.
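For example, here’s a sketch (v1Decoder and v2Decoder are hypothetical pipeline decoders for two shapes of the payload) that uses Json.Decode.andThen to pick a decoder based on a version field:

userDecoder : Decoder User
userDecoder =
    Json.Decode.field "version" Json.Decode.int
        |> Json.Decode.andThen
            (\version ->
                case version of
                    1 ->
                        v1Decoder

                    2 ->
                        v2Decoder

                    _ ->
                        Json.Decode.fail ("unknown version: " ++ String.fromInt version)
            )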