## Introduction

Rectangling is the art and craft of taking a deeply nested list (often sourced from wild-caught JSON or XML) and taming it into a tidy data set of rows and columns. There are three functions from tidyr that are particularly useful for rectangling:

* unnest_longer() takes each element of a list-column and makes a new row.
* unnest_wider() takes each element of a list-column and makes a new column.
* hoist() is similar to unnest_wider() but only plucks out selected components, and can reach down multiple levels.

A very large number of data rectangling problems can be solved by combining these functions with a splash of dplyr (largely eliminating prior approaches that combined mutate() with multiple purrr::map()s).

To illustrate these techniques, we’ll use the repurrrsive package, which provides a number of deeply nested lists, most of them originally captured from web APIs.

```r
library(tidyr)
library(dplyr)
library(repurrrsive)
```

## GitHub users

We’ll start with gh_users, a list which contains information about six GitHub users. To begin, we put the gh_users list into a data frame:

```r
users <- tibble(user = gh_users)
```

This seems a bit counter-intuitive: why is the first step in making a list simpler to make it more complicated? But a data frame has a big advantage: it bundles together multiple vectors so that everything is tracked together in a single object.

Each user is a named list, where each element represents a column.

There are two ways to turn the list components into columns. unnest_wider() takes every component and makes a new column:
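A minimal sketch of that call, assuming the setup above:

```r
library(tidyr)
library(dplyr)
library(repurrrsive)

users <- tibble(user = gh_users)

# Every component of each user's named list becomes its own column
users %>% unnest_wider(user)
```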

But in this case, there are many components and we don’t need most of them so we can instead use hoist(). hoist() allows us to pull out selected components using the same syntax as purrr::pluck():
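For example, to keep just a few fields (the choice of followers, login, and html_url here is illustrative):

```r
library(tidyr)
library(dplyr)
library(repurrrsive)

users <- tibble(user = gh_users)

# Left-hand names become columns; right-hand strings are plucked
# from each inner list, pluck()-style
users %>% hoist(user,
  followers = "followers",
  login = "login",
  url = "html_url"
)
```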

hoist() removes the named components from the user list-column, so you can think of it as moving components out of the inner list into the top-level data frame.

## Game of Thrones characters

got_chars has a similar structure to gh_users: it’s a list of named lists, where each element of the inner list describes some attribute of a GoT character. We start in the same way, first by creating a data frame and then by unnesting each component into a column:
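In code, that's something like:

```r
library(tidyr)
library(dplyr)
library(repurrrsive)

chars <- tibble(char = got_chars)

# Widen: one column per component of each character's list
chars2 <- chars %>% unnest_wider(char)
chars2
```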

This is more complex than gh_users because some components of char are themselves lists, giving us a collection of list-columns:

What you do next will depend on the purposes of the analysis. Maybe you want a row for every book and TV series that the character appears in:
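One way to sketch that, recreating the widened data frame for completeness (pivoting the two list-columns together before lengthening):

```r
library(tidyr)
library(dplyr)
library(repurrrsive)

chars2 <- tibble(char = got_chars) %>% unnest_wider(char)

# Stack books and tvSeries into one list-column, then one row per entry
chars2 %>%
  select(name, books, tvSeries) %>%
  pivot_longer(c(books, tvSeries), names_to = "media", values_to = "value") %>%
  unnest_longer(value)
```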

Or maybe you want to build a table that lets you match title to name:
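A sketch of that table, again recreating the widened data frame first:

```r
library(tidyr)
library(dplyr)
library(repurrrsive)

chars2 <- tibble(char = got_chars) %>% unnest_wider(char)

# One row per title; characters with several titles get several rows
chars2 %>%
  select(name, title = titles) %>%
  unnest_longer(title)
```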

(Note that the empty titles ("") are due to an infelicity in the input got_chars: ideally people without titles would have a title vector of length 0, not a title vector of length 1 containing an empty string.)

Again, we could rewrite using unnest_auto(). This is convenient for exploration, but I wouldn’t rely on it in the long term - unnest_auto() has the undesirable property that it will always succeed. That means if your data structure changes, unnest_auto() will continue to work, but might give very different output that causes cryptic failures from downstream functions.

## Geocoding with Google

Next we’ll tackle a more complex form of data that comes from Google’s geocoding service. It’s against the terms of service to cache this data, so I first write a very simple wrapper around the API. This relies on having a Google Maps API key stored in an environment variable; if that’s not available, these code chunks won’t be run.
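One plausible shape for such a wrapper (the environment variable name, the error handling, and the use of jsonlite are my assumptions, not necessarily the original code):

```r
# Hypothetical wrapper around the Google geocoding API; assumes a key
# stored in the GOOGLE_MAPS_API_KEY environment variable
geocode <- function(address, api_key = Sys.getenv("GOOGLE_MAPS_API_KEY")) {
  if (identical(api_key, "")) {
    stop("No Google Maps API key found", call. = FALSE)
  }
  url <- paste0(
    "https://maps.googleapis.com/maps/api/geocode/json",
    "?address=", utils::URLencode(address),
    "&key=", api_key
  )
  jsonlite::read_json(url)
}
```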

The list that this function returns is quite complex:

```r
houston <- geocode("Houston TX")
str(houston)
#> List of 2
#>  $ results:List of 1
#>   ..$ :List of 5
#>   .. ..$ address_components:List of 4
#>   .. .. ..$ :List of 3
#>   .. .. .. ..$ long_name : chr "Houston"
#>   .. .. .. ..$ short_name: chr "Houston"
#>   .. .. .. ..$ types     :List of 2
#>   .. .. .. .. ..$ : chr "locality"
#>   .. .. .. .. ..$ : chr "political"
#>   .. .. ..$ :List of 3
#>   .. .. .. ..$ long_name : chr "Harris County"
#>   .. .. .. ..$ short_name: chr "Harris County"
#>   .. .. .. ..$ types     :List of 2
#>   .. .. .. .. ..$ : chr "administrative_area_level_2"
#>   .. .. .. .. ..$ : chr "political"
#>   .. .. ..$ :List of 3
#>   .. .. .. ..$ long_name : chr "Texas"
#>   .. .. .. ..$ short_name: chr "TX"
#>   .. .. .. ..$ types     :List of 2
#>   .. .. .. .. ..$ : chr "administrative_area_level_1"
#>   .. .. .. .. ..$ : chr "political"
#>   .. .. ..$ :List of 3
#>   .. .. .. ..$ long_name : chr "United States"
#>   .. .. .. ..$ short_name: chr "US"
#>   .. .. .. ..$ types     :List of 2
#>   .. .. .. .. ..$ : chr "country"
#>   .. .. .. .. ..$ : chr "political"
#>   .. ..$ formatted_address : chr "Houston, TX, USA"
#>   .. ..$ geometry          :List of 4
#>   .. .. ..$ bounds       :List of 2
#>   .. .. .. ..$ northeast:List of 2
#>   .. .. .. .. ..$ lat: num 30.1
#>   .. .. .. .. ..$ lng: num -95
#>   .. .. .. ..$ southwest:List of 2
#>   .. .. .. .. ..$ lat: num 29.5
#>   .. .. .. .. ..$ lng: num -95.8
#>   .. .. ..$ location     :List of 2
#>   .. .. .. ..$ lat: num 29.8
#>   .. .. .. ..$ lng: num -95.4
#>   .. .. ..$ location_type: chr "APPROXIMATE"
#>   .. .. ..$ viewport     :List of 2
#>   .. .. .. ..$ northeast:List of 2
#>   .. .. .. .. ..$ lat: num 30.1
#>   .. .. .. .. ..$ lng: num -95
#>   .. .. .. ..$ southwest:List of 2
#>   .. .. .. .. ..$ lat: num 29.5
#>   .. .. .. .. ..$ lng: num -95.8
#>   .. ..$ place_id          : chr "ChIJAYWNSLS4QIYROwVl894CDco"
#>   .. ..$ types             :List of 2
#>   .. .. ..$ : chr "locality"
#>   .. .. ..$ : chr "political"
#>  $ status : chr "OK"
```

Fortunately, we can attack the problem step by step with tidyr functions. To make the problem a bit harder (!) and more realistic, I’ll start by geocoding a few cities:

```r
city <- c("Houston", "LA", "New York", "Chicago", "Springfield")
city_geo <- purrr::map(city, geocode)
```

I’ll put these results in a tibble, next to the original city name:

The first level contains components status and results, which we can reveal with unnest_wider():

Notice that results is a list of lists. Most of the cities have 1 element (representing a unique match from the geocoding API), but Springfield has two. We can pull these out into separate rows with unnest_longer():

Now these all have the same components, as revealed by unnest_wider():

We can find the lat and lon coordinates by unnesting geometry:

And then location:
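Put together, the whole pipeline looks like the sketch below. So that it runs without an API key, I substitute a hand-written stand-in with the same nesting as one geocoding result (real responses have many more components):

```r
library(tidyr)
library(dplyr)

# Hand-written stand-in mimicking the nesting of a geocoding response
fake_geo <- function(lat, lng) {
  list(
    results = list(
      list(geometry = list(location = list(lat = lat, lng = lng)))
    ),
    status = "OK"
  )
}

loc <- tibble(
  city = c("Houston", "Springfield"),
  json = list(fake_geo(29.8, -95.4), fake_geo(39.8, -89.6))
)

loc %>%
  unnest_wider(json) %>%       # reveal status and results
  unnest_longer(results) %>%   # one row per match
  unnest_wider(results) %>%    # reveal each match's components
  unnest_wider(geometry) %>%   # reveal location
  unnest_wider(location)       # reveal lat and lng
```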

Again, unnest_auto() makes this simpler with the small risk of failing in unexpected ways if the input structure changes:

We could also just look at the first address for each city:

Or use hoist() to dive deeply to get directly to lat and lng:
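A sketch of that deep hoist(), again with a minimal stand-in for a real response:

```r
library(tidyr)
library(dplyr)

# Minimal stand-in for one geocoding response
loc <- tibble(
  city = "Houston",
  json = list(list(
    results = list(list(
      geometry = list(location = list(lat = 29.8, lng = -95.4))
    )),
    status = "OK"
  ))
)

# Each list() path reaches down several levels in a single pluck
loc %>% hoist(json,
  lat = list("results", 1, "geometry", "location", "lat"),
  lng = list("results", 1, "geometry", "location", "lng")
)
```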

## Sharla Gelfand’s discography

We’ll finish off with the most complex list, from Sharla Gelfand’s discography. We’ll start the usual way: putting the list into a single-column data frame, and then widening so each component is a column. I also parse the date_added column into a real date-time.[^1]
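In code, parsing with base strptime() to avoid extra dependencies:

```r
library(tidyr)
library(dplyr)
library(repurrrsive)

discs <- tibble(disc = discog) %>%
  unnest_wider(disc) %>%
  mutate(date_added = as.POSIXct(strptime(date_added, "%Y-%m-%dT%H:%M:%S")))
discs
```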

At this level, we see information about when each disc was added to Sharla’s discography, not any information about the disc itself. To get at that, we need to widen the basic_information column:

Unfortunately that fails because there’s an id column inside basic_information. We can quickly see what’s going on by setting names_repair = "unique":

The problem is that basic_information repeats the id column that’s also stored at the top-level, so we can just drop that:
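Dropping the top-level id before widening (recreating discs here for completeness):

```r
library(tidyr)
library(dplyr)
library(repurrrsive)

discs <- tibble(disc = discog) %>% unnest_wider(disc)

# Drop the top-level id so the one inside basic_information can
# become a column without a name clash
discs %>%
  select(!id) %>%
  unnest_wider(basic_information)
```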

Alternatively, we could use hoist():
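For example (the specific components plucked here are my choice for illustration):

```r
library(tidyr)
library(dplyr)
library(repurrrsive)

discs <- tibble(disc = discog) %>% unnest_wider(disc)

# Pull out a handful of components, reaching into the first
# label and artist for their names
discs %>%
  hoist(basic_information,
    title = "title",
    year = "year",
    label = list("labels", 1, "name"),
    artist = list("artists", 1, "name")
  )
```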

Here I quickly extract the name of the first label and artist by indexing deeply into the nested list.

A more systematic approach would be to create separate tables for artist and label:

```r
discs %>%
  hoist(basic_information, artist = "artists") %>%
  select(disc_id = id, artist) %>%
  unnest_longer(artist) %>%
  unnest_wider(artist)
#> # A tibble: 167 x 8
#>     disc_id join  name        anv   tracks role  resource_url            id
#>       <int> <chr> <chr>       <chr> <chr>  <chr> <chr>                <int>
#>  1  7496378 ""    Mollot      ""    ""     ""    https://api.discog… 4.62e6
#>  2  4490852 ""    Una Bèstia… ""    ""     ""    https://api.discog… 3.19e6
#>  3  9827276 ""    S.H.I.T. (… ""    ""     ""    https://api.discog… 2.77e6
#>  4  9769203 ""    Rata Negra  ""    ""     ""    https://api.discog… 4.28e6
#>  5  7237138 ""    Ivy (18)    ""    ""     ""    https://api.discog… 3.60e6
#>  6 13117042 ""    Tashme      ""    ""     ""    https://api.discog… 5.21e6
#>  7  7113575 ""    Desgraciad… ""    ""     ""    https://api.discog… 4.45e6
#>  8 10540713 ""    Phantom He… ""    ""     ""    https://api.discog… 4.27e6
#>  9 11260950 ""    Sub Space … ""    ""     ""    https://api.discog… 5.69e6
#> 10 11726853 ""    Small Man … ""    ""     ""    https://api.discog… 6.37e6
#> # … with 157 more rows
```

```r
discs %>%
  hoist(basic_information, format = "formats") %>%
  select(disc_id = id, format) %>%
  unnest_longer(format) %>%
  unnest_wider(format) %>%
  unnest_longer(descriptions)
#> # A tibble: 281 x 5
#>     disc_id descriptions text  name     qty
#>       <int> <chr>        <chr> <chr>    <chr>
#>  1  7496378 Numbered     Black Cassette 1
#>  2  4490852 LP           <NA>  Vinyl    1
#>  3  9827276 "7\""        <NA>  Vinyl    1
#>  4  9827276 45 RPM       <NA>  Vinyl    1
#>  5  9827276 EP           <NA>  Vinyl    1
#>  6  9769203 LP           <NA>  Vinyl    1
#>  7  9769203 Album        <NA>  Vinyl    1
#>  8  7237138 "7\""        <NA>  Vinyl    1
#>  9  7237138 45 RPM       <NA>  Vinyl    1
#> 10 13117042 "7\""        <NA>  Vinyl    1
#> # … with 271 more rows
```

Then you could join these back on to the original dataset as needed.

[^1]: I’d normally use readr::parse_datetime() or lubridate::ymd_hms(), but I can’t here because this is a vignette and I don’t want to add a dependency to tidyr just to simplify one example.