Working with the ‘plyr’ package

Author

Atefeh Anisi

Published

February 16, 2022

Prompt:

The plyr package has by now been replaced by other, even faster packages, but the idea of split-apply-combine is as relevant as ever.

Read the paper The Split-Apply-Combine Strategy for Data Analysis by Hadley Wickham.

Write a blog post addressing the following questions:

  1. The R code for the split-apply-combine paper is posted with the paper. Pick one of the examples demonstrating plyr functionality (such as dlply or ddply, …) and rewrite the example using functionality from the package dplyr. Make sure that your example works and the results are identical.

First, we need to call libraries.

library(ggplot2)
library(plyr)
library(dplyr)
#> 
#> Attaching package: 'dplyr'
#> The following objects are masked from 'package:plyr':
#> 
#>     arrange, count, desc, failwith, id, mutate, rename, summarise,
#>     summarize
#> The following objects are masked from 'package:stats':
#> 
#>     filter, lag
#> The following objects are masked from 'package:base':
#> 
#>     intersect, setdiff, setequal, union

Baseball case study from the paper

The example shows that we can subset the data for a single id and then apply the transformation to that specific subset. If we wanted to do this for every id, we would have to repeat these two lines of code many times, or write a for loop over the ids, as sketched below.


# Subset Babe Ruth's records and add cyear, the year within his career
baberuth <- subset(baseball, id == "ruthba01")
baberuth <- transform(baberuth, cyear = year - min(year) + 1)
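
For illustration, here is a sketch of that repetitive approach (the object names are hypothetical): the loop simply repeats the two lines above for every id and stacks the pieces back together.

# Manual loop: subset and transform each id, then rbind the pieces
pieces <- list()
for (i in unique(baseball$id)) {
  player <- subset(baseball, id == i)
  pieces[[i]] <- transform(player, cyear = year - min(year) + 1)
}
baseball_loop <- do.call(rbind, pieces)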

The call to ddply() from the plyr package below does the same thing much more easily, with shorter and less complex code. ddply() takes the whole data frame as input, splits it by the given variable (id), applies the transformation to each piece independently, and finally combines the pieces back together.

# Split by id, apply the transformation, combine the results
baseball <- ddply(baseball, .(id), transform,
  cyear = year - min(year) + 1)

We can also do the same with dplyr: group_by() splits the data frame by id, and mutate() applies the transformation within each group before the groups are combined back into a single tibble.


# dplyr equivalent: group by id, compute cyear within each group
baseball <- baseball %>%
  group_by(id) %>%
  mutate(cyear = year - min(year) + 1)
baseball
#> # A tibble: 21,699 × 23
#> # Groups:   id [1,228]
#>    id     year stint team  lg        g    ab     r     h   X2b   X3b    hr   rbi
#>    <chr> <int> <int> <chr> <chr> <int> <int> <int> <int> <int> <int> <int> <int>
#>  1 aaro…  1954     1 ML1   NL      122   468    58   131    27     6    13    69
#>  2 aaro…  1955     1 ML1   NL      153   602   105   189    37     9    27   106
#>  3 aaro…  1956     1 ML1   NL      153   609   106   200    34    14    26    92
#>  4 aaro…  1957     1 ML1   NL      151   615   118   198    27     6    44   132
#>  5 aaro…  1958     1 ML1   NL      153   601   109   196    34     4    30    95
#>  6 aaro…  1959     1 ML1   NL      154   629   116   223    46     7    39   123
#>  7 aaro…  1960     1 ML1   NL      153   590   102   172    20    11    40   126
#>  8 aaro…  1961     1 ML1   NL      155   603   115   197    39    10    34   120
#>  9 aaro…  1962     1 ML1   NL      156   592   127   191    28     6    45   128
#> 10 aaro…  1963     1 ML1   NL      161   631   121   201    29     4    44   130
#> # … with 21,689 more rows, and 10 more variables: sb <int>, cs <int>, bb <int>,
#> #   so <int>, ibb <int>, hbp <int>, sh <int>, sf <int>, gidp <int>, cyear <dbl>
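
The prompt asks us to confirm that the results are identical. Here is a quick sanity check, as a sketch: both results are recomputed from the original data, arranged the same way (ddply() reorders rows by id), and the dplyr tibble is converted back to a plain data frame before comparing.

# Recompute both results from the original data and compare them
res_plyr <- ddply(plyr::baseball, .(id), transform,
  cyear = year - min(year) + 1)
res_dplyr <- plyr::baseball %>%
  group_by(id) %>%
  mutate(cyear = year - min(year) + 1) %>%
  ungroup() %>%
  as.data.frame()
all.equal(arrange(res_plyr, id, year, stint),
  arrange(res_dplyr, id, year, stint))
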
  2. Which (base R) functions do you know that support the split-apply-combine strategy? In your opinion, are these sufficient? State why or why not?

Until now, I had only used base R functions like apply() and lapply(). On their own, they are insufficient when we constantly work with many kinds of inputs and want many kinds of outputs. There are also occasions when we only need to apply a function to a portion of the data, or need to apply several distinct functions to different portions: we have to divide the data into manageable pieces, apply a particular function to each piece separately, and combine all the pieces again, and apply() and lapply() leave the splitting and combining entirely to the user. In this assignment, I did the same operation with the group_by() and mutate() functions. It worked, but using ddply() from the plyr package was much easier and more integrated. Depending on the data frame and the functions you want to apply to its pieces, stitching base functions together can become very complicated and error-prone.
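
For illustration, here is a base-R sketch of the same computation using split(), lapply(), and do.call(); it works, but the split and combine steps are explicit bookkeeping left to the user:

# Base-R split-apply-combine: split by id, transform each piece, rbind
pieces <- split(plyr::baseball, plyr::baseball$id)
pieces <- lapply(pieces, function(df) {
  transform(df, cyear = year - min(year) + 1)
})
combined <- do.call(rbind, pieces)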

  3. The ChatGPT response to the prompt “Describe the split-apply-combine paradigm” is given below. Generally, the chatbot’s answers are decent. ChatGPT does have problems getting details right. Which details would you change in the response?

The “split-apply-combine” paradigm is a data analysis strategy used in statistics and data science to analyze grouped data. It consists of three steps:

Split: The data is first split into groups based on one or more variables of interest.

Apply: A specific operation or function is applied to each group of data, usually for the purpose of aggregating, summarizing, or transforming the data within each group.

Combine: The results of the operation applied to each group are then combined and returned as a single output.

The split-apply-combine paradigm is particularly useful in cases where the data cannot be analyzed as a whole and instead must be analyzed in smaller, more manageable chunks. This allows for more efficient data processing and improved scalability. The paradigm is commonly implemented using tools such as the “groupby” function in the R and Python programming languages, and the “dplyr” library in R.

I asked ChatGPT again, and here is what I got:

The split-apply-combine paradigm is a data analysis approach used in the field of statistics and data science to analyze and summarize data. This approach consists of three main steps, which are:

Split: The first step involves splitting the data into smaller subsets based on one or more grouping variables. For example, if we have a data set containing information about sales transactions for a company, we could split the data by product category or by region.

Apply: The second step involves applying a function to each of the subsets generated in the first step. The function could be a statistical measure, such as mean or standard deviation, or a custom function that performs a specific analysis on the data. The idea is to obtain summary statistics or other insights for each subset of the data.

Combine: The final step involves combining the results obtained in the second step into a single output. This could involve summarizing the results in a table or graph, or performing further analysis on the summarized data.

The split-apply-combine paradigm is particularly useful when working with large datasets, as it allows us to analyze the data in smaller, more manageable subsets. It is also a very flexible approach, as it can be applied to many different types of data and analysis problems.

This is very similar to the first response. I think it is a good answer overall: it explains what the paradigm is, which field it belongs to, and what its major steps are. One detail I would change in the first response is the claim about a “groupby” function in R: base R has no groupby; group_by() comes from the dplyr package, while groupby is the pandas method in Python. If I wanted to add more detail, it would be an explanation of the different types of data the strategy can work with, plus the names of a few packages in Python and R that are the best choices when applying this strategy.
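
To make the chatbot’s sales example concrete, here is a toy sketch (with made-up, hypothetical data) of the three steps: split the transactions by region, apply mean(), and combine the results into one summary table.

# Split by region, apply mean(), combine into a one-row-per-group table
sales <- data.frame(
  region = c("East", "East", "West", "West", "West"),
  amount = c(100, 150, 200, 250, 300)
)
sales %>%
  group_by(region) %>%
  summarise(mean_amount = mean(amount))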

You can write your answers directly in the README.Rmd file. Make sure that the file knits (you will need to activate any packages you use in your code chunks with a call to library(xxx), where xxx is the name of the package, such as plyr). Commit your changes and push them to your repo; add any files in the README_files directory to your repository.