library(plyr)
library(dplyr)
#>
#> Attaching package: 'dplyr'
#> The following objects are masked from 'package:plyr':
#>
#> arrange, count, desc, failwith, id, mutate, rename, summarise,
#> summarize
#> The following objects are masked from 'package:stats':
#>
#> filter, lag
#> The following objects are masked from 'package:base':
#>
#> intersect, setdiff, setequal, union
Prompt:
The plyr package has by now been replaced by other, even faster packages, but the idea of split-apply-combine is as relevant as ever. Read the paper The Split-Apply-Combine Strategy for Data Analysis by Hadley Wickham. Write a blog post addressing the following questions:

- The R code for the split-apply-combine paper is posted with the paper. Pick one of the examples demonstrating plyr functionality (such as `dlply` or `ddply`, …) and rewrite the example using functionality from the package `dplyr`. Make sure that your example works and the results are identical.

## Example from the paper (plyr package)
# Baseball case study: Babe Ruth only =============================
baberuth_plyr <- subset(baseball, id == "ruthba01")
baberuth_plyr <- transform(baberuth_plyr, cyear = year - min(year) + 1)

# Baseball case study: all players ================================
baseball_plyr <- ddply(baseball, .(id), transform,
                       cyear = year - min(year) + 1)
## Rewrite with dplyr package
baberuth_dplyr <- baseball %>%
  filter(id == "ruthba01") %>%
  mutate(cyear = year - min(year) + 1)

# Baseball: all players
baseball_dplyr <- baseball %>%
  group_by(id) %>%
  mutate(cyear = year - min(year) + 1)
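To confirm that the results really are identical, a quick check can be run (a sketch, assuming the objects created above; note that `ddply()` returns its result ordered by the grouping variable, while `group_by()` + `mutate()` keeps the original row order, so both are sorted before comparing):

```r
# sort both results the same way, drop the grouping attribute from the
# dplyr version, and compare the two data frames
all.equal(
  arrange(baseball_plyr, id),
  baseball_dplyr %>% ungroup() %>% arrange(id) %>% as.data.frame()
)
```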
- Which (base R) functions do you know that support the split-apply-combine strategy? In your opinion, are these sufficient? State why or why not?
- lapply(): applies a function to each element of a list (or vector) and returns the results as a list.
- sapply(): a wrapper around lapply() that simplifies the result to a vector, matrix, or array, depending on the structure of the output.
- tapply(): applies a function to subsets of a vector defined by a grouping factor and returns the results as an array.
These functions cover the basic split-apply-combine steps, but anything beyond a simple summary quickly requires fairly complex code, for example splitting a data frame, looping over the pieces, and reassembling them by hand. They are also less convenient and less efficient than plyr and dplyr, so in my opinion they are not sufficient on their own.
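To illustrate the point, this is roughly what the cyear example from above looks like in base R only (a sketch using `split()`, `lapply()`, and `do.call()`; it assumes the `baseball` data from plyr is loaded):

```r
# split-apply-combine by hand: split by player, add cyear within each
# piece, then bind the pieces back together
pieces <- split(baseball, baseball$id)
pieces <- lapply(pieces, function(df) {
  df$cyear <- df$year - min(df$year) + 1
  df
})
baseball_base <- do.call(rbind, pieces)
rownames(baseball_base) <- NULL
```

It works, but it is clearly more verbose than the single `ddply()` call or the `group_by()` + `mutate()` pipeline.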
- The ChatGPT response to the prompt “Describe the split-apply-combine paradigm” is given below. Generally, the chatbot’s answers are decent. ChatGPT does have problems getting details right. Which details would you change in the response?
The “split-apply-combine” paradigm is a data analysis strategy used in statistics and data science to analyze grouped data. It consists of three steps:
Split: The data is first split into groups based on one or more variables of interest.
Apply: A specific operation or function is applied to each group of data, usually for the purpose of aggregating, summarizing, or transforming the data within each group.
Combine: The results of the operation applied to each group are then combined and returned as a single output.
The split-apply-combine paradigm is particularly useful in cases where the data cannot be analyzed as a whole and instead must be analyzed in smaller, more manageable chunks. This allows for more efficient data processing and improved scalability. The paradigm is commonly implemented using tools such as the “groupby” function in the R and Python programming languages, and the “dplyr” library in R.
Change: The main detail I would fix is the last sentence: "groupby" is a pandas (Python) method, not an R function. In R, the aggregation, completion, and modelling of grouped data are carried out using the split-apply-combine strategy, which provides a clear, flexible, and efficient way to perform complex data analysis and is implemented in packages such as plyr, dplyr, and data.table.
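For instance, in R the grouping step is written with `group_by()` from dplyr (or `split()`/`tapply()` in base R), not a "groupby" function; a minimal sketch on the `baseball` data:

```r
# group_by() splits by player, summarise() applies the function and
# combines the results into one row per player
baseball %>%
  group_by(id) %>%
  summarise(mean_ab = mean(ab, na.rm = TRUE))
```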
You can write your answers directly in the `README.Rmd` file. Make sure that the file knits (you will need to activate any packages you use in your code chunks with a call to `library(xxx)`, where xxx is the name of the package, such as plyr). Commit your changes and push to your repo; add any files in the `README_files` directory to your repository.