Prepping data for #rstats #tidyverse and a priori planning

messy data can be your friend (or frenemy)

Many, if not most, data clean-up, tidying, wrangling, and joining tasks can be done directly in R. There are many advantages to this approach – i.e. read in data in whatever format (from excel to json to zip) and then do your tidying – including transparency, a record of what you did, reproducibility (if you ever have to do it again for another experiment, or someone else does), and reproducibility again if your data get updated and you must rinse and repeat. Additionally, an all-R workflow forces you to think about your data structure, the class of each vector (i.e. what each variable means/represents), and missing data, and it facilitates better QA/QC. Finally, once read in, your data assets are ready for the #tidyverse for joining, mutating in derived variables, or reshaping. That said, I propose that good thinking precedes good #rstats. For me, this ripples backwards from projects where I did not think ahead sufficiently; I certainly got the job done with R, and the tidyverse in particular, but it took time that could have been better spent on the statistical models later in the workflow. Consequently, here are some tips I have been thinking about this season of data collection and experimental design – tips that are pre-R for R and reflect how I like to subsequently join and tidy.

Tips

  1. Keep related data in separate csv files.
    For instance, I have a site with 30 long-term shrubs where I measure morphology and growth, interactions/associations with the plant and animal community, and microclimate. I keep each set of data in a separate csv (no formatting, keeps it simple, reads in well): shrub_morphology.csv, associations.csv, and microclimate.csv. This matches how I collect the data in the field, represents the levels of thinking and sampling, and, because I sometimes sample asynchronously, I open each only as needed. You could instead have a single excel file with multiple sheets and read those in using R, but I find there is a tendency for things to get lost, and it is hard to check sheets in parallel versus just opening each file side by side and doing some thinking. Plus, this format ensures, and reminds me, to write a meta-data file for each data stream.
  2. Always have a key vector in each data file that represents the unique sampling instance.
    I like the #tidyverse dplyr::*_join family of functions to put together my different data files (see the read-and-join sketch after this list). Here is an explanation of the workflow. In the shrub example, the 30 individual shrubs structure all observations of how much they grow, what other plants and animals associate with them, and what the microclimate looks like under their canopy, so I have a vector entitled shrub_ID that demarcates each instance in space that I sample. I often also have a fourth, descriptive data file for field sampling that uses the same unique ID as the key, wherein I add lat, long, qualitative observations, disturbances, or other supporting data.
  3. Ensure each vector/column is a single class.
    You can resolve this issue later, but I prefer to keep each vector a single class, i.e. all numeric, all character, or all date and time.
  4. Double-code confusing vectors for simplicity and error checking.
    I double-code date and time vectors into simpler vectors just to be safe. I try to use readr functions like read_csv that make minimal and, more often than not, correct assumptions about the class of each vector. However, to be safe with vectors that I have struggled with in the past (and fixed in R with tidy tools), I now just set up secondary columns that match my experimental design and sampling. If I visit my site with 30 shrubs three times in a growing season, I have a date vector that captures this rich and accurate sampling process, i.e. August 14, 2019 as a row, but I also prefer a census column that has 1, 2, or 3 for each row. This helps me recall how often I sampled when I reinspect the data and also provides a means for quick tallies and other tools. Sometimes, if I know it is long-term data over many years, I also add a simple year column that lists 2017, 2018, and 2019. Yes, I can reverse engineer this in R, but I like the structure – like a backbone or scaffold to my dataframe – to support my thinking about statistics that match the design.
  5. Keep track of total unique observation instances.
    I like tidy data. In each dataframe, I like a vector that provides a total tally of the length of the data as a representation of unique observations. You can wrangle this in later, and this vector does not replace the unique ID key vector at all. In planning an experiment, I do some math: one site, 30 shrubs, 3 census events per season/year, and a total of 3 years, so 1 x 30 x 3 x 3 should be 270 unique observations or rows. I hardcode that into the data to ensure that I did not miss or forget to collect data. It is also fulfilling to have them all checked off. A double-check using tibble::rowid_to_column should confirm that count, and, further to tip #2, you can use a variable or set of variables to join different dataframes. This becomes fundamentally useful if I measured shrub growth and climate three times each year for three years: in my join, I now have a single observation_ID vector I generated that should match my hardcoded collection_ID data column, and I can ensure it lines up with the census column per year too (see the count-check sketch after this list). A tiny bit of redundancy just makes it so much easier to check for missing data later.
  6. Leave blanks blank. This ensures your data code true and false zeros correctly (for me this means an observed zero, i.e. no plants under the shrub at all, versus missing data) and also sticks to tip #3. My quick a priori rule, which I annotate in the meta-data for each file, is that data missing altogether are coded as blank (i.e. no entry in that row/instance, but I still have the unique_ID and the row there as a placeholder), and an observed zero in the field or during the experiment is coded as 0. Do not record 'NA' as characters in a numeric column in the csv because it can flip the entire vector to character; read_csv and other functions sort this out better with blanks anyway (see the blanks sketch after this list). Leaving blanks blank also lets me impute missing values if needed.
  7. Never delete data. Further to tip #1, and other ideas described in R for Data Science, once I plan my experiment and decide on my data collection and structural rules a priori, the data are sacred and stay intact. I have had colleagues delete data they did not ‘need’ once they derived their site-level climate estimates (and then lived to regret it) or delete rows because they were blank (not omitting them in the #rstats workflow, but opening up the data file and deleting them). Sensu tip #5, I like the tally that matches the designed plan for the experiment, and I prefer to preserve the data structure.
  8. Avoid automatic factor assignments. Further to simple data formats like tips #1 and #4, I prefer to read in data and keep assumptions minimal until I am ready to build my models. Many packages and statistical tools do not need a vector to be factor class, and I prefer to make assignments directly to ensure the statistics match the planned design and the purpose of each variable (see the factor sketch after this list). Sometimes, variables can be both. The growth of the shrub in my example is a response to the growing season and climate in some models but a predictor in other models, such as the effect of the shrub canopy on the other plants and animals. The r-package forcats can help you out when you need to make these decisions and work with levels within factors.
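
Further to tips #1-#3, here is a minimal read-and-join sketch. It assumes the three csv files named above and that each file carries the shrub_ID key from tip #2 plus the census column from tip #4; the handling is illustrative only:

library(readr)
library(dplyr)

# tip 1: each data stream lives in its own simple csv
shrub_morphology <- read_csv("shrub_morphology.csv")
associations <- read_csv("associations.csv")
microclimate <- read_csv("microclimate.csv")

# tip 3: confirm each column was read in as a single, expected class
spec(shrub_morphology)

# tip 2: the shrub_ID key (plus census for repeated sampling) joins the data streams
shrubs <- shrub_morphology %>%
  left_join(associations, by = c("shrub_ID", "census")) %>%
  left_join(microclimate, by = c("shrub_ID", "census"))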
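
Further to tips #4 and #5, a quick sketch of the count checks, using the joined dataframe from the sketch above and the hardcoded collection_ID column described in tip #5:

library(dplyr)
library(tibble)

# the a priori math: 1 site x 30 shrubs x 3 censuses x 3 years
1 * 30 * 3 * 3   # 270 expected rows

# generate a row id and check it against the hardcoded collection_ID column
shrubs <- shrubs %>% rowid_to_column(var = "observation_ID")
nrow(shrubs)   # should be 270
all(shrubs$observation_ID == shrubs$collection_ID)

# quick tallies by the double-coded year and census columns
shrubs %>% count(year, census)   # n should be 30 for each year-by-census combination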
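
For tip #6, note that read_csv already treats blank cells as missing; this sketch simply makes the default na argument explicit (the plant_count column is hypothetical):

library(readr)

# blanks ("") come in as NA, observed zeros stay 0, and the column stays numeric
associations <- read_csv("associations.csv", na = c("", "NA"))
sum(is.na(associations$plant_count))   # tally of truly missing observations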
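
For tip #8, a sketch of making factor assignments deliberately at modelling time rather than on read-in (the microsite column and its levels are hypothetical):

library(dplyr)
library(forcats)

# assign factors explicitly, just before fitting models, with the reference level you intend
shrubs <- shrubs %>%
  mutate(
    census = factor(census),
    microsite = fct_relevel(microsite, "open", "shrub")
  )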

Putting the different pieces together in science and data science is important. The construction of each project element, including the design of the experiment, the evidence and data collection, and the #rstats workflow for data wrangling, viz, and statistical models, suggests that a little thinking and planning beforehand (like visual lego instruction guides) ensures that all the different pieces fit together in the process of project building and writing. Design them so that they connect easily.

Sometimes you can get away without instructions and that is fun, but jamming pieces together that do not really fit and trying to pry them apart later is never really fun.

#rstats adventures in the land of @rstudio shiny (apps)

Preamble
Colleagues and I had some sweet telemetry data, we did some simple models (& some relatively more complex ones too), we drew maps, and we wrote a paper. However, I thought it would be great to also provide stakeholders with the capacity to engage with the models, data, and maps. I published the data with a DOI, published the code at zenodo (& online at GitHub), and submitted the paper to a journal. We elected not to pre-print because this particular field of animal ecology is not an easy place. My goal was to rapidly spin up some interactive capacity via two apps.

Adventures
The map app is simple but was really surprising once rendered. The findings came through very differently, and much more clearly, with interactivity. This was a fascinating adventure!
The model app, exploring the distribution of the data and the resource selection function application for this species, confirmed what we concluded in the paper.

Workflow
The shiny app development flow is straightforward, and I like the logic!
1. Use RStudio
2. Set up a shinyapps.io account (free for up to 25 hrs of total use per month)
3. Set up a single R script with three elements
(i) ui
(ii) server
(iii) generate app (typically single line)
4. Click 'Run App' in RStudio to see it.
5. Test and play.
6. Publish (click publish button).

There is a bit more to it but not much more.

Rationale
A user interface makes it an app (haha), the server serves up the #rstats side of your work, and the final line generates the app using the shiny package. I could have published an interactive html page on GitHub and used plotly and leaflet etc., but I wanted the sliders and select-input features of a web app – because that is what it is.
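
Here is a minimal sketch of that single-script structure; the slider and histogram are placeholders rather than the telemetry apps themselves:

library(shiny)

# (i) ui: the user interface
ui <- fluidPage(
  titlePanel("Minimal sketch"),
  sliderInput("bins", "Number of bins:", min = 5, max = 50, value = 20),
  plotOutput("distPlot")
)

# (ii) server: serves up the #rstats work
server <- function(input, output) {
  output$distPlot <- renderPlot({
    hist(rnorm(500), breaks = input$bins, main = NULL)
  })
}

# (iii) generate the app (typically a single line)
shinyApp(ui = ui, server = server)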

Main challenge of the adventure: leaflet and reactive data
The primary challenge, adventure time style, was the reactive data calls and leaflet. If you have to produce an interactive map that can be updated with user input, you change your workflow a tiny bit.
a. The selectInput becomes an input$var that is, in essence, the name of a vector you can use in your #rstats code. This part was intuitive to me from conventional shiny apps.
b. To take advantage of user input to render an updated map, I struggled a bit. You still use the input, but you want to filter your data to replot the map. The novel elements are a reactive function call that rewrites your dataframe in the server chunk and then, on the leaflet side, a renderLeaflet call for the base map followed by an observe function that updates the map with the reactive, i.e. user-defined, subset of the data. Simple in concept now that I get it, but it was still a bit tweaky to call specific elements from the reactive data for mapping (see the sketch below).
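
A minimal sketch of that pattern, assuming a telemetry dataframe with species, lat, and long columns; leafletProxy is one common way to update an already-rendered map, and the object and column names here are illustrative:

library(shiny)
library(leaflet)
library(dplyr)

# telemetry: a dataframe assumed to be read in at the top of the app script

ui <- fluidPage(
  selectInput("species", "Species:", choices = unique(telemetry$species)),
  leafletOutput("map")
)

server <- function(input, output) {
  # reactive call rewrites the dataframe whenever the user input changes
  map_data <- reactive({
    filter(telemetry, species == input$species)
  })
  # render the base map once
  output$map <- renderLeaflet({
    leaflet() %>% addTiles()
  })
  # observe the reactive subset and redraw the points on the existing map
  observe({
    leafletProxy("map", data = map_data()) %>%
      clearMarkers() %>%
      addCircleMarkers(lng = ~long, lat = ~lat)
  })
}

shinyApp(ui = ui, server = server)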

Summary
Apps from your work can illuminate patterns for others and for you.

Apps can provide a mechanism to interact with your models and see the best fits or outcomes in a more parallel, extemporaneous way.

Apps are a gratifying means of making statistics and data more accessible.

Updates
Short-cut/parsimony coding: if you wrap your data script or wrangling into the renderPlot call, your data become reactive without the formal reactive function (see the snippet below).
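
A tiny illustration of that short-cut, using mtcars as a stand-in dataset:

library(shiny)

ui <- fluidPage(
  selectInput("cyl", "Cylinders:", choices = c(4, 6, 8)),
  plotOutput("carPlot")
)

server <- function(input, output) {
  output$carPlot <- renderPlot({
    # the wrangling sits inside renderPlot, so it re-runs whenever input$cyl changes
    dat <- subset(mtcars, cyl == as.numeric(input$cyl))
    plot(dat$wt, dat$mpg, xlab = "weight", ylab = "mpg")
  })
}

shinyApp(ui = ui, server = server)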

The position of scripts is important – check this – there are numerous options for where to read in data (outside the server, inside the server, or inside a render call), and this has consequences for how often that code is re-run.

Also, consider modularizing your code.

Check out the conditionalPanel function for customization across tabPanels. There are more general tips for shiny here.

Hacking the principles of #openscience #workshops

In a previous post, I discussed the key elements that really stood out for me in recent workshops associated with open science, data science, and ecology. Summer workshop season is upon us, and here are some principles to consider that can be used to hack a workshop. These hacks can be applied a priori by an instructor or in situ by a participant or instructor, engaging with the context from a pragmatic, problem-solving perspective.

Principles

1. Embrace open pedagogy.
2. Use current best practices from traditional teaching contexts.
3. Be learner centered.
4. Speak less, do more.
5. Solve authentic challenges.

Hacks (for each principle)

1. Prepare learning outcomes for every lesson.

2. Identify solve-a-problem opportunities in advance and be open to ones that emerge organically during the workshop.

3. Use no slide decks. This challenges the instructor to engage more directly with the students and participants in the workshop and leaves space for students to shape content and narrative to some extent. Decks lock all of us in. That is appropriate for some contexts, such as conference presentations, but workshops can be more fluid and open.

4. Plan pauses. Prepare your lessons with gaps for contributions.  Prepare a list of questions to offer up for every lesson and provide time for discussion of solutions.

5. Use real evidence/data to answer a compelling question (the scale can be limited and the approach can be beta, as long as an answer is provided, and the challenge can emerge organically if the teaching is open and space is provided for the workshop participants to ideate).

A final hack, which is a more general teaching principle: consider keeping all teaching materials within a single ecosystem that references outwards only as needed. For me, this has become all content prepared in RStudio, knitted to html, then pushed to a GitHub gh-pages branch for sharing as a webpage (or site). Participants can then engage with all ideas and content, including code and data, in one place.

 

Overdispersion tests in #rstats

A brief note on overdispersion

Assumptions

The Poisson distribution assumes the variance is equal to the mean.

The quasi-Poisson model assumes the variance is a linear function of the mean.

The negative binomial model assumes the variance is a quadratic function of the mean.
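
In symbols, under the usual parameterisations: Var(Y) = mu for the Poisson, Var(Y) = phi * mu for the quasi-Poisson, and Var(Y) = mu + mu^2/theta for the (quadratic, NB2) negative binomial.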

rstats implementation

#to test, you need to fit a poisson GLM and then apply the function to that model

library(AER)

dispersiontest(object, trafo = NULL, alternative = c("greater", "two.sided", "less"))

trafo = 1 tests a linear variance function (quasipoisson style), trafo = 2 tests a quadratic one (negative binomial style), and trafo can also be supplied as a function

#interpretation

c = 0 indicates equidispersion

c > 0 indicates overdispersion (the default alternative = "greater" tests for this)
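
A quick usage sketch, assuming a count response and a treatment factor (the object and column names are illustrative):

library(AER)

m_pois <- glm(count ~ treatment, family = poisson, data = survey_data)
dispersiontest(m_pois)              # equidispersion vs a linear (quasipoisson type) alternative
dispersiontest(m_pois, trafo = 2)   # equidispersion vs a quadratic (negative binomial type) alternative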

Resources

  1. Function description from vignette for AER package.
  2. Excellent StatsExchange description of interpretation.

A note on AIC scores for quasi-families in #rstats

A summary note on a recent set of #rstats discoveries in estimating AIC scores to better understand a quasipoisson family in GLMs relative to treating the data as poisson.

Conceptual GLM workflow rules/guidelines

  1. Data are best left untransformed; fit a better model to the data instead.
  2. Select a data structure that matches the purpose of the experiment and the statistical model.
  3. Use logic and an understanding of the data, not AIC scores, to select the best model.

(1) Typically, the power and flexibility of GLMs in R (even in base R) get most of the work done for the ecological data we work with in the research team. We prefer to leave data untransformed and simple when possible and to use the family or offset arguments within GLMs to address data issues.

(2) Data structure is a new concept to us. We have come to appreciate that there are both individual- and population-level queries associated with many of the datasets we have collected. For our purposes, data structure is defined as the level at which dplyr::group_by is applied to tally or count frequencies (see the sketch below). If the ecological purpose of the experiment is defined as the population response to a treatment, for instance, then the population – not the individual organism – becomes the sample unit and is summarised as such. It is critical to match the structure of the wrangled data to the purpose of the experiment to be able to fit appropriate models. Higher-order data structures can reduce the likelihood of nested, oversampled, or pseudoreplicated model fitting.
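
A minimal sketch of what we mean by data structure, assuming individual-level counts with site and treatment columns (the names are illustrative):

library(dplyr)

# individual-level structure: one row per organism
# population-level structure: tally to the site-by-treatment level before fitting models
population_data <- individual_data %>%
  group_by(site, treatment) %>%
  summarise(total_count = sum(count), .groups = "drop")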

(3) Know thy data and experiment. It is easy to get lost in model fitting and dive deep into unduly complex models. There are tools before model fitting that can prime you for better, more elegant model fits.

Workflow

  1. Wrangle, then data viz.
  2. Use library(fitdistrplus) to explore distributions (see the sketch after this list).
  3. Select the data structure.
  4. Fit models.
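
A brief sketch of step 2, assuming a vector of counts (the object name is illustrative):

library(fitdistrplus)

# Cullen and Frey (skewness-kurtosis) plot to suggest candidate distributions
descdist(counts, discrete = TRUE)

# fit and compare candidate distributions for count data
fit_pois <- fitdist(counts, "pois")
fit_nbinom <- fitdist(counts, "nbinom")
plot(fit_pois)
plot(fit_nbinom)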

Now, specific to the topic of AIC scores for quasi-family models in field studies.

We recently selected quasipoisson as the family to model frequency and count data (for individual-level data structures). This addressed overdispersion issues within the data. AIC scores are best used for understanding prediction, not description, and logic, exploration of the distributions, CDF plots, and examination of the deviance (i.e. it should not be more than double the degrees of freedom) framed the data and model contexts. To contrast poisson with quasipoisson for prediction, i.e. would the animals respond differently to the treatments/factors within the experiment, we used the following #rstats solutions.

————

#Functions####

#deviance calculation
dfun <- function(object) {
  with(object, sum((weights * residuals^2)[weights > 0]) / df.residual)
}

#quasipoisson family that reuses the AIC from the poisson family estimation
x.quasipoisson <- function(...) {
  res <- quasipoisson(...)
  res$aic <- poisson(...)$aic
  res
}

#AIC package that provided the most intuitive solution set####
require(MuMIn)

#m is the quasipoisson GLM fitted earlier to the count data
m <- update(m, family = "x.quasipoisson", na.action = na.fail)
m1 <- dredge(m, rank = "QAIC", chat = dfun(m))
m1

#repeat as needed to contrast different models

————

Outcomes

This #rstats opportunity generated a lot of positive discussion on data structures, how we use AIC scores, and how to estimate fit for at least this quasi-family model set in as few lines of code as possible.

Resources

  1. An R vignette by Ben Bolker on quasi-family solutions.
  2. An Ecology article on quasi-Poisson versus negative binomial regression for overdispersed count data.
  3. A StatsExchange discussion on AIC scores.

Fundamentals

The same data with a different structure lead to different models. Quasipoisson is a reasonable solution for overdispersed count and frequency data in animal ecology. AIC scores take a bit of work, but not extensive code, to extract. They provide useful insight into predictive capacity when the purpose is individual-level prediction of count/frequency responses to treatments.