The missing elements in contemporary ecology: knowledge-synthesis & pilot studies

There are at least seven lucky steps to modern ecology.
1. observation
2. questions
3. hypothesis
4. predictions
5. pilot study
6. revise predictions
7. do experiment

Synthesis
One ‘innovation’ that my colleagues and I are increasingly relying on in designing experiments is knowledge-synthesis tools and repositories, including meta-analyses, systematic reviews, and data repositories we can search for datasets similar to the one we are about to collect. These scientific products (sometimes peer-reviewed publications) are profoundly shaping our designs and increasing the efficiency of the experiments we do (barring all the other limitations of field ecology). For instance, if a related meta-analysis reports mean effect sizes and explores between- versus within-study heterogeneity, we use those estimates to calibrate expectations for our particular system and design (particularly if significant within-study heterogeneity was reported in the meta-analysis and the proposed sources are ones our system might also share). If a systematic review identifies research gaps or novel forms of treatments, we consider whether it is viable for us to explore them in forthcoming experiments. Finally, published datasets, even examined cursorily, really help: we get a feel for the elements that others considered important enough to manipulate and/or measure. All that said, I think there is still a nice opportunity here for additional help for ecologists in designing better experiments – publishing pilot experiments.
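To make that calibration step concrete: a mean effect size reported in a related meta-analysis can be plugged straight into a power analysis before we commit to a design. Here is a minimal sketch in Python; the effect size and the choice of statsmodels' power module are my assumptions for illustration, not values from any particular meta-analysis.

```python
# A minimal sketch: use a mean effect size reported in a related
# meta-analysis to estimate the replication an experiment would need.
# The effect size (d = 0.4) is invented for illustration.
from statsmodels.stats.power import TTestIndPower

meta_effect_size = 0.4   # hypothetical mean Cohen's d from the meta-analysis
power_analysis = TTestIndPower()

# Replicates per treatment group needed to detect d = 0.4
# with 80% power at alpha = 0.05 (two-sided).
n_per_group = power_analysis.solve_power(
    effect_size=meta_effect_size, power=0.80, alpha=0.05
)
print(f"Replicates needed per group: {n_per_group:.0f}")  # ~99 for these values
```

Even a rough number like this tells you early whether the design you are imagining is feasible at your field site or whether the expected effect is simply too small to chase.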


Pilot experiments
A dominant and recurring theme in that model text on design is pilot experiments. My collaborators and I are often tempted to skip them, and truthfully, we do them only about 60% of the time. We need to do them more often, as we sometimes hit snags that we could have anticipated if we had run even a quick, beta version of the experiment. I know they are not always possible, but here is a quick summary from that fav design text on the topic.

Best principles in the design of pilot studies (adapted from that text)
(1) Try all treatments in full. Often it is only in applying them that you realize where a design will fail.

(2) Ensure that the process you are keen to examine is actually in effect at the local/relevant scale at which you will be executing the study.

(3) Use pilot studies to calibrate the balance between replication and the number of treatment levels within a factor.

(4) Build and test your datasheets too. Develop codes/shorthand if needed. Ensure that all the third variables that matter are recorded/built into the sheets as well.

(5) Use the pilot study to calibrate effort (i.e. how long it takes to apply treatments or measure responses/patterns). Estimate the frequency of the processes you are interested in measuring as well.

(6) Use pilot studies to ensure all researchers are calibrated.

(7) It is also an opportunity to quickly practice your statistical analyses to ensure that they will provide the summary of the evidence you need to test your predictions (see the sketch after this list).
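For principles (3) and (7) in particular, even a simulated ‘pilot’ is useful: generate data at pilot scale and run the exact analysis you intend to use on the full experiment. A minimal sketch in Python, with the treatment means, noise, and replication all invented for illustration:

```python
# A minimal sketch for principles (3) and (7): simulate pilot-scale data
# and run the intended analysis before collecting real data. All values
# (3 levels, 4 replicates, effect and noise sizes) are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
level_means = [0.0, 0.5, 1.0]   # hypothetical treatment-level means
n_rep = 4                       # replicates per level in the pilot

groups = [rng.normal(loc=m, scale=1.0, size=n_rep) for m in level_means]

# The analysis we intend to use on the full experiment (one-way ANOVA).
f_stat, p_value = stats.f_oneway(*groups)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
```

Rerunning this with fewer levels and more replicates (or vice versa) is a cheap way to feel out the replication-versus-levels trade-off before a single quadrat goes in the ground.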

Given all that, I think a key element missing from the literature is these pilot studies. PeerJ PrePrints is an emerging venue for some of them, but most of what is posted there feels like proper, full studies to me. PLOS ONE has also significantly evolved from its inception and become a far more robust (and more intensively reviewed) venue. Consequently, I sometimes wish I could get a quick feel for how to tackle new experimental designs by skimming pilot studies. Methods in Ecology and Evolution definitely helps if there is an appropriate paper, but if not, we are stuck.


Summary solutions
Share your pilot experiments in some form. How could we do this? I propose there are at least three ways.

1. Publish the pilot dataset with well-written metadata. This would be an immense resource to others. Consider tagging it ‘pilot study’ or as a preliminary/scoping exercise in some form (a minimal sketch of what that could look like follows this list).
2. Write up the idea and design along with the recursive design decisions (like a PRISMA report for filtering studies in a synthesis). A new journal like ‘design decisions in ecology & evolution’ would be great, but there are already a load of journals out there. Instead, consider sharing it with others in the field for feedback, popping it on GitHub, or sharing it on social media; if substantive decisions were made that are informative, publish in Ideas in Ecology & Evolution or as a pre-print on PeerJ.
3. Sketch up the design with notes and publish it on figshare as a figure. I have yet to do this, but maybe we should start doing so for the community. I put all figures on the lab notebook/blog (www.ecoblender.org), but there is no reason the reach should not be wider. These figures would also be a resource for instructors of ecology by showing the structured thinking often associated with testing a hypothesis via experimentation.
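For option 1, the tagging could be as simple as a machine-readable metadata file sitting next to the dataset. A minimal sketch in Python, with entirely hypothetical file names, fields, and values:

```python
# A minimal sketch for option 1: save a pilot dataset alongside
# machine-readable metadata tagged as a pilot study. File names,
# fields, and data rows are all hypothetical.
import csv
import json

metadata = {
    "title": "Pilot: shrub facilitation of annual plants",  # hypothetical
    "study_type": "pilot study",                             # the tag proposed above
    "variables": {
        "microsite": "shrub vs open (treatment)",
        "density": "annual plant density per 0.5 m^2 quadrat",
    },
    "n_replicates": 4,
    "notes": "Scoping exercise only; not intended as a full study.",
}

with open("pilot_metadata.json", "w") as f:
    json.dump(metadata, f, indent=2)

with open("pilot_data.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["microsite", "replicate", "density"])
    writer.writerow(["shrub", 1, 12])  # invented example rows
    writer.writerow(["open", 1, 7])
```

A ‘study_type’ field like this would let others filter pilots out of (or into) their searches deliberately, rather than mistaking a scoping exercise for a full experiment.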

Articulated datasets (even pilot ones), sketches, and design decisions are an important set of elements within our preliminary and finalized workflows. The vast majority of published workflows are not in ecology (for instance, see the repository myexperiment.org), but contemporary ecology is rich and complex enough to accommodate sharing these processes. Managers and decision makers could also use these products to infer best practices and to weight evidence when shaping policy. Imagine an appendix for every primary study, analogous to the appendices in synthesis papers that list excluded studies and the criteria used, illuminating the decisions associated with design. There are tools out there for us, such as Kepler, but I rarely see their products in peer-reviewed publications. Perhaps we can explore the value of adapting directed acyclic graphs or decision trees for design in ecology as a standard (albeit optional) product that we share for every field/lab/greenhouse experiment we complete.
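To illustrate the directed-acyclic-graph idea, a design history could be encoded as a small DAG whose topological order reads as the narrative from question to final design. A minimal sketch using the networkx package; the nodes and decisions are invented:

```python
# A minimal sketch of encoding design decisions as a directed acyclic
# graph. Node labels are hypothetical; networkx is assumed installed.
import networkx as nx

design = nx.DiGraph()
design.add_edges_from([
    ("question: does shrub cover facilitate annuals?",
     "pilot: 2 microsites x 4 reps"),
    ("pilot: 2 microsites x 4 reps",
     "decision: drop mid-density treatment level"),
    ("pilot: 2 microsites x 4 reps",
     "decision: sample in April, not May"),
    ("decision: drop mid-density treatment level",
     "final design: 2 x 2 factorial, n = 10"),
    ("decision: sample in April, not May",
     "final design: 2 x 2 factorial, n = 10"),
])

assert nx.is_directed_acyclic_graph(design)

# Topological order reads as the design narrative, question to final design.
for step in nx.topological_sort(design):
    print(step)
```

Even this toy version makes the point: each edge is a documented, shareable design decision rather than an invisible choice buried in a methods section.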

