Steps to update a meta-analysis or systematic review

You completed a systematic review or meta-analysis using a formal workflow. You won the lottery in big-picture thinking and perspective. However, time passes during peer review or revision, and most fields publish prolifically, even monthly, within your journal set. You need to update the search. Here is a brief workflow for that process.

Updating a search

  1. Revisit the same bibliometrics tool you initially used, such as Scopus or Web of Science.
  2. Record the date of the current search and contrast it with the previously documented search date.
  3. Repeat the queries with exactly the same terms. Use the ‘refine’ function or a specific ‘timespan’ restricted to the years since the last search. For instance, the last search for an ongoing synthesis was September 2019, and we are revising now in January 2020; typically, I use a fuzzy filter and just search 2019-2020, which generates some overlap. The R package wosr is an excellent resource for interacting with Web of Science and enables reproducibility. The function ‘query_wos’ is fantastic, and you can restrict the timespan with the publication-year field tag in the query, e.g. PY = (2019-2020).
  4. Use a resource that reproducibly enables matching to explore overlaps between the first set of studies examined and the current updated search. I use the ‘duplicatedMatching’ function from the R package bibliometrix, and if there is uncertainty, I then manually check via DOI matching in R.
  5. Once you have generated your setdiff, examine the new entries, collect data, and update both the meta-data and primary dataframes (a minimal R sketch of this update workflow follows below).
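
Here is a minimal R sketch of steps 1 to 5, assuming Web of Science credentials stored in environment variables and bibtex exports of the old and new searches; the topic query and file names are placeholders for your own synthesis.

```r
# Sketch of an update search: same query terms, restricted to the update window.
library(wosr)
library(bibliometrix)

sid <- auth(Sys.getenv("WOS_USERNAME"), Sys.getenv("WOS_PASSWORD"))

# Placeholder topic query; the PY field tag restricts the publication years.
res <- query_wos('TS = ("shrub facilitation") AND PY = (2019-2020)', sid = sid)
res$rec_cnt   # how many records the update window returns

# Previous and updated record sets exported from Web of Science as bibtex.
old_records <- convert2df("search_2019.bib", dbsource = "wos", format = "bibtex")
new_records <- convert2df("search_2020.bib", dbsource = "wos", format = "bibtex")

# Flag likely duplicates across the combined set (title matching), then fall
# back to manual DOI checks for anything uncertain.
combined <- rbind(old_records, new_records)
deduped  <- duplicatedMatching(combined, Field = "TI", tol = 0.95)

# The setdiff: DOIs present only in the updated search, to screen and extract.
new_only <- setdiff(new_records$DI, old_records$DI)
```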

Implications

  1. Science is rapid and evolving; some disciplines publish upwards of 1,200 papers per month.
  2. Consider adding a search date to your dataframe. It would be informative to examine the rate at which one can update synthesis research.
  3. Repeat formal syntheses, and test whether outcomes are robust.
  4. Examine cumulative meta-analytical statistics (a brief sketch follows this list).
  5. Ensure your code/workflow for synthesis is resilient to change and replicable through time – you never know how long reviews will take if you are trying to publish your synthesis.
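
For item 4, here is a minimal sketch of a cumulative meta-analysis; metafor is my suggestion (the post does not name a package), and the effect sizes, variances, and years are hypothetical.

```r
library(metafor)

# Hypothetical effect sizes (yi), sampling variances (vi), and publication years,
# already ordered by year.
dat <- data.frame(
  yi   = c(0.21, 0.35, 0.10, 0.44, 0.05),
  vi   = c(0.04, 0.05, 0.03, 0.06, 0.02),
  year = c(2015, 2016, 2017, 2018, 2019)
)

res <- rma(yi, vi, data = dat)

# Re-fit the model adding one study at a time (data are in publication-year
# order) to see whether the pooled estimate is robust as the synthesis grows.
cumul(res)
```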

Vision statement for Ecological Applications @ESAApplications journal and ideas for @ESA_org

Philosophy. Know better, do better.

Applied science has an obligation to engender social good. These pathways can include knowledge mobilization, mode-2 scientific production, transparency, addressing the reproducibility crisis in science, promoting diversity and equity through representation, and enabling discovery through both theory and application of ecological principles. Evidence-based decision making can leverage the work published in Ecological Applications. However, evidence-informed decision making that uses ecological principles and preliminary evidence as a means to springboard ideas and more rapidly respond to global challenges is also needed. Science is not static, and the frame-rate of changes and challenges is exceptionally rapid. We cannot always (ever) afford to wait for sufficient, deep evidence, and in ecological applications, we need to share what clearly works, what can work, and also what did not work. This is a novel paradigm for publishing in a traditional journal. We are positioned with innovations in ecology such as more affordable sensor technology, R, citizen science, novel big data streams from the Earth Sciences, and team science to provide insight-level data and update data and findings over time. An applied journal need not become fully open access or entirely based on open science practices (although we must strive for these ideals), but it should at least provide some capacity within the journal to interact with policy, decision processes, and dialogue to promote the work published and to advance societal knowledge.

Proposed goals: content

  1. Leverage the ‘communications’ category of publications to hone insights in the field and advance insights that are currently data limited.
  2. Invite stakeholders and policy practitioners to more significantly contribute to communications reacting and responding to evidence and highlighting evidence (similar to ‘letters to the editor’) from a constructive and needs-based perspective.
  3. Provide the capacity for authors of article publications to update contributions with a new category of paper entitled ‘application updates’.
  4. Look to other applied science journals such as Cell for insights. This journal for instance includes reviews, perspectives, and primers as contributions. It also has a strong thematic and special issue focus to organize content.
  5. In addition to an Abstract, further develop the public text box model to describe highlights, challenges, and next steps for every article.
  6. Expand the breadth of the ‘open research’ section of contributions to include code, workflows, field methods, photographs, or any other research product that enables reproducibility.
  7. Explore a mechanism to share applications that were unsuccessful or emerging but not soundly confirmed.
  8. Explore a new ‘short synthesis’ contribution format that examines aggregated evidence. This can include short-format reviews, meta-analyses, systematic reviews, evidence maps, descriptions of new evidence sets that support ecological applications or policy, and descriptions of compiled qualitative evidence for a contemporary challenge.

Proposed goals: process

  1. Accelerate handling time (currently, the peer-review process suggests three weeks for referees). Reduce editor review time to two weeks and referee turnaround time to two weeks.
  2. Remove formatting requirements for initial submission.
  3. Remove the cover letter requirement. Instead, include a short form in the ScholarOne submission system with three brief fields wherein the authors propose the implications and explain why the specific contribution is a good fit for this journal.
  4. Allow submission of a single review solicited by the authors. This review must be signed and does not count toward journal review process but can be a brilliant mechanism to inform editor-level review.
  5. Data must be made available at the time of submission. This can be a private link to data or published in a repository with limited access until acceptance. It is so useful to be able to ‘see’ data, literally, in table format to understand how and what was interpreted and presented.
  6. Consider double-blind review.
  7. Develop more anchors or hooks in papers that can be reused and leveraged for policy. This can include specific reporting requirements such as plot/high-level sample sizes (N), total sample sizes of subjects (n), clear reporting of variance, and, where possible, an effect size metric, even one as simple as the net percent change of the primary intervention or application.
  8. The current offerings are designated by contribution type such as article, letter, etc. However, once viewing a paper, the reader must best-guess from the title, abstract, and keywords how the paper contributes to application. A system of simple badges would visually signal to readers, and to those who seek to reuse content, what a paper addresses. These badges can be placed above the title alongside the access and licensing designation badges. Categories of badges can include an icon for biome/ecosystem, methods, R or code used, immediately actionable, mode-2 collaboration, and theory.
  9. Expand the SME board further. Consider accept-without-review as a mechanism to fast-track contributions that are critical and the most relevant. This would include an editor-only, exceptionally rapid review process.
  10. Engage with ESA, other journals, and community to develop and offer more needs-driven special issues.

Landscapes are changing and people are always part of the picture.
Science is an important way of knowing and interacting with natural systems.
Not everything needs fixing.

Steps to update a manuscript that was hung up in peer review forever and then rejected (or just neglected for a long time)

Sometimes, peer review (and procrastination) help. Other times, the delays generate more net work. I was discussing this workflow with a colleague regarding a paper that was submitted two years ago and rejected, after which we both ran out of steam. This was the gold-standard workflow we proposed (versus reformatting and submitting to another journal immediately).

Workflow

  1. Hit Web of Science and check for new papers on the topic.
  2. Download the pdfs.
  3. Read them.
  4. Think about what to cite or add.
  5. Add citations and rebuild biblio.
  6. Update writing to mention new citations especially if they are really relevant (intro and discussion).
  7. Take whatever pearls of wisdom you can from the rejection in the first place and revise ideas, plots, or stats.
  8. Format for new journal.
  9. Check requirements for that journal.
  10. Search the table of contents for the journal and check your lit cited to ensure you cite a few papers from that journal – if not, assess whether that is the right journal for this contribution.
  11. Download pdfs from new journal, read, cite, and interpret.
  12. Then, look up referees and emails.
  13. Write cover letter.
  14. Set up account for that new and different annoying journal system – register and wait.
  15. Fight with system to submit and complete all the little boxes/fields.

Prepping data for #rstats #tidyverse and a priori planning

messy data can be your friend (or frenemy)

Much, if not most, data clean-up, tidying, wrangling, and joining can be done directly in R. There are many advantages to this approach – i.e. read in data in whatever format (from excel to json to zip) and then do your tidying – including transparency, a record of what you did, reproducibility (if you ever have to do it again for another experiment, or someone else does), and reproducibility again if your data get updated and you must rinse and repeat! Additionally, an all-R workflow forces you to think about your data structure, the class of each vector (or what each variable means/represents), and missing data, and it facilitates better QA/QC. Finally, your data assets are then ready to go for the #tidyverse once read in, for joining, mutating in derived variables, or reshaping. That said, I propose that good thinking precedes good rstats. For me, this ripples backwards from projects where I did not think ahead sufficiently; I certainly got the job done with R, and the tidyverse in particular, but it took time that could have been better spent on the statistical models later in the workflow. Consequently, here are some tips I have been thinking about this season of data collection and experimental design: pre-R steps for R, shaped by how I like to subsequently join and tidy.

Tips

  1. Keep related data in separate csv files.
    For instance, I have a site with 30 long-term shrubs for which I measure morphology and growth, interactions and associations with the plant and animal community, and microclimate. I keep each set of data in a separate csv (no formatting, which keeps it simple and reads in well), including shrub_morphology.csv, associations.csv, and microclimate.csv. This matches how I collect the data in the field, represents the levels of thinking and sampling, and, because I sometimes sample asynchronously, I open each only as needed. You could instead have a single excel file with multiple sheets and read those in using R, but I find that things tend to get lost, and it is harder for me to check sheets in parallel than to open each file side-by-side and do some thinking. Plus, this format ensures and reminds me to write a meta-data file for each data stream.
  2. Always have a key vector in each data file that represents the unique sampling instance.
    I like the #tidyverse dplyr::*_join family to put together my different data files (see the read-and-join sketch after this list). In the shrub example, the 30 individual shrubs structure all observations of how much they grow, what other plants and animals associate with them, and what the microclimate looks like under their canopy, so I have a vector entitled shrub_ID that demarcates each instance in space that I sample. I often also have a fourth, descriptive data file for field sampling that uses the same unique ID as the key, wherein I add lat, long, qualitative observations, disturbances, or other supporting data.
  3. Ensure each vector/column is a single class.
    You can resolve this issue later, but I prefer to keep each vector a single class, i.e. all numeric, all character, or all date and time.
  4. Double-code confusing vectors for simplicity and error checking.
    I double-code date and time vectors to simpler vectors just to be safe. I try to use readr functions like read_csv that make minimal, and more often than not correct, assumptions about the class of each vector. However, to be safe with vectors I have struggled with in the past (and fixed in R using tidy tools or others), I now just set up secondary columns that match my experimental design and sampling. If I visit my site with 30 shrubs three times in a growing season, I have a date vector that captures this rich and accurate sampling process, i.e. August 14, 2019 in a row, but I also prefer a census column that holds 1, 2, or 3 for each row. This helps me recall how often I sampled when I reinspect the data and also provides a means for quick tallies and other tools. Sometimes, if I know it is long-term data spanning many years, I also add a simple year column that lists 2017, 2018, and 2019. Yes, I can reverse engineer this in R, but I like the structure – like a backbone or scaffold for my dataframe that supports my thinking about statistics to match the design.
  5. Keep track of total unique observation instances.
    I like tidy data. In each dataframe, I like a vector that provides a total tally of the length of the data as a representation of unique observations. You can wrangle this in later, and this vector does not replace the unique ID key vector at all. In planning an experiment, I do some math: one site, 30 shrubs, 3 census events per season/year, and a total of 3 years, so 1 x 30 x 3 x 3 should be 270 unique observations or rows. I hardcode that into the data to ensure that I did not miss or forget to collect data. It is also fulfilling to have them all checked off. A double-check using tibble::rowid_to_column should confirm that count (see the tally-check sketch after this list), and, further to tip #2, you can have a variable or set of variables to join different dataframes. This becomes fundamentally useful if I measured shrub growth and climate three times each year for three years in my join (i.e. I now have a single observation_ID vector I generated that should match my hardcoded collection_ID column, and I can ensure it lines up with the census column too, per year). A tiny bit of redundancy just makes it so much easier to check for missing data later.
  6. Leave blanks blank. This ensures your data code true and false zeros correctly (for me, a true zero means I observed a zero, i.e. no plants under the shrub at all, versus missing data) and also sticks to tip #3. My quick a priori rule, annotated in the meta-data for each file, is that missing data are coded as blank (i.e. no entry in that row/instance, but I still keep the unique_ID and the row as a placeholder) and an observed zero in the field or during the experiment is coded as 0. Do not record ‘NA’ as characters in a numeric column in the csv because it flips the entire vector to character; read_csv and other functions sort this out better with blanks anyway. I can also impute missing values if needed by leaving blanks blank.
  7. Never delete data. Further to tip #1, and other ideas described in R for Data Science, once I plan my experiment and decide on my data collection and structural rules a priori, the data are sacred and stay intact. I have had colleagues delete data they did not ‘need’ once they derived their site-level climate estimates (then lived to regret it), or delete rows because they were blank (not omitted in the #rstats workflow, but opened up the data file and deleted). Sensu tip #5, I like the tally that matches the designed plan for the experiment, and I prefer to preserve the data structure.
  8. Avoid automatic factor assignments. Further to simple data formats like tip #1 and tip #4, I prefer to read in data and keep assumptions minimal until I am ready to build my models. Many packages and statistical tools do not need a vector to be factor class, and I prefer to make assignments directly to ensure the statistics match the planned design for the purpose of each variable. Sometimes, variables can be both: the growth of the shrub in my example is a response to the growing season and climate in some models but a predictor in other models, such as the effect of the shrub canopy on the other plants and animals. The R package forcats can help you out when you need to work through these decisions and challenges with levels within factors (a short sketch follows this list).
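
Here is a minimal read-and-join sketch for tips 1 and 2, assuming each csv carries the shrub_ID key (and a census column if you follow tip 4); the column names are illustrative.

```r
library(readr)
library(dplyr)

# Each data stream lives in its own simple csv (tip 1).
morphology   <- read_csv("shrub_morphology.csv")
associations <- read_csv("associations.csv")
microclimate <- read_csv("microclimate.csv")

# Join the streams on the shared key vector (tip 2).
shrubs <- morphology %>%
  left_join(associations, by = c("shrub_ID", "census")) %>%
  left_join(microclimate, by = c("shrub_ID", "census"))
```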
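
The tally check from tips 4 to 6 can look like this sketch: the expected count is hardcoded from the design (1 site x 30 shrubs x 3 censuses x 3 years = 270 rows), rowid_to_column generates the observation ID, and read_csv reads blank cells as NA by default. The collection_ID, census, and year columns are the hand-entered ones described above.

```r
library(readr)
library(dplyr)
library(tibble)

shrubs <- read_csv("shrub_morphology.csv")   # blanks come in as NA, observed zeros stay 0

# Design-based expectation: 1 site x 30 shrubs x 3 censuses x 3 years.
expected_n <- 1 * 30 * 3 * 3   # 270 unique observations

# Generate an observation ID and compare it to the hardcoded collection_ID.
shrubs <- rowid_to_column(shrubs, "observation_ID")
nrow(shrubs) == expected_n
all(shrubs$observation_ID == shrubs$collection_ID)

# Quick tallies by year and census help spot missing sampling events.
count(shrubs, year, census)
```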
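
And a short sketch for tip 8: keep the variable as character at read-in and make the factor assignment explicitly at modeling time (the treatment column and its 'control' level are hypothetical).

```r
library(dplyr)
library(forcats)

# Assign factor levels deliberately so the reference level matches the design.
shrubs <- mutate(shrubs,
                 treatment = fct_relevel(factor(treatment), "control"))

levels(shrubs$treatment)
```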

Putting the different pieces together in science and data science is important. The construction of each project element, including the design of the experiment, evidence and data collection, and the #rstats workflow for data wrangling, viz, and statistical models, suggests that a little thinking and planning beforehand (like visual lego instruction guides) ensures that all these different pieces fit together in the process of project building and writing. Design them so that they connect easily.

Sometimes you can get away without instructions and that is fun, but jamming pieces together that do not really fit and trying to pry them apart later is never really fun.

#rstats adventures in the land of @rstudio shiny (apps)

Preamble
Colleagues and I had some sweet telemetry data, we did some simple models (& some relatively more complex ones too), we drew maps, and we wrote a paper. However, I thought it would be great to also provide stakeholders with the capacity to engage with the models, data, and maps. I published the data with a DOI, published the code at Zenodo (& online at GitHub), and submitted the paper to a journal. We elected not to pre-print because this particular field of animal ecology is not an easy place. My goal was to rapidly spin up some interactive capacity via two apps.

Adventures
The map app is simple but was really surprising once rendered: the findings were very different and much clearer through interactivity. This was a fascinating adventure!
The model app, exploring the distribution of data and the resource selection function application for this species, confirmed what we concluded in the paper.

Workflow
The shiny app development flow is straightforward, and I like the logic!
1. Use RStudio
2. Set up a shinyapps.io account (free for up to 25 active hours per month)
3. Set up a single R script with three elements (a minimal skeleton follows these steps)
(i) ui
(ii) server
(iii) generate the app (typically a single line)
4. Click run app in RStudio to see it.
5. Test and play.
6. Publish (click publish button).
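
Here is a minimal single-file skeleton of those three elements; the slider, plot output, and placeholder data are illustrative only.

```r
library(shiny)

# (i) ui: a slider input and a plot output
ui <- fluidPage(
  titlePanel("Shrub growth explorer"),
  sliderInput("bins", "Number of bins:", min = 5, max = 50, value = 20),
  plotOutput("growth_hist")
)

# (ii) server: render the plot from the user input
server <- function(input, output) {
  output$growth_hist <- renderPlot({
    hist(rnorm(270), breaks = input$bins,
         main = "Placeholder data", xlab = "growth")
  })
}

# (iii) generate the app (the single line)
shinyApp(ui = ui, server = server)
```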

There is a bit more to it but not much more.

Rationale
A user interface makes it an app (haha), the server serves up the rstats of your work, and the final line generates the app using the shiny package. I could have published an interactive html page on GitHub and used plotly, leaflet, etc., but I wanted the sliders and select-input features of something more like a web app – because it is one.

The main challenge of this adventure was leaflet and reactive data
The primary challenge, adventure-time style, was the reactive data calls and leaflet. If you have to produce an interactive map that can be updated with user input, you change your workflow a tiny bit.
a. The selectInput becomes an input$var that is, in essence, the name of a vector you can use in your rstats code. This part felt intuitive within a conventional shiny app.
b. Taking advantage of user input to render an updated map is where I struggled a bit. You still use the input, but you want to filter your data to replot the map. The novel elements are introducing a reactive function call to rewrite your dataframe in the server chunk, then in leaflet first renderLeaflet the map and then use an observe function to update the map with the reactive, i.e. user-defined, subset of the data (a minimal sketch follows below). Simple in concept now that I get it, but it was still a bit tweaky to call specific elements from the reactive data for mapping.
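
Here is a minimal sketch of that pattern, assuming a dataframe called telemetry with species, lat, and long columns (placeholder names and simulated values); leafletProxy is used to redraw markers from the reactive subset.

```r
library(shiny)
library(leaflet)
library(dplyr)

# Placeholder telemetry data; in practice this is read in from file.
telemetry <- data.frame(
  species = rep(c("A", "B"), each = 50),
  lat     = runif(100, 50, 51),
  long    = runif(100, -114, -113)
)

ui <- fluidPage(
  selectInput("species", "Species:", choices = unique(telemetry$species)),
  leafletOutput("map")
)

server <- function(input, output, session) {
  # Reactive subset driven by the selectInput (input$species).
  filtered <- reactive({
    filter(telemetry, species == input$species)
  })

  # Render the base map once.
  output$map <- renderLeaflet({
    leaflet(telemetry) %>%
      addTiles() %>%
      fitBounds(~min(long), ~min(lat), ~max(long), ~max(lat))
  })

  # Observe user input and redraw markers from the reactive subset.
  observe({
    leafletProxy("map", data = filtered()) %>%
      clearMarkers() %>%
      addCircleMarkers(~long, ~lat, radius = 4)
  })
}

shinyApp(ui, server)
```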

Summary
Apps from your work can illuminate patterns for others and for you.

Apps can provide a mechanism to interact with your models and see the best fits or outcomes in a more parallel, extemporaneous capacity.

Apps are a gratifying means to make statistics and data more accessible.

Updates
Short-cut/parsimony coding: If you wrap your data script or wrangling into the renderPlot call, your data becomes reactive (without the formal reactive function).
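
A small fragment of what that shortcut looks like, reusing the hypothetical telemetry data and input$species from the sketch above; the subset lives inside renderPlot, so the plot re-executes whenever the input changes.

```r
# Wrangling inside renderPlot: no explicit reactive() needed because the
# render call re-runs whenever input$species changes.
output$lat_hist <- renderPlot({
  plot_data <- subset(telemetry, species == input$species)
  hist(plot_data$lat, main = input$species, xlab = "latitude")
})
```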

The position of scripts is important: there are numerous options for where to read in data (outside the server, inside the server function, or inside a render call), and each has consequences for performance and reactivity.

Also, consider modularizing your code.

Check out the conditionalPanel function for customization across tabPanels.