A vision statement describing goals for Ecology @ESAEcology #openscience

Many aspects of the journal Ecology are exceptional. It is a society journal, and that is important. The strength of research, depth of reporting, and scope of primary ecological research that informs and shapes fundamental theory have been profound. None of these benefits need to change. Nonetheless, research that supports the scientific process and engenders discovery can always evolve and must remain fluid. So must the process of scientific communication, including publication through journals. With collaborators and support from NCEAS and a large publishing company, I have participated in meta-science research examining needs and trends in the process of peer review for ecologists and evolutionary biologists, e.g. 'Behind the shroud: a survey of editors in ecology and evolution' published in Frontiers in Ecology and the Environment, and biases in peer review, e.g. 'Systematic Variation in Reviewer Practice According to Country and Gender in the Field of Ecology and Evolution' published in PLOS ONE. In total, we have published 50 peer-reviewed publications describing a path forward for ecology and evolution, particularly with respect to inclusivity, open science, and journal policy. Through this work, we have identified at least three salient elements for journals relevant to authors, referees, and editors, and four pillars for the future of scholarly publishing more broadly. The three elements for Ecology specifically are speed, recognition, and fuller, more reproducible reporting. The four pillars are an ecosystem of products, open access, open or better peer review, and recognition for participation in the process.


Goals to consider

  1. Rapid peer review with no more than 4 weeks total for first decision.
  2. A 50% editor-driven rejection rate of initial submissions.
  3. Two referees per submission if they are in agreement (there is little to no evidence that more reviewers are required).
  4. Double the 2017 impact factor to ~10 within 2 years and return to a top-10 ranking among the ~160 journals listed in the field of ecology.
  5. Further diversify the contributions to address exploration, confirmation, replication, consolidation, & synthesis.
  6. Innovate content offering to encompass more elements of the scientific process including design, schemas, workflows, ideation tools, data models, ontologies, and challenges.
  7. Allow authors to report failure and bias in process and decision making for empirical contributions.
  8. Provide additional novel material from every publication as free content, even when the publication is behind a paywall.
  9. Develop a collaborative reward system for the editorial board that capitalizes on existing expertise and produces novel scientific content such as editorials, commentaries, and reviews as outward-facing products. Include and invite referees to participate in these ‘meta’ papers because reviews are a form of critical and valuable synthesis.
  10. Promote a vision of scientific synthesis in every publication in the Discussion section of reports. Request an effect size measure for reports to provide an anchor for future reuse (i.e. use the criteria proposed in ‘Will your paper be used in a meta-analysis? Make the reach of your research broader and longer lasting’).
  11. Revise the data policy to require data deposition – at least in some form such as derived data – openly prior to final acceptance but not necessarily for initial submission.
  12. Request access to code and data for review process.
  13. Explore incentives for referees – this is a critical issue for many journals. Associate reviews with Publons or ORCID.
  14. Emulate the PeerJ model for badges and profiles for editors, authors, and referees.
  15. Remove barriers for inclusivity of authors through double-blind review.
  16. Develop an affirmative action and equity statement for existing publications and submissions to promote diversity through elective declaration statements and policy changes.
  17. All editors must complete awareness training for implicit bias. Editors can also be considered for certification awarded by the ESA based on merit of reviewing such as volume, quality of reviews, and service. Recognition and social capital are important incentives.
  18. Develop an internship program for junior scientists to participate in the review and editorial process.
  19. Explore reproducibility through experimental design and workflow registration with the submission process.
  20. Remove cover letters as a requirement for submission.

Outcomes

I value our community and the social good that our collective research, publications, and scientific outcomes provide for society. However, I am also confident that we can do more. Journals and the peer review process can function to illuminate the scientific process itself, including addressing issues associated with reproducibility and inclusivity in science. Know better, do better. It is time for scientific journals to evolve, and the journal Ecology can be a flagship for change that benefits humanity at large by informing evidence-based decision making and ecological literacy.


Hacking the principles of #openscience #workshops

In a previous post, I discussed the key elements that stood out for me in recent workshops associated with open science, data science, and ecology. Summer workshop season is upon us, and here are some principles that can be used to hack a workshop. These hacks can be applied a priori by an instructor or in situ by a participant or instructor engaging with the context from a pragmatic, problem-solving perspective.

Principles

1. Embrace open pedagogy.
2. Use current best practices from traditional teaching contexts.
3. Be learner centered.
4. Speak less, do more.
5. Solve authentic challenges.

Hacks (for each principle)

1. Prepare learning outcomes for every lesson.

2. Identify solve-a-problem opportunities in advance and be open to ones that emerge organically during the workshop.

3. Use no slide decks. This challenges the instructor to engage more directly with the students and participants in the workshop and leaves space for participants to shape content and narrative to some extent. Decks lock all of us in. Slides are appropriate for some contexts, such as conference presentations, but workshops can be more fluid and open.

4. Plan pauses. Prepare your lessons with gaps for contributions.  Prepare a list of questions to offer up for every lesson and provide time for discussion of solutions.

5. Use real evidence/data to answer a compelling question (the scale can be limited and the approach can be beta as long as an answer is provided, and the challenge can emerge organically if teaching is open and space is provided for the workshop participants to ideate).

A final hack that is a more general teaching principle: consider keeping all teaching materials within a single ecosystem that references outward only as needed. For me, this has become all content prepared in RStudio, knitted to html, then pushed to GitHub gh-pages for sharing as a webpage (or site). Participants can then engage with all ideas and content, including code and data, in one place.
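As a minimal sketch of this publishing loop, assume an RMarkdown file named index.Rmd in a repository that has a gh-pages branch (both names are hypothetical placeholders):

# render the workshop materials to html with rmarkdown
library(rmarkdown)
render("index.Rmd", output_file = "index.html")

# then publish from a terminal (hypothetical repo layout):
# git checkout gh-pages
# git add index.html
# git commit -m "update workshop materials"
# git push origin gh-pages

GitHub then serves index.html as the workshop webpage, so code, data, and narrative all live at one URL.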

 

Why read a book review when you can read the book (for free via #oa #openscience)?


Reviews, recommendations, and ratings are an important component of contemporary online consumption. Rotten Tomatoes, Metacritic, and Amazon.com reviews and recommendations increasingly shape decisions. Science and technical books are no exception. Increasingly, I check the reviews for a technical book on a purchasing site even before I download the free book. Too much information, not too little, informs many of the competing learning opportunities (#rstats, for instance). I used to check the book reviews section in journals and enjoyed reading the reviews (even if I never read the book). My reading habits have changed, and I now rarely read these sections, focusing only on target papers. This is unfortunate. I recognize that reviews are important for many science and technical products (not just books but also packages, tools, and approaches). Here is my brief listicle of why reviews are important for science books and tools.

benefit | description
curation | Reviews that are themselves reviewed and published in journals engender trust and give weight to the critique.
developments and rate of change | A book review typically frames the topic and offering of a book/tool within the progress of the science.
deeper dive into topic | The review usually speaks to a specific audience and helps one decide on fit with needs.
highlights | The strengths and limitations of the offering are described and pitfalls can be pointed out.
insights and implications | Sometimes the implications and meaning of a book or tool are not described directly. Reviews can provide this.
independent comment | Critics are infamous. In science, the opportunity to offer praise is uncommon, and reviews can provide balance.
fits offering into a specific scientific subdiscipline | Technical books can get lost because of the silo effect in the sciences. Reviews can connect disciplines.

Here is an estimate of the frequency of publication of book reviews in some of the journals I read regularly.

journal | total.reviews | recent
American Naturalist | 12967 | 9
Conservation Biology | 1327 | 74
Journal of Applied Ecology | 270 | 28
Journal of Ecology | 182 | 0
Methods in Ecology & Evolution | 81 | 19
Oikos | 211 | 22

Details of journal data scrape here: https://cjlortie.github.io/book.reviews/


A good novel tells us the truth about its hero; but a bad novel tells us the truth about its author.

–Gilbert K. Chesterton

Politics versus ecological and environmental research: invitation to submit to Ideas in Ecology and Evolution #resist

Dear Colleagues,

Ideas in Ecology and Evolution would like to invite you to submit to a special issue entitled
‘Politics versus ecological and environmental research.’

The contemporary political climate has dramatically changed in some nations. Global change marches on, and changes within each and every country influence everyone. We need to march too and can do so in many ways. There has been extensive social media discussion and political activity within the scientific community. One particularly compelling discussion is best captured by this paraphrased exchange.

“Keep politics out of my science feeds.”
“I will keep politics out of my science when politics keeps out of science.”

The latter context has never existed, but intervention, falsification by non-scientists, blatant non-truths, and threats to science have never been greater, in contemporary ecology and environmental science in particular.

Ideas in Ecology and Evolution is an open-access journal. We view the niche of this journal as a home for topics that need discussing in our discipline. Ideas are a beautiful opportunity sometimes lost to the file-drawer problem, and this journal welcomes papers without data that propose new ideas and critically comment on issues relevant to our field both directly and indirectly. Lonnie Aarssen and I are keen to capture some of the ongoing discussion and #resist efforts by our peers. We will rapidly secure two reviews for your contributions to get ideas into print now.

We welcome submissions that address any aspect of politics and ecology and the environment. The papers can include any of (but are not limited to) the following formats: commentaries, solution sets, critiques, novel mindsets, strategies to better link ecology/environmental science to political discourse, analyses of political interventions, summaries of developments, and mini-reviews that highlight ecological/environmental science that clearly supports an alternative decision.

Please submit contributions using the Open Journal System site here.

Warm regards,

Chris Lortie and Lonnie Aarssen.

A rule-of-thumb for chi-squared tests in systematic reviews

Rule

A chi-squared test with few observations is not a particularly powerful statistical test (note: it is termed both chi-square and chi-squared test depending on the discipline and source). Nonetheless, this test is useful in systematic reviews to confirm whether observed patterns in the frequency of study of a particular dimension of a topic are statistically different (at least according to about 4/10 referees I have encountered). It serves not as a vote-counting tool but as a means for the referees and readers of the review to assess whether the counts of approaches, places, species, or some other measure used in a set of primary studies differed. The mistaken rule-of-thumb is that <5 counts per cell violates the assumptions of the chi-squared test. However, an intriguing post reminds us that it is not the observed value but the expected value that must be at least 5 (blog post on the topic and statistical article describing the assumption). I propose that this is a reasonable and logical rule-of-thumb for some forms of scientific synthesis, such as systematic reviews exploring patterns of research within a set of studies – not the strength of evidence or effect sizes.

An appropriate rule-of-thumb for when you should report a chi-squared test statistic in a systematic review is thus as follows.

When doing a systematic review that includes quantitative summaries of the frequencies of various study dimensions, the total number of studies summarized (dividend) divided by the number of levels contrasted for the specific dimension tested (divisor) should be at least 5 (quotient). You are simply calculating whether the expected values can even reach 5 given your set of studies and the categorical analysis of the frequency of a specific study dimension applied during your review process.

total number of studies / number of levels contrasted for the specific study dimension >= 5

[In R, I used nrow(main dataframe) / nrow(frequency dataframe for the dimension); however, it was a bit clunky. You could use the ‘length’ function, or write a new function and use a ‘for’ loop over all factors you are likely to test.]
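As a minimal sketch, that check can be wrapped in a small reusable function; the dataframe and column names below are hypothetical:

# check whether the expected cell counts for a dimension can reach 5
expected_ok <- function(df, dimension) {
  counts <- table(df[[dimension]])   # frequency of each observed level
  nrow(df) / length(counts) >= 5     # studies / levels >= 5
}

# hypothetical set of 49 studies classified by 7 hypotheses
studies <- data.frame(hypothesis = sample(paste0("H", 1:7), 49, replace = TRUE))
expected_ok(studies, "hypothesis")   # TRUE because 49 / 7 = 7

# loop over every dimension you plan to test, e.g.
# for (d in c("hypothesis", "country", "taxa")) print(expected_ok(studies, d))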

Statistical assumptions aside, it is also reasonable to propose a practical rule-of-thumb for literature syntheses (systematic reviews and meta-analyses): at least 5 completed studies should test each specific level of the factor or attribute summarized.

Example

For example, my colleagues and I were recently doing a systematic review that captured a total of 49 independent primary studies (GitHub repo). We wanted to report the frequencies with which the specific topic was tested by each specific hypothesis (as listed by the primary authors), and there were a total of 7 different hypotheses tested within this set of studies. The division rule-of-thumb for statistical reporting in a review was applied, 49/7 = 7, so we elected to report a chi-squared test in the Results of the manuscript. Other interesting dimensions of study for the topic had many more levels, such as country of study or taxa, and violated this rule. In those instances, we simply reported in the Results the frequencies with which these aspects were studied, without supporting statistics (or we used much simpler classification strategies). A systematic review is a form of formalized synthesis in ecology, and these syntheses typically do not include effect size estimates in ecology (other disciplines use the term systematic review interchangeably with meta-analysis; we do not do so in ecology). For these more descriptive review formats, this rule seems appropriate for describing differences in the synthesis of a set of studies topologically, i.e. summarizing information about the set of studies, like the meta-data of the data but not the primary data (here is the GitHub repo we used for the specific systematic review that led to this rule for our team). This fuzzy rule led to a more interesting general insight: an overly detailed approach to the synthesis of a set of studies likely defeats the purpose of the synthesis.
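For illustration only, here is a hedged sketch of that reporting decision in R, with hypothetical counts of the 7 hypotheses summing to 49 (our actual counts are in the repo):

# hypothetical frequencies of the 7 hypotheses across the 49 studies
counts <- c(12, 9, 8, 7, 6, 4, 3)
sum(counts) / length(counts)   # 7, so the expected value per cell is at least 5
chisq.test(counts)             # goodness-of-fit against equal expected frequencies

If the quotient had fallen below 5, we would have reported the raw frequencies without the supporting test statistic.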