Tips for rapid scientific recordings

Preamble

If a picture is worth a thousand words, a video is worth 1.8 million words. Like all great summary statistics, this claim has been discussed and challenged (the Huffington Post supports the idea, and a nice comment at Replay Science reminds the public that it is really a figure of speech).

Nonetheless, short scientific recordings posted online are an excellent mechanism to put a face to a name, share your inspirations in science, and provide the public with a sense of connection to scientists. They are a reminder that people do science and that we care. I love short videos that provide the viewer with insights not immediately evident in the scientific product. Video abstracts with slide decks are increasingly common, and I really enjoy them. However, sometimes I do not get to see what the person looks like (only the slide deck is shown) or how they are reacting/emoting when they discuss their science. Typically, we are not provided with a sense of why they did the science or why they care. I think short videos that share a personal but professional scientific perspective to supplement the product are really important. I can read the paper, but if clarifications, insights, implications, or personal challenges in doing the research were important, it would be great to hear about them.

In that spirit, here are some brief suggestions for rapid scientific communications using recordings.

Tips

  1. Keep the duration at less than 2 minutes. We can all see the slider at the bottom with the time remaining, and if I begin to disconnect, I check it and decide whether I want to continue. If it is under 2 minutes, I often persist.
  2. Use a webcam that supports HD.
  3. Position the webcam above you, facing down. This makes for a better angle and encourages you to look up.
  4. Ensure that you are not backlit. Backlighting generally leads to a darker face that makes it difficult for the viewer to see any expressions at all.
  5. Viewers will tolerate relatively poor video quality but not poor audio. Do a 15-second audio test to ensure that at moderate playback volumes you can be clearly understood.
  6. Limit your message to three short blocks of information. I propose the following three blocks for most short recordings. (i) Introduce yourself and the topic. (ii) State why you did it and why you are inspired by this research. (iii) State the implications of the research or activity. These implications are not always evident in a scientific paper, for instance (or are framed in a more technical style), and in this more conversational context you can take advantage of natural language to promote the desired outcome.
  7. Prep a list of questions to guide your conversation. Typically, I write up 5-7 questions that I suspect the audience might like to see addressed with the associated product/activity.
  8. Do not use a script or visual aids. This is your super short elevator pitch. Connect with the audience and look into the camera.
  9. Have a very small window with the recording on screen, near the webcam position, to gently self-monitor your movement, twitches, and gestures. I find this little trick also forces me to look up near the webcam.
  10. Post online and use social media to effectively frame why you did the recordings. Amplify the signal with a short comment, both in the YouTube/Vimeo description field and in the social media post, that very lightly promotes the video.

Happy rapid recording!

Elements of a successful #openscience #rstats workshop

What makes an open science workshop effective or successful*?

Over the last 15 years, I have had the good fortune to participate in workshops as a student and sometimes as an instructor. Consistently, there were beneficial discovery experiences, and at times, some of the processes highlighted have been transformative. Last year, I had the good fortune to participate in Software Carpentry at UCSB and Software Carpentry at YorkU, and, in the past, to attend (in part) workshops such as Open Science for Synthesis. Several of us are now deciding what to attend as students in 2017. I have been wondering about the potential efficacy of the workshop model and why workshops seem to be so effective relative to the alternatives. I propose that the answer is expectations. Here is a set of brief lists of observations from workshops that lead me to this conclusion.

*Note: I define a workshop as effective or successful when it provides me with something practical that I did not have before the workshop. Practical outcomes can include tools, ideas, workflows, insights, or novel viewpoints from discussion. Anything that helps me do better open science. Efficacy for me is relative to learning by myself (i.e. through reading, watching webinars, or struggling with code or data), asking for help from others, taking an online course (that I always give up on), or attending a scientific conference.

Delivery elements of an open science training workshop

  1. Lectures
  2. Tutorials
  3. Demonstrations
  4. Q & A sessions
  5. Hands-on exercises
  6. Webinars or group viewing of recorded vignettes

Summary of expectations from this list: a workshop will offer me content in more than one way, unlike a more traditional course offering, and I can ask questions about the content on the spot and get an answer.

Content elements of an open science training workshop

  1. Data and code
  2. Slide decks
  3. Advanced discussion
  4. Experts that can address basic and advanced queries
  5. A curated list of additional resources
  6. Opinions from the experts on the ‘best’ way to do something
  7. A list of problems or questions that need to be addressed or solved, both routinely and in specific contexts, when doing science
  8. A toolkit in some form associated with the specific focus of the workshop.

Summary of expectations from this list: the best, most useful content is curated. It is contemporary, and it would be a challenge for me to find this out on my own.

Pedagogical elements of an open science training workshop

  1. Organized to reflect authentic challenges
  2. Uses problem-based learning
  3. Content is very contemporary
  4. Very light on lecture and heavy on practical application
  5. Reasonably small groups
  6. Will include team science and networks to learn and solve problems
  7. Short duration, high intensity
  8. Will use an open science tool for discussion and collective note taking
  9. Will be organized by major concepts such as data & meta-data, workflows, code, data repositories OR will be organized around a central problem or theme, and we will work together through the steps to solve a problem
  10. There will be a specific, quantifiable outcome for the participants (i.e. we will learn how to do or use a specific set of tools for future work).

Summary of expectations from this list: the training and learning experience will emulate a scientific working group that has convened to solve a problem. In this case, the problem is how we can all get better at doing a certain set of scientific activities, rather than, for instance, whether a group can aggregate and summarize a global alpine dataset. These collaborative problem-solving models need not be exclusive.

Higher-order expectations that summarize all these open science workshop elements

  1. Experts, curated content, and contemporary tools.
  2. Everyone is focused exclusively on the workshop, i.e. we all try to put our lives on hold to teach and learn together rapidly for a short time.
  3. Experiences are authentic and focus on problem solving.
  4. I will have to work at trying things, but the slope of the learning curve will be moderated by the workshop process.
  5. There will be some, but not too much, lecturing to give me the big picture highlights of why I need to know/use a specific concept or tool.


Review journals or journals with synthesis format contributions in EEB

Colleagues and I were checking through current journal listings that either explicitly focus on synthesis, such as systematic reviews, or include a section that frequently features synthesis contributions. Most journals in ecology, evolution, and environmental science that publish standard primary research articles also offer the opportunity to publish these papers, but they do so less frequently and are sometimes less likely to accept certain forms of synthesis (i.e. systematic reviews in particular, versus meta-analyses).

List

Diverse synthesis contributions very frequent
Conservation Letters (Letters)
Perspectives in Science
Perspectives in Plant Ecology, Evolution and Systematics
Diversity & Distributions
Ecology Letters
TREE
Oikos
Biological Reviews
Annual Review of Ecology, Evolution, and Systematics
Letters to Nature
Frontiers in Ecology and the Environment
PLOS ONE (many systematic reviews)
Environmental Evidence
Biology Letters
Quarterly Review of Biology

Frequent synthesis contributions with some diversity in formats
Global Ecology and Biogeography
Annals of Botany
New Phytologist
Ecography
Ecological Applications
Functional Ecology
Proceedings of the Royal Society B
Ecology and Evolution


Rules of thumb for better #openscience and transparent #collaboration

Rules-of-thumb for reuse of data and plots
1. If you use unpublished data from someone else, even if they are done with it, invite them to be a co-author.
2. If you use a published dataset, at the minimum contact authors, and depending on the purpose of the reuse, consider inviting them to become a co-author. Check licensing.
3. If you use plots initiated by another but in a significantly different way/for a novel purpose, invite them to be co-author (within a reasonable timeframe).
4. If you reuse the experimental plots for the exact same purpose, offer the person that set it up ‘right of first refusal’ as first author (within a fair period of time such as 1-2 years, see next rule).
5. If adding the same data to an experiment, first authorship can shift to more recent researchers who do significant work, because the purpose shifts from short-term to long-term ecology. Prof Turkington (my PhD mentor) used this model for his Kluane plots. He surveyed for many years and always invited primary researchers to be co-authors but not first. They often declined after a few years.
6. Set a reasonable authorship embargo to give researchers who have graduated or changed professional focus a generous chance to be first authors on papers. This can vary from 8 months to a year or more, depending on how critical it is to share the research publicly. Development pressures, climate change, and extinctions wait for no one, sadly.
Rules-of-thumb for collaborative writing
1. Write first draft.
2. Share this draft with all potential first authors so that they can see what they would be joining.
3. Offer co-authorship to everyone that appropriately contributed at this juncture and populate the authorship list as firmly as possible.
4. Potential co-authors are invited to refuse authorship but err on the side of generosity with invitations.
5. Do revisions in serial, not parallel. The story and flow get unduly challenging for everyone when track changes are layered.

A set of #rstats #AdventureTime themed #openscience slide decks

Purpose

I recently completed a set of data science for biostatistics training exercises for graduate students. I extensively used R for Data Science and Efficient R Programming to develop a set of Adventure Time R-statistics slide decks. Whilst I recognize that they are very minimal in terms of text, I hope that the general visual flow can provide a sense of the big-picture philosophy that R data science and R statistics offer contemporary scientists.

Slide decks

  1. WhyR? How tidy data, open science, and R align to promote open science practices.
  2. Become a data wrangleR. An introduction to the philosophy, tips, and associated use of dplyr.
  3. Contemporary data viz in R. Philosophy of grammar of graphics, ggplot2, and some simple rules for effective data viz.
  4. Exploratory data analysis and models in R. An explanation of the difference between EDA and model fitting in R. Then, a short preview highlighting modelr.
  5. Efficient statistics in R. A visual summary of the ideas from the ‘Efficient R Programming’ book, including chunking your work, efficient planning, and efficient coding suggestions in R.
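
As a minimal sketch of the wrangle-visualize-model flow that decks 2 through 4 walk through (the built-in iris data, the grouping variable, and the model here are my own illustrative choices, not taken from the course materials):

```r
# A hedged sketch of the dplyr -> ggplot2 -> modelling pipeline the decks
# describe, using the built-in iris data purely for illustration.
library(dplyr)
library(ggplot2)

# Wrangle (deck 2): group and summarise with dplyr verbs.
petal_summary <- iris %>%
  group_by(Species) %>%
  summarise(mean_petal = mean(Petal.Length))

# Visualize (deck 3): grammar of graphics with ggplot2.
p <- ggplot(petal_summary, aes(x = Species, y = mean_petal)) +
  geom_col()

# Model (deck 4): a simple linear fit as a follow-up to EDA.
fit <- lm(Petal.Length ~ Species, data = iris)
summary(fit)
```

The point of the pipe-based style is that each step reads in the order it happens, which is the big-picture philosophy the decks try to convey visually.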

The knitted RMarkdown HTML notes from the course are also available at https://cjlortie.github.io/r.stats/, and all the materials can be downloaded from the associated GitHub repo.

I hope this collection of goodies can be helpful to others.
