
On the importance of experimental design: pitfall traps and arthropod communities

Based on reviews by Cécile ALBERT and Matthias Foellmer
A recommendation of:

Which pitfall traps and sampling efforts should be used to evaluate the effects of cropping systems on the taxonomic and functional composition of arthropod communities?


Submission: posted 08 January 2019
Recommendation: posted 04 October 2019, validated 07 October 2019
Cite this recommendation as:
Bartomeus, I. (2019) On the importance of experimental design: pitfall traps and arthropod communities. Peer Community in Ecology, 100030. 10.24072/pci.ecology.100030

Recommendation

Despite the increasing refinement of statistical methods, a robust experimental design remains one of the most important cornerstones for answering ecological and evolutionary questions. However, there is a strong trade-off between a perfect design and its feasibility. A common mantra is that more data is always better, but how much is enough is difficult to answer, especially when we want to capture the spatial and temporal variability of a given process. Gardarin and Valantin-Morison [1] make an effort to answer these questions for a practical case: how many pitfall traps, of which type, and over which extent, do we need to detect shifts in arthropod community composition in agricultural landscapes? There is an extensive literature on how to approach these challenges using preliminary data in combination with simulation methods [e.g. 2], but practical cases are always welcome to illustrate the complexity of the decisions to be made. A key challenge in this situation is the nature of simplified and patchy agricultural arthropod communities. In this context, small effect sizes are expected, but those small effects are relevant from an ecological point of view, because small increases at low biodiversity may produce large gains in ecosystem functioning [3].
The paper shows that some variables are not important, such as the type of fluid used to fill the pitfall traps. This is good news for potential comparisons among studies using slightly different protocols. However, the bad news is that the sampling effort needed to detect community changes is larger than the average effort currently implemented. A potential solution is to focus on Community Weighted Mean metrics (CWM; i.e. a functional descriptor of the community body size distribution) rather than on classic metrics such as species richness, as detecting changes in CWM requires a lower sampling effort and has a clear ecological interpretation linked to ecosystem functioning.
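For readers less familiar with the metric, the CWM of a trait is simply the abundance-weighted average of species trait values, CWM = Σ p_i t_i, where p_i is the relative abundance of species i in the sample and t_i its trait value (here, body size). The minimal sketch below, in Python, uses purely illustrative species and numbers, not data from the recommended paper:

# Community Weighted Mean (CWM) body size: the abundance-weighted average
# of species trait values. All numbers are illustrative only.
abundances = {"Pterostichus melanarius": 34, "Poecilus cupreus": 12, "Pardosa agrestis": 54}
body_size_mm = {"Pterostichus melanarius": 15.0, "Poecilus cupreus": 11.5, "Pardosa agrestis": 6.0}

total = sum(abundances.values())
cwm = sum((abundances[sp] / total) * body_size_mm[sp] for sp in abundances)
print(f"CWM body size: {cwm:.2f} mm")  # mean body size of the trapped community, weighted by abundance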
Beyond the scope of the data presented, which are limited to a single region over two years and are therefore hard to extrapolate to other regions and years, the big message of the paper is the need to incorporate statistical power simulations as a central piece of the ecologist's toolbox. This is challenging, especially when facing questions such as: should I replicate over space, or over time? The recommended paper is accompanied by the statistical code used, which should make this task easier for other researchers. Furthermore, we should be aware that some important questions in ecology are highly variable in space and time, and hence a larger sampling effort across space and time is needed to detect patterns. Larger and longer monitoring schemes require a large effort (and funding), but if we want to do relevant ecology, nobody said it would be easy.
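To make this concrete, here is a minimal sketch (in Python) of a simulation-based power analysis of the kind advocated above. It is not the authors' script, which accompanies the recommended preprint; the effect size, variance, test and trap numbers below are hypothetical placeholders. In a real application, the normal draws would be replaced by communities resampled from preliminary data and the t-test by the model actually planned for the analysis.

import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def estimate_power(n_traps, effect=0.5, sd=2.0, n_sim=2000, alpha=0.05):
    """Proportion of simulated experiments detecting a shift of 'effect' in a
    community metric (e.g. CWM body size) between two cropping systems."""
    hits = 0
    for _ in range(n_sim):
        system_a = rng.normal(10.0, sd, n_traps)           # metric per trap, cropping system A
        system_b = rng.normal(10.0 + effect, sd, n_traps)  # system B shifted by 'effect'
        if stats.ttest_ind(system_a, system_b).pvalue < alpha:
            hits += 1
    return hits / n_sim

for n in (5, 10, 20, 40, 80):
    print(f"{n:3d} traps per system: estimated power = {estimate_power(n):.2f}")

Running such a simulation with the effect sizes and variances observed in preliminary data yields a power curve from which the minimum number of traps for a target power (e.g. 0.8) can be read off.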

References

[1] Gardarin, A. and Valantin-Morison, M. (2019). Which pitfall traps and sampling efforts should be used to evaluate the effects of cropping systems on the taxonomic and functional composition of arthropod communities? Zenodo, 3468920, ver. 3 peer-reviewed and recommended by PCI Ecology. doi: 10.5281/zenodo.3468920
[2] Johnson, P. C., Barry, S. J., Ferguson, H. M., and Müller, P. (2015). Power analysis for generalized linear mixed models in ecology and evolution. Methods in Ecology and Evolution, 6(2), 133-142. doi: 10.1111/2041-210X.12306
[3] Cardinale, B. J. et al. (2012). Biodiversity loss and its impact on humanity. Nature, 486(7401), 59-67. doi: 10.1038/nature11148

Conflict of interest:
The recommender in charge of the evaluation of the article and the reviewers declared that they have no conflict of interest (as defined in the code of conduct of PCI) with the authors or with the content of the article. The authors declared that they comply with the PCI rule of having no financial conflicts of interest in relation to the content of the article.

Evaluation round #2

DOI or URL of the preprint: https://zenodo.org/record/3451553

Version of the preprint: v2

Author's Reply, 02 Oct 2019

Download tracked changes file

Dear recommender,

All the minor editorial comments have been taken into account in this third version.

As requested, we emphasized the power of simulation approaches to improve sampling designs. At the end of the last paragraph of the introduction, we added this: « Simulation studies, based on artificial communities generated from preliminary data, are powerful to estimate the power and sensitivity of different sampling efforts in different scenarios (Arnold et al. 2011; Baumgardt et al. 2019). They can be useful to optimize a sampling design or to choose the relevant community metrics to test ecological hypotheses (Botta-Dukát and Czúcz 2016). »

This sentence was also added in the discussion: « Although being very time-consuming in computation time, simulation-based power analyses offer the advantage to be very flexible (Johnson 2015). »

I have uploaded below a Word file with tracked changes between the previous v2 and the new v3 version. The clean and formatted version, according to the PCI style, is available here: https://doi.org/10.5281/zenodo.3468920.

Yours sincerely,

Antoine Gardarin

Decision by Ignasi Bartomeus, posted 24 Sep 2019

The authors responded to all the main criticisms from the reviewers and this version reads very well. I made small edits and suggestions on the text, which can be found here: https://www.dropbox.com/s/xcwwux926564fu9/Correctedmanuscript13sept2019_IB.docx?dl=0. Some are purely editorial, and the authors are free to accept the proposed changes or not. I found several places where the sentences were a bit convoluted and I made suggestions here and there, but I am not a native English speaker, so please check carefully that I did not misunderstand anything. The only main comment I would like to make is to emphasize the power of simulating data based on preliminary data to assess replication needs. A quick search showed several papers proposing this approach that can be cited (e.g. https://besjournals.onlinelibrary.wiley.com/doi/full/10.1111/2041-210X.12306 or https://bmcmedresmethodol.biomedcentral.com/articles/10.1186/1471-2288-11-94, but there may be others). If the authors provide an example of the code and data used as supplementary material, they can help other researchers to perform their own simulations based on the observed variation in other localities and/or years. This will give broader generality to the method presented. After these final small tweaks, I would be happy to recommend the preprint.

Best,
Ignasi

Mandatory modifications
As indicated in the 'How does it work?' section and in the code of conduct, please make sure that:
-Data are available to readers, either in the text or through an open data repository such as Zenodo (free), Dryad or some other institutional repository. Data must be reusable, thus metadata or accompanying text must carefully describe the data.
-Details on quantitative analyses (e.g., data treatment and statistical scripts in R, bioinformatic pipeline scripts, etc.) and details concerning simulations (scripts, codes) are available to readers in the text, as appendices, or through an open data repository, such as Zenodo, Dryad or some other institutional repository. The scripts or codes must be carefully described so that they can be reused.
-Details on experimental procedures are available to readers in the text or as appendices.
-Authors have no financial conflict of interest relating to the article. The article must contain a "Conflict of interest disclosure" paragraph before the reference section containing this sentence: "The authors of this preprint declare that they have no financial conflict of interest with the content of this article." If appropriate, this disclosure may be completed by a sentence indicating that some of the authors are PCI recommenders: “XXX is one of the PCI Ecology recommenders.”
In order to reach better referencing and greater visibility of your recommended preprint, we suggest the following modifications:
(i) Add the following sentence to the acknowledgements: "Version 3 of this preprint has been reviewed and recommended by Peer Community In Ecology (https://doi.org/10.24072/pci.ecology.100030)".
Note that this DOI is not the DOI of your article, but the DOI of the recommendation text. The DOI of your article remains unchanged.
Doing so is very important because it would:
-indicate to readers that, unlike many other preprints on this server, your preprint has been peer-reviewed and recommended
-make this information visible in Google Scholar searches (which is quite important).
(ii) In addition, we suggest removing the line numbering from the preprint.

Optional modifications
==> (if you wish) we advise you to use the templates (a Word docx template and a LaTeX template) to format your preprint in the PCI style. This is optional. Here are the links to the templates:
https://peercommunityin.org/templates/
Please be careful to correctly update all text in these templates (DOI, authors’ names, address, title, date, recommender’s first name and family name, …). Please also be careful to choose the badge “Open Code” if appropriate (in addition to the “Open Access”, “Open Data” and “Open Peer-Review” badges).
Indicate in the “cite as” box the version of the article that you are currently formatting. This should be version 3.
If some of the reviewers are anonymous, indicate for example “Ignasi Bartomeus and two anonymous reviewers”.


Evaluation round #1

DOI or URL of the preprint: https://doi.org/10.5281/zenodo.3468920

Author's Reply, 13 Sep 2019

Download author's reply Download tracked changes file

Dear Recommender,

Please find the revised version of our manuscript entitled “Which pitfall traps and sampling efforts should be used to evaluate the effects of cropping systems on the taxonomic and functional composition of arthropod communities?”

We thank the recommender and the reviewers for their careful reviews. All their comments have been taken into account. Their external perspective helped us to better explain what was done and to improve the manuscript.

Our answers are in red and in italics in the “Response to reviewers’ comments” file. All the modifications to the manuscript are visible in the Word file. We hope our amendments will meet with your approval. Do not hesitate to get in touch with me if you need clarifications.

On behalf of my co-author, Yours sincerely,

Antoine Gardarin, researcher at INRA-AgroParisTech

Decision by Ignasi Bartomeus, posted 21 Feb 2019

After carefully reading the manuscript and the reviewers' comments, I think that analyzing how to improve a common sampling technique used for describing ground arthropod communities can constitute a good contribution to the field. However, I concur with both reviewers that the manuscript needs to be presented more clearly and to acknowledge its limitations better. I made a number of wording suggestions in the text to improve clarity, especially about the results presented (see the document here: https://www.dropbox.com/s/3qyv35ljifj1yfc/Manuscript_IB.docx?dl=0). My only main concern is regarding the simulation (see reviewer 2's detailed advice). I am not sure the data at hand allow testing the sampling effort question, but if the authors think so, this should be clearly justified in the paper. Clear recommendations on which pitfall traps are optimal in different conditions would be of great help. I hope our comments are helpful to strengthen the manuscript.

Best, Ignasi Bartomeus

Reviewed by , 18 Feb 2019

In this study, the authors aim to fill a gap in our understanding of the efficacy and necessary sample size of different pitfall trap types in agricultural fields, for which such tests are lacking, in contrast to more natural habitats. The first part of the manuscript focuses on the analysis and results of the field study component; the second part focuses on a simulation study evaluating the sample sizes of pitfall traps necessary to detect statistically significant effects at alpha = 0.05 in crop fields, as a function of the proportional difference between cropping systems, using the information on ground-dwelling spider and carabid communities gained from the field experiment.

In my opinion, there is a lot of valuable data presented, which should provide guidance for future studies. However, the presentation and organization of the manuscript are not clear, making it unnecessarily difficult to understand the paper.

L 39-40: The generalizability of the simulation results is less than indicated (see below), which the abstract will need to reflect.

L 115-116: rephrase “poorly abundant and diversified arthropod”. Maybe “little abundant arthropods with low diversity”.

L 144-163: I strongly suggest providing a diagram illustrating the experimental design. It’s hard to keep track of the spatial and temporal aspects.

L 189-190: so eight traps per sampling station? Please clarify.

L 202-203: why only five individuals? This seems very low. Please explain.

L 211: Please specify the removal process

L 276 – section 3.1: I’m missing the model output. Please add the table.

L 292 – Figure 1: This looks like boxplots showing raw data, not estimated effects from a model. I suggest providing an effects plot instead.

L 308 – Figure 2: I think the variables are on the left.

L 323: “Over the variability“ sounds very odd, please rephrase.

L 337 – 338: Why for large pitfall traps filled with salt water only?

L 359: I don’t think there are supplementary materials.

L 417 – 423: Your test was more limited than this paragraph suggests. I don’t think you can generalize to other types of pitfall traps.

Reviewed by , 25 Jan 2019
