
Comments (9)

rueuntal commented on May 30, 2024

My gut feeling is that when calculating the effect of aggregation (spatial - nonspatial), we don't want to make the additional assumption that density is the same across all plots, given that density differences are contained in the spatial curve. On the other hand, when we convert the nonspatial curve to calculate the effect of N, the unit becomes the number of individuals, so it seems more reasonable to have an analytical, smooth curve there. While I feel this is the right thing to do, it means there would be two versions of the nonspatial curve, and we'd have to explain why and how they differ in an already complicated paper. Would be great if someone could show how I'm wrong...


dmcglinn commented on May 30, 2024

Above you say:

If we think about the nonspatial curve as being created by randomly shuffling the individuals across plots, then the two curves are only exactly identical when all plots within the treatment have exactly the same abundance (density).

Which two curves are you referring to?

I understand `rarefaction(c(101, 100, 100), 'indiv', 150)`, where 150 is the mean number of individuals in a plot, but I don't understand `mean(c(rarefaction(c(101, 100, 100), 'indiv', c(1, 300))))`. Can you walk me through how you came up with that? Thanks!


rueuntal commented on May 30, 2024

Sorry about the confusion. I was referring to the individual-based curve and the nonspatial curve. Using the analytical solution, these two curves are completely identical (ignoring the rescaling factor for now). If the nonspatial curve is calculated by shuffling individuals, though, they'd be different, because densities could potentially differ between plots within a single treatment.

`rarefaction(c(101, 100, 100), 'indiv', 150)` is the S we'd expect given that the average density is 150 (i.e., the analytical solution). `mean(c(rarefaction(c(101, 100, 100), 'indiv', c(1, 300))))` is the S obtained by shuffling individuals between the two plots and then averaging across plots. In other words, in the second method we shuffle the individuals, then pick one of the two plots at random and compute S at the single-plot scale. In this extreme case, no matter how we reshuffle, S for the plot with 300 individuals is always 3 and S for the plot with 1 individual is always 1, so if we repeat the procedure many times and take the average, it would be 2.
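
For concreteness, here is a minimal sketch of the two calculations, using the standard hypergeometric expectation for individual-based rarefaction as a stand-in for mobr's `rarefaction()` (the helper name `rarefy_expected` is illustrative, not part of mobr):

```r
# Expected richness when n individuals are drawn without replacement from a
# pooled community with species abundances `abund` (the analytical solution).
rarefy_expected <- function(abund, n) {
  N <- sum(abund)
  sapply(n, function(k) {
    # P(species i missing from a sample of k individuals) = C(N - N_i, k) / C(N, k)
    p_missing <- exp(lchoose(N - abund, k) - lchoose(N, k))
    sum(1 - p_missing)
  })
}

abund <- c(101, 100, 100)   # three species, 301 individuals pooled

# Analytical version: evaluate the curve at the mean plot density (~150)
rarefy_expected(abund, 150)              # ~3 species expected

# Shuffling version for plots of 300 and 1 individuals: every shuffle leaves
# ~3 species in the big plot and exactly 1 in the single-individual plot
mean(rarefy_expected(abund, c(300, 1)))  # (3 + 1) / 2 = 2
```

The gap between the two disappears when all plots hold the same number of individuals, which is the point of the quoted statement above.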

Hope that makes more sense! Let me know if anything's still unclear.


dmcglinn commented on May 30, 2024

Thanks for the clarification! OK, I get your point now. I also do not like the idea of two flavors of the nonspatial curve. I can see a few options for avoiding this:

  1. We develop and use the permutation-generated nonspatial curve for computing both delta curves. If we have to compute it anyway to compare against the spatial curve, why not save it in memory and also use it to compare against the individual-based curve? When calculating the difference between the nonspatial and individual curves, we would only calculate these from 1 to (# of plots * plot_density) rather than 1 to N_min, which I know is not ideal, but I really don't think going back to the interpolation approach is a good idea given all the artifacts it was introducing. (See the sketch after this list.)
  2. We demonstrate that, although this is a problem in theory, in practice it is really pretty minor, and therefore we stick with the analytical approach in both cases. This would require sensitivity analyses on some edge cases to see how bad things can get.
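
A minimal sketch of what the permutation-generated nonspatial curve in option 1 could look like: individuals are pooled and reshuffled across plots while each plot keeps its observed N, and richness is averaged over shuffles at each accumulated-plot scale (all names are illustrative, not mobr's actual API):

```r
# comm: plot-by-species abundance matrix for one treatment
# n_perm: number of random shuffles of individuals across plots
nonspatial_perm_curve <- function(comm, n_perm = 200) {
  n_plot <- nrow(comm)
  plot_N <- rowSums(comm)                               # per-plot N stays fixed
  pool   <- rep(seq_len(ncol(comm)), colSums(comm))     # pooled individuals as species ids

  S_perm <- replicate(n_perm, {
    shuffled <- sample(pool)                             # reshuffle individuals
    plot_id  <- rep(seq_len(n_plot), times = plot_N)     # reassign them to plots
    plots    <- split(shuffled, plot_id)[sample(n_plot)] # accumulate plots in random order
    sapply(seq_len(n_plot), function(k) length(unique(unlist(plots[1:k]))))
  })
  rowMeans(S_perm)   # expected S at 1, 2, ..., n_plot plots
}
```

Converting the x-axis from plots to individuals via the mean plot density would then give the 1 to (# of plots * plot_density) range mentioned in option 1.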


rueuntal commented on May 30, 2024

Thanks for the input! I'm a bit torn. Having the analytical solution is appealing to me both aesthetically (sorry about my math obsession!) and practically. As you said, not being able to evaluate the effect of N across the full range of scales is clearly not ideal, especially since the selling point of our approach is that it works across scales. On the other hand, interpolation is not an issue when we look at the effect of aggregation, and using randomization (instead of the analytical solution) is probably more accurate there. So having the two different flavors feels more correct to me, but yes, our paper may get further bogged down.

@ngotelli what do you think? I'll also see if I can pick Brian's brain tomorrow.


dmcglinn commented on May 30, 2024

Do you think we could argue that within-treatment variation in N is actually a signature of aggregation, and therefore it is important that the nonspatial curve treats N as different between treatments but essentially constant across plots?


rueuntal commented on May 30, 2024

That's sneaky! 😆 I think we could, but let's confirm that the other folks are on board. (And we'd want to emphasize this in the ms.)

Bad wording: I meant clarify, not emphasize.


rueuntal commented on May 30, 2024

I talked with Brian. He's more in favor of my initial reaction (two flavors of the nonspatial curve) for both conceptual and practical reasons, but suggested that if the two flavors do not differ much for empirical data, then we'd have good justification for including only the analytical curve for communication purposes. So I think I'll test how the two approaches differ using both simulated and empirical data once the other anomaly (the step-wise behavior of the spatial curve) is fixed, and put this on hold until then. What do you think, @dmcglinn?
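
One quick way to run that comparison, at least at the single-plot scale: under shuffling, each plot's contents are a random sample (without replacement) of size N_plot from the pooled treatment, so its expected richness should equal the analytical rarefaction evaluated at that plot's N. The two flavors can then be compared directly, e.g. on simulated data (a sketch only; the `rarefy_expected` helper and all numbers are illustrative):

```r
set.seed(1)

# analytical individual-based rarefaction (hypergeometric expectation)
rarefy_expected <- function(abund, n) {
  N <- sum(abund)
  sapply(n, function(k) sum(1 - exp(lchoose(N - abund, k) - lchoose(N, k))))
}

# simulate one treatment: a log-normal-ish SAD split over 10 plots with
# deliberately uneven densities
abund  <- as.numeric(table(factor(sample(50, 2000, replace = TRUE,
                                         prob = rlnorm(50)), levels = 1:50)))
plot_N <- as.vector(rmultinom(1, sum(abund), prob = runif(10, 0.5, 1.5)))

# current (analytical) flavor: expected S at the mean plot density
S_analytical <- rarefy_expected(abund, mean(plot_N))

# shuffling flavor at the plot scale: average of the per-plot expectations
S_shuffle <- mean(rarefy_expected(abund, plot_N))

c(analytical = S_analytical, shuffle = S_shuffle)
```

Repeating this over a range of density unevenness (and with empirical site-by-species matrices) would show how large the discrepancy gets in practice.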


rueuntal commented on May 30, 2024

I looked back at our old code, and it turns out that we never actually implemented flavor 1 (randomly shuffling individuals across plots while keeping N_plot unchanged). Instead, we went from the sample-based rarefaction directly to the current version, where we calculate the expected S for the mean density across plots within the treatment (flavor 2).

This is slightly different from the cartoon we have in the ms. I'll go ahead and close this issue, and add the comments to the ms instead.

