
amt's People

Contributors

bniebuhr, bsmity13, jmsigner, joshobrien, robitalec, romainfrancois


amt's Issues

missing data from amt arXiv

Can you push the data relating to the fisher use cases:
dat <- read_csv("data/Martes pennanti LaPoint New York.csv") %>%
-and-
land_use <- raster("data/landuse_study_area.tif")
to the master branch? I tried accessing the .rda file, but it does not load, and those data are not available with the package.

As a result, I cannot run the code provided in the supplement to the arXiv preprint (8 May 2018).

Simple shapefile conversion

I apologize for what I am sure is a simple question, but is there a convenient workflow for converting a track to a shapefile with the metadata intact? I can convert to other formats and eventually get to writeOGR, but I run into a lot of errors that way:

example_sp<-as_ltraj(example, infolocs=example)
example_sp<-ltraj2spdf(example_sp)

gives Error in `[.data.frame`(tr, !is.na(tr$x), c("x", "y")) : undefined columns selected

but

example_move<-as_move(example)
example_sp<-move2ade(example_move)
example_sp@data<-example

works for making a spdf
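For what it's worth, a minimal sketch of one possible route via sf (assuming the track keeps its metadata as extra columns; 32617 is a placeholder EPSG code). This is not an official amt workflow:

library(sf)

# a track is a data frame with x_ and y_ coordinate columns, so it can be
# converted directly into an sf point layer with the metadata as attributes
example_sf <- sf::st_as_sf(as.data.frame(example),
                           coords = c("x_", "y_"), crs = 32617)

# write a shapefile (or any other OGR-supported format)
sf::st_write(example_sf, "example_track.shp")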

`steps_by_burst()` leads to `Error in diff_rcpp(x$x_) : negative length vectors are not allowed`

Hello!

I am rarefying some GPS movement data to a 3 h fix rate, removing bursts with small numbers of locations, and calculating steps and angles, following the same code as in the vignettes and the paper. Here is the code.

# resample with 3-hours, get only trajectories with more that 7 days of monitoring
# (7 * 24/3 = 56 locations)
mov.track2 <- mov.track %>% 
  tidyr::nest(-CollarID) %>% 
  dplyr::mutate(resampled.data = purrr::map(data, function(d) {
    d %>% 
      amt::track_resample(rate = lubridate::hours(3), tolerance = lubridate::minutes(30)) %>% 
      amt::filter_min_n_burst(min_n = 7 * 24/3) %>% 
      amt::steps_by_burst()
    })) %>% 
  dplyr::select(-data) %>% 
  tidyr::unnest(cols = c(CollarID, resampled.data))

If I remove the steps_by_burst calculation, it runs just fine. However, when I include it, I get the error Error in diff_rcpp(x$x_) : negative length vectors are not allowed.
Does anyone know what might be happening?
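In case it helps, here is a guard that could be tried (an untested sketch that assumes the error comes from individuals ending up with fewer than two relocations after resampling and filtering, which is only a guess at the cause):

safe_steps <- function(d) {
  d2 <- d %>% 
    amt::track_resample(rate = lubridate::hours(3), tolerance = lubridate::minutes(30)) %>% 
    amt::filter_min_n_burst(min_n = 7 * 24/3)
  if (nrow(d2) < 2) return(NULL)  # skip tracks that became (almost) empty
  amt::steps_by_burst(d2)
}

mov.track2 <- mov.track %>% 
  tidyr::nest(-CollarID) %>% 
  dplyr::mutate(resampled.data = purrr::map(data, safe_steps)) %>% 
  dplyr::select(-data) %>% 
  tidyr::unnest(cols = resampled.data)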

Here is the Traceback of the command, but I do not know if it helps...

40. | diff_rcpp(x$x_)
39. diff_x.track_xy(x) 
38. diff_x(x) 
37. step_lengths_sq.track_xy(x) 
36. step_lengths_sq(x) 
35. step_lengths.track_xy(x, lonlat = lonlat, append_last = FALSE) 
34. step_lengths(x, lonlat = lonlat, append_last = FALSE) 
33. eval_tidy(xs[[i]], unique_output) 
32. lst_quos(xs, transform = expand_lst) 
31. tibble(x1_ = x$x_[-n], x2_ = x$x_[-1], y1_ = x$y_[-n], y2_ = x$y_[-1], 
       sl_ = step_lengths(x, lonlat = lonlat, append_last = FALSE), 
       ta_ = direction_rel(x, lonlat = lonlat, ero_dir = "E", append_last = FALSE)) 
30. steps_base(x, n, lonlat, keep_cols = keep_cols) 
29. steps.track_xyt(x, lonlat = lonlat, keep_cols = keep_cols, ...) 
28. steps(x, lonlat = lonlat, keep_cols = keep_cols, ...) 
27. withCallingHandlers(expr, warning = function(w) invokeRestart("muffleWarning")) 
26. suppressWarnings(steps(x, lonlat = lonlat, keep_cols = keep_cols, 
                       ...)) 
25. steps_by_burst.track_xyt(.) 
24. amt::steps_by_burst(.) 
23. function_list[[k]](value) 
22. withVisible(function_list[[k]](value)) 
21. freduce(value, `_function_list`) 
20. `_fseq`(`_lhs`) 
19. eval(quote(`_fseq`(`_lhs`)), env, env) 
18. eval(quote(`_fseq`(`_lhs`)), env, env) 
17. withVisible(eval(quote(`_fseq`(`_lhs`)), env, env)) 
16. d %>% amt::track_resample(rate = lubridate::hours(3), tolerance = lubridate::minutes(30)) %>% 
  amt::filter_min_n_burst(min_n = 7 * 24/3) %>% amt::steps_by_burst() 
15. .f(.x[[i]], ...) 
14. purrr::map(data, function(d) {
  d %>% amt::track_resample(rate = lubridate::hours(3), tolerance = lubridate::minutes(30)) %>% 
    amt::filter_min_n_burst(min_n = 7 * 24/3) %>% amt::steps_by_burst()
}) 
13. mutate_impl(.data, dots, caller_env()) 
12. mutate.tbl_df(tbl_df(.data), ...) 
11. mutate(tbl_df(.data), ...) 
10. as.data.frame(mutate(tbl_df(.data), ...)) 
9. mutate.data.frame(., resampled.data = purrr::map(data, function(d) {
  d %>% amt::track_resample(rate = lubridate::hours(3), tolerance = lubridate::minutes(30)) %>% 
    amt::filter_min_n_burst(min_n = 7 * 24/3) %>% amt::steps_by_burst()
})) 
8. dplyr::mutate(., resampled.data = purrr::map(data, function(d) {
  d %>% amt::track_resample(rate = lubridate::hours(3), tolerance = lubridate::minutes(30)) %>% 
    amt::filter_min_n_burst(min_n = 7 * 24/3) %>% amt::steps_by_burst()
})) 
7. function_list[[i]](value) 
6. freduce(value, `_function_list`) 
5. `_fseq`(`_lhs`) 
4. eval(quote(`_fseq`(`_lhs`)), env, env) 
3. eval(quote(`_fseq`(`_lhs`)), env, env) 
2. withVisible(eval(quote(`_fseq`(`_lhs`)), env, env)) 
1. mov.track %>% tidyr::nest(-CollarID) %>% dplyr::mutate(resampled.data = purrr::map(data, 
                                                                                   function(d) {
                                                                                     d %>% amt::track_resample(rate = lubridate::hours(3), 
                                                                                                               tolerance = lubridate::minutes(30)) %>% amt::filter_min_n_burst(min_n = 7 *  ...

Thanks in advance!

Error in executing code from vignette

The following creates an error when running the code from the "Fitting a Step-Selection Function" vignette (amt version 0.0.4.0):

m0 <- ssf1 %>% fit_clogit(case_ ~ forest + strata(step_id_))
Error in eval(predvars, data, env) : object 'case_' not found
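For reference, case_ and step_id_ are created by random_steps(); a minimal sketch of a pipeline that produces the columns fit_clogit() expects (trk and forest_raster are placeholders, and the covariate column name depends on the raster layer name):

library(amt)

ssf1 <- trk %>%
  steps_by_burst() %>%
  random_steps(n_control = 10) %>%     # adds case_ and step_id_
  extract_covariates(forest_raster)    # adds the forest covariate column

m0 <- ssf1 %>% fit_clogit(case_ ~ forest + strata(step_id_))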

Vector allocation memory issue for hr_akde with "ou" ctmm model

Hello! I am looking for assistance with hr_akde using an "ou" ctmm model. I'm looking to estimate aKDEs for a series of biweekly temporal windows for numerous individual elk sampled at a 5hr fix rate.

The memory error below is thrown when I attempt an aKDE using an "ou" ctmm model for biweekly periods. (Note that I can get the code provided to work for hr_od and for hr_akde with the ctmm model set to "iid".)

Error: Problem with `mutate()` column `hr_akde_ou`.
i `hr_akde_ou = map(data, ~hr_akde(., model = fit_ctmm(., "ou")))`.
x cannot allocate vector of size 321108.4 Gb

I've provided reproducible code and attempted to attach data in the hope that someone may be able to help me fix whatever I might be doing wrong.

(If there is a better location for posing this question, my apologies and please let me know where instead I should direct this inquiry!)
Thank you!
El Pero
[email protected]

elktrks_5.zip

#read in track data
filtered_5time <- readRDS("elktrks_5.rds")

#save track class for later
trk.class <- class(filtered_5time)

#Nest elk tracks
nest_5hr <- filtered_5time %>% nest(-id, -sex, -release_age, -release_cohort, -release_date) 

#make sure track classification remains
class(nest_5hr) <- trk.class

###subset temporal windows
#biweekly
nest_5hr_bi <- filtered_5time %>% nest(-id, -sex, -release_age, -release_cohort, -release_date, -bwfr) 

#make sure track classification remains
class(nest_5hr_bi) <- trk.class

#create akdes at 95% and 50% isopleths
hr_bw <- nest_5hr_bi %>%  
  mutate(n = map_int(data, nrow)) %>% 
  filter(n > 20) %>% 
  mutate(
    hr_akde_iid = map(data, ~ hr_akde(., model = fit_ctmm(., "iid"))),
    hr_akde_ou = map(data, ~ hr_akde(., model = fit_ctmm(., "ou"))),
    cor_akde_iid = map(data, ~ hr_akde(., model = fit_ctmm(., "iid"), levels= c(0.5))),
    cor_akde_ou = map(data, ~ hr_akde(., model = fit_ctmm(., "ou"), levels=c(0.5)))
  )

head(hr_bw)
saveRDS(hr_bw, "BWakdeUD.rds")

#to long format so we can use area function
hr_bw2 <- hr_bw %>% select(-data) %>%
  pivot_longer(hr_akde_iid:cor_akde_ou, names_to = "estimator",
               values_to = "hr")

str(hr_bw2, 2)

#area function 
hr_bw2.area <- hr_bw2 %>%
  mutate(hr_area = map(hr, hr_area)) %>%
  unnest(cols = hr_area)

head(hr_bw2.area, 10)
saveRDS(hr_bw2.area, "BWakdeAREA.rds")

indicate column from input data.frame to mk_track using tidyselect

Hi!

I have been using amt, and to start each workflow we need the function mk_track.
To use it, we pass the coordinates and time (x, y, t) as arguments, and each of the other columns to keep must be named explicitly.
For instance:

mk_track(my_data_frame, .x = x, .y = y, .t = timestamp, collar_id, animal_id, burst, year, 
    health_condition, sex, body_mass, crs = 4326)

It is nice that there is the option all_cols, which can be set to TRUE so that one does not have to list all columns every time.

I was thinking, however, whether it would not make sense to include some kind of tidyselect structure to be able to include columns by name or position in a sequence, to make this process easier. Something like mk_track(my_data_frame, x, y, timestamp, cols = collar_id:body_mass) or mk_track(my_data_frame, x, y, timestamp, cols = 4:10).

Or maybe it is better to keep the function simple and recommend the use of dplyr::select before using mk_track?

Something like

my_data_frame %>%
    dplyr::select(x, y, timestamp, collar_id:body_mass) %>%
    mk_track(x, y, timestamp, all_cols = TRUE, crs = 4326)

RSF Workflow

Hello,

I'm attempting to use amt in the typical create random points/extract covariates workflow, but I'm running into some problems. I'm not sure if this is the right place to post questions but here goes.

dat <- read_csv('my_data.csv',col_types=cols()) #x,y data for rsf, in lon/lat
rast <- raster('my_layer.tif') #also lon/lat, epsg:4326

The most straightforward approach is to do the following

 dat %>% 
  make_track(lon,lat,crs = CRS('+init=epsg:4326')) %>%
  random_points %>%
  extract_covariates(rast)

This works, but it gives me the warning messages below. I assume this is because random_points uses sf internally, which does not handle spherical coordinates as well.

although coordinates are longitude/latitude, st_intersects assumes that they are planar

Converting to a flat projection then seems to require a fairly convoluted workflow to extract covariates. In addition, the random points object needs to be a spatial object in order to transform coordinates, but using make_track drops the case_ column.

dat %>% 
  make_track(lon,lat,crs = CRS('+init=epsg:4326')) %>%
  transform_coords(CRS('+init=epsg:3035')) %>%
  random_points %>%
  make_track(x_,y_,crs = CRS('+init=epsg:3035')) %>%
  transform_coords('+init=epsg:4326') %>%
  extract_covariates(rast)
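One possible shortcut (an untested sketch): project the raster once to the planar CRS instead of round-tripping the points, so the case_ column from random_points is never dropped.

rast_proj <- raster::projectRaster(rast, crs = CRS('+init=epsg:3035'))  # use method = 'ngb' for categorical layers

dat %>% 
  make_track(lon, lat, crs = CRS('+init=epsg:4326')) %>%
  transform_coords(CRS('+init=epsg:3035')) %>%
  random_points %>%
  extract_covariates(rast_proj)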
  • It might be nice if random_points could accept a bounding box (or any polygon) in the coordinate system that matches the tracks, instead of forcing an MCP or KDE.
  • Also it might be nice if random points was a spatial object, such as an sf object, so that it's easy to do transformations on the points.
  • Is there a good way to tell make_track to keep all the columns in the data frame? There might be but I can't figure it out.
  • Finally, it would be nice if extract_covariates converted to the coordinate system of the raster before doing extraction. This is what raster::extract usually does, and I see in the code that extract_covariates is internally using raster::extract, but if using amt I have to explicitly do the conversion.

Thank you for the help!

Ben

Error in track_resample

Hi,
I updated the package to the latest version; since then I'm getting the following error when resampling tracks and generating random steps with the track_resample workflow:

Code:

ssf <- tracks %>%
  mutate(rs = map(tracks, function(x) {
    x %>%
      track_resample(hours(step_duration), tolerance = minutes(60)) %>%
      filter_min_n_burst() %>%
      steps_by_burst() %>%
      random_steps(n_control = 100)
  }))

The error:
Error in fitdistrplus::fitdist(x, "gamma", keepdata = FALSE, lower = 0) :
the function mle failed to estimate the parameters,
with the error code 100

I would appreciate your help in solving this problem (I analysed these data in the past and did not get this error then).
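One hedged guess: the gamma fit inside random_steps() can fail when some observed step lengths are exactly zero (e.g., duplicate fixes). A sketch that drops such steps before generating random steps, without any claim that this is the actual cause here:

ssf <- tracks %>%
  mutate(rs = map(tracks, function(x) {
    x %>%
      track_resample(hours(step_duration), tolerance = minutes(60)) %>%
      filter_min_n_burst() %>%
      steps_by_burst() %>%
      filter(sl_ > 0) %>%          # drop zero-length steps before the gamma fit
      random_steps(n_control = 100)
  }))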

Warning message on convergence

Hello again,

I created an iSSF model, and got the following warning:

Warning message:
In fitter(X, Y, istrat, offset, init, control, weights = weights, :
Loglik converged before variable 4 ; coefficient may be infinite.

Is this something to be overly concerned about? I do get the model results.

Also, is there an option or a possibility of running model diagnostics (or is it required? Asking in case a reviewer asks for it).

Regards,
Anjan

Understanding log_RSS

Hi,
I have a query on correctly interpreting the RSS values, as I am getting different results from the naive estimate based on the model versus the results from the log_RSS function. For simplicity, let's only consider the landcover class variable in my model, with agriculture as the reference.
Model coefficient for fallow = -1.533 (exp(coef) = 0.215), i.e., the animal is more likely to select agriculture.

Using naive estimates, I weight it with the available locations for fallow and agri, using the following formula:
1/(exp(coef(m1.1$model)["landuseCFallow"])*a.flw/a.agri)
which gives me a value of 5.09 (i.e. 5 times more likely to use agri???)

However, when I use the log_RSS function to do the same, with x1 as fallow and x2 as the reference class agriculture,
log_RSS = 0.1417
exp(log_RSS) = 1.15 (so this says that the animal is 1.15 times more likely to use fallow?)

Or am I using the function incorrectly? Please help!
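For what it is worth, a sketch of how log_rss() is typically called (the factor levels shown are placeholders): x1 and x2 must contain a value for every covariate in the model, so if any other covariate differs between them, the result will not equal the single landcover coefficient. Note also that log_rss() compares relative selection strength at two hypothetical locations and does not weight by availability the way the naive ratio above does, which may be part of the discrepancy.

lvls <- c("Agriculture", "Fallow")   # placeholder factor levels
x1 <- data.frame(landuseC = factor("Fallow", levels = lvls))
x2 <- data.frame(landuseC = factor("Agriculture", levels = lvls))
log_rss(m1.1, x1 = x1, x2 = x2)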

Updating crs argument in mk_track to accept WKT2 string

Currently the crs argument of the mk_track function only accepts proj4string representations, while R's spatial packages have moved on to WKT2 strings. Attempting hr_akde on a track_xyt object built with a deprecated CRS argument in a newly updated R environment no longer works.
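If the installed amt version already supports it, one workaround sketch (based on usage elsewhere in these issues) is to pass an EPSG code or an sf::st_crs() object instead of a proj4string:

trk <- make_track(dat, x, y, t, crs = 4326)             # EPSG code
# or
trk <- make_track(dat, x, y, t, crs = sf::st_crs(4326))

hr <- hr_akde(trk, model = fit_ctmm(trk, "iid"))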

dplyr issue: Error: `.data` is a corrupt grouped_df, the `"groups"` attribute must be a data frame

I have code drawn from the amt vignette to regularize some GPS data stored as an sf object. It was working a few months ago but when I came back to it, an update to dplyr seems to be causing problems (or so I assume).
Original code:

amttestA<-make_track(pointsf,
                    lat_albers,long_albers,fixtime,id=AnimalId,
                    Gender=Gender, in.dena=in.dena,disttobound=disttobound,
                    bear.bout=bear.bout,f.cat=f.cat,elevation=elevation)
amttestB<-amttestA%>%
  group_by(id)%>%
  nest()

amttestC<-amttestB%>%
  mutate(burst=map(data,function(x)
    x%>%
      track_resample(rate=hours(2),toleranace=minutes(20),
                     Gender=Gender, in.dena=in.dena,disttobound=disttobound,
                     bear.bout=bear.bout,f.cat=f.cat,elevation=elevation)))

The first issue I encountered was this clash between sf and tidyr when using nest(); consequently I had to make my sf object a dataframe so that nest() would work:

amttestA<-make_track(as.data.frame(pointsf),
                    lat_albers,long_albers,fixtime,id=AnimalId,
                    Gender=Gender, in.dena=in.dena,disttobound=disttobound,
                    bear.bout=bear.bout,f.cat=f.cat,elevation=elevation)
amttestB<-amttestA%>%
  group_by(id)%>%
  nest()

amttestC<-amttestB%>%
  mutate(burst=map(data,function(x)
    x%>%
      track_resample(rate=hours(2),toleranace=minutes(20),
                     Gender=Gender, in.dena=in.dena,disttobound=disttobound,
                     bear.bout=bear.bout,f.cat=f.cat,elevation=elevation)))

The final call, to create the bursts, produces the error:

"Error: `.data` is a corrupt grouped_df, the `"groups"` attribute must be a data frame 

Running with debug pulls up:

function (df, quo) 
{
  .Call(`_dplyr_filter_impl`, df, quo)
}

This appears to be very similar to this issue.
I've tried various combinations of grouping and ungrouping the objects before the call to burst, and/or making my data object a vanilla dataframe in a couple different ways, and still get the same error.
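One more thing that might be worth trying (an untested sketch): skip group_by() and use the newer tidyr interface, nesting directly by column, so no grouped_df is carried into the mapped call; the extra columns do not need to be passed to track_resample(), since they stay inside the nested data.

amttestB <- amttestA %>%
  nest(data = -id)

amttestC <- amttestB %>%
  mutate(burst = map(data, function(x)
    x %>% track_resample(rate = hours(2), tolerance = minutes(20))))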
I'm sorry this isn't a reprex; the data are sensitive. If you're having trouble reproducing it I can email some.
Thanks for your work on this package!

Traceback:

Error: `.data` is a corrupt grouped_df, the `"groups"` attribute must be a data frame 
37.
stop(structure(list(message = "`.data` is a corrupt grouped_df, the `\"groups\"` attribute must be a data frame", 
    call = NULL, cppstack = NULL), class = c("Rcpp::exception", 
"C++Error", "error", "condition"))) 
36.
filter_impl(.data, quo) 
35.
filter.tbl_df(x, !!quo(burst_ > 0)) 
34.
NextMethod() 
33.
as.vector(y) 
32.
setdiff(from$class, class(to)) 
31.
track_transfer_attr(.data, NextMethod()) 
30.
filter.track_xy(x, !!quo(burst_ > 0)) 
29.
NextMethod() 
28.
as.vector(y) 
27.
setdiff(from$class, class(to)) 
26.
track_transfer_attr(.data, NextMethod()) 
25.
filter.track_xyt(x, !!quo(burst_ > 0)) 
24.
filter(x, !!quo(burst_ > 0)) 
23.
track_resample.track_xyt(., rate = hours(2), toleranace = minutes(20), 
    Gender = Gender, in.dena = in.dena, disttobound = disttobound, 
    bear.bout = bear.bout, f.cat = f.cat, elevation = elevation) 
22.
track_resample(., rate = hours(2), toleranace = minutes(20), 
    Gender = Gender, in.dena = in.dena, disttobound = disttobound, 
    bear.bout = bear.bout, f.cat = f.cat, elevation = elevation) 
21.
function_list[[k]](value) 
20.
withVisible(function_list[[k]](value)) 
19.
freduce(value, `_function_list`) 
18.
`_fseq`(`_lhs`) 
17.
eval(quote(`_fseq`(`_lhs`)), env, env) 
16.
eval(quote(`_fseq`(`_lhs`)), env, env) 
15.
withVisible(eval(quote(`_fseq`(`_lhs`)), env, env)) 
14.
x %>% track_resample(rate = hours(2), toleranace = minutes(20), 
    Gender = Gender, in.dena = in.dena, disttobound = disttobound, 
    bear.bout = bear.bout, f.cat = f.cat, elevation = elevation) 
13.
.f(.x[[i]], ...) 
12.
map(data, function(x) x %>% track_resample(rate = hours(2), toleranace = minutes(20), 
    Gender = Gender, in.dena = in.dena, disttobound = disttobound, 
    bear.bout = bear.bout, f.cat = f.cat, elevation = elevation)) 
11.
mutate_impl(.data, dots, caller_env()) 
10.
mutate.tbl_df(., burst = map(data, function(x) x %>% track_resample(rate = hours(2), 
    toleranace = minutes(20), Gender = Gender, in.dena = in.dena, 
    disttobound = disttobound, bear.bout = bear.bout, f.cat = f.cat, 
    elevation = elevation))) 
9.
mutate(., burst = map(data, function(x) x %>% track_resample(rate = hours(2), 
    toleranace = minutes(20), Gender = Gender, in.dena = in.dena, 
    disttobound = disttobound, bear.bout = bear.bout, f.cat = f.cat, 
    elevation = elevation))) 
8.
function_list[[k]](value) 
7.
withVisible(function_list[[k]](value)) 
6.
freduce(value, `_function_list`) 
5.
`_fseq`(`_lhs`) 
4.
eval(quote(`_fseq`(`_lhs`)), env, env) 
3.
eval(quote(`_fseq`(`_lhs`)), env, env) 
2.
withVisible(eval(quote(`_fseq`(`_lhs`)), env, env)) 
1.
amttestB %>% mutate(burst = map(data, function(x) x %>% track_resample(rate = hours(2), 
    toleranace = minutes(20), Gender = Gender, in.dena = in.dena, 
    disttobound = disttobound, bear.bout = bear.bout, f.cat = f.cat, 
    elevation = elevation))) 

Session info:

R version 3.5.3 (2019-03-11)
Platform: x86_64-w64-mingw32/x64 (64-bit)
Running under: Windows 10 x64 (build 17134)

Matrix products: default

locale:
[1] LC_COLLATE=English_United States.1252  LC_CTYPE=English_United States.1252   
[3] LC_MONETARY=English_United States.1252 LC_NUMERIC=C                          
[5] LC_TIME=English_United States.1252    

attached base packages:
[1] stats     graphics  grDevices utils     datasets  methods   base     

other attached packages:
 [1] forcats_0.4.0       stringr_1.4.0       dplyr_0.8.3         purrr_0.3.3        
 [5] tidyr_1.0.0         tibble_2.1.3        tidyverse_1.3.0     adehabitatLT_0.3.24
 [9] CircStats_0.2-6     boot_1.3-22         MASS_7.3-51.3       adehabitatMA_0.3.13
[13] ade4_1.7-13         effects_4.1-0       carData_3.0-2       amt_0.0.6          
[17] geosphere_1.5-7     rgeos_0.4-3         rgdal_1.4-3         sf_0.7-4           
[21] tmap_2.2            readxl_1.3.1        data.table_1.12.2   readr_1.3.1        
[25] ggplot2_3.2.1       plyr_1.8.4          lubridate_1.7.4     sp_1.3-1           

loaded via a namespace (and not attached):
 [1] nlme_3.1-137       fs_1.2.7           satellite_1.0.1    httr_1.4.1        
 [5] webshot_0.5.1      RColorBrewer_1.1-2 mapview_2.6.3      tools_3.5.3       
 [9] backports_1.1.3    R6_2.4.0           KernSmooth_2.23-15 DBI_1.0.0         
[13] lazyeval_0.2.2     colorspace_1.4-1   nnet_7.3-12        raster_2.8-19     
[17] withr_2.1.2        tidyselect_0.2.5   leaflet_2.0.2      compiler_3.5.3    
[21] cli_1.1.0          rvest_0.3.5        xml2_1.2.2         scales_1.0.0      
[25] classInt_0.3-3     digest_0.6.23      minqa_1.2.4        base64enc_0.1-3   
[29] dichromat_2.0-0    pkgconfig_2.0.2    htmltools_0.3.6    lme4_1.1-21       
[33] dbplyr_1.4.2       htmlwidgets_1.3    rlang_0.4.2        rstudioapi_0.10   
[37] shiny_1.3.2        generics_0.0.2     jsonlite_1.6       crosstalk_1.0.0   
[41] magrittr_1.5       Matrix_1.2-17      Rcpp_1.0.1         munsell_0.5.0     
[45] lifecycle_0.1.0    stringi_1.4.3      tmaptools_2.0-1    grid_3.5.3        
[49] promises_1.0.1     crayon_1.3.4       lattice_0.20-38    haven_2.2.0       
[53] splines_3.5.3      hms_0.5.2          zeallot_0.1.0      pillar_1.4.2      
[57] codetools_0.2-16   stats4_3.5.3       reprex_0.3.0       XML_3.98-1.19     
[61] glue_1.3.1         modelr_0.1.5       png_0.1-7          vctrs_0.2.0       
[65] nloptr_1.2.1       httpuv_1.5.1       cellranger_1.1.0   gtable_0.3.0      
[69] assertthat_0.2.1   mime_0.6           lwgeom_0.1-6       xtable_1.8-3      
[73] broom_0.5.2        survey_3.35-1      e1071_1.7-1        later_0.8.0       
[77] class_7.3-15       survival_2.43-3    viridisLite_0.3.0  units_0.6-2 

Make obvious message optional

The function amt::mk_track() is verbose:

amt/R/track.R, line 66 in commit 6510122:

message(".t found, creating `track_xyt`.")

When called repeatedly within a wrapper, it can annoyingly flood the console.

It is not particularly informative since it just reflects what the user decides to do.

So it would be great if you could add a verbose = TRUE argument to the function, or find another way to make this message optional.
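Until such an argument exists, a workaround sketch is to silence the message from the caller side, e.g. inside the wrapper:

trk <- suppressMessages(
  mk_track(dat, .x = x, .y = y, .t = timestamp, crs = 4326)
)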

AMT "0" step lengths and turning angles (steps_by_burst vs. as_moveHMM)

I am using the amt package to create tracks and steps for a number of mallards - both amt steps_by_burst and as_moveHMM steps. After using mk_track and nesting by ID (to create "trk") I am running the below code:

amtsteps <- trk %>% 
  mutate(steps = map(data, function(x) 
    x %>% remove_capture_effect(start = days(1)) %>% 
      track_resample(rate = minutes(60), tolerance = minutes(5)) %>% 
      filter_min_n_burst(min_n = 3) %>% steps_by_burst(keep_cols = 'both')))
amtsteps <- amtsteps %>% select(ID, steps) %>% unnest(cols = steps)

Next,

hmmsteps <- trk %>% 
  mutate(steps = map(data, function(x) 
    x %>% remove_capture_effect(start = days(1)) %>% 
      track_resample(rate = minutes(60), tolerance = minutes(5)) %>% 
      filter_min_n_burst(min_n = 3)))
hmmsteps <- hmmsteps %>% select(ID, steps) %>% unnest(cols = steps) 
hmmsteps$ID<- paste(hmmsteps$ID,hmmsteps$burst_) #using bursts added to ids for hmm ID
hmmsteps <- as_moveHMM(hmmsteps)

I compared the two dataframes (amtsteps - stepsxyt; hmmsteps - moveData). The observed steps match up but I noticed differences in how amt steps_by_burst vs. as_moveHMM calculate angles when the step length = 0.

(Screenshots comparing the amt and moveHMM step tables were attached here.)

steps_by_burst appears to calculate a ta_ even when the step length is 0. The way as_moveHMM handles this makes more sense to me: when the step = 0, the angle is NA and the next angle is also NA.

I am not sure whether this is an issue with the amt package, but I do not understand how steps_by_burst calculates a ta_ when the step = 0. It is not a major issue: I only had 6 step lengths equal to 0 out of 30k+, and I only noticed it when checking that the steps and angles were calculated the same way by both packages. Everything else seems fine; the only other difference is that the last decimal of a step/angle is sometimes rounded up or down differently between steps_by_burst and as_moveHMM.
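A post-hoc sketch (not an amt option, and the grouping columns are assumptions based on the code above) that mimics the described moveHMM behaviour by setting the turning angle to NA for a zero-length step and for the step that follows it:

amtsteps <- amtsteps %>%
  group_by(ID, burst_) %>%
  mutate(ta_ = replace(ta_, sl_ == 0 | dplyr::lag(sl_, default = 1) == 0, NA)) %>%
  ungroup()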

`extract_covariates()` loses factor levels

raster::extract() has argument factors = FALSE. If TRUE, extract() returns a factor, else it returns an integer.

It would be nice to have access to that argument from extract_covariates().
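Until then, a workaround sketch (assuming landuse is a categorical raster with a raster attribute table, and that the extracted column is named after the raster layer): join the extracted integer codes back to the category labels.

rat <- raster::levels(landuse)[[1]]   # data.frame with an ID column plus the categories

rsf_dat <- trk %>%
  random_points() %>%
  extract_covariates(landuse) %>%
  dplyr::left_join(rat, by = c("landuse" = "ID"))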

S3 function hr_ba.hr_prob has no generic

It seems the last commit to master deleted the generic function hr_ba(). However, hr_ba.hr_prob() still exists, which causes an error when building the package.

I added an empty function called hr_ba() on the adjust_params branch while I was working on another problem, but this is not a real fix.

TA for random steps when observed is NA

Step 1 has no turn angle because there is no previous step (ta_ = NA). However, the random steps paired with step 1 do have a turn angle. Demonstration:

deer %>% 
  steps_by_burst() %>% 
  random_steps(n_control = 3) %>% 
  print(n = 9, width = 100)

The problem would arise for the first step of any burst. Might the problem be that direction_p should be NA? Isn't that the direction of the previous step?

`hr_locoh()` doesn't return area with units

hr_locoh(...)$locoh$area has class numeric instead of units.

It might have to do with the sf object missing a CRS at the time of the sf::st_area() call.

As a result, hr_area(locoh, units = TRUE) does not work as expected.
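A workaround sketch until this is fixed (it assumes the track's CRS is in metres, so the raw numbers really are m^2):

library(units)

lh <- hr_locoh(trk)
area_m2 <- set_units(hr_area(lh)$area, "m^2", mode = "standard")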

Error when running hr_kde

Hello,

I am trying to calculate KDE, aKDE, and other estimates using the approach presented in the Signer and Fieberg preprint (home ranges in a tidy world). It is a very nice approach, thanks for that!

However, I am having some trouble when running hr_kde or hr_akde. I get the following error for both:

kde1 <- hr_kde(mov.track.1, levels = c(0.5, 0.95))
Error in sp::CRS(SRS_string = from$wkt) : 
  unused argument (SRS_string = from$wkt)

Might it be because of the CRS I am using?

I don't know exactly how to make a reprex out of it, but I can send a piece of code and data to you, if that is the best way. Here is a sample of my code:

library(tidyverse)
library(amt)
library(sf)

# Load data
movement_data <- read_rds("data/movement_data.rda")

# crs to use
crs.use <- sp::CRS("+proj=aea +lat_1=-5 +lat_2=-42 +lat_0=-32 +lon_0=-60 +x_0=0 +y_0=0 +ellps=aust_SA +units=m")

# transform data into track object
mov.track <- movement_data %>% 
  amt::make_track(X, Y, timestamp, ID, name, sex, weight.kg, estimated.age.months,
                phase, crs = sp::CRS("+init=epsg:4326")) %>% 
                amt::transform_coords(crs_to = crs.use)

# select 1 individual
mov.track.1 <- mov.track %>% 
  dplyr::filter(name == "Jussara")

# calculate MCP and KDE
mcp1 <- hr_mcp(mov.track.1, levels = c(0.5, 0.95))
kde1 <- hr_kde(mov.track.1, levels = c(0.5, 0.95))

Any hints on why this may happen?
Thanks!

PS: below my session Info

R version 3.6.1 (2019-07-05)
Platform: x86_64-w64-mingw32/x64 (64-bit)
Running under: Windows 10 x64 (build 18362)

Matrix products: default

locale:
[1] LC_COLLATE=English_United States.1252  LC_CTYPE=English_United States.1252   
[3] LC_MONETARY=English_United States.1252 LC_NUMERIC=C                          
[5] LC_TIME=English_United States.1252    

attached base packages:
[1] stats     graphics  grDevices utils     datasets  methods   base     

other attached packages:
 [1] ggeffects_0.14.3   broom_0.5.3        sf_0.9-6           amt_0.1.3          ggpubr_0.2.4      
 [6] magrittr_1.5       lubridate_1.7.4    forcats_0.4.0      stringr_1.4.0      dplyr_1.0.2       
[11] purrr_0.3.3        readr_1.3.1        tidyr_1.0.0        tibble_2.1.3       ggplot2_3.3.2     
[16] tidyverse_1.3.0    knitr_1.26         ezknitr_0.6        install.load_1.2.1

loaded via a namespace (and not attached):
 [1] nlme_3.1-140        fs_1.3.1            insight_0.8.5       httr_1.4.1          rprojroot_1.3-2    
 [6] numDeriv_2016.8-1.1 tools_3.6.1         backports_1.1.5     utf8_1.1.4          rgdal_1.4-8        
[11] R6_2.4.1            sjlabelled_1.1.5    KernSmooth_2.23-15  rgeos_0.5-2         DBI_1.1.0          
[16] colorspace_1.4-1    raster_3.0-7        withr_2.1.2         sp_1.3-2            tidyselect_1.1.0   
[21] compiler_3.6.1      cli_2.0.1           rvest_0.3.5         xml2_1.3.2          scales_1.1.0       
[26] checkmate_1.9.4     DEoptimR_1.0-8      robustbase_0.93-5   classInt_0.4-2      mvtnorm_1.0-11     
[31] minqa_1.2.4         pkgconfig_2.0.3     lme4_1.1-21         bibtex_0.4.2.2      dbplyr_1.4.2       
[36] bbmle_1.0.22        rlang_0.4.7         readxl_1.3.1        rstudioapi_0.10     FNN_1.1.3          
[41] generics_0.0.2      jsonlite_1.6        Gmedian_1.2.4       Matrix_1.2-17       Rcpp_1.0.3         
[46] munsell_0.5.0       fansi_0.4.1         lifecycle_0.2.0     stringi_1.4.3       gbRd_0.4-11        
[51] MASS_7.3-51.4       grid_3.6.1          bdsmatrix_1.3-3     crayon_1.3.4        lattice_0.20-38    
[56] haven_2.2.0         splines_3.6.1       hms_0.5.2           pillar_1.4.3        boot_1.3-22        
[61] ggsignif_0.6.0      ctmm_0.5.7          codetools_0.2-16    stats4_3.6.1        reprex_0.3.0       
[66] glue_1.4.2          packrat_0.5.0       modelr_0.1.5        vctrs_0.3.4         nloptr_1.2.1       
[71] Rdpack_0.11-1       cellranger_1.1.0    gtable_0.3.0        assertthat_0.2.1    xfun_0.12          
[76] lwgeom_0.2-5        RSpectra_0.16-0     e1071_1.7-3         class_7.3-15        survival_3.1-8     
[81] units_0.6-5         ellipsis_0.3.0  

Error: `.data` is a corrupt grouped_df, the `"groups"` attribute must be a data frame

Could you please help with the following error associated with home range area estimations for nested tracks? (https://movebankworkshopraleighnc.netlify.app/2019outputfiles/testvignettemovebank)

kde.week <- trk %>%
  mutate(year = year(t_), month = month(t_), week = week(t_)) %>%
  group_by(id, year, month, week) %>%
  nest(data = -"id") %>%
  mutate(kdearea = map(data, ~ hr_kde(., levels = c(0.95)) %>% hr_area)) %>%
  select(id, year, month, week, kdearea) %>%
  unnest()

Error: .data is a corrupt grouped_df, the "groups" attribute must be a data frame
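A sketch of an alternative that avoids carrying a grouped_df into map() (not tested on these data): nest by all of the grouping variables at once instead of combining group_by() with nest(data = -"id").

kde.week <- trk %>%
  mutate(year = year(t_), month = month(t_), week = week(t_)) %>%
  nest(data = -c(id, year, month, week)) %>%
  mutate(kdearea = map(data, ~ hr_kde(., levels = c(0.95)) %>% hr_area())) %>%
  select(id, year, month, week, kdearea) %>%
  unnest(cols = kdearea)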

update_vonmises function

If the cos_ta_ coefficient is negative, then this line:
new_conc <- unname(dist$params$kappa + beta_cos_ta)
can lead to a negative kappa value, which is not allowed for the von Mises distribution. Should this be abs(unname(dist$params$kappa + beta_cos_ta))? (Note that a von Mises distribution with negative kappa is equivalent to one with |kappa| and the mean direction shifted by pi, so taking abs() alone would change the implied distribution unless mu is shifted as well.)

sampling_rate_many

The unit has no effect here:

sr <- summarize_sampling_rate_many(trk, "animals_id", time_unit = "min")

The sampling rates should all be in minutes here, but the time_unit argument does not seem to be passed on to summarize_sampling_rate.

hr_* for bbmm

Hello,

Are there any plans to build an hr_bbmm (or hr_bbkde) estimator for Brownian bridge movement model home ranges?

support for terra package in extract_covariates

Hi!

I was wondering whether it would be desirable to support SpatRaster objects from the terra package in the function amt::extract_covariates.
terra is equivalent to but faster than raster, so it would be nice to be able to use it for the same purpose here.

The corresponding function in terra seems to be terra::extract, although the input may need to be a SpatVector object. In any case, I think it should be easy to implement extract_covariates for it.

Do you think an improvement in this direction would be worthwhile?
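A sketch of what the terra-based extraction could look like in the meantime (it assumes trk is a track_xy* object and r is a SpatRaster; terra::extract() accepts a two-column matrix or data.frame of coordinates directly):

library(terra)

xy <- as.matrix(as.data.frame(trk)[, c("x_", "y_")])
trk$covariate <- terra::extract(r, xy)[, 1]   # "covariate" is a placeholder name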

`hr_area` unit

hr_area() should gain an argument units to determine whether or not units are returned.

CRS for `hr_akde`

dat %>% nest(data = -id) %>%
  mutate(akde = map(data,~ hr_akde(., fit_ctmm(., "ou")))) %>% hr_to_sf(akde, id) %>%
  st_write("akde.shp")

The CRS is lost.
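A workaround sketch (it assumes the geometries are still in the track's original projection and only the CRS metadata was dropped; 32617 is a placeholder EPSG code):

dat %>% nest(data = -id) %>%
  mutate(akde = map(data, ~ hr_akde(., fit_ctmm(., "ou")))) %>%
  hr_to_sf(akde, id) %>%
  sf::st_set_crs(32617) %>%
  st_write("akde.shp")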

akde fails

hr <- amt_fisher %>%
  nest(data = -id) %>%
  mutate(hr = map(data, hr_akde), n = map_int(data, nrow)) %>%
  hr_to_sf(hr, id, n)

Error: arguments have different crs

In addition: Warning messages:
1: Problem with `mutate()` input `hr`.
i Discarded datum Unknown based on GRS80 ellipsoid in Proj4 definition
i Input `hr` is `map(data, hr_akde)`.
2: Problem with `mutate()` input `hr`.
i Discarded datum Unknown based on GRS80 ellipsoid in Proj4 definition
i Input `hr` is `map(data, hr_akde)`.
3: Problem with `mutate()` input `hr`.
i Discarded datum Unknown based on GRS80 ellipsoid in Proj4 definition
i Input `hr` is `map(data, hr_akde)`.
4: Problem with `mutate()` input `hr`.
i Discarded datum Unknown based on GRS80 ellipsoid in Proj4 definition
i Input `hr` is `map(data, hr_akde)`.

as_track for data frames with x_, y_, and t_ variables

Hello,

I have been using the amt package to perform operations on individuals separately, such as resampling, calculating movement variables by burst, annotating data, etc. Generally I have a track_xyt object and, following the examples from the amt paper, I use tidyr::nest before calculating the variables for each individual. Sometimes this involves grouping by burst or by some unit of time (day, week, month) to calculate extra movement variables (e.g., summaries of the mean, maximum, or cumulative distance traveled during the time frame). However, when doing these operations, the classes track_xy and track_xyt are sometimes lost in the unnested output, which keeps only the classes data.frame, tbl_df, and tbl. I do not have a reproducible example at hand, but I can come back here as soon as this happens again.

In any case, with that in mind (and since this has happened to me several times already), I was thinking about an as_track method for data.frames that used to be track_xy or track_xyt objects but "lost" these classes. In this case, they already have the variables x_ and y_ (and maybe t_), but the classes must be reattached.

Looking at the other as_track methods, it would look something like (not sure if the function_name.data.frame works because of the dots):

#' @export
#' @rdname as_track
as_track.data.frame <- function(x, ...) {
  cols <- colnames(x)

  if (all(c("x_", "y_", "t_") %in% cols)) {
    make_track(x, x_, y_, t_, ...)
  } else if (all(c("x_", "y_") %in% cols)) {
    make_track(x, x_, y_, ...)
  }
}

I think this might be useful in these cases, so I leave it here as a suggestion. I can also implement it and make a PR, but it would be nice to have it double-checked by someone else, and, more than that, to discuss whether this is really worth implementing.

invalid crs from Move object

I had a workflow using a single individual move object (originally subset from a larger moveStack).

I was able to use hr_mcp, hr_locoh just fine, but received an error with hr_kde:

bird <- allmovebankdata[["X193540.rail"]]
library(amt)
birdtrack <- as_track(bird)
mcp <- hr_mcp(birdtrack)
locohK <- hr_locoh(birdtrack)
kde2 <- hr_kde(birdtrack, levels = c(0.5, 0.95))

Error in sp::CRS(SRS_string = from$wkt) : 
  no arguments in initialization list

also with

kde2 <- birdtrack %>% hr_kde(h=hr_kde_pi(.),trast = raster(as_sp(.), nrow = 1000, ncol = 1000), levels = c(0.5, 0.95), rand_buffer = 10)
Error in checkSlotAssignment(object, name, value) : 
  assignment of an object of class “character” is not valid for slot ‘proj4string’ in an object of class “Spatial”; is(value, "CRS") is not TRUE

When I make a track without using as_track() or assigning a CRS, it works.

bird2 <- as(bird,"data.frame")
birdtrack2 <- make_track(bird2,.x= location_long, .y = location_lat, .t = timestamp)
kde <- hr_kde(birdtrack2)
class(kde)
[1] "kde"     "hr_prob" "hr"      "list"  

which is great, but my data are in lat-long, so hr_area produces home-range sizes that are not comparable.
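If the CRS errors above can be avoided (e.g., after updating the sp/rgdal stack), one sketch for getting comparable areas is to project the lat-long track to a metric CRS before estimating the KDE (32616 is a placeholder UTM EPSG code):

birdtrack3 <- make_track(bird2, .x = location_long, .y = location_lat, .t = timestamp,
                         crs = sp::CRS("+proj=longlat +datum=WGS84")) %>%
  transform_coords(sp::CRS("+init=epsg:32616"))

kde <- hr_kde(birdtrack3)
hr_area(kde)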

Conf. Int with RSS

Hi,
I wanted to clarify the significance/interpretation of the CI values in the log_RSS outputs.

My data has movement data from 5 individuals, and I used the nesting function while preparing the data for the SSF analysis.
Do the CI values then represent the individual variation in RSSs?
      log_rss            lwr           upr
1  -0.3016729     -0.9272444     0.3238985
2   0.2160458     -0.1960536     0.6281452
3  -1.5873831     -2.7426228    -0.4321434
4   0.2295385     -0.5318209     0.9908980
5 -12.0790214 -2733.3505573  2709.1925144
The data looks at different landcover classes, with respect to a reference class (grassland)

I am assuming there is no separate method to account for individual-level effects in the amt workflow (e.g., including a random effect).

speed is not returning the results in m/s

The speed function is not giving the output in m/s (as its documentation indicates). That may be because internally the function calculates the steps without considering the lonlat = TRUE argument. I have tried changing the CRS value, but the result is the same.

speed.track_xyt <- function(x, append_na = TRUE, ...) {
  # compute steps (note: lonlat is not passed on to steps() here)
  stps <- suppressWarnings(steps(x))
  # step length divided by step duration in seconds
  s <- stps$sl_ / as.numeric(stps$dt_, units = "secs")
  if (append_na) {
    c(s, NA)  # pad with NA so there is one value per relocation
  } else {
    s
  }
}

Any tryCatch type function for failed smoothing parameter calculations

I have a large set of tracks (several thousand) that I am trying to calculate KDEs for. When running code to produce the KDE and home-range area estimates, some individuals fail to produce smoothing parameter estimates, rightfully so in most cases. Is there a way to "skip" those?

I can run all individuals with locoh methods just fine. (And I just plain gave up on calculating lscv for now)

kdepifunc<-function(x) {
  x %>% hr_kde(h=hr_kde_pi(.),trast = raster(as_sp(.), nrow = 1000, ncol = 1000), levels = c(0.5, 0.95), rand_buffer = 10)
}  

#works on first few individuals
z1<-alltracks_data_30m[1:2,]%>%  mutate(kde_pi = map(trk, kdepifunc)) %>%  mutate(kde_pi_area = map(kde_pi, function(x) x %>% hr_area()))

#does not work on all individuals
z3<-alltracks_data_30m%>%  mutate(kde_pi = map(trk, kdepifunc)) %>%  mutate(kde_pi_area = map(kde_pi, function(x) x %>% hr_area()))

Error in sp::SpatialPolygonsDataFrame(con, df) : Object length mismatch:
con has 1 Polygons objects, but df has 2 rows

The only workaround I can think of is looping a tryCatch in, but for loops seem like a poor substitute. Is there a way to insert the tryCatch into the piped commands?
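One way to fold a tryCatch-style guard into the piped workflow is purrr::possibly(), which returns a fallback value instead of throwing; a sketch:

safe_kdepi <- purrr::possibly(kdepifunc, otherwise = NULL)

z3 <- alltracks_data_30m %>%
  mutate(kde_pi = map(trk, safe_kdepi)) %>%
  mutate(kde_pi_area = map(kde_pi, function(x) if (!is.null(x)) hr_area(x) else NULL))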

Loss of class

nest(), unnest(), and bind_rows() cause the track_* and steps_* classes to disappear.

`hr_kde` levels not automatically matched to geometry

I discovered this while trying to see whether I could use a vector of descending values for the levels argument to make the geometries save in descending order. When I did, the levels vector in the data frame was in descending order, but the geometries were in ascending order (which seems to be the default). I tested it on MCPs just to check; there the geometries were written in ascending order but so were the levels, so all good. I am not sure about the other hr_ functions.

f <- amt_fisher %>% 
  make_track(x_,y_,t_, id=id, sex=sex, crs=5070) %>% 
  nest(locs = c(x_,y_,t_))
hr <- f %>% 
  mutate(kde = map(locs, ~hr_kde(., levels=c(0.95, 0.85, 0.6, 0.5))),
         mcp = map(locs, ~hr_mcp(., levels=c(0.95, 0.85, 0.6, 0.5))))
hr_area(hr$mcp[[1]])
hr_area(hr$kde[[1]])
> hr_area(hr$mcp[[1]])
# A tibble: 4 x 3
  level what          area
  <dbl> <chr>        <dbl>
1  0.5  estimate  7574937.
2  0.6  estimate  9253790.
3  0.85 estimate 17693784.
4  0.95 estimate 19753529.
> hr_area(hr$kde[[1]])
# A tibble: 4 x 3
  level what          area
  <dbl> <chr>        <dbl>
1  0.95 estimate  7842050.
2  0.85 estimate 10137466.
3  0.6  estimate 18998986.
4  0.5  estimate 27258422.

Since this is in the same ballpark, I would also like to suggest saving the geometries in descending order by default, so that home-range data frames can be converted with hr_to_sf and the result can be plotted with the geometries drawn in the correct order. Currently, if you plot the sf objects (or exported .shp files in Arc) as they are, the larger-level home range gets drawn over the smaller ones and you cannot see them all unless fill = NA.
This can be fixed with arrange(desc(levels)) after the fact, but that puts an essential plotting step on the user, and I could not find it addressed in the documentation.

Problem with `hr_isopleths()` when aKDE has >1 level

Demo of problem:

library(amt)

dat <- amt_fisher %>% 
  filter(id == "F1")

# Works fine:
akde <- hr_akde(dat)
hr_isopleths(akde)

# Doesn't work (levels argument ignored):
hr_isopleths(akde, levels = c(0.95, 0.5))

# Doesn't work (bug in hr_isopleth.akde)
akde2 <- hr_akde(dat, levels = c(0.95, 0.5))
hr_isopleths(akde2) # Error

# amt:::hr_isopleths.akde
x <- akde2
conf.level <- 0.95
# function (x, conf.level = 0.95, ...) 
# {
  # This is fine
  checkmate::assert_number(conf.level, lower = 0, upper = 1)
  res <- ctmm::SpatialPolygonsDataFrame.UD(x$akde, level.UD = x$levels, 
                                           conf.level = conf.level)
  res1 <- sf::st_as_sf(res)
  res1 <- sf::st_transform(res1, akde$crs)
  res1 <- res1[, setdiff(names(res1), "name")]
  
  # Error here
  # Should each always be 3? (lci, estimate, uci)
  # Alternatively, should we keep res1$name to extract "level" and "what"?
  res1$level <- rep(x$level, each = nrow(res1))
  res1$what <- rep(c(paste0("lci (", conf.level, ")"), 
                     "estimate", paste0("uci (", conf.level, ")")), 
                   length(x$levels))
  
  # Rest of code is fine
  res1$area = sf::st_area(res1)
  res1[, c("level", "what", "area", "geometry")]
# }

Potential solution (might not work if length(conf.level) > 1)

# function (x, conf.level = 0.95, ...) 
# {
  checkmate::assert_number(conf.level, lower = 0, upper = 1)
  res <- ctmm::SpatialPolygonsDataFrame.UD(x$akde, level.UD = x$levels, 
                                           conf.level = conf.level)
  res1 <- sf::st_as_sf(res)
  res1 <- sf::st_transform(res1, akde$crs)
  
  ## Proposed fix
  # Extract level and what from raw names
  split_names <- strsplit(res1$name, " ", fixed = TRUE)
  
  level_perc <- sapply(split_names, getElement, 2)
  est <- sapply(split_names, getElement, 3)

  res1$level <- as.numeric(gsub(pattern = "%", replacement = "", 
                     x = level_perc, fixed = TRUE))/100
  res1$what <- ifelse(est == "low", paste0("lci (", conf.level, ")"),
                      ifelse(est == "est", "estimate",
                      paste0("uci (", conf.level, ")")))
  res1$name <- NULL
  row.names(res1) <- NULL
  ##
  
  res1$area = sf::st_area(res1)
  res1[, c("level", "what", "area", "geometry")]
# }

Mismatching units when calculating dt_ column in steps() function

Hello,
I have been working with your amt package to run a step-selection function on animal movement data recorded from GPS collars. The package has been incredibly useful, and your guide, Fitting Step-Selection Functions with amt, was very easy to follow. However, I have come across a problem with the dt_ calculation inside the steps() function. I have been running the function across a data frame that includes multiple individuals, and the units of the dt_ column vary between individuals. I have found a quick fix (listed below), but I thought I would bring this to your attention because some users might not notice it, especially when working with very large data sets.

Expected:
When running a function that includes track() and steps() on a multi-individual data frame by = ID, using a datetime for the dt_ calculation, the function should return a dt_ column with the same units across all individuals.

Actual:
The steps() function returns a dt_ column with hours as the unit for some individuals and minutes as the unit for others.

Example on how to reproduce problem:

##rewrite track() and steps() into one function called stepsfunction()
stepsfunction<- function(x.col, y.col, date.col) {
  trk <- track(x.col, y.col, date.col) %>% 
    steps()
}
##run stepsfunction() on a dt with multiple individuals labeled with an "ID" using "datatable" package to run by=ID
steps <- dt[, stepsfunction( 8, x.col = EASTING, y.col = NORTHING, date.col = datetime), 
         by = ID]
##return the ranges of calculated dt_ to look for noticeable differences between individuals
##in this case we should expect the minimum dt_ to be ~2hr because data come from collars with 2hr fix rates. 
knitr::kable(locs[, range(dt_), by = ID])

output:
M003 and M009 appear to be problematic; when investigated in detail, their calculations were done in minutes (presumably because difftime() with the default units = "auto" picks the unit per call).

ID V1
M002 1.965556 hours
M002 66.031944 hours
M003 29.350000 hours
M003 1590.483333 hours
M004 1.975556 hours
M004 35.999722 hours
M005 1.966667 hours
M005 22.000000 hours
M006 1.979167 hours
M006 111.999722 hours
M008 1.983333 hours
M008 37.983333 hours
M009 29.316667 hours
M009 2279.333333 hours

How to fix problem:

## add a line that replaces the dt_ column with a manually computed difftime() using an explicit unit
stepsfunction <- function(x.col, y.col, date.col) {
  trk <- track(x.col, y.col, date.col) %>% 
    steps()
  trk$dt_ <- difftime(trk$t2_, trk$t1_, unit = 'hours')
  trk
}
steps <- dt[, stepsfunction( 8, x.col = EASTING, y.col = NORTHING, date.col = datetime), 
         by = ID]
knitr::kable(locs[, range(dt_), by = ID])
ID V1
M002 1.9655556 hours
M002 66.0319444 hours
M003 0.4891667 hours
M003 26.5080556 hours
M004 1.9755556 hours
M004 35.9997222 hours
M005 1.9666667 hours
M005 22.0000000 hours
M006 1.9791667 hours
M006 111.9997222 hours
M008 1.9833333 hours
M008 37.9833333 hours
M009 0.4886111 hours
M009 37.9888889 hours

Session info:
R version 3.4.3 (2017-11-30)
Platform: x86_64-w64-mingw32/x64 (64-bit)
Running under: Windows >= 8 x64 (build 9200)

Matrix products: default

locale:
[1] LC_COLLATE=English_United States.1252 LC_CTYPE=English_United States.1252
[3] LC_MONETARY=English_United States.1252 LC_NUMERIC=C
[5] LC_TIME=English_United States.1252

attached base packages:
[1] stats graphics grDevices utils datasets methods base

other attached packages:
[1] bindrcpp_0.2 data.table_1.10.4-3 amt_0.0.2.0 survival_2.41-3
[5] forcats_0.3.0 stringr_1.3.0 dplyr_0.7.4 purrr_0.2.4
[9] readr_1.1.1 tidyr_0.8.0 tibble_1.4.2 ggplot2_2.2.1
[13] tidyverse_1.2.1 raster_2.6-7 sp_1.2-7 lubridate_1.7.2

loaded via a namespace (and not attached):
[1] reshape2_1.4.3 splines_3.4.3 haven_1.1.1 lattice_0.20-35
[5] colorspace_1.3-2 yaml_2.1.17 utf8_1.1.3 rlang_0.2.0
[9] pillar_1.2.1 fitdistrplus_1.0-9 foreign_0.8-69 glue_1.2.0
[13] modelr_0.1.1 readxl_1.0.0 bindr_0.1 plyr_1.8.4
[17] munsell_0.4.3 gtable_0.2.0 cellranger_1.1.0 rvest_0.3.2
[21] mvtnorm_1.0-7 psych_1.7.8 labeling_0.3 parallel_3.4.3
[25] broom_0.4.3 Rcpp_0.12.15 scales_0.5.0 jsonlite_1.5
[29] mnormt_1.5-5 hms_0.4.2 stringi_1.1.6 grid_3.4.3
[33] rgdal_1.2-16 cli_1.0.0 tools_3.4.3 magrittr_1.5
[37] lazyeval_0.2.1 crayon_1.3.4 pkgconfig_2.0.1 MASS_7.3-47
[41] Matrix_1.2-12 xml2_1.2.0 assertthat_0.2.0 httr_1.3.1
[45] rstudioapi_0.7 boot_1.3-20 R6_2.2.2 circular_0.4-93
[49] nlme_3.1-131 compiler_3.4.3

3D data processing?

Hi, first of all thanks for providing the package! It has made past analysis really easy.

In a recent analysis I ran into some limitations, mostly because the package currently handles 2D data. This seems to be the norm in the field (I rolled into a movement ecology project by accident, so forgive my ignorance if my take on this is wrong). I was wondering whether there are planned provisions for managing 3D data (height/depth).

This would basically be an extension of Potential Path Volumes for birds (https://movementecologyjournal.biomedcentral.com/articles/10.1186/s40462-019-0158-4) or fish (https://besjournals.onlinelibrary.wiley.com/doi/10.1111/2041-210X.13232) or similar concepts I guess.

Although some analyses can be reduced to a 2D framework, some can't (or I haven't found an explicit solution for them anyway). I was wondering whether there are provisions to add height/depth to the modelling environment (or if there are constraints that make this infeasible). Again, I might be missing large sections of the literature, but the things I do find seem rather recent.

optional output crs for bbox

Hi!

I was looking at the bbox function, and it is nice that it allows one to include a buffer. This might be quite useful!

In one of my own functions that does something similar, I added an additional parameter to allow reprojection of the bbox. This might be particularly interesting if one has a movement dataset in UTM and wants the bbox in lat-lon coordinates (EPSG 4326), for instance, to use in functions such as ggmap, which generally use that projection.

The part of code to do this looks like:

 box <- obj %>% 
      sf::st_bbox() %>% 
      matrix(ncol = 2, byrow = T) %>%
      as.data.frame() %>% 
      sf::st_as_sf(coords = c("V1", "V2"), crs = sf::st_crs(obj)) %>% 
      sf::st_transform(crs = crs) %>% 
      sf::st_bbox()

where in this case obj is an sf input object and crs is the desired coordinate reference system.
The idea of this block is to avoid reprojecting the whole movement dataset, which might take time depending on its size.

So I was wondering whether it would not be possible to add such an option to amt::bbox(). I can try it, make an example, and make a pull request, if you think it is worth it.

Non-environmental, temporal varying covariate?

Hello: I'm trying to find the best venue for my question about implementing amt, and I am hoping this is it.

I'm finding limited instruction on whether it's possible to associate a temporally varying, non-environmental covariate with available steps for inclusion in an SSF fit with fit_clogit.
For example, a "days to parturition" variable: it is easy enough to associate this temporal variable with my used locations in the track, but I'm wondering how to resample and associate it with the available steps along with my environmental covariates.
This is similar to a time-of-day variable, except continuous rather than categorically circular. However, the time-of-day covariate only seems passable through amt's built-in time_of_day function.

In various vignettes and Signer et al. (2019) 'dummy' internal covariates (e.g., age or month) are initially considered but never carried through to fit SSFs.

I'd be happy to attach a project, code, or data if that would be helpful. I'd appreciate any ideas, guidance, or insights!
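For context, one pattern that could be tried (a sketch only; the object and column names are placeholders, and it assumes the steps object keeps t1_/t2_ timestamps, which the paired random steps inherit from their observed step) is to join the time-varying covariate by date after generating the random steps:

ssf_dat <- trk %>%
  steps_by_burst() %>%
  random_steps() %>%
  extract_covariates(env_raster) %>%
  mutate(date = as.Date(t2_)) %>%
  left_join(parturition_tbl, by = "date")   # parturition_tbl: date + days_to_parturition (add an id key for multiple animals)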

Cannot make track

Hello, I've been trying to make a track with some GPS data. My columns are time, date, longitude, latitude, and a datetime column I've created. I keep getting the following error:

Error: invalid column index : NA for variable: 't1' = 'NA'

Here's the code I've been using

ap<-read.csv("bear602.csv", header=T)
ap
summary(ap)

ap$datetime<-as.POSIXct(lubridate::mdy(ap$GMT_DATE) + lubridate::hms(ap$GMT_TIME))
projt <- CRS("+proj=longlat +ellps=WGS84 +datum=WGS84 +no_defs")

tbl<-as.tbl(ap)
.x<-as.numeric(tbl$LONGITUDE)
.y<-as.numeric(tbl$LATITUDE)
.t<-as_datetime(tbl$datetime)

tr2<-make_track(tbl, .x, .y, .t, crs = projt)

and the error:
.t found, creating track_xyt.
Error: invalid column index : NA for variable: 't1' = 'NA'

I've tried transforming the coordinates, changing column names, using only the necessary columns, etc., to no avail. Could you provide guidance?
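For what it's worth, a sketch of what usually works (untested on this file): pass the data frame and its column names directly to make_track() instead of pre-extracted vectors named .x/.y/.t.

ap$datetime <- as.POSIXct(lubridate::mdy(ap$GMT_DATE) + lubridate::hms(ap$GMT_TIME))

tr2 <- make_track(ap, LONGITUDE, LATITUDE, datetime,
                  crs = CRS("+proj=longlat +ellps=WGS84 +datum=WGS84 +no_defs"))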

error with hr_od and as_telemetry

Hi, I've been having an error pop up that is stopping an analysis that previously ran without error with the same script and dataset. I've been digging into it, and I think the real issue is popping up in the as_telemetry function while it preps the data for fit_ctmm within hr_od.
The odd thing is that it appears to show up only for some of the datasets. See the example below. I've included a track dataset with multiple animals that is then nested by id, year, and season. The as_telemetry function works for the first row of the nested data, but not for the second row, which is where the hr_od analysis gets stopped.

trk_data.Rdata.zip

## Load required packages
library(amt)
library(ctmm)
library(raster)
library(tidyverse)

load("trk_data.Rdata")
# nest the data by year, id and season
elks <- trk %>% nest(data = -c(id, year,season)) %>% 
  mutate(n = map_int(data, nrow)) %>% 
  filter(n > 20)
elks1 <- elks %>%
  mutate(
    "hr_od_ou" = map(data, ~ hr_od(., model = fit_ctmm(., "ou"))),
    "hr_od_50" = map(data, ~ hr_od(., model = fit_ctmm(., "ou"), levels = c(0.5)))
  )

# issue appears to be with `as_telemetry` but not with all of the rows in nested data
## this works
as_telemetry(elks$data[[1]]) 
## but this doesn't
as_telemetry(elks$data[[2]])
