
neontreeevaluation_package's Introduction


A multi-sensor benchmark dataset for detecting individual trees in airborne RGB, Hyperspectral and LIDAR point clouds

Maintainer: Ben Weinstein - University of Florida.

Paper and Citation

https://www.biorxiv.org/content/10.1101/2020.11.16.385088v1

Broad scale remote sensing promises to build forest inventories at unprecedented scales. A crucial step in this process is designing individual tree segmentation algorithms to associate pixels into delineated tree crowns. While dozens of tree delineation algorithms have been proposed, their performance is typically not compared based on standard data or evaluation metrics, making it difficult to understand which algorithms perform best under what circumstances. There is a need for an open evaluation benchmark to minimize differences in reported results due to data quality, forest type and evaluation metrics, and to support evaluation of algorithms across a broad range of forest types. Combining RGB, LiDAR and hyperspectral sensor data from the National Ecological Observatory Network’s Airborne Observation Platform with multiple types of evaluation data, we created a novel benchmark dataset to assess individual tree delineation methods. This benchmark dataset includes an R package to standardize evaluation metrics and simplify comparisons between methods. The benchmark dataset contains over 6,000 image-annotated crowns, 424 field-annotated crowns, and 3,777 overstory stem points from a wide range of forest types. In addition, we include over 10,000 training crowns for optional use. We discuss the different evaluation sources and assess the accuracy of the image-annotated crowns by comparing annotations among multiple annotators as well as to overlapping field-annotated crowns. We provide an example submission and score for an open-source baseline for future methods.

Installation

library(devtools)
install_github("Weecology/NeonTreeEvaluation_package")

Download sensor data

To download the evaluation data from the Zenodo archive (~1 GB), use the download() function, which places the data in the correct package location. To download the much larger training data, set training=TRUE.

library(NeonTreeEvaluation)
download()
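
To also fetch the training crowns mentioned above, the same function takes a training flag:

#Optionally download the much larger training data as well
download(training = TRUE)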

Getting Started

The package contains two vignettes. The ‘Data’ vignette describes each datatype and how to interact with it in R. The ‘Evaluation’ vignette shows how to submit predictions to the benchmark.

Submission Format

CSV bounding boxes

The format of the submission is as follows

  • A csv file
  • 5 columns: plot_name, xmin, ymin, xmax, ymax

Each row contains information for one predicted bounding box.

The plot_name should match the corresponding file name in the dataset without its extension (e.g. SJER_021_2018, not SJER_021_2018.tif) and should not be the full path to the file on disk. Not all evaluation data are available for all plots. Functions such as evaluate_field_crowns and evaluate_image_crowns look for matching plot names and ignore other plots. Depending on the speed of the algorithm, the simplest approach is to predict all images in the RGB folder (see list_rgb()); the package will match each image to the correct evaluation procedure (see the sketch below).
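
As a rough sketch of that workflow, the loop below predicts every RGB image and assembles the required columns; predict_boxes() is a hypothetical placeholder for your own detector, assumed to return a data.frame with xmin, ymin, xmax and ymax.

library(dplyr)
library(NeonTreeEvaluation)

#Predict every image in the RGB folder; predict_boxes() is a hypothetical detector
rgb_paths <- list_rgb()
my_submission <- lapply(rgb_paths, function(path) {
  boxes <- predict_boxes(path)
  #plot_name is the file name without extension or directory
  boxes$plot_name <- tools::file_path_sans_ext(basename(path))
  boxes
}) %>% bind_rows()

write.csv(my_submission, "submission.csv", row.names = FALSE)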

For a list of NEON site abbreviations: https://www.neonscience.org/field-sites/field-sites-map

Example

The package contains a sample submission file.

library(raster)
library(dplyr)
library(NeonTreeEvaluation)
head(submission)
#>        xmin     ymin      xmax     ymax     score label     plot_name
#> 1  41.01716 230.8854 151.08607 342.6985 0.8098674  Tree DSNY_014_2019
#> 2 357.32129 122.1164 397.57458 159.3758 0.6968824  Tree DSNY_014_2019
#> 3  30.39723 136.9157  73.79434 184.9473 0.5713338  Tree DSNY_014_2019
#> 4 260.65921 285.6689 299.68811 326.7933 0.5511004  Tree DSNY_014_2019
#> 5 179.34564 371.6130 232.49385 400.0000 0.4697072  Tree DSNY_014_2019
#> 6 316.27377 378.9802 363.67542 400.0000 0.3259409  Tree DSNY_014_2019

Shp Polygons

Instead of bounding boxes, some methods may return polygons. To submit polygons, create a single unprojected shapefile with polygons in image coordinates. Polygons must be complete, with no holes. Below is the sample csv submission above in polygon format; the xmin, xmax, etc. columns are ignored because that information is stored in the geometry.

head(submission_polygons)
#> Simple feature collection with 6 features and 7 fields
#> geometry type:  POLYGON
#> dimension:      XY
#> bbox:           xmin: 30.39723 ymin: 122.1164 xmax: 397.5746 ymax: 400
#> CRS:            NA
#>        xmin     ymin      xmax     ymax     score label     plot_name
#> 1  41.01716 230.8854 151.08607 342.6985 0.8098674  Tree DSNY_014_2019
#> 2 357.32129 122.1164 397.57458 159.3758 0.6968824  Tree DSNY_014_2019
#> 3  30.39723 136.9157  73.79434 184.9473 0.5713338  Tree DSNY_014_2019
#> 4 260.65921 285.6689 299.68811 326.7933 0.5511004  Tree DSNY_014_2019
#> 5 179.34564 371.6130 232.49385 400.0000 0.4697072  Tree DSNY_014_2019
#> 6 316.27377 378.9802 363.67542 400.0000 0.3259409  Tree DSNY_014_2019
#>                      st_sfc.lst.
#> 1 POLYGON ((41.01716 230.8854...
#> 2 POLYGON ((357.3213 122.1164...
#> 3 POLYGON ((30.39723 136.9157...
#> 4 POLYGON ((260.6592 285.6689...
#> 5 POLYGON ((179.3456 371.613,...
#> 6 POLYGON ((316.2738 378.9802...
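
A minimal sketch of producing such a shapefile with the sf package, assuming box predictions in image coordinates (the CRS is deliberately left unset so the file is unprojected):

library(sf)

#Turn one bounding box (image coordinates) into a closed polygon ring
box_to_polygon <- function(xmin, ymin, xmax, ymax) {
  st_polygon(list(matrix(c(xmin, ymin,
                           xmax, ymin,
                           xmax, ymax,
                           xmin, ymax,
                           xmin, ymin), ncol = 2, byrow = TRUE)))
}

#Build an unprojected sf object from the sample submission and write the shapefile
geoms <- do.call(st_sfc, Map(box_to_polygon,
                             submission$xmin, submission$ymin,
                             submission$xmax, submission$ymax))
polygon_submission <- st_sf(submission, geometry = geoms)
st_write(polygon_submission, "submission_polygons.shp")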

Scores for image-annotated crowns

Author                 Precision  Recall  Cite/Code
Weinstein et al. 2020  0.66       0.79    https://deepforest.readthedocs.io/
Silva et al. 2016      0.34       0.47    lidR package

The main data source is image-annotated crowns, in which a single observer annotated visible trees in 200 40 m x 40 m images from across the United States. Submissions are bounding boxes in image coordinates; the benchmark score is computed against the image-annotated ground truth data.

#Get a sample plot to run quickly; skip this filter to run the entire dataset
df<-submission %>% filter(plot_name %in% c("SJER_052_2018"))

#Compute total recall and precision for the overlap data
results<-evaluate_image_crowns(predictions = df,project = T, show=F, summarize = T)
#> [1] "SJER_052_2018"
results[1:3]
#> $overall
#> # A tibble: 1 x 2
#>   precision recall
#>       <dbl>  <dbl>
#> 1         1  0.778
#> 
#> $by_site
#> # A tibble: 1 x 3
#> # Groups:   Site [1]
#>   Site  recall precision
#>   <chr>  <dbl>     <dbl>
#> 1 SJER   0.778         1
#> 
#> $plot_level
#> # A tibble: 1 x 3
#> # Groups:   plot_name [1]
#>   plot_name     recall precision
#>   <chr>          <dbl>     <dbl>
#> 1 SJER_052_2018  0.778         1

For a list of NEON site abbreviations: https://www.neonscience.org/field-sites/field-sites-map

Scores for field-annotated crowns

Author                 Recall  Cite/Code
Weinstein et al. 2020  0.61    https://deepforest.readthedocs.io/

The second data source is a small number of field-annotated crowns from two geographic sites. These crowns were drawn on a tablet while physically standing in the field, thereby reducing the uncertainty in crown segmentation.

df <- submission %>% filter(plot_name=="OSBS_95_competition")
results<-evaluate_field_crowns(predictions = df,project = T)
#> [1] "OSBS_95_competition"

results
#> $overall
#> # A tibble: 1 x 2
#>   precision recall
#>       <dbl>  <dbl>
#> 1     0.029      1
#> 
#> $by_site
#> # A tibble: 1 x 3
#> # Groups:   Site [1]
#>   Site  recall precision
#>   <chr>  <dbl>     <dbl>
#> 1 <NA>       1     0.029
#> 
#> $plot_level
#> # A tibble: 1 x 3
#> # Groups:   plot_name [1]
#>   plot_name           recall precision
#>   <chr>                <dbl>     <dbl>
#> 1 OSBS_95_competition      1     0.029

Scores for field-collected stems

Author                 Recall  Cite/Code
Weinstein et al. 2020  0.74    https://deepforest.readthedocs.io/

The third data source is the NEON Woody Vegetation Structure Dataset. Each tree stem is represented by a single point. This data has been filtered to represent overstory trees visible in the remote sensing imagery.

df <- submission %>% filter(plot_name=="JERC_049_2018")
results<-evaluate_field_stems(predictions = df,project = F, show=T, summarize = T)
#> [1] "JERC_049"

results
#> $overall
#>      recall
#> 1 0.5555556
#> 
#> $by_site
#> # A tibble: 1 x 2
#>   Site  recall
#>   <fct>  <dbl>
#> 1 JERC   0.556
#> 
#> $plot_level
#>   siteID plot_name    recall n
#> 1   JERC  JERC_049 0.5555556 9

If you would prefer not to clone this repo, a static version of the benchmark is here: https://zenodo.org/record/3723357#.XqT_HlNKjOQ

Sensor Data

RGB Camera

library(raster)
library(NeonTreeEvaluation)

#Read RGB image as projected raster
rgb_path<-get_data(plot_name = "SJER_021_2018",type="rgb")
rgb<-stack(rgb_path)

#Find path and parse
xmls<-get_data("SJER_021_2018",type="annotations")
annotations<-xml_parse(xmls)
#View one plot's annotations as polygons, projected into UTM
#The UTM zone (epsg) is copied from the RGB raster; the xml has no native projection metadata
xml_polygons <- boxes_to_spatial_polygons(annotations,rgb)

plotRGB(rgb)
plot(xml_polygons,add=T)

Lidar

To access the draped lidar hand annotations, use the “label” column. Each tree has a unique integer.

library(lidR)
path<-get_data("TEAK_052_2018",type="lidar")
r<-readLAS(path)
trees<-lasfilter(r,!label==0)
plot(trees,color="label")

We elected to keep all points, regardless of whether they correspond to a tree annotation. Non-tree points have the value 0. We highly recommend removing these points before predicting on the point cloud. Since the annotations were made in the RGB imagery and then draped onto the point cloud, there will naturally be some erroneous points at the borders of trees.

Hyperspectral

Hyperspectral surface reflectance (NEON ID: DP1.30006.001) is a 426-band raster covering the visible and near-infrared spectrum.

path<-get_data("MLBS_071_2018",type="hyperspectral")
g<-stack(path)
nlayers(g)
#> [1] 426
#Grab a three band combination to view as false color
f<-g[[c(52,88,117)]]
plotRGB(f,stretch="lin")

Submission Ranks

To add a score to this benchmark, please submit a pull request to this README with the score and the submission csv for confirmation.

Citation

This benchmark is currently in review. Either cite this repo or the original article using these data: Weinstein, Ben G., et al. “Individual tree-crown detection in RGB imagery using semi-supervised deep learning neural networks.” Remote Sensing 11.11 (2019): 1309. https://www.mdpi.com/2072-4292/11/11/1309

neontreeevaluation_package's People

Contributors

bw4sz, ethanwhite


neontreeevaluation_package's Issues

some functions don't work when using "NeonTreeEvaluation::" instead of "library(NeonTreeEvaluation)"

Dear Mr. Weinstein,

I encountered errors with the list_field_crowns() and list_field_stems() functions when calling them without using library(NeonTreeEvaluation):

NeonTreeEvaluation::list_field_crowns()
#> Error in unique(crown_polygons$plotID): object 'crown_polygons' not found
NeonTreeEvaluation::list_field_stems()
#> Error in unique(field$plotID): object 'field' not found

Created on 2021-05-18 by the reprex package (v2.0.0)

I prefer writing package-name:: instead of library(package-name) in my code with all but the most frequently called functions. In this case it seems like the data sets crown_polygons and field are not loaded and can thus not be found when I don't use library(NeonTreeEvaluation).

The problem can be solved by prefixing the data variables with NeonTreeEvaluation:: like so:

list_field_crowns<-function(){
  plot_names<-unique(NeonTreeEvaluation::crown_polygons$plotID) # here
  plot_names<-paste(plot_names,"_competition",sep="")
  rgb_images<-get_data(plot_names,"rgb")

  #Ensure all exists
  rgb_images<-rgb_images[sapply(rgb_images, file.exists)]
  return(rgb_images)
}

This is probably also an issue with other functions that use these data sets. Should I make a pull request where I prefix all instances of data usage with NeonTreeEvaluation::?

get_data function

system.file("extdata", "Weinstein2019.csv",package = "NeonTreeEvaluation"), should be cleaner.

Why match ground truth and prediction based on overlap and not based on jaccard?

Hi Ben,

as far as I understand the code of compute_precision_recall() you match individual ground truth and prediction polygons by maximizing their overall overlap with clue::solve_LSAP(). Then you calculate the Jaccard index of the matched polygons.

I was wondering why you don't match ground truth and predictions based on their Jaccard index? This would probably not make much of a difference in most cases, but I have encountered scenarios where it might matter.
More specifically, I encountered cases of undersegmentation where adjacent trees were recognized as just one tree. When such trees stand in a somewhat diagonal line with respect to the xy-coordinate system of the bounding boxes, they will have one very large bbox. Small bboxes of individually detected trees next to such an undersegmentation might fall completely inside the large bbox. Now imagine there is a ground truth bbox for such an individual tree next to an undersegmented cluster. The ground truth bbox might not perfectly overlap with the bbox of the segmented tree, but it might overlap completely with the large bbox of the undersegmented cluster next to it. Matching segmentation and ground truth bboxes by maximizing their overlap area might then lead to the ground truth bbox of our small tree being assigned to the larger bbox of the undersegmented cluster. A matching based on intersection-over-union, on the other hand, would likely assign the ground truth bbox correctly, or at least not to the large bbox.
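
As a rough sketch of what I mean (assuming hypothetical precomputed matrices overlap and iou, each with ground truth boxes as rows and predictions as columns), the only change would be which matrix is handed to clue::solve_LSAP():

library(clue)

#overlap holds intersection areas, iou holds intersection-over-union; both are hypothetical
match_by_overlap <- solve_LSAP(overlap, maximum = TRUE)  #current behaviour as I understand it
match_by_iou     <- solve_LSAP(iou, maximum = TRUE)      #proposed alternative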

I hope my description is understandable. If not, please let me know and I will provide a graphical illustration.

Cheers,
Leon

How to average recall and precision across sites

  1. Average across all bounding boxes. Pros: clear. Con: Biased towards denser forested sites
  2. Average per image, average across images. Pros: Accounts for density. Cons: Biased towards sites with more images? Depends on the skew?
  3. Average per site, average across sites. Pros: Accounts for density and different number of images per sites. Cons: awkward, need to keep track of which image is which site. Will change over time.
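
A rough sketch of the three options, assuming a hypothetical data frame matches with one row per ground-truth box and columns Site, plot_name and matched (logical):

library(dplyr)

#1. Average across all bounding boxes
recall_all <- mean(matches$matched)

#2. Average per image, then across images
recall_by_image <- matches %>%
  group_by(plot_name) %>%
  summarize(recall = mean(matched)) %>%
  summarize(recall = mean(recall))

#3. Average per site, then across sites
recall_by_site <- matches %>%
  group_by(Site, plot_name) %>%
  summarize(recall = mean(matched), .groups = "drop") %>%
  group_by(Site) %>%
  summarize(recall = mean(recall)) %>%
  summarize(recall = mean(recall))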

evaluate_image_crowns doesn't work in vignette "evaluation"

Dear Mr. Weinstein,

I was trying to learn about this package by executing the code in the vignettes. However, with the following code from the "evaluation" vignette, I got an error:

library(raster)
library(dplyr)
library(NeonTreeEvaluation)

#Get three sample plots to run quickly, ignore to run the entire dataset
df<-submission %>% filter(plot_name %in% c("SJER_052","TEAK_061","TEAK_057"))

#Compute total recall and precision for the overlap data
results<-evaluate_image_crowns(predictions = df,project = T, show=F, summarize = T)
#> [1] SJER_052
#> 1292 Levels: 2018_SJER_3_252000_4104000_image_628 ...
#> [1] TEAK_057
#> 1292 Levels: 2018_SJER_3_252000_4104000_image_628 ...
#> Error: Can't combine `..1$IoU` <units> and `..73$IoU` <double>.
results
#> Error in eval(expr, envir, enclos): object 'results' not found

Created on 2021-05-18 by the reprex package (v2.0.0)

This error occurs with both the TEAK_061 and TEAK_057 plots but not with the SJER_052 plot.

The error traces back to the calc_jaccard function, where it occurs when the list of Jaccard statistics for individual predictions is bound together into one data.frame. I think this happens because the values returned by the IoU function are "units" objects (as noted in the error message), which cannot be combined with the NA values that are returned when there is no ground truth polygon for a predicted one. In any case, the error doesn't show up when I wrap the call to the IoU function with as.numeric().

calc_jaccard <- function(assignment, ground_truth, predictions) {
  jaccard_stat <- list()
  for (i in 1:nrow(predictions)) {
    polygon_row <- predictions[i, ]
    y <- predictions[predictions$crown_id == polygon_row$crown_id, ]

    # check assignment
    polygon_assignment <- assignment[assignment$prediction_id == polygon_row$crown_id, "crown_id"]
    if (length(polygon_assignment) == 0) {
      # the NA value assigned to IoU here might not be compatible with the output of the IoU function below
      jaccard_stat[[i]] <- data.frame(crown_id = as.character(polygon_row$crown_id), prediction_id = NA, IoU = NA) 

    } else {
      x <- ground_truth[ground_truth$crown_id == polygon_assignment, ]
      # find intersection over union
      d<-data.frame(crown_id = polygon_assignment, prediction_id = polygon_row$crown_id, IoU = as.numeric(IoU(x, y))) # I added the "as.numeric()" here
      d$crown_id<-as.character(d$crown_id)
      d$prediction_id<-as.character(d$prediction_id)
      jaccard_stat[[i]] <- d
    }
  }
  statdf <- suppressWarnings(dplyr::bind_rows(jaccard_stat)) # the error occurs here
  return(statdf)
}

I don't know how it's possible that you did not get this error. Maybe there is something different with my system or R installation. Here is the output of sessioninfo::session_info() on my machine:

sessioninfo::session_info()
#> ─ Session info ───────────────────────────────────────────────────────────────
#>  setting  value                       
#>  version  R version 4.0.5 (2021-03-31)
#>  os       Ubuntu 18.04.5 LTS          
#>  system   x86_64, linux-gnu           
#>  ui       X11                         
#>  language (EN)                        
#>  collate  en_US.UTF-8                 
#>  ctype    en_US.UTF-8                 
#>  tz       Europe/Berlin               
#>  date     2021-05-18                  
#> 
#> ─ Packages ───────────────────────────────────────────────────────────────────
#>  ! package     * version date       lib source        
#>  P backports     1.2.1   2020-12-09 [?] CRAN (R 4.0.3)
#>  P cli           2.5.0   2021-04-26 [?] CRAN (R 4.0.3)
#>  P crayon        1.4.1   2021-02-08 [?] CRAN (R 4.0.3)
#>  P digest        0.6.27  2020-10-24 [?] CRAN (R 4.0.3)
#>  P ellipsis      0.3.2   2021-04-29 [?] CRAN (R 4.0.3)
#>  P evaluate      0.14    2019-05-28 [?] CRAN (R 4.0.2)
#>  P fansi         0.4.2   2021-01-15 [?] CRAN (R 4.0.3)
#>  P fs            1.5.0   2020-07-31 [?] CRAN (R 4.0.2)
#>  P glue          1.4.2   2020-08-27 [?] CRAN (R 4.0.2)
#>  P highr         0.9     2021-04-16 [?] CRAN (R 4.0.5)
#>  P htmltools     0.5.1.1 2021-01-22 [?] CRAN (R 4.0.3)
#>  P knitr         1.33    2021-04-24 [?] CRAN (R 4.0.5)
#>  P lifecycle     1.0.0   2021-02-15 [?] CRAN (R 4.0.3)
#>  P magrittr      2.0.1   2020-11-17 [?] CRAN (R 4.0.3)
#>  P pillar        1.6.0   2021-04-13 [?] CRAN (R 4.0.3)
#>  P pkgconfig     2.0.3   2019-09-22 [?] CRAN (R 4.0.2)
#>  P purrr         0.3.4   2020-04-17 [?] CRAN (R 4.0.2)
#>  P reprex        2.0.0   2021-04-02 [?] CRAN (R 4.0.3)
#>  P rlang         0.4.11  2021-04-30 [?] CRAN (R 4.0.5)
#>  P rmarkdown     2.8     2021-05-07 [?] CRAN (R 4.0.5)
#>  P sessioninfo   1.1.1   2018-11-05 [?] CRAN (R 4.0.0)
#>  P stringi       1.6.1   2021-05-10 [?] CRAN (R 4.0.5)
#>  P stringr       1.4.0   2019-02-10 [?] CRAN (R 4.0.2)
#>  P styler        1.4.1   2021-03-30 [?] CRAN (R 4.0.3)
#>  P tibble        3.1.1   2021-04-18 [?] CRAN (R 4.0.3)
#>  P utf8          1.2.1   2021-03-12 [?] CRAN (R 4.0.3)
#>  P vctrs         0.3.8   2021-04-29 [?] CRAN (R 4.0.3)
#>  P withr         2.4.2   2021-04-18 [?] CRAN (R 4.0.3)
#>  P xfun          0.22    2021-03-11 [?] CRAN (R 4.0.3)
#>  P yaml          2.2.1   2020-02-01 [?] CRAN (R 4.0.2)
#> 
#> [1] /home/leon/Projects/RPackages/NeonTreeEvaluation_package/renv/library/R-4.0/x86_64-pc-linux-gnu
#> [2] /tmp/RtmpPtwYTP/renv-system-library
#> [3] /usr/lib/R/library
#> 
#>  P ── Loaded and on-disk path mismatch.

Created on 2021-05-18 by the reprex package (v2.0.0)

Blank images

The benchmark should allow images with no predictions.

travis build

* checking dependencies in R code ... WARNING
'::' or ':::' imports not declared from:
  ‘sf’ ‘stringr’
count_trees: no visible binding for global variable ‘CHM_height’
count_trees: no visible binding for global variable ‘individualID’
count_trees: no visible binding for global variable ‘eventID’
count_trees: no visible binding for global variable ‘.’

download function does not necessarily work "out of the box"

Dear Mr. Weinstein,

when I tried to execute the code of the Data vignette, I got an error with the download function:

NeonTreeEvaluation::download()
Downloading file to /home/leon/Projects/RPackages/NeonTreeEvaluation_package/renv/library/R-4.0/x86_64-pc-linux-gnu/NeonTreeEvaluation/extdata/NeonTreeEvaluation.zip
trying URL 'https://zenodo.org/api/files/012fcb19-a0e0-4d08-9793-62ba35a4adb6/weecology/NeonTreeEvaluation-1.7.1.zip'
Content type 'application/octet-stream' length 1962013082 bytes (1871.1 MB)
==
downloaded 103.5 MB

Error in download.file(eval_url, destination, mode = "wb") : 
  download from 'https://zenodo.org/api/files/012fcb19-a0e0-4d08-9793-62ba35a4adb6/weecology/NeonTreeEvaluation-1.7.1.zip' failed
In addition: Warning messages:
1: In download.file(eval_url, destination, mode = "wb") :
  downloaded length 108485754 != reported length 1962013082
2: In download.file(eval_url, destination, mode = "wb") :
  URL 'https://zenodo.org/api/files/012fcb19-a0e0-4d08-9793-62ba35a4adb6/weecology/NeonTreeEvaluation-1.7.1.zip': Timeout of 60 seconds was reached

The timeout mentioned at the end of that error message is changeable via options(timeout = x) which was what I used to download the data in the end. To make the download function work "out of the box" the documentation of the download.file function (which is used internally by download) contains a suggestion:

The timeout for many parts of the transfer can be set by the option timeout which defaults to 60 seconds. This is often insufficient for downloads of large files (50MB or more) and so should be increased when download.file is used in packages to do so. Note that the user can set the default timeout by the environment variable R_DEFAULT_INTERNET_TIMEOUT in recent versions of R, so to ensure that this is not decreased packages should use something like

    options(timeout = max(300, getOption("timeout")))

(It is unrealistic to require download times of less than 1s/MB.)

It's not mentioned there but I've read here that it is bad practice to change the options of a user's R session. Maybe something like the following would be appropriate:

timeout_option_backup <- getOption("timeout")
options(timeout = max(2000, getOption("timeout")))

# Call download.file() here

options(timeout = timeout_option_backup)

The 2000 is just a placeholder. It's probably better to determine the size of the to-be-downloaded file beforehand and use that for a more meaningful value.
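
One idiomatic way to guarantee the restore even if the download fails would be on.exit() inside the downloading function (a sketch, not the package's current code):

download <- function() {
  #Raise the timeout for the large zip and restore the user's setting on exit,
  #even when download.file() errors
  old_timeout <- getOption("timeout")
  on.exit(options(timeout = old_timeout), add = TRUE)
  options(timeout = max(2000, old_timeout))

  #download.file(eval_url, destination, mode = "wb") would be called here
}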

load_field_crown documentation copied from load_ground_truth

Hi Ben,

the documentation of load_field_crown seems to be a copy of the documentation of load_ground_truth.

#' Load and overlay ground truth annotations for single plot evaluation
#'
#' load_ground_truth is a wrapper function to get a plot annotation from file, project into geographic coordinates and potentially overlay on RGB data
#' @param plot_name The name of plot as given by the filename (e.g "SJER_021.tif" -> SJER_021).
#' @param show Logical. Whether to plot the ground truth data overlayed on the RGB image.
#' @return A SpatialPolygonsDataFrame of ground truth boxes.
#' @export
#'
load_field_crown <- function(plot_name, show = TRUE) {

#' Load and overlay ground truth annotations for single plot evaluation
#'
#' load_ground_truth is a wrapper function to get a plot annotation from file, project into geographic coordinates and potentially overlay on RGB data
#' @param plot_name The name of plot as given by the filename (e.g "SJER_021.tif" -> SJER_021).
#' @param show Logical. Whether to plot the ground truth data overlayed on the RGB image.
#' @return A SpatialPolygonsDataFrame of ground truth boxes.
#' @export
#'
load_ground_truth <- function(plot_name, show = TRUE) {

Cheers,
Leon

error with download() "[...]extdata/NeonTreeEvaluation.zip' cannot be opened"

The download() function appears to be not working anymore for me:

NeonTreeEvaluation::download()
#> Downloading file to /[...]/renv/library/R-4.1/x86_64-pc-linux-gnu/NeonTreeEvaluation/extdata/NeonTreeEvaluation.zip
#> Warning in download.file(eval_url, destination, mode = "wb"): URL https://
#> zenodo.org/api/files/d5c9f957-f439-4088-bf58-23bf679575d9/weecology/
#> NeonTreeEvaluation-1.8.0.zip: cannot open destfile '/[...]/renv/library/R-4.1/x86_64-pc-
#> linux-gnu/NeonTreeEvaluation/extdata/NeonTreeEvaluation.zip', reason 'No such
#> file or directory'
#> Warning in download.file(eval_url, destination, mode = "wb"): download had
#> nonzero exit status
#> Error in unzip(destination, list = TRUE): zip file '/[...]/renv/library/R-4.1/x86_64-pc-linux-gnu/NeonTreeEvaluation/extdata/NeonTreeEvaluation.zip' cannot be opened

Created on 2021-05-19 by the reprex package (v2.0.0)

I am suspecting that this is related to the extdata directory because I have observed on my machine that this directory is missing after a fresh install of the package. The directory might be missing because there are no longer any files in that directory since the last commit 36fb9ef.

Do you also experience this issue? Maybe the extdata directory can be created manually in download()?
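
A possible sketch of that fix, creating the directory before download.file() is called (the path construction here is an assumption):

#Make sure the extdata directory exists before downloading
extdata_dir <- file.path(system.file(package = "NeonTreeEvaluation"), "extdata")
if (!dir.exists(extdata_dir)) {
  dir.create(extdata_dir, recursive = TRUE)
}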

Sanity check for data args

"RGB" versus "rgb"

> #get RGB image
>   get_data(target_plot,"RGB")
Error in get_data(target_plot, "RGB") : object 'path' not found
> ?get_data
> target_plot
[1] "2018_SJER_3_252000_4106000_image_234"
> #get RGB image
>   get_data(target_plot,"rgb")
[1] "/Library/Frameworks/R.framework/Versions/3.5/Resources/library/NeonTreeEvaluation/extdata/evaluation/RGB//2018_SJER_3_252000_4106000_image_234.tif"

Regarding training images resolution

Hi, I wanted to re-train the existing model used in this package. For that, I understood that training images must be RGB and of size 400 x 400 pixels. But I didn't understand how to determine the spatial resolution when capturing images from Google Earth. I read in the documentation that 0.1 m spatial resolution is standard, so how will I know that the image I am about to save from Google Earth has 0.1 m spatial resolution?
