
s3fs's Issues

undeclared dependency on R version >= 3.6.0

Some functions in the s3fs package use base::trimws() with the whitespace argument. However, that argument was only introduced in R version 3.6.0.

So in older R versions:

foo <- s3_dir_ls("s3://foo/bar")

gives:

Error in trimws(path, which = "right", whitespace = "/") : 
  unused argument (whitespace = "/")
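A backwards-compatible alternative (a sketch, not the package's actual code) is to trim trailing slashes with sub(), which works on all R versions:

```r
# Hedged sketch: equivalent of trimws(path, which = "right", whitespace = "/")
# using a regex, available in every R version.
trim_slash <- function(path) {
  sub("/+$", "", path)
}

trim_slash("s3://foo/bar///")  # "s3://foo/bar"
```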

Initial cran release

Prepare for release:

  • Check current CRAN check results
  • Update NEWS
  • urlchecker::url_check()
  • devtools::check_rhub()
  • devtools::check_win_devel()
  • rhub::check_for_cran()
  • Update cran-comments.md

Submit to CRAN:

  • devtools::submit_cran()
  • Approve email

Wait for CRAN...

  • Accepted 🎉
  • Update Github Release

{progressr} support?

For long-running tasks, it might be helpful to get progress reports along the way, e.g. via {progressr}.

{progressr} support would need to be bundled within the individual functions so that users can make use of it, as outlined in the "developer API" example.
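A minimal sketch of that developer-API pattern, assuming a hypothetical internal loop over objects (the function and argument names here are illustrative, not the package's internals):

```r
library(progressr)

# Hypothetical internal helper: signal progress once per uploaded file.
upload_many <- function(paths) {
  p <- progressr::progressor(along = paths)
  lapply(paths, function(path) {
    # ... upload `path` to S3 here ...
    p(sprintf("uploaded %s", basename(path)))
  })
}

# Users opt in to progress reporting from the outside:
# progressr::with_progress(upload_many(my_paths))
```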

Warning in `s3fs::s3_file_delete()` when deleting file from Backblaze B2

When deleting a file from Backblaze B2 via s3fs::s3_file_delete(), I observe the following warning:

Warning message:
In rbindlist(lapply(resp$Versions, function(v) list(size = v$Size,  :
  Column 3 ['owner'] of item 1 is length 0. This (and 0 others like it) has been filled with NA (NULL for list columns) to make each item uniform.

The file is deleted successfully and the function returns the input path as expected. I didn't test other object storage providers besides Backblaze B2.

The output indicates that the warning is thrown here:

s3fs/R/s3filesystem_class.R

Lines 1048 to 1058 in a905290

df = rbindlist(
  lapply(resp$Versions, function(v)
    list(
      size = v$Size,
      version_id = v$VersionId,
      owner = v$Owner$DisplayName,
      etag = v$ETag,
      last_modified = v$LastModified
    )
  )
)

Since several data.table calls in that file are already wrapped in suppressWarnings(), I submitted #43, which suppresses the above warning as well.

A discussion from dplyr development about how to properly handle cases where data.table::rbindlist() throws this warning is found here.
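An alternative to suppressing the warning (a sketch, assuming the `resp$Versions` structure shown above) is to substitute NA explicitly when the owner field is absent, so every list item has uniform length:

```r
library(data.table)

# Hypothetical response fragment: one version lacks an Owner,
# mimicking what Backblaze B2 returns.
resp <- list(Versions = list(
  list(Size = 10, VersionId = "v1", Owner = list(DisplayName = "alice"),
       ETag = "e1", LastModified = "2023-01-01"),
  list(Size = 20, VersionId = "v2", Owner = list(),
       ETag = "e2", LastModified = "2023-01-02")
))

df <- rbindlist(lapply(resp$Versions, function(v) list(
  size = v$Size,
  version_id = v$VersionId,
  # Fall back to NA when Owner$DisplayName is absent, avoiding the
  # length-0 warning from rbindlist().
  owner = if (length(v$Owner$DisplayName)) v$Owner$DisplayName else NA_character_,
  etag = v$ETag,
  last_modified = v$LastModified
)))
```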

s3fs 0.1.4 cran release

Prepare for release:

  • Check current CRAN check results
  • Update NEWS
  • urlchecker::url_check()
  • devtools::check_rhub()
  • devtools::check_win_devel()
  • Update cran-comments.md

Submit to CRAN:

  • devtools::submit_cran()
  • Approve email

Wait for CRAN...

  • Accepted 🎉
  • Update Github Release

cran release 0.1.3

Prepare for release:

  • Check current CRAN check results
  • Update NEWS
  • urlchecker::url_check()
  • devtools::check_rhub()
  • devtools::check_win_devel()
  • Update cran-comments.md

Submit to CRAN:

  • devtools::submit_cran()
  • Approve email

Wait for CRAN...

  • Accepted 🎉
  • Update Github Release

Inconsistent result of s3fs::s3_dir_create with fs::dir_create

When the directory already exists, fs::dir_create ignores it and returns the path(s) to the directory(ies), while s3fs::s3_dir_create returns the string "Directory already exists in AWS S3".

If the goal is to make code written with the fs package portable to s3fs, then the behavior of the functions should match as closely as possible.

Also, returning a string is not idiomatic in R; a condition should be signalled instead, whether an error, a warning, or a message.
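A sketch of the fs-compatible behavior being requested (the wrapper name is hypothetical, and the message text is illustrative):

```r
# Desired semantics: signal a message and return the path invisibly,
# matching fs::dir_create() behavior for pre-existing directories.
s3_dir_create_compat <- function(path) {
  if (s3fs::s3_dir_exists(path)) {
    message("Directory already exists in AWS S3: ", path)
  } else {
    s3fs::s3_dir_create(path)
  }
  invisible(path)
}
```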

Files without extensions will be created as a `dir/file` instead of `file`

Hi!

First, thanks a lot for this package. It has made my interactions with S3 backends much easier compared to using {paws}.

I noticed the following niche issue when uploading a file without extension using s3_file_upload():

Assuming a remote path of $BUCKET/my/file, the resulting object is created as $BUCKET/my/file/file instead of $BUCKET/my/file.
When using paws.storage::s3()$put_object() directly, this does not happen: $BUCKET/my/file is created as intended.
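As a workaround (a sketch, assuming credentials are already configured; bucket, key, and local paths are placeholders), the upload can be done directly with paws.storage, which applies no extension-based path logic:

```r
library(paws.storage)

s3 <- paws.storage::s3()

# Upload the local file to the exact key, no extension required.
local <- "local/file"
s3$put_object(
  Bucket = "my-bucket",
  Key    = "my/file",
  Body   = readBin(local, what = "raw", n = file.size(local))
)
```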

support for encryption?

One nice thing about aws.s3::s3save is that it supports server side encryption, for example:

aws.s3::s3save(
  letters,
  object = "letters.RData",
  bucket = my_bucket,
  opts = list(headers = c("x-amz-server-side-encryption" = "aws:kms"))
)

Is it possible that s3fs::s3_file_upload, s3fs::s3_file_create, s3fs::s3_file_touch etc could support this as well? I looked through the help pages and it wasn't obvious...
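Until s3fs exposes this, a hedged workaround is to upload with paws.storage directly; the ServerSideEncryption parameter comes from the S3 PutObject API, and the bucket name here is a placeholder:

```r
library(paws.storage)

s3 <- paws.storage::s3()

# Save the object to a temporary .RData file, then upload it with
# server-side KMS encryption requested.
tmp <- tempfile(fileext = ".RData")
save(letters, file = tmp)

s3$put_object(
  Bucket = "my-bucket",
  Key    = "letters.RData",
  Body   = readBin(tmp, what = "raw", n = file.size(tmp)),
  ServerSideEncryption = "aws:kms"
)
```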

Invalid copy source encoding

I have a file which is on S3 which basically looks like this 202408050803-lum_v2_gs_nl-%13668%-1002120667.wav. If I want to copy it to another folder, I'm getting 'invalid source encoding' due to that '%' character.

> s3_file_copy(path = info$file[i], new_path = info$new_path[i], overwrite = TRUE)
Error: InvalidArgument (HTTP 400). Invalid copy source encoding
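A likely cause is that the copy-source value sent to S3 must be URL-encoded, so a literal '%' in the key needs to become '%25'. A hedged workaround using paws.storage directly (bucket and destination names are placeholders):

```r
library(paws.storage)

s3  <- paws.storage::s3()
key <- "202408050803-lum_v2_gs_nl-%13668%-1002120667.wav"

# Encode only the key portion; reserved = TRUE escapes '%' (and other
# reserved characters) so the CopySource header is valid.
src <- paste0("my-bucket/", utils::URLencode(key, reserved = TRUE))

s3$copy_object(
  Bucket     = "my-bucket",
  Key        = paste0("archive/", key),
  CopySource = src
)
```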

s3fs cran 0.1.5 release

Prepare for release:

  • Check current CRAN check results
  • Update NEWS
  • urlchecker::url_check()
  • devtools::check_rhub()
  • devtools::check_win_devel()
  • Update cran-comments.md

Submit to CRAN:

  • devtools::submit_cran()
  • Approve email

Wait for CRAN...

  • Accepted 🎉
  • Update Github Release

I can no longer save files to a Minio bucket: SerializationError (HTTP 400)

Hi,
Thank you for the very useful package!

First, let me be clear: the error is probably NOT a bug in the package, but on my end.
I would just like some insight into the cause of the error. From what I read on the Web, it could be caused by a firewall.

  • I have used s3fs without a glitch with self-hosted Minio for months.
  • Then, suddenly, approx. 3 weeks ago, I have been unable to save any file into buckets.

The error is:

Error: SerializationError (HTTP 400). failed to read from query HTTP response body

I have checked everything:

  • Minio secrets
  • my code which did not change
  • the type of file involved
  • I have sanitized the filename to save
  • curl -I http://[my_minio_server_IP]:9000/minio/health/live returns OK

To no avail.
Please help,
S.

[improvement] Get a simple return on `s3fs::s3_file_system`

It would be practical for the connection function to have a simple return value, instead of the full page of connection details it currently prints.
Maybe another enclosing function would do the trick:

validity <- s3fs::is_valid(s3fs::s3_file_system(
  aws_access_key_id = Sys.getenv("MINIO_KEY"),
  aws_secret_access_key = Sys.getenv("MINIO_PWD"),
  endpoint = glue::glue("http://{theenv$miosrv}")
))
(validity)
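Lacking such a helper, a sketch of what it could look like (is_valid is hypothetical and not part of the s3fs API; it simply tries a cheap listing call and reports success or failure):

```r
# Hypothetical validity check: TRUE if the configured connection can
# list the top level, FALSE otherwise.
is_valid <- function() {
  tryCatch({
    s3fs::s3_dir_ls()
    TRUE
  }, error = function(e) FALSE)
}

s3fs::s3_file_system(
  aws_access_key_id = Sys.getenv("MINIO_KEY"),
  aws_secret_access_key = Sys.getenv("MINIO_PWD"),
  endpoint = glue::glue("http://{theenv$miosrv}")
)
validity <- is_valid()
validity
```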
