grunt-aws-s3

Interact with AWS S3 using AWS SDK

Warning

Versions 0.4.0 to 0.5.0 have a bug where options.params is ignored.
Version 0.8.0 doesn't actually support Node 0.8.x and 0.9.x.

It's not recommended to use concurrencies over 100 as you may run into EMFILE/ENOTFOUND errors.

Getting Started

This plugin requires Grunt ~0.4.0

If you haven't used Grunt before, be sure to check out the Getting Started guide, as it explains how to create a Gruntfile as well as install and use Grunt plugins. Once you're familiar with that process, you may install this plugin with this command:

  npm install grunt-aws-s3 --save-dev

Once the plugin has been installed, it may be enabled inside your Gruntfile with this line of JavaScript:

  grunt.loadNpmTasks('grunt-aws-s3');

Make sure that your AWS IAM policy allows s3:GetObject, s3:GetObjectAcl, s3:ListBucket, s3:PutObject, and s3:PutObjectAcl on everything under the buckets you plan to deploy to. This task sets ACL properties, so you can easily find yourself in a situation where tools like s3cmd have no problem deploying files to your bucket, while this task fails with "AccessDenied".
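As a rough sketch, a policy along these lines should cover the actions listed above (the bucket name is a placeholder; note that s3:ListBucket applies to the bucket ARN itself, while the object-level actions need the /* resource). If you also use the delete action, you will likely need s3:DeleteObject as well:

  {
    "Version": "2012-10-17",
    "Statement": [
      {
        "Effect": "Allow",
        "Action": [
          "s3:GetObject",
          "s3:GetObjectAcl",
          "s3:ListBucket",
          "s3:PutObject",
          "s3:PutObjectAcl"
        ],
        "Resource": [
          "arn:aws:s3:::my-bucket",
          "arn:aws:s3:::my-bucket/*"
        ]
      }
    ]
  }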

The "aws_s3" task

Options

options.accessKeyId (required)

Type: String

The AWS accessKeyId. You can load it via JSON as shown in the example or use the AWS_ACCESS_KEY_ID environment variable.

options.secretAccessKey (required)

Type: String

The AWS secretAccessKey. You can load it via JSON as shown in the example or use the AWS_SECRET_ACCESS_KEY environment variable.
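For example, a minimal sketch that wires the environment variables in explicitly (nothing here is specific to this plugin; the values simply come from process.env):

  aws_s3: {
    options: {
      // assumes AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY are set in the environment
      accessKeyId: process.env.AWS_ACCESS_KEY_ID,
      secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY
      // bucket, region, etc. as usual
    }
  }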

options.awsProfile

Type: String

Useful if you have a credentials profile stored in ~/.aws/credentials.
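A minimal sketch, assuming a [work] profile exists in ~/.aws/credentials (the profile and bucket names are placeholders):

  aws_s3: {
    options: {
      awsProfile: 'work',    // read keys from the 'work' profile instead of passing them explicitly
      bucket: 'my-bucket',
      region: 'eu-west-1'
    }
  }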

options.sessionToken

Type: String

The AWS sessionToken. You can load it via JSON as shown in the example or use the AWS_SESSION_TOKEN environment variable.

options.bucket (required)

Type: String

The AWS bucket name you want to upload to.

options.endpoint

Type: String

The AWS endpoint you'd like to use. If not specified, it is derived from the region.

options.s3ForcePathStyle

Type: Boolean
Default: false

Force the use of path-style URLs (http://endpoint/bucket/path) instead of the default host-style URLs (http://bucket.endpoint/path).
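A hedged sketch for pointing the task at an S3-compatible service (the hostname is illustrative):

  options: {
    endpoint: 's3.my-compatible-host.example.com',  // custom endpoint instead of the region default
    s3ForcePathStyle: true                          // use http://endpoint/bucket/path URLs
  }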

options.region

Type: String
Default: US Standard

The AWS region.

If not specified, uploads go to the default 'US Standard' region.

options.maxRetries

Type: Integer

The maximum number of retries to attempt for a request.

options.sslEnabled

Type: Boolean

Whether to enable SSL for requests or not.

options.httpOptions

Type: Object

A set of options to pass to the low-level HTTP request. The list of options can be found in the documentation.

options.signatureVersion

Type: String

Change the signature version to sign requests with. Possible values are: 'v2', 'v3', 'v4'.

options.access

Type: String
Default: public-read

The ACL you want to apply to ALL the files that will be uploaded. The ACL values can be found in the documentation.

options.uploadConcurrency

Type: Integer
Default: 1

Number of uploads in parallel. By default, there's no concurrency; the value must be > 0. Note: this option used to be called concurrency; that name has been deprecated but remains backwards compatible until 1.0.0.

options.downloadConcurrency

Type: Integer
Default: 1

Number of downloads in parallel. By default, there's no concurrency; the value must be > 0.

options.copyConcurrency

Type: Integer
Default: 1

Number of copies in parallel. By default, there's no concurrency. Must be > 0.

options.params

Type: Object

A hash of the params you want to apply to the files. Useful, for instance, to set ContentEncoding to gzip or a CacheControl value. The list of parameters can be found in the documentation. params will apply to all the files in the target; however, the params option in the file list takes priority over it.
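For instance, a target-wide params hash might look like this (the values are just examples):

  options: {
    params: {
      ContentEncoding: 'gzip',        // applied to every file in the target
      CacheControl: 'max-age=3600'
    }
  }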

options.mime

Type: Object

The MIME type of every file is determined by a MIME lookup using node-mime. If you want to override it, you can use this option object. The keys are the local file paths and the values are the MIME types.

  {
    'path/to/file': 'application/json',
    'path/to/other/file': 'application/gzip'
  }

You need to specify the full path of the file, including the cwd part.
The mime hash has absolute priority over what has been set in options.params and the params option of the file list.

options.stream

Type: Boolean
Default: false

Allows the task to use streams instead of buffers for uploads and downloads. The option can either be turned on for the whole subtask or for a specific file object like so:

  {'action': 'upload', expand: true, cwd: 'dist/js', src: ['**'], stream: true}

options.debug

Type: Boolean
Default: false

This will do a "dry run". It will not upload anything to S3 but you will get the full report just as you would in normal mode. Useful to check what will be changed on the server before actually doing it. Unless one of your actions depends on another (like download following a delete), the report should be accurate.
listObjects requests will still be made to list the content of the bucket.

options.differential

Type: Boolean
Default: false

listObjects requests will be made to list the content of the bucket, then they will be checked against their local file equivalent (if it exists) using MD5 (and sometimes date) comparisons. This means different things for different actions:

  • upload: will only upload the files which either don't exist on the bucket or have a different MD5 hash
  • download: will only download the files which either don't exist locally or have a different MD5 hash and are newer.
  • delete: will only delete the files which don't exist locally

The option can either be specified for the whole subtask or for a specific file object like so:

  {'action': 'upload', expand: true, cwd: 'dist/js', src: ['**'], differential: true}

In order to be able to compare to the local file names, it is necessary for dest to be a finished path (e.g. directory/ instead of just dir) as the comparison is done between the file names found in cwd and the files found on the server under dest. If you want to compare the files in the directory scripts/ in your bucket and the files in the corresponding local directory dist/scripts/, you need to have something like:

  {cwd: 'dist/scripts/', dest: 'scripts/', 'action': 'download', differential: true}

options.overwrite

Type: Boolean
Default: true

By setting this option to false, you can prevent files on the server from being overwritten. The task will scan the whole bucket first and stop if it encounters a path that is about to be overwritten.

options.displayChangesOnly

Type: Boolean
Default: false

If enabled, only lists files that have changed when performing a differential upload.

options.progress

Type: String
Default: dots

Specify the output format for task progress. Valid options are:

  • dots: will display one dot for each file, green for success, yellow for failure
  • progressBar: will display a progress bar with current/total count and completion eta
  • none: will suppress all display of progress

options.changedFiles

Type: String
Default: aws_s3_changed

This task exports the list of uploaded files to a variable on the grunt config so it can be used by another task (grunt-invalidate-cloudfront, for instance). By default it's accessible via grunt.config.get('aws_s3_changed'), and this option allows you to change the variable name.
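As an illustrative sketch, another task registered in the same Gruntfile could read that list after aws_s3 has run (the task name is hypothetical):

  grunt.registerTask('list_changed', 'Log files uploaded by aws_s3', function () {
    // 'aws_s3_changed' is the default key; adjust it if you set options.changedFiles
    var changed = grunt.config.get('aws_s3_changed') || [];
    grunt.log.writeln(changed.length + ' file(s) changed');
  });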

options.gzipRename (DEPRECATED - see options.compressionRename)

Type: String
Default: ''

When using the gzip abilities of the task (see below), you can use this option to change the extensions of the files uploaded to S3. Values can be:

  • gz: will replace the compound extension with .gz (e.g. build.css.gz -> build.gz)
  • ext: will keep the original extension and remove .gz (e.g. build.css.gz -> build.css)
  • swap: will swap the two extensions (e.g. build.css.gz -> build.gz.css)

This only works with the gzip abilities of the task which is based on compound extensions like these: .css.gz.

options.compressionRename

Type: String
Default: ''

When using the compression abilities of the task (see below), you can use this option to change the extensions of the files uploaded to S3. Values can be:

  • compress: will replace the compound extension with the compression specific extension (e.g. build.css.gz -> build.gz)
  • ext: will keep the original extension and remove the compression specific extension (e.g. build.css.gz -> build.css)
  • swap: will swap the two extensions (e.g. build.css.br -> build.br.css)

This only works with the compression abilities of the task which is based on compound extensions like these: .css.gz.

options.compressionTypes

Type: Object
Default: {'.br': 'br', '.gz': 'gzip'}

When using the compression abilities of the task (see below), you can use this option to control which extensions are recognized as compression extensions and which encoding each of them maps to. The option is an object that maps extensions to content encoding values.
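For example, to also treat a hypothetical .zz extension as deflate-encoded while keeping the defaults:

  options: {
    compressionTypes: {
      '.br': 'br',
      '.gz': 'gzip',
      '.zz': 'deflate'   // hypothetical extra mapping, shown only to illustrate the shape of the option
    }
  }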

compression

This task doesn't compress anything for you. The grunt-contrib-compress task is here for that and is much more suitable.
However, uploading compressed files is annoying because you need to set ContentType and ContentEncoding correctly for each of the compressed files. As of version 0.12.0, this plugin will try to guess if a file needs its ContentType and ContentEncoding changed, relying on a convention rather than configuration (inspired by hapi).

The convention is that a compressed file must have a compression specific extension, e.g. .gz, in its extension as well as its original extension (e.g. .css, .js) like so: build.js.gz.
In this case the plugin will apply the ContentType from build.js to build.js.gz and set the ContentEncoding to gzip.

If for some reason you're not following this convention (e.g. you're naming your files build.gz), you can force the ContentType through the mime option of the plugin which still has priority. Provided the extension is still .gz, the ContentType will be set for you. Alternatively, you can use the compressionRename option which will be able to rename the files on the fly as they're uploaded to S3.
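A rough sketch of the intended workflow, pairing grunt-contrib-compress with this task (paths, target and bucket names are illustrative):

  compress: {
    scripts: {
      options: { mode: 'gzip' },
      // produces e.g. dist/js/app.js.gz next to the originals
      files: [{ expand: true, cwd: 'src/js', src: ['**/*.js'], dest: 'dist/js', ext: '.js.gz' }]
    }
  },
  aws_s3: {
    production: {
      options: { bucket: 'my-bucket' },
      // by convention, app.js.gz is uploaded with the ContentType of app.js and ContentEncoding: gzip
      files: [{ expand: true, cwd: 'dist/js', src: ['**/*.js.gz'], dest: 'js/' }]
    }
  }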

Actions

This Grunt task supports four modes of interaction with S3: upload, download, delete and copy. Every action that you specify is executed serially, one after the other. Consecutive upload actions are grouped together.

You choose the action by specifying the key action in the file hash like so:

  {'action': 'upload', expand: true, cwd: 'dist/js', src: ['**'], dest: 'app/js/'}

By default, the action is upload.

upload

The upload action uses the newest Grunt file format, allowing you to take advantage of the expand and filter options.
It is the default action, so you can omit action: 'upload' if you want a cleaner look. Don't forget to set a dest (use dest: '/' for the root).

Lastly, don't forget to set expand: true when you use the cwd property, or Grunt just ignores it. This is explained in Grunt's "Building the files object dynamically" documentation.

  files: [
    {expand: true, cwd: 'dist/staging/scripts', src: ['**'], dest: 'app/scripts/'},
    {expand: true, cwd: 'dist/staging/styles', src: ['**'], dest: 'app/styles/', action: 'upload'}
  ]

You can also include a params hash which will override the options.params one. For example:

  
  params: {
    ContentType: 'application/json',
    CacheControl: '3000'
  }

  // ...

  files: [
    {expand: true, cwd: 'dist/staging/scripts', src: ['**'], dest: 'app/scripts/', params: {CacheControl: '2000'}},
    {expand: true, cwd: 'dist/staging/styles', src: ['**'], dest: 'app/styles/'}
  ]

This will yield the following params, which will eventually be applied:

  {
    ContentType: 'application/json',
    CacheControl: '2000'
  }

  // AND

  {
    ContentType: 'application/json',
    CacheControl: '3000'
  }

The options.mime hash, however, has priority over the ContentType. So if the hash looked like this:

  {
    'dist/staging/styles/LICENCE': 'text/plain'
  }

The ContentType eventually applied to dist/staging/styles/LICENCE would be text/plain even though we had a ContentType specified in options.params or in params of the file.

When the differential option is enabled, it will only upload the files which either don't exist on the bucket or have a different MD5 hash.

download

The download action requires a cwd, a dest and no src like so:

  {cwd: 'download/', dest: 'app/', action: 'download'}

The dest is used as the Prefix in the listObjects command to find the files on the server (which means it can be a path or a partial path). The cwd is used as the root folder to write the downloaded files. The inner folder structure will be reproduced inside that folder.

If you specify '/' for dest, the whole bucket will be downloaded. Buckets with more than 1000 objects are handled automatically.
If you specify 'app', all paths starting with 'app' will be targeted (e.g. 'app.js', 'app/myapp.js', 'app/index.html', 'app backup/donotdelete.js') but the others will be left alone (e.g. 'my app/app.js', 'backup app/donotdelete.js').

When the differential option is enabled, it will only download the files which either don't exist locally or have a different MD5 hash and are newer.

Note: if dest is a file, it will be downloaded to cwd + file name. If dest is a directory ending with /, its content will be downloaded to cwd + file names or directories found in dest. If dest is neither a file nor a directory, the files found using it as a prefix will be downloaded to cwd + paths found using dest as the prefix.

The download action can also take an exclude option like so:

  {cwd: 'download/', dest: 'app/', action: 'download', exclude: "**/.*"}

The value is a globbing pattern that can be consumed by grunt.file.isMatch. You can find more information on globbing patterns on Grunt's doc. In this example, it will exclude all files starting with a . (they won't be downloaded). If you want to reverse the exclude (that is, only what will match the pattern will be downloaded), you can use the flipExclude option like so:

  {cwd: 'download/', dest: 'app/', action: 'download', exclude: "**/.*", flipExclude: true}

In this example, only the files starting with a . will be downloaded.

Example:

  {cwd: 'download/', dest: 'app/', action: 'download'} // app/myapp.js downloaded to download/myapp.js
  {cwd: 'download/', dest: 'app/myapp.js', action: 'download'} // app/myapp.js downloaded to download/myapp.js
  {cwd: 'download/', dest: 'app', action: 'download'} // app/myapp.js downloaded to download/app/myapp.js

delete

The delete action just requires a dest; there's no need for a src:

  {dest: 'app/', 'action': 'delete'}

The dest is used as the Prefix in the listObjects command to find the files on the server (which means it can be a path or a partial path).

If you specify '/', the whole bucket will be wiped. Buckets with more than 1000 objects are handled automatically.
If you specify 'app', all paths starting with 'app' will be targeted (e.g. 'app.js', 'app/myapp.js', 'app/index.html', 'app backup/donotdelete.js') but the others will be left alone (e.g. 'my app/app.js', 'backup app/donotdelete.js').

When the differential option is enabled, it will only delete the files which don't exist locally. It also requires a cwd key with the path to the local folder to check against.

Please, be careful with the delete action. It doesn't forgive.

The delete action can also take an exclude option like so:

  {dest: 'app/', 'action': 'delete', exclude: "**/.*"}

The value is a globbing pattern that can be consumed by grunt.file.isMatch. You can find more information on globbing patterns on Grunt's doc. In this example, it will exclude all files starting with a . (they won't be deleted). If you want to reverse the exclude (that is, only what will match the pattern will be deleted), you can use the flipExclude option like so:

  {dest: 'app/', 'action': 'delete', exclude: "**/.*", flipExclude: true}

In this example, only the files starting with a . will be deleted.

dest is the folder on the bucket that you want to target. At the moment, a globbing pattern shouldn't go in src (which would reference local files) but in exclude. exclude takes one globbing pattern and can be "flipped" so that it becomes "delete everything that matches this pattern" rather than "don't delete anything that matches this pattern".

If you use differential, you need to give a cwd, which indicates which local folder dest refers to. In that case, differential will only delete the files on AWS which don't exist locally (think of it as cleaning up after you've renamed some assets, for instance).
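Putting that together, a minimal sketch of a differential clean-up (paths are illustrative):

  // deletes objects under scripts/ on the bucket that no longer exist locally in dist/scripts/
  {cwd: 'dist/scripts/', dest: 'scripts/', action: 'delete', differential: true}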

copy

The copy action just requires a src and a dest, like so:

  {src: 'app/', dest: 'copy/', 'action': 'copy'}

The src is used as the Prefix in the listObjects command to find the files on the server (which means it can be a path or a partial path). It will then copy those objects to dest.

The copy action can also take an exclude option like so:

  {src: 'app/', dest: 'copy/', 'action': 'copy', exclude: "**/.*"}

The value is a globbing pattern that can be consumed by grunt.file.isMatch. You can find more information on globbing patterns on Grunt's doc. In this example, it will exclude all files starting with a . (they won't be copied). flipExclude also works.

Usage Examples

The example loads the AWS credentials from a JSON file (DO NOT forget to exclude it from your commits).

  {
    "AWSAccessKeyId": "AKxxxxxxxxxx",
    "AWSSecretKey": "super-secret-key"
  }
aws: grunt.file.readJSON('aws-keys.json'), // Read the file

aws_s3: {
  options: {
    accessKeyId: '<%= aws.AWSAccessKeyId %>', // Use the variables
    secretAccessKey: '<%= aws.AWSSecretKey %>', // You can also use env variables
    region: 'eu-west-1',
    uploadConcurrency: 5, // 5 simultaneous uploads
    downloadConcurrency: 5 // 5 simultaneous downloads
  },
  staging: {
    options: {
      bucket: 'my-wonderful-staging-bucket',
      differential: true, // Only uploads the files that have changed
      gzipRename: 'ext' // when uploading a gz file, keep the original extension
    },
    files: [
      {dest: 'app/', cwd: 'backup/staging/', action: 'download'},
      {src: 'app/', dest: 'copy/', action: 'copy'},
      {expand: true, cwd: 'dist/staging/scripts/', src: ['**'], dest: 'app/scripts/'},
      {expand: true, cwd: 'dist/staging/styles/', src: ['**'], dest: 'app/styles/'},
      {dest: 'src/app', action: 'delete'},
    ]
  },
  production: {
    options: {
      bucket: 'my-wonderful-production-bucket',
      params: {
        ContentEncoding: 'gzip' // applies to all the files!
      },
      mime: {
        'dist/assets/production/LICENCE': 'text/plain'
      }
    },
    files: [
      {expand: true, cwd: 'dist/production/', src: ['**'], dest: 'app/'},
      {expand: true, cwd: 'assets/prod/large', src: ['**'], dest: 'assets/large/', stream: true}, // enable stream to allow large files
      {expand: true, cwd: 'assets/prod/', src: ['**'], dest: 'assets/', params: {CacheControl: '2000'}},
      // CacheControl only applied to the assets folder
      // LICENCE inside that folder will have ContentType equal to 'text/plain'
    ]
  },
  clean_production: {
    options: {
      bucket: 'my-wonderful-production-bucket',
      debug: true // Doesn't actually delete but shows log
    },
    files: [
      {dest: 'app/', action: 'delete'},
      {dest: 'assets/', exclude: "**/*.tgz", action: 'delete'}, // will not delete the tgz
      {dest: 'assets/large/', exclude: "**/*copy*", flipExclude: true, action: 'delete'}, // will delete everything that has copy in the name
    ]
  },
  download_production: {
    options: {
      bucket: 'my-wonderful-production-bucket'
    },
    files: [
      {dest: 'app/', cwd: 'backup/', action: 'download'}, // Downloads the content of app/ to backup/
      {dest: 'assets/', cwd: 'backup-assets/', exclude: "**/*copy*", action: 'download'}, // Downloads everything which doesn't have copy in the name
    ]
  },
  secret: {
    options: {
      bucket: 'my-wonderful-private-bucket',
      access: 'private'
    },
    files: [
      {expand: true, cwd: 'secret_garden/', src: ['*.key'], dest: 'secret/'},
    ]
  }
},

Todos

  • Better testing (params, sync, etc.)

Release History

Full changelog.

grunt-aws-s3's Issues

Uploading empty dir breaks upload

This isn't a major problem, but I have a task called "sync" which first does an upload and then does a download, both with differential: true. The idea is to keep local and remote in sync constantly.

grunt.registerTask('sync', ['aws_s3:resourcesUpload', 'aws_s3:resourcesDownload']);

However, if I try to run upload first and the dir is empty (aka just cloned the project and I'm trying to get resources), the upload task will fail without any error/warning, and then my download task won't continue.

Probably it's a better idea to have an "initial" task that gets run after git clone/npm install, but it might be a good idea to allow this to fail more gracefully so that the next task in the list continues

IAM Permissions

What are the required IAM permissions for S3 in order to use this task and upload files to S3?

Having the following permissions enabled results in a "Fatal error: Failed to list content of bucket hello-world"

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Stmt1401233450000",
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:GetObjectAcl",
                "s3:ListBucket",
                "s3:PutObject",
                "s3:PutObjectAcl"
            ],
            "Resource": [
                "arn:aws:s3:::*/*"
            ]
        }
    ]
}

Content-Encoding gzip meta-tag being added to non-gzip files

This only started occurring when I added the 'CacheControl' param.

files: [
          { expand: true, dest: 'recipe_decorator/', cwd: 'dist/', src: ['decorator.*', 'mgd.css.gz', 'mgd.css'], action: 'upload', params: { 'CacheControl': 'max-age=6000'} }
        ]

Every file uploaded to S3 was tagged with Content-Encoding gzip.

Separating out the gzip files corrected the problem, but it's not ideal:

files: [
          { expand: true, dest: 'recipe_decorator/', cwd: 'dist/', src: ['decorator.min.js', 'mgd.css'], action: 'upload', params: { 'CacheControl': 'max-age=6000'} },
          { expand: true, dest: 'recipe_decorator', cwd: 'dist/', src: ['mgd.css.gz', 'decorator.min.js.gz'], action: 'upload', params: { 'CacheControl': 'max-age=6000'} }
        ]

Do not require accesskey/secretaccesskey

I'd like to use IAM policy to control access to s3 buckets instead of shipping secrets around. Assuming aws-sdk supports IAM (I think it should...) I should be able to use this library without specifying keys.

Issue deleting bucket files with suggested policy actions

I think this is a documentation issue. When setting up the policies for the task, I had trouble with the delete command not working, getting the following error:

Running "s3Deploy:staging" (s3Deploy) task
Deleting the content of https://s3-eu-west-1.amazonaws.com/mybucket-name/styles/
Fatal error: Failed to list content of bucket mybucket-name
AccessDenied: Access Denied

Through trial and error, and having read @coop182's blog post on getting started, I've ended up with this IAM policy in AWS:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1424411733000",
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::mybucket-name",
        "arn:aws:s3:::mybucket-name/*"
      ]
    }
  ]
}

Note the 2 resources with and without the /*. After having limited the actions to what your readme said and having added DeleteObject, this is what fixed the errors for me. Am I doing something wrong?

listObjects – Not properly recursively fetching a list when there are more than 1000 items

This may be special to my use case, but the initial call to listObjects will fetch 1000 items, and then it assumes list.Marker will be set and recursively calls listObjects to fetch the next set of items. However, on this initial call list.Marker is always null even though list.IsTruncated is true. I'm attempting to debug by changing the recursive logic to be based on the IsTruncated field, but wanted to log it here.

Going off of this Stack Overflow question: http://stackoverflow.com/questions/9437581/node-js-amazon-s3-how-to-iterate-through-all-files-in-a-bucket

Display only changed files in differential upload

Would it be possible to have an option to display only the files that have changed when doing a differential upload? The summary at the end is nice, but I have to dig through thousands of lines of "local file == remote file" to find the ones that have changed.

Incorrect ContentType set on CSS GZipped files uploaded to AWS

There was an issue that my company/team experienced last week when using this module as part of our grunt deployment process, where it was setting the wrong Content-Type header for CSS files uploaded to AWS. The value should be "text/css" but it was being set to "application/javascript; charset=utf-8".

After looking into this I've discovered this issue occurred at some point after 0.12.1, which we were using previously. I had recently updated npm modules prior to that deploy and was using the most recent version, which has this issue.

I've not yet looked to see what may have caused the issue.

Specifying `ContentEncoding: "gzip"` causes content in subfolders to not be found

Seems like specifying ContentEncoding: "gzip" causes files to not be found. Could that be a bug, or am I using this wrong? I certainly could be--I'm new to the deployment/optimization stuff.

But if I remove that property, it all works, and if I put it back on, it stops working.

Live example (Chrome's net panel shows the load failure as 'failed' and '200 OK'):
http://tidepool.co.s3-website-us-west-1.amazonaws.com/

Also noted here:
http://stackoverflow.com/questions/18456535/aws-s3-returns-200ok-parser-fails-if-contentencoding-gzip

Move Action

I came here to look up the documentation for a "move" action, but realized that there isn't one. Would it be difficult to add one?

We have a need to move the current files, in a directory we are uploading to, into a versioned "roll back" directory.

UnexpectedParameter error message when downloading

Whenever I do a download using 7.0 or 7.1, I get a fatal error on the download and the following error message:

UnexpectedParameter: Unexpected key 'LastModified' found in params

I've tried adding params, just to see if there was some issue where it was required to be set even on a download, but that didn't fix it.

I installed 6.0, and things work just fine for me there.

Sometimes errors out on a timeout.

Running "aws_s3:staging" (aws_s3) task
Uploading to https://s3-ap-southeast-1.amazonaws.com/client-connect-win/
................................................................Fatal error: Failed to upload src/img/bg.png with bucket client-connect-win
Error: Your socket connection to the server was not read from or written to within the timeout period. Idle connections will be closed.
grunt returned exit code 1
action grunt failed

My s3 uploads fail intermittently because of this on this config:

aws_s3: {
      options: {
        accessKeyId: '<%= aws.key %>',
        secretAccessKey: '<%= aws.secret %>',
        region: "<%= aws.region %>",
        uploadConcurrency: 300
      },
      staging: {
        options: {
          bucket: "<%= aws.staging.bucket %>"
        },
        files: [
          {
            expand: true,
            cwd: 'src/',
            src: ['**']
          }
        ]
      }
    },

The file it decided to error out on wasn't even particularly big, only 5 KB. Any idea how I can avoid that?

Fatal error: setImmediate is not defined

When I execute the upload I receive this message:

Fatal error: setImmediate is not defined

I'm using the latest version available: 0.8.0

My grunt version:
grunt-cli v0.1.9
grunt v0.4.2

Why is the ContentType header set to the file name?

I've got the following grunt file:

grunt.initConfig({
    aws: grunt.file.readJSON('aws-credentials.json'), // Read the file
    pkg: grunt.file.readJSON('package.json'),
    aws_s3: {
      options: {
        accessKeyId: '<%= aws.AWSAccessKeyId %>',
        secretAccessKey: '<%= aws.AWSSecretKey %>',
        region: 'eu-west-1',
      },
      clean_production: {
        options: {
          bucket: 'example.com',
        },
        files: [
          {dest: '/', action: 'delete'},
          ],
      },
      production: {
        options: {
          bucket: 'example.com',
          params: {
            ContentEncoding: 'gzip', // applies to all the files!
          }
        },
        files: [
          {expand: true, cwd: 'build/', src: ['**'], dest: ''},
          ],
      }
    },

Grunt works fine... but for some reason, when I inspect the metadata in my S3 panel after the push, I see the file path there instead of the correct ContentType:

(screenshot of the S3 metadata panel omitted)

I've got "grunt-aws-s3": "^0.10.1", in my package.json

Does not delete from a bucket

Got the following task:

clean_dev: {
  options: { bucket: 'my.brand.new.bucket', debug: false },
  files: [{ dest: 'js', action: 'delete' }]
}

In the command window it lists all the JS files in the js folder and states that 4/4 objects were deleted from my.brand.new.bucket. But in fact all the files persist.

action: 'delete' - Fatal error: Failed to list content of bucket <bucket> NoSuchKey: The specified key does not exist.

I have this task working perfectly for uploading, but I can't seem to be able to delete files from my bucket. I get this error when running grunt aws_s3:clean:

Fatal error: Failed to list content of bucket <bucket>
NoSuchKey: The specified key does not exist.

Here's my task config:

aws: grunt.file.readJSON('aws-keys.json'),

aws_s3: {
  options: {
    accessKeyId: '<%= aws.AWSAccessKeyId %>',
    secretAccessKey: '<%= aws.AWSSecretKey %>',
    region: 'eu-west-1',
    uploadConcurrency: 5,
    downloadConcurrency: 5,
    bucket: '<bucket>'
  },
  deploy: {
    files: [{
      expand: true, 
      cwd: 'dist', 
      src: [
        '**/*'
      ], 
      dest: ''
    }]
  },
  clean: {
    options: {
      debug: true
    },
    files: [{
      dest: '/', 
      action: 'delete'
    }]
  }
}

warning: Recursive process.nextTick detected.

Full error is:

(node) warning: Recursive process.nextTick detected. This will break in the next version of node. Please use setImmediate for recursive deferral.

I'm getting this error about 50% of the time on the staging_assets task when uploading data and can't figure out why. What's strange is that I have three subtasks, two of which always work and one which fails half the time:

aws_s3: {
      options: {
        access: 'public-read',
        uploadConcurrency: 3
      },
      staging: {
        options: {
          bucket: 'staging.towerstorm.com',
          params: {
            CacheControl: "max-age",
            Expires: Date.now() + 86400 * 365,
            ContentEncoding: "gzip"
          }
        },
        files: [
          {
            src: 'build/gzipped/js/vendor-sync.min.js',
            dest: 'frontend/<%= frontendVersion %>/js/vendor-sync.min.js'
          },
          {
            src: 'build/gzipped/js/vendor-async.min.js',
            dest: 'frontend/<%= frontendVersion %>/js/vendor-async.min.js'
          },
          {
            src: 'build/gzipped/js/frontend.min.js',
            dest: 'frontend/<%= frontendVersion %>/js/frontend.min.js'
          },
          {
            src: 'build/gzipped/css/frontend.min.css',
            dest: 'frontend/<%= frontendVersion %>/css/frontend.min.css'
          },
          {
            src: 'build/gzipped/css/game.min.css',
            dest: 'frontend/<%= frontendVersion %>/css/game.min.css'
          }
        ]
      },
      staging_assets: { //Stuff that isn't gzip compressed
        options: {
          bucket: 'staging.towerstorm.com',
          params: {
            CacheControl: "max-age",
            Expires: Date.now() + 86400 * 365
          }
        },
        files: [{
          expand: true,
          cwd: 'build/fonts/',
          src: ['**'],
          dest: 'fonts/'
        }, {
          expand: true,
          cwd: 'build/assets/',
          src: ['**'],
          dest: ''
        }]
      },
      staging_uncached: {
        options: {
          bucket: 'staging.towerstorm.com',
          params: {
            CacheControl: "max-age",
            Expires: Date.now() + 86400,
            ContentEncoding: "gzip"
          }
        },
        files: [
          {
            expand: true,
            cwd: 'build/gzipped/html/views/',
            src: ['**'],
            dest: 'views/'
          },
          {
            expand: true,
            cwd: 'build/gzipped/html/templates/',
            src: ['**'],
            dest: 'templates/'
          },
          {
            src: 'build/gzipped/html/index.html',
            dest: 'index.html'
          }
        ]
      },
}

So the staging and staging_uncached tasks work fine but 'staging_assets' often gives this error. The warning appears about 1000 times and then the following appears at the end:

util.js:35
[20:30:18]   var str = String(f).replace(formatRegExp, function(x) {
[20:30:18]                      ^
[20:30:18] RangeError: Maximum call stack size exceeded
[20:30:18] Process exited with code 7

I googled and discovered that this error occurs when your task name is the same as the plugin task name, but none of my tasks are named anything like the Grunt plugins. The task is as follows:

 grunt.registerTask('push', ['jade:staging', 'compress', 'aws_s3:staging', 'aws_s3:staging_uncached', 'aws_s3:staging_assets', 'invalidate_cloudfront:staging']);

I'm using:

  • grunt version 0.4.5
  • grunt-cli version 0.1.13
  • grunt-aws-s3 version 0.9.4

Use `storageClass: 'REDUCED_REDUNDANCY'`

Hey again Mathieu, been a while!

I was trying to use this option REDUCED_REDUNDANCY.

From the AWS docs:
http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html#putObject-property

Tried a few variations of capitalization on the storageClass key, but wasn't able to get it to take. Are the options passed through one at a time--so that an option you haven't specifically handled won't get sent to the AWS api, or are they sent through exactly as I type them?

Upload never finishes when trying to upload 0 files

The task never finishes (it has to be canceled) when trying to upload 0 files (not a single file exists that matches the cwd/src pattern).

Setting differential to true doesn't make a difference.

Console output:

DIRECTORY PATH>grunt aws_s3:stag
Running "aws_s3:stag" (aws_s3) task
No region defined. S3 will default to US Standard

<WAITING FOREVER>

My task configuration (related section):

stag: {
  options: {
    bucket: 'staging.<DOMAIN>.com',
    differential: true
  },
  files: [
    {
      expand: true,
      cwd: 'dist/stag/',
      src: ['**'],
      dest: '',
      params: { CacheControl: 'max-age=0, no-cache' }
    }
  ]
},

renaming file on upload?

hi, if I do something like:

               files: [
                 {expand: true, cwd: 'local/', src: ['myfile.htm'], dest: 'myfile.html'}
               ]

I end up with a folder in the bucket named "myfile.html" that contains the file "myfile.htm".

How can I force the expected behavior (a single "myfile.html" file)?

Differential uploading items at bucket root

It appears differential uploading is broken for files at the root of my bucket. My Gruntfile contains code that looks like this:

          upload_staging: {
            options: {
              bucket: 'staging.jonfaulkenberry.com',
              differential: true
            },
            files: [
              { 'action': 'upload', expand: true, cwd: 'dist', src: ['**/*'] },
              { 'action': 'delete', cwd: 'dist/', dest: '/' },
            ]
          },

However, each time I run this command, the files at the root of my bucket are always uploaded (despite no changes). The files within subfolders are uploading as expected. If I run the task over and over without making any changes, I continue to have any files at the root of my directory uploaded. The output is the same each time I run the task (see below).

.........
List: (9 objects):
- dist/404.html -> https://s3-us-west-2.amazonaws.com/staging.jonfaulkenberry.com/404.html
- dist/bower_components/requirejs/require.js === https://s3-us-west-2.amazonaws.com/staging.jonfaulkenberry.com/bower_components/requirejs/require.js
- dist/favicon.ico -> https://s3-us-west-2.amazonaws.com/staging.jonfaulkenberry.com/favicon.ico
- dist/index.html -> https://s3-us-west-2.amazonaws.com/staging.jonfaulkenberry.com/index.html
- dist/robots.txt -> https://s3-us-west-2.amazonaws.com/staging.jonfaulkenberry.com/robots.txt
- dist/scripts/7df468c4.main.js === https://s3-us-west-2.amazonaws.com/staging.jonfaulkenberry.com/scripts/7df468c4.main.js
- dist/scripts/vendor/d7100892.modernizr.js === https://s3-us-west-2.amazonaws.com/staging.jonfaulkenberry.com/scripts/vendor/d7100892.modernizr.js
- dist/styles/6bad6e7f.main.css === https://s3-us-west-2.amazonaws.com/staging.jonfaulkenberry.com/styles/6bad6e7f.main.css
- dist/styles/vendor/3be154ad.pure.min.css === https://s3-us-west-2.amazonaws.com/staging.jonfaulkenberry.com/styles/vendor/3be154ad.pure.min.css

Upload files with changed `params` but unchanged content

I love using the differential feature to save time by only uploading modified assets. However, if I change the caching header or content encoding of a resource, and the content stays the same, the resource will not be re-uploaded with the new headers.

Maybe we could add another config property to re-upload files if any headers change? Since differential is already used as a part of the aws node sdk we probably wouldn't want to change how that works. Solving this issue would be done by implementing a helper property like uploadChangedHeaders to solve the problem described above.

Any thoughts?

S3 file differences

Noticing a difference between a file before it's pushed to s3 and after. These are probably not all the differences, but I see 2 initially:

  • Unicode?

// [Fisher-Yates shuffle](http://en.wikipedia.org/wiki/Fisher–Yates_shuffle). becomes [Fisher-Yates shuffle](http://en.wikipedia.org/wiki/Fisher–Yates_shuffle)

  • Indents
            ctx

...becomes:

      ctx

Is there a reason for this? Is this S3 and not the grunt task?

Warning: Cannot call method 'forEach' of undefined Used --force, continuing.

D:\work\www\c360>grunt aws_s3 --force
Running "aws_s3:staging" (aws_s3) task
Warning: Cannot call method 'forEach' of undefined Used --force, continuing.

Done, but with warnings.

Getting this error when I run "grunt aws_s3" on the Windows command line.

Find the config code below:

aws_s3: {
            options: {
                accessKeyId: 'abc',
                secretAccessKey: 'xyz',
                region: 'jkl'
            },
            staging: {
                options: {
                    bucket: 'bucketname',
                    params: {
                        ContentEncoding: 'gzip'
                    },
                    differential: true
                },
                files: [{dest: 'gzipped2/', cwd: 'gzipped/', action: 'upload'}]
            }
        },

Differential delete confusion.

I'm trying to use the differential delete to remove files that exist in my S3 bucket that don't exist locally though the feature appears to either be non-functional or I'm just misunderstanding how it's meant to be used. My configuration looks something like this:

aws_s3: {
  options: {
    accessKeyId: '<%= aws().key %>',
    secretAccessKey: '<%= aws().secret %>',
    bucket: 'some-bucket',
    access: 'public-read',
    differential: true,
  },
  css: {
    files: [
      {
        action: 'delete',
        expand: true,
        cwd: 'www/',
        src: ['styles/stylesheets/**/*.css', 'js/vendor/**/*.css'],
        dest: '/'
      }
    ]
  }
}

This configuration works fine for the upload action, but when I set it to delete first I get an error that says Differential delete needs a "cwd". I was able to get around this by modifying the grunt-aws-s3 task to check for the cwd on filePair.orig.cwd instead, but then I ran into issues/confusion with the deleteObjects method. It appears that the argument being passed in is the destination of each individual file, not a directory that contains files to be deleted, so no objects are returned from listObjects. I think I have an idea for how to reasonably refactor this to behave the way I would expect, but I wasn't sure if I was just configuring things incorrectly or didn't understand the feature. Any clarification would be greatly appreciated.

Recursive process.nextTick detected and call stack size exceeded with differential: true and a lot of files.

When I do a differential update of S3, which uploads about 1800 files, I now get about 1185 warnings like:

(node) warning: Recursive process.nextTick detected. This will break in the next version of node. Please use setImmediate for recursive deferral.

and then a crash:

util.js:35
  var str = String(f).replace(formatRegExp, function(x) {
                      ^
RangeError: Maximum call stack size exceeded

I'm using node v0.10.26, and here's the relevant part of my Gruntfile.coffee

            aws_s3:
                options:
                    accessKeyId: "<%= aws.key %>"
                    secretAccessKey: "<%= aws.secret %>"
                    uploadConcurrency: 5
                    downloadConcurrency: 5
                    access: "public-read"

                default: {}
                deploy:
                    options:
                        bucket: "foobar-<%= env %>"
                        params:
                            Expires: oneYearFromNow().toISOString()

                        differential: true

                    files: [...]

Upload 'cwd' does not work

files: [{
  expand: true,
  cwd: 'public/scripts',
  src: ['public/scripts/*.js'],
  dest: 'scripts/',
  params: {
    ContentEncoding: 'gzip'
  }
}]

If I omit cwd it will upload to S3, but whenever I include it the grunt task just keeps running without stopping.

Output reason why task failed

I tried to configure the task, but I can't get it to work. Maybe I did something wrong (probably), but there is no info on why the task failed.
I am using load-grunt-config to load the task. The AWS keys are hardcoded to remove the possible source of error.

grunt aws-s3:staging -v:

Initializing
Command-line options: --verbose

Reading "Gruntfile.js" Gruntfile...OK

Registering Gruntfile tasks.
Initializing config...OK
Loading "Gruntfile.js" tasks...OK
+ clear, default, lint

Running tasks: aws-s3:staging

Loading "grunt-aws-s3" plugin

Registering "C:\Version Control\<PROJECT FOLDER>\node_modules\grunt-aws-s3\tasks" tasks.
Loading "aws_s3.js" tasks...OK
+ aws_s3
Warning: Task "aws-s3:staging" failed. Use --force to continue.

Aborted due to warnings.

aws-s3.js:

module.exports = {
  options: {
    accessKeyId: '<HARDCODED VALUE>',
    secretAccessKey: '<HARDCODED VALUE>',
    region: 'us-west-2',
    uploadConcurrency: 8,
    downloadConcurrency: 8,
    params: {
        ContentEncoding: 'gzip'
    }
  },

  prod: {
    options: {
      bucket: '<DOMAIN>.com',
    },
    files: [      
      {expand: true, cwd: 'dist/', src: ['**/*.html'], dest: '/', params: {CacheControl: 'max-age=1200'}},
      {expand: true, cwd: 'dist/', src: ['**', '!**/*.html'], dest: '/', params: {CacheControl: 'max-age=1200'}}
    ]
  },  
  staging: {
    options: {
      bucket: 'staging.<DOMAIN>.com',
      differential: true,
      displayChangesOnly: true,
      params: {
        CacheControl: 'max-age=0, no-cache'
      }
    },
    files: [
      {expand: true, cwd: 'dist/', src: ['**'], dest: '/'}
    ]
  }
};

Running a delete task stops all subsequent tasks from running

If the first subtask is a delete one, the subsequent subtasks will not run at all, assuming you're trying to run a MultiTask, e.g. grunt aws_s3. Explicitly placing these subtasks in default doesn't help.

The only workaround is setting concurrency to 1 and putting everything in one subtask.

question: check s3 dir exists

I am using this to upload versioned files (i.e. function/0.0.1/dist/) and the current upload function will overwrite files that already exist. Uploading over an existing dir is undesirable for me!

Is there any way to use this plugin to check that an S3 dir exists before uploading?

How to set param expires?

First of all: great module!

params: {
CacheControl: '3000',
Expires: '1700000000'
}

Gives me: InvalidParameterType: Expected params.Expires to be a Date object, ISO-8601 string, or a UNIX timestamp

Setting cache headers - possible values

I see that I can use a CacheControl property on the params object, but I am not clear on what value goes in there, and how that integer relates to other strings I've seen like "max-age=630720000, public".

I actually found this in the AWS docs:
CacheControl — (String) Specifies caching behavior along the request/reply chain.
http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/frames.html

But I'm not sure what values are possible or how to set it. Is a really big integer ok?

option to clear bucket

Would be nice to have an option to clear the bucket as part of the upload; uploading rev'ed files means the bucket just accumulates every rev'ed version of the file.

Copy single file?

I don't see an easy way to copy a single file given that the src attribute will be used as a prefix for the destination file. Am I missing something here? Trying the following cmd:

                backup: {
                    files:[
                        {src: 'xxx/js/yyy.js', dest: 'xxx/js/zzz.js', action: 'copy'},
                    ]
                },

Does not allow extra headers

I'd like to add an arbitrary header like 'x-robots-tag': 'noindex', but there is no way to do that. When I try using options.params.Metadata, it adds the prefix 'x-amz-meta-', which is not what I want. Thanks!

Delete requires dest: '/', upload requires dest: '' as bucket root

When I want to delete everything in a bucket, I have to specify dest: '/' (slash).
When I want to upload files to the bucket root, I have to specify dest: '' (no slash).

Using dest: '' with action: 'delete' throws following error: Fatal error: No "dest" specified for deletion. No need to specify a "src".
Using dest: '/' with action: 'upload' puts the files in a folder without a name, but not the root directory (root/[FOLDER WITHOUT NAME]/).

I expected the bucket root to be '/' (slash) for all actions, or at least consistent.

My configuration (related section):

files: [
  {
    dest: '/',
    action: 'delete'
  },
  {
    expand: true,
    cwd: 'dist/',
    src: ['**/*.html'],
    dest: '',
    params: { CacheControl: 'max-age=3600' }
  },
  {
    expand: true,
    cwd: 'dist/',
    src: ['**', '!**/*.html'],
    dest: '',
    params: { CacheControl: 'max-age=1200' }
  }
]
