
grunt-s3's People

Contributors

aaaristo, asaayers, baffles, coen-hyde, collin, davb, dxops, geedew, hereandnow, jesseditson, jgable, juriejan, lbeschastny, mattrobenolt, mponizil, mreinstein, nanek, owiber, paladox, pifantastic, plasticut, sampsasaarela, smithclay, tanepiper, thanpolas, tleen, weltonrodrigo, zdexter


grunt-s3's Issues

Temp file shown when uploading files

I'm getting the following output for every file when uploading since upgrading from alpha to alpha.2:

↗ Uploaded: /var/folders/9_/wpwnc64j71n7fn_vc4h1bdqr0000gn/T/1372195118776.9172

Error: socket hang up

I sometimes get socket errors when trying to push content to s3:

[screenshot: intermittent 'socket hang up' errors while pushing to S3]

Why does this happen? It seems to be non-deterministic.

npm 0.0.4 release upload not working

Just noting that the npm 0.0.4 release upload is not working.

The master branch does work, so you may have just forgotten to publish the update to npm? Or the config examples I'm using from the repo may be more up to date than what is in npm.

Deleting with wildcard

👍 I am able to remove individual files by name after updating to 40f01fe, thanks! But trying

clean: {
  del: [{
    src: '**/*.*'
  }]
}

as per #70 doesn't remove anything. Is there a way to wipe a folder/bucket?

has no method 'init'

Trying to use with Yeoman 1.0.

  1. Installed grunt-s3 via npm.
  2. Added the s3 config section as per the documentation.
  3. Registered the npm task with grunt.loadNpmTasks('grunt-s3');.

When I run grunt s3 I get:

Loading "s3.js" tasks...ERROR

TypeError: Object # has no method 'init'

Any suggestions?

Bump NPM Version

I was just about to submit a pull request to fix directories on Windows, switching '\' to '/' for uploads, but I noticed the fix is already applied; it just hasn't been pushed to npm.

I know it is an awkward time, since you are on an alpha right now; I'm just wondering how long you expect it to be until you push the next version?

Thank you!

Use temporary folder during deployment

A few times, the .gz files created when using gzip: true have stuck around after the deploy. If they were written to a temp folder instead, then even if cleanup fails they would not litter the source folders.

This issue was created from #51
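
For illustration, a minimal sketch of what that could look like, assuming the gzipped copy is produced somewhere in the task before upload (names here are mine, not the plugin's; os.tmpdir() assumes a reasonably recent Node):

var fs = require('fs');
var os = require('os');
var path = require('path');
var zlib = require('zlib');

// Write the gzipped copy into the OS temp dir so a failed cleanup
// never litters the source tree.
function gzipToTemp(srcPath, callback) {
  // e.g. /tmp/1372195118776.9172.gz -- unique-ish name in the temp folder
  var tmpPath = path.join(os.tmpdir(),
    Date.now() + '.' + Math.round(Math.random() * 10000) + '.gz');
  fs.createReadStream(srcPath)
    .pipe(zlib.createGzip())
    .pipe(fs.createWriteStream(tmpPath))
    .on('close', function () { callback(null, tmpPath); });
}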

Doesn't seem to be doing anything

It simply says:

$ grunt s3
Running "s3" task

Done, without errors.

But it never actually does anything.

s3: {
  options: {
    key: 'My key',
    secret: 'my secret',
    bucket: 'mybucket',
    access: 'public-read'
  },
  dev: {
    options: {
      encodePaths: true,
      maxOperations: 20
    },
    upload: [
      {
        src: 'important_document.txt',
        dest: 'documents/important.txt'
      }
    ]
  }
}

knox-0.0.11 seems to break grunt-s3

Just a heads up on an issue we ran into today: it looks like the latest version of knox, 0.0.11, breaks grunt-s3. On upload/change/delete, only HTTP 400 errors are returned.

I'll update this ticket when I get a chance to look into it more, but locking down the version of knox to 0.0.9 seems to be the best workaround for now.

Gzipped files have their 'Content-Type' overwritten

This may not be best practice, but... in my project I've been uploading HTML views without the .html suffix, so that the URLs show up as .../about rather than .../about.html. Haven't been able to find a better solution for this problem, given S3's routing limitations.

It works rather well – I simply specify Content-Type: text/html in the headers, and the page renders correctly in the browser.

However, when I gzip these files – that same text/html is overwritten with application/octet-stream, and the HTML view is downloaded as opposed to rendering in-browser.

Hardly ideal!

Is there a way to circumvent this issue? Perhaps by favoring any manually-designated headers over the ones provided by gzip?
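
One possible shape for that fix, sketched under the assumption that the gzip path builds its headers somewhere in tasks/lib/s3.js (the names below are mine, not the plugin's; underscore is already a dependency there):

var _ = require('underscore');

// Defaults the gzip path would otherwise force onto the upload.
var gzipDefaults = {
  'Content-Encoding': 'gzip',
  'Content-Type': 'application/octet-stream'
};

// _.defaults only fills in keys the user did not set, so a manually
// specified 'Content-Type': 'text/html' survives gzipping.
function buildHeaders(userHeaders) {
  return _.defaults({}, userHeaders || {}, gzipDefaults);
}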

Migrate Away from Knox to Official AWS Node SDK

As Amazon has released an official AWS SDK for Node (http://aws.amazon.com/sdkfornodejs/), this plugin should move away from Knox to the official SDK. This would make it easier to add needed functionality, such as the ability to set Cache-Control headers when uploading new files to S3, and it takes Knox out of the loop as a potential bottleneck for new functionality.
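
For comparison, here is roughly what an upload looks like through the official SDK, where Cache-Control is a first-class putObject parameter (bucket, key, and file names below are illustrative):

var fs = require('fs');
var AWS = require('aws-sdk');

var s3 = new AWS.S3({
  accessKeyId: 'key',
  secretAccessKey: 'secret',
  region: 'us-east-1'
});

s3.putObject({
  Bucket: 'mybucket',
  Key: 'documents/important.txt',
  Body: fs.readFileSync('important_document.txt'),
  ACL: 'public-read',
  CacheControl: 'max-age=31536000' // the header that is awkward to set via knox
}, function (err) {
  if (err) { throw err; }
});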

Download (and only download) fails with 'aws "key" required'

Same settings work for upload, but not download:

s3: {
  options: {
    encodePaths: true,
    maxOperations: 50,
    access: 'public-read',
    key: '<%= aws.key %>',
    secret: '<%= aws.secret %>',
    region: '<%= aws.region %>',
    bucket: '<%= aws.bucket %>'
  },
  upload_file: {
    upload: [
      {
        src: '/path/to/a/file1.tar.gz',
        dest: 'file1.tar.gz'
      }
    ]
  },
  download_file: {
    download: [
      {
        src: 'file2.tar.gz',
        dest: 'file2.tar.gz'
      }
    ]
  }
}

grunt s3:upload_file works; grunt s3:download_file fails with Warning: aws "key" required. Use --force to continue.

Documentation update request for upload.rel

My discovery of the upload item's .rel property (below) allowed me to remove about 75 lines of Gruntfile.js configuration. Unfortunately, that very useful property is entirely undocumented and left out of your README examples. It might be helpful for others if you could add it to the docs/examples.

upload: [
  {
    src: 'dist/**',
    dest: 'myapp/',
    rel: 'dist/'
  }
]
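
For anyone landing here, this is my understanding of what rel does (the paths below are illustrative, not from the repo):

// With rel: 'dist/', the part of each matched path after the rel
// prefix is kept, so the directory structure under dist/ survives:
//
//   dist/js/app.js     ->  myapp/js/app.js
//   dist/css/site.css  ->  myapp/css/site.css
//
// Without rel, only the basename is joined onto dest:
//   dist/js/app.js     ->  myapp/app.js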

getConfig() is not defined

I'm trying to use the delete functionality:

s3: {
  options: {
    key: process.env.AWS_KEY,
    secret: process.env.AWS_SECRET,
    access: 'public-read'
  },

  clean: {
    options: {
      del: [{
        src: '**/*.*'
      }]
    }
  }
},

I get the following error:

$ grunt s3:clean --stack
Running "s3:clean" (s3) task
Warning: getConfig is not defined Use --force to continue.
ReferenceError: getConfig is not defined
    at Object.exports.del (/Users/nick.heiner/opower/x-web-deploy/node_modules/grunt-s3/tasks/lib/s3.js:342:38)
    at /Users/nick.heiner/opower/x-web-deploy/node_modules/grunt-s3/tasks/lib/S3Task.js:53:22
    at /Users/nick.heiner/opower/x-web-deploy/node_modules/grunt/node_modules/async/lib/async.js:86:13
    at Array.forEach (native)
    at _forEach (/Users/nick.heiner/opower/x-web-deploy/node_modules/grunt/node_modules/async/lib/async.js:26:24)
    at Object.async.forEach (/Users/nick.heiner/opower/x-web-deploy/node_modules/grunt/node_modules/async/lib/async.js:85:9)
    at Object.S3Task.run (/Users/nick.heiner/opower/x-web-deploy/node_modules/grunt-s3/tasks/lib/S3Task.js:52:5)
    at Object.<anonymous> (/Users/nick.heiner/opower/x-web-deploy/node_modules/grunt-s3/tasks/s3.js:37:10)
    at Object.<anonymous> (/Users/nick.heiner/opower/x-web-deploy/node_modules/grunt/lib/grunt/task.js:258:15)
    at Object.thisTask.fn (/Users/nick.heiner/opower/x-web-deploy/node_modules/grunt/lib/grunt/task.js:78:16)

Aborted due to warnings.

Am I doing something wrong?

301, write ECONNRESET

I got the following error

Running "s3:dev" (s3) task
>> Error: Upload error: /home/user/node_project/public/file1 (301)
Fatal error: write ECONNRESET

with the following configuration

s3: {
  options: {
    key: '***',
    secret: '***',
    bucket: 'bucket.01',
    access: 'public-read'
  },
  dev: {
    // Files to be uploaded.
    upload: [
      {
        src: 'public/*',
        dest: '/'
      }
    ]
  }
}

Error: Hostname/IP doesn't match certificate's altnames

Hi,

I am trying to use a bucket name with dots in it (ex: media.mywebsite.com), and it fails with the message: Error: Hostname/IP doesn't match certificate's altnames.

This issue has been fixed with "knox": "0.8.0"; see the "style" section of the Knox documentation. I have updated my local grunt-s3 with the change, and it appears to work for my use case. Not sure about the rest, though...

Best,
Olivier

Error: "has no method 'replace'" when src is array

Using a fresh install from the Git repository (0.1.0, with Grunt 0.4.0rc7).

Here is my grunt-s3 configuration:

s3: {
    key: '<%= aws.key %>',
    secret: '<%= aws.secret %>',
    bucket: '<%= aws.bucket %>',
    access: 'public-read',
    upload: [
        {
            rel: '<%= siteConfig.output %>',
            src: ['<%= siteConfig.output %>/**/*.*', '!<%= siteConfig.output %>/js/*.js', '!<%= siteConfig.output %>/css/*.css', '!<%= siteConfig.output %>/img/*.*' ],
            dest: '/',
            gzip: true
        },
        {
            rel: '<%= siteConfig.output %>',
            src: ['<%= siteConfig.output %>/js/*.js', '<%= siteConfig.output %>/css/*.css', '<%= siteConfig.output %>/img/*.*'],
            dest: '/',
            gzip: true,
            headers: { 'Cache-Control': 'public, max-age=' + (60 * 60 * 24 * 365) }
        }
    ]
}

Seems to work fine when src properties are not arrays, but I get the following error with the above configuration:

Warning: Object /Users/Andrew/Dropbox/Projects/andrewduthie.com/output/**/*.*,!/Users/Andrew/Dropbox/Projects/andrewduthie.com/output/js/*.js,!/Users/Andrew/Dropbox/Projects/andrewduthie.com/output/css/*.css,!/Users/Andrew/Dropbox/Projects/andrewduthie.com/output/img/*.* has no method 'replace' Use --force to continue.

From my own debugging, this seems to be caused at s3.js:54:

upload.src = path.resolve(grunt.template.process(upload.src));

When I change upload.src to file, it seems to work correctly for me, but I'm not familiar enough with the code to say it's a fix in all cases.
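
If it helps, a minimal sketch of how that line could handle both shapes (untested; grunt and upload are the surrounding task's variables, and negation patterns need their leading ! preserved before resolving):

var path = require('path');

// Process templates and resolve each pattern, keeping '!' prefixes intact.
function resolvePattern(pattern) {
  var negated = pattern.charAt(0) === '!';
  var raw = grunt.template.process(negated ? pattern.slice(1) : pattern);
  return (negated ? '!' : '') + path.resolve(raw);
}

upload.src = Array.isArray(upload.src)
  ? upload.src.map(resolvePattern)
  : resolvePattern(upload.src);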

grunt-s3 doesn't work with node-0.8

It seems the underscore.deferred version that grunt-s3 requires doesn't work with node 0.8. When you run npm install in grunt-s3 after upgrading to 0.8, you get this error:

npm ERR! Error: No compatible version found: underscore.deferred@'>=0.1.2- <0.2.0-'
npm ERR! No valid targets found.
npm ERR! Perhaps not compatible with your version of node?
npm ERR!     at installTargetsError (/usr/local/lib/node_modules/npm/lib/cache.js:506:10)
npm ERR!     at next_ (/usr/local/lib/node_modules/npm/lib/cache.js:452:17)
npm ERR!     at next (/usr/local/lib/node_modules/npm/lib/cache.js:427:44)
npm ERR!     at /usr/local/lib/node_modules/npm/lib/cache.js:419:5
npm ERR!     at saved (/usr/local/lib/node_modules/npm/node_modules/npm-registry-client/lib/get.js:136:7)
npm ERR!     at /usr/local/lib/node_modules/npm/node_modules/graceful-fs/graceful-fs.js:230:7
npm ERR!     at Object.oncomplete (fs.js:297:15)
npm ERR!  [Error: No compatible version found: underscore.deferred@'>=0.1.2- <0.2.0-'
npm ERR! No valid targets found.
npm ERR! Perhaps not compatible with your version of node?]

File with full path not getting uploaded

My Gruntfile.js contains the below files to be uploaded:

      upload: [
        {
          src: 'public/javascripts/widget/*.*',
          dest: 'widget/',
          gzip: true
        },
        {
          src: 'public/assets/widget.css',
          dest: 'widget/',
          gzip: true
        },
        {
          src: 'public/assets/widget_body.css',
          dest: 'widget/',
          gzip: true
        },
        {
          src: 'public/images/widget/*',
          dest: 'widget/',
          gzip: true
        } 
      ]

Paths that contain a wildcard were uploaded to S3, but files given by a full path were not. Then I tried changing it to

upload: [
  {
    src: 'public/javascripts/widget/*.*',
    dest: 'widget/',
    gzip: true
  },
  {
    src: 'public/assets/widget*',
    dest: 'widget/',
    gzip: true
  },
  {
    src: 'public/images/widget/*',
    dest: 'widget/',
    gzip: true
  }
]

and this works. Files given by a full path are not uploaded; can you look into this?

After 5th file, uploads slow drastically

Hey there, I'm using the 2.0 alpha off of master, and I seem to have an issue where the first 5 uploads go very quickly and then drastically slow down afterwards.

We're talking ~3 seconds total for the first 5 uploads, then ~ seconds per upload thereafter. Filtering a tcpdump by "s3" seems to show that there's just nothing going on. The first 5 uploads seem to happen nearly immediately, then there's a 4-5 minute pause, then uploads resume at a slower speed, with perhaps 15-30 seconds between uploads.

Here is what my configuration looks like

s3:
  options:
    key: "key"
    secret: "secret"
    bucket: "my.bucket.with.periods"
    secure: false
  production:
    options: {}
    upload: [
      {src: "build/img/*", dest: "/img"},
      {src: "build/js/*", dest: "/js"},
      {src: "build/css/*", dest: "/css"},
      {src: "build/*", dest: "/"}
    ]

I completely realize I'm using alpha software, so bugs might exist :) Unfortunately, I need the exposed secure: false flag for my bucket name with periods.

Any ideas what could be going on or how I could help to further debug?

Per-file options seem to get ignored/overridden

I'm using this config:

s3: {
    options: {
        key: process.env.AWS_KEY,
        secret: process.env.AWS_SECRET,
        bucket: 'static.pagerank.nfriedly.com',
        access: 'public-read',
        maxOperations: 4,
        gzip: true,
        headers: {
            'Cache-Control': 'max-age=' + 60*60*24*365 // 1 year
        }
    },
    'prod': {
        // These options override the defaults
        options: {

        },
        // Files to be uploaded.
        upload: [{
            src: 'public/*.html',
            dest: '/',
            // do gzip (default)
            headers: {
                'Cache-Control': 'max-age=' + 60*1 // 1 minute
            }
        }, {
            src: 'public/*.{js,css}',
            dest: '/',
            // do gzip (default)
            // 1-year caching (default)
        }, {
            src: 'public/*.{jpg,png,gif}',
            dest: '/',
            gzip: false
            // 1-year caching(default)
        }]
    }
}

And everything uploads correctly, but all files get the default options: my .html files have a 1-year Cache-Control header and my images are gzipped.

I know I can work around this with multiple sub-tasks (see the sketch below), but I thought I'd let you know (and check whether I'm doing anything wrong).
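
In case it's useful to others, here is roughly what that sub-task workaround looks like, since target-level options do get applied:

s3: {
  options: {
    key: process.env.AWS_KEY,
    secret: process.env.AWS_SECRET,
    bucket: 'static.pagerank.nfriedly.com',
    access: 'public-read',
    maxOperations: 4
  },
  html: {
    options: { gzip: true, headers: { 'Cache-Control': 'max-age=' + 60 * 1 } },
    upload: [{ src: 'public/*.html', dest: '/' }]
  },
  assets: {
    options: { gzip: true, headers: { 'Cache-Control': 'max-age=' + 60 * 60 * 24 * 365 } },
    upload: [{ src: 'public/*.{js,css}', dest: '/' }]
  },
  images: {
    options: { gzip: false, headers: { 'Cache-Control': 'max-age=' + 60 * 60 * 24 * 365 } },
    upload: [{ src: 'public/*.{jpg,png,gif}', dest: '/' }]
  }
}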

log destination name, not source name

When using gzip: true, the logged source is a temp directory and filename, which makes the log output unreadable and useless:

>> ↗ Uploaded: /var/folders/7n/01l92jms11j_bzh0wl4bt42r0000gp/T/1371067271643.1118 (28bc2e0afb2333a24a47ceab21f533e8)
>> ↗ Uploaded: /var/folders/7n/01l92jms11j_bzh0wl4bt42r0000gp/T/1371067271645.8916 (fedddb00b156319fa99a2da566cfdcbd)
>> ↗ Uploaded: /var/folders/7n/01l92jms11j_bzh0wl4bt42r0000gp/T/1371067271646.9639 (71b70c273bc0f3ed2869fbcd3bfe1807)
>> ↗ Uploaded: /var/folders/7n/01l92jms11j_bzh0wl4bt42r0000gp/T/1371067271639.2773 (89fe4b001560d43ffc150eeb412761a8)
>> ↗ Uploaded: /var/folders/7n/01l92jms11j_bzh0wl4bt42r0000gp/T/1371067271644.859 (b42486b3e2b7bf1983ea28a55e316012)
>> ↗ Uploaded: /var/folders/7n/01l92jms11j_bzh0wl4bt42r0000gp/T/1371067271647.9175 (6f3f28c887ff5f155c27d89f60e8b766)
>> ↗ Uploaded: /var/folders/7n/01l92jms11j_bzh0wl4bt42r0000gp/T/1371067271647.2693 (0d3c6ccd5d26a0a5f958a176ec1f2345)
>> ↗ Uploaded: /var/folders/7n/01l92jms11j_bzh0wl4bt42r0000gp/T/1371067271648.5952 (7f87204f61c66867c2d4bc4346da949f)
>> ↗ Uploaded: /var/folders/7n/01l92jms11j_bzh0wl4bt42r0000gp/T/1371067271670.9746 (e96a13bf74c41f2009bdac7f6d1a2580)
>> ↗ Uploaded: /var/folders/7n/01l92jms11j_bzh0wl4bt42r0000gp/T/1371067271667.337 (3395ee25e1c1a681f8861de9533bfad7)
>> ↗ Uploaded: /var/folders/7n/01l92jms11j_bzh0wl4bt42r0000gp/T/1371067271661.6167 (71a044130fb520427ee460693870165e)
>> ↗ Uploaded: /var/folders/7n/01l92jms11j_bzh0wl4bt42r0000gp/T/1371067271660.1301 (0347b01dbbcae8655bd30a9c27ec384d)
>> ↗ Uploaded: /var/folders/7n/01l92jms11j_bzh0wl4bt42r0000gp/T/1371067271644.9058 (39e20b85fffc903172b74fb66c3d824b)
>> ↗ Uploaded: /var/folders/7n/01l92jms11j_bzh0wl4bt42r0000gp/T/1371067271669.4417 (090591ee91599a5cf9d1163715cd6c22)
>> ↗ Uploaded: /var/folders/7n/01l92jms11j_bzh0wl4bt42r0000gp/T/1371067271672.44 (a8e8d0ea25b6d56338c1807ed9d64eed)
>> ↗ Uploaded: /var/folders/7n/01l92jms11j_bzh0wl4bt42r0000gp/T/1371067271659.609 (6e72c8691437324b198e7a4753711b01)
>> ↗ Uploaded: /var/folders/7n/01l92jms11j_bzh0wl4bt42r0000gp/T/1371067271671.025 (e45d8cc4675ce8a5c33aa720d4d33232)
>> ↗ Uploaded: /var/folders/7n/01l92jms11j_bzh0wl4bt42r0000gp/T/1371067271674.4954 (01fa9ea9f18dfb89751b29250bd40830)
>> ↗ Uploaded: /var/folders/7n/01l92jms11j_bzh0wl4bt42r0000gp/T/1371067271673.5227 (b14e178bf59923145088a1a093be344c)
>> ↗ Uploaded: /var/folders/7n/01l92jms11j_bzh0wl4bt42r0000gp/T/1371067271665.5266 (72c3fcf9f71fc3782433adc70c0d588b)
>> ↗ Uploaded: /var/folders/7n/01l92jms11j_bzh0wl4bt42r0000gp/T/1371067271678.92 (124ad5de5f812e983c831e8e385ccd72)
>> ↗ Uploaded: /var/folders/7n/01l92jms11j_bzh0wl4bt42r0000gp/T/1371067271683.458 (2496a8b9c80137d41decefa69fadbe1a)

Feature Request: Multi-Task Support?

Would it be easy to make this task into a multi-task?

I could see a situation where a user may have several different s3 operations for their project. For example:

  • A cdn upload script that uploads built files to s3
  • A deployment script that downloads config files from s3

The ideal situation would be for those tasks to be able to be separated from one another, which a multi-task would do nicely.
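
For illustration, a multi-task config could keep those two concerns as separate targets (the target names below are made up):

s3: {
  options: { /* shared key/secret/bucket */ },
  cdn: {
    upload: [{ src: 'build/**', dest: 'assets/' }]
  },
  deploy: {
    download: [{ src: 'config/production.json', dest: 'config/production.json' }]
  }
}

Each would then run independently as grunt s3:cdn or grunt s3:deploy.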

Different JSON structure needed to upload

Hi, first of all thanks for this project.

I'm using grunt v0.4.1 and grunt-s3 v0.1.0.
I've tried using the example provided on the project's home page, but without any luck. I've looked here https://github.com/pifantastic/grunt-s3/blob/master/tasks/lib/s3.js#L97-L99 and found that I needed a slightly different JSON structure:

s3: {
  key: 'key',
  secret: 'secret',
  bucket: 'bucket',
  access: 'public-read',
  upload: [{
    src: 'dist/scripts/main.min.js',
    dest: 'main.min.js',
    gzip: true
  }]
}

Am I doing something wrong?
With the above JSON my file gets correctly uploaded.
I'm kind of new with grunt so I don't know if there's a convention I'm missing here.
If the above JSON is correct I could submit a PR.

Cheers

Error: getaddrinfo ENOTFOUND

I'm running into this issue with the latest version of grunt-s3:
Automattic/knox#192

I get Error: getaddrinfo ENOTFOUND after a while when uploading many files.

I found that adding res.resume() to the client.putFile callback in tasks/lib/s3.js solved the problem.
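
For anyone else hitting this, a sketch of where that one-liner goes (client, src, dest, headers, and callback come from the surrounding task code in tasks/lib/s3.js):

client.putFile(src, dest, headers, function (err, res) {
  if (err) { return callback(err); }
  // Drain the response so the socket is released back to the pool;
  // without this, connections leak and DNS lookups eventually fail.
  res.resume();
  if (res.statusCode !== 200) {
    return callback(new Error('Upload error: ' + dest + ' (' + res.statusCode + ')'));
  }
  callback(null, res);
});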

Feature Request: Upload both compressed/uncompressed (gzip) version of same file

Howdy, just a small feature request: could the gzip configuration option be extended to allow a file to be uploaded to S3 in both uncompressed and compressed forms?

My thought was this: if gzip: true, it continues to act as it does now; but if gzip: "some-file-name.gz.ext", the original file is uploaded to dest and a gzip-compressed version is uploaded as some-file-name.gz.ext. Either that, or the extension is simply replaced on the fly (i.e. .js becomes .gz.js).
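
Until something like this exists, a workaround with the current options is to list the file twice (paths below are illustrative):

upload: [
  { src: 'dist/app.js', dest: 'app.js' },               // uncompressed original
  { src: 'dist/app.js', dest: 'app.gz.js', gzip: true } // compressed copy
]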

'rel' option not documented

I found the 'rel' option for uploads by reading the source code, and it is really useful (it's the only way to strip the wildcard-matched prefix from src paths, e.g. with **/ globs). It should probably be added to the readme.md so people know about it...

Deleting S3 objects?

Just curious if support for deleting objects in s3 is in the general roadmap, or if you're looking for someone to contribute delete support? Thanks!

How to do glob uploads?

I tried the normal Grunt expansion syntax:

                upload: [{
                    expand: true,
                    cwd: "release/",
                    src: ["**/*.js"],
                    dest: ""
                }]

but this was not working :(
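
From the other issues here, grunt-s3 (in these versions) doesn't appear to use Grunt's expand/cwd files format; globs go directly in src, with the undocumented rel option playing the role of cwd. Something like this (my best understanding, not verified against your layout):

upload: [{
  src: 'release/**/*.js', // glob patterns go directly in src
  dest: '',
  rel: 'release/'         // strips the prefix, like cwd would
}]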

Removal of helpers

Hi.

I can't see any existing issue regarding this, but if it's been covered, I'm sorry.

Helpers have been removed in Grunt 0.4 in favour of using require(). I notice your README still gives examples of using helpers; these no longer work.

I've managed to get them working by requiring the lib file directly like so:

var s3 = require('grunt-s3/tasks/lib/s3').init(grunt);

This all seems to work as long as you explicitly pass in the s3 options when you use it. Like so:

var pull = s3.pull('file.txt', 'file.txt', grunt.config.get('s3'));

I'm happy to update the README documentation with this method, but I was wondering if there would be a cleaner way of requiring the s3 lib file directly. Perhaps by changing something in tasks/s3.js to return a reference to it when required directly? I'm not sure how this would work.

Ideally you could do this to get access to the s3 methods in lib/s3.js:

var s3 = require('grunt-s3').init(grunt);

Do not upload if file exists?

Is it possible to skip the upload when a file already exists? We use md5-hashed assets and a lot of them never change, so re-uploading everything prolongs the deployment process significantly.

Thanks
PS: Great project.
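
A sketch of how this could work with knox's HEAD support, comparing S3's ETag (the MD5 of a plain PUT) against the local file's MD5 (the wiring into grunt-s3 is hypothetical; client is a knox client):

var crypto = require('crypto');
var fs = require('fs');

function uploadIfChanged(client, localPath, remotePath, doUpload) {
  var md5 = crypto.createHash('md5')
    .update(fs.readFileSync(localPath))
    .digest('hex');
  client.head(remotePath).on('response', function (res) {
    res.resume(); // drain the (empty) body
    var etag = (res.headers.etag || '').replace(/"/g, '');
    if (res.statusCode === 200 && etag === md5) {
      return doUpload(null, 'skipped'); // identical copy already on S3
    }
    doUpload(); // missing or different: upload
  }).end();
}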

option to not log every transfer

We're uploading hundreds of files; there's no reason to log them all with grunt.log.ok. grunt.verbose.ok would be fine, with a grunt.log.ok of total stats, such as total files uploaded/downloaded/deleted, etc.

security?

I have one question: what do you think about security? I mean having the access and secret keys written in plain text in your grunt.js config file. In my setup, grunt.js is in my /webroot folder, so anyone who knows the URL can access it. I don't want to expose my S3 credentials to the whole world!
Right now I am using s3cmd to sync my files with S3, and I am looking at how to automate this with grunt (as I am already using it for other tasks).
What is your solution for this? Thanks!

Kris
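
A common pattern, for reference: keep the secrets in the environment or in a git-ignored JSON file outside the webroot, and reference them from the config (the file path below is illustrative):

grunt.initConfig({
  // Git-ignored and outside the webroot, so it is never served.
  aws: grunt.file.readJSON('/home/deploy/.grunt-aws.json'),
  s3: {
    options: {
      key: process.env.AWS_KEY || '<%= aws.key %>',
      secret: process.env.AWS_SECRET || '<%= aws.secret %>',
      bucket: 'mybucket',
      access: 'public-read'
    }
  }
});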

Grunt 0.4 Release

I'm posting this issue to let you know that we will be publishing Grunt 0.4 on Monday, February 18th.

If your plugin is not already Grunt 0.4 compatible, would you please consider updating it? For an overview of what's changed, please see our migration guide.

If you'd like to develop against the final version of Grunt before Monday, please specify "grunt": "0.4.0rc8" as a devDependency in your project. After Monday's release, you'll be able to use "grunt": "~0.4.0" to actually publish your plugin. If you depend on any plugins from the grunt-contrib series, please see our list of release candidates for compatible versions. All of these will be updated to final status when Grunt 0.4 is published.

Also, in an effort to reduce duplication of effort and fragmentation in the developer community, could you review the grunt-contrib series of plugins to see if any of your functionality overlaps significantly with them? Grunt-contrib is community maintained with 40+ contributors—we'd love to discuss any additions you'd like to make.

Finally, we're working on a new task format that doesn't depend on Grunt: it's called node-task. Once this is complete, there will be one more conversion, and then we'll never ask you to upgrade your plugins to support our changes again. Until that happens, thanks for bearing with us!

If you have any questions about how to proceed, please respond here, or join us in #grunt on irc.freenode.net.

Thanks, we really appreciate your work!

No "sync" capabilities

The command line tool s3cmd has a great little sync command that only uploads files that differ/do not exist in the target bucket.

This is a pretty common case for deploying static sites, and could save a lot of time/bandwidth for large projects.

As outlined on the s3cmd site, it could be done by comparing MD5 checksum and filesize with the files in the bucket before attempting to upload them.
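
A sketch of that check using the official AWS SDK mentioned in another issue here (bucket name illustrative; note that multipart uploads have non-MD5 ETags):

var AWS = require('aws-sdk');
var crypto = require('crypto');
var fs = require('fs');

var s3 = new AWS.S3();

s3.listObjects({ Bucket: 'mybucket' }, function (err, data) {
  if (err) { throw err; }

  // Index the bucket contents by key.
  var remote = {};
  data.Contents.forEach(function (o) {
    remote[o.Key] = { size: o.Size, etag: o.ETag.replace(/"/g, '') };
  });

  // Cheap size check first, then MD5 only when sizes match.
  function needsUpload(localPath, key) {
    var stat = fs.statSync(localPath);
    var r = remote[key];
    if (!r || r.size !== stat.size) { return true; }
    var md5 = crypto.createHash('md5')
      .update(fs.readFileSync(localPath))
      .digest('hex');
    return md5 !== r.etag;
  }

  // ...upload only the files for which needsUpload() is true.
});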

No debug output when running `grunt s3`

I'm having trouble getting the s3 grunt task working. I'm currently on grunt 0.4.0. Here is what I have in my config:

    s3: {
        options: {
            key: '<%= localConfig.aws.key %>',
            secret: '<%= localConfig.aws.secret %>',
            bucket: '<%= localConfig.aws.bucket %>',
            access: "public-read"
        },
        deploy: {
            options: {},
            upload: [
                {
                    src: "build/example-<%= pkg.version %>.min.js",
                    dest: "shawn/",
                    gzip: true
                },
                {
                    src: "build/example-<%= pkg.version %>.js",
                    dest: "shawn/"
                }
            ]
        }
    }

The keys are definitely being substituted properly. This is the output:

$ grunt s3
Running "s3" task

Done, without errors.

Without any debug output it's hard to figure out what is going on. Any idea what's wrong?

How to keep subdirectories when uploading?

Hi,

Is it possible to copy a directory structure from src to dest?

So let's say you have:

src/scripts/main.js
src/scripts/vendor/require.js
src/scripts/vendor/jquery.js

But after running an upload with:

upload: [
  {
    src: 'src/scripts/**/*.js',
    dest: '/scripts/'
  }
]

the require.js and jquery.js files are in the /scripts/ dir, instead of /scripts/vendor/ dir.

Cheers,
Alwin
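
From the rel-related issues elsewhere in this tracker, the undocumented rel option appears to do exactly this; assuming it works as described there:

upload: [
  {
    src: 'src/scripts/**/*.js',
    dest: '/scripts/',
    rel: 'src/scripts/' // keeps vendor/require.js under /scripts/vendor/
  }
]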

Feature Request: Allow for "Dry Run" logging/output

I have a moderately complex build process I'm managing, and I'd like to be able to prevent my build script from uploading to S3 until I've verified that all the rules I have in place are correct.

Could a debug: true option be added that would cause all the input files, as well as their intended destination to be logged, rather than performing an actual upload to S3?

Problems with AWS regions and windows

The current version of grunt-s3 uses an old version of knox; package.json explicitly pins "knox": "0.0.9", while the latest available version is 0.4.1. The old knox doesn't know how to calculate correct URLs for S3 buckets outside the default US region (e.g. Ireland), so all such uploads fail with a 307 status code.

Also, the current code doesn't allow a nested folder structure: the destination path is generated using path.join(upload.dest, path.basename(file));, which strips the directory part from the source file.

A better alternative would be to let users define a base path and then generate the destination using something like
var dest = file.replace(options.basePath, "");

At the moment I am using:

upload: [
  {
    src: 'release/**',
    dest: '',
    gzip: true,
    basePath: 'release/'
  }
]

with my custom version, where I want to upload the whole release folder without the source folder name.

enable 'region' configuration of the knox client

Here (and in at least one other place): https://github.com/pifantastic/grunt-s3/blob/master/tasks/lib/s3.js#L98
you use:

// Pick out the configuration options we need for the client.
var client = knox.createClient(_(config).pick([
  'endpoint', 'port', 'key', 'secret', 'access', 'bucket', 'secure'
]));

Could you also enable the 'region' attribute? Like this:

var client = knox.createClient(_(config).pick([
  'region', 'endpoint', 'port', 'key', 'secret', 'access', 'bucket', 'secure'
]));
