serverless-image-resizing's Introduction

Archived

See https://github.com/awslabs/serverless-image-handler instead.

Serverless Image Resizing

Description

Resizes images on the fly using Amazon S3, AWS Lambda, and Amazon API Gateway. Using a conventional URL structure and S3 static website hosting with redirection rules, a request for a resized image that does not yet exist is redirected via API Gateway to a Lambda function, which resizes the image, uploads it to S3, and redirects the requester to the resized image. Subsequent requests for the resized image are served directly from S3.

Usage

  1. Build the Lambda function

    The Lambda function uses sharp for image resizing which requires native extensions. In order to run on Lambda, it must be packaged on Amazon Linux. You can accomplish this in one of two ways:

    • Upload the contents of the lambda subdirectory to an Amazon EC2 instance running Amazon Linux and run npm install, or

    • Use the Amazon Linux Docker container image to build the package on your local system. This repo includes a Makefile that will download Amazon Linux, install Node.js and developer tools, and build the extensions using Docker. Run make all.

  2. Deploy the CloudFormation stack

    Run bin/deploy to deploy the CloudFormation stack. It will create a temporary Amazon S3 bucket, package and upload the function, and create the Lambda function, Amazon API Gateway RestApi, and an S3 bucket for images via CloudFormation.

    The deployment script requires AWS CLI version 1.11.19 or newer. Be sure to set your AWS credentials with aws configure.

  3. Test the function

    Upload an image to the S3 bucket and try to resize it via your web browser to different sizes, e.g. with an image uploaded in the bucket called image.png:

    • http://[BucketWebsiteHost]/300x300/path/to/image.png
    • http://[BucketWebsiteHost]/90x90/path/to/image.png
    • http://[BucketWebsiteHost]/40x40/path/to/image.png

    You can find the BucketWebsiteUrl in the table of outputs displayed after a successful run of the deploy script.

  4. (Optional) Restrict resize dimensions

    To restrict the dimensions the function will create, set the environment variable ALLOWED_DIMENSIONS to a string in the format (HEIGHT)x(WIDTH),(HEIGHT)x(WIDTH),....

    For example: 300x300,90x90,40x40.
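The check itself is simple; here is a minimal sketch (an illustration, not the repo's exact code) of how a handler might validate a requested size against ALLOWED_DIMENSIONS:

```javascript
// Sketch: validate a requested size against ALLOWED_DIMENSIONS,
// e.g. "300x300,90x90,40x40". An unset variable allows any size.
function isAllowedDimension(allowed, width, height) {
  if (!allowed) return true; // no restriction configured
  return allowed
    .split(',')
    .map(s => s.trim())
    .includes(`${width}x${height}`);
}
```

A handler would call this with process.env.ALLOWED_DIMENSIONS and return a 403 for disallowed sizes.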

License

This reference architecture sample is licensed under Apache 2.0.

serverless-image-resizing's People

Contributors

andybab, bigadev, freakingawesome, grav, holygrolli, hyandell, jorrit, jpignata, levous, peterkuiper


serverless-image-resizing's Issues

using ImageMagick

I was just wondering why ImageMagick (maybe through the gm node module) wasn't used for this project, since it's built into the Amazon Linux image that Lambda runs on. That would avoid the extra complexity of having to build this in a Docker container.

EC2 CentOS instead of Amazon Linux?

Hi,

Is it possible to build the project on CentOS 7 with Node.js and developer tools installed? I would like to deploy this project manually by executing npm install on an EC2 CentOS Jenkins instance, then package the lambda directory as a zip file and upload it to Lambda. Is this possible?

Thanks!

HTTPS distribution causing endless loading + allowed dimensions not working

Hi there maybe someone can help me out,

Currently I am facing 2 issues.

I finally got the Lambda working. Since I already had an existing bucket, I adapted all the settings from the CloudFormation stack.
Image generation works fine when I access the bucket URL like
xxxx.s3-website.eu-central-1.amazonaws.com/90x90/somefile.png

Since we need HTTPS, I created a CloudFront distribution that accesses the bucket URL above. But somehow this causes endless loading when I try to access it via HTTPS; HTTP over CloudFront works fine. Even if I remove the redirect rule in S3, it has no impact: HTTP works, HTTPS does not. I'm not sure whether this is a problem with the redirect or a more general CloudFront configuration issue.

The second thing is, I want to restrict the allowed resize dimensions and added the environment variable on the Lambda. But somehow this has no effect; I can still create whatever dimensions I want, e.g.

https://xxx.execute-api.eu-central-1.amazonaws.com/prod?key=key=777x777/somefile.png
or
xxxx.s3-website.eu-central-1.amazonaws.com/777x777/somefile.png


Any hints highly appreciated :)


Right RAM size

Currently your docs/blog recommend using the maximum memory available. Is that strictly necessary? Can we go lower? According to the sharp documentation, it doesn't hold large parts of the image in memory.

QUESTION: Why not use Event in image-resize.yaml:ResizeFunction?

I'm just getting started with AWS SAM and I was wondering why the Api and ResizeFunctionPermission resources are defined within image-resize.yaml. Isn't it possible to just use the AWS::Serverless::Function Event property instead?

According to the AWS SAM docs, "An AWS::Serverless::Api resource need not be explicitly added to a AWS Serverless Application Definition template. A resource of this type is implicitly created from the union of Api events".

deploy shell script errors?

Hello,

I am trying to test this project, but executing the deploy shell script throws errors. The AWS CLI version is 1.11.63. Could there be a syntax error in the assignments?

$ ./bin/deploy 
+ sed -e s/REGION/us-east-2/g -e s/ACCOUNT_ID/**********/g api-template.yaml
+ aws s3 mb s3://temp-serverless-resize-1b6c****f8af25
make_bucket: s3://temp-serverless-resize-f62****fd939/
+ aws cloudformation package --output-template-file=deploy/output.yaml --template-file=image-resize.yaml --s3-bucket=temp-serverless-resize-f5a629fd248fd939
usage: aws [options] <command> <subcommand> [<subcommand> ...] [parameters]
To see help text, you can run:

  aws help
  aws <command> help
  aws <command> <subcommand> help
aws: error: argument operation: Invalid choice, valid choices are:

cancel-update-stack                      | continue-update-rollback                
create-change-set                        | create-stack                            
delete-change-set                        | delete-stack                            
describe-account-limits                  | describe-change-set                     
describe-stack-events                    | describe-stack-resource                 
describe-stack-resources                 | describe-stacks                         
estimate-template-cost                   | execute-change-set                      
get-stack-policy                         | get-template                            
get-template-summary                     | list-change-sets                        
list-stack-resources                     | list-stacks                             
set-stack-policy                         | signal-resource                         
update-stack                             | validate-template                       
wait                                     | help             

Thanks!

invalid ELF header Error

I tried to build the Lambda function on an EC2 instance and in the Amazon Linux Docker container. In both cases I get the error "/var/task/node_modules/sharp/build/Release/sharp.node: invalid ELF header" when I execute the Lambda function.

Error when unzipping and re-zipping function.zip

All the resizing with the initial function.zip works great, but when I unzip the file and zip it again (on macOS, without editing files) and upload it, the Lambda function fails with the following errors in CloudWatch:

Unable to import module 'index': Error
at Function.Module._resolveFilename (module.js:469:15)
at Function.Module._load (module.js:417:25)
at Module.require (module.js:497:17)
at require (internal/module.js:20:19)

If I upload the original function.zip to Lambda again, everything works fine.

Serverless image resizing + CloudFront?

I'm having a little trouble setting up CloudFront to work correctly with this serverless image resizing structure.

I quickly ran into a problem when adding a CloudFront distribution in front of my bucket: CloudFront caches all redirects, including 307 and 302 redirects. After the first redirect, it never hits S3 again but instead immediately redirects each request to API Gateway, resulting in an infinite loop.

As a workaround I completely disabled the caching in CloudFront, but without it I lose a big part of CloudFront's benefits (if it has to fetch the S3 image every time).

How would you make this work, while keeping the caching in CloudFront enabled?

Thank you!

Using the step-by-step guide rather than the CloudFormation

This is just for information, to help people coming to this for the first time and using the step-by-step guide in this very helpful blog post:

https://aws.amazon.com/blogs/compute/resize-images-on-the-fly-with-amazon-s3-aws-lambda-and-amazon-api-gateway/

Everything that follows should be read alongside the blog post. I followed it and got tripped up in a number of areas, so this is for fellow first-timers.

1) Create Bucket

Once you have done this, make a note of YOUR_BUCKET_NAME and add this bucket policy to make it publicly accessible :

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": ["s3:GetObject"],
      "Resource": ["arn:aws:s3:::___YOUR_BUCKET_NAME___/*"]
    }
  ]
}

REMEMBER TO LEAVE THE TRAILING SLASH AND WILDCARD

Then choose Static Website Hosting, enable website hosting and, for Index Document, enter index.html (which does not exist and will never be used).

Then make a note of your bucket ENDPOINT, which will look something like this:

http://___BUCKET_NAME___.s3-website-us-east-1.amazonaws.com

This will be important for testing the Lambda function once it's set up.

2) Create Lambda function

On the first UI page, enter the name resize and then, for Role, choose Create a custom role. I was then taken to a new tab in the AWS IAM console. Leave the defaults except: choose View Policy Document, then Edit, and input the following:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "arn:aws:logs:*:*:*"
    },
    {
      "Effect": "Allow",
      "Action": [ "s3:PutObject", "s3:PutObjectAcl" ],
      "Resource": "arn:aws:s3:::___YOUR_BUCKET_NAME___/*"    
    }
  ]
}

REMEMBER TO LEAVE THE TRAILING SLASH AND WILDCARD

Then you go on to the function page itself, where you:

Choose Function package and upload the .zip file of the contents of the Lambda function.
To configure your function, for Environment variables, add two variables:
For Key, enter BUCKET; for Value, enter the YOUR_BUCKET_NAME that you created above.
For Key, enter URL; for Value, enter the ENDPOINT that you noted above.

For Memory, choose 1536. For Timeout, enter 10 sec. Choose Save, not Save and Test. Wait a while, as the zip file takes some time to upload and process.

3 Create Trigger (API GATEWAY)

On the function page, go to Triggers > Add Trigger.
Choose the dotted square and choose API Gateway.
To allow all users to invoke the API method, for Security, choose Open and then Next.
You need the API hostname, which you can reveal by clicking the right arrow next to

Method: ANY, Resource path: /resize, Authorization: NONE.

This reveals a dropdown with Invoke URL, which will be something like:

https://k5qvgrznqg.execute-api.us-east-1.amazonaws.com/prod/ANY/resize

The API_HOSTNAME is everything after the https://, without the "prod/ANY/resize"

4 Create Redirection Rule

Go back to the S3 bucket's Static Website Hosting tab and insert the redirection rule with the API_HOSTNAME added:

<RoutingRules>
    <RoutingRule>
        <Condition> 
            <HttpErrorCodeReturnedEquals>404</HttpErrorCodeReturnedEquals>
        </Condition>
        <Redirect>
            <HostName>___API_HOSTNAME___</HostName>
            <Protocol>https</Protocol>
            <ReplaceKeyPrefixWith>prod/resize?key=</ReplaceKeyPrefixWith>
            <HttpRedirectCode>307</HttpRedirectCode>
        </Redirect>
    </RoutingRule>
</RoutingRules>

5 Testing Your Lambda Function

Add an image to your bucket.

It's very important to remember to do 2 things:

Make sure the image is publicly readable by changing its permissions
Make sure you are accessing the image via the website ENDPOINT:

http://___BUCKET_NAME___.s3-website-us-east-1.amazonaws.com/image.jpg

NOT

http://s3.amazonaws.com/___BUCKET_NAME___/image.jpg

Hope this helps someone.

Keep image proportions

Hello.
Is there a way to keep the image proportions when resizing?
For example, if the image is 600x300 pixels and I query for a resize of 150x150px, the resulting image is 150x150 pixels instead of 150x75 pixels.
Is there a way to accomplish this?

Thanks a lot.
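In current versions of sharp, passing fit: 'inside' to resize preserves the aspect ratio (very old versions used .resize(w, h).max() instead). The underlying arithmetic, as a runnable sketch:

```javascript
// Compute aspect-preserving dimensions that fit inside maxW x maxH
// without enlarging, mirroring what sharp's `fit: 'inside'` option does.
// Example sharp call (hypothetical handler code, shown for context):
//   sharp(buffer).resize(150, 150, { fit: 'inside' })
function fitInside(srcW, srcH, maxW, maxH) {
  const scale = Math.min(maxW / srcW, maxH / srcH, 1);
  return { width: Math.round(srcW * scale), height: Math.round(srcH * scale) };
}
```

For the example above, a 600x300 source fitted into 150x150 scales by 0.25, giving 150x75.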

Is this setup possible without the ListObjects permission on the S3 bucket?

From what I've read (and experienced) when you don't have the ListObjects permission on the bucket, you get a 403 instead of a 404 when trying to get an image which doesn't exist.

Everything I read simply says to add that permission and it fixes the problem, but in my case I really don't want people to be able to list a bucket's contents - there might be sensitive files. However, the keys are really long hashes, so the chances of someone finding a valid link by accident are close to nil.

Also, I'm thinking that if someone eventually tries a valid hash, the image will be resized and displayed anyway, regardless of whether missing objects return 403 or 404.

So I'm looking for a solution here. Is it possible to either :

  • Make the redirection rule work with 403s (I tried to no avail)?
  • Have S3 return 404 without giving the ListObjects permission to everyone?

Thanks!

Image resizing internal error

Hi,
First of all, thanks for your hard work!
I want to integrate the function with the S3 Lambda API, but after doing everything in the instructions:
https://aws.amazon.com/blogs/compute/resize-images-on-the-fly-with-amazon-s3-aws-lambda-and-amazon-api-gateway

I receive error messages when trying to get an image from this link:
http://creyoshop.s3-website-ap-southeast-1.amazonaws.com/300x300/smartwatch_Moto_360_24(3).jpg

module initialization error: Error
at Error (native)
at Object.Module._extensions..node (module.js:597:18)
at Module.load (module.js:487:32)
at tryModuleLoad (module.js:446:12)
at Function.Module._load (module.js:438:3)
at Module.require (module.js:497:17)
at require (internal/module.js:20:19)
at Object. (/var/task/node_modules/sharp/index.js:12:13)
at Module._compile (module.js:570:32)
at Object.Module._extensions..js (module.js:579:10)
at Module.load (module.js:487:32)
at tryModuleLoad (module.js:446:12)
at Function.Module._load (module.js:438:3)
at Module.require (module.js:497:17)
at require (internal/module.js:20:19)
at Object. (/var/task/index.js:3:13)

Can you help me resolve this problem?

Help, an error is thrown.

2017-03-31T07:41:10.167Z 6835cdb1-15e5-11e7-ae5d-7dd945e4dd5c TypeError: Cannot read property 'key' of undefined
at exports.handler.S3.getObject.promise.then.then (/var/task/index.js:9:40)

Bit more detail on event.queryStringParameters.key

Hi

This looks very helpful.

Is there any chance you could expand a bit on this, at line 13, for a beginner:

const key = event.queryStringParameters.key;

From the following regex in line 14, it looks like queryStringParameters.key gets everything after the initial protocol and host, so for:

http://[BucketWebsiteHost]/300x300/image.png

queryStringParameters.key will return 300x300/image.png. Is this right? It looks more like a path than a key/value pair to me.

I would have thought that a key in a url would come after a ?, so for something like

http://[BucketWebsiteHost]/300x300/image.png?filename=myfile

event.queryStringParameters.key would return 1 key, filename, relating to one value, myfile.

Is there some default position I'm not aware of?

I'm trying to get some clarity so that I can structure my incoming urls in a way that can be easily and repeatably parsed whilst also leaving some flexibility for adding functionality later.

Thanks

Tom
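For background: the S3 redirection rule rewrites a 404'd path like /300x300/image.png into prod/resize?key=300x300/image.png, so event.queryStringParameters.key really is the original object path rather than a classic name/value pair. A sketch of the parsing step, mirroring the regex quoted above (helper name is hypothetical):

```javascript
// Parse a key like "300x300/path/to/image.png" into its parts.
function parseKey(key) {
  const match = key.match(/(\d+)x(\d+)\/(.*)/);
  if (!match) return null; // not in WIDTHxHEIGHT/path form
  return {
    width: parseInt(match[1], 10),
    height: parseInt(match[2], 10),
    originalKey: match[3],
  };
}
```

So for /300x300/image.png the key is 300x300/image.png, and the regex, not the query string, separates dimensions from the object path.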

Query string params not matching.

I spent the last few hours trying to figure out why my code was not working. I figured out the code stopped when trying to match the params with the regex. I tried it multiple times and it did not match for me, so I made a little adjustment. It's possible that I'm not the only one out there with this problem.

const match = key.match(/(\d+)×(\d+)\/(.*)/); - this was the line that caused me problems.

If your matching is not working, replace the regex in the parenthesis with this one:
/(\d+)×(\d+)\/(.*)/i
I'm not good at regex, so it might not be perfect. Feedback is welcome :).

How can I debug this ?

I'm having a problem with one of the policies.

{
    "errorMessage": "Access Denied",
    "errorType": "AccessDenied",
    "stackTrace": [
        "Request.extractError (/var/runtime/node_modules/aws-sdk/lib/services/s3.js:539:35)",
        "Request.callListeners (/var/runtime/node_modules/aws-sdk/lib/sequential_executor.js:105:20)",
        "Request.emit (/var/runtime/node_modules/aws-sdk/lib/sequential_executor.js:77:10)",
        "Request.emit (/var/runtime/node_modules/aws-sdk/lib/request.js:678:14)",
        "Request.transition (/var/runtime/node_modules/aws-sdk/lib/request.js:22:10)",
        "AcceptorStateMachine.runTo (/var/runtime/node_modules/aws-sdk/lib/state_machine.js:14:12)",
        "/var/runtime/node_modules/aws-sdk/lib/state_machine.js:26:10",
        "Request.<anonymous> (/var/runtime/node_modules/aws-sdk/lib/request.js:38:9)",
        "Request.<anonymous> (/var/runtime/node_modules/aws-sdk/lib/request.js:680:12)",
        "Request.callListeners (/var/runtime/node_modules/aws-sdk/lib/sequential_executor.js:115:18)"
    ]
}

Can you check if you run into the same problem?

Keep getting the "Missing Authentication Token" error

I've followed the instructions completely (well, I'm pretty sure I did). I've done the steps probably 5 different times, and I always end up with the same result: Missing Authentication Token when it redirects.

I am able to go to the API and execute a test GET with key=200x200/blue_marble.jpg, and everything works fine (a new folder with the resulting image shows up in the S3 bucket).

Any ideas where I took a wrong turn?

Thanks!
Eric

The specified key does not exist.

I cloned the repo, ran npm install and bin/deploy. Everything in the stack got deployed, but loading https://serverlessimageresize-imagebucket-co9umyu60i6w.s3.amazonaws.com/90x90/Telegram_2.png gives The specified key does not exist.

When selecting API Gateway in Lambda I get:
The API with ID jz13ijhr04 does not include a resource with path /* having an integration arn:aws:lambda:us-east-1:876791237098:function:ServerlessImageResize-ResizeFunction-O1XO17DSC9BI on the ANY method.


Only works when image is in root directory

Currently it uses the original key as the filename (see index.js). I had to modify this to use the key as the whole path so that I could resize images in any directory.
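One way to preserve nested directories (a sketch, and an assumption rather than the repo's exact code) is to build the destination key from the full original path:

```javascript
// Store the resized copy under "<width>x<height>/<full original key>",
// preserving any directory structure in the original key.
function resizedKey(width, height, originalKey) {
  return `${width}x${height}/${originalKey}`;
}
```

With this, 90x90/path/to/image.png resolves back to path/to/image.png regardless of nesting depth.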

Getting this to work with CloudFront

This doesn't work with CloudFront (at least I'm not sure of a way to make it work yet). Since CloudFront caches redirects, it keeps redirecting the browser to the API for the Lambda function (which defeats the purpose).

Any suggestions / ideas are greatly appreciated

image crop issue on different aspect ratios

Hi Author,

Image resize functionality works great if the aspect ratio stays the same, e.g. 300x300, but whenever I try a different ratio, e.g. 580x240, it crops the image.

Can you help with this, please?

Images that are resized to be smaller get larger file size

Hi Guys

Very cool system you have set up. I'm using it to dynamically create thumbnails for different image sets in order to increase my content load speeds, but I found that it is having the opposite effect.
Case in point: the first file in my S3 bucket more than quadrupled in size when resized from 500x750 to 400x400.
Can anyone please help me fix this, as the smaller image should have a smaller file size?
Every single 400x400 image it has created is almost 0.25 MB, even when the original file was much smaller.

File size : 57343
Image dimensions: 500x750
https://s3.amazonaws.com/serverlessimageresize-imagebucket-e91q0598izp8/00exzGI8I8CKfSpQDqCQGeFRjO3mEi0KzoUswz7j3hPim84L.jpg

File size : 237162
Image dimensions: 400x400
https://s3.amazonaws.com/serverlessimageresize-imagebucket-e91q0598izp8/400x400/00exzGI8I8CKfSpQDqCQGeFRjO3mEi0KzoUswz7j3hPim84L.jpg

Is there a way to retrieve image with same dimensions?

Is there a way to retrieve an image by giving only the width?

I need to show small images without cropping. Say I uploaded a 2000x1000 image; when I retrieve it, I just want to give a width, like 500, and have the system automatically return the image scaled accordingly.

The problem is I don't want a cropped image, only a resized one, and I don't have the original dimensions at hand.

(discussion) thumbnail key format

The key format for resized images used by this program is /widthxheight/originalkey. I think /originalkeydir/originalkeyfilename-widthxheight.ext is better because it enables the removal of the original image and all its derivatives from S3 with one wildcard delete operation. If the width and height are the first part of the key, this is much harder.

I wonder if my argument is right, or if there are stronger arguments in favor of the current key format.
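A sketch of the proposed layout (hypothetical helper, not from the repo), which keeps derivatives next to the original so one prefix-based delete can remove them all:

```javascript
// "dir/photo.jpg" + 300x300 -> "dir/photo-300x300.jpg"
// Splits on the last dot so the dimensions land before the extension.
function derivativeKey(originalKey, width, height) {
  const dot = originalKey.lastIndexOf('.');
  const base = dot === -1 ? originalKey : originalKey.slice(0, dot);
  const ext = dot === -1 ? '' : originalKey.slice(dot);
  return `${base}-${width}x${height}${ext}`;
}
```

With this layout, listing or deleting by the prefix dir/photo catches the original and every derivative in one pass.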

Animated gif becomes static

Is there a way to get GIFs to resize and keep the animation? If this is not possible, it would be nice to filter those out and redirect to the original image.

Internal Server Error - Image resizing Using S3

Hi, I followed your tutorial but I'm getting an error:
{"message": "Internal server error"}

I can access my picture with this URL: myBucket.s3-website-ap-xxx.amazonaws.com/test.jpg
but if I try myBucket.s3-website-ap-xxx.amazonaws.com/300×300/test.jpg, the URL changes to
https://yyyyy.execute-api.ap-xxx.amazonaws.com/prod/resize?key=300x300/test.jpg and I get the above error.

I used this link for the function: https://github.com/awslabs/serverless-image-resizing/raw/master/dist/function.zip. I tried Node.js 4.3 and 6.10 with the same results. Did I miss a step?

I am also getting: The deployment package of your Lambda function "resizeForArtwork" is too large to enable inline code editing. However, you can still invoke your function right now.

Also, I had to add "s3:List*" to my bucket policy, otherwise I got access denied.

How to use SSL for image URL

Thank you, the script is awesome, and so is your blog post, it's amazing!

This issue is not directly related to your script, but I want to ask your opinion; maybe you have some ideas that could be useful to me.

The thing is, I want to use the Lambda function in production, so of course I'll need to use SSL.

I've tried using CloudFront, but so far I haven't found a solution. CloudFront can serve the bucket URL over HTTPS, but it stops the bucket's redirection rules, API Gateway, and Lambda from working.

Redirected you too many times error

I got this error
[...].execute-api.ap-southeast-1.amazonaws.com redirected you too many times.
when following the tutorial.
The address bar then redirects you to:
https://[...].execute-api.ap-southeast-1.amazonaws.com/prod/resize?key=//////////130x130/blue_marble.jpg
It turns out the error occurs because const key = event.queryStringParameters.key; already contains a leading slash, e.g. /130x130/blue_marble.jpg, and the code later redirects to a location that adds one more slash: ${URL}/${key}
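A defensive fix (a sketch, assuming the handler normalizes the key before using it) is to strip any leading slashes, so repeated redirects cannot accumulate them:

```javascript
// Remove any leading slashes from the incoming key,
// e.g. "//////130x130/blue_marble.jpg" -> "130x130/blue_marble.jpg".
function normalizeKey(key) {
  return key.replace(/^\/+/, '');
}
```

The redirect URL can then be built as ${URL}/${normalizeKey(key)} without doubling slashes.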

error with the example: The specified key does not exist.

I followed the instructions on the AWS blog to set this project up. Upon testing in the end, I get the following error.

<Code>NoSuchKey</Code>
<Message>The specified key does not exist.</Message>
<Key>200x200/earthSearch.jpg</Key>

This is the same type of error you would get if the image didn't exist in the S3 bucket, which leads me to assume something is wrong with my redirect rules - odd, because it seems like a simple step. I added the rules in the right place.

My prefixes are the same as yours. The only thing I can imagine being different is that I couldn't add dashes inside the HostName tag (it wouldn't let me save with dashes). So, to follow your hello world example, my hostname looked something like the code below.

My API endpoint is live too and hooked up to lambda func. Anything else I am missing?

Thanks for putting this together!

      <HostName>h3ll0w0rld.execute-api.us-east-1.amazonaws.com</HostName>
      <ReplaceKeyPrefixWith>prod/resize?key=</ReplaceKeyPrefixWith>


Cloudformation vs Step by Step Instructions

I followed the step-by-step instructions but could not get it to work. However, when running the CloudFormation stack, everything worked as expected. There seem to be slight differences in the IAM policies and some other config items (e.g. timeout). It would be great if the two paths could be kept in sync.

Small Suggestion

Great stuff and quite useful. We are going to be using this right away (unless we hit a huge Lambda bill in a surprising way).

One suggestion:

Can you remove dist/function.zip from the repository and reset the repo? Due to the committed binary file, cloning the repo takes quite some time.

Module initialization error

Hi,
I tried to set everything up following the tutorial, but I always get this error:

module initialization error: Error
at Error (native)
at Object.Module._extensions..node (module.js:434:18)
at Module.load (module.js:343:32)
at Function.Module._load (module.js:300:12)
at Module.require (module.js:353:17)
at require (internal/module.js:12:17)
at Object.<anonymous> (/var/task/node_modules/sharp/lib/constructor.js:8:15)
at Module._compile (module.js:409:26)
at Object.Module._extensions..js (module.js:416:10)
at Module.load (module.js:343:32)

I already changed to Node.js 4.3, but I still get the same error.
I even tried uploading your function.zip directly, so that I don't change anything in the Lambda code, but I still get the same error.

Compiling Sharp seems to provide different result?

Hi guys, I'm rather new to using Docker, but from what I can tell I'm doing it right, so something is acting up when compiling sharp.

I'm developing on a Windows 10 machine and want to use this library in a production setting, as it's lightweight and easy to modify into something useful and secure. However, when I compile it against the Docker image, everything seemingly goes well. The issue is in one of the node modules, specifically the directory /node_modules/sharp/vendor/*, which has a completely different total size from the pre-compiled version provided in the dist/ folder. This seems to be a symptom of the root of the issue.

Some follow-up information:
The npm install output appears normal to me.

The /sharp/vendor directory is 48.8 MB in the pre-compiled version, compared to 19.8 MB in my compiled version. All the same files are present, but some of them are empty or differ in size.

After uploading an unedited code base (simply my recompilation attempt) and attempting to resize an image through the normal process (which works for the precompiled zip), I receive the suspected error that the module cannot be loaded successfully.

Any help regarding this would be most appreciated! I'm happy to follow up with any more information that may help solve this (hopefully I'm doing something simple and dumb).

HTTPS Redirect Possible?

For the Lambda function environment variable BUCKET, the walkthrough shows an HTTP endpoint linked to the S3 static site. If I use HTTPS instead, the initial resize request never resolves.

Using HTTP is a problem, though, since it shows the mixed-content insecure warning in Chrome.

Is it possible to have HTTPS end-to-end?

Internal server error

Hi, I followed your tutorial but I'm getting an error:
{"message": "Internal server error"}

I can access my picture with this URL: myBucket.s3-website-ap-xxx.amazonaws.com/test.jpg
but if I try myBucket.s3-website-ap-xxx.amazonaws.com/300x300/test.jpg, the URL changes to
https://yyyyy.execute-api.ap-xxx.amazonaws.com/prod/resize?key=300x300/test.jpg and I get the above error.

I used this link for the function: https://github.com/awslabs/serverless-image-resizing/raw/master/dist/function.zip. I tried Node.js 4.3 and 6.10 with the same results. Did I miss a step?

Also, I had to add "s3:List*" to my bucket policy, otherwise I got access denied.

Example does not work for regions that use AWS4-HMAC-SHA256

The example does not work for regions that use AWS4-HMAC-SHA256 (e.g. Frankfurt). A quick fix to make it work:

const S3 = new AWS.S3({
  bucket: process.env.BUCKET,
  region: process.env.REGION,
  signatureVersion: process.env.SIGNATURE_VERSION
});

It would be nice if the documentation would include a mention about this.
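In the aws-sdk for Node.js (v2), signatureVersion: 'v4' is the documented S3 constructor option for Signature Version 4 regions, so it can also be hard-coded instead of read from an extra environment variable. A minimal sketch (the region default is illustrative):

```javascript
// Client options for Signature Version 4 regions such as eu-central-1.
// `signatureVersion` is a real aws-sdk v2 S3 option; REGION is an assumed env var.
const s3Options = {
  region: process.env.REGION || 'eu-central-1', // illustrative default
  signatureVersion: 'v4',
};
// const S3 = new (require('aws-sdk')).S3(s3Options);
```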

. /bin/deploy not working

When using the deploy command, my terminal window gets closed and I do not know what is happening. When using the AWS EC2 instance option, my SSH connection gets closed.
Below are the actions I took. Am I missing something?

Steps performed on the EC2-Instance:

    git clone https://github.com/awslabs/serverless-image-resizing/
    cd lambda
    npm install
    cd ../bin
    . deploy

Steps performed on my local machine:

    git clone https://github.com/awslabs/serverless-image-resizing/
    make all
     . deploy

What's stopping users from spamming the resize event?

What's stopping a user from spamming the resize event by entering various widths and heights for the image, like:

https://www.domain.com/300x300/image.png
https://www.domain.com/301x301/image.png
https://www.domain.com/302x302/image.png
https://www.domain.com/303x303/image.png
etc

Is it possible to specify what widths and heights are allowed? For example, only allow images to be resized to 500x500 and 250x250, but no others?

Feature request: Separate source and resized buckets

Hello,

We already have an existing bucket, and now we need the resized images to go to another bucket. Is it possible to have separate source and destination buckets?

We could of course download the source and change it ourselves, but it would be nice to have the feature built in.

My regards.

Suggestion: Remove the need for Amazon Linux

This is a promising project but the repo name contradicts the requirements spelled out in the README.

The Lambda function uses sharp for image resizing which requires native extensions. In order to run on Lambda, it must be packaged on Amazon Linux.

If "serverless image resizing" requires Amazon Linux, then it's not serverless at all, is it?

Obviously, Lambda is still a very young platform (in alpha?), so I don't want to ask for the stars and the moon - just a way to make this repo easier to use and develop off of.
