
feathers-blob's Introduction

feathers-blob

Node.js CI Dependency Status Download Status

Feathers abstract blob store service

Installation

npm install feathers-blob --save

Also install an abstract-blob-store compatible module.

API

const BlobService = require('feathers-blob')

const blobService = BlobService(options)

  • options.Model is an instantiated store that implements the abstract-blob-store API
  • options.id is a string 'key' for the blob identifier.
  • options.returnUri defaults to true; set it to false to omit the data URI from the output.
  • options.returnBuffer defaults to false; set it to true to include the buffer in the output.

Tip: returnUri/returnBuffer are mutually exclusive.

If you only want a buffer output instead of a data URI on create/get operations, set returnBuffer to true and returnUri to false.

If you need both, use the default options, then extract the buffer from the data URI on the client-side to avoid transferring the data twice over the wire.
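The interplay of the two options can be pictured with a small sketch. This mirrors the documented behavior; it is an illustration, not feathers-blob's actual source:

```javascript
// Sketch of how returnUri / returnBuffer shape the service output.
function shapeOutput({ id, uri, buffer, size }, { returnUri = true, returnBuffer = false } = {}) {
  return {
    id,
    ...(returnUri && { uri }),       // data URI only when returnUri is true
    ...(returnBuffer && { buffer }), // raw buffer only when returnBuffer is true
    size
  };
}

const out = shapeOutput(
  { id: 'abc.txt', uri: 'data:text/plain;base64,aGk=', buffer: Buffer.from('hi'), size: 2 },
  { returnUri: false, returnBuffer: true }
);
// `out` now contains `buffer` but no `uri`
```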

blobService.create(body, params)

where the input body is an object with either:

  • a key uri pointing to data URI of the blob,
  • a key buffer pointing to raw data buffer of the blob along with its contentType (i.e. MIME type).
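Both accepted shapes can be built with Node's built-ins alone; this sketch uses a hypothetical text blob:

```javascript
// Two equivalent ways to describe the same blob for blobService.create().
const content = Buffer.from('hello world');

// 1. As a data URI
const bodyWithUri = {
  uri: `data:text/plain;base64,${content.toString('base64')}`
};

// 2. As a raw buffer plus its MIME type
const bodyWithBuffer = {
  buffer: content,
  contentType: 'text/plain'
};
```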

Optionally, you can specify the blob id in the body; it can be the file path where you want to store the file. Otherwise it defaults to ${hash(content)}.${extension(contentType)}.

Tip: You can use Feathers hooks to customize the id, since you may not want clients to write wherever they want.

returns output data of the form:

{
  [this.id]: `${hash(content)}.${extension(contentType)}`,
  uri: body.uri, // when the returnUri option is true
  buffer: body.buffer, // when the returnBuffer option is true
  size: length(content)
}

blobService.get(id, params)

returns output data of the same form as create.

blobService.remove(id, params)

Params:

Query:

  • VersionId (string): Version ID of the document to access if using a versioned S3 bucket

Example:

blobService.get('my-file.pdf', {
  query: {VersionId: 'xslkdfjlskdjfskljf.sdjfdkjfkdjfd'},
})

Example

const { getBase64DataURI } = require('dauria');
const AWS = require('aws-sdk');
const S3BlobStore = require('s3-blob-store');
const feathers = require('@feathersjs/feathers');
const BlobService = require('feathers-blob');

const s3 = new AWS.S3({
  endpoint: 'https://{service}.{region}.{provider}.com',
  accessKeyId: process.env.AWS_ACCESS_KEY_ID,
  secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
});

const blobStore = S3BlobStore({
  client: s3,
  bucket: 'feathers-blob'
});

const blob = {
  uri: getBase64DataURI(Buffer.from('hello world'), 'text/plain')
}

const app = feathers();

app.use('/upload', BlobService({
  Model: blobStore
}));

const blobService = app.service('upload');

blobService.create(blob).then(function (result) {
  console.log('Stored blob with id', result.id);
}).catch(err => {
  console.error(err);
});

Should you need to change your bucket's options, such as permissions, pass a params.s3 object using a before hook.

app.service('upload').before({
  create(hook) {
    hook.params.s3 = { ACL: 'public-read' }; // makes uploaded files public
  }
});

For a more complete example, see examples/app which can be run with npm run example.

Tests

Tests can be run by installing the node modules and running npm run test.

To test the S3 read/write capabilities set the environmental variable S3_BUCKET to the name of a bucket you have read/write access to. Enable the versioning functionality on the bucket.

License

Copyright (c) 2018

Licensed under the MIT license.

feathers-blob's People

Contributors

ahdinosaur, andys8, belal-mazlom, bertho-zero, claustres, corymsmith, daffl, dasantonym, davidbludlow, florianbepunkt, fratzinger, green3g, greenkeeper[bot], lwhiteley, mairu, mcchrish, mdartic, nikitavlaznev, nuran-jafarov, salttis, sarkistlt


feathers-blob's Issues

An in-range update of aws-sdk is breaking the build 🚨

The devDependency aws-sdk was updated from 2.335.0 to 2.336.0.

🚨 View failing branch.

This version is covered by your current version range and after updating it in your project the build failed.

aws-sdk is a devDependency of this project. It might not break your production code or affect downstream projects, but probably breaks your build or test tools, which may prevent deploying or publishing.

Status Details
  • continuous-integration/travis-ci/push: The Travis CI build failed (Details).

Release Notes for Release v2.336.0

See changelog for more information.

Commits

The new version differs by 2 commits.

  • 907d609 Updates SDK to v2.336.0
  • 41c9d5a make onAsync event listener public (#2299)

See the full diff

FAQ and help

There is a collection of frequently asked questions. If those don’t help, you can always ask the humans behind Greenkeeper.


Your Greenkeeper Bot 🌴

Refactoring proposal based on AWS SDK and presigned URLs

feathers-blob currently relies on a model conforming to the abstract-blob-store interface. Most of the time it appears feathers-blob is used to store data in cloud object storages, e.g. AWS S3 or Google Cloud Storage, thus relying on e.g. https://github.com/jb55/s3-blob-store or https://github.com/maxogden/google-cloud-storage under-the-hood, which do not seem to be maintained anymore. For instance the S3 module uses https://github.com/nathanpeck/s3-upload-stream as a dependency, which is now deprecated for a long time.

Now that most cloud providers offer an object-storage interface similar to Amazon S3's, we wonder whether refactoring feathers-blob directly on top of an up-to-date version of the AWS SDK wouldn't be the most relevant option.

Moreover, a lot of issues indicate that file uploading/downloading is still a challenging task for most people who have to understand a lot of concepts like blobs, data uris, multipart support, the different client/server responsibilities, etc. In order to simplify this we could also rely on presigned URLs.

At Kalisio we have already started an effort with something able to replace feathers-blob based on this proposal. It looks like this:

  • backend service
    • create operation => generates a presigned URL to be used to PUT the object based on a key
    • get operation => generates a presigned URL to be used to GET the object based on a key
    • upload/download proxy routes to cloud provider in order to avoid CORS problems in constrained environments (required if your client cannot directly access the provider service)
  • additional frontend service methods
    • upload => get a signed URL and post data to either the proxy or the provider service directly
    • download => get signed URL and read data from either the proxy or the provider service directly

Let us know what you think about this, notably whether feathers-blob handles other use cases that would not benefit from this proposal. Otherwise, any help is welcome to upgrade this module.

An in-range update of aws-sdk is breaking the build 🚨

The devDependency aws-sdk was updated from 2.347.0 to 2.348.0.

🚨 View failing branch.

This version is covered by your current version range and after updating it in your project the build failed.

aws-sdk is a devDependency of this project. It might not break your production code or affect downstream projects, but probably breaks your build or test tools, which may prevent deploying or publishing.

Status Details
  • continuous-integration/travis-ci/push: The Travis CI build failed (Details).

Release Notes for Release v2.348.0

See changelog for more information.

Commits

The new version differs by 1 commit.

See the full diff

FAQ and help

There is a collection of frequently asked questions. If those don’t help, you can always ask the humans behind Greenkeeper.


Your Greenkeeper Bot 🌴

An in-range update of aws-sdk is breaking the build 🚨

The devDependency aws-sdk was updated from 2.329.0 to 2.330.0.

🚨 View failing branch.

This version is covered by your current version range and after updating it in your project the build failed.

aws-sdk is a devDependency of this project. It might not break your production code or affect downstream projects, but probably breaks your build or test tools, which may prevent deploying or publishing.

Status Details
  • continuous-integration/travis-ci/push: The Travis CI build failed (Details).

Release Notes for Release v2.330.0

See changelog for more information.

Commits

The new version differs by 2 commits.

  • d5ea34e Updates SDK to v2.330.0
  • 19a5b59 fix: type was incorrect for Service.setupRequestListeners() (#2216)

See the full diff

FAQ and help

There is a collection of frequently asked questions. If those don’t help, you can always ask the humans behind Greenkeeper.


Your Greenkeeper Bot 🌴

docs: get does not return buffer

Steps to reproduce

A Service.get() operation does not return data in the same format as create (as stated in the Readme):

blobService.get(id, params)
returns output data of the same form as create.

Service.create() accepts an object with either a uri or a Buffer, but get only returns a uri.

Expected behavior

Buffer object should be provided or docs clarified.

I can put together a PR. Please let me know if this is an issue with the docs or the get implementation (or intended).

Character encoding information is lost when saving a data URI

Steps to reproduce

  • Tell us what broke. The more detailed the better.

A UTF-8 encoded HTML string saved to the blob store will be retrieved with no encoding specified, leading to garbled characters.

  • If you can, please create a simple example that reproduces the issue
const html = '<p>don’t</p>'

// convert the HTML string to a Buffer
const buffer = Buffer.from(html)

// convert the Buffer to a Base64-encoded data URI
const uri = dauria.getBase64DataURI(buffer, 'text/html;charset=utf-8')

console.log(uri.substr(0, uri.indexOf(',')))
// data:text/html;charset=utf-8;base64

// store the HTML blob
const saved = await app.service('blobs').create({ uri })

// load the HTML blob
const blob = await app.service('blobs').get(saved.id)

console.log(blob.uri.substr(0, blob.uri.indexOf(',')))
// data:text/html;base64

Expected behavior

The charset information should be preserved.

Actual behavior

The charset information is lost. This is because only the part of the data URI after the prefix is stored, then on retrieval the extension of the blob's filename is used to guess what the "content-type" part of the prefix should be.
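A minimal sketch of the described behavior (the re-derived content type below stands in for the extension-based guess):

```javascript
// Only the payload after the comma is stored; on retrieval the prefix is
// rebuilt from a content type guessed from the file extension, so any
// `;charset=...` parameter in the original prefix is lost.
const original = 'data:text/html;charset=utf-8;base64,PGI+aGk8L2I+';

const storedPayload = original.slice(original.indexOf(',') + 1);
const guessedType = 'text/html'; // derived from a `.html` extension, no charset
const roundTripped = `data:${guessedType};base64,${storedPayload}`;

console.log(roundTripped); // the charset parameter is gone
```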

System configuration


feathers-blob v1.2.0

Use filename in new hash

In my system, if 2 people upload the same file with different file names, I would like the blob to save it twice. Is this possible?

How to resize an image before create it ?

Expected behavior

I want to resize image before create it and create two versions of my image (resized and original).

System configuration

feathers-blob :"^1.3.1"
NodeJS version:v8.4.0
React Native Version:"0.52.0"

Add TypeScript definitions

Steps to reproduce

  • Create a new feathers project with typescript
  • try to import import BlobService from 'feathers-blob';

Expected behavior

It should import the module without issues

Actual behavior

I get the following error:

Could not find a declaration file for module 'feathers-blob'. '*' implicitly has an 'any' type. Try `npm install @types/feathers-blob` if it exists or add a new declaration (.d.ts) file containing `declare module 'feathers-blob';`

Getting socket time out on upload large volume stream (100GB)

Steps to reproduce

The code below works fine up to 50 GB, but when running `top` on the current pod, memory is still not released after the job completes:

CPU(cores): 34m, MEMORY(bytes): 45675Mi

Expected behavior

Actual behavior

Exception in thread "Thread-11" java.io.UncheckedIOException: javax.net.ssl.SSLException: java.net.SocketException: Connection reset
at software.amazon.awssdk.utils.async.StoringSubscriber$Event.runtimeError(StoringSubscriber.java:181)
at software.amazon.awssdk.utils.async.ByteBufferStoringSubscriber.transferTo(ByteBufferStoringSubscriber.java:112)
at software.amazon.awssdk.utils.async.ByteBufferStoringSubscriber.blockingTransferTo(ByteBufferStoringSubscriber.java:134)
at software.amazon.awssdk.services.s3.internal.crt.S3CrtRequestBodyStreamAdapter.sendRequestBody(S3CrtRequestBodyStreamAdapter.java:48)
Caused by: javax.net.ssl.SSLException: java.net.SocketException: Connection reset
at sun.security.ssl.Alert.createSSLException(Alert.java:127)
at sun.security.ssl.TransportContext.fatal(TransportContext.java:370)
at sun.security.ssl.TransportContext.fatal(TransportContext.java:313)
at sun.security.ssl.TransportContext.fatal(TransportContext.java:308)
at sun.security.ssl.SSLTransport.decode(SSLTransport.java:141)
at sun.security.ssl.SSLSocketImpl.decode(SSLSocketImpl.java:1293)
at sun.security.ssl.SSLSocketImpl.readApplicationRecord(SSLSocketImpl.java:1260)
at sun.security.ssl.SSLSocketImpl.access$300(SSLSocketImpl.java:75)
at sun.security.ssl.SSLSocketImpl$AppInputStream.read(SSLSocketImpl.java:926)
at org.apache.http.impl.io.SessionInputBufferImpl.streamRead(SessionInputBufferImpl.java:137)
at org.apache.http.impl.io.SessionInputBufferImpl.read(SessionInputBufferImpl.java:197)
at org.apache.http.impl.io.ContentLengthInputStream.read(ContentLengthInputStream.java:176)
at org.apache.http.conn.EofSensorInputStream.read(EofSensorInputStream.java:135)
at com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:90)
at com.amazonaws.event.ProgressInputStream.read(ProgressInputStream.java:180)
at com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:90)
at com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:90)
at com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:90)
at com.amazonaws.event.ProgressInputStream.read(ProgressInputStream.java:180)
at com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:90)
at com.amazonaws.util.LengthCheckInputStream.read(LengthCheckInputStream.java:107)
at com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:90)
at com.amazonaws.services.s3.internal.S3AbortableInputStream.read(S3AbortableInputStream.java:125)
at com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:90)
at java.io.FilterInputStream.read(FilterInputStream.java:133)
at software.amazon.awssdk.core.internal.io.SdkLengthAwareInputStream.read(SdkLengthAwareInputStream.java:65)
at java.io.FilterInputStream.read(FilterInputStream.java:107)
at software.amazon.awssdk.utils.async.InputStreamConsumingPublisher.doBlockingWrite(InputStreamConsumingPublisher.java:55)
at software.amazon.awssdk.core.async.BlockingInputStreamAsyncRequestBody.writeInputStream(BlockingInputStreamAsyncRequestBody.java:76)
at software.amazon.awssdk.core.internal.async.InputStreamWithExecutorAsyncRequestBody.doBlockingWrite(InputStreamWithExecutorAsyncRequestBody.java:108)
at software.amazon.awssdk.core.internal.async.InputStreamWithExecutorAsyncRequestBody.lambda$subscribe$0(InputStreamWithExecutorAsyncRequestBody.java:81)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Suppressed: java.net.SocketException: Broken pipe (Write failed)
at java.net.SocketOutputStream.socketWrite0(Native Method)
at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:111)
at java.net.SocketOutputStream.write(SocketOutputStream.java:155)
at sun.security.ssl.SSLSocketOutputRecord.encodeAlert(SSLSocketOutputRecord.java:83)
at sun.security.ssl.TransportContext.fatal(TransportContext.java:401)
... 34 more
Caused by: java.net.SocketException: Connection reset
at java.net.SocketInputStream.read(SocketInputStream.java:210)
at java.net.SocketInputStream.read(SocketInputStream.java:141)
at sun.security.ssl.SSLSocketInputRecord.read(SSLSocketInputRecord.java:475)
at sun.security.ssl.SSLSocketInputRecord.readFully(SSLSocketInputRecord.java:458)
at sun.security.ssl.SSLSocketInputRecord.decodeInputRecord(SSLSocketInputRecord.java:242)
at sun.security.ssl.SSLSocketInputRecord.decode(SSLSocketInputRecord.java:180)
at sun.security.ssl.SSLTransport.decode(SSLTransport.java:110)
... 31 more

and stack trace:

org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.invokeJoinpoint(CglibAopProxy.java:793)\n\t\tat org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163)\n\t\tat org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.proceed(CglibAopProxy.java:763)\n\t\tat org.springframework.aop.aspectj.MethodInvocationProceedingJoinPoint.proceed(MethodInvocationProceedingJoinPoint.java:89)\n\t\tat org.springframework.cloud.sleuth.instrument.async.TraceAsyncAspect.traceBackgroundThread(TraceAsyncAspect.java:64)\n\t\tat sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)\n\t\tat sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)\n\t\tat sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)\n\t\tat java.lang.reflect.Method.invoke(Method.java:498)\n\t\tat org.springframework.aop.aspectj.AbstractAspectJAdvice.invokeAdviceMethodWithGivenArgs(AbstractAspectJAdvice.java:634)\n\t\tat org.springframework.aop.aspectj.AbstractAspectJAdvice.invokeAdviceMethod(AbstractAspectJAdvice.java:624)\n\t\tat org.springframework.aop.aspectj.AspectJAroundAdvice.invoke(AspectJAroundAdvice.java:72)\n\t\tat org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186)\n\t\tat org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.proceed(CglibAopProxy.java:763)\n\t\tat org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:97)\n\t\tat org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186)\n\t\tat org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.proceed(CglibAopProxy.java:763)\n\t\tat org.springframework.aop.interceptor.AsyncExecutionInterceptor.lambda$invoke$0(AsyncExecutionInterceptor.java:115)\n\t\tat java.util.concurrent.FutureTask.run(FutureTask.java:266)\n\t\tat 
org.springframework.cloud.sleuth.instrument.async.TraceRunnable.run(TraceRunnable.java:64)\n\t\tat net.chase.ccb.photon.core.concurrent.PhotonRunnable.run(PhotonRunnable.java:34)\n\t\tat net.chase.ccb.photon.core.concurrent.PhotonRunnable.run(PhotonRunnable.java:34)\n\t\tat java.lang.Thread.run(Thread.java:748)\n\tCaused by: java.util.concurrent.CompletionException: software.amazon.awssdk.core.exception.SdkClientException: Failed to send the request: A callback has reported failure.\n\t\tat software.amazon.awssdk.utils.CompletableFutureUtils.errorAsCompletionException(CompletableFutureUtils.java:65)\n\t\tat software.amazon.awssdk.core.internal.http.pipeline.stages.AsyncExecutionFailureExceptionReportingStage.lambda$execute$0(AsyncExecutionFailureExceptionReportingStage.java:51)\n\t\tat java.util.concurrent.CompletableFuture.uniHandle(CompletableFuture.java:836)\n\t\tat java.util.concurrent.CompletableFuture$UniHandle.tryFire(CompletableFuture.java:811)\n\t\tat java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)\n\t\tat java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1990)\n\t\tat software.amazon.awssdk.utils.CompletableFutureUtils.lambda$forwardExceptionTo$0(CompletableFutureUtils.java:79)\n\t\tat java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774)\n\t\tat java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750)\n\t\tat java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)\n\t\tat java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1990)\n\t\tat software.amazon.awssdk.core.internal.http.pipeline.stages.AsyncRetryableStage$RetryingExecutor.maybeAttemptExecute(AsyncRetryableStage.java:103)\n\t\tat software.amazon.awssdk.core.internal.http.pipeline.stages.AsyncRetryableStage$RetryingExecutor.maybeRetryExecute(AsyncRetryableStage.java:184)\n\t\tat 
software.amazon.awssdk.core.internal.http.pipeline.stages.AsyncRetryableStage$RetryingExecutor.lambda$attemptExecute$1(AsyncRetryableStage.java:159)\n\t\tat java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774)\n\t\tat java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750)\n\t\tat java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)\n\t\tat java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1990)\n\t\tat software.amazon.awssdk.utils.CompletableFutureUtils.lambda$forwardExceptionTo$0(CompletableFutureUtils.java:79)\n\t\tat java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774)\n\t\tat java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750)\n\t\tat java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)\n\t\tat java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1990)\n\t\tat software.amazon.awssdk.core.internal.http.pipeline.stages.MakeAsyncHttpRequestStage.lambda$null$0(MakeAsyncHttpRequestStage.java:103)\n\t\tat java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774)\n\t\tat java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750)\n\t\tat java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)\n\t\tat java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1990)\n\t\tat software.amazon.awssdk.core.internal.http.pipeline.stages.MakeAsyncHttpRequestStage.lambda$executeHttpRequest$3(MakeAsyncHttpRequestStage.java:165)\n\t\tat java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774)\n\t\tat java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750)\n\t\tat java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:456)\n\t\tat 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)\n\t\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)\n\t\t... 1 common frames omitted\n\tCaused by: software.amazon.awssdk.core.exception.SdkClientException: Failed to send the request: A callback has reported failure.\n\t\tat software.amazon.awssdk.core.exception.SdkClientException$BuilderImpl.build(SdkClientException.java:111)\n\t\tat software.amazon.awssdk.core.exception.SdkClientException.create(SdkClientException.java:43)\n\t\tat software.amazon.awssdk.services.s3.internal.crt.S3CrtResponseHandlerAdapter.handleError(S3CrtResponseHandlerAdapter.java:127)\n\t\tat software.amazon.awssdk.services.s3.internal.crt.S3CrtResponseHandlerAdapter.onFinished(S3CrtResponseHandlerAdapter.java:93)\n\t\tat software.amazon.awssdk.crt.s3.S3MetaRequestResponseHandlerNativeAdapter.onFinished(S3MetaRequestResponseHandlerNativeAdapter.java:24)\n"

System configuration

amazonS3Client = S3AsyncClient.crtBuilder()
.credentialsProvider(StaticCredentialsProvider
.create(AwsSessionCredentials
.create("AccessKey", "SecretKey", "sessionToken"())))
.region(Region.US_EAST_1)
.maxConcurrency(500).minimumPartSizeInBytes((long) 12514185)
.targetThroughputInGbps(20.0)
.httpConfiguration(S3CrtHttpConfiguration.builder().proxyConfiguration(S3CrtProxyConfiguration.builder()
.host("").port()).build()).build())
.build();

        PutObjectRequest putObjectRequest = PutObjectRequest.builder().bucket(<Bucket>).key(filePath).build();

        CompletableFuture<PutObjectResponse> putObjectResponse =
                amazonS3Client.putObject(putObjectRequest,
                        AsyncRequestBody.fromInputStream(inputStream, fileSize,  Executors.newFixedThreadPool( 10)));
        putObjectResponse.join();

Metadata is not passed to the callback of createWriteStream

Steps to reproduce

In the createWriteStream method, after the image upload succeeds, the callback does not receive the metadata:

createWriteStream(opts: any, done: any) {
    let { key } = opts;
    return this.cloudinary.uploader.upload_stream({}, function (err, image) {
      done(err, { name: image.public_id });
    });
  }

Expected behavior

I expected the promise resolver to receive the metadata for use in the response.

Unable to set ACL permissions for S3

When uploading files to Amazon S3 I'm unable to set them as public-read. The create hook accepts a second param which I thought was intended for this, but doing:

uploadsService.create({ uri: this.image }, { acl: 'public-read' });

Always results in Access Denied errors while trying to get those images from S3.

feathers-blob service saves docx with .bin extension

Steps to reproduce

  • Use feathers-blob to receive an uploaded file with a .docx extension. The file's base64 string is as below:

"data:application/octet-stream;base64,UEsDBBQABgAIAAAAIQCz8LkduwEAAFwIAAATAAgCW0NvbnRlbnRfVHlwZXNdLnhtbCCiBAIooAACAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
... (continue lots of stuff)"

Expected behavior

The file should be saved with a .docx extension.

Actual behavior

It is saved in the upload folder with a .bin extension.
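The extension comes from the MIME type declared in the data URI, and application/octet-stream is a generic type that commonly maps to .bin. The small (hypothetical) helper below shows that the URI above really declares application/octet-stream; sending the real docx MIME type in the URI should fix the extension:

```javascript
// Hypothetical helper extracting the MIME type from a data URI.
function mimeFromDataUri(uri) {
  const match = /^data:([^;,]+)/.exec(uri);
  return match ? match[1] : null;
}

console.log(mimeFromDataUri('data:application/octet-stream;base64,UEsDBBQ...'));
// generic type -> .bin extension

// A docx upload should instead declare:
// data:application/vnd.openxmlformats-officedocument.wordprocessingml.document;base64,...
```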

System configuration

Module versions (especially the part that's not working):

"@feathersjs/feathers": "^3.1.7",
"feathers-blob": "^2.0.1",
"fs-blob-store": "^5.2.1"

NodeJS version: v8.11.2

Operating System: Win 10 Pro

Browser Version: Google Chrome Version 73.0.3627.0 (version does not matter)

Code Structure:

I have uploads.service.js (generated by the Feathers CLI) with a before create hook named createBlob.

createBlob calls the blobs service using the code below:

    const blob = {
      uri: context.data.uri
    }

    const result = await context.app.service('blobs').create(blob)
      .then(function (response) {
        console.log('Blob saved!', JSON.stringify(response.size), " bytes");
        return response;
      }).catch(err => {
        console.log(ErrSavingBlob.message, " ", err.message)
        throw new ErrSavingBlob();
      })

My code for blobs.service (generated by the Feathers CLI) is as follows:

const {CONST} = require('../../util/const');
const hooks = require('./blobs.hooks');

const blobService = require('feathers-blob');
const fs = require('fs-blob-store');

const blobStorage = fs(CONST.uploadPath);

module.exports = function (app) {
  app.use('/blobs', blobService({
    Model: blobStorage
  }));
  const service = app.service('blobs');
  service.hooks(hooks);
};

Make data uri encoding optional

Data URIs are great for typical image upload/download; however, for other kinds of data, such as raw binary data, they add a layer of complexity that is not really useful.

The usage of data URIs could remain the default (for backward compatibility), but the usage of raw data buffers could be enforced by either a service constructor option or a query parameter.

[Question] Data uri VS Raw data buffer

Hello,

Not a bug here :)

I was just wondering what the differences are between using a data URI and a raw data buffer.

I've been using feathers-blob for several years, and since I always follow the Feathers docs when starting a project, I've used data URIs every time.

But as I'm starting a new project, I wonder whether a raw data buffer is better (or worse) than a data URI, and what the pros and cons are.

My main concern is performance.

Thanks for your work!

Add abstract-blob-store as a dependency

See #81. If abstract-blob-store is needed to compile the project, it should be a production dependency. It's currently a development dependency, so it is not installed when this module is used in other projects.

Could we have the contentType in the create result ?

This is not a bug, but more a feature request if it sounds useful.

When I upload a new file that is an image, I would like to create a thumbnail right after it.

By using the returnBuffer option, I can retrieve the buffer and give it to sharp module.

But I need to do this only if it's an image, and I see that feathers-blob already analyzes the uri to retrieve the MIME type.

Do you think it would be nice to add the retrieved contentType (or the one initially given for a buffer) to the response?

This could give the following response:

{
  [this.id]: id,
  ...(this.returnBuffer && { buffer }),
  ...(this.returnUri && { uri }),
  size: buffer.length,
  contentType
}

Steps to reproduce

When I upload an image with the uri param, I would like to generate a thumbnail from the buffer retrieved by the feathers-blob service, and I would like to access the (computed or given) contentType in the service.
Today, the contentType needs to be recomputed from the uri param.

Expected behavior


Return the contentType in the service response.

Actual behavior


No contentType returned.

System configuration

Don't think it's useful.

Passing the option returnUri: false still returns uri...

Similar to this issue

Steps to reproduce

use the following config
{ Model: blobStore, returnUri: false }

Expected behavior

{
  "id": "filename.zip"
}

The uri should not be returned.

Actual behavior

returns uri

{
  "id": "filename.zip",
  "uri": "data:application/zip;base64,UEsDBAoAAAAAAHgnfE8AAAAAAAAAAAAAAAAQAAkAYm9vdHN0cmFwLTQuNC4xL1VUBQABxcTfXVBLAwQKAAAACAB4...."
}

System configuration


Module versions (especially the part that's not working):
2.2.0

NodeJS version:
v13.1.0

Operating System:
ubuntu 18.04

Object which key contains '/' cannot be retrieved by REST on S3

Steps to reproduce

Create a storage service targeting a S3 bucket like bucket. Try to access an object in a "sub-directory" with a key like bucket/sub/file.

Expected behavior

The object is correctly retrieved.

Actual behavior

A 404 error is raised. It's probably because Express interprets an id containing a / as a different route.
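A common client-side workaround (an assumption, not an official fix) is to URL-encode the / in the key, so Express sees a single path segment:

```javascript
// Percent-encode the separator so `sub/file` travels as one route segment.
const key = 'sub/file';
const encoded = encodeURIComponent(key);
console.log(encoded); // 'sub%2Ffile'
// e.g. GET /storage/sub%2Ffile instead of GET /storage/sub/file
```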

System configuration


Module versions: 2.1

NodeJS version: 8.16

Operating System: Windows

An in-range update of aws-sdk is breaking the build 🚨

The devDependency aws-sdk was updated from 2.320.0 to 2.321.0.

🚨 View failing branch.

This version is covered by your current version range and after updating it in your project the build failed.

aws-sdk is a devDependency of this project. It might not break your production code or affect downstream projects, but probably breaks your build or test tools, which may prevent deploying or publishing.

Status Details
  • continuous-integration/travis-ci/push: The Travis CI build failed (Details).

Release Notes for Release v2.321.0

See changelog for more information.

Commits

The new version differs by 1 commits.

See the full diff

FAQ and help

There is a collection of frequently asked questions. If those don’t help, you can always ask the humans behind Greenkeeper.


Your Greenkeeper Bot 🌴

Route.post() requires a callback function but got a Object

If we use the blob service in app.post() we get the error below, but not with app.use():

app/node_modules/express/lib/router/route.js:202
        throw new Error(msg);
        ^

Error: Route.post() requires a callback function but got a [object Object]
    at Route.<computed> [as post] (app/node_modules/express/lib/router/route.js:202:15)
    at Function.app.<computed> [as post] (app/node_modules/express/lib/application.js:482:19)
    at Function.module.exports (app/src/services/bulk/bulk.service.js:44:9)
    at Function.configure (app/node_modules/@feathersjs/feathers/lib/application.js:59:8)
    at Function.module.exports (app/src/services/index.js:24:9)
    at Function.configure (app/node_modules/@feathersjs/feathers/lib/application.js:59:8)
    at Object.<anonymous> (app/src/app.js:62:5)


Code

app.post('/upload',
        uploader,
        // another middleware, to transfer the received file to feathers
        // multer generates `file` for the uploaded file and `body` for form data
        function (req, res, next) {
            req.feathers.file = req.file;   // for upload file method file
            req.feathers.body = req.body;   // for paste method data emails
            next();
        },
        BlobService({ Model: blobStorage })
    );

NodeJS version:
v12.22.1

    "@feathersjs/authentication-local": "^4.5.11",
    "@feathersjs/authentication-oauth": "^4.5.11",
    "@feathersjs/configuration": "^4.5.11",
    "@feathersjs/errors": "^4.5.11",
    "@feathersjs/express": "^4.5.11",
    "@feathersjs/feathers": "^4.5.11",
    "@feathersjs/socketio": "^4.5.11",
    "@feathersjs/transport-commons": "^4.5.11",
    "feathers-blob": "^2.5.0",
    "s3-blob-store": "^4.1.1",

How to render uploaded photos?

How to retrieve uploaded files and render them back as files instead of JSON response?

Perhaps something to do with context.result.uri in an after get hook? How do I turn that URI into an actual file response and, say, render the image? I don't know what terms to use because it's all abstract to me, but I want to turn the retrieved URI into an image. Basically I want to use the same service to upload and render uploaded files. It's odd that this isn't documented.

Edit: This should probably be done in a middleware after the service calls.

Edit 2: This is the best I've got so far

// upload.hooks.js
  after: {
    get: [
      (context) => {
        context.result = dauria.parseDataURI(context.result.uri);
      }
    ],
  },
// upload.service.js
app.use('...',
    middleware,
    serviceInstance,
    (req, res, next) => {
      if (req.method === 'GET') {
        res.contentType(res.data.MIME);
        res.send(res.data.buffer);
      } else {
        next();
      }
    }
);
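For reference, what dauria.parseDataURI produces can be sketched without the library. This hand-rolled stand-in (assumption: base64-encoded URIs only) returns the MIME type and buffer that a rendering middleware needs:

```javascript
// Minimal stand-in for dauria.parseDataURI, base64 data URIs only.
const parseDataURI = (uri) => {
  const match = /^data:([^;,]+);base64,(.*)$/.exec(uri);
  if (!match) return null;
  return { MIME: match[1], buffer: Buffer.from(match[2], 'base64') };
};
```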

So I need help pointing me in the right direction: I would like to return a file URL instead of base64.


Support Multipart File Upload

Steps to reproduce

  • Upload a file with multipart upload
  • Some previous logic needs to base64-encode the uploaded file, something like: const uri = dauria.getBase64DataURI(hook.params.file.buffer, hook.params.file.mimetype);
  • Call blobService.create like in Readme: blobService.create({ uri }).then(...)
  • This service decodes again the uri into a buffer: https://github.com/feathersjs/feathers-blob/blob/master/src/index.js#L51
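The encode/decode round trip described in the steps above can be sketched without the libraries involved; both helpers below are stand-ins for dauria.getBase64DataURI and the decoding the service repeats on create (assumed behavior, not the actual source):

```javascript
// Stand-in for dauria.getBase64DataURI: wrap a buffer in a data URI.
const getBase64DataURI = (buffer, mimeType) =>
  `data:${mimeType};base64,${buffer.toString('base64')}`;

// Stand-in for the decoding the service performs again on create:
const decodeDataURI = (uri) => {
  const match = /^data:([^;,]+);base64,(.*)$/.exec(uri);
  return match ? Buffer.from(match[2], 'base64') : null;
};
```

Skipping the middle step when a buffer and mimetype are already available would avoid this double conversion.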

Expected behavior

This seems like unnecessary encoding/decoding effort. It would be great if, in the create method of the service, the body's uri were no longer needed when params.file.buffer and .mimetype are set.

If both are set, then uri should probably take priority.

Actual behavior

N/A

System configuration

Not relevant.

An in-range update of aws-sdk is breaking the build 🚨

The devDependency aws-sdk was updated from 2.350.0 to 2.351.0.

🚨 View failing branch.

This version is covered by your current version range and after updating it in your project the build failed.

aws-sdk is a devDependency of this project. It might not break your production code or affect downstream projects, but probably breaks your build or test tools, which may prevent deploying or publishing.

Status Details
  • continuous-integration/travis-ci/push: The Travis CI build could not complete due to an error (Details).

Release Notes for Release v2.351.0

See changelog for more information.

Commits

The new version differs by 3 commits.

  • 70cbaa7 Updates SDK to v2.351.0
  • 4073d3d Merge pull request #2342 from srchase/npmignore-additions
  • 13b0dd3 updated npmignore, fix for issue #2341

See the full diff

FAQ and help

There is a collection of frequently asked questions. If those don’t help, you can always ask the humans behind Greenkeeper.


Your Greenkeeper Bot 🌴

The "chunk" argument must be of type string or an instance of Buffer or Uint8Array.

Steps to reproduce

(First please check that this issue is not already solved as described
here
)

  • Tell us what broke. The more detailed the better.
  • If you can, please create a simple example that reproduces the issue and link to a gist, jsbin, repo, etc.

After upgrading AWS SDK to "@aws-sdk/client-s3": "3.224.0", feathers-blob stopped downloading files and now gives the error "[ERR_INVALID_ARG_TYPE]: The "chunk" argument must be of type string or an instance of Buffer or Uint8Array. Received an instance of IncomingMessage." Is this related to #96 ?

Expected behavior

feathers-blob should return the downloaded file.

Actual behavior

An error was generated, "[ERR_INVALID_ARG_TYPE]: The "chunk" argument must be of type string or an instance of Buffer or Uint8Array. Received an instance of IncomingMessage"

Here is the full trace:

TypeError [ERR_INVALID_ARG_TYPE]: The "chunk" argument must be of type string or an instance of Buffer or Uint8Array. Received an instance of IncomingMessage
at new NodeError (node:internal/errors:387:5)
at readableAddChunk (node:internal/streams/readable:260:13)
at S3Readable.Readable.push (node:internal/streams/readable:228:10)
at SimpleQueue.processed [as _callback] (/home/dev/node_modules/s3-download-stream/index.js:47:10)
at /home/dev/node_modules/SimpleQueue/SimpleQueue.js:31:9
at /home/dev/node_modules/s3-download-stream/index.js:67:5
at /home/dev/node_modules/@aws-sdk/smithy-client/dist-cjs/client.js:16:35
at processTicksAndRejections (node:internal/process/task_queues:96:5) {
code: 'ERR_INVALID_ARG_TYPE'
}

System configuration

Tell us about the applicable parts of your setup. I'm using feathers-blob integrated with s3

Module versions (especially the part that's not working): feathers-blob 2.6.0

NodeJS version:

Operating System:

Browser Version:

React Native Version:

Module Loader:

del

I'm sorry, I figured it out. Please remove this issue.

Why datauris?

I really like feathers-blob because it enables me to not have to use an intermediary library (like the Amazon S3 iOS library) on mobile; everything can go straight to the server.

However, some questions/issues have popped up along the way:

  1. I like the service, but how do you upload multipart requests to the blob service? So far I've only been able to upload images or videos that have been base64-encoded in data URIs.
  2. When you get a blob object, the entire request waits until the blob is read from the stream, which in my case with videos is a long time. How can I download just the blob metadata first and fetch the data URI only when needed?
  3. Why data URIs? In my particular case I'm writing an iOS app, and in order to store videos or images I have to get the extension of the file (which is pretty hard if you just have a data object) and then base64-encode it (which is easy). Why not just store the base64 representation of the file?

An in-range update of debug is breaking the build 🚨

The dependency debug was updated from 4.1.0 to 4.1.1.

🚨 View failing branch.

This version is covered by your current version range and after updating it in your project the build failed.

debug is a direct dependency of this project, and it is very likely causing it to break. If other packages depend on yours, this update is probably also breaking those in turn.

Status Details
  • continuous-integration/travis-ci/push: The Travis CI build failed (Details).

Commits

The new version differs by 4 commits.

  • 68b4dc8 4.1.1
  • 7571608 remove .coveralls.yaml
  • 57ef085 copy custom logger to namespace extension (fixes #646)
  • d0e498f test: only run coveralls on travis

See the full diff

FAQ and help

There is a collection of frequently asked questions. If those don’t help, you can always ask the humans behind Greenkeeper.


Your Greenkeeper Bot 🌴

Large file upload to S3 errors with TimeOut from S3

Steps to reproduce

While uploading large files, say 10 MB, the upload fails after a few minutes with the error below from aws-sdk. Smaller files upload just fine.

RequestTimeout: Your socket connection to the server was not read from or written to within the timeout period. 

Tried to investigate, but only found this from aws-sdk; it talks about streaming and Content-Length, and I'm not sure if the feathers-blob service is setting the length. RequestTimeout: Your socket connection ...

Same issue while using both datauri and buffer.

File

file { buffer:
   <Buffer 22 45 6d 61 69 6c 20 41 64 64 72 65 73 73 22 2c 22 45 6d 61 69 6c 20 46 6f 72 6d 61 74 22 2c 22 43 6f 6e 66 69 72 6d 65 64 22 2c 22 53 75 62 73 63 72 ... >,
  id: 'se32n6rifii5btlxmkefo3c3rgjfbesdjqwg.csv',
  fileName: 'list3.csv',
  fileId: 'se32n6rifii5btlxmkefo3c3rgjfbesdjqwg.csv',
  mimeType: 'text/csv',
  contentType: 'text/csv',
  encoding: '7bit',
  size: 12293411 }

Following the fileupload docs

uploads.service.js

const hooks = require('./uploads.hooks');
const AWS = require('aws-sdk');
const S3blobStorage = require('s3-blob-store');
const BlobService = require('feathers-blob');
const mime = require('mime-types');

const fileFilter = (req, file, cb) => {
    const fileType = mime.lookup(file.originalname);
    // return early so the callback is not invoked twice
    if (fileType !== "text/csv") return cb(null, false);
    cb(null, true);
}

const limits = {
    fileSize: 12582912,
    files: 1,
}

const multer = require('multer');
const multipartMiddleware = multer({ limits: limits, fileFilter: fileFilter });
const uploader = multipartMiddleware.single('upfile');

module.exports = function (app) {

    const cfgS3 = app.get('s3');
    const s3 = new AWS.S3({
        endpoint: cfgS3.url,
        accessKeyId: cfgS3.accessKeyId,
        secretAccessKey: cfgS3.secretAccessKey,
    });

    const blobStorage = S3blobStorage({
        client: s3,
        bucket: cfgS3.bucket
    });

    // Initialize our service with any options it requires
    app.use('/uploads',
        uploader,
        // another middleware, to transfer the received file to feathers
        function (req, res, next) {
            req.feathers.file = req.file;
            next();
        },
        BlobService({ Model: blobStorage }));

    // Get our initialized service so that we can register hooks
    const service = app.service('uploads');

    service.hooks(hooks);
};

System configuration

node v10.13.0

    "@feathersjs/express": "^4.3.10",
    "@feathersjs/feathers": "^4.3.10",
    "@feathersjs/socketio": "^4.3.10",
    "aws-sdk": "^2.568.0",

Custom ID not returned on remove

The custom ID provided on service creation is used for create/get operations, but remove always uses the default id, see https://github.com/feathersjs-ecosystem/feathers-blob/blob/master/src/index.js#L79. I guess it should be resolve({ [this.id]: id }) as well.

I don't know if there could be any side effects, but it would ensure consistent behavior across all operations; otherwise, if you need to rely on the ID, you have to handle remove differently from the other operations.

Waiting for feedback on whether a PR is required.
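The proposed fix can be sketched in isolation. The class below is a hypothetical stand-in for the service, showing remove resolving with the custom id key the same way create/get do:

```javascript
// Hypothetical sketch of the suggested consistent remove() response shape.
class BlobServiceLike {
  constructor(options) {
    this.id = options.id || 'id'; // custom id key, as in feathers-blob options
  }
  remove(id) {
    // Proposed: resolve({ [this.id]: id }) instead of a hard-coded `id` key.
    return Promise.resolve({ [this.id]: id });
  }
}
```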

An in-range update of aws-sdk is breaking the build 🚨

The devDependency aws-sdk was updated from 2.344.0 to 2.345.0.

🚨 View failing branch.

This version is covered by your current version range and after updating it in your project the build failed.

aws-sdk is a devDependency of this project. It might not break your production code or affect downstream projects, but probably breaks your build or test tools, which may prevent deploying or publishing.

Status Details
  • continuous-integration/travis-ci/push: The Travis CI build failed (Details).

Release Notes for Release v2.345.0

See changelog for more information.

Commits

The new version differs by 3 commits.

  • ef650d4 Updates SDK to v2.345.0
  • 2a58de4 Merge pull request #2323 from srchase/httpOptions-on-AssumeRole
  • fed6952 httpOptions on STS assume role in SharedIniFileCredentials

See the full diff

FAQ and help

There is a collection of frequently asked questions. If those don’t help, you can always ask the humans behind Greenkeeper.


Your Greenkeeper Bot 🌴

Extension can be ".false" if mime type is not recognized

Issue

There is a conversion between mimeType and extension. It may not work as intended when the mimeType is not recognized correctly.

The library used can actually return false.

Used here:

const ext = mimeTypes.extension(contentType);

Source in lib mime-types returning false:
https://github.com/jshttp/mime-types/blob/master/index.js#L107-L123

This will lead to ids ending with .false.

Goal

I'm wondering whether the implementation can work without recognizing the mime type at all.

  • It would probably make sense to drop the ".false" extension and leave it out, if the implementation still works without it.
  • The create method should fail early and raise an error if it doesn't make sense to store the file without a mimetype/extension.
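The first suggestion can be sketched as follows. safeExtension is a hypothetical wrapper around mimeTypes.extension; a tiny lookup table stands in for the real mime-types library here:

```javascript
// Hypothetical guard: drop the extension instead of emitting ".false".
const extensionTable = { 'image/png': 'png', 'text/csv': 'csv' }; // stand-in for mime-types

const safeExtension = (contentType) => extensionTable[contentType] || null;

const blobId = (hash, contentType) => {
  const ext = safeExtension(contentType);
  return ext ? `${hash}.${ext}` : hash; // no ".false" suffix for unknown types
};
```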

Additional information

The changes have to stay in sync with other code, using mimeTypes or extensions.

Usage of mimeTypes.lookup:

const contentType = mimeTypes.lookup(ext);

Type definitions of mime-types api

// Type definitions for mime-types 2.1
// Project: https://github.com/jshttp/mime-types#readme
// Definitions by: Gyusun Yeom <https://github.com/Perlmint>
// Definitions: https://github.com/DefinitelyTyped/DefinitelyTyped

export function lookup(filenameOrExt: string): string | false;
export function contentType(filenameOrExt: string): string | false;
export function extension(typeString: string): string | false;
export function charset(typeString: string): string | false;
export const types: {[key: string]: string};
export const extensions: {[key: string]: string[]};

Typescript: Cannot use namespace 'AbstractBlobStore' as a type

Steps to reproduce

Using files.service.ts like so:

// Initializes the `files` service on path `/api/v1/files`
import { ServiceAddons } from '@feathersjs/feathers';
import BlobService from 'feathers-blob';
import multer from 'multer';
import { Application } from '../../declarations';
import logger from '../../logger';
import hooks from './files.hooks';

interface Data { }

interface ServiceOptions { }

// Add this service to the service type index
declare module '../../declarations' {
    interface ServiceTypes {
        'api/v1/files': ServiceAddons<any>;
    }
}

const log = (msg) => logger.info('[File Service]: ' + msg)


const multipartMiddleware = multer();

export default function (app: Application) {
    const storeOptions = app.get('blobStore');

    let store;
    if (storeOptions.type === 'local') {
        log('Creating new local blob store:' + storeOptions.path)
        const BlobStore = require('fs-blob-store');
        store = BlobStore(storeOptions.path);
    } else if (storeOptions.type === 's3') {
        const AWS = require('aws-sdk');
        const BlobStore = require('s3-blob-store')
        log('Creating new s3 blob store:' + storeOptions.bucket)
        AWS.config.update(storeOptions);

        const s3 = new AWS.S3();
        store = new BlobStore({
            client: s3,
            bucket: storeOptions.bucket,
        });
    }

    const Files = new (BlobService as any)({
        Model: store,
    })

    app.use('/api/v1/files',

        // multer parses the file named 'uri'.
        // Without extra params the data is
        // temporarely kept in memory
        multipartMiddleware.single('uri'),

        // another middleware, this time to
        // transfer the received file to feathers
        function (req: any, res: any, next) {
            req.feathers.file = req.file;
            next();
        },

        Files,
    );

    // Get our initialized service so that we can register hooks
    const service = app.service('api/v1/files');

    service.hooks(hooks);
}

running tsc from the npm run compile command gives this error.

    "compile": "rm -rf lib/ && tsc"

(First please check that this issue is not already solved as described
here
)

Expected behavior

Feathers blob works with typescript

Actual behavior

node_modules/feathers-blob/types/index.d.ts:34:12 - error TS2709: Cannot use namespace 'AbstractBlobStore' as a type.

34     Model: AbstractBlobStore;
              ~~~~~~~~~~~~~~~~~


Found 1 error.

System configuration

Tell us about the applicable parts of your setup.

Module versions (especially the part that's not working):

{
  "name": "wsb-services-ts",
  "description": "wsb rest services in typescript",
  "version": "0.0.0",
  "homepage": "",
  "private": true,
  "main": "src",
  "keywords": [
    "feathers"
  ],
  "author": {
    "name": "groemhildt",
    "email": "[email protected]"
  },
  "contributors": [],
  "bugs": {},
  "directories": {
    "lib": "src",
    "test": "test/",
    "config": "config/"
  },
  "engines": {
    "node": "^14.0.0",
    "npm": ">= 3.0.0"
  },
  "scripts": {
    "test": "npm run lint && npm run compile && npm run jest",
    "lint": "eslint src/. test/. --config .eslintrc.json --ext .ts --fix",
    "dev:api": "ts-node-dev --no-notify src/",
    "dev:jobs": "ts-node-dev ${TS_ARGS} --no-notify src/jobs/",
    "start": "node lib/",
    "start:jobs": "node lib/jobs/",
    "jest": "jest  --forceExit",
    "compile": "rm -rf lib/ && tsc"
  },
  "standard": {
    "env": [
      "jest"
    ],
    "ignore": []
  },
  "dependencies": {
    "@feathersjs/authentication": "^4.5.10",
    "@feathersjs/authentication-client": "^4.5.10",
    "@feathersjs/authentication-local": "^4.5.10",
    "@feathersjs/authentication-oauth": "^4.5.10",
    "@feathersjs/configuration": "^4.5.10",
    "@feathersjs/errors": "^4.5.10",
    "@feathersjs/express": "^4.5.10",
    "@feathersjs/feathers": "^4.5.10",
    "@feathersjs/rest-client": "^4.5.10",
    "@feathersjs/socketio": "^4.5.10",
    "@feathersjs/transport-commons": "^4.5.10",
    "@wsb/report-services": "^2.2.0",
    "aws-sdk": "^2.790.0",
    "bull": "^3.18.1",
    "compression": "^1.7.4",
    "cookie-parser": "^1.4.5",
    "cors": "^2.8.5",
    "dauria": "^2.0.0",
    "feathers-blob": "^2.3.0",
    "feathers-objection": "^6.3.0",
    "feathers-swagger": "^1.2.1",
    "fs-blob-store": "^5.2.1",
    "helmet": "^4.2.0",
    "http-proxy-middleware": "^1.0.6",
    "isomorphic-fetch": "^3.0.0",
    "knex": "^0.21.12",
    "lodash.omit": "^4.5.0",
    "multer": "^1.4.2",
    "objection": "^2.2.3",
    "pg": "^8.5.0",
    "s3-blob-store": "^4.1.1",
    "serve-favicon": "^2.5.0",
    "tmp-promise": "^3.0.2",
    "winston": "^3.3.3"
  },
  "devDependencies": {
    "@types/bull": "^3.14.4",
    "@types/compression": "^1.7.0",
    "@types/cookie-parser": "^1.4.2",
    "@types/cors": "^2.8.8",
    "@types/helmet": "^4.0.0",
    "@types/jest": "^26.0.15",
    "@types/jsonwebtoken": "^8.5.0",
    "@types/lodash.omit": "^4.5.6",
    "@types/multer": "^1.4.4",
    "@types/serve-favicon": "^2.5.1",
    "@typescript-eslint/eslint-plugin": "^4.7.0",
    "@typescript-eslint/parser": "^4.7.0",
    "axios": "^0.21.0",
    "eslint": "^7.13.0",
    "jest": "^26.6.3",
    "shx": "^0.3.3",
    "ts-jest": "^26.4.4",
    "ts-node-dev": "^1.0.0",
    "typescript": "^4.0.5"
  }
}

NodeJS version:
14.14.0
Operating System:
MacOS
Browser Version:
Firefox

❯ tsc --version
message TS6029: Version 1.5.3

An in-range update of aws-sdk is breaking the build 🚨

Version 2.313.0 of aws-sdk was just published.

Branch Build failing 🚨
Dependency aws-sdk
Current Version 2.312.0
Type devDependency

This version is covered by your current version range and after updating it in your project the build failed.

aws-sdk is a devDependency of this project. It might not break your production code or affect downstream projects, but probably breaks your build or test tools, which may prevent deploying or publishing.

Status Details
  • continuous-integration/travis-ci/push: The Travis CI build could not complete due to an error (Details).

Release Notes Release v2.313.0

See changelog for more information.

Commits

The new version differs by 1 commits.

See the full diff

FAQ and help

There is a collection of frequently asked questions. If those don’t help, you can always ask the humans behind Greenkeeper.


Your Greenkeeper Bot 🌴
