
turborepo-remote-cache's People

Contributors

adriantr, allcontributors[bot], bmuenzenmeyer, danielmitrov, dependabot[bot], dev-goformative, devtribe, dlehmhus, emalihin, ewhite613, fox1t, gtjamesa, gustavoam-asdf, izi-p, jgoz, joedevivo, klaitos, kodiakhq[bot], matteovivona, nakatanakatana, naoto-ida, nimmlor, semantic-release-bot, snyk-bot, step-security-bot, tm1000, tom-fletcher, warflash, zigang93


turborepo-remote-cache's Issues

Missing "storage.objects.delete" permission for GCP ?

Hello,

I set up the remote cache in our GCP environment a week ago. First, thanks for the amazing implementation, it rocks.

I observe some "Error during the artifact creation" errors in the output logs with the error description: [service-account] does not have storage.objects.delete access to the Google Cloud Storage object. My service account has the two roles mentioned in the doc: Storage Object Creator and Storage Object Viewer.

Is this a required permission that is just not mentioned in the setup doc, or am I missing something? (I will add the permission on my side, but I want to fully understand this.)

Using version v1.10.1 running in a Docker container on Cloud Run with a GCP bucket (application default credentials).
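
If the permission does turn out to be required, a bucket-level grant along these lines should cover it; this is a generic gcloud/gsutil sketch with placeholder names, not something taken from the project's docs:

gsutil iam ch serviceAccount:SERVICE_ACCOUNT_EMAIL:roles/storage.objectAdmin gs://YOUR_BUCKET

Storage Object Admin includes storage.objects.delete on top of create/view; a custom role limited to just the delete permission would also work.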

Health check without auth header

🚀 Feature Proposal

Create a (or modify the existing) health check endpoint that is not protected with the authorization header.

Motivation

Without this, it's not possible to scale the service behind an AWS ALB with a health check, as ALB health checks don't support sending custom headers.

Example

AWS ALB health checks
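
A minimal sketch of what such an endpoint could look like in Fastify (the /healthz path and its placement before any auth hook are assumptions, not the project's actual route):

import fastify from 'fastify'

const app = fastify()

// Registered before any authorization hook so an ALB target group can
// probe it without an Authorization header.
app.get('/healthz', async () => ({ status: 'ok' }))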

Guidance for running as AWS Lambda

Looking at the implementation, it does feel like it can be deployed anywhere; however, the S3 reads/writes might be tricky, especially for large files, as Lambdas are essentially short-lived.

Do you folks have any guidance/tips on how to effectively run this on AWS Lambda?

Different behavior between Docker image and npm package (`slug` not recognized as a team ID)

Hi, I'm trying out very basic case with local filesystem as a storage backend.

I'm running turbo like this:

turbo run build --api="http://localhost:3000" --token=mytoken --team=myteam --remote-only

In case of docker image everything works as expected:

docker run -it --rm -p 3000:3000 -e STORAGE_PROVIDER=local -e TURBO_TOKEN=mytoken -e LOG_LEVEL=debug fox1t/turborepo-remote-cache:1.5.1
...
{"severity":"INFO","level":30,"time":1661781474850,"pid":8,"hostname":"7ae9741d9db3","reqId":"OpxpJZ7SSmyC3DTj5cV1tw-49","req":{"method":"PUT","url":"/v8/artifacts/1604679dd3f91bfc?slug=myteam","hostname":"localhost:3000","remoteAddress":"172.17.0.1","remotePort":61292},"message":"incoming request"}
{"severity":"INFO","level":30,"time":1661781474859,"pid":8,"hostname":"7ae9741d9db3","reqId":"OpxpJZ7SSmyC3DTj5cV1tw-49","res":{"statusCode":200},"responseTime":9.032999999821186,"message":"request completed"}

But when I run the npx equivalent I get the following error:

STORAGE_PROVIDER=local TURBO_TOKEN=mytoken LOG_LEVEL=debug npx turborepo-remote-cache@1.5.1
...
{"severity":"INFO","level":30,"time":1661781704349,"pid":72243,"hostname":"Vadims-MacBook-Pro.local","reqId":"3oYQdZzrRU6eocU3CQtKzg-4","req":{"method":"GET","url":"/v8/artifacts/834351818444d6ba?slug=myteam","hostname":"localhost:3000","remoteAddress":"127.0.0.1","remotePort":64930},"message":"incoming request"}
{"severity":"WARNING","level":40,"time":1661781704349,"pid":72243,"hostname":"Vadims-MacBook-Pro.local","reqId":"3oYQdZzrRU6eocU3CQtKzg-4","validation":[{"keyword":"required","dataPath":"","schemaPath":"#/required","params":{"missingProperty":"teamId"},"message":"should have required property 'teamId'"}],"validationContext":"querystring","stack":"Error: querystring should have required property 'teamId'\n    at defaultSchemaErrorFormatter (/Users/vadim/.npm/_npx/dd8a09686e5b1d7e/node_modules/fastify/lib/context.js:40:10)\n    at wrapValidationError (/Users/vadim/.npm/_npx/dd8a09686e5b1d7e/node_modules/fastify/lib/validation.js:105:17)\n    at validate (/Users/vadim/.npm/_npx/dd8a09686e5b1d7e/node_modules/fastify/lib/validation.js:90:12)\n    at preValidationCallback (/Users/vadim/.npm/_npx/dd8a09686e5b1d7e/node_modules/fastify/lib/handleRequest.js:89:18)\n    at handler (/Users/vadim/.npm/_npx/dd8a09686e5b1d7e/node_modules/fastify/lib/handleRequest.js:72:7)\n    at handleRequest (/Users/vadim/.npm/_npx/dd8a09686e5b1d7e/node_modules/fastify/lib/handleRequest.js:20:5)\n    at runPreParsing (/Users/vadim/.npm/_npx/dd8a09686e5b1d7e/node_modules/fastify/lib/route.js:451:5)\n    at next (/Users/vadim/.npm/_npx/dd8a09686e5b1d7e/node_modules/fastify/lib/hooks.js:158:7)\n    at handleResolve (/Users/vadim/.npm/_npx/dd8a09686e5b1d7e/node_modules/fastify/lib/hooks.js:175:5)\n    at processTicksAndRejections (node:internal/process/task_queues:96:5)","type":"Error","message":"querystring should have required property 'teamId'"}
{"severity":"INFO","level":30,"time":1661781704350,"pid":72243,"hostname":"Vadims-MacBook-Pro.local","reqId":"3oYQdZzrRU6eocU3CQtKzg-4","res":{"statusCode":400},"responseTime":0.24070799350738525,"message":"request completed"}

It seems like whatever version is published on npm doesn't recognize `slug` as the team ID.

DigitalOcean Spaces support

🚀 Feature Proposal

Supporting DigitalOcean Spaces would be very much appreciated.

Motivation

Currently it seems only the "big" cloud providers are supported, whereas Spaces would be quite a bit cheaper to use as an S3-compatible service.
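
Since Spaces exposes an S3-compatible API, it may already work through the existing S3 provider by pointing the endpoint at Spaces; an untested sketch with placeholder region/bucket values:

STORAGE_PROVIDER=s3
STORAGE_PATH=my-space-name
AWS_ACCESS_KEY_ID=<spaces access key>
AWS_SECRET_ACCESS_KEY=<spaces secret key>
AWS_REGION=us-east-1
S3_ENDPOINT=https://nyc3.digitaloceanspaces.com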

Dynamic authorization tokens

🚀 Feature Proposal

Currently, TURBO_TOKEN is defined as an environment variable, therefore making changes to the token(s) requires a full restart. Instead, authorized tokens should be stored in a database, and a UI or API should be exposed to add or remove tokens from that database; ideally allowing users some kind of SSO login flow which ends in issuing them a token.

Motivation

We're looking at alternatives to Vercel hosting the Turbo remote cache, because Vercel charges $20/user/month. If we do pay Vercel $20/user/month, then each user gets their own API token, and removing a user from Vercel also invalidates their access to the cache. We'd like to self-host the cache, but manually maintaining a list of tokens is an operational headache we'd like to avoid. If we were to restrict ourselves to using only one API token (and share that API token via 1Password etc.), then we might as well make a shared Vercel user on Vercel's free tier for that purpose.
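
A minimal sketch of the idea, assuming a hypothetical database-backed lookup in a Fastify hook (none of this reflects the project's current auth code):

import fastify from 'fastify'

// Hypothetical token store; in practice this would query a database,
// e.g. SELECT 1 FROM tokens WHERE value = $1 AND revoked_at IS NULL.
async function isValidToken(token: string): Promise<boolean> {
  return token.length > 0
}

const app = fastify()

app.addHook('onRequest', async (request, reply) => {
  const token = (request.headers.authorization ?? '').replace(/^Bearer /, '')
  if (!(await isValidToken(token))) {
    return reply.code(401).send({ error: 'Invalid or revoked token' })
  }
})

An admin API or SSO-backed UI would then insert and revoke rows in that store instead of editing TURBO_TOKEN and restarting.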

Lambda/API Gateway integration not working

๐Ÿ› Bug Report

After configuring the server using Lambda/API Gateway setup, I'm running into issues where all the caches are a MISS. Similar to: #28

When there is a cache miss, the server properly adds the caches to S3. However, on a subsequent turbo run with no changes to the source code, the server responds with a 200 on the GET /artifacts/:id call, saying that we have a cache hit. But then there is no POST /artifacts/events, so turbo treats it as a cache miss and makes a subsequent PUT /artifacts/:id, replacing the already existing cache.

Running this locally works without problems

Here is what my config for the OpenAPI/REST API Gateway looks like:

paths:
  '/v8/{proxy+}':
    x-amazon-apigateway-any-method:
      parameters:
        - name: proxy
          in: path
          required: true
          schema:
            type: string
      responses: {}
      x-amazon-apigateway-integration:
        # ref: https://docs.aws.amazon.com/apigateway/api-reference/resource/integration/
        uri: --REDACTED--
        httpMethod: POST
        passthroughBehavior: when_no_match
        type: aws_proxy
        credentials: --REDACTED--
     
  • server: turborepo-remote-cache: "1.14.1"
  • client: turbo: "1.8.8"

Error (412 Precondition Failed) when creating artifact

On deploying to Vercel and using S3 storage, I get this error when trying to run turbo commands:

{
	"severity": "WARNING",
	"level": 40,
	"time": 1655122987166,
	"pid": 9,
	"hostname": "169.254.33.177",
	"reqId": "q-nGx8L1TCaa2laKhCeJRA-25",
	"data": {
		"message": "Access Denied",
		"code": "AccessDenied",
		"region": null,
		"time": "2022-06-13T12:23:07.166Z",
		"requestId": "KRFCEX6FM17HZD2Z",
		"extendedRequestId": "xYHj4oepC5Eu4oD0QcrQcTGCbOyITRXV7rp2JSXa1RwAVSOKsiI0b2gkABewiDxQXa7GN+8RUGM=",
		"statusCode": 403,
		"retryable": false,
		"retryDelay": 26.181109834484474
	},
	"isBoom": true,
	"isServer": false,
	"output": {
		"statusCode": 412,
		"payload": {
			"statusCode": 412,
			"error": "Precondition Failed",
			"message": "Error during the artifact creation"
		},
		"headers": {}
	},
	"stack": "Error: Error during the artifact creation\n    at Object.handler (/vercel/path0/src/plugins/remote-cache/routes/put-artifact.ts:29:31)\n    at processTicksAndRejections (node:internal/process/task_queues:96:5)",
	"type": "Error",
	"message": "Error during the artifact creation"
}

I've made sure the environment variables are correctly set up, the IAM credentials are right, and that the user has full administrator access on AWS.
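
For reference, a minimal IAM policy for the artifact routes looks roughly like the following; the bucket name is a placeholder, and some setups may additionally want s3:ListBucket or s3:DeleteObject:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": "arn:aws:s3:::your-cache-bucket/*"
    }
  ]
}

Given the AccessDenied above despite broad permissions, it is also worth confirming that the credentials actually reaching the Vercel runtime are the ones you expect.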

Where can I find my S3_ENDPOINT in the Docker setup? I have the error: Artifact not found

Hi, I am not sure which S3_ENDPOINT you are referring to. Where can I find it in AWS?

aws bucket name: turbo-repo

my .env config

NODE_ENV=production
PORT=3000
TURBO_TOKEN=yourToken
LOG_LEVEL=info
STORAGE_PROVIDER=s3
STORAGE_PATH=turbo-repo
AWS_ACCESS_KEY_ID=<aws secret  id>
AWS_SECRET_ACCESS_KEY=<aws secret key>
AWS_REGION=ap-southeast-1
S3_ENDPOINT=http://turbo-repo.s3.ap-southeast-1.amazonaws.com

I get some errors in my Docker logs. I'm not sure what "Artifact" means:

{"severity":"WARNING","level":40,"time":1662821516456,"pid":7,"hostname":"7467bd81768e","reqId":"oEKrAxmVR0K3hrHPnmpmcA-1","data":{},"isBoom":true,"isServer":false,"output":{"statusCode":404,"payload":{"statusCode":404,"error":"Not Found","message":"Artifact not found"},"headers":{}},"stack":"/home/app/node/src/plugins/remote-cache/routes/get-artifact.ts:29\n      throw notFound(`Artifact not found`, err)\n                    ^\n\nError: Artifact not found\n    at Object.handler (/home/app/node/src/plugins/remote-cache/routes/get-artifact.ts:29:21)\n    at processTicksAndRejections (node:internal/process/task_queues:96:5)","type":"Error","message":"Artifact not found"}
{"severity":"INFO","level":30,"time":1662821516458,"pid":7,"hostname":"7467bd81768e","reqId":"oEKrAxmVR0K3hrHPnmpmcA-1","res":{"statusCode":404},"responseTime":53.324764251708984,"message":"request completed"}
{"severity":"INFO","level":30,"time":1662821516696,"pid":7,"hostname":"7467bd81768e","reqId":"oEKrAxmVR0K3hrHPnmpmcA-2","req":{"method":"POST","url":"/v8/artifacts/events?teamId=team_pingspace_wcs","hostname":"turborepo-cache.pingspace.co","remoteAddress":"161.142.173.156","remotePort":37378},"message":"incoming request"}
{"severity":"INFO","level":30,"time":1662821516702,"pid":7,"hostname":"7467bd81768e","reqId":"oEKrAxmVR0K3hrHPnmpmcA-2","res":{"statusCode":200},"responseTime":4.817142486572266,"message":"request completed"}

I am pretty sure that my AWS access key ID and secret are able to upload the .env file to the bucket:

aws s3api put-object --bucket turbo-repo --key .env --body .env
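
If it helps: the bucket name belongs in STORAGE_PATH, and S3_ENDPOINT (when you need it at all for AWS itself) points at the regional S3 endpoint rather than the bucket URL. An untested sketch of that adjustment:

STORAGE_PROVIDER=s3
STORAGE_PATH=turbo-repo
AWS_REGION=ap-southeast-1
S3_ENDPOINT=https://s3.ap-southeast-1.amazonaws.com

Also note that the first GET for a given hash is expected to return "Artifact not found"; it only indicates a problem if the same hash still misses after a successful PUT.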

Support MinIO as storage provider

🚀 Feature Proposal

Add support for MinIO as storage provider.

Motivation

We have a running MinIO instance and want to use it for our remote cache.
I tried to use it with the AWS S3 Storage Provider docs, but I can't get it working.
So I think there are maybe some additional settings or steps needed.

Example

Adding a specific doc section for MinIO.
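
Since MinIO speaks the S3 API, the existing S3 provider may work by pointing S3_ENDPOINT at the MinIO server; an untested sketch with placeholder values (note that MinIO usually requires path-style addressing, so the server's S3 client may also have to enable it):

STORAGE_PROVIDER=s3
STORAGE_PATH=turbo-cache
AWS_ACCESS_KEY_ID=<minio access key>
AWS_SECRET_ACCESS_KEY=<minio secret key>
AWS_REGION=us-east-1
S3_ENDPOINT=http://minio.internal:9000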

Support Cloudflare Workers

🚀 Feature Proposal - Add support for the Cloudflare ecosystem

Make the necessary changes that will allow this server to be hosted on Cloudflare Workers and backed by Cloudflare R2 storage.

Motivation

Cloudflare Workers are far more cost-efficient and performant than the other serverless environments that currently exist.
Deploying in the Cloudflare ecosystem would also make it more practical to use Cloudflare R2 instead of Amazon S3, which is beneficial as R2 does not incur egress network costs.


I know that this may require a significant rewrite, or switching to a server framework that runs on all environments, but it may be worth it to make this project the de facto option for anyone wanting to run a remote cache for Turborepo.

Incompatible with Yarn/npm, or Node <18


๐Ÿ› Bug Report


Since v1.14.2, users of yarn or npm can't install this package.
Consumers using Node < 18 also can't install this package.

This is the error message:

error turborepo-remote-cache@1.14.2: The engine "node" is incompatible with this module. Expected version ">=18". Got "16.20.0"
warning turborepo-remote-cache@1.14.2: The engine "pnpm" appears to be invalid.
error Found incompatible module.

To Reproduce

Steps to reproduce the behavior:

  • In any yarn or npm project, run
yarn add turborepo-remote-cache

or

npm install turborepo-remote-cache

Expected behavior

Should install.

Your Environment

  • os: Mac
nvm current
v16.20.0

yarn --version
1.22.19

Large build output cannot be uploaded

I found that artifacts from the end of my build pipeline (just before deploy) are not cached.
Bit of a show-stopper as it causes a full redeploy!

It looks like Fastify has a default body size limit of 1 MiB, and so I'm speculating this is the reason.

Could we have a configuration option to bump this considerably higher?

A backport to the last stable version (so 1.1.2 I guess) would be really helpful, BTW.
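
For context, Fastify's payload limit is configurable when the server is created; a generic sketch of raising it (the 100 MiB figure is an arbitrary example, not how the project currently exposes this):

import fastify from 'fastify'

// bodyLimit is the maximum request payload in bytes; Fastify's default is ~1 MiB.
const app = fastify({
  bodyLimit: 100 * 1024 * 1024,
})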

`/v8/artifacts/events` returns 404 and doesn't store data locally

When running locally I can't get my repo to actually cache the results.

Setup in turborepo-remote-cache:

yarn build
NODE_ENV=development TURBO_TOKEN=\"123\" yarn start

Setup in test-turbo-reop:

.turbo/config.json

{
    "teamId": "team_FcALQN9XEVbeJ1NjTQoS9Wup",
    "apiUrl": "http://localhost:3000"
}

package.json

{
    ...
    "scripts":{
            "build:turbo": "turbo run build --token=\"123\"",
            ...
    }
}

Log output from turborepo-remote-cache

{"severity":"INFO","level":30,"time":1654715502319,"pid":83236,"hostname":"MacBook-Pro-2.local","message":"Server listening at http://0.0.0.0:3000"}
^[[A{"severity":"INFO","level":30,"time":1654715546148,"pid":83236,"hostname":"MacBook-Pro-2.local","reqId":"vmy6wThJSLCcGWwTxwgjpg-0","req":{"method":"POST","url":"/v8/artifacts/events?teamId=team_FcALQN9XEVbeJ1NjTQoS9Wup","hostname":"localhost:3000","remoteAddress":"127.0.0.1","remotePort":59506},"message":"incoming request"}
{"severity":"INFO","level":30,"time":1654715546151,"pid":83236,"hostname":"MacBook-Pro-2.local","reqId":"vmy6wThJSLCcGWwTxwgjpg-0","message":"Route POST:/v8/artifacts/events?teamId=team_FcALQN9XEVbeJ1NjTQoS9Wup not found"}
{"severity":"INFO","level":30,"time":1654715546155,"pid":83236,"hostname":"MacBook-Pro-2.local","reqId":"vmy6wThJSLCcGWwTxwgjpg-0","res":{"statusCode":404},"responseTime":5.764100074768066,"message":"request completed"}
{"severity":"INFO","level":30,"time":1654715546156,"pid":83236,"hostname":"MacBook-Pro-2.local","reqId":"vmy6wThJSLCcGWwTxwgjpg-1","req":{"method":"POST","url":"/v8/artifacts/events?teamId=team_FcALQN9XEVbeJ1NjTQoS9Wup","hostname":"localhost:3000","remoteAddress":"127.0.0.1","remotePort":59508},"message":"incoming request"}
{"severity":"INFO","level":30,"time":1654715546156,"pid":83236,"hostname":"MacBook-Pro-2.local","reqId":"vmy6wThJSLCcGWwTxwgjpg-1","message":"Route POST:/v8/artifacts/events?teamId=team_FcALQN9XEVbeJ1NjTQoS9Wup not found"}
{"severity":"INFO","level":30,"time":1654715546156,"pid":83236,"hostname":"MacBook-Pro-2.local","reqId":"vmy6wThJSLCcGWwTxwgjpg-1","res":{"statusCode":404},"responseTime":0.36331605911254883,"message":"request completed"}

Is there something I am missing, like having to register a team so that it doesn't 404 with the server, or a configuration step that I missed in the docs?

Storage limit support?

🚀 Feature Proposal

Is there a way I can limit the amount of storage that is used, and if not, is there any chance this feature might get added?

Motivation

I only have a finite amount of cloud storage and I'm worried that using this will quickly fill it up.
So I would like to set a limit on the amount of storage the cache can take up.

Example

I would like to have, for example, a remote cache with a maximum size of 20 GiB.
If that size is reached and new cache entries are added, the oldest caches are removed.

Alternatives

I presume I can clear the cache myself, and I would accept doing that once a week or month, but there is currently no documentation on this :(
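
If the cache is backed by S3, one workaround in the meantime is a bucket lifecycle rule that expires old artifacts. This bounds age rather than total size; the bucket name and the 30-day window below are just examples:

aws s3api put-bucket-lifecycle-configuration \
  --bucket turbo-cache-bucket \
  --lifecycle-configuration '{
    "Rules": [
      {
        "ID": "expire-old-turbo-artifacts",
        "Status": "Enabled",
        "Filter": { "Prefix": "" },
        "Expiration": { "Days": 30 }
      }
    ]
  }'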

Turbo stops using cache, error "skipping HTTP Request, too many failures have occurred"

๐Ÿ› Bug Report

Because this does not support all the operations used by turbo, turbo will just stop using it after a few failed requests.

When run in verbose mode you can see an error:

2023-03-17T15:31:40.766-0700 [DEBUG] turbo.analytics: failed to record cache usage analytics: error="skipping HTTP Request, too many failures have occurred"

The server output is like this:

➤ YN0007: │ turborepo-remote-cache@npm:1.13.2 must be built because it never has been before or the last one failed
➤ YN0000: │ turborepo-remote-cache@npm:1.13.2 STDERR npm WARN exec The following package was not found and will be installed: [email protected]
➤ YN0000: └ Completed in 2s 423ms
➤ YN0000: Done with warnings in 34s 967ms

{"severity":"INFO","level":30,"time":1679091896878,"pid":21521,"hostname":"44cb917766de","message":"Server listening at http://0.0.0.0:4444"}
{"severity":"INFO","level":30,"time":1679091897852,"pid":21521,"hostname":"44cb917766de","reqId":"PRk_gsXKRoyq60Zzc71Yaw-0","req":{"method":"GET","url":"/v8/artifacts/851d9ae536aeee95?teamId=team_formative","hostname":"localhost:4444","remoteAddress":"127.0.0.1","remotePort":41986},"message":"incoming request"}
{"severity":"INFO","level":30,"time":1679091897897,"pid":21521,"hostname":"44cb917766de","reqId":"PRk_gsXKRoyq60Zzc71Yaw-1","req":{"method":"GET","url":"/v8/artifacts/e19892748780222a?teamId=team_formative","hostname":"localhost:4444","remoteAddress":"127.0.0.1","remotePort":41984},"message":"incoming request"}
{"severity":"INFO","level":30,"time":1679091898039,"pid":21521,"hostname":"44cb917766de","reqId":"PRk_gsXKRoyq60Zzc71Yaw-1","res":{"statusCode":200},"responseTime":141.249200001359,"message":"request completed"}
{"severity":"INFO","level":30,"time":1679091898047,"pid":21521,"hostname":"44cb917766de","reqId":"PRk_gsXKRoyq60Zzc71Yaw-0","res":{"statusCode":200},"responseTime":194.30381099879742,"message":"request completed"}

This is running the latest version every time, using yarn dlx. I reproduced this today using 1.13.2.

FST_ERR_CTP_INVALID_MEDIA_TYPE on PUT

👋 Hello! Very excited to see this repo - got it up and running with the help of your stellar docs, but am now running into a problem.

{
  "severity": "ERROR",
  "level": 50,
  "time": 1642192525782,
  "pid": 17,
  "hostname": "REDACTED",
  "reqId": "REDACTED",
  "name": "FastifyError",
  "code": "FST_ERR_CTP_INVALID_MEDIA_TYPE",
  "message": "Unsupported Media Type: application/octet-stream",
  "statusCode": 415,
  "stack": "FastifyError: Unsupported Media Type: application/octet-stream\n    at ContentTypeParser.run (/home/app/node/node_modules/fastify/lib/contentTypeParser.js:142:16)\n    at handleRequest (/home/app/node/node_modules/fastify/lib/handleRequest.js:37:39)\n    at runPreParsing (/home/app/node/node_modules/fastify/lib/route.js:427:5)\n    at Object.routeHandler [as handler] (/home/app/node/node_modules/fastify/lib/route.js:385:7)\n    at Router.lookup (/home/app/node/node_modules/find-my-way/index.js:378:14)\n    at Router.defaultRoute (/home/app/node/node_modules/fastify/fastify.js:583:23)\n    at Router._defaultRoute (/home/app/node/node_modules/find-my-way/index.js:610:14)\n    at Router.lookup (/home/app/node/node_modules/find-my-way/index.js:376:36)\n    at Server.emit (events.js:400:28)\n    at Server.emit (domain.js:475:12)",
  "type": "Error",
}

Troubleshooting So Far

I am using the local filesystem approach, deployed in docker.

Trying to piece together the docs between here and turborepo, I don't know whether the TURBO_TOKEN env var needs to be ANY string (the same in the CLI and in the deployed Docker container), or a bearer token.

I've also tried supplying the --token and --api flags during commands. This seemed to help get calls through to the cache, in addition to the valid .turbo/config.json apiUrl key.

I am going to keep digging, including trying to run this locally, but wanted to document my current roadblock in the hopes it could help others.
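
For reference, the 415 is Fastify rejecting the application/octet-stream request body before it reaches the route handler. Generic Fastify usage for accepting it looks like the sketch below (this is not necessarily how the project wires it up):

import fastify from 'fastify'

const app = fastify()

// Accept raw octet-stream bodies and pass the request stream through untouched.
app.addContentTypeParser('application/octet-stream', (request, payload, done) => {
  done(null, payload)
})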

Feature request: make accessKey and secretKey optional

If you have an active AWS profile, then you can omit accessKey and secretKey from the aws options.

Is there any way we could make these fields optional? We could add another .env variable that says whether you are using an AWS profile.
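
A sketch of the idea, assuming an aws-sdk v2 style S3 client: only pass explicit keys when both env vars are present, so the SDK falls back to its default credential provider chain (shared profile, instance role, etc.) otherwise.

import { S3 } from 'aws-sdk'

const { AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_REGION, S3_ENDPOINT } = process.env

const s3 = new S3({
  region: AWS_REGION,
  endpoint: S3_ENDPOINT,
  // Omitting the keys lets the SDK resolve credentials from its default chain.
  ...(AWS_ACCESS_KEY_ID && AWS_SECRET_ACCESS_KEY
    ? { accessKeyId: AWS_ACCESS_KEY_ID, secretAccessKey: AWS_SECRET_ACCESS_KEY }
    : {}),
})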

Cache server not working

Hope anyone that reads this is having a great day!

I only have the following line in the Docker Container logs, and no other log appears when I use the remote cache.

54896/11/01 01:17PM 30 Server listening at http://0.0.0.0:3000 | severity=INFO pid=6 hostname=turborepo-remote-cache

How do I know, when I run the npm script build command, whether the action is importing the cache and/or exporting it?

Is the STORAGE_PATH environmental variable correctly used along with volumes?

The following contains the contents of the TurboRepo Remote Cache docker-compose.yml file.

version: '3.9'
services:

  turborepo-remote-cache:
    image: fox1t/turborepo-remote-cache:1.8.0
    container_name: turborepo-remote-cache
    hostname: turborepo-remote-cache
    environment:
      - NODE_ENV=production
      - TURBO_TOKEN='xxx'
      - LOG_LEVEL=debug
      - STORAGE_PROVIDER=local
      - STORAGE_PATH=/tmp
    volumes:
      - /mnt/appdata/repo-team/turborepo-remote-cache/tmp:/tmp
    ports:
      - 3535:3000
    networks:
      - proxy

networks:
  proxy:
    driver: bridge
    external: true

Originally posted by @NorkzYT in #84
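
A couple of things that may help here, offered as guesses rather than a confirmed diagnosis: with the YAML list syntax above, the quotes in TURBO_TOKEN='xxx' become part of the value, which can make the token silently mismatch the one passed to turbo; and at this log level the container should print an "incoming request" line for every GET/PUT /v8/artifacts call, so running something like the following against the mapped port is an easy way to see whether requests arrive at all (team and token values are placeholders):

turbo run build --api="http://localhost:3535" --token="xxx" --team="team_myteam" --remote-only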

Free Hosted Remote Cache discontinued

Due to the high costs that were not justified by the actual usage, we have decided to discontinue our free hosted remote cache service. We apologize for any inconvenience this may cause and hope to bring the service back in the future.

Unable to use for different repos?

Hi, I have deployed the Docker version.
I just found out that I can't share the same remote cache server between different repos.

Example:
example.com <--- this is my custom remote cache server

repo A -- build command: "turbo build --team="team_A" --token="token" --api="http://example.com\""
repo B -- build command: "turbo build --team="team_B" --token="token" --api="http://example.com\""

Repo B cannot use the remote cache.
My custom remote cache server shows an "Artifact not found" error.

Is it not possible to use different repos with different teams?

Expected behavior

Should be able to use the remote cache even across different repos.

Your Environment

  • os: Mac
  • any other relevant information

ERR_STREAM_PREMATURE_CLOSE on PUT Artifact with Azure Blob Storage implementation

๐Ÿ› Bug Report

When I use the Azure Blob Storage implementation (which I implemented myself in this repo), I get an error on PUT artifact with the error code ERR_STREAM_PREMATURE_CLOSE.

It looks like this:

 {"severity":"WARNING","level":40,"time":1680011533094,"pid":8,"hostname":"***","reqId":"***","data":{"code":"ERR_STREAM_PREMATURE_CLOSE"},"isBoom":true,"isServer":false,"output":{"statusCode":412,"payload":{"statusCode":412,"error":"Precondition Failed","message":"Error during the artifact creation"},"headers":{}},"stack":"Error: Error during the artifact creation\n at Object.handler (/home/app/node/src/plugins/remote-cache/routes/put-artifact.ts:33:31)\n at process.processTicksAndRejections (node:internal/process/task_queues:95:5)","type":"Error","message":"Error during the artifact creation"}

It still uploads small artifacts, but big artifacts like our Next.js build outputs are not sent to Azure Blob Storage.

I have limited Node.js stream knowledge, so maybe you will have an idea of how to fix this.

Expected behavior

PUT artifact works without the 412 ERR_STREAM_PREMATURE_CLOSE error.

Your Environment

  • os: Mac
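
In case it helps the investigation: for large payloads, the Azure SDK's uploadStream consumes the source stream in buffered chunks, which tends to avoid premature-close issues when piping a request body. A generic sketch using @azure/storage-blob (the connection-string env var name is made up):

import { BlobServiceClient } from '@azure/storage-blob'
import type { Readable } from 'stream'

async function putArtifact(container: string, key: string, body: Readable) {
  const service = BlobServiceClient.fromConnectionString(process.env.ABS_CONNECTION_STRING!)
  const blob = service.getContainerClient(container).getBlockBlobClient(key)
  // Buffer the incoming stream in 4 MiB chunks, up to 5 concurrent blocks.
  await blob.uploadStream(body, 4 * 1024 * 1024, 5)
}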

Unable to link to self hosted cache

To Reproduce

Steps to reproduce the behavior:
Host a Docker container with simple local storage settings.
Try to link and enable the cache from the client by creating a .turbo/config.json with the appropriate fields: token, teamId and apiUrl.
Now run turbo link; on the client you get the error "could not get team information", while on the server these logs are printed:

2023-02-21T18:16:31.570785728Z {"severity":"INFO","level":30,"time":1677003391570,"pid":7,"hostname":"09921250ff30","reqId":"Oe2v3NkCQsO6f_5DaWa91Q-14","res":{"statusCode":404},"responseTime":0.828801155090332,"message":"request completed"}
2023-02-21T18:26:52.569629705Z {"severity":"INFO","level":30,"time":1677004012568,"pid":7,"hostname":"09921250ff30","reqId":"Oe2v3NkCQsO6f_5DaWa91Q-15","req":{"method":"GET","url":"/v2/teams?limit=100","hostname":"myserver.com:36891","remoteAddress":"10.0.0.2","remotePort":52646},"message":"incoming request"}
2023-02-21T18:26:52.569669865Z {"severity":"INFO","level":30,"time":1677004012568,"pid":7,"hostname":"09921250ff30","reqId":"Oe2v3NkCQsO6f_5DaWa91Q-15","message":"Route GET:/v2/teams?limit=100 not found"}
2023-02-21T18:26:52.569674505Z {"severity":"INFO","level":30,"time":1677004012569,"pid":7,"hostname":"09921250ff30","reqId":"Oe2v3NkCQsO6f_5DaWa91Q-15","res":{"statusCode":404},"responseTime":0.45916032791137695,"message":"request completed"}

My environment

Arch 6.1.12-arch1-1
[email protected]

Edit:

Just for clarification, running turbo with CLI flags like --team, --token and --api reports that it is using the remote cache
("Remote caching enabled").

Vercel deployment broken

Recently I changed this package to use the "standard" environment variables for AWS access. However, in #30 (comment) it was revealed that, for whatever reason, Vercel deployments do not allow those environment variables to be used.

Some fallback environment variables will likely be needed to support Vercel deployment.

Docs:

Vercel deployments will not work until that is resolved.

This may be as simple as changing vercel.ts like this:

import type { VercelRequest, VercelResponse } from '@vercel/node'
import { createApp } from './app'

// Propagate any environment vars APP_AWS_* to AWS_* to work around
// Vercel's restriction on environment variables
Object.keys(process.env).filter(k => k.startsWith('APP_AWS_')).forEach(k => {
  process.env[k.substring(4)] = process.env[k];
})

const app = createApp()

export default async function (req: VercelRequest, res: VercelResponse) {
  await app.ready()
  app.server.emit('request', req, res)
}

HTTP Error 412 while using recent Turbo versions

๐Ÿ› Bug Report

I updated Turbo from 1.10.16 to 1.12.5 today and started seeing this in CI:

 WARNING  failed to contact remote cache: Error making HTTP request: HTTP status client error (412 Precondition Failed) for url (http://0.0.0.0:45045/v8/artifacts/e946449d9e73b6d1?slug=ci)

There is some discussion around this in this turbo issue, where people mention that this is likely due to using this remote cache server along with S3 specifically.

To Reproduce

I doubt that it will be possible for me to create reproduction instructions / repo for this issue considering that others have failed to reliably reproduce this in the thread above.

Expected behavior

To not get the HTTP status errors.

Your Environment

  • Using this package with my GH action. I'm specifically using trappar/turborepo-remote-cache-gh-action@v2, which is a new version I've been working on in order to support the up-to-date version of this package.
  • Turbo version 1.12.5
  • Seeing the failures in CI while using GitHub Actions, where I'm using ubuntu-latest
  • Server is configured to connect to an S3 bucket

Support read-only mode to prevent unwanted cache update

🚀 Feature Proposal

Support a read-only mode that allows cache read from clients but doesn't contribute new cache entries.

Motivation

In the context of a monorepo CI integration, we would like to share CI cache artefacts with local machines, but we don't want users to be able to contribute new values.

The purpose is to speed up local work without risking contamination of the CI and CD artefacts by potentially broken or unsafe local clients.

Example

With such a feature, we would keep full cache read/write from the CI, where we can guarantee state and monitoring, using a normal instance.

At the same time, another instance of the remote repo could allow local users to consume the same cache entry (backed by s3 storage in our case) without risking broken builds.

I've looked at the official turbo documentation, and it seems they don't natively offer such a feature.
The closest to it are some cache management flags that prevent cache uploads, but those are easily circumvented and necessitate splitting CI and dev scripts.

An open issue on turbo addresses similar needs, but it seems nothing much has come of it yet.

Do you have any plans to implement this? And how could it be done?

We're ready to contribute to a PR implementing this if you're open to it :)
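
A minimal sketch of one way it could be done, assuming a hypothetical READ_ONLY environment variable checked in a Fastify hook; this is illustrative only, not the project's design:

import fastify from 'fastify'

const app = fastify()

// Reject artifact uploads when this instance is configured as read-only.
app.addHook('onRequest', async (request, reply) => {
  if (process.env.READ_ONLY === 'true' && request.method === 'PUT') {
    return reply.code(403).send({ error: 'This cache instance is read-only' })
  }
})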

Artifacts are never uploaded to cache server with AWS Lambda deployment

What version of Turborepo are you using?

1.8.3

What package manager are you using / does the bug impact?

Yarn v1, Yarn v2/v3 (node_modules linker only)

What operating system are you using?

Mac

Describe the Bug

The PUT endpoint for uploading artifacts is never called when deployed to AWS Lambda; when configured for Vercel's servers it stores them just fine.

My config.json

{
  "teamId": "team_FcALQN9XEVbeJ1NjTQoS9Wup",
  "apiUrl": "https://XXXXXXX.execute-api.eu-central-1.amazonaws.com",
  "token": "abcd"
}

Note: I have tried both teamId and teamSlug for the team identifier. It does not make any difference.

My build command

yarn turbo run build --token="abcd"           

(Yes, I am passing the token twice to really make sure it gets there.)

Requests on the cache server

1 GET Artefact

Request

{
    "severity": "INFO",
    "level": 30,
    "time": 1677700599229,
    "pid": 8,
    "hostname": "169.254.44.209",
    "reqId": "LuNKSAAKR5e9t5CU_gvE2A-2",
    "req": {
        "method": "GET",
        "url": "/v8/artifacts/3f04bb2017b72da6?teamId=team_FcALQN9XEVbeJ1NjTQoS9Wup",
        "hostname": "XXXXX.execute-api.eu-central-1.amazonaws.com",
        "remoteAddress": "XXXXX"
    },
    "message": "incoming request"
}

Response

{
    "severity": "WARNING",
    "level": 40,
    "time": 1677700599257,
    "pid": 8,
    "hostname": "XXX",
    "reqId": "LuNKSAAKR5e9t5CU_gvE2A-2",
    "data": {
        "message": null,
        "code": "BadRequest",
        "region": null,
        "time": "2023-03-01T19:56:39.239Z",
        "requestId": "G1XS8G2FKHWSXTSZ",
        "extendedRequestId": "XCmY4Ggr4L/bmuBfzffH445xoSDEb5qa/Dxo/lvYUanFAZtHoj9lY0Wq2xK/fH2VylFyByXiHsE=",
        "statusCode": 400,
        "retryable": false,
        "retryDelay": 40.85058254148659
    },
    "isBoom": true,
    "isServer": false,
    "output": {
        "statusCode": 404,
        "payload": {
            "statusCode": 404,
            "error": "Not Found",
            "message": "Artifact not found"
        },
        "headers": {}
    },
    "stack": "Error: Artifact not found\n    at Object.handler (/var/task/index.js:100897:37)\n    at processTicksAndRejections (node:internal/process/task_queues:96:5)",
    "type": "Error",
    "message": "Artifact not found"
}

POST artefact events

Request

{
    "severity": "INFO",
    "level": 30,
    "time": 1677700599517,
    "pid": 8,
    "hostname": "XXXX",
    "reqId": "LuNKSAAKR5e9t5CU_gvE2A-3",
    "req": {
        "method": "POST",
        "url": "/v8/artifacts/events?teamId=team_FcALQN9XEVbeJ1NjTQoS9Wup",
        "hostname": "XXXX.execute-api.eu-central-1.amazonaws.com",
        "remoteAddress": "86.49.226.196"
    },
    "message": "incoming request"
}

Response

{
    "severity": "INFO",
    "level": 30,
    "time": 1677700599518,
    "pid": 8,
    "hostname": "XXXX",
    "reqId": "LuNKSAAKR5e9t5CU_gvE2A-3",
    "res": {
        "statusCode": 200
    },
    "responseTime": 1.0016169999726117,
    "message": "request completed"
}

No other request follows this sequence, and the result is that artefacts are not uploaded.

Your Environment

  • os: Mac
  • turbo 1.8.3

Expected Behavior

Artefacts should be uploaded.

Current state of main branch is not deployable to Vercel

๐Ÿ› Bug Report

I attempted to deploy this repo via the Deploy to Vercel button; that, however, yielded an exception during the build (see screenshot). I later attempted to deploy it again via a fork, which didn't work either and resulted in the same error.

image

To Reproduce

Steps to reproduce the behavior:

Fork the repository, deploy the forked repo to Vercel, done.

Expected behavior

Build and start on Vercel without an exception.

Your Environment

Vercel ๐Ÿคท๐Ÿปโ€โ™‚๏ธ

Emit d.ts files in build

🚀 Feature Proposal

Emit d.ts files in build.

Motivation

The build process doesn't currently output any d.ts files. This means that when using the npm package in a typescript project, you have to add a new declaration file containing declare module 'turborepo-remote-cache/build/app';

As the project is written in typescript, it would be nicer to output the types by adding to tsconfig.json:

{
  "compilerOptions": {
    ...
+   "declaration": true,
  }
}

Example

Writing a proxy for the server (such as an AWS Lambda Function) in typescript.

azure-storage has been deprecated

npm WARN deprecated [email protected]: Please note: newer packages @azure/storage-blob, @azure/storage-queue and @azure/storage-file are available as of November 2019 and @azure/data-tables is available as of June 2021. While the legacy azure-storage package will continue to receive critical bug fixes, we strongly encourage you to upgrade. Migration guide can be found: https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/storage/MigrationGuide.md

publish on npm?

For folks wanting to run this locally between worktrees, not needing Docker or having to clone and compile would be a big help.

just a lil npx turborepo-remote-cache or something would be great :D <3

Error on use with GitHub Actions

๐Ÿ› Bug Report

I tried to use this repo with Docker inside GitHub Actions, but I got an "Error during the artifact creation". I have no idea what is causing this error because the message is generic.

I cloned this repo locally to debug, and there it works normally.

To Reproduce

Steps to reproduce the behavior:

name: Build and Deploy
on: [push]
jobs:
  build:
    name: Build and Deploy
    runs-on: ubuntu-latest

    services:
      turbo_cache:
        image: fox1t/turborepo-remote-cache
        env:
          NODE_ENV: production
          PORT: 3000
          TURBO_TOKEN: XXX
          STORAGE_PROVIDER: google-cloud-storage
          STORAGE_PATH: <storage-path>
          GCS_PROJECT_ID: <project-id>
          GCS_CLIENT_EMAIL: <email>
          GCS_PRIVATE_KEY: ${{ secrets.GCP_CREDENTIALS_JSON.private_key }}
        ports:
          - 3000:3000
//...

Error on GitHub Actions

Print service container logs: 21bde0cd5856471ca87367f9ae4b6d9d_fox1tturboreporemotecache_9b0647
/usr/bin/docker logs --details bb3a0a8794f47437bd7be173ed7db8f9f44e791998c32f5c6063a36fae8e4355
 {"severity":"INFO","level":30,"time":1675430365053,"pid":7,"hostname":"bb3a0a8794f4","message":"Server listening at http://0.0.0.0:3000/"}
 {"severity":"INFO","level":30,"time":1675[4](https://github.com/beyounglabs/transformers/actions/runs/4084527958/jobs/7041334215#step:10:4)30[5](https://github.com/beyounglabs/transformers/actions/runs/4084527958/jobs/7041334215#step:10:5)59487,"pid":7,"hostname":"bb3a0a8794f4","reqId":"UT8iMrFOS8G-Gre72NQmrQ-0","req":{"method":"GET","url":"/v8/artifacts/7ca77c752709781f?slug=beyoung_transformers","hostname":"172.17.0.1:3000","remoteAddress":"172.18.0.1","remotePort":32910},"message":"incoming request"}
 {"severity":"WARNING","level":40,"time":1675430559579,"pid":7,"hostname":"bb3a0a8794f4","reqId":"UT8iMrFOS8G-Gre72NQmrQ-0","data":{},"isBoom":true,"isServer":false,"output":{"statusCode":404,"payload":{"statusCode":404,"error":"Not Found","message":"Artifact not found"},"headers":{}},"stack":"/home/app/node/src/plugins/remote-cache/routes/get-artifact.ts:29\n      throw notFound(`Artifact not found`, err)\n                    ^\n\nError: Artifact not found\n    at Object.handler (/home/app/node/src/plugins/remote-cache/routes/get-artifact.ts:29:21)\n    at processTicksAndRejections (node:internal/process/task_queues:96:5)","type":"Error","message":"Artifact not found"}
 {"severity":"INFO","level":30,"time":1[6](https://github.com/beyounglabs/transformers/actions/runs/4084527958/jobs/7041334215#step:10:6)[7](https://github.com/beyounglabs/transformers/actions/runs/4084527958/jobs/7041334215#step:10:7)5430559585,"pid":7,"hostname":"bb3a0a8794f4","reqId":"UT8iMrFOS8G-Gre72NQmrQ-0","res":{"statusCode":404},"responseTime":96.68799800000852,"message":"request completed"}
 {"severity":"INFO","level":30,"time":1675430559788,"pid":7,"hostname":"bb3a0a8794f4","reqId":"UT8iMrFOS8G-Gre72NQmrQ-1","req":{"method":"POST","url":"/v8/artifacts/events?slug=beyoung_transformers","hostname":"172.17.0.1:3000","remoteAddress":"172.18.0.1","remotePort":32926},"message":"incoming request"}
 {"severity":"INFO","level":30,"time":1675430559791,"pid":7,"hostname":"bb3a0a[8](https://github.com/beyounglabs/transformers/actions/runs/4084527958/jobs/7041334215#step:10:8)7[9](https://github.com/beyounglabs/transformers/actions/runs/4084527958/jobs/7041334215#step:10:9)4f4","reqId":"UT8iMrFOS8G-Gre72NQmrQ-1","res":{"statusCode":200},"responseTime":2.7053000000014435,"message":"request completed"}
 {"severity":"INFO","level":30,"time":1675430563280,"pid":7,"hostname":"bb3a0a8794f4","reqId":"UT8iMrFOS8G-Gre72NQmrQ-2","req":{"method":"GET","url":"/v8/artifacts/b3c1adf0c0c67da0?slug=beyoung_transformers","hostname":"172.17.0.1:3000","remoteAddress":"172.18.0.1","remotePort":34550},"message":"incoming request"}
 {"severity":"INFO","level":30,"time":1675430563284,"pid":7,"hostname":"bb3a0a8794f4","reqId":"UT8iMrFOS8G-Gre72NQmrQ-3","req":{"method":"PUT","url":"/v8/artifacts/7ca77c752709781f?slug=beyoung_transformers","hostname":"172.17.0.1:3000","remoteAddress":"172.18.0.1","remotePort":34560},"message":"incoming request"}
 {"severity":"WARNING","level":40,"time":1675430563298,"pid":7,"hostname":"bb3a0a8794f4","reqId":"UT8iMrFOS8G-Gre72NQmrQ-3","data":{},"isBoom":true,"isServer":false,"output":{"statusCode":412,"payload":{"statusCode":412,"error":"Precondition Failed","message":"Error during the artifact creation"},"headers":{}},"stack":"/home/app/node/src/plugins/remote-cache/routes/put-artifact.ts:33\n      throw preconditionFailed(`Error during the artifact creation`, err)\n                              ^\n\nError: Error during the artifact creation\n    at Object.handler (/home/app/node/src/plugins/remote-cache/routes/put-artifact.ts:33:31)\n    at processTicksAndRejections (node:internal/process/task_queues:96:5)","type":"Error","message":"Error during the artifact creation"}
 {"severity":"INFO","level":30,"time":1675430563299,"pid":7,"hostname":"bb3a0a8794f4","reqId":"UT8iMrFOS8G-Gre72NQmrQ-3","res":{"statusCode":412},"responseTime":15.39499900001[10](https://github.com/beyounglabs/transformers/actions/runs/4084527958/jobs/7041334215#step:10:10)12,"message":"request completed"}
 {"severity":"WARNING","level":40,"time":1675430563308,"pid":7,"hostname":"bb3a0a8794f4","reqId":"UT8iMrFOS8G-Gre72NQmrQ-2","data":{},"isBoom":true,"isServer":false,"output":{"statusCode":404,"payload":{"statusCode":404,"error":"Not Found","message":"Artifact not found"},"headers":{}},"stack":"/home/app/node/src/plugins/remote-cache/routes/get-artifact.ts:29\n      throw notFound(`Artifact not found`, err)\n                    ^\n\nError: Artifact not found\n    at Object.handler (/home/app/node/src/plugins/remote-cache/routes/get-artifact.ts:29:21)\n    at processTicksAndRejections (node:internal/process/task_queues:96:5)","type":"Error","message":"Artifact not found"}
 {"severity":"INFO","level":30,"time":1675430563309,"pid":7,"hostname":"bb3a0a8794f4","reqId":"UT8iMrFOS8G-Gre72NQmrQ-2","res":{"statusCode":404},"responseTime":28.97209799999837,"message":"request completed"}
 {"severity":"INFO","level":30,"time":1675430563512,"pid":7,"hostname":"bb3a0a8794f4","reqId":"UT8iMrFOS8G-Gre72NQmrQ-4","req":{"method":"POST","url":"/v8/artifacts/events?slug=beyoung_transformers","hostname":"172.17.0.1:3000","remoteAddress":"172.18.0.1","remotePort":34572},"message":"incoming request"}
 {"severity":"INFO","level":30,"time":1675430563513,"pid":7,"hostname":"bb3a0a8794f4","reqId":"UT8iMrFOS8G-Gre72NQmrQ-4","res":{"statusCode":200},"responseTime":1.[11](https://github.com/beyounglabs/transformers/actions/runs/4084527958/jobs/7041334215#step:10:11)73000000[12](https://github.com/beyounglabs/transformers/actions/runs/4084527958/jobs/7041334215#step:10:12)6194,"message":"request completed"}
 {"severity":"INFO","level":30,"time":1675430569545,"pid":7,"hostname":"bb3a0a8794f4","reqId":"UT8iMrFOS8G-Gre72NQmrQ-5","req":{"method":"GET","url":"/v8/artifacts/7a84019eaa8739de?slug=beyoung_transformers","hostname":"172.17.0.1:3000","remoteAddress":"172.18.0.1","remotePort":34582},"message":"incoming request"}
 {"severity":"INFO","level":30,"time":1675430569549,"pid":7,"hostname":"bb3a0a8794f4","reqId":"UT8iMrFOS8G-Gre72NQmrQ-6","req":{"method":"PUT","url":"/v8/artifacts/b3c1adf0c0c67da0?slug=beyoung_transformers","hostname":"172.17.0.1:3000","remoteAddress":"172.18.0.1","remotePort":34594},"message":"incoming request"}
 {"severity":"INFO","level":30,"time":1675430569549,"pid":7,"hostname":"bb3a0a8794f4","reqId":"UT8iMrFOS8G-Gre72NQmrQ-7","req":{"method":"GET","url":"/v8/artifacts/eaa815124d059847?slug=beyoung_transformers","hostname":"172.17.0.1:3000","remoteAddress":"172.18.0.1","remotePort":34578},"message":"incoming request"}
 {"severity":"WARNING","level":40,"time":1675430569556,"pid":7,"hostname":"bb3a0a8794f4","reqId":"UT8iMrFOS8G-Gre72NQmrQ-6","data":{},"isBoom":true,"isServer":false,"output":{"statusCode":412,"payload":{"statusCode":412,"error":"Precondition Failed","message":"Error during the artifact creation"},"headers":{}},"stack":"/home/app/node/src/plugins/remote-cache/routes/put-artifact.ts:33\n      throw preconditionFailed(`Error during the artifact creation`, err)\n                              ^\n\nError: Error during the artifact creation\n    at Object.handler (/home/app/node/src/plugins/remote-cache/routes/put-artifact.ts:33:31)\n    at processTicksAndRejections (node:internal/process/task_queues:96:5)","type":"Error","message":"Error during the artifact creation"}
 {"severity":"INFO","level":30,"time":1675430569557,"pid":7,"hostname":"bb3a0a8794f4","reqId":"UT8iMrFOS8G-Gre72NQmrQ-6","res":{"statusCode":412},"responseTime":8.76[14](https://github.com/beyounglabs/transformers/actions/runs/4084527958/jobs/7041334215#step:10:14)99000014737,"message":"request completed"}
 {"severity":"WARNING","level":40,"time":1675430569577,"pid":7,"hostname":"bb3a0a8794f4","reqId":"UT8iMrFOS8G-Gre72NQmrQ-5","data":{},"isBoom":true,"isServer":false,"output":{"statusCode":404,"payload":{"statusCode":404,"error":"Not Found","message":"Artifact not found"},"headers":{}},"stack":"/home/app/node/src/plugins/remote-cache/routes/get-artifact.ts:29\n      throw notFound(`Artifact not found`, err)\n                    ^\n\nError: Artifact not found\n    at Object.handler (/home/app/node/src/plugins/remote-cache/routes/get-artifact.ts:29:21)\n    at processTicksAndRejections (node:internal/process/task_queues:96:5)","type":"Error","message":"Artifact not found"}
 {"severity":"INFO","level":30,"time":1675430569579,"pid":7,"hostname":"bb3a0a8794f4","reqId":"UT8iMrFOS8G-Gre72NQmrQ-5","res":{"statusCode":404},"responseTime":33.89039800001774,"message":"request completed"}
 {"severity":"WARNING","level":40,"time":1675430569605,"pid":7,"hostname":"bb3a0a8794f4","reqId":"UT8iMrFOS8G-Gre72NQmrQ-7","data":{},"isBoom":true,"isServer":false,"output":{"statusCode":404,"payload":{"statusCode":404,"error":"Not Found","message":"Artifact not found"},"headers":{}},"stack":"/home/app/node/src/plugins/remote-cache/routes/get-artifact.ts:29\n      throw notFound(`Artifact not found`, err)\n                    ^\n\nError: Artifact not found\n    at Object.handler (/home/app/node/src/plugins/remote-cache/routes/get-artifact.ts:29:21)\n    at processTicksAndRejections (node:internal/process/task_queues:96:5)","type":"Error","message":"Artifact not found"}
 {"severity":"INFO","level":30,"time":1675430569606,"pid":7,"hostname":"bb3a0a8794f4","reqId":"UT8iMrFOS8G-Gre72NQmrQ-7","res":{"statusCode":404},"responseTime":56.34499599999981,"message":"request completed"}
 {"severity":"INFO","level":30,"time":1675430569808,"pid":7,"hostname":"bb3a0a8794f4","reqId":"UT8iMrFOS8G-Gre72NQmrQ-8","req":{"method":"POST","url":"/v8/artifacts/events?slug=beyoung_transformers","hostname":"172.17.0.1:3000","remoteAddress":"172.18.0.1","remotePort":34604},"message":"incoming request"}
 {"severity":"INFO","level":30,"time":1675430569810,"pid":7,"hostname":"bb3a0a8794f4","reqId":"UT8iMrFOS8G-Gre72NQmrQ-8","res":{"statusCode":200},"responseTime":0.8954000000085216,"message":"request completed"}
 {"severity":"INFO","level":30,"time":1675430588230,"pid":7,"hostname":"bb3a0a8794f4","reqId":"UT8iMrFOS8G-Gre72NQmrQ-9","req":{"method":"PUT","url":"/v8/artifacts/eaa8[15](https://github.com/beyounglabs/transformers/actions/runs/4084527958/jobs/7041334215#step:10:15)124d059847?slug=beyoung_transformers","hostname":"172.17.0.1:3000","remoteAddress":"172.18.0.1","remotePort":45336},"message":"incoming request"}
 {"severity":"WARNING","level":40,"time":[16](https://github.com/beyounglabs/transformers/actions/runs/4084527958/jobs/7041334215#step:10:16)75430588272,"pid":7,"hostname":"bb3a0a8794f4","reqId":"UT8iMrFOS8G-Gre72NQmrQ-9","data":{},"isBoom":true,"isServer":false,"output":{"statusCode":412,"payload":{"statusCode":412,"error":"Precondition Failed","message":"Error during the artifact creation"},"headers":{}},"stack":"/home/app/node/src/plugins/remote-cache/routes/put-artifact.ts:33\n      throw preconditionFailed(`Error during the artifact creation`, err)\n                              ^\n\nError: Error during the artifact creation\n    at Object.handler (/home/app/node/src/plugins/remote-cache/routes/put-artifact.ts:33:31)\n    at processTicksAndRejections (node:internal/process/task_queues:96:5)","type":"Error","message":"Error during the artifact creation"}
 {"severity":"INFO","level":30,"time":1675430588274,"pid":7,"hostname":"bb3a0a8794f4","reqId":"UT8iMrFOS8G-Gre72NQmrQ-9","res":{"statusCode":412},"responseTime":43.37320099998033,"message":"request completed"}
 {"severity":"INFO","level":30,"time":1675430621811,"pid":7,"hostname":"bb3a0a8794f4","reqId":"UT8iMrFOS8G-Gre72NQmrQ-10","req":{"method":"PUT","url":"/v8/artifacts/7a84019eaa8739de?slug=beyoung_transformers","hostname":"[17](https://github.com/beyounglabs/transformers/actions/runs/4084527958/jobs/7041334215#step:10:17)2.17.0.1:3000","remoteAddress":"172.[18](https://github.com/beyounglabs/transformers/actions/runs/4084527958/jobs/7041334215#step:10:18).0.1","remotePort":35098},"message":"incoming request"}
 {"severity":"WARNING","level":40,"time":16754306[21](https://github.com/beyounglabs/transformers/actions/runs/4084527958/jobs/7041334215#step:10:21)863,"pid":7,"hostname":"bb3a0a8794f4","reqId":"UT8iMrFOS8G-Gre72NQmrQ-10","data":{},"isBoom":true,"isServer":false,"output":{"statusCode":412,"payload":{"statusCode":412,"error":"Precondition Failed","message":"Error during the artifact creation"},"headers":{}},"stack":"/home/app/node/src/plugins/remote-cache/routes/put-artifact.ts:33\n      throw preconditionFailed(`Error during the artifact creation`, err)\n                              ^\n\nError: Error during the artifact creation\n    at Object.handler (/home/app/node/src/plugins/remote-cache/routes/put-artifact.ts:33:31)\n    at processTicksAndRejections (node:internal/process/task_queues:96:5)","type":"Error","message":"Error during the artifact creation"}
 {"severity":"INFO","level":30,"time":1675430621872,"pid":7,"hostname":"bb3a0a8794f4","reqId":"UT8iMrFOS8G-Gre72NQmrQ-10","res":{"statusCode":412},"responseTime":60.56[23](https://github.com/beyounglabs/transformers/actions/runs/4084527958/jobs/7041334215#step:10:23)04999970365,"message":"request completed"}

Turbo build Cache enabled

image

Cache stored (from a local test) in GCP bucket

image

Again, I used console.error locally in this file to catch the error:

"Error: error:0909006C:PEM routines:get_name:no start line\n    at Sign.sign (node:internal/crypto/sig:131:29)\n    at Object.sign (/home/mnzs/tmp/turborepo-remote-cache/node_modules/.pnpm/[email protected]/node_modules/jwa/index.js:152:45)\n    at Object.jwsSign [as sign] (/home/mnzs/tmp/turborepo-remote-cache/node_modules/.pnpm/[email protected]/node_modules/jws/lib/sign-stream.js:32:24)\n    at GoogleToken.requestToken (/home/mnzs/tmp/turborepo-remote-cache/node_modules/.pnpm/[email protected]/node_modules/gtoken/build/src/index.js:232:31)\n    at GoogleToken.getTokenAsyncInner (/home/mnzs/tmp/turborepo-remote-cache/node_modules/.pnpm/[email protected]/node_modules/gtoken/build/src/index.js:166:21)\n    at GoogleToken.getTokenAsync (/home/mnzs/tmp/turborepo-remote-cache/node_modules/.pnpm/[email protected]/node_modules/gtoken/build/src/index.js:145:55)\n    at GoogleToken.getToken (/home/mnzs/tmp/turborepo-remote-cache/node_modules/.pnpm/[email protected]/node_modules/gtoken/build/src/index.js:97:21)\n    at JWT.refreshTokenNoCache (/home/mnzs/tmp/turborepo-remote-cache/node_modules/.pnpm/[email protected]/node_modules/google-auth-library/build/src/auth/jwtclient.js:172:36)\n    at JWT.refreshToken (/home/mnzs/tmp/turborepo-remote-cache/node_modules/.pnpm/[email protected]/node_modules/google-auth-library/build/src/auth/oauth2client.js:152:24)\n    at JWT.getRequestMetadataAsync (/home/mnzs/tmp/turborepo-remote-cache/node_modules/.pnpm/[email protected]/node_modules/google-auth-library/build/src/auth/oauth2client.js:284:28)"

This occurred because I was using base64-encoded GCP credentials; I changed that to JSON, passed it in env vars, and it works locally.

But some error is occurring in GitHub Actions and I can't see the real error. Is there another way to get errors from the storage layer? Or another example of using this solution with GitHub Actions?
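
One thing to check, given the PEM "no start line" error above: in GitHub Actions, ${{ secrets.GCP_CREDENTIALS_JSON.private_key }} does not extract a field from a secret, since the secret is just a string. If the secret holds the full service-account JSON, an expression like the following may work instead (untested sketch):

GCS_PRIVATE_KEY: ${{ fromJSON(secrets.GCP_CREDENTIALS_JSON).private_key }}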

Consider removing @vercel/node from the dependencies

๐Ÿ› Bug Report

Currently, the installation size of this package is huge. This is partly due to @vercel/node, which lists an old version of typescript as a dependency, not a peer dependency. The size of turborepo-remote-cache is 202MB, of which typescript alone is more than 50MB. If your project uses a newer version of typescript, as it most likely does, two different versions will be installed, which means typescript alone will take over 100MB. They are discussing moving typescript to peer dependencies, but it will not happen for the time being.

However, @vercel/node is currently only used for some trivial type information in this package: VercelRequest and VercelResponse. These two type definitions are nothing but the following:

import { ServerResponse, IncomingMessage } from 'http';
export declare type VercelRequestCookies = {
    [key: string]: string;
};
export declare type VercelRequestQuery = {
    [key: string]: string | string[];
};
export declare type VercelRequestBody = any;
export declare type VercelRequest = IncomingMessage & {
    query: VercelRequestQuery;
    cookies: VercelRequestCookies;
    body: VercelRequestBody;
};
export declare type VercelResponse = ServerResponse & {
    send: (body: any) => VercelResponse;
    json: (jsonBody: any) => VercelResponse;
    status: (statusCode: number) => VercelResponse;
    redirect: (statusOrUrl: string | number, url?: string) => VercelResponse;
};

It doesn't seem to me this is worth the 50MB of disk size. Therefore, I suggest that you consider removing @vercel/node, at least for the time being.

To Reproduce

npm init -y
npm install turborepo-remote-cache [email protected] --save-dev
du -sh node_modules # 265M
du -sh node_modules/typescript # 64M
du -sh node_modules/@vercel/node/node_modules/typescript # 58M

Expected behavior

Only one version of typescript will be installed, reducing the installation size.

Your Environment

  • os: Mac
  • any other relevant information

Cache not being utilized when called through Docker build

I've successfully cloned the turborepo-remote-cache and I've hooked it up in AWS. Everything is working fine, up until I try to use the cache from docker. Here's my dockerfile:

FROM docker.artifactory.moveaws.com/node:16 as build
COPY . /app
WORKDIR /app
RUN yarn install
RUN yarn build
RUN npx turbo run build

Both RUN yarn build and RUN npx turbo run build are not updating the cache. Originally I thought that my Docker build couldn't communicate with the deployed service, but when I added a curl inside the container, I could verify that outbound calls are properly triggered.

My question is, has anyone run into this? Maybe someone has an example Dockerfile already out there? Or what could possibly be the issue causing the commands not to trigger the cache during the build phase?
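
For what it's worth, turbo only talks to a remote cache when it knows the API URL, team, and token inside the build. A sketch of passing them in as build args (the ARG names mirror the TURBO_API/TURBO_TEAM/TURBO_TOKEN environment variables turbo reads; values are placeholders):

FROM docker.artifactory.moveaws.com/node:16 as build
ARG TURBO_API
ARG TURBO_TEAM
ARG TURBO_TOKEN
ENV TURBO_API=$TURBO_API TURBO_TEAM=$TURBO_TEAM TURBO_TOKEN=$TURBO_TOKEN
COPY . /app
WORKDIR /app
RUN yarn install
RUN npx turbo run build

Built with: docker build --build-arg TURBO_API=... --build-arg TURBO_TEAM=... --build-arg TURBO_TOKEN=... .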

Remote caching disabled since turbo 1.9.4

๐Ÿ› Bug Report

Remote caching disabled since turbo 1.9.4

To Reproduce

Upgrade to >1.9.3 version

Expected behavior

Get Remote caching enabled

Since upgrading turborepo to 1.9.4 there is no way to get remote caching enabled. Does someone have a fix, or can anyone reproduce the problem? Thanks.

Add an option to increase bodyLimit.


๐Ÿ› Bug Report

Some of the generated artifacts cannot be stored because of the body size limit. Here's the server log:

{"severity":"ERROR","level":50,"time":1700542082033,"pid":8,"hostname":"126f24f5fcfd","reqId":"dEQBFoUGTuGubhl7lyAaDA-11","name":"FastifyError","code":"FST_ERR_CTP_BODY_TOO_LARGE","message":"Request body is too large","statusCode":413,"stack":"FastifyError: Request body is too large\n    at rawBody (/home/app/node/node_modules/.pnpm/[email protected]/node_modules/fastify/lib/contentTypeParser.js:206:16)\n    at ContentTypeParser.run (/home/app/node/node_modules/.pnpm/[email protected]/node_modules/fastify/lib/contentTypeParser.js:166:5)\n    at handleRequest (/home/app/node/node_modules/.pnpm/[email protected]/node_modules/fastify/lib/handleRequest.js:41:33)\n    at runPreParsing (/home/app/node/node_modules/.pnpm/[email protected]/node_modules/fastify/lib/route.js:530:5)\n    at next (/home/app/node/node_modules/.pnpm/[email protected]/node_modules/fastify/lib/hooks.js:168:7)\n    at handleResolve (/home/app/node/node_modules/.pnpm/[email protected]/node_modules/fastify/lib/hooks.js:185:5)","type":"Error"}

To Reproduce

Steps to reproduce the behavior:

  1. Run the cache server in any possible way (locally with Docker in my case) and with any provider (local in my case)
  2. Run some turbo run build commands

Paste your code here:

TURBO_TEAM=my-team TURBO_TOKEN=token TURBO_API=http://0.0.0.0:3000 turbo run build --no-daemon --remote-only --force

Expected behavior

All artifacts are saved.
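
A sketch of what the requested option could look like, assuming a hypothetical BODY_LIMIT environment variable wired through to Fastify (the name and default are made up, not an existing setting):

import fastify from 'fastify'

// Fastify's default bodyLimit is ~1 MiB; allow overriding it via env.
const bodyLimit = Number(process.env.BODY_LIMIT ?? 1048576)
const app = fastify({ bodyLimit })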

apiUrl is not working

๐Ÿ› Bug Report

I configured apiUrl, but turborepo only uses the default cache.

To Reproduce

config.json

{
  "teamId": "team_name",
  "apiUrl": "http://localhost:3000"
}

package.json

{
  "build": "turbo run build --token=token_name"
}

Paste the results here:

image

The log prints that the remote cache is enabled, but cache files are still created in the default path.

Your Environment

  • os: Mac
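
Worth noting: turbo keeps writing to its local cache even when a remote cache is configured, so files appearing in the default local cache path doesn't by itself mean the remote cache is unused. One way to verify the remote side, reusing the flags that appear elsewhere in these issues (values are placeholders):

turbo run build --token=token_name --api="http://localhost:3000" --team=team_name --remote-only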

Help request - configured with s3, issue in the server log

Hi there, we're trying to set up a remote cache using AWS S3 and can see the following error in the log, relating to the line of code linked below. Note: this is running on k8s with the image ducktors/turborepo-remote-cache:1.14.1.

{"severity":"WARNING","level":40,"time":1684164667390,"pid":7,"hostname":"turborepo-cache-57d6c996db-m8ktx","reqId":"WmEucNymQiefEGn9CLNAYw-0","data":null,"isBoom":true,"isServer":false,"output":{"statusCode":400,"payload":{"statusCode":400,"error":"Bad Request","message":"Missing Authorization header"},"headers":{}},"stack":"Error: Missing Authorization header\n    at Object.<anonymous> (/home/app/node/src/plugins/remote-cache/index.ts:56:27)\n    at hookIterator (/home/app/node/node_modules/.pnpm/[email protected]/node_modules/fastify/lib/hooks.js:246:10)\n    at next (/home/app/node/node_modules/.pnpm/[email protected]/node_modules/fastify/lib/hooks.js:174:16)\n    at hookRunner (/home/app/node/node_modules/.pnpm/[email protected]/node_modules/fastify/lib/hooks.js:196:3)\n    at Object.routeHandler [as handler] (/home/app/node/node_modules/.pnpm/[email protected]/node_modules/fastify/lib/route.js:469:7)\n    at Router.callHandler (/home/app/node/node_modules/.pnpm/[email protected]/node_modules/find-my-way/index.js:398:14)\n    at Router.lookup (/home/app/node/node_modules/.pnpm/[email protected]/node_modules/find-my-way/index.js:376:17)\n    at Server.preRouting (/home/app/node/node_modules/.pnpm/[email protected]/node_modules/fastify/fastify.js:773:14)\n    at Server.emit (node:events:513:28)\n    at Server.emit (node:domain:489:12)","type":"Error","message":"Missing Authorization header"}

https://github.com/ducktors/turborepo-remote-cache/blob/main/src/plugins/remote-cache/index.ts#L56

The following env vars are set; key and secret come via ~/.aws.

STORAGE_PROVIDER - s3
STORAGE_PATH - something
AWS_REGION - eu-west-2
S3_ENDPOINT - https://XXX.s3.eu-west-2.amazonaws.com

This is the result of trying to run a command against this server; the local command is:

turbo run ci:lint -vvv --remote-only

with .turbo/config.json of

{
  "teamId": "team_myteam",
  "apiUrl": "XXXXX",
  "token": "XXXXX"
}

Won't lie, not too sure which aspect of this is unhappy, but I expect it's something to do with our config :) Any help would be appreciated, please!
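
The 400 above means the request reached the server without an Authorization header at all, rather than with a wrong token. A quick way to test the server independently of turbo, with placeholder host/token/team values: if the token is accepted, a made-up hash should come back as a 404 "Artifact not found" instead of the 400.

curl -v -H "Authorization: Bearer XXXXX" \
  "https://your-cache-host/v8/artifacts/0000000000000000?teamId=team_myteam"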

Option to log to file

🚀 Feature Proposal

Ability to specify a log mode, with file being an available option, alongside an option to set the file location.

Motivation

When running on docker in some environments, there's limited support for some of the docker log-driver options. A general purpose way to collect logs in these scenarios is to log to disk and collect by an agent on the host via a mount.

Example

i.e. new environment variable options:

LOG_MODE=file
LOG_FILE=/path/to/my/file.log
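
For reference, the server logs through pino (Fastify's logger), which can already write to a file; a sketch of wiring the proposed variables to it (LOG_MODE and LOG_FILE are the proposal's names, not existing settings):

import fastify from 'fastify'
import pino from 'pino'

// When LOG_MODE=file, write logs to LOG_FILE instead of stdout.
const logger =
  process.env.LOG_MODE === 'file'
    ? pino(pino.destination(process.env.LOG_FILE ?? '/var/log/turborepo-remote-cache.log'))
    : true

const app = fastify({ logger })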
