
import-map-deployer's Introduction


single-spa

Join the chat on Slack

Donate to this project

Official single-spa hosting


A JavaScript framework for front-end microservices

Build micro frontends that coexist and can (but don't need to) be written with their own framework.

Sponsors

DataCamp, Toast, Asurion

To add your company's logo to this section:

Documentation

You can find the single-spa documentation on the website.

Check out the Getting Started page for a quick overview.

Demo and examples

Please see the examples page on the website.

Want to help?

Want to file a bug, contribute some code, or improve documentation? Excellent! Read up on our guidelines for contributing on the single-spa website.

Contributing

The main purpose of this repository is to continue to evolve single-spa, making it better and easier to use. Development of single-spa, and the single-spa ecosystem happens in the open on GitHub, and we are grateful to the community for contributing bugfixes and improvements. Read below to learn how you can take part in improving single-spa.

Single-spa has adopted a Code of Conduct that we expect project participants to adhere to. Please read the full text so that you can understand what actions will and will not be tolerated.

Read our contributing guide to learn about our development process, how to propose bugfixes and improvements, and how to build and test your changes to single-spa.

import-map-deployer's People

Contributors

bartjanvanassen, blittle, brandones, cristianosl, danopia, dckesler, dependabot[bot], dijitali, frehner, iot-resister, joeldenning, kfrederix, kgehmlich, kristianmandrup, mellis481, nhumrich, thawkin3, themcmurder, tungurlakachakcak, vongohren, yzalvin, zleight1


import-map-deployer's Issues

IAM role support to prevent the need to rotate keys for S3

Enhancement Request
A new configuration option would allow the import map deployer to run as a specific IAM Role (cli ref provided here) which would allow us to avoid using AWS CLI keys.

Context / User Story:

One of the AWS experts at my company, @dsmith2828, pointed out that due to our company's key-rotation policies we would have to log into our EC2 instance every 3 months to update the AWS CLI keys for the import map deployer.

Work Plan:
I would write the PR and provide unit tests.

Add ability to specify which import map to patch

Hello,
we are happy users of import-map-deployer. We have it configured to update an import map hosted in an S3 bucket, at a URL like s3://<something>/prod/import-map.json.

We are going to have a brand-new product with its own import map. However, duplicating all our AWS infrastructure for the new product seems like overkill. Instead, we'd like import-map-deployer to be able to update s3://<something>/prod/{productName}/import-map.json, where productName is sent in the request body of the PATCH /import-map.json endpoint.

Is there any reason import-map-deployer does not currently allow specifying which import map to update during a PATCH operation?
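For illustration, a hypothetical sketch of how a per-product import map location could be resolved from the configured base location plus a productName field in the PATCH body. The `productName` field, the `resolveImportMapPath` helper, and the path layout are assumptions, not current behavior:

```javascript
// Hypothetical sketch (not current import-map-deployer behavior):
// derive a per-product import map path from the configured base location.
function resolveImportMapPath(baseLocation, productName) {
  // Strip the trailing file name so the product folder can be inserted.
  const prefix = baseLocation.replace(/\/import-map\.json$/, "");
  return productName
    ? `${prefix}/${productName}/import-map.json`
    : `${prefix}/import-map.json`;
}

// resolveImportMapPath("s3://bucket/prod/import-map.json", "new-product")
//   -> "s3://bucket/prod/new-product/import-map.json"
```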

I am ready to open a pull request implementing that feature, but first wanted to make sure it conforms to the vision of this project.

Thanks!

CC @joeldenning @TheMcMurder

How to make an HTTP request with curl or `http` when `username` and `password` are set?

Hello, I'm doing some experiments with a Dockerfile,
setting HTTP_USERNAME and HTTP_PASSWORD,

but the request fails when I call the service normally (as shown in the docs):

vctqs1$ http :5000/enviroments
HTTP/1.1 401 Unauthorized
Connection: keep-alive
Content-Length: 0
Date: Wed, 09 Feb 2022 10:52:03 GMT
Keep-Alive: timeout=5
WWW-Authenticate: Basic realm="sofe-deplanifester"
X-Powered-By: Express
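When those env vars are set, the request needs an HTTP Basic auth header. A minimal sketch of constructing that header in Node, assuming standard Basic auth (the credentials here are placeholders):

```javascript
// Build the HTTP Basic auth header value expected when HTTP_USERNAME and
// HTTP_PASSWORD are set: "Basic " + base64("username:password").
function basicAuthHeader(username, password) {
  const encoded = Buffer.from(`${username}:${password}`).toString("base64");
  return `Basic ${encoded}`;
}

// Equivalent CLI invocations (placeholder credentials):
//   curl -u admin:secret http://localhost:5000/environments
//   http -a admin:secret :5000/environments
console.log(basicAuthHeader("admin", "secret")); // Basic YWRtaW46c2VjcmV0
```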

Minio Support

Just wanted to share. I got this working, since Minio is similar to Digital Ocean Spaces:

config.json

{
  "manifestFormat": "importmaps",
  "s3Endpoint": "https://<selfhosted.domain>",
  "locations":{
    "default": "spaces://minio.<selfhosted.domain>/import-map.json"
  }
}

docker-compose.yml

version: "3.7"
services:
  import-map-deployer:
    image: singlespa/import-map-deployer
    ports:
      - 5000:5000
    environment:
      AWS_ACCESS_KEY_ID: $MINIO_ID
      AWS_SECRET_ACCESS_KEY: $MINIO_SECRET
    volumes:
      - ./config.json:/www/config.json

npx command not working on Node 12.x

Hey, it's my first time using this. Very nice tooling!

I didn't find which Node version is supported in the documentation so I'm just reporting it.
It's working fine on Node.js 14.x for me.

$ npx import-map-deployer config.js
Cannot find module 'fs/promises'
Require stack:
- ~/.npm/_npx/27925/lib/node_modules/import-map-deployer/src/io-methods/filesystem.js
- ~/.npm/_npx/27925/lib/node_modules/import-map-deployer/src/io-methods/default.js
- ~/.npm/_npx/27925/lib/node_modules/import-map-deployer/src/io-operations.js
- ~/.npm/_npx/27925/lib/node_modules/import-map-deployer/src/web-server.js
- ~/.npm/_npx/27925/lib/node_modules/import-map-deployer/bin/import-map-deployer

Thanks!

NPX example doesn't work

I'm setting up the recommended single-spa workflow and in testing this tool I got some weird behavior.

If you run the example npx import-map-deployer config.json (inside another project, not a clone of this repo) you will get a 404 error from the NPM registry:

npm ERR! code E404
npm ERR! 404 Not Found - GET https://registry.npmjs.org/import-map-deployer - Not found
npm ERR! 404
npm ERR! 404  'import-map-deployer@latest' is not in the npm registry.
npm ERR! 404 You should bug the author to publish it (or use the name yourself!)
npm ERR! 404
npm ERR! 404 Note that you can also install from a
npm ERR! 404 tarball, folder, http url, or git url.

My guess is that either you're assuming the user is running it against a private mirror, or something else is going on.

If you could clarify the proper way to run this from Node, with a little more context, it would probably resolve this issue.

If you need more information or would like me to clarify more, let me know. Thanks for the awesome work on all the single-spa ecosystem so far!

How to delete scopes?

Was able to add scopes via PATCH import-map.json, but can't seem to find a way to clean them up after adding them. Is there currently a solution for this?

Understanding http auth for getting json

Hi,

I am trying to use the deployer to avoid cache problems in my micro-frontends.

I want updates to import-map.json to require some kind of security, like a username and password, but when fetching it from my container (index.html) I don't want to use HTTP auth.

Is there any configuration for import-map-deployer to act this way?

Documentation: aliases missing

The documentation shows that an alias can be created to set a default environment, but the syntax is missing from the documentation.

I would like to change my current setup to make prod the default but I'm not sure how to do it:

{
    "manifestFormat": "importmap",
    "locations": {
        "prod": "google://my-bucket/prod-import-map.json",
        "staging": "google://my-bucket/staging-import-map.json",
        "dev": "google://my-bucket/dev-import-map.json"
    },
    ...
}
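One workaround that matches the config format shown above would be to duplicate the prod URL under a location named "default", assuming the deployer falls back to that name when no env is specified (which is exactly the part the docs leave unclear):

```json
{
    "manifestFormat": "importmap",
    "locations": {
        "default": "google://my-bucket/prod-import-map.json",
        "prod": "google://my-bucket/prod-import-map.json",
        "staging": "google://my-bucket/staging-import-map.json",
        "dev": "google://my-bucket/dev-import-map.json"
    }
}
```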

Create empty file when not found

I wonder about one thing. I found that this service expects a file to already exist in the storage location in order to work. Is that correct, or is it something I have done wrong?

The problem is that when initiating the infrastructure, it is unclear who should control this empty state. Setting up with Terraform and creating the file during provisioning is not optimal, as the Terraform state changes and the file is not consistent. So I believe this service should handle an empty bucket, and potentially just create an empty file.

But maybe it already does, and I had the wrong setup when testing?

[Feature] Support before and after hooks when calling the import map deployer

For my use case, after calling the import map deployer I'd like to call another service to update a version, and make sure that this also happens inside the lock to handle race conditions.

The behavior would look like this:

  1. use curl to call the import map deployer
  2. acquire the lock
  3. beforeHook
  4. update the import-map.json file
  5. afterHook, which might send a notification to Slack or another service
  6. release the lock
  7. execute the next call

Could the team review this? I'd be happy to open a PR and contribute if possible.
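The steps above can be sketched as follows. `beforeHook`/`afterHook` are hypothetical options, not part of import-map-deployer today, and the lock is modeled here as a simple promise chain that serializes updates:

```javascript
// Sketch of the proposed hook flow (steps numbered as in the issue).
let queue = Promise.resolve();

function withLock(fn) {
  const result = queue.then(fn);   // step 2: run fn once prior work settles
  queue = result.catch(() => {});  // keep the chain alive if fn rejects
  return result;                   // step 6: "lock" releases when fn settles
}

async function updateImportMap(patch, { beforeHook, afterHook, write }) {
  return withLock(async () => {
    if (beforeHook) await beforeHook(patch); // step 3
    await write(patch);                      // step 4: update import-map.json
    if (afterHook) await afterHook(patch);   // step 5: e.g. notify Slack
  });
}
```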

AWS Role assigned to ServiceAccount is ignored when running in kubernetes/EKS

import-map-deployer ignores the role assigned to the service account when running in k8s/EKS. As a result it is impossible to limit S3 access permissions to just the particular pod running import-map-deployer.

It looks like the [email protected] locked in the yarn.lock file has either a bug or missing functionality: it seems to ignore the AWS_ROLE_ARN and AWS_WEB_IDENTITY_TOKEN_FILE provided by the EKS integration with AWS IAM roles.

When I run your image with an interactive shell and execute the commands below in a node REPL:

const aws = require('aws-sdk');
const sts = new aws.STS({region: "eu-west-1"});
sts.getCallerIdentity({}, function(error,data){console.log(data)});

I get the EKS worker node role in return.

However, when I run your base image (node:14-alpine) and install the latest aws-sdk@2 (in my case 2.1274.0), it returns the role assigned to the service account. I double-checked this using the same image (node:14-alpine) and explicitly installing [email protected], and the behaviour is exactly the same as in your image.

Possibly the quickest fix is to update the aws-sdk@2 version in yarn.lock to something more recent.

How to use the Docker Hub image to push to Kubernetes cluster

I'm following along your importmaps-deployer video tutorial.

I see there is a live-import-map-deployer

This is an example of extending the import-map-deployer image from Docker Hub:

FROM canopytax/import-map-deployer

I'm wondering about the conf.js file. I've seen it in the video as config.json, I believe? Please add something to the readme for live-import-map-deployer and update it to match the latest single-spa/import-map-deployer image and conventions.

Took me a while to figure all this out.

What about using the gcloud CLI to push the image and interact with the cluster?

Thanks for everything. Very excited to get this infrastructure all working :)

Authentication broken

Hi! First of all: thank you for this great tool!
While playing with this yesterday, I noticed that the authentication was not working.
A recent change to config.js removed the exports.config, but that was still used from io-operations.js.

I have linked a pull request to fix this.

Should DELETE delete both entries - with and without trailing slash?

After reading it, I'm not sure if this is a duplicate of #62 or not. My apologies if it is.

  1. With packagesViaTrailingSlashes=true:

Calling PATCH http://server:5000/services
with this body:

{
    "service": "TestService",
    "url": "https://apis.google.com/js/client.js"
}

Adds two entries to the import map:

{
    "imports": {
        "TestService": "https://apis.google.com/js/client.js",
        "TestService/": "https://apis.google.com/js/"
    }
}

Great.

  2. My question: When you call DELETE http://server:5000/services/TestService, it only removes one entry, and the import map now looks like:
{
    "imports": {
        "TestService/": "https://apis.google.com/js/"
    }
}

Should it delete "TestService/" as well? I tried making a separate DELETE call, using various combinations: DELETE server:5000/services/TestService/ , etc.
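A sketch of what deleting both entries together could look like (a suggestion, not current behavior; `deleteService` is a hypothetical helper):

```javascript
// Remove both the bare entry and the trailing-slash twin that
// packagesViaTrailingSlashes creates when patching.
function deleteService(importMap, serviceName) {
  const imports = { ...importMap.imports };
  delete imports[serviceName];
  delete imports[serviceName + "/"];
  return { ...importMap, imports };
}
```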

I'm getting "cannot read importmap.json" during health check

Right now I'm getting this issue when building and deploying to Cloud Run.
I have solved it by adding USER root in my wrapping Dockerfile.

But that does not feel good, so I wonder what is supposed to be possible.

There is a line in this Dockerfile that sets /www to be owned by root, and I believe that conflicts with the user being set to node. That in turn conflicts with the IO operation of reading the local import map file.

So I'm just curious: what is the right way of doing it?

Can't delete a service name with a /

I patched a service via an HTTP call to the import-map-deployer API endpoint, with a name that contains a slash, like @divilo/authentication.

I think import-map-deployer should validate the service name before adding its reference. Otherwise the delete method won't work:

{
  "imports": {
    "single-spa": "https://cdn.jsdelivr.net/npm/[email protected]/lib/system/single-spa.min.js",
    "vue": "https://cdn.jsdelivr.net/npm/[email protected]/dist/vue.min.js",
    "vue-router": "https://cdn.jsdelivr.net/npm/[email protected]/dist/vue-router.min.js",
    "@divilo/authentication": "https://assets.divilodemo.com/authentication/cef4e230dde1d946135647428ef4a65e/divilo-authentication.js"
  }
}
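One client-side workaround to try: URL-encode the service name so the "/" does not split the route. Whether the server decodes this correctly depends on its Express route definitions, so this is a suggestion to test, not a confirmed fix:

```javascript
// Percent-encode the "@" and "/" so the name survives as a single
// path segment in the DELETE route.
const serviceName = "@divilo/authentication";
const encoded = encodeURIComponent(serviceName);
console.log(encoded); // %40divilo%2Fauthentication
// e.g. DELETE http://localhost:5000/services/%40divilo%2Fauthentication
```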

Add cache control header to import maps in Microsoft Azure

Currently S3, Digital Ocean, and Google Cloud Storage all are instructed to send a Cache-Control HTTP response header when serving the import map file.

References:

cacheControl: "public, must-revalidate, max-age=0",

CacheControl: "public, must-revalidate, max-age=0",

However, the same thing does not exist for Microsoft Azure. We should add this to the Azure integration. The file to modify is https://github.com/single-spa/import-map-deployer/blob/master/src/io-methods/azure.js

Feature Request: Set S3 ACL other than public-read

We have a setup where setting the ACL of the import-map.json in S3 needs to be something other than public-read.

Right now in the s3 code, the PUT call is defaulted to 'public-read' and can't be overridden.

I think the best way would be another configuration variable, that validates against the list of pre-canned ACLs here: https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
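A sketch of what that validation could look like (my proposal, not existing code; the canned-ACL list is taken from the AWS docs linked above):

```javascript
// S3 canned ACLs, per the AWS documentation.
const CANNED_ACLS = [
  "private",
  "public-read",
  "public-read-write",
  "aws-exec-read",
  "authenticated-read",
  "bucket-owner-read",
  "bucket-owner-full-control",
  "log-delivery-write",
];

// Fall back to the current hardcoded default; reject anything
// that is not a recognized canned ACL.
function resolveAcl(configuredAcl) {
  if (configuredAcl === undefined) return "public-read";
  if (!CANNED_ACLS.includes(configuredAcl)) {
    throw new Error(
      `Invalid S3 ACL "${configuredAcl}". Must be one of: ${CANNED_ACLS.join(", ")}`
    );
  }
  return configuredAcl;
}
```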

I'll open a PR for this, let me know if there is anything to know or watch out for.

Tests fail after adding `config.json`

Once a config.json file is added to the root of the project, the tests fail:

expected 200 "OK", got 401 "Unauthorized"

This looks to be an issue with the setConfig method in src/config.js: looking at the code, it seems that the intention was to allow the config to be overridden with the arg passed into this method. However, as the tests fail, it seems that this is not the case?

AWS, s3, AccessControlListNotSupported

Hello.
When I try to use AWS S3 bucket as storage for import-maps with "ACLs disabled" option enabled for the bucket I get this error:
"Could not patch service -- AccessControlListNotSupported: The bucket does not allow ACLs".

From documentation: "If the bucket that you're uploading objects to uses the bucket owner enforced setting for S3 Object Ownership, ACLs are disabled and no longer affect permissions. Buckets that use this setting only accept PUT requests that don't specify an ACL or PUT requests that specify bucket owner full control ACLs, such as the bucket-owner-full-control canned ACL or an equivalent form of this ACL expressed in the XML format. PUT requests that contain other ACLs (for example, custom grants to certain Amazon Web Services accounts) fail and return a 400 error with the error code AccessControlListNotSupported." https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html#putObject-property

Am I missing something?
Thank you in advance.

Add cache control configuration option

This issue is a continuation of the discussion from this item regarding adding a new configuration option to be able to set the cache-control header for the importmap.json file. The default is public, must-revalidate, max-age=0 which results in the file being cached at cloud edge locations for an indeterminate amount of time.

In the linked issue, there was talk of making it a new configuration item. After reviewing the documentation again, however, I'm wondering if it's currently possible using the s3.putObject property. So... including something like this in the import-map-deployer's config.json:

{
  "s3": {
    "putObject": {
      "CacheControl": "no-store"
    }
  }
}

@joeldenning Would this work or is there a need to add something new? Based on how it was coded, it looks like it would work.

Throwing request error on updates

I'm getting the following error for PATCH requests:

RequestError: The downloaded data did not match the data from the server. To be sure the content is the same, you should download the file again.

I'm not sure what file this refers to. I'm running the deployer as a Cloud Run instance with max_instances 1. Does this implementation require the service to be running all the time?

Support per environment access keys for azure

So when the storage bucket differs per application, you have to create an instance of the import map deployer per environment, because the process env vars only support one key (process.env.AZURE_STORAGE_...). It would be nice if you could override the keys from the config file.

Offering unit testing help

After #20 is complete, I'd be interested in helping to write some unit tests. Would @joeldenning be able to write some expectations here so that I can fill out the setups?

The following are just ideas. I'm open to however you want to help me to help you guys add test coverage. Since you @frehner and @joeldenning have been so kind, I'd really like to find a way to repay that kindness and to give back to the single-spa community. :)

In other words, if I was provided information I would be able to write the test code:

  • Given the following setup (i.e. Arrange)
  • Under this action (i.e. Act)
  • I expect this (i.e. Assert)

An example might be:

  • Given a malformed import map
  • That a user tries to patch
  • I expect a 400 error with a message of "The import map was malformed"

Non-existent environments in conf.js should return an error instead of the default environment settings

I have configured two environments in my conf.js (file below), which work fine when I GET and PATCH import-map.json. But it always returns the sit1 file for wrong environments like sit3 and sit4, which don't even exist in conf.js. Ideally it should throw an error if a wrong environment name is passed in the query string.

The below URL still works, but should not:
import-map.json?env=sit3

module.exports = {
  manifestFormat: "importmap",
  packagesViaTrailingSlashes: false,
  locations: {
    sit1: "/www/mfe-sit/sit1/microfrontends/import-map.json",
    sit2: "/www/mfe-sit/sit2/microfrontends/import-map.json",
  },
};
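The requested behavior can be sketched as a strict lookup that rejects unknown environment names instead of silently falling back (a proposal sketch, not existing code):

```javascript
// Strict environment lookup: error out on unknown env names rather than
// returning the first configured location.
function getLocation(locations, env) {
  if (!Object.prototype.hasOwnProperty.call(locations, env)) {
    const known = Object.keys(locations).join(", ");
    throw new Error(`Unknown environment "${env}". Known environments: ${known}`);
  }
  return locations[env];
}
```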

Upcoming AWS ACL Changes

Back in 2020 I had done some work here to add ACL configs for AWS - see PR, it's been a few years but I think originally the issue was the deployer assumed the bucket access was public-read only so we needed to add more ACL options - see issue.

Anyway, it seems like AWS is changing the default way bucket access/ACLs work come April 2023, and if I understand correctly any new buckets created will have issues using the import-map-deployer as-is unless they specifically set the ACL to the previous behavior, which many people would miss. It seems that existing buckets should be OK, but my guess is that these will eventually need to be migrated.

Any new buckets that need to use import-map-deployer could have issues either with the API calls (since we'd still be sending ACLs) and config or with the ownership changes (might need specific user rights and would be good to document).

The blog post can be found here

This isn't necessarily an issue but more of a discussion (but it will be an issue/PR eventually is my guess), so my first question is:

  • Should we make changes to the way the ACL config works in the deployer?
    Note also there could be a situation of mixed old/new bucket types so we might need a fallback or additional type of flag, etc.

[Question] Is PATCH /import-map.json supposed to work?

It doesn't seem to, and I am not clear on why I would use that over /services.

If there is no difference, can I remove this /import-map.json PATCH from the docs? I just spent a stupid amount of time bumbling around trying to update it.

Noisy logging of health check requests

We use the /health endpoint as a Kubernetes readiness probe, which results in a lot of log noise because it gets called every few seconds.

Could we suppress logging of the /health endpoint if it returns a 200 response?

Happy to make a PR but looking for confirmation of:

  • is the proposal OK?
  • should I bother making it a configurable setting?
  • if so, should it be off or on by default?
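The proposal can be sketched as a small log filter that skips only successful health checks, so failing probes still show up. The `quietHealthCheck` option name and the `shouldLog` helper are hypothetical:

```javascript
// Skip logging only for 200 responses on /health; everything else,
// including failing probes, is still logged.
function shouldLog(path, statusCode, { quietHealthCheck = true } = {}) {
  return !(quietHealthCheck && path === "/health" && statusCode === 200);
}

// With morgan this could be wired up roughly as:
//   app.use(morgan("combined", {
//     skip: (req, res) => !shouldLog(req.path, res.statusCode),
//   }));
```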

Cannot delete from import map if service name includes a `/` in it

Having a / in a service name is pretty common, for namespacing. Example: @openmrs/root-config.

But deleting something from the import map doesn't work if you attempt to do so, probably because the / is interpreted to mean a different part of the route entirely.

Auth environment variables have no effect

Setting HTTP_USERNAME and HTTP_PASSWORD as in the Dockerfile example has no noticeable effect. They neither set nor override the username and password values for basic auth.

Have I misunderstood the point of these environment variables?

Allow adding/updating service with non-public url

When I run a curl command against a running instance of import-map-deployer, e.g.

curl -d '{"service":"service-name", "url":"service-url"}' -X PATCH localhost:5000/services\?env=dev ...

it first verifies that the file at service-url exists by making a network request. However, if the service-url is not public, this check fails.

It might be preferable if this check/verification could optionally be disabled, to allow adding service URLs that are secured in some cloud storage.
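A sketch of making the check optional via a hypothetical `skipUrlVerification` config flag (not a real option today; `verifyUrl` and its signature are illustrative):

```javascript
// Optionally skip the pre-deploy existence check for secured URLs.
// fetchFn is injected so the behavior is easy to exercise without a network.
async function verifyUrl(url, config, fetchFn) {
  if (config.skipUrlVerification) return; // trust non-public urls
  const res = await fetchFn(url);
  if (!res.ok) {
    throw new Error(`Could not verify url ${url}: HTTP ${res.status}`);
  }
}
```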

Docker Image Versioned Tags

Hi,

I've noticed that the Docker image on Docker Hub only has a "latest" tag. It would be great if new tagged releases could generate a new tag on Docker Hub as well, so it is possible for us to run a specific version in Production.

Thanks!

Eslint validation

Hey!

I was looking at some PRs I was writing on this project and noticed that ESLint was run by Travis, though my editor + precommit hook only checks Prettier. Can we also add ESLint to the project as a dev dependency + precommit hook? I noticed the configuration is already in the project.

How is the CDN URL the same for every environment?

@joeldenning

https://github.com/single-spa/import-map-deployer/blob/main/examples/ci-for-javascript-repo/gitlab-aws-no-import-map-deployer/.gitlab-ci.yml

I am trying to implement a deploy process for our single-spa apps. The above example has the same CDN URLs for all environments, but each environment has its own bucket. So how would one CDN URL point to different buckets?

And if that is not the case and the CDN URLs are supposed to be different, that takes me to another question. In the root config we have to specify a CDN URL to download import-map.json. So do we have any example of how a different CDN URL can be configured in the root config for each environment?

The way I am doing it right now: when I build the app, I pass an environment variable for the CDN URL, and that gets replaced in index.html for importmap.json.

To do that, I have to build my apps again for the other environments (qa, prod) before deploying to S3.

Enable configuration of exposed container port

Currently the Dockerfile is hardcoded to expose port 5000

EXPOSE 5000

However, web-server.js allows a custom port via the config file passed as an argument when the web server is started:

let config = require('./config.js').config

app.listen(config.port || 5000, 
// ...
)

This can apparently be done nicely via ARG (see get-environment-variable-value-in-dockerfile):

ARG container_port=5000
ENV PORT=$container_port
EXPOSE $PORT

Pretty neat

$ docker build --build-arg container_port=5000

Typescript support / source code + Deno?

There has been some interest in porting the source code to Typescript (e.g. @dgreene1 has expressed interest)

At the same time, Deno has just been released which supports TS natively. There have been some that have been interested in trying it out/supporting it. (@filoxo maybe?)

Perhaps we can take this as an opportunity to do both at the same time? Create a Deno version of this project that's written in TS?

Or maybe that would be too much to bite off in one pass, so perhaps we create a TS fork of this repo, and then take that as source material and port it to Deno?

Just spitballing here. Thoughts?

Feature request: allow non-auth access to health check

When basic auth is enabled, accessing /health will return a 401 if the auth header is missing. It would be nice to have the ability to turn off basic authentication for the /health request path.

In the meantime, / will work.

Contribute - Support for Angular Module Federation (mf.manifest.json)

Hello,

I would love to contribute to this project. I am not using single-spa - I am using Angular Module Federation.

They have a concept very similar to the import map:

https://github.com/angular-architects/module-federation-plugin/blob/main/libs/mf/tutorial/tutorial.md#part-4c-use-a-registry

I would like to add a simple parameter "type = mef" to the import map deployer to support this type of file.

Please confirm that you will add it :)
