
1Password SCIM Bridge deployment examples

You can deploy 1Password SCIM Bridge on any supported infrastructure that allows ingress by your identity provider and egress to 1Password servers. Here you'll find configuration files and best practices to help with your deployment.

Tip

Before you deploy your SCIM bridge, learn more about automating provisioning in 1Password using SCIM.

Marketplace apps

For an automated deployment of 1Password SCIM Bridge where some settings are pre-configured for you, use a marketplace app from Google Cloud Platform or DigitalOcean.

Custom deployment

To customize your deployment, you can use containers as a service or create your own advanced deployment with the examples below as your base.

Before you begin

Before you begin deploying 1Password SCIM Bridge, review the Preparation Guide. The guide will help you plan for some of the technical components of the deployment and consider some issues you may encounter along the way.

Containers as a service deployment options

Containers as a service (CaaS) can simplify your deployment by using the built-in tools of the CaaS for DNS and certificate management. This gives you an easy, low-cost SCIM bridge with minimal infrastructure management requirements.

Advanced deployment options

If you have particular requirements for your environment, we recommend an advanced deployment. These example configurations will give you a base to create the deployment from, as well as explain what 1Password SCIM Bridge needs to function and how to maintain your bridge once you've deployed it.

Beta deployment

These are beta versions of 1Password SCIM Bridge deployment examples. These deployments should work, but aren't guaranteed and will change in the future. See the README for more information about the "beta" designation.

Deprecated deployment methods

A list of recently-deprecated deployments can be found in /deprecated. At the time of deprecation, these deployments were fully functional, but will soon become unsupported.

Deprecation schedule

When a deployment method is deprecated, we will simultaneously append a deprecation notice to the deployment name listed in this README and move all files associated with the deployment method to /deprecated.

Deprecated deployments will remain in /deprecated for approximately three months, after which time they will be deleted. The deletion date of deprecated deployments will be posted in /deprecated/README.md.

Where possible, we will provide suggested alternatives in /deprecated/README.md.

Get help

If you encounter issues with your SCIM bridge deployment or have general questions about automated provisioning, contact 1Password Support. If you need additional deployment examples or some information in these guides needs improvement, file an issue or open a pull request in this repo to let us know.


scim-examples's Issues

Move general-purpose account setup to more generalized documentation

Instead of repeating the same account setup guidelines throughout the examples, the generic setup information should live in its own doc and be linked to from the more specific readmes.

For example, in the Kubernetes setup readme, remove the "Provision Manager" account setup steps sprinkled throughout and link to the general doc instead, keeping only Kubernetes-specific details in that documentation.

This would improve maintainability in the future.

Seeking Advice: Does a WAF make sense to place in front of the SCIM bridge?

Hello,

A team member of mine has recently deployed a 1Password SCIM bridge in AWS. Reportedly, this must be public-facing. As a result, I am trying to decide whether it makes sense to place a web application firewall on the load balancer in front of the SCIM bridge.

I don't know what the actual application looks like, but it looks like traffic is over HTTP (443 forwarded to 3002). Looking at the SCIM 2 documentation at https://scim.cloud/, it's all over HTTP.

So, would 1Password's team here agree that it makes sense to apply a WAF? If not, are there any reasons in particular?

And if you do agree the SCIM bridge would benefit from a WAF's protection, could you tell me what the web server is based on (Tomcat, nginx, Jetty, etc.)? I can't find any documentation on this.

Much appreciated!

AWS ECS Fargate Terraform - Feedback and tweaks

Today I tried the ECS + Fargate Terraform

Feedback

First of all, excellent work. It was a breeze to set up, and it worked on the first try; even after several changes I made to match our internal best practices, it worked like a charm.

Suggested tweaks

  • Tags: Most organizations have a well-defined taxonomy for tags. Define a var.tags variable that users can populate, and set it in all the resources that support it.
  • Default name prefix: Define a var.name_prefix variable, and use it in all the resources where a name, name prefix, or ID is required.
  • Specify the retention days of the log group:
resource "aws_cloudwatch_log_group" "scim-bridge" {
  name = local.name_prefix
  tags = local.tags
  retention_in_days = 14
}

resource "aws_ecs_cluster" "scim-bridge" {
  name = local.name_prefix
  tags = local.tags
}

...
  • Obtain the Zone ID from a data source: Instead of taking the Route53 Zone ID as a variable, use the domain to determine the Zone ID:
data "aws_route53_zone" "zone" {
  name         = var.domain // var.domain = example.com
  private_zone = false
}
  • Obtain the VPC and subnets from a data source: Most organizations don't use the default VPC, and there's a VPC already setup.
# Find the vpc by name
data "aws_vpc" "vpc" {
  tags = {
    Name = "my-vpc-name"
  }
}

# Find the public subnets in the VPC
data "aws_subnet_ids" "subnets" {
  vpc_id = data.aws_vpc.vpc.id
  tags = { SubnetTier = "public"}
}
  • Specify the VPC id when setting up the sec. groups:
resource "aws_security_group" "scim-bridge-sg" {
  vpc_id = data.aws_vpc.vpc.id
...
}
  • Get the ACM certificate from a data source instead of creating a new one: Most organizations already have an ACM cert "*.example.com"
data "aws_acm_certificate" "scim_certificate" {
  domain = "*.${var.domain}"
}

Stop YOLO development in the helm chart.

Hi,

It seems that the Helm chart provided by 1Password has hit some growing pains in the course of its development.

There is no proper changelog, there are unannounced default-value changes, and there are other random issues stemming from a disconnect between the chart and the bridge itself. Here's a related issue in that repo.

Our bridge has been down for the whole day due to an issue after upgrading to the latest version of the chart. We've been able to fix it only after setting the following in the chart:

  • scim.config.tlsDomain: null
  • manually editing the deployment and adding OP_TLS_KEY_FILE and OP_TLS_CERT_FILE as empty; otherwise the binary would stubbornly try to run in TLS mode
  • redis.enabled: false

Only after setting the above and updating the key paths that changed were we able to have a functioning bridge.

I've raised a couple of issues in that repo, but it seems that it's not really a priority. I've also tried contacting support, with no luck.

Hope this raises awareness about this issue.

OS_REDIS_PASSWORD environment variable

When deploying the SCIM bridge with Docker, it would be good to have an environment variable in scim.env called OS_REDIS_PASSWORD to allow using Redis instances with an authentication mechanism.
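For illustration, the requested variable might sit in scim.env alongside the existing Redis settings; OS_REDIS_PASSWORD is the name proposed in this issue, not an existing option, and the hostname shown is an example:

```shell
# scim.env (sketch; OS_REDIS_PASSWORD is the proposed variable, not an existing one)
OP_REDIS_URL=redis://op-scim-redis:6379
OS_REDIS_PASSWORD=hunter2   # would be sent via AUTH when connecting to Redis
```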

AWS-Terraform implementation documentation unclear

I want to create and deploy a SCIM bridge between 1Password and Okta.

I would like to use AWS and Terraform for this purpose, but your scim-example for this is Linux-specific.

Can you please provide an example that would be suitable for creating and deploying a SCIM bridge from a Mac?

Preparation Guide references deprecated port requirements

This line in the Preparation Guide is out-of-date, since it references port 80, which is no longer required as of v2.2.0:

SSL certificates are handled through the [LetsEncrypt](https://letsencrypt.org/) service which automatically generates and renews an SSL certificate based on the domain name you've decided on. On your firewall, you should ensure that the service can access Port 80 and Port 443, as Port 80 is required for the LetsEncrypt service to complete its domain challenge and issue your SCIM bridge an SSL certificate. Note that a TLS connection is mandatory for connecting to the 1Password service.

Versions ≥2.2.0 of 1Password SCIM bridge use the TLS-ALPN-01 challenge, which requires only port 443.

Kubernetes deployment example: Deployment containers have no resource limits

Noted in the code linter for the Kubernetes plugin for VS Code:

One or more containers do not have resource limits - this could starve other processes

We should investigate and spec out reasonable default resource limits for the SCIM bridge and Redis containers in their respective deployment manifests (op-scim-deployment.yaml and redis-deployment.yaml).

See also Resource Management for Pods and Containers [Kubernetes Documentation].
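As a starting point, limits along these lines could be added to the container specs in op-scim-deployment.yaml and redis-deployment.yaml; the values below are illustrative placeholders, not vetted defaults:

```yaml
# op-scim-deployment.yaml (excerpt; values are placeholders to be benchmarked)
spec:
  containers:
    - name: op-scim-bridge
      resources:
        requests:
          cpu: 125m
          memory: 256Mi
        limits:
          cpu: 250m
          memory: 512Mi
```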

Google Workspace credentials and settings secrets not correctly available to pods in K8s deployment

Although op-scim-config.yaml specifies the workspace credentials as part of the data object:

OP_WORKSPACE_CREDENTIALS: "/secret/workspace-credentials"
OP_WORKSPACE_SETTINGS: "/secret/workspace-settings"

They do not get mounted in the deployment:

volumeMounts:
  - name: scimsession
    mountPath: "/secret"
    readOnly: false

volumes:
  - name: scimsession
    secret:
      secretName: scimsession

  • Create the reference in .spec.volumes[] for each GW secret
  • Create the mountpoint in spec.containers[].volumeMounts[]
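A sketch of the missing wiring described in the checklist above; the Secret names and mount paths here are assumptions to verify against the actual manifests:

```yaml
# Sketch only: mount each Google Workspace secret at the path named in the ConfigMap.
volumes:
  - name: workspace-credentials
    secret:
      secretName: workspace-credentials
  - name: workspace-settings
    secret:
      secretName: workspace-settings

volumeMounts:
  - name: workspace-credentials
    mountPath: "/secret/workspace-credentials"
    readOnly: true
  - name: workspace-settings
    mountPath: "/secret/workspace-settings"
    readOnly: true
```

Since the scimsession volume already mounts at /secret, these mounts may need subPath entries or distinct paths to avoid shadowing; that detail would need testing.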

Happy to be the assignee once my repo access is sorted and I'll get this fixed up.

aws-ecsfargate-terraform doesn't have the ELB target the Fargate cluster

Looking at the Terraform configuration for the ELB, it seems that the target for the LB is empty. The rest of the deployment seems to be working, but the health check is failing and nothing is being redirected by the LB. Is there a rule missing from the terraform.tf file?

Cannot disable letsencrypt

Hi,

op-scim will start with Let's Encrypt mode enabled even when the flag is not passed, as long as I pass a previously generated scimsession file. I'm using the v1.4.2 Docker image.

Valid scimsession file:

# /op-scim/op-scim --port=80 --redis-host=op-scim-bridge-redis-master --session=/secret/scimsession
[LOG] [1.0] 2020/06/23 15:27:25 (INFO) using host op-scim-bridge-redis-master for redis connection
[LOG] [1.0] 2020/06/23 15:27:25 (INFO) creating redis connection with address op-scim-bridge-redis-master:6379
[LOG] [1.0] 2020/06/23 15:27:25 (INFO) successfully connected to cache
[LOG] [1.0] 2020/06/23 15:27:25 (INFO) waiting for bearer token to begin provisioned user watcher
[LOG] [1.0] 2020/06/23 15:27:25 (INFO) requesting TLS certificate for domain "<hidden>"
[LOG] [1.0] 2020/06/23 15:27:25 (INFO) starting LetsEncrypt challenge server on :8080
[LOG] [1.0] 2020/06/23 15:27:25 (INFO) starting 1Password SCIM Bridge server on :8443
...

Empty session file: (expected behavior)

# /op-scim/op-scim --port=80 --redis-host=op-scim-bridge-redis-master --session=/test
[LOG] [1.0] 2020/06/23 15:26:33 (INFO) using host op-scim-bridge-redis-master for redis connection
[LOG] [1.0] 2020/06/23 15:26:33 (INFO) creating redis connection with address op-scim-bridge-redis-master:6379
[LOG] [1.0] 2020/06/23 15:26:33 (INFO) successfully connected to cache
[LOG] [1.0] 2020/06/23 15:26:33 (INFO) waiting for bearer token to begin provisioned user watcher
[LOG] [1.0] 2020/06/23 15:26:33 (INFO) Starting 1Password SCIM bridge server on :80
...

Update AWS Fargate instructions for not using Route53

Our documentation on how to deploy the SCIM bridge to ECS could use some clarification for the use case where Route 53 will not be used. This includes using an AWS Certificate Manager certificate and the fact that this setup does not support Let's Encrypt as is.

Docker Compose v1 is deprecated

Docker Compose v2 graduated to GA earlier this year and Docker Compose v1 was deprecated (see Announcing Compose V2 General Availability - Docker).

Our docs and scripts should be updated to use the new docker compose subcommand. A formal investigation is still needed, but there don't seem to be any breaking changes for our deployments. In my experience deploying with customers, simply removing the hyphen (i.e. replacing all instances of docker-compose with docker compose) does the trick on Docker Engine, without separately installing Docker Desktop or the Docker Compose plugin.

See also Compose command compatibility with docker-compose | Docker Documentation.
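The mechanical part of that migration can be sketched as a plain string substitution (GNU sed assumed; the repo-wide command is illustrative and would also rewrite URLs that happen to contain "docker-compose"):

```shell
# The v1-to-v2 rename is a plain string substitution:
printf 'docker-compose up --detach\n' \
  | sed 's/docker-compose/docker compose/'   # -> docker compose up --detach

# Applied repo-wide (keeps .bak backups so the diff can be reviewed):
# grep -rl 'docker-compose' . | xargs -r sed -i.bak 's/docker-compose/docker compose/g'
```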

Helm support

Is there a plan to add helm support for deploying this?

I am using Connect and the secret-injector (the latter seems to use an older image than the latest), but I would like to know whether you plan to continue supporting Helm deployments and, if so, whether SCIM is on the list for Helm support.

Document `OP_REDIS_*` variables

As part of v2.8.5, the environment variables related to redis were undeprecated and reintroduced as configuration options. We need to document those variables.

These include:

  • OP_REDIS_HOST - sets the Redis hostname (e.g. localhost)
  • OP_REDIS_PORT - sets the Redis port (e.g. 6379)
  • OP_REDIS_USERNAME - sets the username used to connect to Redis (e.g. admin)
  • OP_REDIS_PASSWORD - sets the password used to connect to Redis (e.g. hunter2)
  • OP_REDIS_SSL_ENABLED - sets whether the connection should be SSL-enabled (boolean: 0 or 1)
  • OP_REDIS_INSECURE_SSL - sets whether insecure SSL connections should be allowed when OP_REDIS_SSL_ENABLED is also enabled (boolean: 0 or 1)

Also of note is that a user must unset OP_REDIS_URL for these to be used. In other words, if OP_REDIS_URL is set, none of the other OP_REDIS_* variables will be used.
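Collected into an environment-file sketch (values taken from the examples above; remember that OP_REDIS_URL must be unset for any of these to take effect):

```shell
# scim.env (sketch) - only consulted when OP_REDIS_URL is unset
OP_REDIS_HOST=localhost
OP_REDIS_PORT=6379
OP_REDIS_USERNAME=admin
OP_REDIS_PASSWORD=hunter2
OP_REDIS_SSL_ENABLED=1
OP_REDIS_INSECURE_SSL=0
```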

v1.1.1 User add errors synching to AzureAD

I recently upgraded my SCIM bridge to v1.1.1 from Docker Hub, and we immediately noticed errors when new users were being synced. I rolled back to v1.0 (skipping v1.1.0, since it was only up for two days and I assumed it was bad), and sync for my added users seems to work.

From Azure:

ACTIVITY TYPE
Export

CATEGORY
ApplicationManagement

STATUS
Failure

STATUS REASON
Failed to create User '[email protected]' in customappsso; Error: The SCIM endpoint is not fully compatible with the Azure Active Directory SCIM client. Please refer to the Azure Active Directory SCIM provisioning documentation and adapt the SCIM endpoint to be able to process provisioning requests from Azure Active Directory.
StatusCode: BadRequest
Message: Processing of the HTTP request resulted in an exception. Please see the HTTP response returned by the 'Response' property of this exception for details.
Web Response: Bad Request . This operation was retried 0 times. It will be retried again after this date: 2019-10-30T15:13:17.6715300Z UTC

ErrorCode
SystemForCrossDomainIdentityManagementBadRequest

EventName
EntryExportAdd

Type:
ServicePrincipal

DISPLAY NAME
1Password SCIM

TARGET

TYPE
User

Let me know if there are any other details you need.

(Feedback) Google Workspace with Kubernetes deployment

Hey team,

Thanks for the tool and docs. I've got a bit of feedback, maybe an unexplored use case or order-of-operations issue. I'm on an Ops team, and our IT came to me looking to deploy the SCIM bridge for use with Google Workspace. I came here, looked through your Kubernetes examples, and deployed pretty much the same as what's in your README here: https://github.com/1Password/scim-examples/blob/master/kubernetes/README.md

I knew our IT uses Google Workspace but skipped the optional section, since I wasn't sure what they were planning. I gave IT the go-ahead, and they got stuck at the step in the user documentation where they are directed to upload the service account key.

From the UI, they get a read-only error when trying to upload the file.

Makes sense to me: they are trying to write to a Secret that doesn't exist. Easy fix: I worked with them to get the details and populated the necessary Secrets using kubectl. Obviously this is a bit clunky for the user and doesn't line up with the user docs, so I thought I'd pass along the experience.

Fix redis config map name and amend README

We recently introduced a config map for Redis (op-redis-config.yaml), but we neglected to:

  • follow the naming convention for the rest of the redis templates, and also
  • update the README to mention the new template.

Consider supporting redis sentinel

We run this SCIM bridge in Kubernetes and would like to use a highly available Redis instance. The k8s redis-operator makes this trivial, but to use it properly the application needs to be Sentinel-aware. This allows having a master Redis and 2 read-only replicas, ensuring uninterrupted availability when a node gets evicted due to hardware problems or whatnot.

Error: no matching Route53Zone found

Hi.

I'm trying to use the AWS Fargate Terraform script. So far I have had no luck running it. There seem to be setup prerequisites that are in neither the PREPARATION.md nor the README.md, because it keeps giving me various errors about things not being available. I have not set anything up in AWS in advance, as I assumed the Terraform script would be self-contained and do all of the setup for me.

Whenever I run the script, I get the following error

│ Error: no matching Route53Zone found
│ 
│   with data.aws_route53_zone.zone[0],
│   on main.tf line 101, in data "aws_route53_zone" "zone":
│  101: data "aws_route53_zone" "zone" {

My terraform.tfvars file, redacted

# Required: Set a domain name for your SCIM bridge
domain_name = "scim.<my-domain>.com"

# Optional: Specify a different region
aws_region = "eu-west-2"

# Optional: Specify an existing VPC to use, add a common name prefix to all resources, specify the CloudWatch Logs retention period, and add tags for all supported resources.
vpc_name           = ""
name_prefix        = "scim-1password"
log_retention_days = 3

tags = {
  service = "1password"
}

# Uncomment the below line to use an existing wildcard certificate in AWS Certificate Manager.
#wildcard_cert = true

# Uncomment the below line if you are *not* using Route 53
#using_route53 = false

# Uncomment the below line to enable Google Workspace configuration for 1Password SCIM bridge
#using_google_workspace = true

What I have tried

I have tried to create a Route 53 zone manually; however, the error still occurs.

If I uncomment the using_route53 = false then I get this error instead:

╷
│ Error: error creating ELBv2 Listener (arn:aws:elasticloadbalancing:eu-west-2:592415678207:loadbalancer/app/scim-1password-alb/f1040eee7987c466): UnsupportedCertificate: The certificate 'arn:aws:acm:eu-west-2:592415678207:certificate/46643f87-39a2-4331-bc34-c7b566b48ebd' must have a fully-qualified domain name, a supported signature, and a supported key size.
│       status code: 400, request id: af4d0b61-a773-48fc-8272-9d025e6ccdd5
│ 
│   with aws_lb_listener.https,
│   on main.tf line 258, in resource "aws_lb_listener" "https":
│  258: resource "aws_lb_listener" "https" {
│ 

And, If I look in the console, I do not have a certificate with my configured domain name.

So, my question is: what am I supposed to do here? Am I supposed to have set up Route 53 or a certificate before running this script? If so, why?

Thanks in advance

SCIM bridge starts in setup mode when not using Let's Encrypt

Overview

The SCIM bridge shows setup mode when not using Let's Encrypt and accessing the bridge via HTTP on port 3002.

Steps to produce

  1. Deploy 1Password SCIM bridge using our Kubernetes manifest deployment example in the default configuration using Let's Encrypt and a public load balancer.
  2. After the SCIM bridge is deployed (kubectl get pods, kubectl logs <pod-name> for status), visit the status page in a browser (e.g. https://scim.example.com) and confirm that it is functioning as expected with a sign-in screen accepting the bearer token.
  3. Edit the ConfigMap manifest to configure the SCIM bridge to not use Let's Encrypt (set the value for the OP_LETSENCRYPT_DOMAIN environment variable key to "").
  4. Edit the SCIM bridge Service manifest to forward traffic on port 3002 to the SCIM bridge service, e.g.:
…
  ports:
  - protocol: TCP
    name: http
    port: 3002
…
  5. Re-apply the manifests to update the configuration:
    kubectl apply -f .
  6. Visit the IP address or URL of the SCIM bridge via HTTP (e.g.: http://scim.example.com:3002)

Disable logging of "method=GET path=/ping"

If you enable health checks in Kubernetes (which you should) or in an AWS ALB (which you must), your log gets flooded with the same line over and over:

6:14PM INF HTTP request application=op-scim component=SCIMServer duration=0.03671 method=GET path=/ping remote_addr=10.4.3.114 request_id=c1q8s2ej546h3p76mrr0 size=5 status=200 version=2.0.0
6:14PM INF HTTP request application=op-scim component=SCIMServer duration=0.058281 method=GET path=/ping remote_addr=10.4.0.31 request_id=c1q8s2ej546h3p76mrrg size=5 status=200 version=2.0.0
6:14PM INF HTTP request application=op-scim component=SCIMServer duration=0.02752 method=GET path=/ping remote_addr=10.4.1.88 request_id=c1q8s2ej546h3p76mrs0 size=5 status=200 version=2.0.0
6:14PM INF HTTP request application=op-scim component=SCIMServer duration=0.037801 method=GET path=/ping remote_addr=10.4.2.35 request_id=c1q8s2uj546h3p76mrt0 size=5 status=200 version=2.0.0
6:14PM INF HTTP request application=op-scim component=SCIMServer duration=0.032611 method=GET path=/ping remote_addr=10.4.1.88 request_id=c1q8s2uj546h3p76mrsg size=5 status=200 version=2.0.0
6:14PM INF HTTP request application=op-scim component=SCIMServer duration=0.02445 method=GET path=/ping remote_addr=10.4.0.31 request_id=c1q8s2uj546h3p76mrtg size=5 status=200 version=2.0.0
6:14PM INF HTTP request application=op-scim component=SCIMServer duration=0.032401 method=GET path=/ping remote_addr=10.4.3.114 request_id=c1q8s2uj546h3p76mru0 size=5 status=200 version=2.0.0

Azure Kubernetes Service deployment issue "http: TLS handshake error from <IP>:<PORT>: EOF"

Hello! Could you please help us find out the problem with the AKS deployment of the SCIM bridge? We are using these configuration files to deploy it: https://github.com/1Password/scim-examples/tree/master/kubernetes (the only change we made is the domain name: https://github.com/1Password/scim-examples/blob/master/kubernetes/op-scim-config.yaml#L7). The deployment is successful and everything is running; however, we have a bunch of error messages from the op-scim-bridge pod:

2020/12/31 11:00:52 http: TLS handshake error from 10.244.2.1:9467: EOF
2020/12/31 11:00:55 http: TLS handshake error from 10.240.0.5:31516: EOF
2020/12/31 11:00:58 http: TLS handshake error from 10.244.2.1:5250: EOF
2020/12/31 11:01:01 http: TLS handshake error from 10.240.0.5:28929: EOF
2020/12/31 11:01:05 http: TLS handshake error from 10.244.2.1:17797: EOF
2020/12/31 11:01:07 http: TLS handshake error from 10.240.0.5:57041: EOF
2020/12/31 11:01:11 http: TLS handshake error from 10.244.2.1:25111: EOF
2020/12/31 11:01:13 http: TLS handshake error from 10.240.0.5:45293: EOF
2020/12/31 11:01:17 http: TLS handshake error from 10.244.2.1:1966: EOF
2020/12/31 11:01:19 http: TLS handshake error from 10.240.0.5:52336: EOF
2020/12/31 11:01:23 http: TLS handshake error from 10.244.2.1:6739: EOF
2020/12/31 11:01:25 http: TLS handshake error from 10.240.0.5:42490: EOF

We suspect that it might be related to the readiness/liveness probes from the LoadBalancer. What do you think?

Different resource block labels "scimsession" and "scimsession_1" necessary?

Currently main.tf in /aws-fargate-terraform contains the following resource block definitions.

resource "aws_secretsmanager_secret" "scimsession" {
  name_prefix             = local.name_prefix
  # Allow `terraform destroy` to delete secret (hint: save your scimsession file in 1Password)
  recovery_window_in_days = 0

  tags                    = local.tags
}

resource "aws_secretsmanager_secret_version" "scimsession_1" {
  secret_id     = aws_secretsmanager_secret.scimsession.id
  secret_string = base64encode(file("${path.module}/scimsession"))
}

Since there is no chance of a namespace collision (they would be aws_secretsmanager_secret.scimsession and aws_secretsmanager_secret_version.scimsession_1, respectively), is it necessary for the secret_version to be labelled scimsession_1? If we can revert to scimsession for both, it would clean things up and avoid confusion.
Thanks to @ag-adampike for the assist.
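The cleaned-up version suggested above would look like the following (a sketch; for an existing deployment the rename would also require moving the state address so the secret version isn't destroyed and recreated):

```hcl
# Both blocks can share the label "scimsession"; the full addresses
# (aws_secretsmanager_secret.scimsession vs.
# aws_secretsmanager_secret_version.scimsession) remain distinct.
resource "aws_secretsmanager_secret_version" "scimsession" {
  secret_id     = aws_secretsmanager_secret.scimsession.id
  secret_string = base64encode(file("${path.module}/scimsession"))
}
```

For existing state: terraform state mv aws_secretsmanager_secret_version.scimsession_1 aws_secretsmanager_secret_version.scimsession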

Add pod anti-affinity config for Kubernetes

We should add pod anti-affinity for the op-scim and redis pods to instruct the Kubernetes scheduler not to place these pods on the same nodes. This helps ensure that resource utilization by one does not starve or affect the other.
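A sketch of what this could look like in op-scim-deployment.yaml; the label key/value and preferred (rather than required) scheduling are assumptions to adjust:

```yaml
# op-scim-deployment.yaml (excerpt; label selector values are assumptions)
spec:
  affinity:
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 100
          podAffinityTerm:
            labelSelector:
              matchLabels:
                app: redis
            topologyKey: kubernetes.io/hostname
```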

Google Workspace configuration handling for Docker deployment examples

Handling for Google Workspace in Docker Deployment examples

Update Google Workspace references in the readme.md and the deploy.sh script to remove wording around beta participants.

Information for Google Workspace beta participants

Update the Swarm manual secret creation in readme.md to reference the workspace-settings.json and workspace-credentials.json files and the secret naming (workspace-settings and workspace-credentials):

# this is the path of the JSON file you edited in the paragraph above
cat /path/to/workspace-settings.json | docker secret create workspace-settings -
# replace <google keyfile> with the name of the file Google generated for your Google Service Account
cat /path/to/<google keyfile>.json | docker secret create workspace-credentials -

The Swarm deployment does not reference or use the Docker secrets created from workspace-credentials.json and workspace-settings.json after the secrets are created.

  • Update readme.md
  • Update deploy.sh
  • Create secret references in /swarm/docker-compose.yml for optional workspace-settings and workspace-credentials secrets
  • Test disconnecting Workspace from SCIM bridge GUI after Docker compose and Docker Swarm deployments

Update Terraform README for AWS CLI version 2 functionality

When using the AWS CLI version 2, the following parameter is needed:

--cli-binary-format raw-in-base64-out

scim-examples/aws-terraform/README.md Example:

aws secretsmanager create-secret --name op-scim/scimsession --secret-binary file:///path/to/scimsession --region <aws_region>

Updated example:

aws secretsmanager create-secret --name op-scim/scimsession --secret-binary file:///path/to/scimsession --region <aws_region> --cli-binary-format raw-in-base64-out

Otherwise, the following error is presented:

Invalid base64: "{...

Consider One-click Azure Marketplace App

Please consider providing a one-click Azure Marketplace app.

Adding this should result in:

  • improved customer satisfaction
  • lower number of support tickets

keeping my fingers crossed :)

Docker Swarm self-managed TLS reverts to LetsEncrypt

Within the Docker Swarm deployment, the current method of performing self-managed TLS by providing key and cert files reverts to using Let's Encrypt for certificate management instead of the provided self-managed certificate, because the swarm cannot access the files on the host system.

OP_TLS_KEY_FILE=/path/to/key.pem

OP_TLS_CERT_FILE=/path/to/cert.pem

Logs show: certificate manager obtained certificate application instead of certificate files specified; disabling Let's Encrypt functionality

Fix startup order when deploying with Docker Compose

Redis can take some time to start up. The problem is that the SCIM bridge startup fails if it cannot establish a connection to Redis. We should ensure that we give Redis enough time to start up before we attempt to launch the SCIM bridge.

On Kubernetes, the SCIM bridge will automatically be restarted on a failure, but we don't have that luxury with our Docker-based deployments.

Also, the depends_on option we currently rely on is not sufficient. Luckily the documentation describes some potential approaches to ensure the containers are started in the correct order, and with sufficient time for the applications to start up.

See Control startup and shutdown order in Compose for more details.
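One approach from that documentation is a healthcheck on the Redis service combined with a depends_on condition; a sketch, with service names assumed to match our Compose file:

```yaml
# docker-compose.yml (excerpt; service names are assumptions)
services:
  op-scim-bridge:
    depends_on:
      redis:
        condition: service_healthy   # wait until Redis answers PING
  redis:
    image: redis
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 5s
      timeout: 3s
      retries: 10
```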

Kubernetes deployment fails to load if certificate and key files are not found

When OP_TLS_DOMAIN (or OP_LETSENCRYPT_DOMAIN), OP_TLS_CERT_FILE, and OP_TLS_KEY_FILE are set for the op-scim-bridge Deployment, the SCIM bridge container fails to load:

FTL failed to generate TLS config, retrying after backoff delay error="could not find cert file with given path secrets/tls-certificate: stat /secrets/tls-certificate: no such file or directory" application=op-scim build=207041 version=2.7.4

Unsetting these environment variables in the op-scim-configmap ConfigMap and redeploying is a workaround:

OP_TLS_KEY_FILE: "/secrets/tls-key"
OP_TLS_CERT_FILE: "/secrets/tls-certificate"

SCIM Bridge returning a 500 when I attempt to connect from Okta

After resolving a slightly different issue, the bridge is returning a 500 error when I try to connect to it to test my API credentials. Additionally, after I deployed the bridge, 4 health checks show up green when I first log in with my Bearer Token and view the SCIM dashboard at /app/status.

But after a few minutes, a couple more health checks appear, including the "ConfirmationWatcher" which shows as "Disconnected". I'm not sure how to resolve that problem.

404 on application using AWS TF

Hi team, I'm getting a 404 Not Found. Everything seems to have deployed and installed correctly: the scimsession secret is accessible, the LBs work, things look good, there are no other errors, and the redis and op-scim services started successfully.

AWS Terraform deployment example egress requirements can be too restrictive

Changes introduced in #220 seem to cause issues in some customers' AWS environments. Egress is currently restricted to TCP traffic on port 443 to allow SCIM bridge to reach 1Password.com servers.

Example error:

ResourceInitializationError: unable to pull secrets or registry auth: execution resource retrieval failed: unable to retrieve secret from asm: service call has been retried 5 time(s): failed to fetch secret arn:aws:secretsmanager:us-east-1:…:secret:op-scim-bridge-sercretARN from secrets manager: RequestCanceled: request context canceled caused by: context deadline exceeded. Please check your task network configuration.

A known workaround is to allow egress to anywhere for the aws_security_group.service resource, but further investigation is needed to reproduce and determine appropriate relaxations.
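Expressed as Terraform, the workaround amounts to widening the egress block on the service security group (resource name taken from the issue; this trades the port-443 restriction for reliability):

```hcl
resource "aws_security_group" "service" {
  # ... existing configuration ...

  # Workaround: allow all egress so the task can reach Secrets Manager,
  # ECR, and other AWS endpoints during provisioning.
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
```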

Instructions for testing SCIM bridge using SCIM API endpoints do not work when integrating with Google Workspace

Under the self-managed-tls section it says:

Traffic from your TLS endpoint should be directed to this port (80, by default)

From my understanding, that's wrong. The deployment is configured to work with port 3002:

# HTTP port (forward unencrypted traffic to this port if not using Let's Encrypt)
- name: http
  containerPort: 3002
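If the documentation means the container's plain-HTTP port, a Service that forwards traffic from a TLS-terminating endpoint to port 3002 would look roughly like this (a sketch; the Service name and selector are assumptions):

```yaml
# Hypothetical sketch: forward traffic from a TLS-terminating load
# balancer to the bridge's plain-HTTP containerPort 3002, not port 80.
apiVersion: v1
kind: Service
metadata:
  name: op-scim-bridge
spec:
  type: LoadBalancer
  selector:
    app: op-scim-bridge
  ports:
    - name: http
      port: 80          # port exposed by the Service
      targetPort: 3002  # the container's http port
```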


Under the test-the-scim-bridge section we can read:

curl --header "Authorization: Bearer TOKEN_GOES_HERE" https://scim.example.com/Users

Thing is, after substituting the proper domain name, none of the documented endpoints work.

I'm getting a 404, which is VERY confusing since this is the "verification" step.

The bridge is working though and everything is syncing properly.
https://scim.example.com/ redirects to https://scim.example.com/app/login and works (as expected).

issue with transition to kubernetes/op-scim-config.yaml

A deployment to Azure Kubernetes Service worked fine last week, before the latest commit. I tried the same deployment to AKS today with the updated code and nothing seems to work: the bridge isn't listening on 443/80, the certificate didn't get created, etc.

I think there's an issue with the vars in the new config.yaml that's being used. I'm looking into it today and will update with what I find.

Docker Compose file versioning is deprecated

The Docker Compose file format is now defined by the Compose Specification, and versioning the Compose file is no longer recommended.

Formal investigation and testing are needed to ensure compatibility, but simply removing the version line from each respective file in our Docker deployment examples should suffice:
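As a sketch of the change, a Compose file under the Compose Specification simply starts at the top-level keys (the service definition below is illustrative, not taken from our examples):

```yaml
# Hypothetical sketch: drop the obsolete top-level `version:` line;
# everything else in the file stays the same.
# version: "3.8"   <- remove this line
services:
  op-scim-bridge:
    image: 1password/scim:v2.7.4
    ports:
      - "3002:3002"
```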

See also:

Deactivating an unconfirmed user: "http: panic serving 172.20.xxx.yyy:13190: runtime error: invalid memory address or nil pointer dereference"

This should obviously be addressed (op-scim Version 1.3.0-2):

[LOG] [1.3.0] 2020/03/03 08:31:49 (INFO) Handling PATCH: /Users/H2LKSSXXXXXXXXXXXXXXXXY754
[LOG] [1.3.0] 2020/03/03 08:31:49 (INFO) found singleOperation with type Replace, path active
[LOG] [1.3.0] 2020/03/03 08:31:49 (INFO) found singleOperation with type Replace, path emails[type eq "work"].value
[LOG] [1.3.0] 2020/03/03 08:31:49 (INFO) applying 2 field operations to user H2LKSSXXXXXXXXXXXXXXXXY754
[LOG] [1.3.0] 2020/03/03 08:31:49 (INFO) ApplyUserFieldOperation applying op.Type "Replace" for op.Path "active" to user [email protected]
[LOG] [1.3.0] 2020/03/03 08:31:49 (WARN) deactivateUser cannot suspended user from state (3), user will be deleted
[LOG] [1.3.0] 2020/03/03 08:31:49 (INFO) deactivateUser deleted user [email protected]
2020/03/03 08:31:49 http: panic serving 172.20.xxx.yyy:13190: runtime error: invalid memory address or nil pointer dereference
goroutine 610562 [running]:
net/http.(*conn).serve.func1(0xc000204820)
        /usr/local/go/src/net/http/server.go:1767 +0x139
panic(0x8fd6a0, 0xdc1910)
        /usr/local/go/src/runtime/panic.go:679 +0x1b2
go.1password.io/op-scim/action.ApplyUserFieldOperation(0xc0000a2740, 0xc0000fe280, 0x0, 0xc0000ac410, 0x7, 0xc00035b6e0, 0x1c, 0x8cee80, 0xc000376430, 0x0, ...)
        /go/src/go.1password.io/op-scim/action/users.go:257 +0x87
go.1password.io/op-scim/action.ApplyUserFieldOperations(0xc0000a2740, 0xc0000fe280, 0x0, 0xc000134090, 0x2, 0x2, 0xc0000a2da0, 0x0, 0x7ffac33876d0)
        /go/src/go.1password.io/op-scim/action/users.go:233 +0x170
go.1password.io/op-scim/handler.PatchUserHandler.func1(0xa5f120, 0xc0003fc000, 0xc0001f0600)
        /go/src/go.1password.io/op-scim/handler/users.go:151 +0x276
go.1password.io/op-scim/handler.AuthWrap.func1(0xa5f120, 0xc0003fc000, 0xc0001f0600)
        /go/src/go.1password.io/op-scim/handler/root.go:87 +0x23b
go.1password.io/op-scim/handler.HealthWrap.func1(0xa5f120, 0xc0003fc000, 0xc0001f0600)
        /go/src/go.1password.io/op-scim/handler/root.go:133 +0x6f
go.1password.io/op-scim/handler.LogWrap.func1(0xa5f120, 0xc0003fc000, 0xc0001f0600)
        /go/src/go.1password.io/op-scim/handler/root.go:125 +0x151
net/http.HandlerFunc.ServeHTTP(0xc0001966e0, 0xa5f120, 0xc0003fc000, 0xc0001f0600)
        /usr/local/go/src/net/http/server.go:2007 +0x44
go.1password.io/op-scim/vendor/github.com/gorilla/mux.(*Router).ServeHTTP(0xc00017a000, 0xa5f120, 0xc0003fc000, 0xc0001f0300)
        /go/src/go.1password.io/op-scim/vendor/github.com/gorilla/mux/mux.go:212 +0xe2
net/http.serverHandler.ServeHTTP(0xc0001a60e0, 0xa5f120, 0xc0003fc000, 0xc0001f0300)
        /usr/local/go/src/net/http/server.go:2802 +0xa4
net/http.(*conn).serve(0xc000204820, 0xa601e0, 0xc00016a280)
        /usr/local/go/src/net/http/server.go:1890 +0x875
created by net/http.(*Server).Serve
        /usr/local/go/src/net/http/server.go:2928 +0x384

assume_role_policy needs to point to a data object with an sts:AssumeRole policy

@ag-adampike:
The assume_role_policy attribute of a resource, e.g.:

assume_role_policy = data.aws_iam_policy_document.workspace_config.json

needs to point to a data object with an assume role policy, e.g.:

data "aws_iam_policy_document" "assume_role_policy" {

And as per the docs, we also need to include an iam_policy resource in the child GW main.tf that points to the policy document at line 1:

data "aws_iam_policy_document" "workspace_config" {
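Putting the two pieces together, a sketch of the intended pattern (the role name and service principal here are assumptions, not taken from the example):

```hcl
# Hypothetical sketch: a trust policy document with sts:AssumeRole,
# referenced by the role's assume_role_policy attribute.
data "aws_iam_policy_document" "assume_role_policy" {
  statement {
    actions = ["sts:AssumeRole"]
    principals {
      type        = "Service"
      identifiers = ["ecs-tasks.amazonaws.com"]
    }
  }
}

resource "aws_iam_role" "task" {
  name               = "op-scim-bridge"
  assume_role_policy = data.aws_iam_policy_document.assume_role_policy.json
}
```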

Terraform example improvements

This issue is to track a few fairly minor issues with our AWS Terraform deployment example:

Some nice-to-haves, possibly for future work:

  • Modularize the deployment to enable different sets of AWS credentials for certain resources (for example, if a separate account is required to manage Route53).
  • Gracefully handle TLS cert management for customers using something other than Route53. Currently the plan fails to apply until ACM validates the external domain.
  • We may be able to optionally create the necessary VPCs and subnets instead of choosing between using the default VPC or specifying an existing VPC. In my experience working directly with customers, subnets are often created specifically for the SCIM bridge anyway. If the script can automate that work as well, all the better.
