
aks-store-demo's Introduction

page_type: sample
languages:
  - azdeveloper
  - go
  - javascript
  - rust
  - nodejs
  - python
  - bicep
  - terraform
  - dockerfile
products:
  - azure
  - azure-kubernetes-service
  - azure-openai
  - azure-cosmos-db
  - azure-container-registry
  - azure-service-bus
  - azure-monitor
  - azure-log-analytics
  - azure-managed-grafana
  - azure-key-vault
urlFragment: aks-store-demo
name: AKS Store Demo
description: This sample demo app consists of a group of containerized microservices that can be easily deployed into an Azure Kubernetes Service (AKS) cluster.

AKS Store Demo

This sample demo app consists of a group of containerized microservices that can be easily deployed into an Azure Kubernetes Service (AKS) cluster. It is meant to show a realistic scenario using a polyglot architecture, event-driven design, and common open-source back-end services (e.g., RabbitMQ and MongoDB). The application also leverages OpenAI's GPT-3 models to generate product descriptions. This can be done using either Azure OpenAI or OpenAI.

This application is inspired by another demo app called Red Dog.

Note

This is not meant to be an example of perfect, production-ready code; rather, it shows a realistic application running in AKS.

Architecture

The application has the following services:

Service           Description
makeline-service  Processes orders from the queue and completes them (Golang)
order-service     Handles order placement (JavaScript)
product-service   Performs CRUD operations on products (Rust)
store-front       Web app for customers to place orders (Vue.js)
store-admin       Web app used by store employees to view orders in the queue and manage products (Vue.js)
virtual-customer  Simulates order creation on a scheduled basis (Rust)
virtual-worker    Simulates order completion on a scheduled basis (Rust)
ai-service        Optional service for adding generative text and graphics creation (Python)
mongodb           MongoDB instance for persisted data
rabbitmq          RabbitMQ for the order queue

Logical Application Architecture Diagram

Run the app on Azure Kubernetes Service (AKS)

To learn how to deploy this app on AKS, see Quickstart: Deploy an Azure Kubernetes Service (AKS) cluster using Azure CLI.

Note

The above article shows a simplified version of the store app with some services removed. For the full application, you can use the aks-store-all-in-one.yaml file in this repo.

Run on any Kubernetes

This application uses public images stored in GitHub Container Registry and Microsoft Container Registry (MCR). Once your Kubernetes cluster of choice is set up, you can deploy the full app with the commands below.

This deployment includes everything except the ai-service, which integrates with OpenAI. If you want to try the OpenAI integration, take a look at this article: Deploy an application that uses OpenAI on Azure Kubernetes Service (AKS).

kubectl create ns pets

kubectl apply -f https://raw.githubusercontent.com/Azure-Samples/aks-store-demo/main/aks-store-all-in-one.yaml -n pets
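
Once the manifest is applied, you can verify the deployment by checking that all pods reach the Running state and then looking up the external IP of the store-front service (assuming the all-in-one manifest exposes store-front through a LoadBalancer service, as described in the AKS quickstart):

kubectl get pods -n pets

kubectl get service store-front -n pets --output jsonpath='{.status.loadBalancer.ingress[0].ip}'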

Run the app locally

The application is designed to be run in an AKS cluster, but can also be run locally using Docker Compose.

Tip

You must have Docker Desktop installed to run this app locally. If you do not have it installed, you can try opening this repo in a GitHub Codespace instead.

To run this app locally:

Clone the repo to your development computer and navigate to the directory:

git clone https://github.com/Azure-Samples/aks-store-demo.git
cd aks-store-demo

Configure your Azure OpenAI or OpenAI API keys in docker-compose.yml using the environment variables in the ai-service section:

  ai-service:
    build: src/ai-service
    container_name: 'ai-service'
    ...
    environment:
      - USE_AZURE_OPENAI=True # set to False if you are not using Azure OpenAI
      - AZURE_OPENAI_DEPLOYMENT_NAME= # required if using Azure OpenAI
      - AZURE_OPENAI_ENDPOINT= # required if using Azure OpenAI
      - OPENAI_API_KEY= # always required
      - OPENAI_ORG_ID= # required if using OpenAI
    ...

Alternatively, if you do not have access to Azure OpenAI or OpenAI API keys, you can run the app without the ai-service by commenting out the ai-service section in docker-compose.yml. For example:

#  ai-service:
#    build: src/ai-service
#    container_name: 'ai-service'
...
#    networks:
#      - backend_services

Start the app using docker compose. For example:

docker compose up

To stop the app, press CTRL+C in the terminal window where the app is running.
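
If you prefer to keep the terminal free, you can run the stack in the background and tear it down when you are done using standard Docker Compose commands:

docker compose up -d

docker compose down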

Run the app with GitHub Codespaces

This repo also includes DevContainer configuration, so you can open the repo using GitHub Codespaces. This will allow you to run the app in a container in the cloud, without having to install Docker on your local machine. When the Codespace is created, you can run the app using the same instructions as above.

Open in GitHub Codespaces

Deploy the app to Azure using Azure Developer CLI

See the Azure Developer CLI documentation for instructions on how to quickly deploy the app to Azure.
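
The basic flow, also reflected in the issues below, is to authenticate and then provision and deploy in a single step:

azd auth login

azd up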


aks-store-demo's People

Contributors

chzbrgr71, cvgore, dependabot[bot], huangyingting, jongio, jschluchter, ksubrmnn, microsoft-github-operations[bot], microsoftopensource, mosabami, pauldotyu, sabbour, senyangcai, smurawski, tonybaloney, v-xuto, zr-msft


aks-store-demo's Issues

[BUG] Inaccessible Open AI Model

Describe the bug

Not all models on OpenAI are available in every subscription. As a result, some subscriptions aren't able to deploy the gpt-35-turbo model, which leaves the cluster in a hung state. Instead of a cluster stuck in a hung status, the deployment should still run, just without the OpenAI service, or try a different model.

To Reproduce
Steps to reproduce the behavior:

  1. Run azd up
  2. Select a subscription without access to gpt-35-turbo
  3. Check Azure Portal for AKS Cluster in Hung/Failed state.
  4. See error message

This operation requires 30 new capacity in quota Tokens Per Minute (thousands) - GPT-35-Turbo, which is bigger than the current available capacity 0. The current quota usage is 300 and the quota limit is 300 for quota Tokens Per Minute (thousands) - GPT-35-Turbo. (Code: InsufficientQuota)

Expected behavior

  1. The deployment runs and doesn't hang.

Desktop (please complete the following information):

  • OS: Ubuntu/Windows
  • Version: See Dev container/11


Add the ability to authenticate to Azure OpenAI using either OpenAI API key or Azure AD Workload Identity

Is your feature request related to a problem? Please describe.
When deploying this demo using the Azure OpenAI service, putting the OpenAI API key in a ConfigMap, a Secret, or, even worse, an environment variable is not ideal.

Describe the solution you'd like
Being able to authenticate to Azure OpenAI using a managed identity and Azure AD Workload Identity would be a better approach; there would be no need to store any credentials in the Kubernetes cluster.
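
A rough sketch of the Kubernetes side of such a setup, assuming workload identity and the OIDC issuer are enabled on the AKS cluster; the service account name and client ID placeholder below are illustrative, not values from this repo:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: ai-service-sa
  namespace: pets
  annotations:
    azure.workload.identity/client-id: "<managed-identity-client-id>"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ai-service
  namespace: pets
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ai-service
  template:
    metadata:
      labels:
        app: ai-service
        azure.workload.identity/use: "true"
    spec:
      serviceAccountName: ai-service-sa
      containers:
        - name: ai-service
          image: ghcr.io/azure-samples/aks-store-demo/ai-service:latest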

Describe alternatives you've considered
N/A

Additional context
N/A

Review and update feature registrations in azd-hooks/preprovision.{ps1,sh}

Not all feature flags are required, as some of the previously preview features are now GA.

See files:

GAs:

  • KEDA (no longer requires the AKS-KedaPreview feature flag)
  • Prometheus (no longer requires the AKS-PrometheusAddonPreview feature flag)

The other features still appear to be in public preview.

Ensure ai-service works with gpt-35-turbo model deployments from Azure OpenAI

Is your feature request related to a problem? Please describe.
As of July 5, 2024, the Azure OpenAI service will no longer be offering text-davinci-003 models as new deployments.

Describe the solution you'd like
The ai-service project was originally written and tested against this model and will need to be updated to ensure it works with the next best option, gpt-35-turbo.

Additional context
https://techcommunity.microsoft.com/t5/ai-cognitive-services-blog/announcing-updates-to-azure-openai-service-models/ba-p/3866757
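
A minimal sketch of what the migration could look like, assuming the ai-service uses the openai Python SDK (v1.x); the environment variable names mirror those in docker-compose.yml, while the api_version, prompt, and message structure here are illustrative:

import os

from openai import AzureOpenAI

# Chat-based replacement for the retired text-davinci-003 completion call.
client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["OPENAI_API_KEY"],
    api_version="2024-02-01",
)

response = client.chat.completions.create(
    model=os.environ["AZURE_OPENAI_DEPLOYMENT_NAME"],  # e.g. a gpt-35-turbo deployment
    messages=[
        {"role": "system", "content": "You write short, catchy product descriptions."},
        {"role": "user", "content": "Describe a squeaky dog toy shaped like a taco."},
    ],
)

print(response.choices[0].message.content)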

Add Bicep support

Is your feature request related to a problem? Please describe.

Add Bicep support to the repo.

Describe the solution you'd like

Create a manual setup to switch between Terraform and Bicep.

Describe alternatives you've considered

We thought of making a new repo, but that is difficult to maintain and the content drifts out of sync.

Additional context

This will be used in AKS Docs on Learn.

AZD Template does not support Windows OS/PowerShell commands.

Describe the bug

The AZD Template does not run on Windows OS as it cannot find the hooks file.

To Reproduce

Steps to reproduce the behavior:

  1. Download the AZD CLI for Windows
  2. Clone and Init the project with azd init
  3. Download any missing dependencies on your local machine.
  4. Run azd auth login to sign into Azure
  5. Run azd up

Expected behavior

Should find the hooks script and run it.


Desktop (please complete the following information):

  • OS: Windows
  • Version 11

Additional context

This project will be part of an AKS Quickstart article that supports all operating systems. To ensure a smooth experience with AZD, I'd like to keep the code/steps system agnostic. Is there a way to update the script to detect the user's OS and then run the respective command(s)?
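
One way azd can accommodate this is with platform-specific hook definitions in azure.yaml, which let each OS run its own script; a rough sketch, assuming the hook scripts stay in azd-hooks/ as referenced elsewhere in this repo (check the azd hooks documentation for the exact schema):

hooks:
  preprovision:
    posix:
      shell: sh
      run: ./azd-hooks/preprovision.sh
    windows:
      shell: pwsh
      run: ./azd-hooks/preprovision.ps1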

Failed to run "azd up" command in Dev Container and Codespace

Describe the issue:
When running azd up in a Dev Container or Codespace, the command fails with the error shown in the attached screenshot.

Environment:

  • OS: Dev Container , Codespace
  • azd version: 1.5.1 (commit 3856d1e98281683b8d112e222c0a7c7b3e148e96)

Repro Steps:

  1. Clone the repo locally and reopen it in a Dev Container or Codespace.
  2. Run azd auth login.
  3. Run azd up.

Expected behavior:
azd up completes successfully in the Dev Container and Codespace.

@hemarina and @pauldotyu for notification.

BUG: order-service stuck in CrashLoopBackOff state

Describe the bug
The order-service deployment fails in the AKS cluster.

To Reproduce
Steps to reproduce the behavior:

kubectl get pods -n pets | grep order-service

Observed output
order-service-78d6dc9fbb-7kvnz 0/1 CrashLoopBackOff 17 (4m13s ago) 43m
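
A couple of generic commands that usually help narrow down a CrashLoopBackOff like this one; they assume the app was deployed to the pets namespace and use the deployment name from the all-in-one manifest:

kubectl describe pod -l app=order-service -n pets

kubectl logs deploy/order-service -n pets --previous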


Desktop (please complete the following information):

  • OS: macOS
  • Browser: Chrome
  • Version: 120.0.6099.234

Modularize Terraform deployment

Is your feature request related to a problem? Please describe.
Currently, the infrastructure as code deploys all Azure resources by default. However, there may be scenarios where some of the Azure services are not needed for various demonstrations.

Describe the solution you'd like
In order to better support various demo scenarios, it would be best to modularize the deployment and deploy ancillary Azure services as needed based on Terraform input parameters. This is already being done with Azure Container Registry, and we should take a similar approach with the other services (see the sketch after the parameter list below).

We can start with the following boolean parameters:

  • DEPLOY_AZURE_CONTAINER_REGISTRY deploys Azure Container Registry
  • DEPLOY_WORKLOAD_IDENTITY deploys Azure Managed Identities for services that support it and enables workload identity and OIDC Issuer URL on AKS
  • DEPLOY_AZURE_OPENAI deploys Azure OpenAI, the ai-service microservice, and configures workload identity if that option is set to true
  • DEPLOY_AZURE_SERVICE_BUS deploys Azure Service Bus and configures workload identity if that option is set to true
  • DEPLOY_AZURE_COSMOSDB deploys Azure CosmosDB and configures workload identity if that option is set to true. This setting will also take into account the AZURE_COSMOSDB_ACCOUNT_KIND parameter, which is used to determine the database API (either MongoDB or GlobalDocumentDB, with MongoDB being the default)
  • DEPLOY_OBSERVABILITY_TOOLS deploys an Azure Log Analytics workspace, Azure Monitor managed service for Prometheus, and Azure Managed Grafana, and enables monitoring on the AKS cluster with Container Insights

The only resources that will be deployed by default at all times are the AKS cluster and Azure Key Vault (with the Secrets Store CSI driver enabled on AKS).
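
A minimal Terraform sketch of the toggle pattern described above; the variable and resource names are illustrative and not taken from this repo's Terraform code:

variable "deploy_azure_service_bus" {
  description = "Deploy Azure Service Bus for the order queue"
  type        = bool
  default     = false
}

# Assumes an azurerm_resource_group.example is defined elsewhere in the configuration.
resource "azurerm_servicebus_namespace" "example" {
  count               = var.deploy_azure_service_bus ? 1 : 0
  name                = "sb-aks-store-demo"
  location            = azurerm_resource_group.example.location
  resource_group_name = azurerm_resource_group.example.name
  sku                 = "Standard"
}

Downstream references then need to tolerate the zero-count case, for example one(azurerm_servicebus_namespace.example[*].id).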

Adding ability to deploy with azd CLI

Is your feature request related to a problem? Please describe.
It's great when sample apps can be deployed with azd up.

Describe the solution you'd like
Add azd support to this sample, like in https://github.com/Azure-Samples/azure-search-openai-demo

Describe alternatives you've considered
The alternative is to continue copying and pasting CLI commands to deploy the app in 15 steps.

Additional context
A good example is https://github.com/Azure-Samples/azure-search-openai-demo
or https://github.com/Azure-Samples/azure-search-openai-demo-csharp

Warning: Update Azure Developer CLI.

Describe the solution you'd like
Update AZD CLI to prevent future issues by using the most recent image.

Additional context
WARNING: your version of azd is out of date, you have 1.5.0 and the latest version is 1.5.1

docker-compose-quickstart.yml failed on the "cargo build --release" step

Describe the bug
docker-compose-quickstart.yml failed on the "cargo build --release" step

To Reproduce

  1. Follow the steps from the tutorial
  2. In the "Create container images and run application" step, the docker compose command failed

Error

110.4    Compiling time-macros v0.2.17
111.7    Compiling miniz_oxide v0.7.1
124.3 The following warnings were emitted during compilation:
124.3
124.3 warning: Failed to run `rustfmt` on ISLE-generated code: Os { code: 32, kind: BrokenPipe, message: "Broken pipe" }
124.3 warning: Failed to run `rustfmt` on ISLE-generated code: Os { code: 32, kind: BrokenPipe, message: "Broken pipe" }
124.3 warning: Failed to run `rustfmt` on ISLE-generated code: Os { code: 32, kind: BrokenPipe, message: "Broken pipe" }
124.3 warning: Failed to run `rustfmt` on ISLE-generated code: Os { code: 32, kind: BrokenPipe, message: "Broken pipe" }
124.3 warning: Failed to run `rustfmt` on ISLE-generated code: Os { code: 32, kind: BrokenPipe, message: "Broken pipe" }
124.3
124.3 error: could not compile `cranelift-codegen` (lib)
124.3
124.3 Caused by:
124.3   process didn't exit successfully: `/usr/local/rustup/toolchains/1.71.0-x86_64-unknown-linux-gnu/bin/rustc --crate-name cranelift_codegen --edition=2021 /usr/local/cargo/git/checkouts/wasmtime-41807828cb3a7a7e/d2d52de/cranelift/codegen/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts,future-incompat --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C embed-bitcode=no --cfg 'feature="default"' --cfg 'feature="gimli"' --cfg 'feature="host-arch"' --cfg 'feature="std"' --cfg 'feature="unwind"' -C metadata=fd2672b737491666 -C extra-filename=-fd2672b737491666 --out-dir /product-service/target/release/deps -L dependency=/product-service/target/release/deps --extern bumpalo=/product-service/target/release/deps/libbumpalo-51867bc8407bd2c4.rmeta --extern cranelift_bforest=/product-service/target/release/deps/libcranelift_bforest-fcc0a5e10c43642f.rmeta --extern cranelift_codegen_shared=/product-service/target/release/deps/libcranelift_codegen_shared-476bab6346790238.rmeta --extern cranelift_control=/product-service/target/release/deps/libcranelift_control-293578d895c9c474.rmeta --extern cranelift_entity=/product-service/target/release/deps/libcranelift_entity-28b8b98f3278710c.rmeta --extern gimli=/product-service/target/release/deps/libgimli-6b438b41a4ab53a1.rmeta --extern hashbrown=/product-service/target/release/deps/libhashbrown-9c71eec4057772df.rmeta --extern log=/product-service/target/release/deps/liblog-f4212246997fb990.rmeta --extern regalloc2=/product-service/target/release/deps/libregalloc2-1611756928eaac44.rmeta --extern smallvec=/product-service/target/release/deps/libsmallvec-fb10b8d8d9696e28.rmeta --extern target_lexicon=/product-service/target/release/deps/libtarget_lexicon-74367c5154a2f425.rmeta --cap-lints allow --cfg 'feature="x86"'` (signal: 9, SIGKILL: kill)
124.3 warning: build failed, waiting for other jobs to finish...
------
failed to solve: process "/bin/sh -c cargo build --release" did not complete successfully: exit code: 101


Environment

  • OS: Windows 11 Enterprise
  • Docker Desktop 4.28.0
  • Docker compose v2.24.6-desktop.1
  • Docker:
       Client:
        Cloud integration: v1.0.35+desktop.11
        Version:           25.0.3
        API version:       1.44
        Go version:        go1.21.6
        Git commit:        4debf41
        Built:             Tue Feb  6 21:13:02 2024
        OS/Arch:           windows/amd64
        Context:           default
       
       Server: Docker Desktop 4.28.0 (139021)
        Engine:
         Version:          25.0.3
         API version:      1.44 (minimum version 1.24)
         Go version:       go1.21.6
         Git commit:       f417435
         Built:            Tue Feb  6 21:14:25 2024
         OS/Arch:          linux/amd64
         Experimental:     false
        containerd:
         Version:          1.6.28
         GitCommit:        ae07eda36dd25f8a1b98dfbf587313b99c0190bb
        runc:
         Version:          1.1.12
         GitCommit:        v1.1.12-0-g51d5e94
        docker-init:
         Version:          0.19.0
         GitCommit:        de40ad0
    

[BUG] Error: OverconstrainedAllocationRequest in AKS VM configuration

Describe the bug

Deployment fails due to an error with the Azure Kubernetes Cluster configuration.

To Reproduce
Steps to reproduce the behavior:

  1. Run azd up
  2. In the deployment, terraform will fail.
  3. See error code message: OverconstrainedAllocationRequest
Message="Code=\"OverconstrainedAllocationRequest\" Message=\"Allocation failed. 
VM(s) with the following constraints cannot be allocated, because the condition is too restrictive. Please remove some constraints and try again. Constraints applied are:\\n - Networking Constraints (such as Accelerated Networking or IPv6)\\n - VM Size\\n\" Target=\"0\""
Details=[{"code":"OverconstrainedAllocationRequest","message":"Allocation failed. VM(s) with the following constraints cannot be allocated, because the condition is too restrictive. Please remove some constraints and try again. Constraints applied are:\n - Networking Constraints (such as Accelerated Networking or IPv6)\n - VM Size\n","target":"0"}]

Expected behavior

Expected deployment to run and not fail on the terraform deployment step.

Machine Details
Ran and tested on GitHub Codespaces

  • OS: Ubuntu
  • Version: (see Dev Container)

Additional context

Can you also test whether this error occurs with Bicep as well?
Both configurations use the same VM specs: Standard_D4s_v4 with a node_count of 3.

Add support for Azure CosmosDB SQL API

Is your feature request related to a problem? Please describe.
The makeline service currently only supports the MongoDB API. This works both for mongodb as a local container and for Azure CosmosDB using the Mongo API. However, there is no way to test this sample app against Azure CosmosDB using the SQL API.

Describe the solution you'd like
Update the makeline service to support both the MongoDB and SQL APIs when testing against Azure CosmosDB.

Crashes after product-service is restarted

Describe the bug
Making any change to the product-service manifest file and redeploying it causes the application to fail.

To Reproduce
Steps to reproduce the behavior:

  • Add a new product using store-admin
  • Change the product-service spec, e.g., update the CPU limit
  • Deploy the manifest file

Expected behavior
Application should continue to work

Screenshots
Unable to retrieve the products

Additional context
Services are accessible by Private IP but not through Load Balancer IP.

Flaky product-service pod stuck in ContainerCreating status

Describe the bug
Product service pod deployment is flaky. Sometimes it will sit in ContainerCreating status and never get to Running status

Expected behavior
Product service should be running within a minute or two

Workaround
Run kubectl rollout restart deployment/product-service to redeploy the pod

To Reproduce
Steps to reproduce the behavior:

  1. Clone repo
  2. Log into azd and az cli
  3. Run azd up command
  4. Wait until the deployment is complete

Run the following command to see status:

$ k get po | grep product-service
product-service-8498ccfd54-r8w77   0/1     ContainerCreating   0          64m

Also kube events:

$ k events | grep product-service
56m                 Normal    SuccessfulCreate       ReplicaSet/product-service-8498ccfd54   Created pod: product-service-8498ccfd54-r8w77
56m                 Normal    Scheduled              Pod/product-service-8498ccfd54-r8w77    Successfully assigned default/product-service-8498ccfd54-r8w77 to aks-system-13967210-vmss000002
56m                 Normal    ScalingReplicaSet      Deployment/product-service              Scaled up replica set product-service-8498ccfd54 to 1
26m                 Warning   FailedCreatePodSandBox   Pod/product-service-8498ccfd54-r8w77    Failed to create pod sandbox: rpc error: code = DeadlineExceeded desc = context deadline exceeded

Mistral 7b instruct not working in pet store for local LLM

Describe the bug

Hi, all. I'm working on a blog article, following a mix of local documentation and the Intelligent App workshop, but instead of Falcon I've gone with the Mistral 7b model, and at the end I switch the pet store app over to use it.

I can prompt the model locally from the cluster using:

kubectl run -it --rm --restart=Never curl --image=curlimages/curl -- curl -X POST "http://workspace-mistral-7b-instruct/generate" -H "accept: application/json" -H "Content-Type: application/json" -d '{"prompt":"What is your favorite ice cream flavor?"}'

However, the pet store app does not work with it and responds with "Production Description" text instead.

Screenshots

Mistral_NoResponse


This is my config map:

kubectl apply -n pets -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: ai-service-configmap
data:
  USE_LOCAL_LLM: "True"
  AI_ENDPOINT: "http://workspace-mistral-7b-instruct/chat"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ai-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ai-service
  template:
    metadata:
      labels:
        app: ai-service
    spec:
      nodeSelector:
        "kubernetes.io/os": linux
      containers:
      - name: order-service
        image: ghcr.io/azure-samples/aks-store-demo/ai-service:latest
        ports:
        - containerPort: 5001
        envFrom:
        - configMapRef:
            name: ai-service-configmap
        resources:
          requests:
            cpu: 20m
            memory: 50Mi
          limits:
            cpu: 30m
            memory: 85Mi
        startupProbe:
          httpGet:
            path: /health
            port: 5001
          initialDelaySeconds: 60
          failureThreshold: 3
          timeoutSeconds: 3
          periodSeconds: 5
        readinessProbe:
          httpGet:
            path: /health
            port: 5001
          initialDelaySeconds: 3
          failureThreshold: 3
          timeoutSeconds: 3
          periodSeconds: 5
        livenessProbe:
          httpGet:
            path: /health
            port: 5001
          failureThreshold: 3
          initialDelaySeconds: 3
          timeoutSeconds: 3
          periodSeconds: 3
---
apiVersion: v1
kind: Service
metadata:
  name: ai-service
spec:
  type: ClusterIP
  ports:
  - name: http
    port: 5001
    targetPort: 5001
  selector:
    app: ai-service
EOF

Hoping someone can point me in the right direction on what's happening here, whether it's a bug, or what needs changing.

Modularize Bicep deployment

Is your feature request related to a problem? Please describe.
Currently, the infrastructure as code deploys all Azure resources by default. However, there may be scenarios where some of the Azure services are not needed for various demonstrations.

Describe the solution you'd like
In order to better support various demo scenarios, it would be best to modularize the deployment and deploy ancillary Azure services as needed based on Bicep input parameters. This is already being done with Azure Container Registry, and we should take a similar approach with the other services (see the sketch after the parameter list below).

We can start with the following boolean parameters:

  • DEPLOY_AZURE_CONTAINER_REGISTRY deploys Azure Container Registry
  • DEPLOY_WORKLOAD_IDENTITY deploys Azure Managed Identities for services that support it and enables workload identity and OIDC Issuer URL on AKS
  • DEPLOY_AZURE_OPENAI deploys Azure OpenAI, the ai-service microservice, and configures workload identity if that option is set to true
  • DEPLOY_AZURE_SERVICE_BUS deploys Azure Service Bus and configures workload identity if that option is set to true
  • DEPLOY_AZURE_COSMOSDB deploys Azure CosmosDB and configures workload identity if that option is set to true. This setting will also take into account the AZURE_COSMOSDB_ACCOUNT_KIND parameter, which is used to determine the database API (either MongoDB or GlobalDocumentDB, with MongoDB being the default)
  • DEPLOY_OBSERVABILITY_TOOLS deploys an Azure Log Analytics workspace, Azure Monitor managed service for Prometheus, and Azure Managed Grafana, and enables monitoring on the AKS cluster with Container Insights

The only resources that will be deployed by default at all times are the AKS cluster and Azure Key Vault (with the Secrets Store CSI driver enabled on AKS).
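
A minimal Bicep sketch of the same toggle pattern; the parameter and resource names are illustrative and not taken from this repo's Bicep code:

@description('Deploy Azure Service Bus for the order queue')
param deployAzureServiceBus bool = false

resource serviceBus 'Microsoft.ServiceBus/namespaces@2021-11-01' = if (deployAzureServiceBus) {
  name: 'sb-aks-store-demo'
  location: resourceGroup().location
  sku: {
    name: 'Standard'
  }
}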

Host this repository helm chart publicly

Describe the solution you'd like
As a user/operator, I'd like to be able to install the application utilizing the helm chart from a hosted chart repository so that I can easily deploy the application with various parameters onto various clusters, especially with gitops agents such as ArgoCD or Flux.

Describe alternatives you've considered
Cloning the repository locally and then running commands takes a lot longer and also doesn't work (at all or easily) with GitOps agent deployments.
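
A hypothetical example of what this would enable once a chart is published; the OCI path below is illustrative only, since the chart is not currently hosted anywhere:

helm install aks-store-demo oci://ghcr.io/azure-samples/charts/aks-store-demo --namespace pets --create-namespace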
