
istio's Introduction

Istio

Issues for this repository are disabled

Issues for this repository are tracked in Red Hat Jira. Please head to https://issues.redhat.com/browse/OSSM in order to browse or open an issue.



Istio is an open source service mesh that layers transparently onto existing distributed applications. Istio’s powerful features provide a uniform and more efficient way to secure, connect, and monitor services. Istio is the path to load balancing, service-to-service authentication, and monitoring – with few or no service code changes.

  • For in-depth information about how to use Istio, visit istio.io
  • To ask questions and get assistance from our community, visit discuss.istio.io
  • To learn how to participate in our overall community, visit our community page


You'll find many other useful documents on our Wiki.

Introduction

Istio is an open platform for providing a uniform way to integrate microservices, manage traffic flow across microservices, enforce policies and aggregate telemetry data. Istio's control plane provides an abstraction layer over the underlying cluster management platform, such as Kubernetes.

Istio is composed of these components:

  • Envoy - Sidecar proxies per microservice to handle ingress/egress traffic between services in the cluster and from a service to external services. The proxies form a secure microservice mesh providing a rich set of functions such as discovery, layer-7 routing, circuit breakers, policy enforcement, and telemetry recording/reporting.

    Note: The service mesh is not an overlay network. It simplifies and enhances how microservices in an application talk to each other over the network provided by the underlying platform.

  • Istiod - The Istio control plane. It provides service discovery, configuration and certificate management. It consists of the following sub-components:

    • Pilot - Responsible for configuring the proxies at runtime.

    • Citadel - Responsible for certificate issuance and rotation.

    • Galley - Responsible for validating, ingesting, aggregating, transforming and distributing config within Istio.

  • Operator - This component provides user-friendly options to operate the Istio service mesh; a minimal example of the kind of resource it consumes is sketched below.
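To make the Operator component concrete, the following is a minimal sketch of the kind of resource the upstream operator consumes (an IstioOperator custom resource). The name and profile are illustrative, and note that Maistra's operator uses a ServiceMeshControlPlane resource instead:

apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  name: example-istiocontrolplane   # illustrative name
  namespace: istio-system
spec:
  profile: default                  # selects the default component set (istiod, ingress gateway)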

Repositories

The Istio project is divided across a few GitHub repositories:

  • istio/api. This repository defines component-level APIs and common configuration formats for the Istio platform.

  • istio/community. This repository contains information on the Istio community, including the various documents that govern the Istio open source project.

  • istio/istio. This is the main code repository. It hosts Istio's core components, install artifacts, and sample programs. It includes:

    • istioctl. This directory contains code for the istioctl command line utility.

    • operator. This directory contains code for the Istio Operator.

    • pilot. This directory contains platform-specific code to populate the abstract service model, dynamically reconfigure the proxies when the application topology changes, as well as translate routing rules into proxy-specific configuration.

    • security. This directory contains security related code, including Citadel (acting as Certificate Authority), citadel agent, etc.

  • istio/proxy. The Istio proxy contains extensions to the Envoy proxy (in the form of Envoy filters) that support authentication, authorization, and telemetry collection.

Issue management

We use GitHub to track all of our bugs and feature requests. Each issue we track has a variety of metadata:

  • Epic. An epic represents a feature area for Istio as a whole. Epics are fairly broad in scope and are basically product-level things. Each issue is ultimately part of an epic.

  • Milestone. Each issue is assigned a milestone. This is 0.1, 0.2, ..., or 'Nebulous Future'. The milestone indicates when we think the issue should get addressed.

  • Priority. Each issue has a priority which is represented by the column in the Prioritization project. Priority can be one of P0, P1, P2, or >P2. The priority indicates how important it is to address the issue within the milestone. P0 says that the milestone cannot be considered achieved if the issue isn't resolved.


Cloud Native Computing Foundation logo

Istio is a Cloud Native Computing Foundation project.

istio's People

Contributors

andraxylia, ayj, bianpengyuan, costinm, douglas-reid, ericvn, esnible, frankbu, geeknoid, hanxiaop, howardjohn, hzxuzhonghu, istio-testing, jimmycyj, kyessenov, ldemailly, linsun, mandarjog, myidpt, nmittler, ostromart, ozevren, ramaraochavali, richardwxn, rshriram, sebastienvas, stevenctl, yangminzhu, ymesika, zirain


istio's Issues

Admission webhook doesn't allow creating an AuthorizationPolicy with a request.regex.headers condition

Bug description
admission webhook "pilot.validation.istio.io" denied the request: configuration is invalid: invalid condition: unknown attribute (request.regex.headers[x-forwarded-client-cert])

Affected product area (please put an X in all that apply)

[ ] Configuration Infrastructure
[ ] Docs
[ ] Installation
[ ] Networking
[ ] Performance and Scalability
[ ] Policies and Telemetry
[X] Security
[ ] Test and Release
[ ] User Experience
[ ] Developer Infrastructure

Affected features (please put an X in all that apply)

Expected behavior
According to the MAISTRA-224 feature, it should be possible to specify a when condition in an AuthorizationPolicy like:

spec:
  rules:
    - when:
        - key: 'request.regex.headers[x-forwarded-client-cert]'
          values: 
            - "foo-[0-9].*"

It's necessary to authorize external consumers on istio-proxy sidecars.

Steps to reproduce the bug
Create AuthorizationPolicy:

oc apply -f - <<EOF
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: new-policy
spec:
  rules:
    - when:
        - key: 'request.regex.headers[x-forwarded-client-cert]'
          values:
            - "foo-[0-9].*"
  selector:
    matchLabels:
      app: simple-server
EOF

Get error:
Error from server: error when creating "STDIN": admission webhook "pilot.validation.istio.io" denied the request: configuration is invalid: invalid condition: unknown attribute (request.regex.headers[x-forwarded-client-cert])

Version (include the output of istioctl version --remote and kubectl version and helm version if you used Helm)
OSE 4.3 ServiceMesh 1.1.3

How was Istio installed?

Environment where bug was observed (cloud vendor, OS, etc)

It seems the request.regex.headers case was never added to istio/pkg/config/security/security.go, although it exists in istio/pilot/pkg/security/authz/model.

MutatingWebhookConfiguration matches system namespaces

Describe the bug
In my Openshift 3.11 installation I had sidecar-injector and SDN/OVS pods evicted. Subsequently the SDN pod could not start up:

Error creating: Internal error occurred: failed calling admission webhook "sidecar-injector.istio.io": Post https://istio-sidecar-injector.istio-system.svc:443/inject?timeout=30s: dial tcp 172.30.209.68:443: connect: connection refused

Apparently the webhook was called for openshift-sdn (and other namespaces) as these lacked the ignore label required by the webhook:

namespaceSelector:
    matchExpressions:
    - key: istio.openshift.com/ignore-namespace
      operator: DoesNotExist

The sidecar injector itself could not start because it was blocked by missing SDN.

Expected behavior
Any webhook configuration should exclude namespaces openshift-*, kube-* and default.
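Label selectors cannot express name wildcards such as openshift-*, so one possible shape for a safer selector is sketched below: in addition to the existing ignore label, require an opt-in label that system namespaces never carry. The maistra.io/member-of key is used purely as an illustration and may not be what the webhook actually matches on:

namespaceSelector:
  matchExpressions:
    - key: istio.openshift.com/ignore-namespace   # existing ignore label from this report
      operator: DoesNotExist
    - key: maistra.io/member-of                   # assumption: opt-in label present only on mesh member namespaces
      operator: Exists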

Steps to reproduce the bug

oc delete -n openshift-sdn `oc get po -n openshift-sdn -o name` && oc delete -n istio-system `oc get po -n istio-system -o name`

Version
Maistra 0.10.

Installation
Istio-operator.

Environment
Openshift 3.11 on RHEL 7.6

After updating Maistra from 1.0.9 to 1.1, Prometheus is no longer able to get metrics


Bug description

Affected product area (please put an X in all that apply)

[X] Configuration Infrastructure
[ ] Docs
[ ] Installation
[ ] Networking
[ ] Performance and Scalability
[ ] Policies and Telemetry
[ ] Security
[ ] Test and Release
[X] User Experience
[ ] Developer Infrastructure

Expected behavior
I had a test application running on our OpenShift cluster which is using mTLS and RBAC. Before the update I was able to see the request flow in Kiali. After updating Istio to version 1.1, the application works like before; however, I can no longer see the request flow in Kiali. Furthermore, if I open the "Targets" view in Prometheus, I see the error message "read: connection reset by peer" for my application pods. I attached an image with the complete error message. From what I can tell, the Envoy container in the pod blocks the incoming request. Do you have any idea what the reason for the error could be? Maybe I am missing a configuration so that Prometheus can access the metrics of the pod.
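One configuration that is sometimes involved in this kind of symptom is strict mTLS on the application's metrics port. The sketch below shows a port-level permissive authentication Policy for the Istio/Maistra 1.1 APIs; the target service name and port number are placeholders, and this is not a confirmed fix for the report above:

apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
  name: metrics-permissive        # hypothetical name
  namespace: acme                 # application namespace taken from the data plane list below
spec:
  targets:
    - name: recommendation        # placeholder service name
      ports:
        - number: 8080            # assumed metrics port
  peers:
    - mtls:
        mode: PERMISSIVE          # accept both plaintext (Prometheus) and mTLS traffic on this port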

Can I still use the resource "ServiceMeshMemberRoll" instead of the "ServiceMeshMember"?

Steps to reproduce the bug

Version (include the output of istioctl version --remote and kubectl version)

{
  "clientVersion": {
    "version": "1.4.6",
    "revision": "f288658b710d932bd4b0200728920fe3cbe0af61-dirty",
    "golang_version": "go1.13.8",
    "status": "Modified",
    "tag": "1.4.6"
  },
  "meshVersion": [
    {
      "Component": "citadel",
      "Info": {
        "version": "OSSM_1.1.0",
        "revision": "02dbb339c6d9fa07763c81ebc192899d26287403",
        "golang_version": "go1.13.4",
        "status": "Clean",
        "tag": "unknown"
      }
    },
    {
      "Component": "egressgateway",
      "Info": {
        "version": "OSSM_1.1.0",
        "revision": "02dbb339c6d9fa07763c81ebc192899d26287403",
        "golang_version": "go1.13.4",
        "status": "Clean",
        "tag": "unknown"
      }
    },
    {
      "Component": "galley",
      "Info": {
        "version": "OSSM_1.1.0",
        "revision": "02dbb339c6d9fa07763c81ebc192899d26287403",
        "golang_version": "go1.13.4",
        "status": "Clean",
        "tag": "unknown"
      }
    },
    {
      "Component": "ingressgateway",
      "Info": {
        "version": "OSSM_1.1.0",
        "revision": "02dbb339c6d9fa07763c81ebc192899d26287403",
        "golang_version": "go1.13.4",
        "status": "Clean",
        "tag": "unknown"
      }
    },
    {
      "Component": "pilot",
      "Info": {
        "version": "OSSM_1.1.0",
        "revision": "02dbb339c6d9fa07763c81ebc192899d26287403",
        "golang_version": "go1.13.4",
        "status": "Clean",
        "tag": "unknown"
      }
    },
    {
      "Component": "policy",
      "Info": {
        "version": "OSSM_1.1.0",
        "revision": "02dbb339c6d9fa07763c81ebc192899d26287403",
        "golang_version": "go1.13.4",
        "status": "Clean",
        "tag": "unknown"
      }
    },
    {
      "Component": "sidecar-injector",
      "Info": {
        "version": "OSSM_1.1.0",
        "revision": "02dbb339c6d9fa07763c81ebc192899d26287403",
        "golang_version": "go1.13.4",
        "status": "Clean",
        "tag": "unknown"
      }
    },
    {
      "Component": "telemetry",
      "Info": {
        "version": "OSSM_1.1.0",
        "revision": "02dbb339c6d9fa07763c81ebc192899d26287403",
        "golang_version": "go1.13.4",
        "status": "Clean",
        "tag": "unknown"
      }
    }
  ],
  "dataPlaneVersion": [
    {
      "ID": "recommendation-v2-78857d66f8-b6zmc.acme",
      "IstioVersion": "maistra-1.1.0"
    },
    {
      "ID": "preference-v1-56d5c67b9f-pll7d.acme",
      "IstioVersion": "maistra-1.1.0"
    },
    {
      "ID": "istio-ingressgateway-bf6644dd5-bdjwc.istio",
      "IstioVersion": "maistra-1.1.0"
    },
    {
      "ID": "istio-egressgateway-85cd64f885-q99ml.istio",
      "IstioVersion": "maistra-1.1.0"
    },
    {
      "ID": "recommendation-v1-55fcdc774-wm8sj.acme",
      "IstioVersion": "maistra-1.1.0"
    },
    {
      "ID": "customer-9d7c57848-wjp5w.acme",
      "IstioVersion": "maistra-1.1.0"
    }
  ]
}

Client Version: openshift-clients-4.3.0-201910250623-88-g6a937dfe
Server Version: 4.3.5
Kubernetes Version: v1.16.2

How was Istio installed?
Istio (Maistra 1.0.8) was installed using the Operator from the OperatorHub and updated to version 1.1.0 by adding the version attribute to the existing ServiceMeshControlPlane resource.

Environment where bug was observed (cloud vendor, OS, etc)

[attached image: Prometheus "Targets" view with the complete error message]

Help needed: unable to import package "github.com/Maistra/istio/pkg/servicemesh"

Hi team,

I am trying to import the ServiceMeshMemberRoll type in a Go client, but it fails.

Go version: go1.13.4 darwin/amd64
The code looks like:

import (
    maistrav1 "github.com/Maistra/istio/pkg/servicemesh/client/clientset/versioned/typed/servicemesh/v1"
)

...

type ServiceMeshMemberRollInterface struct {
    V1 *maistrav1.ServiceMeshMemberRollInterface
}

In the go.mod, I added:
github.com/Maistra/istio v0.0.0-20181105172658-41d45b880a2f // indirect

Did I import the wrong library?

Thanks.
Longbin

mTLS uses wrong SNI in TLS Client Hello

Bug description
Hi,

In our setup we have deployed two namespaces, first x2 and afterwards x3. Both are identical from a configuration and deployment perspective (of course the namespace-specific config within the YAMLs differs accordingly); both have mTLS enabled and a headless service.
We have one Istio control plane (istio-system) and are trying to do mTLS within the namespaces. To be clear, we are not trying to do mTLS between multiple namespaces.

---
apiVersion: v1
kind: Service
metadata:
  name: headless
spec:
  clusterIP: None
  selector:
    galera.v1beta2.sql.databases/galera-name: galera-cluster
  ports:
    - name: s3306
      protocol: TCP
      port: 3306
      targetPort: 3306
    - name: s4444
      protocol: TCP
      port: 4444
      targetPort: 4444
    - name: s4567
      protocol: TCP
      port: 4567
      targetPort: 4567
    - name: s4568
      protocol: TCP
      port: 4568
      targetPort: 4568
---
apiVersion: "authentication.istio.io/v1alpha1"
kind: "Policy"
metadata:
  name: default
spec:
  peers:
    - mtls: {}
---
apiVersion: "networking.istio.io/v1alpha3"
kind: "DestinationRule"
metadata:
  name: default
spec:
  host: "*.x2.svc.cluster.local"
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL
---

In the first namespace, x2, mTLS is working as expected.

istioctl authn tls-check galera-cluster-bb55l -n x2 | grep x2.svc
headless.x2.svc.cluster.local:3306                                 OK         STRICT         ISTIO_MUTUAL     x2/default                                     x2/default
headless.x2.svc.cluster.local:4444                                 OK         STRICT         ISTIO_MUTUAL     x2/default                                     x2/default
headless.x2.svc.cluster.local:4567                                 OK         STRICT         ISTIO_MUTUAL     x2/default                                     x2/default
headless.x2.svc.cluster.local:4568                                 OK         STRICT         ISTIO_MUTUAL     x2/default                                     x2/default

When we deploy x3 with the same configuration as x2, the x3 pods are not able to communicate with each other.

istioctl authn tls-check galera-cluster-24z99 -n x3 | grep x3.svc
headless.x3.svc.cluster.local:3306                                 OK         STRICT         ISTIO_MUTUAL     x3/default                                     x3/default
headless.x3.svc.cluster.local:4444                                 OK         STRICT         ISTIO_MUTUAL     x3/default                                     x3/default
headless.x3.svc.cluster.local:4567                                 OK         STRICT         ISTIO_MUTUAL     x3/default                                     x3/default
headless.x3.svc.cluster.local:4568                                 OK         STRICT         ISTIO_MUTUAL     x3/default                                x3/default     

A tcpdump revealed that the TLS handshake between the Envoy proxies fails with "Certificate Unknown (46)". The reason for this is that the TLS Client Hello uses the SNI for x2 (outbound_.4567_._.headless.x2.svc.cluster.local), which is obviously wrong. It seems that the mesh (I use this term on purpose because I don't know which component of it is responsible for this behaviour) uses the first service FQDN that was created for this TCP port. When we delete the x2 namespace, the mTLS communication in x3 starts working as expected.
If needed, I can provide further configuration and tcpdumps.
We did not find a way to change this behaviour by configuration (different ServiceEntries, DestinationRules, etc.), nor did we find a hint in the documentation that this should or should not work.
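For reference, a Sidecar resource of the following shape is one way to limit which services each namespace's proxies know about, so that the proxies in x3 never receive configuration for the headless service in x2. Whether this actually resolves the SNI mismatch described in this report is untested; it is only a sketch using the namespace names from the report:

apiVersion: networking.istio.io/v1alpha3
kind: Sidecar
metadata:
  name: default
  namespace: x3
spec:
  egress:
    - hosts:
        - "./*"              # only services in the same namespace
        - "istio-system/*"   # plus the control plane namespace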

From an architectural or configuration point of view, is this behaviour expected? For now it seems to me that it is a bug.

Thank you for your support!

Best Regards,

Affected product area (please put an X in all that apply)

[ ] Configuration Infrastructure
[ ] Docs
[ ] Installation
[x] Networking
[ ] Performance and Scalability
[ ] Policies and Telemetry
[x] Security
[ ] Test and Release
[ ] User Experience
[ ] Developer Infrastructure

Expected behavior
The expected behaviour is that mTLS works within more than one namespace using the same TCP port.

Steps to reproduce the bug

Version (include the output of istioctl version --remote and kubectl version)

How was Istio installed?
RedHat ServiceMesh 1.1.2

Environment where bug was observed (cloud vendor, OS, etc)
RedHat OpenShift 4.3.22 (bare metal)


Rewrite webhook controller tests

As part of integrating the xns-informers library, we've skipped some unit tests for the webhook controller because they don't work with xns-informers and Maistra doesn't use the webhook controller anyway.

We should look into rewriting these tests. They modify informer caches directly and otherwise do some things that seem to be a bit flaky. It would be nice to not have to disable them, regardless of whether Maistra uses this controller or not.

Failing to install ServiceMeshControlPlane on OKD version 4.3.0-0 and 4.4.0-0 running kubernetes version v1.17.1

Bug description
unable to recognize "smcp.yaml": no matches for kind "ServiceMeshControlPlane" in version "maistra.io/v1

Affected product area (please put an X in all that apply)
OKD 4.3.0 on AWS

Expected behavior

Steps to reproduce the bug

Version (include the output of istioctl version --remote and kubectl version)

How was Istio installed?
STEP 1
Install okd successfully
k8_client_url: https://github.com/openshift/okd/releases/download/4.4.0-0.okd-2020-01-28-022517/openshift-client-mac-4.4.0-0.okd-2020-01-28-022517.tar.gz
ocp_installer_url: https://github.com/openshift/okd/releases/download/4.4.0-0.okd-2020-01-28-022517/openshift-install-mac-4.4.0-0.okd-2020-01-28-022517.tar.gz

OR

k8_client_url: https://github.com/openshift/okd/releases/download/4.3.0-0.okd-2019-11-15-182656/openshift-client-mac-4.3.0-0.okd-2019-11-15-182656.tar.gz
ocp_installer_url: https://github.com/openshift/okd/releases/download/4.3.0-0.okd-2019-11-15-182656/openshift-install-mac-4.3.0-0.okd-2019-11-15-182656.tar.gz

STEP 2
i) Create oc project
oc adm new-project istio-system
ii) Install catalog sources
oc apply -f redhat-operators-csc.yaml

apiVersion: operators.coreos.com/v1
kind: CatalogSourceConfig
metadata:
  name: redhat-operators-packages
  namespace: openshift-marketplace
spec:
  targetNamespace: openshift-operators
  packages: serverless-operator,servicemeshoperator,kiali-ossm,jaeger-product,elasticsearch-operator
  source: redhat-operators

&&

oc apply -f community-operators-csc.yaml

apiVersion: operators.coreos.com/v1
kind: CatalogSourceConfig
metadata:
  name: community-operators-packages
  namespace: openshift-marketplace
spec:
  targetNamespace: openshift-operators
  packages: knative-eventing-operator,openshift-pipelines-operator,knative-kafka-operator,strimzi-kafka-operator
  source: community-operators
iii) Install Servicemesh subscription
oc apply -f subscription.yaml

---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: elastic-search
  namespace: openshift-operators
spec:
  channel: preview
  source: redhat-operators-packages
  name: elasticsearch-operator
  sourceNamespace: openshift-operators
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: jaeger
  namespace: openshift-operators
spec:
  channel: stable
  source: redhat-operators-packages
  name: jaeger-product
  sourceNamespace: openshift-operators
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: kiali
  namespace: openshift-operators
spec:
  channel: stable
  source: redhat-operators-packages
  name: kiali-ossm
  sourceNamespace: openshift-operators
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: servicemesh
  namespace: openshift-operators
spec:
  channel: "1.0"
  source: redhat-operators-packages
  name: servicemeshoperator
  sourceNamespace: openshift-operators

iv) Install Servicemesh components
oc apply -f smcp.yaml
apiVersion: maistra.io/v1
kind: ServiceMeshControlPlane
metadata:
  name: istio-control-plane
  namespace: istio-system
spec:
  istio:
    global:
      proxy:
        resources:
          requests:
            cpu: 100m
            memory: 128Mi
          limits:
            cpu: 500m
            memory: 128Mi
      mtls:
        enabled: true
      disablePolicyChecks: true
      policyCheckFailOpen: false
    gateways:
      istio-egressgateway:
        autoscaleEnabled: true
        autoscaleMin: 1
        autoscaleMax: 5
      istio-ingressgateway:
        autoscaleEnabled: true
        autoscaleMin: 1
        autoscaleMax: 5
      cluster-local-gateway:
        autoscaleEnabled: false
        enabled: true
        labels:
          app: cluster-local-gateway
          istio: cluster-local-gateway
        ports:
          - name: status-port
            port: 15020
          - name: http2
            port: 80
            targetPort: 8080
          - name: https
            port: 443
    mixer:
      enabled: true
      policy:
        autoscaleEnabled: false
      telemetry:
        autoscaleEnabled: false
        resources:
          requests:
            cpu: 100m
            memory: 1G
          limits:
            cpu: 500m
            memory: 4G
    pilot:
      resources:
        requests:
          cpu: 100m
          memory: 128Mi
      autoscaleEnabled: false
      traceSampling: 100
    kiali:
      enabled: true
      dashboard:
        viewOnlyMode: false
      ingress:
        enabled: true
    grafana:
      enabled: true
    tracing:
      enabled: true
      jaeger:
        template: all-in-one
&&

oc apply -f smmr.yaml
apiVersion: maistra.io/v1
kind: ServiceMeshMemberRoll
metadata:
  name: default
  namespace: istio-system
spec:
  members:
    - default
    - bookinfo

Environment where bug was observed (cloud vendor, OS, etc)
OKD 4.4.0 and 4.3.0 on AWS


oc explain shows empty description

Describe the feature request
When using oc explain with custom resources provided by Maistra (e.g. servicemeshcontrolplane), it should report the available schema. Currently it reports only an empty description.

ServiceMeshControlPlane should not remove an active ingress Deployment object when resources are adjusted

Bug Description

After increasing the resources.requests.memory value on my default Istio Ingress Gateway, the ServiceMeshControlPlane deleted the entire Ingress Gateway Deployment because the resources.requests.memory value exceeded the internal default resources.limits.memory value. When a configuration change asks for more of a given resource, the available resources should never go to 0.

The reason reported by the ServiceMeshControlPlane was:
Error processing component istio-ingress: Deployment.apps "istio-ingressgateway" is invalid: spec.template.spec.containers[0].resources.requests: Invalid value: "2Gi": must be less than or equal to memory limit: error: Deployment.apps "istio-ingressgateway" is invalid: spec.template.spec.containers[0].resources.requests: Invalid value: "2Gi": must be less than or equal to memory limit
If a live valid deployment configuration is changed to an invalid configuration, the ServiceMeshControlPlane should do nothing rather than delete the live ingress.

Version

% istioctl version
1.12.0

% oc version
Client Version: 4.8.0-202111041632.p0.git.88e7eba.assembly.stream-88e7eba
Server Version: 4.8.21
Kubernetes Version: v1.21.5+6a39d04

% helm version --short
v3.8.2+g6e3701e

Additional Information

Steps to reproduce:
On an existing maistra.io/v2 ServiceMeshControlPlane object, set:
spec.gateways.ingress.runtime.container.resources.requests.memory: 1Gi
Once the change is fully deployed, set:
spec.gateways.ingress.runtime.container.resources.requests.memory: 2Gi
The entire default istio-ingressgateway Deployment will be deleted, taking down ingress for the entire cluster.
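A workaround sketch, following directly from the error message quoted above: set an explicit limit together with the request so the request never exceeds the (internal default) limit. The field path matches the maistra.io/v2 path used in this report; the resource name is a placeholder and all unrelated fields are omitted:

apiVersion: maistra.io/v2
kind: ServiceMeshControlPlane
metadata:
  name: basic                 # placeholder; use the name of the existing SMCP
  namespace: istio-system
spec:
  gateways:
    ingress:
      runtime:
        container:
          resources:
            requests:
              memory: 2Gi
            limits:
              memory: 2Gi     # explicit limit >= request avoids the "must be less than or equal to memory limit" rejection

This does not address the underlying problem that an invalid change deletes the live Deployment; it only avoids triggering it.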

Affected product area

  • Docs
  • Installation
  • Networking
  • Performance and Scalability
  • Extensions and Telemetry
  • Security
  • Test and Release
  • User Experience
  • Developer Infrastructure
  • Upgrade
  • Multi Cluster
  • Virtual Machine
  • Control Plane Revisions

Is this the right place to submit this?

  • This is not a security vulnerability
  • This is not a question about how to use Istio

Add property to enable/disable mixer spans

Describe the feature request
There should be a tracer property to remove the trace_zipkin_url parameter, as referenced in https://istio.io/faq/distributed-tracing/#why-mixer-spans.

Describe alternatives you've considered
In https://github.com/Maistra/istio/blob/maistra-1.1/install/kubernetes/helm/istio/charts/mixer/templates/deployment.yaml#L54-L58, a property check should be added so that this parameter is omitted when Mixer spans are not desired in traces.
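As an illustration of the requested change, a hedged Helm-template sketch is shown below. The value name reportZipkinSpans is hypothetical, and the argument line is abbreviated rather than copied from the referenced file:

# Sketch (hypothetical value name): only add the Zipkin URL flag when Mixer spans are wanted
{{- if .Values.mixer.telemetry.reportZipkinSpans }}
- --trace_zipkin_url=http://zipkin.{{ .Release.Namespace }}:9411/api/v1/spans
{{- end }}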

Additional context

Cannot create a ServiceMeshControlPlane with prometheus enabled, in a project with resource limits quotas

Bug description

Red Hat OpenShift Service Mesh v1.1.7 is used.
The targeted project has resource quotas defined: limits.cpu and limits.memory.

I am trying to deploy a ServiceMeshControlPlane in my project, with the following YAML CR:

  istio:
    security:
      resources:
        limits:
          cpu: 500m
          memory: 256Mi
        requests:
          cpu: 200m
          memory: 128Mi
    kiali:
      enabled: true
    tracing:
      enabled: true
      jaeger:
        resources:
          limits:
            cpu: 500m
            memory: 256Mi
          requests:
            cpu: 200m
            memory: 128Mi
        template: all-in-one
    grafana:
      enabled: true
    mixer:
      policy:
        autoscaleEnabled: false
      telemetry:
        autoscaleEnabled: false
        resources:
          limits:
            cpu: 500m
            memory: 256Mi
          requests:
            cpu: 200m
            memory: 128Mi
    prometheus:
      resources:
        limits:
          cpu: 500m
          memory: 256Mi
        requests:
          cpu: 200m
          memory: 128Mi
    galley:
      resources:
        limits:
          cpu: 500m
          memory: 256Mi
        requests:
          cpu: 200m
          memory: 128Mi
    gateways:
      istio-egressgateway:
        autoscaleEnabled: false
        enabled: false
      istio-ingressgateway:
        autoscaleEnabled: false
        enable: true
        ior_enabled: false
    pilot:
      autoscaleEnabled: false
      resources:
        limits:
          cpu: 500m
          memory: 256Mi
        requests:
          cpu: 200m
          memory: 128Mi
      traceSampling: 100

The prometheus pod cannot start due to the lack of a limit resource specification.
After looking carefully at the deployed prometheus ReplicaSet, I observed that the prometheus pod is made of two containers (prometheus and prometheus-proxy).
The prometheus container is correctly patched by the resources section defined in the prometheus part of the ServiceMeshControlPlane, while the prometheus-proxy container is not. As a consequence, the prometheus-proxy container has no resource limit defined and cannot be started in my project.
The consequence is: among the 60 Kubernetes items deployed by a healthy control plane, only 14 are deployed, blocking on the prometheus install.
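One possible workaround, not taken from the report: define a LimitRange in the control plane project so that containers without explicit limits, such as prometheus-proxy, receive defaults that satisfy the quota. The name and values below are illustrative only:

apiVersion: v1
kind: LimitRange
metadata:
  name: container-defaults    # hypothetical name
  namespace: istio-system     # the project carrying the limits.cpu/limits.memory quota
spec:
  limits:
    - type: Container
      default:                # default limits injected into containers that declare none (e.g. prometheus-proxy)
        cpu: 500m
        memory: 256Mi
      defaultRequest:         # default requests for containers that declare none
        cpu: 100m
        memory: 128Mi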

Affected product area (please put an X in all that apply)

[x] Configuration Infrastructure
[ ] Docs
[ ] Installation
[ ] Networking
[ ] Performance and Scalability
[ ] Policies and Telemetry
[ ] Security
[ ] Test and Release
[ ] User Experience
[ ] Developer Infrastructure

Affected features (please put an X in all that apply)

[ ] Multi Cluster
[ ] Virtual Machine
[x] Multi Control Plane

Expected behavior

Steps to reproduce the bug

Version (include the output of istioctl version --remote and kubectl version and helm version if you used Helm)
OpenShift4.4
Red Hat OpenShift Service Mesh v1.1.7
How was Istio installed?
OpenShift 4.4
Red Hat OpenShift Service Mesh v1.1.7
Environment where bug was observed (cloud vendor, OS, etc)


MutatingWebhookConfiguration sometimes disappears

Describe the bug
In our lab environment, we were asked to shut down the cluster overnight. Most of the time everything starts up just fine, but from time to time the MutatingWebhookConfiguration disappears; the istio-sidecar-injector pods then crash and auto-injection is no longer working.

Expected behavior
I would normally expect the MutatingWebhookConfiguration to remain.

Steps to reproduce the bug
I was unable to find steps that reproduce it every time; shutting down the cluster and restarting it seems to trigger it, though.

Workaround
As a workaround, I have exported the MutatingWebHookConfiguration to a yaml file that I can apply when it goes missing, then delete the istio-sidecar-injector pod so it restarts.

Version
istio:
version.BuildInfo{Version:"1.1.0-rc.1", GitRevision:"cdc39e70054be670d2c141dec7b8517a0812b021", User:"root", Host:"72433e36-3ac1-11e9-8dad-0a580a2c0205", GolangVersion:"go1.10.4", DockerHub:"docker.io/istio", BuildStatus:"Clean", GitTag:"1.1.0-rc.0-54-gcdc39e7"}

Kubernetes:
Client Version: version.Info{Major:"1", Minor:"11+", GitVersion:"v1.11.0+d4cacc0", GitCommit:"d4cacc0", GitTreeState:"clean", BuildDate:"2018-12-18T16:26:37Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"11+", GitVersion:"v1.11.0+d4cacc0", GitCommit:"d4cacc0", GitTreeState:"clean", BuildDate:"2018-12-18T16:26:37Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}

OpenShift:
oc v3.11.59
kubernetes v1.11.0+d4cacc0
features: Basic-Auth GSSAPI Kerberos SPNEGO
Server https://
openshift v3.11.59
kubernetes v1.11.0+d4cacc0

Installation
Istio was installed using the Maistra Ansible playbook

Environment
RedHat 7.6 hosted in Azure

Cluster state
istio.tar.gz

Error applying ServiceMeshPolicy


Bug description

When attempting to apply mTLS ServiceMeshPolicy, API returns: error: unable to recognize "servicemeshpolicy.yaml": no matches for kind "ServiceMeshPolicy" in version "maistra.io/v1"

Following these instructions: https://maistra.io/docs/examples/mesh-wide_mtls/

Affected product area (please put an X in all that apply)

[ ] Configuration Infrastructure
[x] Docs
[ ] Installation
[ ] Networking
[ ] Performance and Scalability
[ ] Policies and Telemetry
[x] Security
[ ] Test and Release
[ ] User Experience
[ ] Developer Infrastructure

Expected behavior

Expect to be able to apply mTLS policy at service mesh control plane level.

Steps to reproduce the bug

Apply yaml below in OpenShift 4.3.5 with OpenShift Service Mesh installed via operators.

apiVersion: "maistra.io/v1"
kind: "ServiceMeshPolicy"
metadata:
  name: "default"
  namespace: istio-system-test
spec:
  peers:
  - mtls: {}

Version (include the output of istioctl version --remote and kubectl version)

Red Hat OpenShift Service Mesh version 1.1.1

How was Istio installed?

Operator catalog

Environment where bug was observed (cloud vendor, OS, etc)

User provisioned infrastructure on vSphere

Additionally, please consider attaching a cluster state archive by attaching
the dump file to this issue.

note on ordering of install

I've been trying to install the operator (v1.1.2+3 on OpenShift 4.3) and its resources automatically. That has led me to notice some things that I thought would be worth noting here.

  1. If the istio operator gets removed before the ServiceMeshControlPlane object has finished terminating, it gets stuck in terminating indefinitely and the istio-system namespace gets stuck with it. It then does get cleaned up if I re-install the istio operator. I think something like this has been seen before and can be resolved with some manual patching, so it's important to fully remove any control planes before removing the operator.
  2. The control plane has to be fully up before installing a memberroll. Otherwise the memberroll will never get a status and any namespaces added won't be part of the mesh. The install instructions do say to wait for the control plane - just noting what happens if you don't.
  3. It's important to clean up the webhooks especially on removal as otherwise they can fire and prevent pods being created.
  4. The operator also needs to be fully installed before trying to create any resources managed by it or any other resources that result in pods as the webhooks can fire and fail.

This isn't really an issue as the intended instructions avoid these problems so I'll close this right away. Just noting it so that it can be found in the future.

Spam messages: unstructured.Unstructured ended with: Underlying Result Channel closed

Describe the bug

istio-galley pod in istio-system logs spam messages as follows

2019-08-19T09:00:10.186732Z	warn	istio.io/istio/galley/pkg/source/kube/dynamic/source.go:147: watch of *unstructured.Unstructured ended with: Underlying Result Channel closed

Expected behavior
No spam messages

Steps to reproduce the bug
It always reproduces without any configuration; it starts logging the spam messages as soon as the istio-galley pod is running.

Version
Red Hat OpenShift Service Mesh 0.12 on OCPv3.11

Installation
Installation steps are based on Installing the Red Hat OpenShift Service Mesh.

Environment
Bare metal

Cluster state
istio-dump.tar.gz

add option to not allow traffic from the router in a tenant namespace


Describe the feature request

Currently the network policy created by Maistra allows traffic from the router into a tenant namespace as long as a pod has a specific annotation. The RFE is to provide an option in the ServiceMeshControlPlane to disable that.
This allows building service mesh deployments where tenant traffic is locked down and always has to go through the ingress/egress gateways.

Describe alternatives you've considered
Remove the ability to create routes via RBAC.
Prevent creating pods with the given annotation via OPA.

Affected product area (please put an X in all that apply)

[ ] Configuration Infrastructure
[ ] Docs
[x] Installation
[ ] Networking
[ ] Performance and Scalability
[ ] Policies and Telemetry
[ ] Security
[ ] Test and Release
[ ] User Experience
[ ] Developer Infrastructure

Additional context

Kibana service not responding after injecting istio-proxy


I've deployed an ECK stack on OpenShift using this guide. After deploying the app I was able to get to the Kibana login screen.
Afterwards I injected the istio-proxy (using the Service Mesh operator CRD) and then the login page stopped working.

Logs from Kibana when istio-proxy is injected:

{"type":"response","@timestamp":"2020-07-30T05:43:01Z","tags":[],"pid":8,"method":"get","statusCode":200,"req":{"url":"/login","method":"get","headers":{"host":"10.129.2.73:5601","user-agent":"kube-probe/1.18+","accept-encoding":"gzip","connection":"close"},"remoteAddress":"127.0.0.1","userAgent":"127.0.0.1"},"res":{"statusCode":200,"responseTime":65,"contentLength":9},"message":"GET /login 200 65ms - 9.0B"}

Logs from istio proxy:

[Envoy (Epoch 0)] [2020-07-29 20:38:11.665][28][debug][misc] [external/envoy/source/common/network/io_socket_error_impl.cc:29] Unknown error code 104 details Connection reset by peer

Detailed logs from istio proxy:

[Envoy (Epoch 0)] [2020-07-29 20:38:21.585][28][debug][filter] [external/envoy/source/extensions/filters/listener/original_dst/original_dst.cc:18] original_dst: New connection accepted
[Envoy (Epoch 0)] [2020-07-29 20:38:21.585][28][debug][filter] [external/envoy/source/extensions/filters/listener/tls_inspector/tls_inspector.cc:79] tls inspector: new connection accepted
[Envoy (Epoch 0)] [2020-07-29 20:38:21.585][28][debug][filter] [src/envoy/tcp/mixer/filter.cc:30] Called tcp filter: Filter
[Envoy (Epoch 0)] [2020-07-29 20:38:21.585][28][debug][filter] [src/envoy/tcp/mixer/filter.cc:40] Called tcp filter: initializeReadFilterCallbacks
[Envoy (Epoch 0)] [2020-07-29 20:38:21.585][28][debug][filter] [external/envoy/source/common/tcp_proxy/tcp_proxy.cc:204] [C1672] new tcp proxy session
[Envoy (Epoch 0)] [2020-07-29 20:38:21.585][28][debug][filter] [src/envoy/tcp/mixer/filter.cc:133] [C1672] Called tcp filter onNewConnection: remote 10.129.2.1:46076, local 10.129.2.73:5601
[Envoy (Epoch 0)] [2020-07-29 20:38:21.586][28][debug][filter] [external/envoy/source/common/tcp_proxy/tcp_proxy.cc:347] [C1672] Creating connection to cluster inbound|5601|https|kibana-sample-kb-http.elastic.svc.cluster.local
[Envoy (Epoch 0)] [2020-07-29 20:38:21.586][28][debug][pool] [external/envoy/source/common/tcp/conn_pool.cc:83] creating a new connection
[Envoy (Epoch 0)] [2020-07-29 20:38:21.586][28][debug][pool] [external/envoy/source/common/tcp/conn_pool.cc:364] [C1673] connecting
[Envoy (Epoch 0)] [2020-07-29 20:38:21.586][28][debug][connection] [external/envoy/source/common/network/connection_impl.cc:732] [C1673] connecting to 127.0.0.1:5601
[Envoy (Epoch 0)] [2020-07-29 20:38:21.586][28][debug][connection] [external/envoy/source/common/network/connection_impl.cc:741] [C1673] connection in progress
[Envoy (Epoch 0)] [2020-07-29 20:38:21.586][28][debug][pool] [external/envoy/source/common/tcp/conn_pool.cc:109] queueing request due to no available connections
[Envoy (Epoch 0)] [2020-07-29 20:38:21.586][28][debug][conn_handler] [external/envoy/source/server/connection_handler_impl.cc:343] [C1672] new connection
[Envoy (Epoch 0)] [2020-07-29 20:38:21.586][28][debug][connection] [external/envoy/source/common/network/connection_impl.cc:580] [C1673] connected
[Envoy (Epoch 0)] [2020-07-29 20:38:21.586][28][debug][pool] [external/envoy/source/common/tcp/conn_pool.cc:285] [C1673] assigning connection
[Envoy (Epoch 0)] [2020-07-29 20:38:21.586][28][debug][filter] [external/envoy/source/common/tcp_proxy/tcp_proxy.cc:541] TCP:onUpstreamEvent(), requestedServerName: 
[Envoy (Epoch 0)] [2020-07-29 20:38:21.586][28][debug][filter] [src/istio/control/client_context_base.cc:132] Check attributes: attributes {
  key: "connection.id"
  value {
    string_value: "f95d1aa1-ef1e-45ad-b9ab-10a3f3fd6440-1672"
  }
}
attributes {
  key: "connection.mtls"
  value {
    bool_value: false
  }
}
[Envoy (Epoch 0)] [2020-07-29 20:38:21.586][28][debug][filter] [src/istio/mixerclient/client_impl.cc:87] Policy cache hit=false, status=UNAVAILABLE
[Envoy (Epoch 0)] [2020-07-29 20:38:21.586][28][debug][grpc] [src/envoy/utils/grpc_transport.cc:44] Sending Check request: attributes {
  words: "65535.65535.65535"
  words: "kubernetes://kibana-sample-kb-64cb65c9c5-rbfjv.elastic"
  words: "origin.ip"
  words: "elastic"
  words: "f95d1aa1-ef1e-45ad-b9ab-10a3f3fd6440-1672"
  strings {
    key: 125
    value: -5
  }
  strings {
    key: 131
    value: 124
  }
  strings {
    key: 154
    value: -2
  }
  strings {
    key: 155
    value: -4
  }
  strings {
    key: 197
    value: -2
  }
  strings {
    key: 201
    value: 219
  }
  strings {
    key: 221
    value: -1
  }
  int64s {
    key: 151
    value: 5601
  }
  bools {
    key: 177
    value: false
  }
  timestamps {
    key: 133
    value {
      seconds: 1596055101
      nanos: 586020174
    }
  }
  bytes {
    key: -3
    value: "\n\201\002\001"
  }
  bytes {
    key: 0
    value: "\n\201\002\001"
  }
  bytes {
    key: 150
    value: "\000\000\000\000\000\000\000\000\000\000\377\377\n\201\002I"
  }
}
global_word_count: 222
deduplication_id: "0654879b-e36c-4910-b8d7-724be54a9ff1560"
[Envoy (Epoch 0)] [2020-07-29 20:38:21.586][28][debug][filter] [./src/envoy/utils/header_update.h:45] Mixer forward attributes set: CkYKCnNvdXJjZS51aWQSOBI2a3ViZXJuZXRlczovL2tpYmFuYS1zYW1wbGUta2ItNjRjYjY1YzljNS1yYmZqdi5lbGFzdGlj
[Envoy (Epoch 0)] [2020-07-29 20:38:21.586][28][debug][router] [external/envoy/source/common/router/router.cc:434] [C0][S18146019625426063270] cluster 'outbound|9091||istio-policy.istio-system.svc.cluster.local' match for URL '/istio.mixer.v1.Mixer/Check'
[Envoy (Epoch 0)] [2020-07-29 20:38:21.586][28][debug][router] [external/envoy/source/common/router/router.cc:549] [C0][S18146019625426063270] router decoding headers:
':method', 'POST'
':path', '/istio.mixer.v1.Mixer/Check'
':authority', 'mixer'
':scheme', 'http'
'te', 'trailers'
'grpc-timeout', '5000m'
'content-type', 'application/grpc'
'x-istio-attributes', 'CkYKCnNvdXJjZS51aWQSOBI2a3ViZXJuZXRlczovL2tpYmFuYS1zYW1wbGUta2ItNjRjYjY1YzljNS1yYmZqdi5lbGFzdGlj'
'x-envoy-internal', 'true'
'x-forwarded-for', '10.129.2.73'
'x-envoy-expected-rq-timeout-ms', '5000'
[Envoy (Epoch 0)] [2020-07-29 20:38:21.586][28][debug][pool] [external/envoy/source/common/http/http2/conn_pool.cc:98] [C27] creating stream
[Envoy (Epoch 0)] [2020-07-29 20:38:21.586][28][debug][router] [external/envoy/source/common/router/router.cc:1618] [C0][S18146019625426063270] pool ready
[Envoy (Epoch 0)] [2020-07-29 20:38:21.586][28][debug][filter] [src/envoy/tcp/mixer/filter.cc:100] [C1672] Called tcp filter onRead bytes: 265
[Envoy (Epoch 0)] [2020-07-29 20:38:21.588][28][debug][router] [external/envoy/source/common/router/router.cc:1036] [C0][S18146019625426063270] upstream headers complete: end_stream=false
[Envoy (Epoch 0)] [2020-07-29 20:38:21.588][28][debug][http] [external/envoy/source/common/http/async_client_impl.cc:93] async http request response headers (end_stream=false):
':status', '200'
'content-type', 'application/grpc'
'x-envoy-upstream-service-time', '1'
'date', 'Wed, 29 Jul 2020 20:38:21 GMT'
'server', 'envoy'
[Envoy (Epoch 0)] [2020-07-29 20:38:21.588][28][debug][client] [external/envoy/source/common/http/codec_client.cc:101] [C27] response complete
[Envoy (Epoch 0)] [2020-07-29 20:38:21.588][28][debug][pool] [external/envoy/source/common/http/http2/conn_pool.cc:236] [C27] destroying stream: 0 remaining
[Envoy (Epoch 0)] [2020-07-29 20:38:21.588][28][debug][http] [external/envoy/source/common/http/async_client_impl.cc:119] async http request response trailers:
'grpc-status', '0'
'grpc-message', ''
[Envoy (Epoch 0)] [2020-07-29 20:38:21.588][28][debug][grpc] [src/envoy/utils/grpc_transport.cc:76] Check response: precondition {
  status {
  }
  valid_duration {
    seconds: 60
  }
  valid_use_count: 10000
  referenced_attributes {
    attribute_matches {
      name: 151
      condition: EXACT
    }
    attribute_matches {
      name: 15
      condition: ABSENCE
    }
    attribute_matches {
      name: 174
      condition: ABSENCE
    }
    attribute_matches {
      name: 191
      condition: ABSENCE
    }
    attribute_matches {
      name: 155
      condition: EXACT
    }
    attribute_matches {
      name: 154
      condition: EXACT
    }
    attribute_matches {
      name: 125
      condition: EXACT
    }
    attribute_matches {
      name: 19
      condition: ABSENCE
    }
    attribute_matches {
      name: 203
      condition: ABSENCE
    }
    attribute_matches {
      condition: EXACT
    }
    attribute_matches {
      name: 131
      condition: EXACT
    }
    attribute_matches {
      name: 16
      condition: ABSENCE
    }
    attribute_matches {
      name: 150
      condition: EXACT
    }
    attribute_matches {
      name: 201
      condition: EXACT
    }
    attribute_matches {
      name: 186
      condition: ABSENCE
    }
    attribute_matches {
      name: 3
      condition: ABSENCE
    }
  }
}
[Envoy (Epoch 0)] [2020-07-29 20:38:21.588][28][debug][filter] [src/istio/mixerclient/client_impl.cc:271] CheckResult transport=OK, policy=OK, quota=NA, attempt=0
[Envoy (Epoch 0)] [2020-07-29 20:38:21.588][28][debug][filter] [src/envoy/tcp/mixer/filter.cc:143] Called tcp filter completeCheck: OK
[Envoy (Epoch 0)] [2020-07-29 20:38:21.588][28][debug][filter] [src/istio/control/client_context_base.cc:139] Report attributes: attributes {
  key: "check.cache_hit"
  value {
    bool_value: false
  }
}
attributes {
  key: "connection.event"
  value {
    string_value: "open"
  }
}
attributes {
  key: "connection.i
[Envoy (Epoch 0)] [2020-07-29 20:38:21.589][28][debug][http2] [external/envoy/source/common/http/http2/codec_impl.cc:784] [C27] stream closed: 0
[Envoy (Epoch 0)] [2020-07-29 20:38:21.590][28][debug][filter] [src/envoy/tcp/mixer/filter.cc:123] [C1672] Called tcp filter onWrite bytes: 2243
[Envoy (Epoch 0)] [2020-07-29 20:38:21.591][28][debug][filter] [src/envoy/tcp/mixer/filter.cc:100] [C1672] Called tcp filter onRead bytes: 93
[Envoy (Epoch 0)] [2020-07-29 20:38:21.591][28][debug][filter] [src/envoy/tcp/mixer/filter.cc:123] [C1672] Called tcp filter onWrite bytes: 51
[Envoy (Epoch 0)] [2020-07-29 20:38:21.591][28][debug][filter] [src/envoy/tcp/mixer/filter.cc:100] [C1672] Called tcp filter onRead bytes: 148
[Envoy (Epoch 0)] [2020-07-29 20:38:21.640][28][debug][filter] [src/envoy/tcp/mixer/filter.cc:100] [C70] Called tcp filter onRead bytes: 329
[Envoy (Epoch 0)] [2020-07-29 20:38:21.641][28][debug][filter] [src/envoy/tcp/mixer/filter.cc:123] [C70] Called tcp filter onWrite bytes: 772
[Envoy (Epoch 0)] [2020-07-29 20:38:21.655][28][debug][filter] [src/envoy/tcp/mixer/filter.cc:123] [C1672] Called tcp filter onWrite bytes: 569
[Envoy (Epoch 0)] [2020-07-29 20:38:21.657][28][debug][filter] [src/envoy/tcp/mixer/filter.cc:123] [C1672] Called tcp filter onWrite bytes: 20570
[Envoy (Epoch 0)] [2020-07-29 20:38:21.657][28][debug][misc] [external/envoy/source/common/network/io_socket_error_impl.cc:29] Unknown error code 104 details Connection reset by peer
[Envoy (Epoch 0)] [2020-07-29 20:38:21.657][28][debug][filter] [src/envoy/tcp/mixer/filter.cc:100] [C1672] Called tcp filter onRead bytes: 31
[Envoy (Epoch 0)] [2020-07-29 20:38:21.657][28][debug][connection] [external/envoy/source/common/network/connection_impl.cc:548] [C1672] remote close
[Envoy (Epoch 0)] [2020-07-29 20:38:21.657][28][debug][connection] [external/envoy/source/common/network/connection_impl.cc:193] [C1672] closing socket: 0
[Envoy (Epoch 0)] [2020-07-29 20:38:21.657][28][debug][filter] [src/envoy/tcp/mixer/filter.cc:174] [C1672] Called tcp filter onEvent: 0 upstream 127.0.0.1:5601
[Envoy (Epoch 0)] [2020-07-29 20:38:21.658][28][debug][filter] [src/istio/control/client_context_base.cc:139] Report attributes: attributes {
  key: "check.cache_hit"
  value {
    bool_value: false
  }
}
attributes {
  key: "connection.duration"
  value {
    duration_value {
      nanos: 72001789
    }
  }
}
attrib
[Envoy (Epoch 0)] [2020-07-29 20:38:21.658][28][debug][connection] [external/envoy/source/common/network/connection_impl.cc:104] [C1673] closing data_to_write=31 type=0
[Envoy (Epoch 0)] [2020-07-29 20:38:21.658][28][debug][conn_handler] [external/envoy/source/server/connection_handler_impl.cc:88] [C1672] adding to cleanup list
[Envoy (Epoch 0)] [2020-07-29 20:38:21.658][28][debug][filter] [src/envoy/tcp/mixer/filter.cc:35] Called tcp filter : ~Filter
[Envoy (Epoch 0)] [2020-07-29 20:38:21.658][28][debug][connection] [external/envoy/source/common/network/connection_impl.cc:610] [C1673] write flush complete
[Envoy (Epoch 0)] [2020-07-29 20:38:21.658][28][debug][connection] [external/envoy/source/common/network/connection_impl.cc:193] [C1673] closing socket: 1
[Envoy (Epoch 0)] [2020-07-29 20:38:21.658][28][debug][pool] [external/envoy/source/common/tcp/conn_pool.cc:124] [C1673] client disconnected
[Envoy (Epoch 0)] [2020-07-29 20:38:21.658][28][debug][pool] [external/envoy/source/common/tcp/conn_pool.cc:238] [C1673] connection destroyed

Affected product area (please put an X in all that apply)

[X] Configuration Infrastructure
[ ] Docs
[ ] Installation
[X] Networking
[ ] Performance and Scalability
[ ] Policies and Telemetry
[ ] Security
[ ] Test and Release
[ ] User Experience
[ ] Developer Infrastructure

Affected features (please put an X in all that apply)

[ ] Multi Cluster
[ ] Virtual Machine
[ ] Multi Control Plane

Expected behavior
Everything is supposed to work as before.

Steps to reproduce the bug
Install the ECK stack using the guide, install Service Mesh with the operator, and inject istio-proxy into the stack.

Version (include the output of istioctl version --remote and kubectl version and helm version if you used Helm)
1.4.8

How was Istio installed?
service-mesh operator

Environment where bug was observed (cloud vendor, OS, etc)
Openshift


cleanup steps don't remove NetworkAttachmentDefinitions

I've run the removal/cleanup steps from https://docs.openshift.com/container-platform/4.3/service_mesh/service_mesh_install/removing-ossm.html#ossm-remove-cleanup_removing-ossm. But when I do kubectl get network-attachment-definitions.k8s.cni.cncf.io --all-namespaces I find I still have NetworkAttachmentDefinitions in the namespaces that were part of the member roll. When I reinstall and create the ServiceMeshMemberRoll again, I see the following in the servicemesh operator logs:

{"level":"error","ts":1592296162.4312015,"logger":"kubebuilder.controller","msg":"Reconciler error","controller":"servicemeshmemberroll-controller","request":"istio-system/default","error":"Could not create NetworkAttachmentDefinition seldon/istio-cni: network-attachment-definitions.k8s.cni.cncf.io \"istio-cni\" already exists","errorCauses":[{"error":"Could not create NetworkAttachmentDefinition seldon/istio-cni: network-attachment-definitions.k8s.cni.cncf.io \"istio-cni\" already exists","errorCauses":[{"error":"Could not create NetworkAttachmentDefinition seldon/istio-cni: network-attachment-definitions.k8s.cni.cncf.io \"istio-cni\" already exists","errorCauses":[{"error":"Could not create NetworkAttachmentDefinition seldon/istio-cni: network-attachment-definitions.k8s.cni.cncf.io \"istio-cni\" already exists"}]}]}],"stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/builddir/build/BUILD/OPERATOR/src/github.com/maistra/istio-operator/vendor/github.com/go-logr/zapr/zapr.go:128\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/builddir/build/BUILD/OPERATOR/src/github.com/maistra/istio-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:217\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1\n\t/builddir/build/BUILD/OPERATOR/src/github.com/maistra/istio-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:158\nk8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/builddir/build/BUILD/OPERATOR/src/github.com/maistra/istio-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/builddir/build/BUILD/OPERATOR/src/github.com/maistra/istio-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/builddir/build/BUILD/OPERATOR/src/github.com/maistra/istio-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88"}

I'm thinking that for a full cleanup the removal instructions should also do:

kubectl label ns <namespace> maistra.io/member-of-
kubectl delete network-attachment-definitions.k8s.cni.cncf.io -n <namespace> istio-cni

trying to use this operator as a dependency for another operator

I want to automatically install a service mesh control plane and member roll when the operator I'm working on installs an instance. This is through openshift marketplace/operatorhub. The operator I'm working on is https://github.com/SeldonIO/seldon-deploy-operator

But I find that installing the ServiceMeshControlPlane and ServiceMeshMemberRoll through another operator has some limitations.

Installing them on the command line, I can install the control plane, wait until the pods in istio-system are up, and then install the other, and that works. The wait seems to be necessary, or else the ServiceMeshMemberRoll never gets a status and any members I add to it don't get registered in the mesh.

So I need a similar wait condition in the automated install. I can do that by adding a helm hook with a sleep in it. But once all the istio objects are created, istio somehow blocks the operator itself from creating further resources:

{"level":"error","ts":1592242827.2561026,"logger":"helm.controller","msg":"Release failed","namespace":"seldon","name":"example-seldondeploy","apiVersion":"machinelearning.seldon.io/v1alpha1","kind":"SeldonDeploy","release":"example-seldondeploy","error":"failed to install release: failed post-install: warning: Hook post-install seldon-deploy/templates/openshift-istio.yaml failed: admission webhook \"smmr.validation.maistra.io\" denied the request: user 'system:serviceaccount:openshift-operators:seldon-deploy-operator' does not have permission to access namespace(s): [seldon]"

That doesn't happen from the command line. The extra-weird thing is that post-install hook isn't even trying to install anything in the seldon namespace, at least not directly. It's trying to create a ServiceMeshMemberRoll in the istio-system namespace and mark the seldon namespace as a member. I guess I am not creating any default member roll at that point. Not sure what it does when there is no member roll.

Any ideas on what could be going on here or what I might be able to do to work around it?

OKD 3.11 - Maistra 0.12: Automatic sidecar injection doesn't work

Describe the bug
From an installation from scratch, we tried to deploy the sleep application in the default namespace.
The application is created correctly, but the sidecar is not automatically injected.
According to the Maistra documentation, it's not possible to manually inject the sidecar with istioctl.
However, if I use a template with sidecar injection via the manual approach (the old one from Istio 1.0.6), it works.
Additional element: only the istio-system namespace is displayed in Kiali, nothing more.

Expected behavior
Automatic sidecar injection in the sleep pod in the default namespace.

Steps to reproduce the bug
See the file in the attachment; just follow the documentation and deploy the sleep pod in the default namespace.

Version
Version:
oc v3.11.0+0cbc58b
kubernetes v1.11.0+d4cacc0
features: Basic-Auth GSSAPI Kerberos SPNEGO
Server https://xxx.xxx.xxx.xxx:8443
kubernetes v1.11.0+d4cacc0

Installation
Based on the Maistra 0.12 documentation.
installation.txt
The installation from scratch, with all the commands.
Remark: for the preparation of OKD 3.11, the master-config.patch is applied to the master-config YAML in the openshift-apiserver and openshift-controller-manager directories.
The YAML file for the smcp and smmr resources:
Remark: grafana, kiali and jaeger are enabled.
Remark 2: in the smmr resource, the namespaces default and bookinfo (which does not exist yet) are added, as per the documentation:

apiVersion: maistra.io/v1
kind: ServiceMeshMemberRoll
metadata:
  name: default
spec:
  members:
  - bookinfo
  - default

Environment
more /etc/system-release
CentOS Linux release 7.6.1810 (Core)

Cluster state
I'm running on OKD 3.11

Troubleshooting
Here is the troubleshooting I did, following the approach described in istio/istio#11808
troubleshooting.txt

Could you please help me? I would like to show automatic sidecar injection in my demo,
and I don't see any other test that I can make.
Please ask me for whatever else you need to get a complete view of the environment.

Olivier

envoy istio-proxy container not getting ready

Describe the bug
{{ Succinctly describe the bug }}

Expected behavior
{{ What did you expect to happen? }}

Steps to reproduce the bug
{{ Minimal steps to reproduce the behavior }}

Version
{{ What version of Istio and Kubernetes are you using? Use istioctl version and kubectl version }}

Installation
{{ Please describe how Istio was installed }}

Environment
{{ Which environment, cloud vendor, OS, etc are you using? }}

Cluster state
{{ If you're running on Kubernetes, consider following the
instructions

to generate "istio-dump.tar.gz", then attach it here by dragging and dropping
the file onto this issue. }}

Allow managed network policies in Service Mesh to provide a cleaner and more secure mesh.

We would like the possibility to manage our own network policies for some or all namespaces in one service mesh, by making it configurable via an annotation (e.g. managed_by_maistra: false) on the service mesh member namespace(s), so we can define our own network policies managed by Argo CD. That way we can use our own Helm-templated, Argo CD-managed network policies to isolate the namespaces within the mesh, which reduces the overhead of multiple meshes, unneeded pods, etc. and makes the setup less complex. It also adds more security within the mesh, because specific namespaces can be isolated instead of 'trusting' all namespaces within the mesh.
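
Purely as an illustration of the proposed annotation (the annotation does not exist today, and the namespace name is a placeholder):

apiVersion: v1
kind: Namespace
metadata:
  name: tenant-a-dev
  annotations:
    managed_by_maistra: "false"   # proposed annotation, not an existing Maistra API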

Right now there is only one network policy in each member namespace, which allows traffic from all mesh namespaces. We would like to isolate some of these namespaces within the same mesh using the network policies provided by OpenShift, and that is not possible today.

The alternative is that we need to create multiple control planes, and for each control plane a new namespace is needed where the mesh pods run. Ten control planes, for example, run 50 pods; this does not seem very scalable in a large stretched cluster and soon creates a lot of unneeded overhead. We would like to grow our cluster to more than 50, maybe more than 100 tenants in the future, and we do not want the overhead that comes with the current setup.

What we would like to accomplish is to use one mesh with multiple trusted tenant groups, where each tenant has ns-dev, ns-tst, ns-acc, ns-prd and ns-ops namespaces that are isolated from each other at the network policy level within the mesh, everything managed by Argo CD and rolled out from one Helm template. This seems to work perfectly with a plain Istio install but is not possible with the OpenShift (Maistra) installation.

We know from Red Hat that multiple customers are asking for this kind of functionality.

Affected product area (please put an X in all that apply)

[x] Configuration Infrastructure
[ ] Docs
[ ] Installation
[x] Networking
[ ] Performance and Scalability
[ ] Policies and Telemetry
[x] Security
[ ] Test and Release
[ ] User Experience
[ ] Developer Infrastructure

Affected features (please put an X in all that apply)

[ ] Multi Cluster
[ ] Virtual Machine
[x] Multi Control Plane

Additional context

Question: Any other way than ServiceMeshMemberRoll to add namespaces?

We are trying to automate the creation/update of namespaces. It is a bit awkward for a script that creates a namespace to also have to maintain (insert into, if needed) a list of namespaces in the default ServiceMeshMemberRoll.

Is there any other cleaner mechanism for this? Labels or annotations on the namespace? Additional ServiceMeshMemberRoll objects besides default?
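
For what it's worth, newer Maistra/OSSM releases also provide a ServiceMeshMember resource that can be created inside the namespace that should join the mesh, which avoids editing the central member roll from the namespace-creation script; a rough sketch (the control plane name and the application namespace are assumptions):

apiVersion: maistra.io/v1
kind: ServiceMeshMember
metadata:
  name: default
  namespace: my-app-namespace   # the namespace that should join the mesh
spec:
  controlPlaneRef:
    name: basic-install         # assumed SMCP name
    namespace: istio-system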

EnvoyFilter to remove server name

Bug description

I'm trying to create an EnvoyFilter to remove the server name from ingress gateways. However, it fails with the following error:

apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: remove-server-name
  namespace: istio-system
spec:
  configPatches:
  - applyTo: NETWORK_FILTER # http connection manager is a filter in Envoy
    match:
      listener:
        filterChain:
          filter:
            name: "envoy.http_connection_manager"
    patch:
      operation: MERGE
      value:
        config:
          server_name: "test"
Error from server: error when creating "remove-server-name.yaml": admission webhook "pilot.validation.istio.io" denied the request: configuration is invalid: envoy filter: missing filters

I tried to use the envoy.http_connection_manager filter directly, but it doesn't have an option to remove the server header.
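
Note that OSSM 1.0.x is based on Istio 1.1, whose v1alpha3 EnvoyFilter only accepts the older spec.filters form, which most likely explains the "missing filters" validation error; configPatches are only understood by newer control planes. On a control plane that does support configPatches (roughly Istio 1.6+ / OSSM 2.x), the usual way to hide the Server header is to merge server_header_transformation: PASS_THROUGH into the HTTP connection manager. A sketch, with filter and type names that vary per Envoy/Istio version:

apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: remove-server-header
  namespace: istio-system
spec:
  workloadSelector:
    labels:
      istio: ingressgateway
  configPatches:
  - applyTo: NETWORK_FILTER
    match:
      context: GATEWAY
      listener:
        filterChain:
          filter:
            name: envoy.filters.network.http_connection_manager
    patch:
      operation: MERGE
      value:
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          server_header_transformation: PASS_THROUGH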

Affected product area (please put an X in all that apply)

[ ] Configuration Infrastructure
[ ] Docs
[ ] Installation
[ ] Networking
[ ] Performance and Scalability
[x] Policies and Telemetry
[ ] Security
[ ] Test and Release
[ ] User Experience
[ ] Developer Infrastructure

How was Istio installed?

Red Hat Service Mesh operator 1.0.2

Unable to delete SMMR after deleting the SMCP

I wanted to clean my project, so I deleted the SMCP before I deleted the SMMR.
Then I tried to delete the SMMR but wasn't able to. I also tried deleting the operator, re-installing it, creating a new SMCP, and checking the status of the SMMR, but it still shows "ErrSMCPMissing" and I'm unable to delete it; a force deletion didn't work either.

$ oc get smcp
NAME READY STATUS PROFILES VERSION AGE
basic 9/9 ComponentsReady ["default"] 2.0.0.2 15m
$ oc get smmr
NAME READY STATUS AGE
default 0/1 ErrSMCPMissing 47h
$ oc delete smmr default
servicemeshmemberroll.maistra.io "default" deleted
^C
$ oc delete smmr default --force --grace-period=0
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
servicemeshmemberroll.maistra.io "default" force deleted
^C
$ oc get smmr
NAME READY STATUS AGE
default 0/1 ErrSMCPMissing 47h
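
A delete that hangs like this is usually caused by the maistra.io/istio-operator finalizer still being set on the SMMR while no control plane is left to process it. One workaround that is sometimes used, at your own risk, is to clear the finalizers so the object can actually be removed; a sketch, assuming the SMMR lives in istio-system:

oc -n istio-system patch smmr default --type=merge -p '{"metadata":{"finalizers":null}}'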

Documentation for custom control plane installation is poor

Describe the bug
{{ Documentation for custom control plane installation is really poor.

E.g.:

  1. How to change the Istio ingress gateway controller service type to LoadBalancer is not described

  2. How to enable SDS (Secret Discovery Service) is not described

  3. No mention of istioctl support

  4. How to disable 3scale is not described

  5. Could you provide a full options YAML with comments on each option?

}}

Expected behavior
{{ A less trial-and-error installation experience }}

Steps to reproduce the bug
{{ Creating an ELB load-balancer-based ingress gateway. Creating an SDS-based TLS configuration for the ingress gateway }}

Version
{{ OpenShift 4.2.7 on AWS IPI, Maistra v1.0.3 }}

Installation
{{ operator based installation of Redhat OpenShift service mesh control plane }}

Environment
{{ sandbox on AWS? }}

Cluster state
{{ If you're running on Kubernetes, consider following the
instructions

to generate "istio-dump.tar.gz", then attach it here by dragging and dropping
the file onto this issue. }}

[Operator] Remove transitive dependency to Operator-SDK from the API

Describe the feature request
Depending solely on the API of the Maistra operator (https://github.com/maistra/istio-operator) forces dependent applications to handle dependency resolution for the transitive Operator SDK dependency.

The reason is that the CRD status depends on the maistra-operator version (BuildInfo) (https://github.com/maistra/istio-operator/blob/maistra-2.0/pkg/apis/maistra/status/status.go#L161), and within the init function of BuildInfo the version of the Operator SDK is used (https://github.com/maistra/istio-operator/blob/maistra-2.0/pkg/version/version.go#L43).

It would be great if this indirect dependency could be removed in versions 1.1.x and 2.x, e.g. by initializing BuildInfo with the concrete Operator SDK version from the outside.

Describe alternatives you've considered

Handling the dependency "resolution" in the dependent application (e.g. go.mod)

replace github.com/operator-framework/operator-sdk => github.com/operator-framework/operator-sdk v0.18.0

replace github.com/Azure/go-autorest => github.com/Azure/go-autorest v13.3.3+incompatible

Affected product area (please put an X in all that apply)

Go API

Auto injection fails and sidecar does not get ready after manually injecting

Describe the bug
Fresh install; I tried to verify it with the Bookinfo sample app.
Auto injection did not work, and after manual injection the istio-proxy container reports an error and does not become ready:
2019-09-10T10:10:31.566346Z     info    Envoy proxy is NOT ready: config not received from Pilot (is Pilot running?): cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected
[2019-09-10 10:10:31.704][22][warning][config] [bazel-out/k8-opt/bin/external/envoy/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:86] gRPC config stream closed: 14, upstream connect error or disconnect/reset before headers. reset reason: connection failure
The pilot is running without errors.

Another pod failed with
Error creating: Internal error occurred: failed calling admission webhook "sidecar-injector.istio.io": Post https://istio-sidecar-injector.istio-system.svc:443/inject?timeout=30s: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "cluster.local")
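
One way to check whether the injector webhook's CA bundle matches the certificate the sidecar injector is actually serving (the webhook configuration name is a placeholder and varies per Maistra version):

oc get mutatingwebhookconfigurations
oc get mutatingwebhookconfiguration <injector-webhook-name> \
  -o jsonpath='{.webhooks[0].clientConfig.caBundle}' | base64 -d | openssl x509 -noout -subject -issuer -dates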

Expected behavior
The sidecar is injected and starts.

Steps to reproduce the bug
Follow https://access.redhat.com/documentation/en-us/openshift_container_platform/3.11/html/service_mesh_install/service-mesh-installation

and try to install the sample application

Version
 v1.11.0+d4cacc0 
OKD v3.11.0+ec8630f-265
maistra 0.12.0-4

Installation
https://access.redhat.com/documentation/en-us/openshift_container_platform/3.11/html/service_mesh_install/service-mesh-installation
Followed the documentation and pushed with the below settings.
apiVersion: maistra.io/v1
kind: ServiceMeshControlPlane
metadata:
  name: minimal-install
spec:
  istio:
    global:
      proxy:
        # constrain resources for use in smaller environments
        resources:
          requests:
            cpu: 100m
            memory: 128Mi
          limits:
            cpu: 500m
            memory: 128Mi
      proxy_init:
          image: proxy-init-centos7
    gateways:
      istio-egressgateway:
        # disable autoscaling for use in smaller environments
        autoscaleEnabled: false
      istio-ingressgateway:
        # disable autoscaling for use in smaller environments
        autoscaleEnabled: false
 
    mixer:
      policy:
        # disable autoscaling for use in smaller environments
        autoscaleEnabled: false
 
      telemetry:
        # disable autoscaling for use in smaller environments
        autoscaleEnabled: false
        # constrain resources for use in smaller environments
        resources:
          requests:
            cpu: 100m
            memory: 1G
          limits:
            cpu: 500m
            memory: 4G
 
    pilot:
      # disable autoscaling for use in smaller environments
      autoscaleEnabled: false
      # increase random sampling rate for development/testing
      traceSampling: 100.0
 
    kiali:
       dashboard:
         user: admin
         passphrase: admin
 
    # disable grafana
    grafana:
      enabled: false
 
    # to disable tracing (i.e. jaeger)
    tracing:
      enabled: false
      jaeger:
        # simple, all-in-one strategy
        template: all-in-one
        # production strategy, utilizing elasticsearch
        #template: production-elasticsearch
        # if required. only one instance may use agentStrategy=DaemonSet
        #agentStrategy: DaemonSet

oc new-project myproject
oc adm policy add-scc-to-user anyuid -z default -n myproject
oc adm policy add-scc-to-user privileged -z default -n myproject
oc apply -n myproject -f https://raw.githubusercontent.com/Maistra/bookinfo/master/bookinfo.yaml

Environment
CentOS Linux release 7.6.1810 (Core)
VmWare Cloud
istio-dump.tar.gz

IPVLAN interface is not being created while sidecar is injected

Bug description

I am having an issue with OpenShift ServiceMesh 1.1.0 (Maistra, Istio) in combination with pods that have, or should have, an additional network with IPVLAN. Without sidecar injection the additional IPVLAN interface is created; with the sidecar annotation on the pod, the sidecar is created but the IPVLAN interface (i.e. the additional network annotation) is gone.

I am not sure whether this is expected; I could not find any hint in the documentation that IPVLAN/MACVLAN interfaces are not supported while the sidecar is injected. Is it in any way related to #26 / https://issues.redhat.com/projects/MAISTRA/issues/MAISTRA-518?

Affected product area (please put an X in all that apply)

[ ] Configuration Infrastructure
[x] Docs
[ ] Installation
[x] Networking
[ ] Performance and Scalability
[ ] Policies and Telemetry
[ ] Security
[ ] Test and Release
[ ] User Experience
[ ] Developer Infrastructure

Expected behavior

The expected behaviour is that the IPVLAN interface is still created when the sidecar is injected.

Steps to reproduce the bug

In order to attach the additional interface to the pod you have to annotate the pod accordingly.[1] In my case the needed annotation is "k8s.v1.cni.cncf.io/networks: ipvlan-net". Note that sidecar injection is disabled here:

apiVersion: v1
kind: Pod
metadata:
  name: ip-pod
  labels:
    app: ip
  annotations:
    k8s.v1.cni.cncf.io/networks: ipvlan-net
    sidecar.istio.io/inject: "false"
spec:
  containers:
    - name: ip-container
      image: nginxinc/nginx-unprivileged
      resources: {}
      ports:
        - name: http
          containerPort: 8080
          protocol: TCP

This creates the pod with the additional IPVLAN interface:

~ $ oc rsh -c ip-container  ip-pod1
~ $ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
3: eth0@if31: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1450 qdisc noqueue state UP
link/ether 0a:58:0a:83:00:16 brd ff:ff:ff:ff:ff:ff
inet 10.131.0.22/23 brd 10.131.1.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::1091:8eff:fefe:658a/64 scope link
valid_lft forever preferred_lft forever
4: net1@if2: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UNKNOWN
link/ether 00:1a:4a:16:01:e0 brd ff:ff:ff:ff:ff:ff
inet 10.10.10.10/24 brd 10.10.10.255 scope global net1
valid_lft forever preferred_lft forever
inet6 fe80::1a:4a00:116:1e0/64 scope link
valid_lft forever preferred_lft forever

The next step is to enable sidecar injection and re-create the pod. The creation of the pod and the sidecar succeeds, but the IPVLAN interface is not created. In fact, the sidecar injection process removes the "k8s.v1.cni.cncf.io/networks: ipvlan-net" annotation and puts it in another field:

apiVersion: v1
metadata:
  generateName: ip-pod1
  annotations:
    k8s.v1.cni.cncf.io/networks: v1-1-istio-cni
    k8s.v1.cni.cncf.io/networks-status: ''
    openshift.io/scc: restricted
    sidecar.istio.io/inject: 'true'
    sidecar.istio.io/status: >-  {"version":"773f050195566e10249458ebfe7cbc78e3fd9b728f7fdeaa355ab0cbe171fc2a","initContainers":null,"containers":["istio-proxy"],"volumes":["istio-envoy","istio-certs"],"imagePullSecrets":null}
....
....  
    - name: ISTIO_METAJSON_ANNOTATIONS
      value: >
        {"k8s.v1.cni.cncf.io/networks":"[ { \"name\": \"ipvlan-net\" }
        ]","openshift.io/scc":"restricted","sidecar.istio.io/inject":"true"}  
....

As a result the additional IPVLAN interface is not being created:

~ $ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
3: eth0@if39: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1450 qdisc noqueue state UP
link/ether 0a:58:0a:83:00:1e brd ff:ff:ff:ff:ff:ff
inet 10.131.0.30/23 brd 10.131.1.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::e0c2:5ff:fe51:b360/64 scope link
valid_lft forever preferred_lft forever

Version (include the output of istioctl version --remote and kubectl version)

$ istioctl version
client version: 1.5.1
control plane version: OSSM_1.1.0
data plane version: maistra-1.1.0

$ oc version
Client Version: 4.3.8
Server Version: 4.3.13
Kubernetes Version: v1.16.2

How was Istio installed?

OpenShift ServiceMesh Operator 1.1.0

Environment where bug was observed (cloud vendor, OS, etc)

OpenShift 4.3.13

Additionally, please consider attaching a cluster state archive by attaching
the dump file to this issue.

3scale-istio-adapter container doesn't appear in the namespace istio-system

Describe the bug
The 3scale-istio-adapter is not installed in the namespace istio-system, even though the threescale option is enabled in the YAML file used to deploy the Istio control plane.
The option disablePolicyChecks is set to false in the global section.
Expected behavior
The 3scale-istio-adapter container is installed in the namespace istio-system.

Steps to reproduce the bug
oc create/apply -f "file in attachment"

Version
OKD 3.11
Maistra 0.11 (file in attachment)

Installation
oc apply -n istio-operator -f https://raw.githubusercontent.com/Maistra/istio-operator/maistra-0.11/deploy/maistra-operator.yaml
oc create "file in attachment"

istio-controlplane.txt

Environment
On premise, in a CentOS virtual machine
Cluster state
N/A

Thanks in advance for your help

Olivier

How to use self-signed for jwksUri and issuer in a Policy

@olaf-meyer commented on Mon Apr 06 2020

Issue Overview

I'm sorry, but I'm not sure whether this is the correct place to open this issue. However, I'm running an OpenShift cluster with a self-signed certificate. My application is using a Keycloak server running on this OpenShift cluster for authentication. If I create a Policy that points to this Keycloak server in the attributes jwksUri and issuer, I get the following error in the Istio-pilot pod (in the discovery container):

2020-02-17T12:57:34.675759Z	error	model	Failed to fetch public key from "https://keycloak-olaf-sso.apps.acme.de/auth/realms/customer/protocol/openid-connect/certs": Get https://keycloak-olaf-sso.apps.acme.de/auth/realms/customer/protocol/openid-connect/certs: x509: certificate signed by unknown authority
2020-02-17T12:57:34.675778Z	warn	Failed to fetch jwt public key from "https://keycloak-olaf-sso.apps.acme.de/auth/realms/customer/protocol/openid-connect/certs"

If I mount a secret with my signer CA public certificate (with the file name cert.pem) in the path /etc/pki/tls/ of the Istio pilot, then my application is running.
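
For reference, a rough sketch of that workaround (the deployment and container names are assumptions for a Maistra 1.x install; note that this shadows everything under /etc/pki/tls in the discovery container and is reverted whenever the operator reconciles or upgrades the deployment, which is exactly the concern raised below):

oc -n istio-system create secret generic custom-ca-bundle --from-file=cert.pem=my-signer-ca.pem
oc -n istio-system set volume deployment/istio-pilot --add --name=custom-ca \
  --type=secret --secret-name=custom-ca-bundle --mount-path=/etc/pki/tls --containers=discovery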

Expected Behaviour
Current Behaviour

Is overwriting the file /etc/pki/tls/cert.pem in the Istio pilot container the correct way to add a self-signed certificate to Istio pilot? How can I ensure that this certificate is still used after a version update of the pilot image by the Istio operator? Would it make sense to have an attribute in the ServiceMeshControlPlane resource for this purpose?

OKD 3.11 - Maistra 0.12: Sidecar not pushing metrics to Prometheus

Describe the bug
Even after sending traffic through the service/pod, no metrics appear in Kiali/Prometheus, even though the sidecar is injected into the pod without errors. No 'istio_request_*' metrics are available in the Prometheus UI.

Expected behavior
Metrics/graphs appear in Kiali as traffic is sent through service/proxy/pod.

Steps to reproduce the bug
See Installation Steps

Version
Istio:

Version: 1.0.6
GitRevision: 98598f88f6ee9c1e6b3f03b652d8e0e3cd114fa2
User: root@464fc845-2bf8-11e9-b805-0a580a2c0506
Hub: docker.io/istio
GolangVersion: go1.10.4
BuildStatus: Clean

Kubernetes:

Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.2", GitCommit:"17c77c7898218073f14c8d573582e8d2313dc740", GitTreeState:"clean", BuildDate:"2018-10-24T06:54:59Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"11+", GitVersion:"v1.11.0+d4cacc0", GitCommit:"d4cacc0", GitTreeState:"clean", BuildDate:"2019-08-01T13:19:47Z", GoVersion:"go1.10.8", Compiler:"gc", Platform:"linux/amd64"}

Openshift:

openshift v3.11.0+82a43f6-231
kubernetes v1.11.0+d4cacc0

Installation
Using https://maistra.io/docs/getting_started/install/

My Istio installation YAML:

apiVersion: maistra.io/v1
kind: ServiceMeshControlPlane
metadata:
  name: demo-install
spec:
  istio:
    global:
      proxy:
        # constrain resources for use in smaller environments
        resources:
          requests:
            cpu: 100m
            memory: 128Mi
          limits:
            cpu: 500m
            memory: 128Mi

    gateways:
      istio-egressgateway:
        # disable autoscaling for use in smaller environments
        autoscaleEnabled: false
      istio-ingressgateway:
        # disable autoscaling for use in smaller environments
        autoscaleEnabled: false

    mixer:
      policy:
        # disable autoscaling for use in smaller environments
        autoscaleEnabled: false

      telemetry:
        # disable autoscaling for use in smaller environments
        autoscaleEnabled: false
        # constrain resources for use in smaller environments
        resources:
          requests:
            cpu: 100m
            memory: 1G
          limits:
            cpu: 500m
            memory: 4G

    pilot:
      # disable autoscaling for use in smaller environments
      autoscaleEnabled: false
      # increase random sampling rate for development/testing
      traceSampling: 100.0

    kiali:
      # to disable kiali
      enabled: true

      # create a secret for accessing kiali dashboard with the following credentials
      # dashboard:
      #   user: admin
      #   passphrase: admin

    # disable grafana
    grafana:
      enabled: true

    # to disable tracing (i.e. jaeger)
    tracing:
      enabled: true
      jaeger:
        # simple, all-in-one strategy
        template: all-in-one
        # production strategy, utilizing elasticsearch
        #template: production-elasticsearch
        # if required. only one instance may use agentStrategy=DaemonSet
        #agentStrategy: DaemonSet
---
apiVersion: maistra.io/v1
kind: ServiceMeshMemberRoll
metadata:
  name: default
spec:
  members:
  # a list of projects that should be joined into the service mesh
  # for example, to add the bookinfo project
  - demo-project

My NGINX test deployment/service YAML:

apiVersion: v1
kind: Service
metadata:
  name: nginx
  namespace: demo-project
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    targetPort: 80
    name: http
  selector:
    app: nginx
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx
  namespace: demo-project
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
        version: "1.0"
      annotations:
        sidecar.istio.io/inject: "true"
    spec:
      securityContext:
        runAsUser: 0
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
              protocol: TCP
              name: http

Environment
Cloud Vendor: AWS
OS: CentOS Linux release 7.6.1810 (Core)

Cluster state
N/A

ServiceMeshMemberRoll stops updating

I installed the OpenShift mesh operator v1.1.0 on an OpenShift 4.3.9 cluster. Initially everything seemed to work: I got the bookinfo demo working, as well as a project in a second namespace. But then the member roll stopped updating. I ended up with:

apiVersion: maistra.io/v1
kind: ServiceMeshMemberRoll
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: >
      {"apiVersion":"maistra.io/v1","kind":"ServiceMeshMemberRoll","metadata":{"annotations":{},"finalizers":["maistra.io/istio-operator"],"name":"default","namespace":"istio-system"},"spec":{"members":["seldon","bookinfo","seldon-system"]}}
  selfLink: /apis/maistra.io/v1/namespaces/istio-system/servicemeshmemberrolls/default
  resourceVersion: '8137725'
  name: default
  uid: 371391c3-07ba-4aad-b308-5c034062e688
  creationTimestamp: '2020-04-08T16:17:37Z'
  generation: 12
  namespace: istio-system
  ownerReferences:
    - apiVersion: maistra.io/v1
      kind: ServiceMeshControlPlane
      name: basic-install
      uid: f80bb366-f5f5-4c36-b437-88908082ed7f
  finalizers:
    - maistra.io/istio-operator
spec:
  members:
    - bookinfo
    - seldon
    - seldon-system
status:
  annotations:
    configuredMemberCount: 2/2
  conditions:
    - lastTransitionTime: '2020-04-09T06:47:11Z'
      message: All namespaces have been configured successfully
      reason: Configured
      status: 'True'
      type: Ready
  configuredMembers:
    - bookinfo
    - seldon
  meshGeneration: 1
  meshReconciledVersion: 1.1.0-9.el8-1
  observedGeneration: 4

That lastTransitionTime is two weeks ago. Even if I remove bookinfo as a member, it still remains in the configuredMembers and I can still access it through the istio ingress. Adding new members into the member roll does not get them into currentMembers and those projects don't become accessible through ingress. I've tried restarting all the pods in the istio-system namespace but that doesn't bring things to life either. Seems to get stuck somehow.

I then uninstalled the operator and reinstalled it and it worked. Good that it worked but I don't think I should have had to reinstall the operator.

How does Service Mesh avoid assigning a privileged policy?

I read in the Istio setup documentation for OpenShift that it needs privileged containers to run in the target namespaces, e.g.:
oc adm policy add-scc-to-group privileged system:serviceaccounts:
oc adm policy add-scc-to-group anyuid system:serviceaccounts:

I find that Service Mesh on OpenShift does not need these. My question is: do you have more documents or materials about how these security problems are avoided?

Thank you.

Error processing component prometheus: error: no matches for kind "Route" in version "route.openshift.io/v1"

Bug description

I have followed the instructions for deploying a service mesh control plane, but it fails to create the Route object for Prometheus.
I'm seeing the following error in the operator log:

{"level":"error","ts":1585644318.4681323,"logger":"controller_servicemeshcontrolplane","caller":"controlplane/reconciler.go:302","msg":"Error processing component prometheus","error":"no matches for kind \"Route\" in version \"route.openshift.io/v1\"","errorCauses":[{"error":"no matches for kind \"Route\" in version \"route.openshift.io/v1\""}],"stacktrace":"github.com/maistra/istio-operator/vendor/github.com/go-logr/zapr.(*zapLogger).Error\n\t/builddir/build/BUILD/OPERATOR/src/github.com/maistra/istio-operator/vendor/github.com/go-logr/zapr/zapr.go:128\ngithub.com/maistra/istio-operator/pkg/controller/servicemesh/controlplane.(*ControlPlaneReconciler).pauseReconciliation\n\t/builddir/build/BUILD/OPERATOR/src/github.com/maistra/istio-operator/pkg/controller/servicemesh/controlplane/reconciler.go:302\ngithub.com/maistra/istio-operator/pkg/controller/servicemesh/controlplane.(*ControlPlaneReconciler).Reconcile\n\t/builddir/build/BUILD/OPERATOR/src/github.com/maistra/istio-operator/pkg/controller/servicemesh/controlplane/reconciler.go:225\ngithub.com/maistra/istio-operator/pkg/controller/servicemesh/controlplane.(*ReconcileControlPlane).Reconcile\n\t/builddir/build/BUILD/OPERATOR/src/github.com/maistra/istio-operator/pkg/controller/servicemesh/controlplane/controller.go:271\ngithub.com/maistra/istio-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/builddir/build/BUILD/OPERATOR/src/github.com/maistra/istio-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:207\ngithub.com/maistra/istio-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1\n\t/builddir/build/BUILD/OPERATOR/src/github.com/maistra/istio-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:157\ngithub.com/maistra/istio-operator/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/builddir/build/BUILD/OPERATOR/src/github.com/maistra/istio-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133\ngithub.com/maistra/istio-operator/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/builddir/build/BUILD/OPERATOR/src/github.com/maistra/istio-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134\ngithub.com/maistra/istio-operator/vendor/k8s.io/apimachinery/pkg/util/wait.Until\n\t/builddir/build/BUILD/OPERATOR/src/github.com/maistra/istio-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88"}

Manually creating a route works fine:

apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: prometheus
  namespace: istio-system
  labels:
    app: prometheus
spec:
  to:
    kind: Service
    name: prometheus
  tls:
    termination: reencrypt
oc apply -f route.yml 
route.route.openshift.io/prometheus created

Affected product area (please put an X in all that apply)

[ ] Configuration Infrastructure
[ ] Docs
[x] Installation
[ ] Networking
[ ] Performance and Scalability
[ ] Policies and Telemetry
[ ] Security
[ ] Test and Release
[ ] User Experience
[ ] Developer Infrastructure

Expected behavior
A functional service mesh control plane.

Steps to reproduce the bug
Follow documentation instructions for installation on OpenShift 4.3.

Version (include the output of istioctl version --remote and kubectl version)
Red Hat OpenShift Service Mesh 1.0.10.

oc version
Client Version: 4.4.0-202003060720-2576e48
Server Version: 4.3.8
Kubernetes Version: v1.16.2

How was Istio installed?
Operator through Operatorhub.
Custom resource like this:

apiVersion: maistra.io/v1
kind: ServiceMeshControlPlane
metadata:
  name: minimal-install
spec:
  istio:
    global:
      proxy:
        # constrain resources for use in smaller environments
        resources:
          requests:
            cpu: 100m
            memory: 128Mi
          limits:
            cpu: 500m
            memory: 128Mi

    gateways:
      istio-egressgateway:
        # disable autoscaling for use in smaller environments
        autoscaleEnabled: false
      istio-ingressgateway:
        # disable autoscaling for use in smaller environments
        autoscaleEnabled: false

    mixer:
      policy:
        # disable autoscaling for use in smaller environments
        autoscaleEnabled: false

      telemetry:
        # disable autoscaling for use in smaller environments
        autoscaleEnabled: false
        # constrain resources for use in smaller environments
        resources:
          requests:
            cpu: 100m
            memory: 1G
          limits:
            cpu: 500m
            memory: 4G

    pilot:
      # disable autoscaling for use in smaller environments
      autoscaleEnabled: false
      # increase random sampling rate for development/testing
      traceSampling: 100.0

    kiali:
      # to disable kiali
      enabled: false

      # create a secret for accessing kiali dashboard with the following credentials
      # dashboard:
      #   user: admin
      #   passphrase: admin

    # disable grafana
    grafana:
      enabled: false

    # to disable tracing (i.e. jaeger)
    tracing:
      enabled: false
      jaeger:
        tag: 1.13.1
        # simple, all-in-one strategy
        template: all-in-one
        # production strategy, utilizing elasticsearch
        #template: production-elasticsearch
        # if required. only one instance may use agentStrategy=DaemonSet
        #agentStrategy: DaemonSet

Environment where bug was observed (cloud vendor, OS, etc)
Bare metal. (Lab environment)
OpenShift 4.3.8.

IOR feature for TLS termination configuration

When using OpenShift routes generated from Istio Gateways (IOR), we need the ability to configure the type of TLS termination created. Currently this defaults to passthrough termination here: https://github.com/maistra/istio/blob/maistra-2.0/pilot/pkg/config/kube/ior/route.go#L183.

My hope is that we can discuss how this should be configured and move towards a solution. One option is to look for an annotation on the Istio Gateway configuration and, depending on its value, set the TLS termination type (see the sketch below). There may be another way; let's discuss.
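
Purely as an illustration of that option, with a hypothetical annotation name that does not exist today:

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: my-gateway
  namespace: istio-system
  annotations:
    maistra.io/ior-tls-termination: reencrypt   # hypothetical annotation, not an existing API
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: SIMPLE
      credentialName: my-gateway-cert           # placeholder secret name
    hosts:
    - "app.example.com"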

[ ] Configuration Infrastructure
[ ] Docs
[ ] Installation
[x] Networking
[ ] Performance and Scalability
[ ] Policies and Telemetry
[ ] Security
[ ] Test and Release
[ ] User Experience
[ ] Developer Infrastructure

Istio-Operator: Operator error with Jaeger - no matches for kind "jaeger" in version "jaegertracing.io/v1"

Steps

  1. Follow steps https://github.com/Maistra/istio-operator#installation
{"level":"error","ts":1561709980.3073165,"logger":"controller_servicemeshcontrolplane","caller":"controlplane/pruner.go:96","msg":"Error retrieving resources to prune","type":"jaegertracing.io/v1, Kind=jaeger","error":"no matches for kind \"jaeger\" in version \"jaegertracing.io/v1\"","stacktrace":"github.com/maistra/istio-operator/vendor/github.com/go-logr/zapr.(*zapLogger).Error\n\t/home/alberto/source/go/kiali/kiali/src/github.com/maistra/istio-operator/vendor/github.com/go-logr/zapr/zapr.go:128\ngithub.com/maistra/istio-operator/pkg/controller/servicemesh/controlplane.(*ControlPlaneReconciler).pruneResources\n\t/home/alberto/source/go/kiali/kiali/src/github.com/maistra/istio-operator/pkg/controller/servicemesh/controlplane/pruner.go:96\ngithub.com/maistra/istio-operator/pkg/controller/servicemesh/controlplane.(*ControlPlaneReconciler).prune\n\t/home/alberto/source/go/kiali/kiali/src/github.com/maistra/istio-operator/pkg/controller/servicemesh/controlplane/pruner.go:76\ngithub.com/maistra/istio-operator/pkg/controller/servicemesh/controlplane.(*ControlPlaneReconciler).Reconcile\n\t/home/alberto/source/go/kiali/kiali/src/github.com/maistra/istio-operator/pkg/controller/servicemesh/controlplane/reconciler.go:131\ngithub.com/maistra/istio-operator/pkg/controller/servicemesh/controlplane.(*ReconcileControlPlane).Reconcile\n\t/home/alberto/source/go/kiali/kiali/src/github.com/maistra/istio-operator/pkg/controller/servicemesh/controlplane/controller.go:179\ngithub.com/maistra/istio-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/home/alberto/source/go/kiali/kiali/src/github.com/maistra/istio-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:207\ngithub.com/maistra/istio-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1\n\t/home/alberto/source/go/kiali/kiali/src/github.com/maistra/istio-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:157\ngithub.com/maistra/istio-operator/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/home/alberto/source/go/kiali/kiali/src/github.com/maistra/istio-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133\ngithub.com/maistra/istio-operator/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/home/alberto/source/go/kiali/kiali/src/github.com/maistra/istio-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134\ngithub.com/maistra/istio-operator/vendor/k8s.io/apimachinery/pkg/util/wait.Until\n\t/home/alberto/source/go/kiali/kiali/src/github.com/maistra/istio-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88"}

Define PVC size for Elasticsearch deployments

Describe the feature request

Sorry, I'm not sure if this is the correct git repository for this question/feature request. During the installation, the user can define the number of Elasticsearch nodes, but I haven't found a description of how to define the PVC size for the Elasticsearch pods. Is there a way to do that? Altering the PVC size for Elasticsearch could be a bit tricky.

Describe alternatives you've considered

Affected product area (please put an X in all that apply)

[ ] Configuration Infrastructure
[ ] Docs
[x] Installation
[ ] Networking
[ ] Performance and Scalability
[ ] Policies and Telemetry
[ ] Security
[ ] Test and Release
[ ] User Experience
[ ] Developer Infrastructure

Affected features (please put an X in all that apply)

[ ] Multi Cluster
[ ] Virtual Machine
[ ] Multi Control Plane

Additional context
