
deployments-k8s's Introduction

deployments-k8s


The deployments-k8s repository provides native Kubernetes YAML deployments and markdown examples for Network Service Mesh.

Contents

Requirements

The minimum required kubectl client version is v1.21.0.

Using local applications

By default, deployments-k8s references applications in GitHub ref format. For local development you can use the to-local.sh script:

$ ./to-local.sh

It translates all GitHub refs to local paths. To switch back to GitHub refs, use the to-ref.sh script:

$ ./to-ref.sh

In some cases you may need to share your local changes with someone else. In that case, use to-export.sh instead of to-local.sh:

$ ./to-export.sh

IMPORTANT: changes made by to-export.sh cannot be converted back to GitHub ref format with to-ref.sh. Don't use it for local development; use it only for sharing your branch with someone else.
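
For illustration, a kustomization.yaml entry in GitHub ref format and its local equivalent produced by to-local.sh look roughly like this (the commit hash and the relative depth below are made up; the real values depend on the example and the pinned ref):

# GitHub ref format: a remote kustomize base pinned to a commit
resources:
  - https://github.com/networkservicemesh/deployments-k8s/apps/nsc-kernel?ref=0123456789abcdef0123456789abcdef01234567

# After ./to-local.sh: a relative path into the local working tree
resources:
  - ../../../apps/nsc-kernel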

deployments-k8s's People

Contributors

anastasia-malysheva, arp-est, bellycat77, chunosov, d-uzlov, denis-tingaikin, dualbreath, edwarnicke, github-actions[bot], glazychev-art, haiodo, ljkiraly, mardim91, marinashustowa, mixaster995, nikitaskrynnik, nsmbot, pperiyasamy, sol-0, szvincze, thetadr, tiswanso, vitalygushin, wazsone, xzfc

deployments-k8s's Issues

Scalability Testing: Decompose NSMgr testing

Test plan

Use cases:

  1. Endpoint registers on NSMgr.
  2. Endpoint updates itself on NSMgr.
  3. NSMgr unregisters expired Endpoint from itself.
  4. Forwarder registers on NSMgr.
  5. Forwarder updates itself on NSMgr.
  6. NSMgr unregisters expired Forwarder from itself.
  7. Client sends request to NSMgr.
  8. Client sends refresh request to NSMgr.
  9. Client closes connection on NSMgr.
  10. NSMgr closes expired connection with Client.

Test scenarios:

  1. Endpoint registers on NSMgr:
    • Setup
      1. Start fake Registry (just returning OK to all Register/Unregister events).
      2. Start NSMgr.
    • Test
      1. Start E Endpoints each registering itself on NSMgr.
  2. Endpoint updates itself on NSMgr:
    • Setup
      1. Start fake Registry (just returning OK to all Register/Unregister events).
      2. Start NSMgr.
      3. Start E Endpoints each registering itself on NSMgr with T update time.
    • Test
      1. Wait T time for updates.
  3. NSMgr unregisters expired Endpoint from itself:
    • Setup
      1. Start fake Registry (just returning OK to all Register/Unregister events).
      2. Start NSMgr.
      3. Start E Endpoints each registering itself on NSMgr with T expiration time.
    • Test
      1. Wait T time for unregisters.
  4. Forwarder registers on NSMgr:
    • (1)
  5. Forwarder updates itself on NSMgr:
    • (2)
  6. NSMgr unregisters expired Forwarder from itself:
    • (3)
  7. Client sends request to NSMgr:
    • Setup
      1. Start NSMgr.
      2. Start E Endpoints each registering itself on NSMgr.
    • Test
      1. Start C Clients each requesting E network services (Endpoints).
  8. Client sends refresh request to NSMgr:
    • Setup
      1. Start NSMgr.
      2. Start E Endpoints each registering itself on NSMgr.
      3. Start C Clients each requesting E Endpoints with T refresh time.
    • Test
      1. Wait T time for refreshes.
  9. Client closes connection on NSMgr.
    • Setup
      1. Start NSMgr.
      2. Start E Endpoints each registering itself on NSMgr.
      3. Start C Clients each requesting E Endpoints.
    • Test
      1. Cancel Clients.
  10. NSMgr closes expired connection with Client.
    • Setup
      1. Start NSMgr.
      2. Start E Endpoints each registering itself on NSMgr.
      3. Start C Clients each requesting E Endpoints with T expiration time.
    • Test
      1. Wait T time for closes.

Tasks

  1. Create fake Registry CMD (returns OK for all).
    Estimation: 2h
  2. Create MD integration test for (1) test scenario with E variable parameter and measure time, CPU, memory usage for NSMgr during the test.
    Estimation: 1d
  3. Create MD integration test for (2) test scenario with E, T variable parameters and measure time, CPU, memory usage for NSMgr during the test.
    Estimation: 3h
  4. Create MD integration test for (3) test scenario with E, T variable parameters and measure time, CPU, memory usage for NSMgr during the test.
    Estimation: 3h
  5. Create MD integration test for (4) test scenario with E variable parameter and measure time, CPU, memory usage for NSMgr during the test.
    Estimation: 1d
  6. Create MD integration test for (5) test scenario with E, T variable parameters and measure time, CPU, memory usage for NSMgr during the test.
    Estimation: 3h
  7. Create MD integration test for (6) test scenario with E, T variable parameters and measure time, CPU, memory usage for NSMgr during the test.
    Estimation: 3h
  8. Create fake Forwarder CMD (simply requests NSMgr).
    Estimation: 2h
  9. Create fake Endpoint CMD (returns OK for all).
    Estimation: 2h
  10. Create MD integration test for (7) test scenario with E, C variable parameters and measure time, CPU, memory usage for NSMgr during the test.
    Estimation: 1d
  11. Create MD integration test for (8) test scenario with E, C, T variable parameters and measure time, CPU, memory usage for NSMgr during the test.
    Estimation: 3h
  12. Create MD integration test for (9) test scenario with E, C variable parameters and measure time, CPU, memory usage for NSMgr during the test.
    Estimation: 3h
  13. Create MD integration test for (10) test scenario with E, C, T variable parameters and measure time, CPU, memory usage for NSMgr during the test.
    Estimation: 3h

Estimation

8d

Sometimes heal continues working indefinitely

Expected Behavior

Heal never starts, or it stops right after client pod deletion, or at the very least within a few minutes of client deletion.

Current Behavior

Sometimes heal works indefinitely.

Steps to Reproduce

  1. Deploy clients and endpoints.
  2. Remove everything.
  3. Wait a few minutes.
  4. Check logs.
  5. Repeat until you see messages like [ERRO] [cmd:Nsmgr] [healServer:processHeal] Failed to heal connection cbdd4f32-354d-4983-9caf-92ae5fc42f53: no match endpoints or all endpoints fail: context deadline exceeded

I got this issue after running scalability tests (they are not uploaded anywhere at the moment of creating this issue). These tests are basically Kernel2Kernel tests on steroids: they create many clients and endpoints, and each client makes not one but many requests, so they should be functionally identical to running the Kernel2Kernel test many times.

Context

deployments-k8s git revision 18fdb9c (though, I believe I also saw this exact issue on a revision from 2 weeks ago).

Failure Logs

Warning, 50 MB of logs: nsmgr-2021-07-02T14.01.27+07.00.zip
These are the full logs of nsmgr during several scalability tests and for some time after the last test ended. I added a delay after each test to check whether heal works. After the first few tests there were only register requests from the forwarder, but some time after the last test ended I discovered that the logs contain a lot of heal requests.
Because the tests make a lot of requests (50 during the last test, 50-100 during previous tests), the logs are not pretty.

Alpine container for webhook can fail on update

Expected Behavior

PostgreSQL should be installed correctly

Current Behavior

PostgreSQL does not get installed on Alpine, so TestWebhook fails

Failure Information (for bugs)

Sometimes apk update returns the following errors:

ERROR: http://dl-cdn.alpinelinux.org/alpine/v3.13/main: temporary error (try again later)
ERROR: http://dl-cdn.alpinelinux.org/alpine/v3.13/community: temporary error (try again later)

Proposal

Use a postgres container as the client instead of alpine
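
A minimal sketch of the proposal, assuming any stock postgres image that ships psql is acceptable for the test client (the image tag and pod name are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: postgres-cl                   # illustrative name
spec:
  containers:
    - name: postgres-cl
      image: postgres:13-alpine       # already contains psql, so no apk install step can fail
      command: ["sleep", "infinity"]  # keep the pod running; the test execs psql inside it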

Scalability Testing: Decompose Forwarder testing

Test plan

Use cases:

  1. NSMgr sends request to Forwarder.
  2. NSMgr sends refresh request to Forwarder.
  3. NSMgr closes connection on Forwarder.
  4. Forwarder closes expired connection with NSMgr.

Test scenarios:

  1. NSMgr sends request to Forwarder:
    • Setup
      1. Start fake NSMgr (handling both Client/Remote and Endpoint/Remote sides).
      2. Start Forwarder.
    • Test
      1. Request C connections with Cm Client mechanism and Em Endpoint mechanism.
  2. NSMgr sends refresh request to Forwarder:
    • Setup
      1. Start fake NSMgr.
      2. Start Forwarder.
      3. Request C connections with Cm Client mechanism and Em Endpoint mechanism with T refresh time.
    • Test
      1. Wait T time for refreshes.
  3. NSMgr closes connection on Forwarder:
    • Setup
      1. Start fake NSMgr.
      2. Start Forwarder.
      3. Request C connections with Cm Client mechanism and Em Endpoint mechanism.
    • Test
      1. Close connections.
  4. Forwarder closes expired connection with NSMgr:
    • Setup
      1. Start fake NSMgr.
      2. Start Forwarder.
      3. Request C connections with Cm Client mechanism and Em Endpoint mechanism with T expiration time.
    • Test
      1. Wait T time for closes.

Tasks

  1. Create fake NSMgr CMD.
    Estimation: 3h
  2. Create MD integration test for (1) test scenario with C, Cm, Em variable parameters and measure time, CPU, memory usage for Forwarder during the test.
    Estimation: 1d
  3. Create MD integration test for (2) test scenario with C, Cm, Em, T variable parameters and measure time, CPU, memory usage for Forwarder during the test.
    Estimation: 3h
  4. Create MD integration test for (3) test scenario with C, Cm, Em variable parameters and measure time, CPU, memory usage for Forwarder during the test.
    Estimation: 3h
  5. Create MD integration test for (4) test scenario with C, Cm, Em, T variable parameters and measure time, CPU, memory usage for Forwarder during the test.
    Estimation: 3h

Estimation

3d

Add OPA example

Description

Currently, all examples pass with a correct token chain. To check that an invalid token chain fails, we could add an example with an NSC that tries to request a service with an expired token.

Motivation

This can help show users how OPA policies work, and it can help us track regressions related to OPA.

NSM interface is deleted periodically

Logs

stderr F Jun 17 10:50:48.232 [ERRO] [cmd:[/bin/app]] [healServer:processHeal] Failed to heal connection alpine-cl-0: Error returned from sdk/pkg/networkservice/common/authorize/authorizeClient.Request: rpc error: code = PermissionDenied desc = no sufficient privileges

Steps to reproduce

  1. Run the webhook example: https://github.com/networkservicemesh/deployments-k8s/tree/main/examples/features/webhook
  2. Don't clean up.
  3. Wait for 20-30 minutes.

Actual:

nsm-toggling.webm.zip

Expected:
The NSM interface should not be deleted if the data plane and control plane are fine.

Scalability Testing: Decompose Endpoint testing

Test plan

Use cases:

  1. NSMgr sends request to Endpoint.
  2. NSMgr sends refresh request to Endpoint.
  3. NSMgr closes connection on Endpoint.
  4. Endpoint closes expired connection with NSMgr.

Test scenarios:

  1. NSMgr sends request to Endpoint:
    • Setup
      1. Start fake NSMgr.
      2. Start Endpoint.
    • Test
      1. Request C connections.
  2. NSMgr sends refresh request to Endpoint:
    • Setup
      1. Start fake NSMgr.
      2. Start Endpoint.
      3. Request C connections with T refresh time.
    • Test
      1. Wait T time for refreshes.
  3. NSMgr closes connection on Endpoint:
    • Setup
      1. Start fake NSMgr.
      2. Start Endpoint.
      3. Request C connections.
    • Test
      1. Close connections.
  4. Endpoint closes expired connection with NSMgr:
    • Setup
      1. Start fake NSMgr.
      2. Start Endpoint.
      3. Request C connections with T expiration time.
    • Test
      1. Wait T time for closes.

Tasks

  1. Create fake NSMgr CMD.
    Estimation: 3h
  2. Create MD integration test for (1) test scenario with C variable parameter and measure time, CPU, memory usage for Endpoint during the test.
    Estimation: 1d
  3. Create MD integration test for (2) test scenario with C, T variable parameters and measure time, CPU, memory usage for Endpoint during the test.
    Estimation: 3h
  4. Create MD integration test for (3) test scenario with C variable parameter and measure time, CPU, memory usage for Endpoint during the test.
    Estimation: 3h
  5. Create MD integration test for (4) test scenario with C, T variable parameters and measure time, CPU, memory usage for Endpoint during the test.
    Estimation: 3h

Estimation

3d

iperf examples

Add examples for iperf testing.

iperf is usually run as a client and a server.

The iperf client can be run like any other workload using the nsc client, similar to Kernel2Kernel or Kernel2Vxlan2Kernel, but adding the iperf client as a container to the Pod spec for nsc-kernel and the iperf server as a container to the Pod spec for nse-kernel.
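
A hedged sketch of what the extra container might look like in the nsc-kernel Pod spec (the images and tag below are assumptions, not taken from this repository); the nse-kernel Pod spec would get a matching container running iperf3 -s:

      containers:
        - name: nsc
          image: ghcr.io/networkservicemesh/cmd-nsc:latest   # placeholder image/tag
        - name: iperf-client
          image: networkstatic/iperf3                        # assumed public iperf3 image
          command: ["sleep", "infinity"]                     # the test execs "iperf3 -c <server ip>" here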

Please note: you can use gotestmd to build Go-based tests in integration-tests and run them in integration-k8s-kind. This way you can simply document how to use iperf, and the tests will be automatically generated from that documentation.

While integration-k8s-kind won't optimize performance, it's a very fast environment to get going in while developing those tests.

Scalability Testing: System NSM testing

Test plan

Use cases:

  1. Client requests Local Endpoint.
  2. Client requests Remote Endpoint.

Test scenarios:

  1. Client requests Local Endpoint.
    • Setup
      1. Start NSM (Registry, NSMgr, Forwarder) on single node.
      2. Start E endpoints implementing N network services on the same node.
    • Test
      1. Start C clients each requesting R network services on the same node.
  2. Client requests Remote Endpoint.
    • Setup
      1. Start NSM (Registry, NSMgr, Forwarder) on 2 nodes.
      2. Start E endpoints implementing N network services on the second node.
    • Test
      1. Start C clients each requesting R network services on the first node.

Tasks

  1. Investigate how to run an MD integration test with variable parameters so we can build graphs of time, CPU, and memory usage of NSM components during the test as functions of those parameters.
    Estimation: 3d
  2. Create MD integration test for (1) test scenario with E, N, C, R variable parameters and measure time, CPU, memory usage for all NSM components during the test.
    Estimation: 2d
  3. Create MD integration test for (2) test scenario with E, N, C, R variable parameters and measure time, CPU, memory usage for all NSM components during the test.
    Estimation: 1d

Estimation

6d

Use spire federation in interdomain examples

Description

Currently, we share a cert between two SPIRE servers to make NSM work across two domains. To generate the cert and key we use openssl. This can be improved if we rework the SPIRE deployments and SPIRE examples to use the SPIRE federation feature.

Implementation details

  1. Remove the use of openssl in the SPIRE examples: https://github.com/networkservicemesh/deployments-k8s/tree/main/examples/spire
  2. Apply https://spiffe.io/docs/latest/architecture/federation/readme/
  3. Test that all examples still work in CI.

Use nodeName instead of kubernetes.io/hostname label

Overview

Currently this repository uses the spec.nodeSelector[kubernetes.io/hostname] label for defining node affinity.
This label is equivalent to spec.nodeName unless the label was deliberately changed.
Using the spec.nodeName field for node affinity would be simpler.

Example:

Replace this:

spec:
  nodeSelector:
    kubernetes.io/hostname: ${NODES[0]}

with this:

spec:
  nodeName: ${NODES[0]}

Scalability Testing: Decompose Registry testing

Test plan

Use cases:

  1. NSMgr registers Endpoint on Registry.
  2. NSMgr updates Endpoint on Registry.
  3. NSMgr unregisters Endpoint from Registry.
  4. NSMgr creates a find request to Registry.
  5. NSMgr creates a watching find request to Registry.
  6. Registry unregisters expired Endpoint from itself.

Test scenarios:

  1. NSMgr registers Endpoint on Registry:
    • Setup
      1. Start Registry.
    • Test
      1. Start N fake NSMgrs each registering E Endpoints.
  2. NSMgr updates Endpoint on Registry:
    • Setup
      1. Start Registry.
      2. Start N fake NSMgrs each registering E Endpoints with T update time.
    • Test
      1. Wait T time for updates.
  3. NSMgr unregisters Endpoint from Registry:
    • Setup
      1. Start Registry.
      2. Start N fake NSMgrs each registering E Endpoints.
    • Test
      1. Unregister endpoints.
  4. NSMgr creates a find request to Registry:
    • Setup
      1. Start Registry.
      2. Start N fake NSMgrs each registering E Endpoints.
    • Test
      1. Create F find requests from each NSMgr to Registry.
  5. NSMgr creates a watching find request to Registry:
    • Setup
      1. Start Registry.
      2. Start N fake NSMgrs each registering E Endpoints.
    • Test
      1. Create W watching find requests from each NSMgr to Registry.
  6. Registry unregisters expired Endpoint from itself:
    • Setup
      1. Start Registry.
      2. Start N fake NSMgrs each registering E Endpoints with T expiration time.
    • Test
      1. Wait T time for unregisters.

Tasks

  1. Create fake NSMgr CMD.
    Estimation: 1d
  2. Create MD integration test for (1) test scenario with N, E variable parameters and measure time, CPU, memory usage for the memory and k8s Registries during the test.
    Estimation: 1d
  3. Create MD integration test for (2) test scenario with N, E, T variable parameters and measure time, CPU, memory usage for the memory and k8s Registries during the test.
    Estimation: 3h
  4. Create MD integration test for (3) test scenario with N, E variable parameters and measure time, CPU, memory usage for the memory and k8s Registries during the test.
    Estimation: 3h
  5. Create MD integration test for (4) test scenario with N, E, F variable parameters and measure time, CPU, memory usage for the memory and k8s Registries during the test.
    Estimation: 1d
  6. Create MD integration test for (5) test scenario with N, E, W variable parameters and measure time, CPU, memory usage for the memory and k8s Registries during the test.
    Estimation: 3h
  7. Create MD integration test for (6) test scenario with N, E, T variable parameters and measure time, CPU, memory usage for the memory and k8s Registries during the test.
    Estimation: 3h

Estimation

5d

Scalability Testing: Decompose Client testing

Test plan

Use cases:

  1. Client sends request to NSMgr.
  2. Client sends refresh request to NSMgr.

Test scenarios:

  1. Client sends request to NSMgr:
    • Setup
      1. Start fake NSMgr (same as fake Endpoint from #1016).
    • Test
      1. Start Client requesting C connections.
  2. Client sends refresh request to NSMgr:
    • Setup
      1. Start fake NSMgr.
      2. Start Client requesting C connections with T refresh time.
      3. Wait some time, but less than T.
    • Test
      1. Wait the remaining time until T for refreshes.

Tasks

  1. Create MD integration test for (1) test scenario with C variable parameter and measure time, CPU, memory usage for Client during the test.
    Estimation: 3h
  2. Create MD integration test for (2) test scenario with C, T variable parameters and measure time, CPU, memory usage for Client during the test.
    Estimation: 3h

Estimation

1d

cmd-nsc-vpp: issues with a non-default NSM_NAME

Hi!

When I set the NSM_NAME parameter of cmd-nsc-vpp to anything other than the default, I get this error:
Jun 17 16:19:42.714 [ERRO] [cmd:/bin/cmd-nsc-vpp] (19.1) proxyListener unable to listen on /tmp/memifproxy/endpoint-nsc-795886dc88-577t6-f96ec20f-ede0-499f-b0cc-819e8f735869/memif.socket: listen unixpacket /tmp/memifproxy/endpoint-nsc-795886dc88-577t6-f96ec20f-ede0-499f-b0cc-819e8f735869/memif.socket: bind: invalid argument

The interface seems to be in place for a second, but there are no neighbors and the pod keeps restarting.
vppctl show interface address
local0 (dn):
memif1/0 (up):
L3 172.16.1.96/32

I followed this guide: https://github.com/networkservicemesh/deployments-k8s/tree/main/examples/use-cases/Memif2Memif
If I don't set the NSM_NAME parameter, it works correctly.

Is it possible that the given name is not handled correctly somewhere, or could you help me figure out where to look for the issue?

Scale from Zero not using multiple matches

Why is:

---
apiVersion: networkservicemesh.io/v1
kind: NetworkService
metadata:
  name: autoscale-icmp-responder
  namespace: nsm-system
spec:
  payload: ETHERNET
  name: autoscale-icmp-responder
  matches:
    - source_selector:
      routes:
        - destination_selector:
            app: nse-icmp-responder
            nodeName: "{{.nodeName}}"
        - destination_selector:
            app: icmp-responder-supplier

Still being done with a single match rather than two matches as specified:

networkservicemesh/sdk#892

I thought we had fixed this already?

IP collisions when using more than 1 replica of icmp-responder

Overview

When we set the replicas value for an nse-kernel deployment to more than 1, we create several identical endpoints (endpoints with the same configuration).

When a client makes several requests, and these requests go to different endpoints with the same configuration, those endpoints can give the client the same IP addresses, and the second request will overwrite the connection created by the first.
While we could call this an invalid configuration in the case of completely distinct endpoints, from the user's side there is nothing to be done when using replicas, so this must be treated as a valid configuration.

The issue is present when the following preconditions are met:

  1. There are at least 2 endpoints with the same IP prefix in their config
  2. A client makes at least 2 requests, which can go to those endpoints

Context

While making scalability tests I slightly modified the icmp responder to register itself for several services, set 2 replicas, and added many connections to the client config.
I immediately stumbled upon instability in the tests: sometimes clients were getting all required connections, sometimes they were a few connections short. This was caused, as explained above, by the fact that one client went to two servers and got the same IP for two distinct connections, and the second connection overwrote the first.

Possible solutions

Solution 1

We have the excluded_prefixes field in the request connection context, which we can use to pass the list of occupied IP addresses, so the endpoint doesn't hand them out again.

However, this solution assumes that we know occupied addresses beforehand.
But if we were to make requests from several threads, we wouldn't know which addresses could be taken by other threads.
I'm not sure if it would be possible to solve this without some kind of global mutex in the NSC, which would prevent parallel requests.

Solution 2

Maybe we could have some synchronization at the moment of creating a connection.
At the time of writing I haven't researched how hard it would be to add such synchronization, or which exact components we would need to change or create.

Also, I would think that maybe there is some method of non-destructive interface assignment, so that we would get an error on an attempt to use an already occupied IP address instead of silently overwriting the previous connection.

Solution 3

We could add some synchronization for endpoints.
Imagine we had some kind of registry that held the data on already used IP prefixes.
Endpoints could query this registry on startup, so when we create several replicas of an endpoint, each instance would get its own IP prefix and they simply wouldn't have the same config.

Intermediate conclusion

Solution 1 seems simple but not universal to me.
Solution 2 seems very promising, but I have yet to verify how hard (if at all possible) it would be to implement.
Solution 3 would probably require quite a lot of changes inside all of the endpoints, and we would also introduce a new concept, which would increase the complexity of the system; I'm not sure that is justified.

Add resource requests for vpp-applications

Expected Behavior

The application reserves only the quantity of resources it needs

Current Behavior

VPP apps reserve more resources than they need

Context

If we specify a container's limits but not its requests, the requests are set to match the limits, so the container reserves the full limit.
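
A hedged example of the proposed change for a VPP-based app container (the numbers are placeholders, not measured values):

        resources:
          requests:
            cpu: 200m        # roughly what the app needs
            memory: 128Mi
          limits:
            cpu: "2"
            memory: 512Mi    # upper bound; no longer doubles as the request

With an explicit requests block, the scheduler reserves only the requested amount instead of the full limit.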

Destructive Chaos Testing: Integration tests

Overview

Cover the following heal scenarios with integration tests:

  1. Local NSMgr(r) +
    1. none - #1789
    2. Local Endpoint(d) - moved to #1928
    3. Remote NSMgr(r/d) - moved to #1928
    4. Remote Forwarder(d) - moved to #1928
    5. Remote Endpoint(d) - moved to #1928
  2. Local Forwarder(d) - #1789
  3. Local Endpoint(r/d) - #1789
  4. Remote NSMgr(r/d) +
    1. none - #1789
    2. Remote Endpoint(d) - moved to #1928
  5. Remote Forwarder(d) - #1789
  6. Remote Endpoint(r/d) - #1789
  7. Registry(r) - #1789

Blockers

Sometimes Close doesn't reach nsmgr

When running scalability tests (not yet uploaded) I found an issue with heal working indefinitely.
When investigating it, I found that one of the reasons this was happening was that nsmgr never received Close for some of the connections.
Logs: logs.zip
There are a bunch of connections in these logs. You can use grep 79b30cf1-1629-440c-8507-7e535d60295d to get the part of the logs that shows the issue.

nsm-spire image

In old-gen NSM there was an nsm-spire sidecar image for handling SPIRE configuration. Is there any plan to create one for the next gen as well? Or is there a recommended way to handle SPIRE workload registration based on config files, without entry create commands?

Request for information about Roles and ClusterRoles

Could you add some example Roles and ClusterRoles?
I need to define Roles/ClusterRoles for cmd-registry-memory, cmd-nsmgr and cmd-forwarder-vpp and have no info about exactly which resources and verbs are needed for each of them. An example YAML would be very welcome.
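
Not an authoritative answer, but a hypothetical sketch of the shape such an example could take; the actual resources and verbs needed by cmd-registry-memory, cmd-nsmgr and cmd-forwarder-vpp are exactly what this issue asks to have documented:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: nsm-component                                            # hypothetical name
rules:
  - apiGroups: ["networkservicemesh.io"]
    resources: ["networkservices", "networkserviceendpoints"]    # assumed NSM custom resources
    verbs: ["get", "list", "watch", "create", "update", "delete"]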

Add topology aware scale-from-zero running of NSEs in response to NSC demand example

Implementation details

  1. Add a new app cmd-nse-supplier-k8s in apps/
  2. Add a new example for the feature suite. See the ipv6 example from the feature suite: https://github.com/networkservicemesh/deployments-k8s/tree/main/examples/features/ipv6/Kernel2Kernel
  3. Make sure the scenario from networkservicemesh/sdk#821 (comment) is working.

Note: the Network Service for cmd-nse-supplier-k8s should be applied via kubectl

References

networkservicemesh/sdk#892

memif2vxlan2memif: errors in nsmgr, no response for nsc

Hi,

I'm trying to deploy the memif2vxlan2memif example, but I'm getting strange errors in the local nsmgr of the nsc. On one machine the connection is fine, on the other the NSC gets no response, but the error is present in both tries.

I'm posting the logs from the NSC and the NSMgr in both cases (bad, working). Could you please take a look at them?
bad-memif2vxlan2memif-nsc.txt
bad-memif2vxlan2memif-nsmgr.txt
working-memif2vxlan2memif-nsc.txt
working-memif2vxlan2memif-nsmgr.txt

The difference between the machines is the k8s version:
Working machine:

Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.1", GitCommit:"5e58841cce77d4bc13713ad2b91fa0d961e69192", GitTreeState:"clean", BuildDate:"2021-05-12T14:18:45Z", GoVersion:"go1.16.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.4", GitCommit:"ec2760d6d916781de466541a6babb4309766c995", GitTreeState:"clean", BuildDate:"2021-02-27T17:18:03Z", GoVersion:"go1.15.8", Compiler:"gc", Platform:"linux/amd64"}

NOT working machine:

Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.5", GitCommit:"6b1d87acf3c8253c123756b9e61dac642678305f", GitTreeState:"clean", BuildDate:"2021-03-18T01:10:43Z", GoVersion:"go1.15.8", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.2", GitCommit:"66049e3b21efe110454d67df4fa62b08ea79a19b", GitTreeState:"clean", BuildDate:"2019-05-28T05:33:22Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}

Thanks!

Add example or explanation on how to interconnect ns endpoints (nse)

Good afternoon:

With this new approach for NSM, I have noticed that you have uploaded several cases of connectivity between NSC and NSE. However, there is no example of how to interconnect different NSEs in order to build a bigger/more complex service with NSM.

If possible, would you consider adding an example of this kind to the repository? In my humble opinion, it would be a very positive addition to the examples that are already present.

Thanks!

[Question] Do we need to add Caddyfile based NSE?

Motivation

We could create one cmd-${TODO: consider name}-nse that would parse a config based on the Caddyfile format. The main point is to make it possible to build NSEs based on the SDKs without building a new app. That could be used for testing (we would no longer need to add a new cmd repo for each new typical NSE) and for users who don't want to code an NSE and just want to play with NSM.

Example

To create a copy of the current cmd-icmp-responder we could use this config:

my-endpoint {
   point2pointipam
   recvfd
   mechanisms {
      'kernel' {
            kernel
      }
   }
   dnscontext
   sendfd
}

To create a copy of the current cmd-icmp-responder-vpp we could use this config:

my-vpp-endpoint {
   point2pointipam
   mechanisms {
      'memif' {
        sendfd
        up
        connectioncontext
        tag
        memif
      }
   }
}

Let me know if this direction is interesting, then I can provide more technical details on how to achieve it :)

Reduce token lifetime from 24h to 10 minutes for each application

Motivation

Previously we've fixed issues with refresh/timeout:

networkservicemesh/sdk#778
networkservicemesh/sdk#650
networkservicemesh/sdk#520

and now we can reduce the token expiration for each application to 10 minutes (it is 24h at the moment).

Also, we plan to add refresh/timeout examples, but it can be done separately:
https://github.com/orgs/networkservicemesh/projects/1#card-55928687
https://github.com/orgs/networkservicemesh/projects/1#card-55928794
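
A hedged sketch of the per-application change, assuming the applications read the token lifetime from an NSM_MAX_TOKEN_LIFETIME environment variable (verify the exact variable name for each cmd-* application):

        env:
          - name: NSM_MAX_TOKEN_LIFETIME   # assumed variable name
            value: "10m"                   # down from the current 24h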
