This example shows that NSM keeps working after a local NSE dies.
NSC and NSE use the kernel mechanism to connect to each other; an optional check after the first ping below shows how to inspect the resulting kernel interface.
Make sure that you have completed the steps from the basic or memory setup.
Deploy NSC and NSE:
```bash
kubectl apply -k https://github.com/networkservicemesh/deployments-k8s/examples/heal/local-nse-death/nse-before-death?ref=7a2735f6f8c8ed02d058c1a6a2f04846a3d88cad
```
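Optionally, you can list what was created before waiting on readiness; the NSE pod name is generated by its deployment, so the exact names in the output will vary:

```bash
# Quick sanity check: list the pods the example created in its namespace.
kubectl get pods -n ns-local-nse-death
```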
Wait for the applications to be ready:
```bash
kubectl wait --for=condition=ready --timeout=1m pod -l app=alpine -n ns-local-nse-death
kubectl wait --for=condition=ready --timeout=1m pod -l app=nse-kernel -n ns-local-nse-death
```
Ping from NSC to NSE:
```bash
kubectl exec pods/alpine -n ns-local-nse-death -- ping -c 4 172.16.1.100 -I 172.16.1.101
```
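Since the connection uses the kernel mechanism, the NSC should now have an extra kernel network interface carrying the 172.16.1.101 address. One way to inspect it, assuming the alpine image ships the busybox `ip` applet (the NSM-assigned interface name varies by version):

```bash
# Show all interfaces in the NSC pod; one of them should carry 172.16.1.101.
kubectl exec pods/alpine -n ns-local-nse-death -- ip addr
```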
Ping from NSE to NSC:
```bash
kubectl exec deployments/nse-kernel -n ns-local-nse-death -- ping -c 4 172.16.1.101 -I 172.16.1.100
```
Stop NSE pod:
```bash
kubectl scale deployment nse-kernel -n ns-local-nse-death --replicas=0
```
Ping from NSC to NSE should now fail:

```bash
kubectl exec pods/alpine -n ns-local-nse-death -- ping -c 4 172.16.1.100 -I 172.16.1.101 2>&1 | egrep "100% packet loss|Network unreachable|can't set multicast source"
```
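If you want to confirm the old NSE pod is actually gone after scaling to zero:

```bash
# With replicas=0 the deployment should report no matching pods.
kubectl get pods -l app=nse-kernel -n ns-local-nse-death
```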
Apply a patch that updates the NSE deployment with a `version=new` label and new addresses:
```bash
kubectl apply -k https://github.com/networkservicemesh/deployments-k8s/examples/heal/local-nse-death/nse-after-death?ref=7a2735f6f8c8ed02d058c1a6a2f04846a3d88cad
```
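The later steps rely on the patched NSE carrying a `version=new` label and the new addresses (172.16.1.102 and 172.16.1.103, used below). Assuming your kubectl supports jsonpath output, you can confirm the label landed on the deployment's pod template even before scaling it back up:

```bash
# Print the pod template labels of the patched deployment;
# the output should now include version=new.
kubectl get deployment nse-kernel -n ns-local-nse-death \
  -o jsonpath='{.spec.template.metadata.labels}{"\n"}'
```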
Restore NSE pod:
```bash
kubectl scale deployment nse-kernel -n ns-local-nse-death --replicas=1
```
Wait for new NSE to start:
```bash
kubectl wait --for=condition=ready --timeout=1m pod -l app=nse-kernel -l version=new -n ns-local-nse-death
```
Find new NSE pod:
```bash
NEW_NSE=$(kubectl get pods -l app=nse-kernel -l version=new -n ns-local-nse-death --template '{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}')
```
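The go-template above prints one pod name per line; since the deployment runs a single replica, an equivalent lookup via jsonpath is:

```bash
# Same lookup using jsonpath; assumes exactly one matching pod.
NEW_NSE=$(kubectl get pods -l app=nse-kernel -l version=new -n ns-local-nse-death \
  -o jsonpath='{.items[0].metadata.name}')
```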
Ping should now succeed with the newly configured addresses.
Ping from NSC to new NSE:
```bash
kubectl exec pods/alpine -n ns-local-nse-death -- ping -c 4 172.16.1.102 -I 172.16.1.103
```
Ping from new NSE to NSC:
```bash
kubectl exec ${NEW_NSE} -n ns-local-nse-death -- ping -c 4 172.16.1.103 -I 172.16.1.102
```
Delete ns:
```bash
kubectl delete ns ns-local-nse-death
```
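Namespace deletion is asynchronous; if a follow-up script depends on the namespace being fully gone, you can block on it (depending on your kubectl version, this may return an error if the namespace has already been removed):

```bash
# Optional: wait until the namespace object is fully removed.
kubectl wait --for=delete ns/ns-local-nse-death --timeout=1m
```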