
toolforge: new k8s: issues with routing interfering with DNS in the cluster as well as the webhook controllers
Closed, Resolved · Public

Description

I noticed this today while working on T237643: toolforge: new k8s: figure out metrics / observability:

root@tools-k8s-control-1:~# kubectl logs kube-apiserver-tools-k8s-control-1 -n kube-system
[...]
I1119 13:49:20.526975       1 trace.go:81] Trace[1080159501]: "Create /api/v1/namespaces/ingress-nginx/pods" (started: 2019-11-19 13:48:50.512338394 +0000 UTC m=+1121687.622499007) (total time: 30.014606337s):
Trace[1080159501]: [30.014606337s] [30.000487738s] END
W1119 13:49:20.527584       1 dispatcher.go:105] Failed calling webhook, failing open registry-admission.tools.wmflabs.org: failed calling webhook "registry-admission.tools.wmflabs.org": Post https://registry-admission.registry-admission.svc:443/?timeout=30s: context deadline exceeded
E1119 13:49:20.527737       1 dispatcher.go:106] failed calling webhook "registry-admission.tools.wmflabs.org": Post https://registry-admission.registry-admission.svc:443/?timeout=30s: context deadline exceeded

Apparently this prevents the ingress-nginx pod from being created.
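
For reference, the admission webhooks registered in the cluster can be listed with kubectl (a generic check, not output from this cluster):

root@tools-k8s-control-1:~# kubectl get validatingwebhookconfigurations,mutatingwebhookconfigurations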

Event Timeline

The logs are full of etcd errors that are supposedly fixed in the version we are using, which is odd: watch chan error: etcdserver: mvcc: required revision has been compacted. Digging around and trying to find issues with the webhook.

We are seeing those errors in toolsbeta as well. Weird.

I deleted the webhook controller definition to see what happens.

109s        Warning   Failed      pod/test-shell   Failed to pull image "docker-registry.tools.wmflabs.org/maintain-kubeusers:latest": rpc error: code = Unknown desc = Error response from daemon: manifest for docker-registry.tools.wmflabs.org/maintain-kubeusers:latest not found: manifest unknown: manifest unknown

It...can't pull images at all? There's something wrong with the cluster overall as far as I can tell.

No, whew, that was my error. I've only uploaded the beta version of the image. Using that for now, since I'm just using this for a shell.

The cluster networking is the issue. It's busted somehow. For instance:

/app # ping www.google.com
ping: bad address 'www.google.com'

You can reproduce with the shell in the pod here:

root@tools-k8s-control-1:~# kubectl exec -it test-shell -- /bin/ash

Looks like it is just DNS?

/app # ping 172.16.6.127
PING 172.16.6.127 (172.16.6.127): 56 data bytes
64 bytes from 172.16.6.127: seq=0 ttl=63 time=0.453 ms
64 bytes from 172.16.6.127: seq=1 ttl=63 time=3.799 ms
64 bytes from 172.16.6.127: seq=2 ttl=63 time=1.390 ms
^C
--- 172.16.6.127 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.453/1.880/3.799 ms
/app # ping tools-puppetmaster-01.tools.eqiad.wmflabs
ping: bad address 'tools-puppetmaster-01.tools.eqiad.wmflabs'

Yes, that should work. It works in toolsbeta.

The webhook controller works using coredns, so this makes sense.

So @aborrero, we have a DNS problem. The controller is no longer in the toolchain (to put it back, just kubectl apply -f service.yaml on that controller's checkout again), and the controller uses a DNS name in its service. (A name we may want to change, btw, because it should probably be a cluster-local name, but I can mess with that once we know why DNS doesn't work inside pods.)
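
For context on why DNS matters here: the webhook configuration references a Service in its clientConfig, and the apiserver resolves that as <service>.<namespace>.svc. Reconstructed from the URL in the error above (a sketch, not the actual manifest), the stanza looks roughly like:

webhooks:
- name: registry-admission.tools.wmflabs.org
  clientConfig:
    service:
      name: registry-admission
      namespace: registry-admission
      path: /

which is what the apiserver expands into https://registry-admission.registry-admission.svc:443/.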

The service for kube-dns (coredns) looks fine:
kube-system kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 13d
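
The endpoints behind that service are worth a look too (command sketch, output omitted); they should list the coredns pod IPs on port 53:

root@tools-k8s-control-1:~# kubectl -n kube-system get endpoints kube-dns
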
Are we using security groups on these? We could be blocking something?

The only problem with that pod I created for testing is that you only have ping and nc. If you want more tools in THAT pod... we'd need to fix DNS lol.

Nah, not sgs. That wouldn't make sense because we use those mostly for whitelisting.

I've been investigating network packet flows with tcpdump:

root@tools-k8s-control-1:~# kubectl exec -it test-shell -- /bin/ash
/app # ping www.google.es
ping: bad address 'www.google.es'
aborrero@tools-k8s-worker-2:~$ sudo tcpdump -i any udp port 53
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes
13:54:07.234363 IP 192.168.34.132.46564 > 10.96.0.10.domain: 35317+ A? www.google.es.default.svc.tools.local. (55)
13:54:07.234437 IP 192.168.34.132.46564 > 192.168.48.129.domain: 35317+ A? www.google.es.default.svc.tools.local. (55)
13:54:07.234468 IP 192.168.34.132.46564 > 10.96.0.10.domain: 35679+ AAAA? www.google.es.default.svc.tools.local. (55)
13:54:07.234475 IP 192.168.34.132.46564 > 192.168.48.129.domain: 35679+ AAAA? www.google.es.default.svc.tools.local. (55)
13:54:07.234922 IP tools-k8s-worker-2.tools.eqiad.wmflabs.32952 > cloud-recursor0.wikimedia.org.domain: 13673+ PTR? 10.0.96.10.in-addr.arpa. (41)
13:54:07.235831 IP cloud-recursor0.wikimedia.org.domain > tools-k8s-worker-2.tools.eqiad.wmflabs.32952: 13673 NXDomain 0/1/0 (105)
13:54:07.235853 IP cloud-recursor0.wikimedia.org.domain > tools-k8s-worker-2.tools.eqiad.wmflabs.32952: 13673 NXDomain 0/1/0 (105)
13:54:07.239178 IP tools-k8s-worker-2.tools.eqiad.wmflabs.38522 > cloud-recursor0.wikimedia.org.domain: 53755+ PTR? 129.48.168.192.in-addr.arpa. (45)
13:54:07.246095 IP tools-k8s-worker-2.tools.eqiad.wmflabs.43710 > cloud-recursor0.wikimedia.org.domain: 14261+ PTR? 143.154.80.208.in-addr.arpa. (45)
13:54:07.248813 IP cloud-recursor0.wikimedia.org.domain > tools-k8s-worker-2.tools.eqiad.wmflabs.43710: 14261 1/0/0 PTR cloud-recursor0.wikimedia.org. (88)
13:54:07.248825 IP cloud-recursor0.wikimedia.org.domain > tools-k8s-worker-2.tools.eqiad.wmflabs.43710: 14261 1/0/0 PTR cloud-recursor0.wikimedia.org. (88)
13:54:09.737118 IP 192.168.34.132.46564 > 10.96.0.10.domain: 35317+ A? www.google.es.default.svc.tools.local. (55)
13:54:09.737171 IP 192.168.34.132.46564 > 192.168.48.129.domain: 35317+ A? www.google.es.default.svc.tools.local. (55)
13:54:09.737230 IP 192.168.34.132.46564 > 10.96.0.10.domain: 35679+ AAAA? www.google.es.default.svc.tools.local. (55)
13:54:09.737237 IP 192.168.34.132.46564 > 192.168.48.129.domain: 35679+ AAAA? www.google.es.default.svc.tools.local. (55)
aborrero@cloudservices1003:~$ sudo tcpdump -i any host 172.16.0.103
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes
13:54:07.238123 IP 172.16.0.103.32952 > cloud-recursor0.wikimedia.org.domain: 13673+ PTR? 10.0.96.10.in-addr.arpa. (41)
13:54:07.238697 IP cloud-recursor0.wikimedia.org.domain > 172.16.0.103.32952: 13673 NXDomain 0/1/0 (105)
13:54:07.240625 IP 172.16.0.103.56543 > cloud-recursor0.wikimedia.org.domain: 6587+ PTR? 132.34.168.192.in-addr.arpa. (45)
13:54:07.241243 IP cloud-recursor0.wikimedia.org.domain > 172.16.0.103.56543: 6587 NXDomain 0/1/0 (94)
13:54:07.242731 IP 172.16.0.103.38522 > cloud-recursor0.wikimedia.org.domain: 53755+ PTR? 129.48.168.192.in-addr.arpa. (45)
13:54:07.243055 IP cloud-recursor0.wikimedia.org.domain > 172.16.0.103.38522: 53755 NXDomain 0/1/0 (94)
13:54:07.249867 IP 172.16.0.103.43710 > cloud-recursor0.wikimedia.org.domain: 14261+ PTR? 143.154.80.208.in-addr.arpa. (45)
13:54:07.250857 IP cloud-recursor0.wikimedia.org.domain > 172.16.0.103.43710: 14261 1/0/0 PTR cloud-recursor0.wikimedia.org. (88)

There is apparently good network connectivity between the node and the DNS server. But look how weird the queries are: www.google.es.default.svc.tools.local.
In fact, that query never reaches the DNS server.

If I change the query a bit, I get interesting results:

root@tools-k8s-control-1:~# kubectl exec -it test-shell -- /bin/ash
/app # ping www.google.es.
ping: bad address 'www.google.es.'

(note trailing dot)

aborrero@tools-k8s-worker-2:~$ sudo tcpdump -i any udp port 53
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes
14:04:19.560224 IP 192.168.34.132.53222 > 10.96.0.10.domain: 46471+ A? www.google.es. (31)
14:04:19.560283 IP 192.168.34.132.53222 > 192.168.48.131.domain: 46471+ A? www.google.es. (31)
14:04:19.560870 IP tools-k8s-worker-2.tools.eqiad.wmflabs.56619 > cloud-recursor0.wikimedia.org.domain: 29798+ PTR? 10.0.96.10.in-addr.arpa. (41)
14:04:19.560898 IP 192.168.34.132.53222 > 10.96.0.10.domain: 46803+ AAAA? www.google.es. (31)
14:04:19.560913 IP 192.168.34.132.53222 > 192.168.48.131.domain: 46803+ AAAA? www.google.es. (31)
14:04:19.562496 IP cloud-recursor0.wikimedia.org.domain > tools-k8s-worker-2.tools.eqiad.wmflabs.56619: 29798 NXDomain 0/1/0 (105)
14:04:19.562502 IP cloud-recursor0.wikimedia.org.domain > tools-k8s-worker-2.tools.eqiad.wmflabs.56619: 29798 NXDomain 0/1/0 (105)
14:04:19.563222 IP tools-k8s-worker-2.tools.eqiad.wmflabs.55580 > cloud-recursor0.wikimedia.org.domain: 31604+ PTR? 131.48.168.192.in-addr.arpa. (45)
14:04:19.564105 IP tools-k8s-worker-2.tools.eqiad.wmflabs.36998 > cloud-recursor0.wikimedia.org.domain: 9757+ PTR? 143.154.80.208.in-addr.arpa. (45)
14:04:22.063553 IP 192.168.34.132.53222 > 10.96.0.10.domain: 46471+ A? www.google.es. (31)
14:04:22.063642 IP 192.168.34.132.53222 > 192.168.48.131.domain: 46471+ A? www.google.es. (31)
14:04:22.063725 IP 192.168.34.132.53222 > 10.96.0.10.domain: 46803+ AAAA? www.google.es. (31)
14:04:22.063737 IP 192.168.34.132.53222 > 192.168.48.131.domain: 46803+ AAAA? www.google.es. (31)
^C
13 packets captured
20 packets received by filter
7 packets dropped by kernel

But again, somehow the query isn't being redirected to the upstream DNS servers. It seems like CoreDNS refuses to forward the query upstream.
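
If CoreDNS itself were refusing to forward, that should show in its Corefile. A way to check, along with roughly what a stock kubeadm Corefile of that era contains (a sketch, not the actual configmap from this cluster):

root@tools-k8s-control-1:~# kubectl -n kube-system get configmap coredns -o yaml
# a stock kubeadm Corefile is authoritative for the cluster domain and
# forwards everything else to the node's resolvers:
#   forward . /etc/resolv.conf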

More experimentation.

For whatever reason, both coredns pods are running on tools-k8s-control-1:

root@tools-k8s-control-1:~# kubectl get pods --all-namespaces -o wide | grep coredns
kube-system          coredns-5c98db65d4-knjcc                      1/1     Running   0          14d   192.168.48.131   tools-k8s-control-1   <none>           <none>
kube-system          coredns-5c98db65d4-qzmvf                      1/1     Running   0          14d   192.168.48.129   tools-k8s-control-1   <none>           <none>

I can query coredns from the node the pods are running on:

root@tools-k8s-control-1:~# dig @192.168.48.129 www.google.es +short
172.253.122.94
root@tools-k8s-control-1:~# dig @192.168.48.131 www.google.es +short
172.253.122.94

And I see packet traces of coredns doing what it is supposed to do, asking upstream:

17:59:27.814376 IP 172.16.0.104.34383 > 192.168.48.131.53: 43995+ [1au] A? www.google.es. (54)
17:59:27.814708 IP 192.168.48.131.47542 > 208.80.154.143.53: 43995+ [1au] A? www.google.es. (54)
17:59:27.816776 IP 208.80.154.143.53 > 192.168.48.131.47542: 43995 1/0/1 A 172.253.122.94 (58)
17:59:27.816969 IP 192.168.48.131.53 > 172.16.0.104.34383: 43995 1/0/1 A 172.253.122.94 (71)

I can even ping the pod from the same node!

root@tools-k8s-control-1:~# ping 192.168.48.129
PING 192.168.48.129 (192.168.48.129) 56(84) bytes of data.
64 bytes from 192.168.48.129: icmp_seq=1 ttl=64 time=0.066 ms
64 bytes from 192.168.48.129: icmp_seq=2 ttl=64 time=0.082 ms
^C
--- 192.168.48.129 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 14ms
rtt min/avg/max/mdev = 0.066/0.074/0.082/0.008 ms
root@tools-k8s-control-1:~# ping 192.168.48.131
PING 192.168.48.131 (192.168.48.131) 56(84) bytes of data.
64 bytes from 192.168.48.131: icmp_seq=1 ttl=64 time=0.074 ms
64 bytes from 192.168.48.131: icmp_seq=2 ttl=64 time=0.113 ms
^C
--- 192.168.48.131 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 19ms
rtt min/avg/max/mdev = 0.074/0.093/0.113/0.021 ms

Then, I see the problem is clearly a filtering/routing one, since I can't communicate from other nodes, even though the routes exist and are known to the system:

aborrero@tools-k8s-control-2:~$ ip r get 192.168.48.129
192.168.48.129 via 172.16.0.104 dev tunl0 src 192.168.226.128 uid 18194 
    cache expires 269sec mtu 1440 
aborrero@tools-k8s-control-2:~$ dig @192.168.48.131 www.google.es +short
[..]
;; connection timed out; no servers could be reached
aborrero@tools-k8s-control-2:~$ ping 192.168.48.131
[..]
2 packets transmitted, 0 received, 100% packet loss, time 23ms
aborrero@tools-k8s-control-2:~$ ping 192.168.48.129
[..]
2 packets transmitted, 0 received, 100% packet loss, time 30ms
aborrero@tools-k8s-worker-2:~$ dig @192.168.48.131 www.google.es +short
[..]
;; connection timed out; no servers could be reached
aborrero@tools-k8s-worker-2:~$ ping 192.168.48.131
[..]
2 packets transmitted, 0 received, 100% packet loss, time 30ms
aborrero@tools-k8s-worker-2:~$ ping 192.168.48.129
[..]
2 packets transmitted, 0 received, 100% packet loss, time 19ms
aborrero@tools-k8s-worker-2:~$ ip r get 192.168.48.131
192.168.48.131 via 172.16.0.104 dev tunl0 src 192.168.34.128 uid 18194 
    cache expires 525sec mtu 1440 
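
Worth noting: those routes go over tunl0, which is Calico's IPIP encapsulation, so inter-node pod traffic travels as IP protocol 4 packets between the node addresses. Filtering that passes TCP/UDP/ICMP but not protocol 4 would produce exactly this kind of 100% loss. A way to check whether the encapsulated packets actually arrive (interface name assumed):

aborrero@tools-k8s-control-1:~$ sudo tcpdump -ni eth0 'ip proto 4'   # watch for IPIP from the other node while pinging the pod IP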

Oh! We are in a partially upgraded state on tools.

root@tools-k8s-control-1:~# kubectl version
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.5", GitCommit:"20c265fef0741dd71a66480e35bd69f18351daea", GitTreeState:"clean", BuildDate:"2019-10-15T19:16:51Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.1", GitCommit:"4485c6f18cee9a5d3c3b4e523bd27972b1b53892", GitTreeState:"clean", BuildDate:"2019-07-18T09:09:21Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}

That could be introducing some quirks since that's a 1.15.1 to 1.15.5 jump. We should fully complete the upgrade process there before doing much else, in case the version spread is an issue for kube-proxy for some reason as it talks to kubelet (since kubelet will be 1.15.5). A single patch version shouldn't do this, but that's four patch versions, so maybe.
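
For the record, the kubeadm flow for that is roughly this (a sketch; per-node package handling omitted):

root@tools-k8s-control-1:~# kubeadm upgrade plan
root@tools-k8s-control-1:~# kubeadm upgrade apply v1.15.6   # the version we ended up on, per the SAL entries below
# then upgrade the kubelet/kubectl packages on every node and restart kubelet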

In toolsbeta, we are still in a consistent state because we haven't started the upgrade there yet:

root@toolsbeta-test-k8s-control-1:~# kubectl version
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.1", GitCommit:"4485c6f18cee9a5d3c3b4e523bd27972b1b53892", GitTreeState:"clean", BuildDate:"2019-07-18T09:18:22Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.1", GitCommit:"4485c6f18cee9a5d3c3b4e523bd27972b1b53892", GitTreeState:"clean", BuildDate:"2019-07-18T09:09:21Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
Bstorm renamed this task from "toolforge: new k8s: issues with the registry admission controller" to "toolforge: new k8s: issues with routing interfering with DNS in the cluster as well as the webhook controllers". Nov 21 2019, 2:06 AM

Walking through the routing of the service:

root@toolsbeta-test-k8s-control-1:~# iptables -L -t nat | grep dns
KUBE-MARK-MASQ  udp  -- !192.168.0.0/16       10.96.0.10           /* kube-system/kube-dns:dns cluster IP */ udp dpt:domain
KUBE-SVC-TCOU7JCQXEZGVUNU  udp  --  anywhere             10.96.0.10           /* kube-system/kube-dns:dns cluster IP */ udp dpt:domain
KUBE-MARK-MASQ  tcp  -- !192.168.0.0/16       10.96.0.10           /* kube-system/kube-dns:dns-tcp cluster IP */ tcp dpt:domain
KUBE-SVC-ERIFXISQEP7F7OF4  tcp  --  anywhere             10.96.0.10           /* kube-system/kube-dns:dns-tcp cluster IP */ tcp dpt:domain
KUBE-MARK-MASQ  tcp  -- !192.168.0.0/16       10.96.0.10           /* kube-system/kube-dns:metrics cluster IP */ tcp dpt:9153
KUBE-SVC-JD5MR3NA4I4DYORP  tcp  --  anywhere             10.96.0.10           /* kube-system/kube-dns:metrics cluster IP */ tcp dpt:9153
root@toolsbeta-test-k8s-control-1:~# iptables -L KUBE-SVC-ERIFXISQEP7F7OF4 -t nat
Chain KUBE-SVC-ERIFXISQEP7F7OF4 (1 references)
target     prot opt source               destination         
KUBE-SEP-O7FPKCO4DCIFN2KV  all  --  anywhere             anywhere             statistic mode random probability 0.50000000000
KUBE-SEP-5LX2FYRDFZJCI2T4  all  --  anywhere             anywhere            
root@toolsbeta-test-k8s-control-1:~# iptables -L KUBE-SEP-O7FPKCO4DCIFN2KV -t nat
Chain KUBE-SEP-O7FPKCO4DCIFN2KV (1 references)
target     prot opt source               destination         
KUBE-MARK-MASQ  all  --  192.168.132.132      anywhere            
DNAT       tcp  --  anywhere             anywhere             tcp to:192.168.132.132:53
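
Reading that walk: traffic to the ClusterIP hits the KUBE-SVC-… chain, which picks one of the two endpoint chains with probability 0.5, and each KUBE-SEP-… chain DNATs to one coredns pod IP. The whole resolved path can also be dumped in one go with something like:

root@toolsbeta-test-k8s-control-1:~# iptables-save -t nat | grep -E 'KUBE-(SVC|SEP)'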

Hopefully you can follow what I did there. Basically, like you were saying, things get where they are supposed to but don't work (and I could ping that pod, 192.168.132.132, from both control nodes 1 and 2, btw... for whatever reason). The service routing looks ok on the other nodes as well, but if anything isn't routing between them for pods, that would be a problem. I'm not sure I have time to dig into the pods right now, but I can tomorrow a bit if you haven't beaten me to completing the upgrade. Completing the upgrade might fix things, especially since this is the sort of mysterious error that smells like an incompatibility somewhere.

Just documenting what I tried since we are working very async 😁

Mentioned in SAL (#wikimedia-operations) [2019-11-21T10:17:59Z] <arturo> update buster-wikimedia thirdparty/kubeadm-k8s packages (newer version will be used to handle T238654)

Mentioned in SAL (#wikimedia-cloud) [2019-11-21T10:28:11Z] <arturo> install kubeadm 1.15.6 on worker/control nodes in the new k8s cluster (T238654)

Mentioned in SAL (#wikimedia-cloud) [2019-11-21T10:29:40Z] <arturo> upgrading new k8s cluster version to 1.15.6 using kubeadm (T238654)

Mentioned in SAL (#wikimedia-cloud) [2019-11-21T11:44:03Z] <arturo> upgrading new k8s kubelet version to 1.15.6 (T238654)

Mentioned in SAL (#wikimedia-cloud) [2019-11-21T11:49:16Z] <arturo> upgrading new k8s kubectl version to 1.15.6 (T238654)

After the upgrade, the behavior has changed somewhat:

root@tools-k8s-control-1:~# kubectl exec -it test-shell -- /bin/ash
/app # ping www.google.es
ping: bad address 'www.google.es'

/app # ping www.google.es.
PING www.google.es. (172.217.164.131): 56 data bytes
64 bytes from 172.217.164.131: seq=0 ttl=57 time=0.558 ms
64 bytes from 172.217.164.131: seq=1 ttl=57 time=2.066 ms
^C
--- www.google.es. ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.558/1.312/2.066 ms

Depending on how you query the FQDN, you get results or not.
I enabled query logging in coredns and you can see:

root@tools-k8s-control-1:~# kubectl logs -n kube-system coredns-5c98db65d4-4swsb
.:53
2019-11-21T10:34:34.273Z [INFO] CoreDNS-1.3.1
2019-11-21T10:34:34.273Z [INFO] linux/amd64, go1.11.4, 6b56a9c
CoreDNS-1.3.1
linux/amd64, go1.11.4, 6b56a9c
2019-11-21T10:34:34.273Z [INFO] plugin/reload: Running configuration MD5 = 7aff417a3a8b19f74ef8546359bec62b
2019-11-21T10:34:34.275Z [INFO] 127.0.0.1:50731 - 44432 "HINFO IN 1894934715647939326.1635828024008875143. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.001982241s
2019-11-21T11:49:54.647Z [INFO] 192.168.34.132:48055 - 30124 "A IN www.google.es. udp 31 false 512" NOERROR qr,rd,ra 60 0.010737109s
2019-11-21T11:49:54.783Z [INFO] 192.168.34.132:48055 - 30435 "AAAA IN www.google.es. udp 31 false 512" NOERROR qr,rd,ra 72 0.146851257s
2019-11-21T11:50:02.003Z [INFO] 192.168.34.132:42669 - 33269 "AAAA IN www.google.es.default.svc.tools.local. udp 55 false 512" NXDOMAIN qr,aa,rd 142 0.000298104s
2019-11-21T11:50:02.003Z [INFO] 192.168.34.132:42669 - 32836 "A IN www.google.es.default.svc.tools.local. udp 55 false 512" NXDOMAIN qr,aa,rd 142 0.000234002s
2019-11-21T11:50:02.003Z [INFO] 192.168.34.132:39329 - 63396 "AAAA IN www.google.es.svc.tools.local. udp 47 false 512" NXDOMAIN qr,aa,rd 134 0.000109636s
2019-11-21T11:50:02.003Z [INFO] 192.168.34.132:39329 - 63066 "A IN www.google.es.svc.tools.local. udp 47 false 512" NXDOMAIN qr,aa,rd 134 0.000189766s
2019-11-21T11:50:02.004Z [INFO] 192.168.34.132:46077 - 55705 "AAAA IN www.google.es.tools.local. udp 43 false 512" NXDOMAIN qr,aa,rd 130 0.000141579s
2019-11-21T11:50:02.004Z [INFO] 192.168.34.132:46077 - 55476 "A IN www.google.es.tools.local. udp 43 false 512" NXDOMAIN qr,aa,rd 130 0.000232162s

It appends .default.svc.tools.local., .svc.tools.local., or .tools.local. to the FQDN unless you request it with a trailing dot. Not sure if this is the expected behavior.
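
For what it's worth, this looks like standard resolver search-list behavior rather than a CoreDNS quirk: with dnsPolicy: ClusterFirst, kubelet writes a resolv.conf along these lines into each pod (a sketch, assuming the tools.local cluster domain seen in the captures):

/app # cat /etc/resolv.conf
nameserver 10.96.0.10
search default.svc.tools.local svc.tools.local tools.local
options ndots:5

With ndots:5, any name with fewer than five dots gets the search suffixes tried first; a trailing dot marks the name as absolute and skips the search list entirely.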

The original issue is gone. Closing task now, please reopen if required.

Thanks! @aborrero, I'll see if I can corner a CoreDNS expert or dev here today about the weirdness there.

Basic DNS lookups in pods don't work as expected, so I'd expect the webhook controller to also be broken. So in a weird way, we still have the issue... it just has a different cause now.

root@tools-k8s-control-1:~# kubectl get ValidatingWebhookConfiguration 
NAME                CREATED AT
ingress-admission   2019-11-07T13:24:34Z

You see I haven't enabled the controller yet.

Tried killing both coredns pods. This changed nothing. Overall, requiring a dot at the end will keep the cluster from working right.

BTW, I upgraded both the tools and toolsbeta clusters to the same version (1.15.6), so at least we have a consistent testing experience.

That said, I'll enable the controller and see how things behave.

> BTW, I upgraded both the tools and toolsbeta clusters to the same version (1.15.6), so at least we have a consistent testing experience.

Perfect! Thank you.

root@toolsbeta-test-k8s-control-1:~# kubectl exec -it -n maintain-kubeusers maintain-kubeusers-7b6bb8f79d-xc9qb -- /bin/ash
/app # ping www.google.com
PING www.google.com (172.217.13.228): 56 data bytes
64 bytes from 172.217.13.228: seq=0 ttl=57 time=7.346 ms
64 bytes from 172.217.13.228: seq=1 ttl=57 time=2.528 ms
^C
--- www.google.com ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 2.528/4.937/7.346 ms

This is not caused by the version then! :)

Just noticed the security groups are different in toolsbeta and in tools. In the image below, the top one is tools and the bottom one is toolsbeta:

image.png (436×557 px, 28 KB)

But no... we seem to have pretty open rules there.

> Nah, not sgs. That wouldn't make sense because we use those mostly for whitelisting.

right.

This is one of the simplest test cases I could think of to exercise this issue:

In toolsbeta:

root@toolsbeta-test-k8s-control-1:~# ping 192.168.23.215 -c1
PING 192.168.23.215 (192.168.23.215) 56(84) bytes of data.
64 bytes from 192.168.23.215: icmp_seq=1 ttl=63 time=1.14 ms

--- 192.168.23.215 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 1.144/1.144/1.144/0.000 ms

In tools:

aborrero@tools-k8s-control-1:~$ ping 192.168.50.9 -c1
PING 192.168.50.9 (192.168.50.9) 56(84) bytes of data.

--- 192.168.50.9 ping statistics ---
1 packets transmitted, 0 received, 100% packet loss, time 0ms

In both cases, the dst IP is the IP addr of the nginx-ingress pod running on a worker node.

I enabled netfilter tracing for those packets as described above, trying to understand what's different in the iptables rulesets.
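
The tracing setup is roughly this (a sketch; the exact match I used is not shown here):

root@toolsbeta-test-k8s-control-1:~# modprobe nf_log_ipv4
root@toolsbeta-test-k8s-control-1:~# sysctl -w net.netfilter.nf_log.2=nf_log_ipv4
root@toolsbeta-test-k8s-control-1:~# iptables -t raw -A OUTPUT -p icmp -d 192.168.23.215 -j TRACE
# the TRACE lines then show up in the kernel log (dmesg / journalctl -k)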

toolsbeta
[Fri Nov 22 10:54:42 2019] TRACE: raw:OUTPUT:policy:3 IN= OUT=tunl0 SRC=192.168.132.128 DST=192.168.23.215 LEN=84 TOS=0x00 PREC=0x00 TTL=64 ID=57031 DF PROTO=ICMP TYPE=8 CODE=0 ID=27401 SEQ=1 UID=18194 GID=500 
[Fri Nov 22 10:54:42 2019] TRACE: mangle:OUTPUT:policy:1 IN= OUT=tunl0 SRC=192.168.132.128 DST=192.168.23.215 LEN=84 TOS=0x00 PREC=0x00 TTL=64 ID=57031 DF PROTO=ICMP TYPE=8 CODE=0 ID=27401 SEQ=1 UID=18194 GID=500 
[Fri Nov 22 10:54:42 2019] TRACE: nat:OUTPUT:rule:1 IN= OUT=tunl0 SRC=192.168.132.128 DST=192.168.23.215 LEN=84 TOS=0x00 PREC=0x00 TTL=64 ID=57031 DF PROTO=ICMP TYPE=8 CODE=0 ID=27401 SEQ=1 UID=18194 GID=500 
[Fri Nov 22 10:54:42 2019] TRACE: nat:cali-OUTPUT:rule:1 IN= OUT=tunl0 SRC=192.168.132.128 DST=192.168.23.215 LEN=84 TOS=0x00 PREC=0x00 TTL=64 ID=57031 DF PROTO=ICMP TYPE=8 CODE=0 ID=27401 SEQ=1 UID=18194 GID=500 
[Fri Nov 22 10:54:42 2019] TRACE: nat:cali-fip-dnat:return:1 IN= OUT=tunl0 SRC=192.168.132.128 DST=192.168.23.215 LEN=84 TOS=0x00 PREC=0x00 TTL=64 ID=57031 DF PROTO=ICMP TYPE=8 CODE=0 ID=27401 SEQ=1 UID=18194 GID=500 
[Fri Nov 22 10:54:42 2019] TRACE: nat:cali-OUTPUT:return:2 IN= OUT=tunl0 SRC=192.168.132.128 DST=192.168.23.215 LEN=84 TOS=0x00 PREC=0x00 TTL=64 ID=57031 DF PROTO=ICMP TYPE=8 CODE=0 ID=27401 SEQ=1 UID=18194 GID=500 
[Fri Nov 22 10:54:42 2019] TRACE: nat:OUTPUT:rule:2 IN= OUT=tunl0 SRC=192.168.132.128 DST=192.168.23.215 LEN=84 TOS=0x00 PREC=0x00 TTL=64 ID=57031 DF PROTO=ICMP TYPE=8 CODE=0 ID=27401 SEQ=1 UID=18194 GID=500 
[Fri Nov 22 10:54:42 2019] TRACE: nat:KUBE-SERVICES:return:20 IN= OUT=tunl0 SRC=192.168.132.128 DST=192.168.23.215 LEN=84 TOS=0x00 PREC=0x00 TTL=64 ID=57031 DF PROTO=ICMP TYPE=8 CODE=0 ID=27401 SEQ=1 UID=18194 GID=500 
[Fri Nov 22 10:54:42 2019] TRACE: nat:OUTPUT:policy:4 IN= OUT=tunl0 SRC=192.168.132.128 DST=192.168.23.215 LEN=84 TOS=0x00 PREC=0x00 TTL=64 ID=57031 DF PROTO=ICMP TYPE=8 CODE=0 ID=27401 SEQ=1 UID=18194 GID=500 
[Fri Nov 22 10:54:42 2019] TRACE: filter:OUTPUT:rule:1 IN= OUT=tunl0 SRC=192.168.132.128 DST=192.168.23.215 LEN=84 TOS=0x00 PREC=0x00 TTL=64 ID=57031 DF PROTO=ICMP TYPE=8 CODE=0 ID=27401 SEQ=1 UID=18194 GID=500 
[Fri Nov 22 10:54:42 2019] TRACE: filter:cali-OUTPUT:rule:4 IN= OUT=tunl0 SRC=192.168.132.128 DST=192.168.23.215 LEN=84 TOS=0x00 PREC=0x00 TTL=64 ID=57031 DF PROTO=ICMP TYPE=8 CODE=0 ID=27401 SEQ=1 UID=18194 GID=500 
[Fri Nov 22 10:54:42 2019] TRACE: filter:cali-OUTPUT:rule:5 IN= OUT=tunl0 SRC=192.168.132.128 DST=192.168.23.215 LEN=84 TOS=0x00 PREC=0x00 TTL=64 ID=57031 DF PROTO=ICMP TYPE=8 CODE=0 ID=27401 SEQ=1 UID=18194 GID=500 
[Fri Nov 22 10:54:42 2019] TRACE: filter:cali-to-host-endpoint:return:1 IN= OUT=tunl0 SRC=192.168.132.128 DST=192.168.23.215 LEN=84 TOS=0x00 PREC=0x00 TTL=64 ID=57031 DF PROTO=ICMP TYPE=8 CODE=0 ID=27401 SEQ=1 UID=18194 GID=500 
[Fri Nov 22 10:54:42 2019] TRACE: filter:cali-OUTPUT:return:7 IN= OUT=tunl0 SRC=192.168.132.128 DST=192.168.23.215 LEN=84 TOS=0x00 PREC=0x00 TTL=64 ID=57031 DF PROTO=ICMP TYPE=8 CODE=0 ID=27401 SEQ=1 UID=18194 GID=500 
[Fri Nov 22 10:54:42 2019] TRACE: filter:OUTPUT:rule:2 IN= OUT=tunl0 SRC=192.168.132.128 DST=192.168.23.215 LEN=84 TOS=0x00 PREC=0x00 TTL=64 ID=57031 DF PROTO=ICMP TYPE=8 CODE=0 ID=27401 SEQ=1 UID=18194 GID=500 
[Fri Nov 22 10:54:42 2019] TRACE: filter:KUBE-SERVICES:return:1 IN= OUT=tunl0 SRC=192.168.132.128 DST=192.168.23.215 LEN=84 TOS=0x00 PREC=0x00 TTL=64 ID=57031 DF PROTO=ICMP TYPE=8 CODE=0 ID=27401 SEQ=1 UID=18194 GID=500 
[Fri Nov 22 10:54:42 2019] TRACE: filter:OUTPUT:rule:3 IN= OUT=tunl0 SRC=192.168.132.128 DST=192.168.23.215 LEN=84 TOS=0x00 PREC=0x00 TTL=64 ID=57031 DF PROTO=ICMP TYPE=8 CODE=0 ID=27401 SEQ=1 UID=18194 GID=500 
[Fri Nov 22 10:54:42 2019] TRACE: filter:KUBE-FIREWALL:return:2 IN= OUT=tunl0 SRC=192.168.132.128 DST=192.168.23.215 LEN=84 TOS=0x00 PREC=0x00 TTL=64 ID=57031 DF PROTO=ICMP TYPE=8 CODE=0 ID=27401 SEQ=1 UID=18194 GID=500 
[Fri Nov 22 10:54:42 2019] TRACE: filter:OUTPUT:policy:4 IN= OUT=tunl0 SRC=192.168.132.128 DST=192.168.23.215 LEN=84 TOS=0x00 PREC=0x00 TTL=64 ID=57031 DF PROTO=ICMP TYPE=8 CODE=0 ID=27401 SEQ=1 UID=18194 GID=500 
[Fri Nov 22 10:54:42 2019] TRACE: mangle:POSTROUTING:policy:1 IN= OUT=tunl0 SRC=192.168.132.128 DST=192.168.23.215 LEN=84 TOS=0x00 PREC=0x00 TTL=64 ID=57031 DF PROTO=ICMP TYPE=8 CODE=0 ID=27401 SEQ=1 UID=18194 GID=500 
[Fri Nov 22 10:54:42 2019] TRACE: nat:POSTROUTING:rule:1 IN= OUT=tunl0 SRC=192.168.132.128 DST=192.168.23.215 LEN=84 TOS=0x00 PREC=0x00 TTL=64 ID=57031 DF PROTO=ICMP TYPE=8 CODE=0 ID=27401 SEQ=1 UID=18194 GID=500 
[Fri Nov 22 10:54:42 2019] TRACE: nat:cali-POSTROUTING:rule:1 IN= OUT=tunl0 SRC=192.168.132.128 DST=192.168.23.215 LEN=84 TOS=0x00 PREC=0x00 TTL=64 ID=57031 DF PROTO=ICMP TYPE=8 CODE=0 ID=27401 SEQ=1 UID=18194 GID=500 
[Fri Nov 22 10:54:42 2019] TRACE: nat:cali-fip-snat:return:1 IN= OUT=tunl0 SRC=192.168.132.128 DST=192.168.23.215 LEN=84 TOS=0x00 PREC=0x00 TTL=64 ID=57031 DF PROTO=ICMP TYPE=8 CODE=0 ID=27401 SEQ=1 UID=18194 GID=500 
[Fri Nov 22 10:54:42 2019] TRACE: nat:cali-POSTROUTING:rule:2 IN= OUT=tunl0 SRC=192.168.132.128 DST=192.168.23.215 LEN=84 TOS=0x00 PREC=0x00 TTL=64 ID=57031 DF PROTO=ICMP TYPE=8 CODE=0 ID=27401 SEQ=1 UID=18194 GID=500 
[Fri Nov 22 10:54:42 2019] TRACE: nat:cali-nat-outgoing:return:2 IN= OUT=tunl0 SRC=192.168.132.128 DST=192.168.23.215 LEN=84 TOS=0x00 PREC=0x00 TTL=64 ID=57031 DF PROTO=ICMP TYPE=8 CODE=0 ID=27401 SEQ=1 UID=18194 GID=500 
[Fri Nov 22 10:54:42 2019] TRACE: nat:cali-POSTROUTING:return:4 IN= OUT=tunl0 SRC=192.168.132.128 DST=192.168.23.215 LEN=84 TOS=0x00 PREC=0x00 TTL=64 ID=57031 DF PROTO=ICMP TYPE=8 CODE=0 ID=27401 SEQ=1 UID=18194 GID=500 
[Fri Nov 22 10:54:42 2019] TRACE: nat:POSTROUTING:rule:2 IN= OUT=tunl0 SRC=192.168.132.128 DST=192.168.23.215 LEN=84 TOS=0x00 PREC=0x00 TTL=64 ID=57031 DF PROTO=ICMP TYPE=8 CODE=0 ID=27401 SEQ=1 UID=18194 GID=500 
[Fri Nov 22 10:54:42 2019] TRACE: nat:KUBE-POSTROUTING:return:2 IN= OUT=tunl0 SRC=192.168.132.128 DST=192.168.23.215 LEN=84 TOS=0x00 PREC=0x00 TTL=64 ID=57031 DF PROTO=ICMP TYPE=8 CODE=0 ID=27401 SEQ=1 UID=18194 GID=500 
[Fri Nov 22 10:54:42 2019] TRACE: nat:POSTROUTING:policy:4 IN= OUT=tunl0 SRC=192.168.132.128 DST=192.168.23.215 LEN=84 TOS=0x00 PREC=0x00 TTL=64 ID=57031 DF PROTO=ICMP TYPE=8 CODE=0 ID=27401 SEQ=1 UID=18194 GID=500

tools
[Fri Nov 22 10:55:41 2019] TRACE: raw:OUTPUT:policy:3 IN= OUT=tunl0 SRC=192.168.48.128 DST=192.168.50.9 LEN=84 TOS=0x00 PREC=0x00 TTL=64 ID=6919 DF PROTO=ICMP TYPE=8 CODE=0 ID=32429 SEQ=1 UID=18194 GID=500 
[Fri Nov 22 10:55:41 2019] TRACE: mangle:OUTPUT:policy:1 IN= OUT=tunl0 SRC=192.168.48.128 DST=192.168.50.9 LEN=84 TOS=0x00 PREC=0x00 TTL=64 ID=6919 DF PROTO=ICMP TYPE=8 CODE=0 ID=32429 SEQ=1 UID=18194 GID=500 
[Fri Nov 22 10:55:41 2019] TRACE: nat:OUTPUT:rule:1 IN= OUT=tunl0 SRC=192.168.48.128 DST=192.168.50.9 LEN=84 TOS=0x00 PREC=0x00 TTL=64 ID=6919 DF PROTO=ICMP TYPE=8 CODE=0 ID=32429 SEQ=1 UID=18194 GID=500 
[Fri Nov 22 10:55:41 2019] TRACE: nat:cali-OUTPUT:rule:1 IN= OUT=tunl0 SRC=192.168.48.128 DST=192.168.50.9 LEN=84 TOS=0x00 PREC=0x00 TTL=64 ID=6919 DF PROTO=ICMP TYPE=8 CODE=0 ID=32429 SEQ=1 UID=18194 GID=500 
[Fri Nov 22 10:55:41 2019] TRACE: nat:cali-fip-dnat:return:1 IN= OUT=tunl0 SRC=192.168.48.128 DST=192.168.50.9 LEN=84 TOS=0x00 PREC=0x00 TTL=64 ID=6919 DF PROTO=ICMP TYPE=8 CODE=0 ID=32429 SEQ=1 UID=18194 GID=500 
[Fri Nov 22 10:55:41 2019] TRACE: nat:cali-OUTPUT:return:2 IN= OUT=tunl0 SRC=192.168.48.128 DST=192.168.50.9 LEN=84 TOS=0x00 PREC=0x00 TTL=64 ID=6919 DF PROTO=ICMP TYPE=8 CODE=0 ID=32429 SEQ=1 UID=18194 GID=500 
[Fri Nov 22 10:55:41 2019] TRACE: nat:OUTPUT:rule:2 IN= OUT=tunl0 SRC=192.168.48.128 DST=192.168.50.9 LEN=84 TOS=0x00 PREC=0x00 TTL=64 ID=6919 DF PROTO=ICMP TYPE=8 CODE=0 ID=32429 SEQ=1 UID=18194 GID=500 
[Fri Nov 22 10:55:41 2019] TRACE: nat:KUBE-SERVICES:return:18 IN= OUT=tunl0 SRC=192.168.48.128 DST=192.168.50.9 LEN=84 TOS=0x00 PREC=0x00 TTL=64 ID=6919 DF PROTO=ICMP TYPE=8 CODE=0 ID=32429 SEQ=1 UID=18194 GID=500 
[Fri Nov 22 10:55:41 2019] TRACE: nat:OUTPUT:policy:4 IN= OUT=tunl0 SRC=192.168.48.128 DST=192.168.50.9 LEN=84 TOS=0x00 PREC=0x00 TTL=64 ID=6919 DF PROTO=ICMP TYPE=8 CODE=0 ID=32429 SEQ=1 UID=18194 GID=500 
[Fri Nov 22 10:55:41 2019] TRACE: filter:OUTPUT:rule:1 IN= OUT=tunl0 SRC=192.168.48.128 DST=192.168.50.9 LEN=84 TOS=0x00 PREC=0x00 TTL=64 ID=6919 DF PROTO=ICMP TYPE=8 CODE=0 ID=32429 SEQ=1 UID=18194 GID=500 
[Fri Nov 22 10:55:41 2019] TRACE: filter:cali-OUTPUT:rule:4 IN= OUT=tunl0 SRC=192.168.48.128 DST=192.168.50.9 LEN=84 TOS=0x00 PREC=0x00 TTL=64 ID=6919 DF PROTO=ICMP TYPE=8 CODE=0 ID=32429 SEQ=1 UID=18194 GID=500 
[Fri Nov 22 10:55:41 2019] TRACE: filter:cali-OUTPUT:rule:5 IN= OUT=tunl0 SRC=192.168.48.128 DST=192.168.50.9 LEN=84 TOS=0x00 PREC=0x00 TTL=64 ID=6919 DF PROTO=ICMP TYPE=8 CODE=0 ID=32429 SEQ=1 UID=18194 GID=500 
[Fri Nov 22 10:55:41 2019] TRACE: filter:cali-to-host-endpoint:return:1 IN= OUT=tunl0 SRC=192.168.48.128 DST=192.168.50.9 LEN=84 TOS=0x00 PREC=0x00 TTL=64 ID=6919 DF PROTO=ICMP TYPE=8 CODE=0 ID=32429 SEQ=1 UID=18194 GID=500 
[Fri Nov 22 10:55:41 2019] TRACE: filter:cali-OUTPUT:return:7 IN= OUT=tunl0 SRC=192.168.48.128 DST=192.168.50.9 LEN=84 TOS=0x00 PREC=0x00 TTL=64 ID=6919 DF PROTO=ICMP TYPE=8 CODE=0 ID=32429 SEQ=1 UID=18194 GID=500 
[Fri Nov 22 10:55:41 2019] TRACE: filter:OUTPUT:rule:2 IN= OUT=tunl0 SRC=192.168.48.128 DST=192.168.50.9 LEN=84 TOS=0x00 PREC=0x00 TTL=64 ID=6919 DF PROTO=ICMP TYPE=8 CODE=0 ID=32429 SEQ=1 UID=18194 GID=500 
[Fri Nov 22 10:55:41 2019] TRACE: filter:KUBE-SERVICES:return:1 IN= OUT=tunl0 SRC=192.168.48.128 DST=192.168.50.9 LEN=84 TOS=0x00 PREC=0x00 TTL=64 ID=6919 DF PROTO=ICMP TYPE=8 CODE=0 ID=32429 SEQ=1 UID=18194 GID=500 
[Fri Nov 22 10:55:41 2019] TRACE: filter:OUTPUT:rule:3 IN= OUT=tunl0 SRC=192.168.48.128 DST=192.168.50.9 LEN=84 TOS=0x00 PREC=0x00 TTL=64 ID=6919 DF PROTO=ICMP TYPE=8 CODE=0 ID=32429 SEQ=1 UID=18194 GID=500 
[Fri Nov 22 10:55:41 2019] TRACE: filter:KUBE-FIREWALL:return:2 IN= OUT=tunl0 SRC=192.168.48.128 DST=192.168.50.9 LEN=84 TOS=0x00 PREC=0x00 TTL=64 ID=6919 DF PROTO=ICMP TYPE=8 CODE=0 ID=32429 SEQ=1 UID=18194 GID=500 
[Fri Nov 22 10:55:41 2019] TRACE: filter:OUTPUT:policy:4 IN= OUT=tunl0 SRC=192.168.48.128 DST=192.168.50.9 LEN=84 TOS=0x00 PREC=0x00 TTL=64 ID=6919 DF PROTO=ICMP TYPE=8 CODE=0 ID=32429 SEQ=1 UID=18194 GID=500 
[Fri Nov 22 10:55:41 2019] TRACE: mangle:POSTROUTING:policy:1 IN= OUT=tunl0 SRC=192.168.48.128 DST=192.168.50.9 LEN=84 TOS=0x00 PREC=0x00 TTL=64 ID=6919 DF PROTO=ICMP TYPE=8 CODE=0 ID=32429 SEQ=1 UID=18194 GID=500 
[Fri Nov 22 10:55:41 2019] TRACE: nat:POSTROUTING:rule:1 IN= OUT=tunl0 SRC=192.168.48.128 DST=192.168.50.9 LEN=84 TOS=0x00 PREC=0x00 TTL=64 ID=6919 DF PROTO=ICMP TYPE=8 CODE=0 ID=32429 SEQ=1 UID=18194 GID=500 
[Fri Nov 22 10:55:41 2019] TRACE: nat:cali-POSTROUTING:rule:1 IN= OUT=tunl0 SRC=192.168.48.128 DST=192.168.50.9 LEN=84 TOS=0x00 PREC=0x00 TTL=64 ID=6919 DF PROTO=ICMP TYPE=8 CODE=0 ID=32429 SEQ=1 UID=18194 GID=500 
[Fri Nov 22 10:55:41 2019] TRACE: nat:cali-fip-snat:return:1 IN= OUT=tunl0 SRC=192.168.48.128 DST=192.168.50.9 LEN=84 TOS=0x00 PREC=0x00 TTL=64 ID=6919 DF PROTO=ICMP TYPE=8 CODE=0 ID=32429 SEQ=1 UID=18194 GID=500 
[Fri Nov 22 10:55:41 2019] TRACE: nat:cali-POSTROUTING:rule:2 IN= OUT=tunl0 SRC=192.168.48.128 DST=192.168.50.9 LEN=84 TOS=0x00 PREC=0x00 TTL=64 ID=6919 DF PROTO=ICMP TYPE=8 CODE=0 ID=32429 SEQ=1 UID=18194 GID=500 
[Fri Nov 22 10:55:41 2019] TRACE: nat:cali-nat-outgoing:return:2 IN= OUT=tunl0 SRC=192.168.48.128 DST=192.168.50.9 LEN=84 TOS=0x00 PREC=0x00 TTL=64 ID=6919 DF PROTO=ICMP TYPE=8 CODE=0 ID=32429 SEQ=1 UID=18194 GID=500 
[Fri Nov 22 10:55:41 2019] TRACE: nat:cali-POSTROUTING:return:4 IN= OUT=tunl0 SRC=192.168.48.128 DST=192.168.50.9 LEN=84 TOS=0x00 PREC=0x00 TTL=64 ID=6919 DF PROTO=ICMP TYPE=8 CODE=0 ID=32429 SEQ=1 UID=18194 GID=500 
[Fri Nov 22 10:55:41 2019] TRACE: nat:POSTROUTING:rule:2 IN= OUT=tunl0 SRC=192.168.48.128 DST=192.168.50.9 LEN=84 TOS=0x00 PREC=0x00 TTL=64 ID=6919 DF PROTO=ICMP TYPE=8 CODE=0 ID=32429 SEQ=1 UID=18194 GID=500 
[Fri Nov 22 10:55:41 2019] TRACE: nat:KUBE-POSTROUTING:return:2 IN= OUT=tunl0 SRC=192.168.48.128 DST=192.168.50.9 LEN=84 TOS=0x00 PREC=0x00 TTL=64 ID=6919 DF PROTO=ICMP TYPE=8 CODE=0 ID=32429 SEQ=1 UID=18194 GID=500 
[Fri Nov 22 10:55:41 2019] TRACE: nat:POSTROUTING:policy:4 IN= OUT=tunl0 SRC=192.168.48.128 DST=192.168.50.9 LEN=84 TOS=0x00 PREC=0x00 TTL=64 ID=6919 DF PROTO=ICMP TYPE=8 CODE=0 ID=32429 SEQ=1 UID=18194 GID=500

They are exactly the same.
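
For a fuller comparison, the complete rulesets can be diffed directly too, e.g.:

aborrero@tools-k8s-control-1:~$ sudo iptables-save > /tmp/rules-tools.txt   # same on the toolsbeta control node, then diff the two files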

I compared the output of kubectl get all --all-namespaces -o yaml in both clusters. The diff is here:

--- tools.yaml 2019-11-22 12:08:43.869861011 +0100
+++ toolsbeta.yaml 2019-11-22 12:09:20.430644282 +0100
@@ -4,111 +4,14 @@
 kind: Pod
 metadata:
 annotations:
- cni.projectcalico.org/podIP: 192.168.34.138/32
- kubectl.kubernetes.io/last-applied-configuration: |
- {"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"labels":{"test":"yes"},"name":"test-shell","namespace":"default"},"spec":{"containers":[{"args":["-c","while true; do echo hello; sleep 10;done"],"command":["/bin/ash"],"image":"docker-registry.tools.wmflabs.org/maintain-kubeusers:beta","name":"test-deleteme"}]}}
+ cni.projectcalico.org/podIP: 192.168.23.206/32
 kubernetes.io/psp: privileged-psp
- creationTimestamp: "2019-11-20T01:29:35Z"
- labels:
- test: "yes"
- name: test-shell
- namespace: default
- resourceVersion: "2555323"
- selfLink: /api/v1/namespaces/default/pods/test-shell
- uid: 64f36c9a-ac8c-46ac-b46d-ba9761430aad
- spec:
- containers:
- - args:
- - -c
- - while true; do echo hello; sleep 10;done
- command:
- - /bin/ash
- image: docker-registry.tools.wmflabs.org/maintain-kubeusers:beta
- imagePullPolicy: IfNotPresent
- name: test-deleteme
- resources: {}
- terminationMessagePath: /dev/termination-log
- terminationMessagePolicy: File
- volumeMounts:
- - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
- name: default-token-c4m9z
- readOnly: true
- dnsPolicy: ClusterFirst
- enableServiceLinks: true
- nodeName: tools-k8s-worker-2
- priority: 0
- restartPolicy: Always
- schedulerName: default-scheduler
- securityContext: {}
- serviceAccount: default
- serviceAccountName: default
- terminationGracePeriodSeconds: 30
- tolerations:
- - effect: NoExecute
- key: node.kubernetes.io/not-ready
- operator: Exists
- tolerationSeconds: 300
- - effect: NoExecute
- key: node.kubernetes.io/unreachable
- operator: Exists
- tolerationSeconds: 300
- volumes:
- - name: default-token-c4m9z
- secret:
- defaultMode: 420
- secretName: default-token-c4m9z
- status:
- conditions:
- - lastProbeTime: null
- lastTransitionTime: "2019-11-20T01:29:35Z"
- status: "True"
- type: Initialized
- - lastProbeTime: null
- lastTransitionTime: "2019-11-21T12:49:43Z"
- status: "True"
- type: Ready
- - lastProbeTime: null
- lastTransitionTime: "2019-11-21T12:49:43Z"
- status: "True"
- type: ContainersReady
- - lastProbeTime: null
- lastTransitionTime: "2019-11-20T01:29:35Z"
- status: "True"
- type: PodScheduled
- containerStatuses:
- - containerID: docker://48958fa20451daef12ed5b8307ddf32f3600210cb27fe7516f37c9ada8a82ccd
- image: docker-registry.tools.wmflabs.org/maintain-kubeusers:beta
- imageID: docker-pullable://docker-registry.tools.wmflabs.org/maintain-kubeusers@sha256:0507770d60cd0b931beaf2ab855dbcfaaee7c8a807dbfef82704ce940f18f742
- lastState:
- terminated:
- containerID: docker://d77fcfa34037a8113283ac8ddd8bd934c916af6c4bd5d13f136d2252fe2bb50f
- exitCode: 137
- finishedAt: "2019-11-21T12:48:40Z"
- reason: Error
- startedAt: "2019-11-20T01:29:45Z"
- name: test-deleteme
- ready: true
- restartCount: 1
- state:
- running:
- startedAt: "2019-11-21T12:49:42Z"
- hostIP: 172.16.0.103
- phase: Running
- podIP: 192.168.34.138
- qosClass: BestEffort
- startTime: "2019-11-20T01:29:35Z"
-- apiVersion: v1
- kind: Pod
- metadata:
- annotations:
- cni.projectcalico.org/podIP: 192.168.50.10/32
- kubernetes.io/psp: privileged-psp
- creationTimestamp: "2019-11-07T13:24:34Z"
+ creationTimestamp: "2019-11-07T21:52:42Z"
 generateName: ingress-admission-55fb8554b5-
 labels:
 name: ingress-admission
 pod-template-hash: 55fb8554b5
- name: ingress-admission-55fb8554b5-6c98v
+ name: ingress-admission-55fb8554b5-9mjnv
 namespace: ingress-admission
 ownerReferences:
 - apiVersion: apps/v1
@@ -116,10 +19,10 @@
 controller: true
 kind: ReplicaSet
 name: ingress-admission-55fb8554b5
- uid: 5a38fd74-55c6-40d8-b7ca-62739bd8cafe
- resourceVersion: "2555268"
- selfLink: /api/v1/namespaces/ingress-admission/pods/ingress-admission-55fb8554b5-6c98v
- uid: 26de84f5-f795-4d82-820c-d1af193c268b
+ uid: cbff34d9-c890-4184-acac-e8c1648c3d71
+ resourceVersion: "2674876"
+ selfLink: /api/v1/namespaces/ingress-admission/pods/ingress-admission-55fb8554b5-9mjnv
+ uid: 1cde56a2-06eb-43b6-8045-d079a970bdeb
 spec:
 containers:
 - image: docker-registry.tools.wmflabs.org/ingress-admission:latest
@@ -141,11 +44,11 @@
 name: webhook-certs
 readOnly: true
 - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
- name: default-token-h7vmh
+ name: default-token-sttp9
 readOnly: true
 dnsPolicy: ClusterFirst
 enableServiceLinks: true
- nodeName: tools-k8s-worker-1
+ nodeName: toolsbeta-test-k8s-worker-2
 priority: 0
 restartPolicy: Always
 schedulerName: default-scheduler
@@ -167,62 +70,56 @@
 secret:
 defaultMode: 420
 secretName: ingress-admission-certs
- - name: default-token-h7vmh
+ - name: default-token-sttp9
 secret:
 defaultMode: 420
- secretName: default-token-h7vmh
+ secretName: default-token-sttp9
 status:
 conditions:
 - lastProbeTime: null
- lastTransitionTime: "2019-11-07T13:24:34Z"
+ lastTransitionTime: "2019-11-07T21:52:42Z"
 status: "True"
 type: Initialized
 - lastProbeTime: null
- lastTransitionTime: "2019-11-21T12:49:41Z"
+ lastTransitionTime: "2019-11-07T21:52:47Z"
 status: "True"
 type: Ready
 - lastProbeTime: null
- lastTransitionTime: "2019-11-21T12:49:41Z"
+ lastTransitionTime: "2019-11-07T21:52:47Z"
 status: "True"
 type: ContainersReady
 - lastProbeTime: null
- lastTransitionTime: "2019-11-07T13:24:34Z"
+ lastTransitionTime: "2019-11-07T21:52:42Z"
 status: "True"
 type: PodScheduled
 containerStatuses:
- - containerID: docker://0f753968439f6a27ba9b8134b8ccfe899e0b435639f3bbff481d5e12944209ec
+ - containerID: docker://b1c580d77b00c47bc856447398ba55b59dad015750fd82ad8f8e8acde4cf3246
 image: docker-registry.tools.wmflabs.org/ingress-admission:latest
 imageID: docker-pullable://docker-registry.tools.wmflabs.org/ingress-admission@sha256:548910ab1d8e06ae8b83554d739536ce0eda1c6bae3116340d1775e6a522d3c2
- lastState:
- terminated:
- containerID: docker://a94f975b077866cc2da164015079b452d9ff6c772213997232d1e1c71342ca44
- exitCode: 2
- finishedAt: "2019-11-21T12:48:30Z"
- reason: Error
- startedAt: "2019-11-07T13:24:36Z"
+ lastState: {}
 name: webhook
 ready: true
- restartCount: 1
+ restartCount: 0
 state:
 running:
- startedAt: "2019-11-21T12:49:41Z"
- hostIP: 172.16.0.78
+ startedAt: "2019-11-07T21:52:46Z"
+ hostIP: 172.16.0.151
 phase: Running
- podIP: 192.168.50.10
+ podIP: 192.168.23.206
 qosClass: Guaranteed
- startTime: "2019-11-07T13:24:34Z"
+ startTime: "2019-11-07T21:52:42Z"
 - apiVersion: v1
 kind: Pod
 metadata:
 annotations:
- cni.projectcalico.org/podIP: 192.168.34.137/32
+ cni.projectcalico.org/podIP: 192.168.44.210/32
 kubernetes.io/psp: privileged-psp
- creationTimestamp: "2019-11-07T13:24:34Z"
+ creationTimestamp: "2019-11-07T21:52:42Z"
 generateName: ingress-admission-55fb8554b5-
 labels:
 name: ingress-admission
 pod-template-hash: 55fb8554b5
- name: ingress-admission-55fb8554b5-btp7h
+ name: ingress-admission-55fb8554b5-svld6
 namespace: ingress-admission
 ownerReferences:
 - apiVersion: apps/v1
@@ -230,10 +127,10 @@
 controller: true
 kind: ReplicaSet
 name: ingress-admission-55fb8554b5
- uid: 5a38fd74-55c6-40d8-b7ca-62739bd8cafe
- resourceVersion: "2555305"
- selfLink: /api/v1/namespaces/ingress-admission/pods/ingress-admission-55fb8554b5-btp7h
- uid: f99470b2-8171-4985-863a-459aa9ed77ae
+ uid: cbff34d9-c890-4184-acac-e8c1648c3d71
+ resourceVersion: "2674882"
+ selfLink: /api/v1/namespaces/ingress-admission/pods/ingress-admission-55fb8554b5-svld6
+ uid: 9e1d4d99-1d81-4b20-bf5f-93e58473560f
 spec:
 containers:
 - image: docker-registry.tools.wmflabs.org/ingress-admission:latest
@@ -255,11 +152,11 @@
 name: webhook-certs
 readOnly: true
 - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
- name: default-token-h7vmh
+ name: default-token-sttp9
 readOnly: true
 dnsPolicy: ClusterFirst
 enableServiceLinks: true
- nodeName: tools-k8s-worker-2
+ nodeName: toolsbeta-test-k8s-worker-1
 priority: 0
 restartPolicy: Always
 schedulerName: default-scheduler
@@ -281,76 +178,70 @@
 secret:
 defaultMode: 420
 secretName: ingress-admission-certs
- - name: default-token-h7vmh
+ - name: default-token-sttp9
 secret:
 defaultMode: 420
- secretName: default-token-h7vmh
+ secretName: default-token-sttp9
 status:
 conditions:
 - lastProbeTime: null
- lastTransitionTime: "2019-11-07T13:24:34Z"
+ lastTransitionTime: "2019-11-07T21:52:42Z"
 status: "True"
 type: Initialized
 - lastProbeTime: null
- lastTransitionTime: "2019-11-21T12:49:42Z"
+ lastTransitionTime: "2019-11-07T21:52:47Z"
 status: "True"
 type: Ready
 - lastProbeTime: null
- lastTransitionTime: "2019-11-21T12:49:42Z"
+ lastTransitionTime: "2019-11-07T21:52:47Z"
 status: "True"
 type: ContainersReady
 - lastProbeTime: null
- lastTransitionTime: "2019-11-07T13:24:34Z"
+ lastTransitionTime: "2019-11-07T21:52:42Z"
 status: "True"
 type: PodScheduled
 containerStatuses:
- - containerID: docker://c61f4b739246b3246d4b3023301ca9adcb147f3ed52f4b2c92be59abf2e29cce
+ - containerID: docker://63dcb1d452199686729c40f38bd6d770f6c15048e50029b40bce0597d8919752
 image: docker-registry.tools.wmflabs.org/ingress-admission:latest
 imageID: docker-pullable://docker-registry.tools.wmflabs.org/ingress-admission@sha256:548910ab1d8e06ae8b83554d739536ce0eda1c6bae3116340d1775e6a522d3c2
- lastState:
- terminated:
- containerID: docker://76d86b61d345bf80138d22b7ec1c34d87eed67e5162d5781c9096bdff7d85eae
- exitCode: 2
- finishedAt: "2019-11-21T12:48:30Z"
- reason: Error
- startedAt: "2019-11-07T13:24:36Z"
+ lastState: {}
 name: webhook
 ready: true
- restartCount: 1
+ restartCount: 0
 state:
 running:
- startedAt: "2019-11-21T12:49:42Z"
- hostIP: 172.16.0.103
+ startedAt: "2019-11-07T21:52:46Z"
+ hostIP: 172.16.0.138
 phase: Running
- podIP: 192.168.34.137
+ podIP: 192.168.44.210
 qosClass: Guaranteed
- startTime: "2019-11-07T13:24:34Z"
+ startTime: "2019-11-07T21:52:42Z"
 - apiVersion: v1
 kind: Pod
 metadata:
 annotations:
- cni.projectcalico.org/podIP: 192.168.50.9/32
+ cni.projectcalico.org/podIP: 192.168.23.215/32
 kubernetes.io/psp: privileged-psp
 prometheus.io/port: "10254"
 prometheus.io/scrape: "true"
- creationTimestamp: "2019-11-21T12:30:56Z"
- generateName: nginx-ingress-5dbf7cb65c-
+ creationTimestamp: "2019-11-15T12:26:56Z"
+ generateName: nginx-ingress-5d586d964b-
 labels:
 app.kubernetes.io/name: ingress-nginx
 app.kubernetes.io/part-of: ingress-nginx
- pod-template-hash: 5dbf7cb65c
- name: nginx-ingress-5dbf7cb65c-68nn6
+ pod-template-hash: 5d586d964b
+ name: nginx-ingress-5d586d964b-n5k2c
 namespace: ingress-nginx
 ownerReferences:
 - apiVersion: apps/v1
 blockOwnerDeletion: true
 controller: true
 kind: ReplicaSet
- name: nginx-ingress-5dbf7cb65c
- uid: 7c4dbeb7-3867-43a6-a21b-0f9fc6fa466e
- resourceVersion: "2555244"
- selfLink: /api/v1/namespaces/ingress-nginx/pods/nginx-ingress-5dbf7cb65c-68nn6
- uid: 4c194707-8322-4842-9c2f-c7149828a002
+ name: nginx-ingress-5d586d964b
+ uid: 6c77c7f1-1a6b-40b2-a79c-71af7f48d089
+ resourceVersion: "5045341"
+ selfLink: /api/v1/namespaces/ingress-nginx/pods/nginx-ingress-5d586d964b-n5k2c
+ uid: eda0a5f4-b457-4e32-8891-5b941690c87c
 spec:
 containers:
 - args:
@@ -361,6 +252,7 @@
 - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
 - --publish-service=$(POD_NAMESPACE)/ingress-nginx
 - --annotations-prefix=nginx.ingress.kubernetes.io
+ - --default-backend-service=tool-fourohfour/fourohfour
 env:
 - name: POD_NAME
 valueFrom:
@@ -414,11 +306,11 @@
 terminationMessagePolicy: File
 volumeMounts:
 - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
- name: nginx-ingress-token-47594
+ name: nginx-ingress-token-bmndn
 readOnly: true
 dnsPolicy: ClusterFirst
 enableServiceLinks: true
- nodeName: tools-k8s-worker-1
+ nodeName: toolsbeta-test-k8s-worker-2
 priority: 0
 restartPolicy: Always
 schedulerName: default-scheduler
@@ -436,63 +328,57 @@
 operator: Exists
 tolerationSeconds: 300
 volumes:
- - name: nginx-ingress-token-47594
+ - name: nginx-ingress-token-bmndn
 secret:
 defaultMode: 420
- secretName: nginx-ingress-token-47594
+ secretName: nginx-ingress-token-bmndn
 status:
 conditions:
 - lastProbeTime: null
- lastTransitionTime: "2019-11-21T12:30:56Z"
+ lastTransitionTime: "2019-11-15T12:26:56Z"
 status: "True"
 type: Initialized
 - lastProbeTime: null
- lastTransitionTime: "2019-11-21T12:49:39Z"
+ lastTransitionTime: "2019-11-21T13:11:18Z"
 status: "True"
 type: Ready
 - lastProbeTime: null
- lastTransitionTime: "2019-11-21T12:49:39Z"
+ lastTransitionTime: "2019-11-21T13:11:18Z"
 status: "True"
 type: ContainersReady
 - lastProbeTime: null
- lastTransitionTime: "2019-11-21T12:30:56Z"
+ lastTransitionTime: "2019-11-15T12:26:56Z"
 status: "True"
 type: PodScheduled
 containerStatuses:
- - containerID: docker://a941b074cc51a8650cbd8c9a724fbefdf6f22daa27a4c5eac7b51f3a1a07f2c3
+ - containerID: docker://11d5b141dde24679e64be506e6ebe80100bb17f21245eb0339b06676c6ae984b
 image: docker-registry.tools.wmflabs.org/nginx-ingress-controller:0.25.1
 imageID: docker-pullable://docker-registry.tools.wmflabs.org/nginx-ingress-controller@sha256:b9cd638b8849f25210740b075d27ef2e55ffd2861488ead98276aa70b8a859ab
- lastState:
- terminated:
- containerID: docker://ef87dc860706055e52eaf40e7da8ebe172480985a8633e01fd461d17efdc25ee
- exitCode: 1
- finishedAt: "2019-11-21T12:48:40Z"
- reason: Error
- startedAt: "2019-11-21T12:30:57Z"
+ lastState: {}
 name: nginx-ingress-controller
 ready: true
- restartCount: 1
+ restartCount: 0
 state:
 running:
- startedAt: "2019-11-21T12:49:30Z"
- hostIP: 172.16.0.78
+ startedAt: "2019-11-15T12:27:00Z"
+ hostIP: 172.16.0.151
 phase: Running
- podIP: 192.168.50.9
+ podIP: 192.168.23.215
 qosClass: BestEffort
- startTime: "2019-11-21T12:30:56Z"
+ startTime: "2019-11-15T12:26:56Z"
 - apiVersion: v1
 kind: Pod
 metadata:
 annotations:
- cni.projectcalico.org/podIP: 192.168.48.141/32
+ cni.projectcalico.org/podIP: 192.168.44.204/32
 kubernetes.io/psp: privileged-psp
 scheduler.alpha.kubernetes.io/critical-pod: ""
- creationTimestamp: "2019-11-06T14:15:36Z"
+ creationTimestamp: "2019-10-25T13:37:31Z"
 generateName: calico-kube-controllers-59f54d6bbc-
 labels:
 k8s-app: calico-kube-controllers
 pod-template-hash: 59f54d6bbc
- name: calico-kube-controllers-59f54d6bbc-rwqkg
+ name: calico-kube-controllers-59f54d6bbc-79bht
 namespace: kube-system
 ownerReferences:
 - apiVersion: apps/v1
@@ -500,10 +386,10 @@
 controller: true
 kind: ReplicaSet
 name: calico-kube-controllers-59f54d6bbc
- uid: 0afe7af0-f000-4c5e-a333-25e7991c6801
- resourceVersion: "2555295"
- selfLink: /api/v1/namespaces/kube-system/pods/calico-kube-controllers-59f54d6bbc-rwqkg
- uid: 56e417c3-77c7-4028-8780-4a13d0429731
+ uid: 4173b03e-bd7e-4c9a-b83c-87dbafffb109
+ resourceVersion: "5045348"
+ selfLink: /api/v1/namespaces/kube-system/pods/calico-kube-controllers-59f54d6bbc-79bht
+ uid: 50b19109-832c-43f1-8459-05a23fa1bee7
 spec:
 containers:
 - env:
@@ -528,11 +414,11 @@
 terminationMessagePolicy: File
 volumeMounts:
 - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
- name: calico-kube-controllers-token-fm6lg
+ name: calico-kube-controllers-token-7jn8x
 readOnly: true
 dnsPolicy: ClusterFirst
 enableServiceLinks: true
- nodeName: tools-k8s-control-1
+ nodeName: toolsbeta-test-k8s-worker-1
 nodeSelector:
 beta.kubernetes.io/os: linux
 priority: 2000000000
@@ -557,63 +443,63 @@
 operator: Exists
 tolerationSeconds: 300
 volumes:
- - name: calico-kube-controllers-token-fm6lg
+ - name: calico-kube-controllers-token-7jn8x
 secret:
 defaultMode: 420
- secretName: calico-kube-controllers-token-fm6lg
+ secretName: calico-kube-controllers-token-7jn8x
 status:
 conditions:
 - lastProbeTime: null
- lastTransitionTime: "2019-11-06T14:15:55Z"
+ lastTransitionTime: "2019-10-25T13:37:31Z"
 status: "True"
 type: Initialized
 - lastProbeTime: null
- lastTransitionTime: "2019-11-21T12:49:42Z"
+ lastTransitionTime: "2019-11-21T13:11:19Z"
 status: "True"
 type: Ready
 - lastProbeTime: null
- lastTransitionTime: "2019-11-21T12:49:42Z"
+ lastTransitionTime: "2019-11-21T13:11:19Z"
 status: "True"
 type: ContainersReady
 - lastProbeTime: null
- lastTransitionTime: "2019-11-06T14:15:55Z"
+ lastTransitionTime: "2019-10-25T13:37:31Z"
 status: "True"
 type: PodScheduled
 containerStatuses:
- - containerID: docker://6d43e76ae7fb25094f1ae8342f0176be8b846c0863a472a04a1b4bce05ce77b1
+ - containerID: docker://1d249d0fb65d71f4728b46597fbb49d191371fc74482c524d963e925582908ff
 image: calico/kube-controllers:v3.8.0
 imageID: docker-pullable://calico/kube-controllers@sha256:cf461efd25ee74d1855e1ee26db98fe87de00293f7d039212adb03c91fececcd
 lastState:
 terminated:
- containerID: docker://f7d39668f5c73dcd1abbd6063b0262b82f3bb109126f2ade9cb10cc453c3379e
- exitCode: 2
- finishedAt: "2019-11-21T12:48:30Z"
+ containerID: docker://b83c3f2c61958d18898747dacd9705011a9676540bf07033998fdf74d6420bf2
+ exitCode: 1
+ finishedAt: "2019-10-25T16:12:07Z"
 reason: Error
- startedAt: "2019-11-06T14:16:23Z"
+ startedAt: "2019-10-25T16:11:57Z"
 name: calico-kube-controllers
 ready: true
- restartCount: 1
+ restartCount: 35
 state:
 running:
- startedAt: "2019-11-21T12:49:39Z"
- hostIP: 172.16.0.104
+ startedAt: "2019-10-25T16:15:20Z"
+ hostIP: 172.16.0.138
 phase: Running
- podIP: 192.168.48.141
+ podIP: 192.168.44.204
 qosClass: BestEffort
- startTime: "2019-11-06T14:15:55Z"
+ startTime: "2019-10-25T13:37:31Z"
 - apiVersion: v1
 kind: Pod
 metadata:
 annotations:
 kubernetes.io/psp: privileged-psp
 scheduler.alpha.kubernetes.io/critical-pod: ""
- creationTimestamp: "2019-11-07T13:10:23Z"
+ creationTimestamp: "2019-10-23T12:36:51Z"
 generateName: calico-node-
 labels:
 controller-revision-hash: 844ddd97c6
 k8s-app: calico-node
 pod-template-generation: "1"
- name: calico-node-44ntg
+ name: calico-node-2sf9d
 namespace: kube-system
 ownerReferences:
 - apiVersion: apps/v1
@@ -621,10 +507,10 @@
 controller: true
 kind: DaemonSet
 name: calico-node
- uid: 677e71b7-e034-4826-baa2-4fee1de6e3d1
- resourceVersion: "2555148"
- selfLink: /api/v1/namespaces/kube-system/pods/calico-node-44ntg
- uid: 675198f2-2988-49c8-bca7-4342492b4d38
+ uid: 5c6b278d-68cd-4a45-913d-941ccd990f3e
+ resourceVersion: "5045352"
+ selfLink: /api/v1/namespaces/kube-system/pods/calico-node-2sf9d
+ uid: ac511332-3548-4219-a2b9-fe9ad2d73715
 spec:
 affinity:
 nodeAffinity:
@@ -634,7 +520,7 @@
 - key: metadata.name
 operator: In
 values:
- - tools-k8s-worker-2
+ - toolsbeta-test-k8s-worker-1
 containers:
 - env:
 - name: DATASTORE_TYPE
@@ -718,7 +604,7 @@
 - mountPath: /var/run/nodeagent
 name: policysync
 - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
- name: calico-node-token-rl5z2
+ name: calico-node-token-7vltx
 readOnly: true
 dnsPolicy: ClusterFirst
 enableServiceLinks: true
@@ -750,7 +636,7 @@
 - mountPath: /host/opt/cni/bin
 name: cni-bin-dir
 - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
- name: calico-node-token-rl5z2
+ name: calico-node-token-7vltx
 readOnly: true
 - command:
 - /install-cni.sh
@@ -786,7 +672,7 @@
 - mountPath: /host/etc/cni/net.d
 name: cni-net-dir
 - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
- name: calico-node-token-rl5z2
+ name: calico-node-token-7vltx
 readOnly: true
 - image: calico/pod2daemon-flexvol:v3.8.0
 imagePullPolicy: IfNotPresent
@@ -798,9 +684,9 @@
 - mountPath: /host/driver
 name: flexvol-driver-host
 - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
- name: calico-node-token-rl5z2
+ name: calico-node-token-7vltx
 readOnly: true
- nodeName: tools-k8s-worker-2
+ nodeName: toolsbeta-test-k8s-worker-1
 nodeSelector:
 beta.kubernetes.io/os: linux
 priority: 2000001000
@@ -876,62 +762,62 @@
 path: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds
 type: DirectoryOrCreate
 name: flexvol-driver-host
- - name: calico-node-token-rl5z2
+ - name: calico-node-token-7vltx
 secret:
 defaultMode: 420
- secretName: calico-node-token-rl5z2
+ secretName: calico-node-token-7vltx
 status:
 conditions:
 - lastProbeTime: null
- lastTransitionTime: "2019-11-07T13:10:34Z"
+ lastTransitionTime: "2019-10-25T16:14:52Z"
 status: "True"
 type: Initialized
 - lastProbeTime: null
- lastTransitionTime: "2019-11-21T12:49:24Z"
+ lastTransitionTime: "2019-11-21T13:11:19Z"
 status: "True"
 type: Ready
 - lastProbeTime: null
- lastTransitionTime: "2019-11-21T12:49:24Z"
+ lastTransitionTime: "2019-11-21T13:11:19Z"
 status: "True"
 type: ContainersReady
 - lastProbeTime: null
- lastTransitionTime: "2019-11-07T13:10:23Z"
+ lastTransitionTime: "2019-10-23T12:36:51Z"
 status: "True"
 type: PodScheduled
 containerStatuses:
- - containerID: docker://5db78298f78649f76e3528ab7796b58b815220fdc8113bb354972dc47f13b963
+ - containerID: docker://ffb1b518e2cd798449430de22c17b33d5c3ea7dadc78abe6d82ec605f3e044a1
 image: calico/node:v3.8.0
 imageID: docker-pullable://calico/node@sha256:6679ccc9f19dba3eb084db991c788dc9661ad3b5d5bafaa3379644229dca6b05
 lastState:
 terminated:
- containerID: docker://b8814e8783ce4069bbf9626a4d2f78696f5dcdbb96c116454087e060dbc1205f
+ containerID: docker://bdf771ecb664fc49cbba8d825c327afdb0a133150ce8b098d11557acc82113a4
 exitCode: 0
- finishedAt: "2019-11-21T12:48:30Z"
+ finishedAt: "2019-10-25T16:13:48Z"
 reason: Completed
- startedAt: "2019-11-07T13:10:37Z"
+ startedAt: "2019-10-25T13:20:00Z"
 name: calico-node
 ready: true
- restartCount: 1
+ restartCount: 2
 state:
 running:
- startedAt: "2019-11-21T12:49:03Z"
- hostIP: 172.16.0.103
+ startedAt: "2019-10-25T16:14:53Z"
+ hostIP: 172.16.0.138
 initContainerStatuses:
- - containerID: docker://dec178bc2eff4fabfa10155e23fac916296f6eb6714bc25f68ef4650ee29059e
+ - containerID: docker://1df9e7ec6af51da8710d81fb77418fc0f9032819adc23ca194db49d95068911a
 image: calico/cni:v3.8.0
 imageID: docker-pullable://calico/cni@sha256:decba0501ab0658e6e7da2f5625f1eabb8aba5690f9206caba3bf98caca5094c
 lastState: {}
 name: upgrade-ipam
 ready: true
- restartCount: 1
+ restartCount: 2
 state:
 terminated:
- containerID: docker://dec178bc2eff4fabfa10155e23fac916296f6eb6714bc25f68ef4650ee29059e
 exitCode: 0
- finishedAt: "2019-11-21T12:49:00Z"
+ containerID: docker://1df9e7ec6af51da8710d81fb77418fc0f9032819adc23ca194db49d95068911a
+ finishedAt: "2019-10-25T16:14:18Z"
 reason: Completed
- startedAt: "2019-11-21T12:49:00Z"
- - containerID: docker://6d2f16bef8e201c0494b92a1f682e724ff4220f68e3c658af65da936f072a6f8
+ startedAt: "2019-10-25T16:14:17Z"
+ - containerID: docker://e4223ef83c64b5f0cced30dd549481e9fab07c186ca3131b091b91d6544f62b9
 image: calico/cni:v3.8.0
 imageID: docker-pullable://calico/cni@sha256:decba0501ab0658e6e7da2f5625f1eabb8aba5690f9206caba3bf98caca5094c
 lastState: {}
@@ -940,12 +826,12 @@
 restartCount: 0
 state:
 terminated:
- containerID: docker://6d2f16bef8e201c0494b92a1f682e724ff4220f68e3c658af65da936f072a6f8
718+ containerID: docker://e4223ef83c64b5f0cced30dd549481e9fab07c186ca3131b091b91d6544f62b9
719 exitCode: 0
720- finishedAt: "2019-11-21T12:49:01Z"
721+ finishedAt: "2019-10-25T16:14:48Z"
722 reason: Completed
723- startedAt: "2019-11-21T12:49:01Z"
724- - containerID: docker://29266b4adb067431930ea7cc652ddbb53104a471604c17d1afdd20c450c13683
725+ startedAt: "2019-10-25T16:14:47Z"
726+ - containerID: docker://c9e781c0e2e1848cda0c393ac8dcdecd29ed7e77e010b801b79132b1770399dd
727 image: calico/pod2daemon-flexvol:v3.8.0
728 imageID: docker-pullable://calico/pod2daemon-flexvol@sha256:6ec8b823e5ce3440318edfcdd2ab8b6660110782713f24f53dac5a3c227afb11
729 lastState: {}
730@@ -954,28 +840,28 @@
731 restartCount: 0
732 state:
733 terminated:
734- containerID: docker://29266b4adb067431930ea7cc652ddbb53104a471604c17d1afdd20c450c13683
735+ containerID: docker://c9e781c0e2e1848cda0c393ac8dcdecd29ed7e77e010b801b79132b1770399dd
736 exitCode: 0
737- finishedAt: "2019-11-21T12:49:02Z"
738+ finishedAt: "2019-10-25T16:14:50Z"
739 reason: Completed
740- startedAt: "2019-11-21T12:49:02Z"
741+ startedAt: "2019-10-25T16:14:50Z"
742 phase: Running
743- podIP: 172.16.0.103
744+ podIP: 172.16.0.138
745 qosClass: Burstable
746- startTime: "2019-11-07T13:10:23Z"
747+ startTime: "2019-10-23T12:36:52Z"
748 - apiVersion: v1
749 kind: Pod
750 metadata:
751 annotations:
752 kubernetes.io/psp: privileged-psp
753 scheduler.alpha.kubernetes.io/critical-pod: ""
754- creationTimestamp: "2019-11-06T16:07:24Z"
755+ creationTimestamp: "2019-10-23T10:06:16Z"
756 generateName: calico-node-
757 labels:
758 controller-revision-hash: 844ddd97c6
759 k8s-app: calico-node
760 pod-template-generation: "1"
761- name: calico-node-c8rxg
762+ name: calico-node-dfbqd
763 namespace: kube-system
764 ownerReferences:
765 - apiVersion: apps/v1
766@@ -983,10 +869,10 @@
767 controller: true
768 kind: DaemonSet
769 name: calico-node
770- uid: 677e71b7-e034-4826-baa2-4fee1de6e3d1
771- resourceVersion: "2555154"
772- selfLink: /api/v1/namespaces/kube-system/pods/calico-node-c8rxg
773- uid: 2f9e73b0-674b-4343-8457-e0e16eb5d23d
774+ uid: 5c6b278d-68cd-4a45-913d-941ccd990f3e
775+ resourceVersion: "5045282"
776+ selfLink: /api/v1/namespaces/kube-system/pods/calico-node-dfbqd
777+ uid: 6fef798c-6786-4315-b8ab-65554d2d0373
778 spec:
779 affinity:
780 nodeAffinity:
781@@ -996,7 +882,7 @@
782 - key: metadata.name
783 operator: In
784 values:
785- - tools-k8s-control-2
786+ - toolsbeta-test-k8s-control-2
787 containers:
788 - env:
789 - name: DATASTORE_TYPE
790@@ -1080,7 +966,7 @@
791 - mountPath: /var/run/nodeagent
792 name: policysync
793 - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
794- name: calico-node-token-rl5z2
795+ name: calico-node-token-7vltx
796 readOnly: true
797 dnsPolicy: ClusterFirst
798 enableServiceLinks: true
799@@ -1112,7 +998,7 @@
800 - mountPath: /host/opt/cni/bin
801 name: cni-bin-dir
802 - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
803- name: calico-node-token-rl5z2
804+ name: calico-node-token-7vltx
805 readOnly: true
806 - command:
807 - /install-cni.sh
808@@ -1148,7 +1034,7 @@
809 - mountPath: /host/etc/cni/net.d
810 name: cni-net-dir
811 - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
812- name: calico-node-token-rl5z2
813+ name: calico-node-token-7vltx
814 readOnly: true
815 - image: calico/pod2daemon-flexvol:v3.8.0
816 imagePullPolicy: IfNotPresent
817@@ -1160,9 +1046,9 @@
818 - mountPath: /host/driver
819 name: flexvol-driver-host
820 - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
821- name: calico-node-token-rl5z2
822+ name: calico-node-token-7vltx
823 readOnly: true
824- nodeName: tools-k8s-control-2
825+ nodeName: toolsbeta-test-k8s-control-2
826 nodeSelector:
827 beta.kubernetes.io/os: linux
828 priority: 2000001000
829@@ -1238,62 +1124,56 @@
830 path: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds
831 type: DirectoryOrCreate
832 name: flexvol-driver-host
833- - name: calico-node-token-rl5z2
834+ - name: calico-node-token-7vltx
835 secret:
836 defaultMode: 420
837- secretName: calico-node-token-rl5z2
838+ secretName: calico-node-token-7vltx
839 status:
840 conditions:
841 - lastProbeTime: null
842- lastTransitionTime: "2019-11-06T16:07:35Z"
843+ lastTransitionTime: "2019-10-23T10:06:27Z"
844 status: "True"
845 type: Initialized
846 - lastProbeTime: null
847- lastTransitionTime: "2019-11-21T12:49:27Z"
848+ lastTransitionTime: "2019-11-21T13:11:11Z"
849 status: "True"
850 type: Ready
851 - lastProbeTime: null
852- lastTransitionTime: "2019-11-21T12:49:27Z"
853+ lastTransitionTime: "2019-11-21T13:11:11Z"
854 status: "True"
855 type: ContainersReady
856 - lastProbeTime: null
857- lastTransitionTime: "2019-11-06T16:07:24Z"
858+ lastTransitionTime: "2019-10-23T10:06:16Z"
859 status: "True"
860 type: PodScheduled
861 containerStatuses:
862- - containerID: docker://7220731f1d0b716c6377551eba3412c9d6fe8d6da30908fc693c9f44f462fb9f
863+ - containerID: docker://ce8c6427280c2c96f120959f6ddc57b8a65fb261ff04a9a056e52cadd5919ba7
864 image: calico/node:v3.8.0
865 imageID: docker-pullable://calico/node@sha256:6679ccc9f19dba3eb084db991c788dc9661ad3b5d5bafaa3379644229dca6b05
866- lastState:
867- terminated:
868- containerID: docker://62c5ab831fdf5930667b3991bdcbfdfbf1cfb07722d432a740853b3dbbf2b4e9
869- exitCode: 0
870- finishedAt: "2019-11-21T12:48:30Z"
871- reason: Completed
872- startedAt: "2019-11-06T16:07:38Z"
873+ lastState: {}
874 name: calico-node
875 ready: true
876- restartCount: 1
877+ restartCount: 0
878 state:
879 running:
880- startedAt: "2019-11-21T12:48:55Z"
881- hostIP: 172.16.0.93
882+ startedAt: "2019-10-23T10:06:29Z"
883+ hostIP: 172.16.0.137
884 initContainerStatuses:
885- - containerID: docker://1bc577f8a491dfaab475a6e8d84c7b8842e43d470f658c8e4de4e49a9df6f6d3
886+ - containerID: docker://e01454b97f53af45902b972add0ab38f699bba90cc8bdc250e22b4eeb147de0c
887 image: calico/cni:v3.8.0
888 imageID: docker-pullable://calico/cni@sha256:decba0501ab0658e6e7da2f5625f1eabb8aba5690f9206caba3bf98caca5094c
889 lastState: {}
890 name: upgrade-ipam
891 ready: true
892- restartCount: 1
893+ restartCount: 0
894 state:
895 terminated:
896- containerID: docker://1bc577f8a491dfaab475a6e8d84c7b8842e43d470f658c8e4de4e49a9df6f6d3
897+ containerID: docker://e01454b97f53af45902b972add0ab38f699bba90cc8bdc250e22b4eeb147de0c
898 exitCode: 0
899- finishedAt: "2019-11-21T12:48:52Z"
900+ finishedAt: "2019-10-23T10:06:23Z"
901 reason: Completed
902- startedAt: "2019-11-21T12:48:51Z"
903- - containerID: docker://1160bba27b5a426b46f53bd62bf40d2a9002ea094f670528d2b8cfb8add51ff1
904+ startedAt: "2019-10-23T10:06:23Z"
905+ - containerID: docker://34edfea80ef9037dffd29ab5847bf138c6a81dfd9c14be7ab6c8e61358586e4d
906 image: calico/cni:v3.8.0
907 imageID: docker-pullable://calico/cni@sha256:decba0501ab0658e6e7da2f5625f1eabb8aba5690f9206caba3bf98caca5094c
908 lastState: {}
909@@ -1302,12 +1182,12 @@
910 restartCount: 0
911 state:
912 terminated:
913- containerID: docker://1160bba27b5a426b46f53bd62bf40d2a9002ea094f670528d2b8cfb8add51ff1
914+ containerID: docker://34edfea80ef9037dffd29ab5847bf138c6a81dfd9c14be7ab6c8e61358586e4d
915 exitCode: 0
916- finishedAt: "2019-11-21T12:48:53Z"
917+ finishedAt: "2019-10-23T10:06:24Z"
918 reason: Completed
919- startedAt: "2019-11-21T12:48:53Z"
920- - containerID: docker://dc857f0e0456433697c46aa9a6ccdc67de1824cb0fad33b1c38de8767a805cdb
921+ startedAt: "2019-10-23T10:06:24Z"
922+ - containerID: docker://7e7020cb8b67bfac6ea1662f52b296ff759b36caa8f4bf8d3c0a8b99689e1236
923 image: calico/pod2daemon-flexvol:v3.8.0
924 imageID: docker-pullable://calico/pod2daemon-flexvol@sha256:6ec8b823e5ce3440318edfcdd2ab8b6660110782713f24f53dac5a3c227afb11
925 lastState: {}
926@@ -1316,28 +1196,28 @@
927 restartCount: 0
928 state:
929 terminated:
930- containerID: docker://dc857f0e0456433697c46aa9a6ccdc67de1824cb0fad33b1c38de8767a805cdb
931+ containerID: docker://7e7020cb8b67bfac6ea1662f52b296ff759b36caa8f4bf8d3c0a8b99689e1236
932 exitCode: 0
933- finishedAt: "2019-11-21T12:48:54Z"
934+ finishedAt: "2019-10-23T10:06:26Z"
935 reason: Completed
936- startedAt: "2019-11-21T12:48:54Z"
937+ startedAt: "2019-10-23T10:06:26Z"
938 phase: Running
939- podIP: 172.16.0.93
940+ podIP: 172.16.0.137
941 qosClass: Burstable
942- startTime: "2019-11-06T16:07:24Z"
943+ startTime: "2019-10-23T10:06:16Z"
944 - apiVersion: v1
945 kind: Pod
946 metadata:
947 annotations:
948 kubernetes.io/psp: privileged-psp
949 scheduler.alpha.kubernetes.io/critical-pod: ""
950- creationTimestamp: "2019-11-06T16:08:07Z"
951+ creationTimestamp: "2019-10-23T09:58:22Z"
952 generateName: calico-node-
953 labels:
954 controller-revision-hash: 844ddd97c6
955 k8s-app: calico-node
956 pod-template-generation: "1"
957- name: calico-node-g64gn
958+ name: calico-node-g4hr7
959 namespace: kube-system
960 ownerReferences:
961 - apiVersion: apps/v1
962@@ -1345,10 +1225,10 @@
963 controller: true
964 kind: DaemonSet
965 name: calico-node
966- uid: 677e71b7-e034-4826-baa2-4fee1de6e3d1
967- resourceVersion: "2555150"
968- selfLink: /api/v1/namespaces/kube-system/pods/calico-node-g64gn
969- uid: e7369d3b-50c4-4251-9628-21a7f7464f7a
970+ uid: 5c6b278d-68cd-4a45-913d-941ccd990f3e
971+ resourceVersion: "5045279"
972+ selfLink: /api/v1/namespaces/kube-system/pods/calico-node-g4hr7
973+ uid: e383aa73-93d3-4c62-b406-ec0863e11a89
974 spec:
975 affinity:
976 nodeAffinity:
977@@ -1358,7 +1238,7 @@
978 - key: metadata.name
979 operator: In
980 values:
981- - tools-k8s-control-3
982+ - toolsbeta-test-k8s-control-1
983 containers:
984 - env:
985 - name: DATASTORE_TYPE
986@@ -1442,7 +1322,7 @@
987 - mountPath: /var/run/nodeagent
988 name: policysync
989 - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
990- name: calico-node-token-rl5z2
991+ name: calico-node-token-7vltx
992 readOnly: true
993 dnsPolicy: ClusterFirst
994 enableServiceLinks: true
995@@ -1474,7 +1354,7 @@
996 - mountPath: /host/opt/cni/bin
997 name: cni-bin-dir
998 - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
999- name: calico-node-token-rl5z2
1000+ name: calico-node-token-7vltx
1001 readOnly: true
1002 - command:
1003 - /install-cni.sh
1004@@ -1510,7 +1390,7 @@
1005 - mountPath: /host/etc/cni/net.d
1006 name: cni-net-dir
1007 - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
1008- name: calico-node-token-rl5z2
1009+ name: calico-node-token-7vltx
1010 readOnly: true
1011 - image: calico/pod2daemon-flexvol:v3.8.0
1012 imagePullPolicy: IfNotPresent
1013@@ -1522,9 +1402,9 @@
1014 - mountPath: /host/driver
1015 name: flexvol-driver-host
1016 - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
1017- name: calico-node-token-rl5z2
1018+ name: calico-node-token-7vltx
1019 readOnly: true
1020- nodeName: tools-k8s-control-3
1021+ nodeName: toolsbeta-test-k8s-control-1
1022 nodeSelector:
1023 beta.kubernetes.io/os: linux
1024 priority: 2000001000
1025@@ -1600,62 +1480,62 @@
1026 path: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds
1027 type: DirectoryOrCreate
1028 name: flexvol-driver-host
1029- - name: calico-node-token-rl5z2
1030+ - name: calico-node-token-7vltx
1031 secret:
1032 defaultMode: 420
1033- secretName: calico-node-token-rl5z2
1034+ secretName: calico-node-token-7vltx
1035 status:
1036 conditions:
1037 - lastProbeTime: null
1038- lastTransitionTime: "2019-11-21T12:49:13Z"
1039+ lastTransitionTime: "2019-11-06T15:57:39Z"
1040 status: "True"
1041 type: Initialized
1042 - lastProbeTime: null
1043- lastTransitionTime: "2019-11-21T12:49:25Z"
1044+ lastTransitionTime: "2019-11-21T13:11:10Z"
1045 status: "True"
1046 type: Ready
1047 - lastProbeTime: null
1048- lastTransitionTime: "2019-11-21T12:49:25Z"
1049+ lastTransitionTime: "2019-11-21T13:11:10Z"
1050 status: "True"
1051 type: ContainersReady
1052 - lastProbeTime: null
1053- lastTransitionTime: "2019-11-06T16:08:07Z"
1054+ lastTransitionTime: "2019-10-23T09:58:22Z"
1055 status: "True"
1056 type: PodScheduled
1057 containerStatuses:
1058- - containerID: docker://beb9bd3b8093e7b35a2dee4c4a0589b1d992a1bf643bbcf45a6c3832c3f5ca96
1059+ - containerID: docker://76ed8fbcb8ebc1f748d68e9841fee0c1d4ee68b4bacf42b68364fb3431dcc0f8
1060 image: calico/node:v3.8.0
1061 imageID: docker-pullable://calico/node@sha256:6679ccc9f19dba3eb084db991c788dc9661ad3b5d5bafaa3379644229dca6b05
1062 lastState:
1063 terminated:
1064- containerID: docker://d4af8c1022d16f46a94f9771fd3f0f31cb0f64b65e70b0df5a6311fbfd146665
1065+ containerID: docker://1d251658a27c21e453ac89d658821edd5783d0df2a6d93df4a2ededb3903969a
1066 exitCode: 0
1067- finishedAt: "2019-11-21T12:48:30Z"
1068+ finishedAt: "2019-11-06T15:56:51Z"
1069 reason: Completed
1070- startedAt: "2019-11-06T16:08:20Z"
1071+ startedAt: "2019-10-23T09:58:34Z"
1072 name: calico-node
1073 ready: true
1074 restartCount: 1
1075 state:
1076 running:
1077- startedAt: "2019-11-21T12:49:15Z"
1078- hostIP: 172.16.0.96
1079+ startedAt: "2019-11-06T15:57:41Z"
1080+ hostIP: 172.16.0.112
1081 initContainerStatuses:
1082- - containerID: docker://493ae66774941d3d361b4b98983b85b10641d25a3a02d1554cfd9dafbb2288b3
1083+ - containerID: docker://49485d9e39b1e9432318a1fd221a319f7ba4a3b8184487cfde9f7c01c7c79167
1084 image: calico/cni:v3.8.0
1085 imageID: docker-pullable://calico/cni@sha256:decba0501ab0658e6e7da2f5625f1eabb8aba5690f9206caba3bf98caca5094c
1086 lastState: {}
1087 name: upgrade-ipam
1088 ready: true
1089- restartCount: 2
1090+ restartCount: 1
1091 state:
1092 terminated:
1093- containerID: docker://493ae66774941d3d361b4b98983b85b10641d25a3a02d1554cfd9dafbb2288b3
1094+ containerID: docker://49485d9e39b1e9432318a1fd221a319f7ba4a3b8184487cfde9f7c01c7c79167
1095 exitCode: 0
1096- finishedAt: "2019-11-21T12:49:13Z"
1097+ finishedAt: "2019-11-06T15:57:09Z"
1098 reason: Completed
1099- startedAt: "2019-11-21T12:49:13Z"
1100- - containerID: docker://26ae34f300e8333535620aeb43caeb017ec5ba05076a1659bee7c6a0cddc3d99
1101+ startedAt: "2019-11-06T15:57:09Z"
1102+ - containerID: docker://b09e15bc0812d83f65845ee3e68f671355698c6561d4bcf408e0e241c7c5d645
1103 image: calico/cni:v3.8.0
1104 imageID: docker-pullable://calico/cni@sha256:decba0501ab0658e6e7da2f5625f1eabb8aba5690f9206caba3bf98caca5094c
1105 lastState: {}
1106@@ -1664,12 +1544,12 @@
1107 restartCount: 0
1108 state:
1109 terminated:
1110- containerID: docker://26ae34f300e8333535620aeb43caeb017ec5ba05076a1659bee7c6a0cddc3d99
1111+ containerID: docker://b09e15bc0812d83f65845ee3e68f671355698c6561d4bcf408e0e241c7c5d645
1112 exitCode: 0
1113- finishedAt: "2019-11-21T12:49:14Z"
1114+ finishedAt: "2019-11-06T15:57:38Z"
1115 reason: Completed
1116- startedAt: "2019-11-21T12:49:13Z"
1117- - containerID: docker://7c65c41e591f300d692b7f39133f2dc0280085ff5daaa3bda7c3618cbfdc426f
1118+ startedAt: "2019-11-06T15:57:38Z"
1119+ - containerID: docker://ec3549d88883d2c71751c56d7b5d0bedc65a5390fa1bc85c2cff5df02fcaccee
1120 image: calico/pod2daemon-flexvol:v3.8.0
1121 imageID: docker-pullable://calico/pod2daemon-flexvol@sha256:6ec8b823e5ce3440318edfcdd2ab8b6660110782713f24f53dac5a3c227afb11
1122 lastState: {}
1123@@ -1678,28 +1558,28 @@
1124 restartCount: 0
1125 state:
1126 terminated:
1127- containerID: docker://7c65c41e591f300d692b7f39133f2dc0280085ff5daaa3bda7c3618cbfdc426f
1128+ containerID: docker://ec3549d88883d2c71751c56d7b5d0bedc65a5390fa1bc85c2cff5df02fcaccee
1129 exitCode: 0
1130- finishedAt: "2019-11-21T12:49:14Z"
1131+ finishedAt: "2019-11-06T15:57:40Z"
1132 reason: Completed
1133- startedAt: "2019-11-21T12:49:14Z"
1134+ startedAt: "2019-11-06T15:57:40Z"
1135 phase: Running
1136- podIP: 172.16.0.96
1137+ podIP: 172.16.0.112
1138 qosClass: Burstable
1139- startTime: "2019-11-06T16:08:07Z"
1140+ startTime: "2019-10-23T09:58:22Z"
1141 - apiVersion: v1
1142 kind: Pod
1143 metadata:
1144 annotations:
1145 kubernetes.io/psp: privileged-psp
1146 scheduler.alpha.kubernetes.io/critical-pod: ""
1147- creationTimestamp: "2019-11-06T14:15:36Z"
1148+ creationTimestamp: "2019-10-23T10:07:28Z"
1149 generateName: calico-node-
1150 labels:
1151 controller-revision-hash: 844ddd97c6
1152 k8s-app: calico-node
1153 pod-template-generation: "1"
1154- name: calico-node-nrk46
1155+ name: calico-node-q5phv
1156 namespace: kube-system
1157 ownerReferences:
1158 - apiVersion: apps/v1
1159@@ -1707,10 +1587,10 @@
1160 controller: true
1161 kind: DaemonSet
1162 name: calico-node
1163- uid: 677e71b7-e034-4826-baa2-4fee1de6e3d1
1164- resourceVersion: "2555135"
1165- selfLink: /api/v1/namespaces/kube-system/pods/calico-node-nrk46
1166- uid: 84b1e18d-d3bb-4fd4-9aca-678492bf3eba
1167+ uid: 5c6b278d-68cd-4a45-913d-941ccd990f3e
1168+ resourceVersion: "5045318"
1169+ selfLink: /api/v1/namespaces/kube-system/pods/calico-node-q5phv
1170+ uid: 662d6887-6cd4-4784-b617-ee0746e66a13
1171 spec:
1172 affinity:
1173 nodeAffinity:
1174@@ -1720,7 +1600,7 @@
1175 - key: metadata.name
1176 operator: In
1177 values:
1178- - tools-k8s-control-1
1179+ - toolsbeta-test-k8s-control-3
1180 containers:
1181 - env:
1182 - name: DATASTORE_TYPE
1183@@ -1804,7 +1684,7 @@
1184 - mountPath: /var/run/nodeagent
1185 name: policysync
1186 - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
1187- name: calico-node-token-rl5z2
1188+ name: calico-node-token-7vltx
1189 readOnly: true
1190 dnsPolicy: ClusterFirst
1191 enableServiceLinks: true
1192@@ -1836,7 +1716,7 @@
1193 - mountPath: /host/opt/cni/bin
1194 name: cni-bin-dir
1195 - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
1196- name: calico-node-token-rl5z2
1197+ name: calico-node-token-7vltx
1198 readOnly: true
1199 - command:
1200 - /install-cni.sh
1201@@ -1872,7 +1752,7 @@
1202 - mountPath: /host/etc/cni/net.d
1203 name: cni-net-dir
1204 - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
1205- name: calico-node-token-rl5z2
1206+ name: calico-node-token-7vltx
1207 readOnly: true
1208 - image: calico/pod2daemon-flexvol:v3.8.0
1209 imagePullPolicy: IfNotPresent
1210@@ -1884,9 +1764,9 @@
1211 - mountPath: /host/driver
1212 name: flexvol-driver-host
1213 - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
1214- name: calico-node-token-rl5z2
1215+ name: calico-node-token-7vltx
1216 readOnly: true
1217- nodeName: tools-k8s-control-1
1218+ nodeName: toolsbeta-test-k8s-control-3
1219 nodeSelector:
1220 beta.kubernetes.io/os: linux
1221 priority: 2000001000
1222@@ -1962,48 +1842,48 @@
1223 path: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds
1224 type: DirectoryOrCreate
1225 name: flexvol-driver-host
1226- - name: calico-node-token-rl5z2
1227+ - name: calico-node-token-7vltx
1228 secret:
1229 defaultMode: 420
1230- secretName: calico-node-token-rl5z2
1231+ secretName: calico-node-token-7vltx
1232 status:
1233 conditions:
1234 - lastProbeTime: null
1235- lastTransitionTime: "2019-11-21T12:48:57Z"
1236+ lastTransitionTime: "2019-11-06T15:56:51Z"
1237 status: "True"
1238 type: Initialized
1239 - lastProbeTime: null
1240- lastTransitionTime: "2019-11-21T12:49:20Z"
1241+ lastTransitionTime: "2019-11-21T13:11:15Z"
1242 status: "True"
1243 type: Ready
1244 - lastProbeTime: null
1245- lastTransitionTime: "2019-11-21T12:49:20Z"
1246+ lastTransitionTime: "2019-11-21T13:11:15Z"
1247 status: "True"
1248 type: ContainersReady
1249 - lastProbeTime: null
1250- lastTransitionTime: "2019-11-06T14:15:36Z"
1251+ lastTransitionTime: "2019-10-23T10:07:28Z"
1252 status: "True"
1253 type: PodScheduled
1254 containerStatuses:
1255- - containerID: docker://bda1cea66ded3506266a16daf40085c7d3068dd2687ab6c6dd53acfa9e5d66fe
1256+ - containerID: docker://15aa282401840d547ab9637607bb92b859d7033b434df409ffe157e3342c5993
1257 image: calico/node:v3.8.0
1258 imageID: docker-pullable://calico/node@sha256:6679ccc9f19dba3eb084db991c788dc9661ad3b5d5bafaa3379644229dca6b05
1259 lastState:
1260 terminated:
1261- containerID: docker://d8d88ce00bbf9a1bd27f180d1f0446dc05d00a006824fec2f9e749ef54c72be8
1262+ containerID: docker://716a384fe79911a889977b1ee39a6e21a1d6c1987b6e75d29cd8ac9b086fafa2
1263 exitCode: 0
1264- finishedAt: "2019-11-21T12:48:30Z"
1265+ finishedAt: "2019-11-06T15:56:32Z"
1266 reason: Completed
1267- startedAt: "2019-11-06T14:15:48Z"
1268+ startedAt: "2019-10-23T10:07:42Z"
1269 name: calico-node
1270 ready: true
1271 restartCount: 1
1272 state:
1273 running:
1274- startedAt: "2019-11-21T12:49:00Z"
1275- hostIP: 172.16.0.104
1276+ startedAt: "2019-11-06T15:56:52Z"
1277+ hostIP: 172.16.0.136
1278 initContainerStatuses:
1279- - containerID: docker://c8019f8d81ddf944eb50e7beb28d5d45a253443f7e28d81ca737d9d576ae68a0
1280+ - containerID: docker://ec6a578a21f2ebcf899fbd3bca478d8867b5daff53f3dc893f3fbe9a42b4e094
1281 image: calico/cni:v3.8.0
1282 imageID: docker-pullable://calico/cni@sha256:decba0501ab0658e6e7da2f5625f1eabb8aba5690f9206caba3bf98caca5094c
1283 lastState: {}
1284@@ -2012,12 +1892,12 @@
1285 restartCount: 1
1286 state:
1287 terminated:
1288- containerID: docker://c8019f8d81ddf944eb50e7beb28d5d45a253443f7e28d81ca737d9d576ae68a0
1289+ containerID: docker://ec6a578a21f2ebcf899fbd3bca478d8867b5daff53f3dc893f3fbe9a42b4e094
1290 exitCode: 0
1291- finishedAt: "2019-11-21T12:48:56Z"
1292+ finishedAt: "2019-11-06T15:56:47Z"
1293 reason: Completed
1294- startedAt: "2019-11-21T12:48:56Z"
1295- - containerID: docker://2a08b7d888b981045a5cf5e236a11ccd188836738b4e2998ceae7e95a48bfadf
1296+ startedAt: "2019-11-06T15:56:47Z"
1297+ - containerID: docker://bd5bf6059c0403ae1e48cf4216fdd186e96915d506daa2492b3e45dfaebc93d0
1298 image: calico/cni:v3.8.0
1299 imageID: docker-pullable://calico/cni@sha256:decba0501ab0658e6e7da2f5625f1eabb8aba5690f9206caba3bf98caca5094c
1300 lastState: {}
1301@@ -2026,12 +1906,12 @@
1302 restartCount: 0
1303 state:
1304 terminated:
1305- containerID: docker://2a08b7d888b981045a5cf5e236a11ccd188836738b4e2998ceae7e95a48bfadf
1306+ containerID: docker://bd5bf6059c0403ae1e48cf4216fdd186e96915d506daa2492b3e45dfaebc93d0
1307 exitCode: 0
1308- finishedAt: "2019-11-21T12:48:58Z"
1309+ finishedAt: "2019-11-06T15:56:50Z"
1310 reason: Completed
1311- startedAt: "2019-11-21T12:48:57Z"
1312- - containerID: docker://c083e480bf2cb21034a7b843e8526339a492be45a96c506e24846d8c401f88ad
1313+ startedAt: "2019-11-06T15:56:49Z"
1314+ - containerID: docker://ee0f921b1307b184d8d4e18c5a5363fbb16b01ff95fc17303de70f92fd710c7f
1315 image: calico/pod2daemon-flexvol:v3.8.0
1316 imageID: docker-pullable://calico/pod2daemon-flexvol@sha256:6ec8b823e5ce3440318edfcdd2ab8b6660110782713f24f53dac5a3c227afb11
1317 lastState: {}
1318@@ -2040,28 +1920,28 @@
1319 restartCount: 0
1320 state:
1321 terminated:
1322- containerID: docker://c083e480bf2cb21034a7b843e8526339a492be45a96c506e24846d8c401f88ad
1323+ containerID: docker://ee0f921b1307b184d8d4e18c5a5363fbb16b01ff95fc17303de70f92fd710c7f
1324 exitCode: 0
1325- finishedAt: "2019-11-21T12:48:58Z"
1326+ finishedAt: "2019-11-06T15:56:51Z"
1327 reason: Completed
1328- startedAt: "2019-11-21T12:48:58Z"
1329+ startedAt: "2019-11-06T15:56:51Z"
1330 phase: Running
1331- podIP: 172.16.0.104
1332+ podIP: 172.16.0.136
1333 qosClass: Burstable
1334- startTime: "2019-11-06T14:15:36Z"
1335+ startTime: "2019-10-23T10:07:28Z"
1336 - apiVersion: v1
1337 kind: Pod
1338 metadata:
1339 annotations:
1340 kubernetes.io/psp: privileged-psp
1341 scheduler.alpha.kubernetes.io/critical-pod: ""
1342- creationTimestamp: "2019-11-07T13:10:04Z"
1343+ creationTimestamp: "2019-10-23T12:37:24Z"
1344 generateName: calico-node-
1345 labels:
1346 controller-revision-hash: 844ddd97c6
1347 k8s-app: calico-node
1348 pod-template-generation: "1"
1349- name: calico-node-qz4tn
1350+ name: calico-node-x2n9j
1351 namespace: kube-system
1352 ownerReferences:
1353 - apiVersion: apps/v1
1354@@ -2069,10 +1949,10 @@
1355 controller: true
1356 kind: DaemonSet
1357 name: calico-node
1358- uid: 677e71b7-e034-4826-baa2-4fee1de6e3d1
1359- resourceVersion: "2555140"
1360- selfLink: /api/v1/namespaces/kube-system/pods/calico-node-qz4tn
1361- uid: 27291fa9-134a-4b26-a35b-c3f6ca046eaa
1362+ uid: 5c6b278d-68cd-4a45-913d-941ccd990f3e
1363+ resourceVersion: "5045360"
1364+ selfLink: /api/v1/namespaces/kube-system/pods/calico-node-x2n9j
1365+ uid: d0b261b5-aed3-4bad-b521-d43bd6e8a3f6
1366 spec:
1367 affinity:
1368 nodeAffinity:
1369@@ -2082,7 +1962,7 @@
1370 - key: metadata.name
1371 operator: In
1372 values:
1373- - tools-k8s-worker-1
1374+ - toolsbeta-test-k8s-worker-2
1375 containers:
1376 - env:
1377 - name: DATASTORE_TYPE
1378@@ -2166,7 +2046,7 @@
1379 - mountPath: /var/run/nodeagent
1380 name: policysync
1381 - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
1382- name: calico-node-token-rl5z2
1383+ name: calico-node-token-7vltx
1384 readOnly: true
1385 dnsPolicy: ClusterFirst
1386 enableServiceLinks: true
1387@@ -2198,7 +2078,7 @@
1388 - mountPath: /host/opt/cni/bin
1389 name: cni-bin-dir
1390 - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
1391- name: calico-node-token-rl5z2
1392+ name: calico-node-token-7vltx
1393 readOnly: true
1394 - command:
1395 - /install-cni.sh
1396@@ -2234,7 +2114,7 @@
1397 - mountPath: /host/etc/cni/net.d
1398 name: cni-net-dir
1399 - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
1400- name: calico-node-token-rl5z2
1401+ name: calico-node-token-7vltx
1402 readOnly: true
1403 - image: calico/pod2daemon-flexvol:v3.8.0
1404 imagePullPolicy: IfNotPresent
1405@@ -2246,9 +2126,9 @@
1406 - mountPath: /host/driver
1407 name: flexvol-driver-host
1408 - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
1409- name: calico-node-token-rl5z2
1410+ name: calico-node-token-7vltx
1411 readOnly: true
1412- nodeName: tools-k8s-worker-1
1413+ nodeName: toolsbeta-test-k8s-worker-2
1414 nodeSelector:
1415 beta.kubernetes.io/os: linux
1416 priority: 2000001000
1417@@ -2324,48 +2204,48 @@
1418 path: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds
1419 type: DirectoryOrCreate
1420 name: flexvol-driver-host
1421- - name: calico-node-token-rl5z2
1422+ - name: calico-node-token-7vltx
1423 secret:
1424 defaultMode: 420
1425- secretName: calico-node-token-rl5z2
1426+ secretName: calico-node-token-7vltx
1427 status:
1428 conditions:
1429 - lastProbeTime: null
1430- lastTransitionTime: "2019-11-07T13:10:12Z"
1431+ lastTransitionTime: "2019-10-25T16:15:08Z"
1432 status: "True"
1433 type: Initialized
1434 - lastProbeTime: null
1435- lastTransitionTime: "2019-11-21T12:49:22Z"
1436+ lastTransitionTime: "2019-11-21T13:11:24Z"
1437 status: "True"
1438 type: Ready
1439 - lastProbeTime: null
1440- lastTransitionTime: "2019-11-21T12:49:22Z"
1441+ lastTransitionTime: "2019-11-21T13:11:24Z"
1442 status: "True"
1443 type: ContainersReady
1444 - lastProbeTime: null
1445- lastTransitionTime: "2019-11-07T13:10:04Z"
1446+ lastTransitionTime: "2019-10-23T12:37:24Z"
1447 status: "True"
1448 type: PodScheduled
1449 containerStatuses:
1450- - containerID: docker://f97ccdbe5bfa49ceba8507ff239bf39e1bb2b21c2fc344e1f29ee938fb4802ef
1451+ - containerID: docker://007fc22eae4e271f0516e724de330be43ba55b578829c8f8ef98fc88ee107263
1452 image: calico/node:v3.8.0
1453 imageID: docker-pullable://calico/node@sha256:6679ccc9f19dba3eb084db991c788dc9661ad3b5d5bafaa3379644229dca6b05
1454 lastState:
1455 terminated:
1456- containerID: docker://ae142098884cf9d35cd3ebadbb2b35c865ddf7f11a9596545c972e0c3c09a41b
1457+ containerID: docker://7c5c50cd1ef568ad06598927a5ea4cde79f4cc8bbc8b62b80a1c9c078ce79f21
1458 exitCode: 0
1459- finishedAt: "2019-11-21T12:48:30Z"
1460+ finishedAt: "2019-10-25T16:14:29Z"
1461 reason: Completed
1462- startedAt: "2019-11-07T13:10:15Z"
1463+ startedAt: "2019-10-23T12:37:49Z"
1464 name: calico-node
1465 ready: true
1466 restartCount: 1
1467 state:
1468 running:
1469- startedAt: "2019-11-21T12:49:03Z"
1470- hostIP: 172.16.0.78
1471+ startedAt: "2019-10-25T16:15:09Z"
1472+ hostIP: 172.16.0.151
1473 initContainerStatuses:
1474- - containerID: docker://85ae43f8669ce1726db5dc0a221b7b23e1e2e535326dd40833abcece88dd8ac4
1475+ - containerID: docker://974c3c099f4f02afd30aa88a70633b3bf6a41121524e7e9dcd902f6d58722dba
1476 image: calico/cni:v3.8.0
1477 imageID: docker-pullable://calico/cni@sha256:decba0501ab0658e6e7da2f5625f1eabb8aba5690f9206caba3bf98caca5094c
1478 lastState: {}
1479@@ -2374,12 +2254,12 @@
1480 restartCount: 1
1481 state:
1482 terminated:
1483- containerID: docker://85ae43f8669ce1726db5dc0a221b7b23e1e2e535326dd40833abcece88dd8ac4
1484+ containerID: docker://974c3c099f4f02afd30aa88a70633b3bf6a41121524e7e9dcd902f6d58722dba
1485 exitCode: 0
1486- finishedAt: "2019-11-21T12:49:00Z"
1487+ finishedAt: "2019-10-25T16:15:01Z"
1488 reason: Completed
1489- startedAt: "2019-11-21T12:49:00Z"
1490- - containerID: docker://a23983c2376917705abdbde8383e5b182c09d5249c786854963a0346dcc201d9
1491+ startedAt: "2019-10-25T16:15:00Z"
1492+ - containerID: docker://a51c0415501483cb0e1d547b1bd8a5854f7da27d0d36e1be39b8cee584d3dc76
1493 image: calico/cni:v3.8.0
1494 imageID: docker-pullable://calico/cni@sha256:decba0501ab0658e6e7da2f5625f1eabb8aba5690f9206caba3bf98caca5094c
1495 lastState: {}
1496@@ -2388,12 +2268,12 @@
1497 restartCount: 0
1498 state:
1499 terminated:
1500- containerID: docker://a23983c2376917705abdbde8383e5b182c09d5249c786854963a0346dcc201d9
1501+ containerID: docker://a51c0415501483cb0e1d547b1bd8a5854f7da27d0d36e1be39b8cee584d3dc76
1502 exitCode: 0
1503- finishedAt: "2019-11-21T12:49:01Z"
1504+ finishedAt: "2019-10-25T16:15:04Z"
1505 reason: Completed
1506- startedAt: "2019-11-21T12:49:01Z"
1507- - containerID: docker://4550a34fa5a6c4a76fa0eba3ecd2ee93da9173364a5e1d2dc87089a8dc0522e6
1508+ startedAt: "2019-10-25T16:15:03Z"
1509+ - containerID: docker://e69321a69f73ae1ec29aa1b989bc8e698d19953cafa06607ee9b242e1c4ccb2d
1510 image: calico/pod2daemon-flexvol:v3.8.0
1511 imageID: docker-pullable://calico/pod2daemon-flexvol@sha256:6ec8b823e5ce3440318edfcdd2ab8b6660110782713f24f53dac5a3c227afb11
1512 lastState: {}
1513@@ -2402,27 +2282,27 @@
1514 restartCount: 0
1515 state:
1516 terminated:
1517- containerID: docker://4550a34fa5a6c4a76fa0eba3ecd2ee93da9173364a5e1d2dc87089a8dc0522e6
1518+ containerID: docker://e69321a69f73ae1ec29aa1b989bc8e698d19953cafa06607ee9b242e1c4ccb2d
1519 exitCode: 0
1520- finishedAt: "2019-11-21T12:49:02Z"
1521+ finishedAt: "2019-10-25T16:15:07Z"
1522 reason: Completed
1523- startedAt: "2019-11-21T12:49:02Z"
1524+ startedAt: "2019-10-25T16:15:07Z"
1525 phase: Running
1526- podIP: 172.16.0.78
1527+ podIP: 172.16.0.151
1528 qosClass: Burstable
1529- startTime: "2019-11-07T13:10:04Z"
1530+ startTime: "2019-10-23T12:37:24Z"
1531 - apiVersion: v1
1532 kind: Pod
1533 metadata:
1534 annotations:
1535- cni.projectcalico.org/podIP: 192.168.50.12/32
1536+ cni.projectcalico.org/podIP: 192.168.132.132/32
1537 kubernetes.io/psp: privileged-psp
1538- creationTimestamp: "2019-11-21T16:54:33Z"
1539+ creationTimestamp: "2019-10-23T09:58:18Z"
1540 generateName: coredns-5c98db65d4-
1541 labels:
1542 k8s-app: kube-dns
1543 pod-template-hash: 5c98db65d4
1544- name: coredns-5c98db65d4-cqn87
1545+ name: coredns-5c98db65d4-5xmnt
1546 namespace: kube-system
1547 ownerReferences:
1548 - apiVersion: apps/v1
1549@@ -2430,10 +2310,10 @@
1550 controller: true
1551 kind: ReplicaSet
1552 name: coredns-5c98db65d4
1553- uid: 3a55b4be-f788-4173-a21f-b6b2f0e7bcda
1554- resourceVersion: "2584866"
1555- selfLink: /api/v1/namespaces/kube-system/pods/coredns-5c98db65d4-cqn87
1556- uid: 7f7e33c3-93a3-4e1b-bf5c-bf411a3cd320
1557+ uid: 2bf774e2-a509-4e2f-bbfb-022ebdd290f5
1558+ resourceVersion: "5050850"
1559+ selfLink: /api/v1/namespaces/kube-system/pods/coredns-5c98db65d4-5xmnt
1560+ uid: 0663f8bd-84f9-4114-a55b-5c7c04d1c0c3
1561 spec:
1562 containers:
1563 - args:
1564@@ -2492,11 +2372,11 @@
1565 name: config-volume
1566 readOnly: true
1567 - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
1568- name: coredns-token-lffqk
1569+ name: coredns-token-pz4ch
1570 readOnly: true
1571 dnsPolicy: Default
1572 enableServiceLinks: true
1573- nodeName: tools-k8s-worker-1
1574+ nodeName: toolsbeta-test-k8s-control-1
1575 nodeSelector:
1576 beta.kubernetes.io/os: linux
1577 priority: 2000000000
1578@@ -2528,56 +2408,62 @@
1579 path: Corefile
1580 name: coredns
1581 name: config-volume
1582- - name: coredns-token-lffqk
1583+ - name: coredns-token-pz4ch
1584 secret:
1585 defaultMode: 420
1586- secretName: coredns-token-lffqk
1587+ secretName: coredns-token-pz4ch
1588 status:
1589 conditions:
1590 - lastProbeTime: null
1591- lastTransitionTime: "2019-11-21T16:54:33Z"
1592+ lastTransitionTime: "2019-10-23T09:58:39Z"
1593 status: "True"
1594 type: Initialized
1595 - lastProbeTime: null
1596- lastTransitionTime: "2019-11-21T16:54:43Z"
1597+ lastTransitionTime: "2019-11-21T13:50:03Z"
1598 status: "True"
1599 type: Ready
1600 - lastProbeTime: null
1601- lastTransitionTime: "2019-11-21T16:54:43Z"
1602+ lastTransitionTime: "2019-11-21T13:50:03Z"
1603 status: "True"
1604 type: ContainersReady
1605 - lastProbeTime: null
1606- lastTransitionTime: "2019-11-21T16:54:33Z"
1607+ lastTransitionTime: "2019-10-23T09:58:39Z"
1608 status: "True"
1609 type: PodScheduled
1610 containerStatuses:
1611- - containerID: docker://a8135c4cd19d065cae886baa96071919c1898152e645eea35e731a8860009e55
1612+ - containerID: docker://67fe56f0be664a01162608aac818a195876279c6bbc982ac8902366938a9ff76
1613 image: k8s.gcr.io/coredns:1.3.1
1614 imageID: docker-pullable://k8s.gcr.io/coredns@sha256:02382353821b12c21b062c59184e227e001079bb13ebd01f9d3270ba0fcbf1e4
1615- lastState: {}
1616+ lastState:
1617+ terminated:
1618+ containerID: docker://8a2a256b058fc120bb25c10289f724149da081cf1a93444021eb853d0a56687f
1619+ exitCode: 2
1620+ finishedAt: "2019-11-21T13:49:34Z"
1621+ reason: Error
1622+ startedAt: "2019-11-21T13:49:34Z"
1623 name: coredns
1624 ready: true
1625- restartCount: 0
1626+ restartCount: 7
1627 state:
1628 running:
1629- startedAt: "2019-11-21T16:54:34Z"
1630- hostIP: 172.16.0.78
1631+ startedAt: "2019-11-21T13:49:56Z"
1632+ hostIP: 172.16.0.112
1633 phase: Running
1634- podIP: 192.168.50.12
1635+ podIP: 192.168.132.132
1636 qosClass: Burstable
1637- startTime: "2019-11-21T16:54:33Z"
1638+ startTime: "2019-10-23T09:58:39Z"
1639 - apiVersion: v1
1640 kind: Pod
1641 metadata:
1642 annotations:
1643- cni.projectcalico.org/podIP: 192.168.34.139/32
1644+ cni.projectcalico.org/podIP: 192.168.230.2/32
1645 kubernetes.io/psp: privileged-psp
1646- creationTimestamp: "2019-11-21T16:56:17Z"
1647+ creationTimestamp: "2019-11-06T15:47:57Z"
1648 generateName: coredns-5c98db65d4-
1649 labels:
1650 k8s-app: kube-dns
1651 pod-template-hash: 5c98db65d4
1652- name: coredns-5c98db65d4-ql89m
1653+ name: coredns-5c98db65d4-j2pxb
1654 namespace: kube-system
1655 ownerReferences:
1656 - apiVersion: apps/v1
1657@@ -2585,10 +2471,10 @@
1658 controller: true
1659 kind: ReplicaSet
1660 name: coredns-5c98db65d4
1661- uid: 3a55b4be-f788-4173-a21f-b6b2f0e7bcda
1662- resourceVersion: "2585103"
1663- selfLink: /api/v1/namespaces/kube-system/pods/coredns-5c98db65d4-ql89m
1664- uid: 662d0f1d-ad04-4817-a2d4-76b1b3b799f5
1665+ uid: 2bf774e2-a509-4e2f-bbfb-022ebdd290f5
1666+ resourceVersion: "5052483"
1667+ selfLink: /api/v1/namespaces/kube-system/pods/coredns-5c98db65d4-j2pxb
1668+ uid: 658f98ee-6da8-4b0b-a039-fc5bbaaf695b
1669 spec:
1670 containers:
1671 - args:
1672@@ -2647,11 +2533,11 @@
1673 name: config-volume
1674 readOnly: true
1675 - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
1676- name: coredns-token-lffqk
1677+ name: coredns-token-pz4ch
1678 readOnly: true
1679 dnsPolicy: Default
1680 enableServiceLinks: true
1681- nodeName: tools-k8s-worker-2
1682+ nodeName: toolsbeta-test-k8s-control-3
1683 nodeSelector:
1684 beta.kubernetes.io/os: linux
1685 priority: 2000000000
1686@@ -2683,67 +2569,73 @@
1687 path: Corefile
1688 name: coredns
1689 name: config-volume
1690- - name: coredns-token-lffqk
1691+ - name: coredns-token-pz4ch
1692 secret:
1693 defaultMode: 420
1694- secretName: coredns-token-lffqk
1695+ secretName: coredns-token-pz4ch
1696 status:
1697 conditions:
1698 - lastProbeTime: null
1699- lastTransitionTime: "2019-11-21T16:56:17Z"
1700+ lastTransitionTime: "2019-11-06T15:47:57Z"
1701 status: "True"
1702 type: Initialized
1703 - lastProbeTime: null
1704- lastTransitionTime: "2019-11-21T16:56:24Z"
1705+ lastTransitionTime: "2019-11-21T14:01:27Z"
1706 status: "True"
1707 type: Ready
1708 - lastProbeTime: null
1709- lastTransitionTime: "2019-11-21T16:56:24Z"
1710+ lastTransitionTime: "2019-11-21T14:01:27Z"
1711 status: "True"
1712 type: ContainersReady
1713 - lastProbeTime: null
1714- lastTransitionTime: "2019-11-21T16:56:17Z"
1715+ lastTransitionTime: "2019-11-06T15:47:57Z"
1716 status: "True"
1717 type: PodScheduled
1718 containerStatuses:
1719- - containerID: docker://307fe2bf054fc104ad0f35be00ae792e07c82d0eeb8dbb492afe8744263ae0f9
1720+ - containerID: docker://87e7a2026faa2fe33ff80b73030fc850f3417a74e935d74cf79847f790372e00
1721 image: k8s.gcr.io/coredns:1.3.1
1722 imageID: docker-pullable://k8s.gcr.io/coredns@sha256:02382353821b12c21b062c59184e227e001079bb13ebd01f9d3270ba0fcbf1e4
1723- lastState: {}
1724+ lastState:
1725+ terminated:
1726+ containerID: docker://c7a57ee70da2b06db0dba5b5c1f1604f6f1a19551efac543445624c156302f5c
1727+ exitCode: 2
1728+ finishedAt: "2019-11-21T14:01:00Z"
1729+ reason: Error
1730+ startedAt: "2019-11-21T14:01:00Z"
1731 name: coredns
1732 ready: true
1733- restartCount: 0
1734+ restartCount: 5
1735 state:
1736 running:
1737- startedAt: "2019-11-21T16:56:18Z"
1738- hostIP: 172.16.0.103
1739+ startedAt: "2019-11-21T14:01:17Z"
1740+ hostIP: 172.16.0.136
1741 phase: Running
1742- podIP: 192.168.34.139
1743+ podIP: 192.168.230.2
1744 qosClass: Burstable
1745- startTime: "2019-11-21T16:56:17Z"
1746+ startTime: "2019-11-06T15:47:57Z"
1747 - apiVersion: v1
1748 kind: Pod
1749 metadata:
1750 annotations:
1751- kubernetes.io/config.hash: de5b4a204dca6a22fe626f330482ddd4
1752- kubernetes.io/config.mirror: de5b4a204dca6a22fe626f330482ddd4
1753- kubernetes.io/config.seen: "2019-11-21T10:34:16.239948061Z"
1754+ kubernetes.io/config.hash: 5c8a94aee865c6e560a2f0e683d4f45a
1755+ kubernetes.io/config.mirror: 5c8a94aee865c6e560a2f0e683d4f45a
1756+ kubernetes.io/config.seen: "2019-11-21T14:00:58.889893448Z"
1757 kubernetes.io/config.source: file
1758 kubernetes.io/psp: privileged-psp
1759- creationTimestamp: "2019-11-21T10:34:16Z"
1760+ creationTimestamp: "2019-11-21T14:00:58Z"
1761 labels:
1762 component: kube-apiserver
1763 tier: control-plane
1764- name: kube-apiserver-tools-k8s-control-1
1765+ name: kube-apiserver-toolsbeta-test-k8s-control-1
1766 namespace: kube-system
1767- resourceVersion: "2554887"
1768- selfLink: /api/v1/namespaces/kube-system/pods/kube-apiserver-tools-k8s-control-1
1769- uid: 14e24539-17f0-445f-8a12-cae12eab032c
1770+ resourceVersion: "5052415"
1771+ selfLink: /api/v1/namespaces/kube-system/pods/kube-apiserver-toolsbeta-test-k8s-control-1
1772+ uid: 22cb6080-8139-4bc1-8372-31cde750d2b3
1773 spec:
1774 containers:
1775 - command:
1776 - kube-apiserver
1777- - --advertise-address=172.16.0.104
1778+ - --advertise-address=172.16.0.112
1779 - --allow-privileged=true
1780 - --authorization-mode=Node,RBAC
1781 - --client-ca-file=/etc/kubernetes/pki/ca.crt
1782@@ -2752,7 +2644,7 @@
1783 - --etcd-cafile=/etc/kubernetes/pki/puppet_ca.pem
1784 - --etcd-certfile=/etc/kubernetes/pki/puppet_etcd_client.crt
1785 - --etcd-keyfile=/etc/kubernetes/pki/puppet_etcd_client.key
1786- - --etcd-servers=https://tools-k8s-etcd-4.tools.eqiad.wmflabs:2379,https://tools-k8s-etcd-5.tools.eqiad.wmflabs:2379,https://tools-k8s-etcd-6.tools.eqiad.wmflabs:2379
1787+ - --etcd-servers=https://toolsbeta-test-k8s-etcd-1.toolsbeta.eqiad.wmflabs:2379,https://toolsbeta-test-k8s-etcd-2.toolsbeta.eqiad.wmflabs:2379,https://toolsbeta-test-k8s-etcd-3.toolsbeta.eqiad.wmflabs:2379
1788 - --insecure-port=0
1789 - --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
1790 - --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
1791@@ -2775,7 +2667,7 @@
1792 livenessProbe:
1793 failureThreshold: 8
1794 httpGet:
1795- host: 172.16.0.104
1796+ host: 172.16.0.112
1797 path: /healthz
1798 port: 6443
1799 scheme: HTTPS
1800@@ -2808,7 +2700,7 @@
1801 dnsPolicy: ClusterFirst
1802 enableServiceLinks: true
1803 hostNetwork: true
1804- nodeName: tools-k8s-control-1
1805+ nodeName: toolsbeta-test-k8s-control-1
1806 priority: 2000000000
1807 priorityClassName: system-cluster-critical
1808 restartPolicy: Always
1809@@ -2842,66 +2734,60 @@
1810 status:
1811 conditions:
1812 - lastProbeTime: null
1813- lastTransitionTime: "2019-11-21T11:43:58Z"
1814+ lastTransitionTime: "2019-11-21T14:01:07Z"
1815 status: "True"
1816 type: Initialized
1817 - lastProbeTime: null
1818- lastTransitionTime: "2019-11-21T11:43:58Z"
1819+ lastTransitionTime: "2019-11-21T14:01:07Z"
1820 status: "True"
1821 type: Ready
1822 - lastProbeTime: null
1823- lastTransitionTime: "2019-11-21T11:43:58Z"
1824+ lastTransitionTime: "2019-11-21T14:01:07Z"
1825 status: "True"
1826 type: ContainersReady
1827 - lastProbeTime: null
1828- lastTransitionTime: "2019-11-21T11:43:58Z"
1829+ lastTransitionTime: "2019-11-21T14:01:07Z"
1830 status: "True"
1831 type: PodScheduled
1832 containerStatuses:
1833- - containerID: docker://a518b17f8322e3ce3e87b22a8ffb7f5faa92365a9885ece9dc31c6c1d4f919f9
1834+ - containerID: docker://6eaa7ec3b56c0a85f8a99d03ef37d70de3cac64c1d56742e91600c37ea9174cd
1835 image: k8s.gcr.io/kube-apiserver:v1.15.6
1836 imageID: docker-pullable://k8s.gcr.io/kube-apiserver@sha256:b50e135bec86da5378ba2f8852d5bb966098d34abcb16510e36150d7b7dfd7b1
1837- lastState:
1838- terminated:
1839- containerID: docker://0c7cc8210a7d1ef83680681fb08f8297696a6344a3742e0c146921965efca27f
1840- exitCode: 0
1841- finishedAt: "2019-11-21T12:48:30Z"
1842- reason: Completed
1843- startedAt: "2019-11-21T10:34:17Z"
1844+ lastState: {}
1845 name: kube-apiserver
1846 ready: true
1847- restartCount: 1
1848+ restartCount: 0
1849 state:
1850 running:
1851- startedAt: "2019-11-21T12:48:45Z"
1852- hostIP: 172.16.0.104
1853+ startedAt: "2019-11-21T14:01:00Z"
1854+ hostIP: 172.16.0.112
1855 phase: Running
1856- podIP: 172.16.0.104
1857+ podIP: 172.16.0.112
1858 qosClass: Burstable
1859- startTime: "2019-11-21T11:43:58Z"
1860+ startTime: "2019-11-21T14:01:07Z"
1861 - apiVersion: v1
1862 kind: Pod
1863 metadata:
1864 annotations:
1865- kubernetes.io/config.hash: 9509d4b9d68cf5ab6d169a0e557b3a0a
1866- kubernetes.io/config.mirror: 9509d4b9d68cf5ab6d169a0e557b3a0a
1867- kubernetes.io/config.seen: "2019-11-21T11:47:29.324595076Z"
1868+ kubernetes.io/config.hash: 4fd84db0bea1bcf8d080949057b38c17
1869+ kubernetes.io/config.mirror: 4fd84db0bea1bcf8d080949057b38c17
1870+ kubernetes.io/config.seen: "2019-11-21T13:48:22.751552363Z"
1871 kubernetes.io/config.source: file
1872 kubernetes.io/psp: privileged-psp
1873- creationTimestamp: "2019-11-21T11:47:29Z"
1874+ creationTimestamp: "2019-11-21T13:48:22Z"
1875 labels:
1876 component: kube-apiserver
1877 tier: control-plane
1878- name: kube-apiserver-tools-k8s-control-2
1879+ name: kube-apiserver-toolsbeta-test-k8s-control-2
1880 namespace: kube-system
1881- resourceVersion: "2554867"
1882- selfLink: /api/v1/namespaces/kube-system/pods/kube-apiserver-tools-k8s-control-2
1883- uid: ab103558-d949-4bcc-8430-f800b3f5abfa
1884+ resourceVersion: "5050251"
1885+ selfLink: /api/v1/namespaces/kube-system/pods/kube-apiserver-toolsbeta-test-k8s-control-2
1886+ uid: e8924566-aa9e-4dbf-81b2-93e37d13d25a
1887 spec:
1888 containers:
1889 - command:
1890 - kube-apiserver
1891- - --advertise-address=172.16.0.93
1892+ - --advertise-address=172.16.0.137
1893 - --allow-privileged=true
1894 - --authorization-mode=Node,RBAC
1895 - --client-ca-file=/etc/kubernetes/pki/ca.crt
1896@@ -2910,7 +2796,7 @@
1897 - --etcd-cafile=/etc/kubernetes/pki/puppet_ca.pem
1898 - --etcd-certfile=/etc/kubernetes/pki/puppet_etcd_client.crt
1899 - --etcd-keyfile=/etc/kubernetes/pki/puppet_etcd_client.key
1900- - --etcd-servers=https://tools-k8s-etcd-4.tools.eqiad.wmflabs:2379,https://tools-k8s-etcd-5.tools.eqiad.wmflabs:2379,https://tools-k8s-etcd-6.tools.eqiad.wmflabs:2379
1901+ - --etcd-servers=https://toolsbeta-test-k8s-etcd-1.toolsbeta.eqiad.wmflabs:2379,https://toolsbeta-test-k8s-etcd-2.toolsbeta.eqiad.wmflabs:2379,https://toolsbeta-test-k8s-etcd-3.toolsbeta.eqiad.wmflabs:2379
1902 - --insecure-port=0
1903 - --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
1904 - --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
1905@@ -2933,7 +2819,7 @@
1906 livenessProbe:
1907 failureThreshold: 8
1908 httpGet:
1909- host: 172.16.0.93
1910+ host: 172.16.0.137
1911 path: /healthz
1912 port: 6443
1913 scheme: HTTPS
1914@@ -2966,7 +2852,7 @@
1915 dnsPolicy: ClusterFirst
1916 enableServiceLinks: true
1917 hostNetwork: true
1918- nodeName: tools-k8s-control-2
1919+ nodeName: toolsbeta-test-k8s-control-2
1920 priority: 2000000000
1921 priorityClassName: system-cluster-critical
1922 restartPolicy: Always
1923@@ -3000,66 +2886,60 @@
1924 status:
1925 conditions:
1926 - lastProbeTime: null
1927- lastTransitionTime: "2019-11-21T12:48:46Z"
1928+ lastTransitionTime: "2019-10-30T17:46:36Z"
1929 status: "True"
1930 type: Initialized
1931 - lastProbeTime: null
1932- lastTransitionTime: "2019-11-21T12:48:48Z"
1933+ lastTransitionTime: "2019-11-21T13:48:24Z"
1934 status: "True"
1935 type: Ready
1936 - lastProbeTime: null
1937- lastTransitionTime: "2019-11-21T12:48:48Z"
1938+ lastTransitionTime: "2019-11-21T13:48:24Z"
1939 status: "True"
1940 type: ContainersReady
1941 - lastProbeTime: null
1942- lastTransitionTime: "2019-11-21T12:48:46Z"
1943+ lastTransitionTime: "2019-10-30T17:46:36Z"
1944 status: "True"
1945 type: PodScheduled
1946 containerStatuses:
1947- - containerID: docker://773b565f6c08d18f1557b33f0655297f0b1735920d6de9394340509def69b3c5
1948+ - containerID: docker://2d1839d95ab3d0c3fdc1a6cdf8174f89d1e40b4b4829e903ed6ff52e0a1b4630
1949 image: k8s.gcr.io/kube-apiserver:v1.15.6
1950 imageID: docker-pullable://k8s.gcr.io/kube-apiserver@sha256:b50e135bec86da5378ba2f8852d5bb966098d34abcb16510e36150d7b7dfd7b1
1951- lastState:
1952- terminated:
1953- containerID: docker://0ee3c1bba627a18da9ef5b38d560d9291fb256297e4bb9ea2560b3ddd47afca0
1954- exitCode: 0
1955- finishedAt: "2019-11-21T12:48:30Z"
1956- reason: Completed
1957- startedAt: "2019-11-21T11:47:30Z"
1958+ lastState: {}
1959 name: kube-apiserver
1960 ready: true
1961- restartCount: 1
1962+ restartCount: 0
1963 state:
1964 running:
1965- startedAt: "2019-11-21T12:48:47Z"
1966- hostIP: 172.16.0.93
1967+ startedAt: "2019-11-21T13:48:23Z"
1968+ hostIP: 172.16.0.137
1969 phase: Running
1970- podIP: 172.16.0.93
1971+ podIP: 172.16.0.137
1972 qosClass: Burstable
1973- startTime: "2019-11-21T12:48:46Z"
1974+ startTime: "2019-10-30T17:46:36Z"
1975 - apiVersion: v1
1976 kind: Pod
1977 metadata:
1978 annotations:
1979- kubernetes.io/config.hash: 61aef328017b7e6f2388bb619d84477d
1980- kubernetes.io/config.mirror: 61aef328017b7e6f2388bb619d84477d
1981- kubernetes.io/config.seen: "2019-11-21T11:47:28.820668514Z"
1982+ kubernetes.io/config.hash: 97c8f6af2603787b4c3a2f5aa835d56c
1983+ kubernetes.io/config.mirror: 97c8f6af2603787b4c3a2f5aa835d56c
1984+ kubernetes.io/config.seen: "2019-11-21T13:49:33.020645317Z"
1985 kubernetes.io/config.source: file
1986 kubernetes.io/psp: privileged-psp
1987- creationTimestamp: "2019-11-21T11:47:28Z"
1988+ creationTimestamp: "2019-11-21T13:49:33Z"
1989 labels:
1990 component: kube-apiserver
1991 tier: control-plane
1992- name: kube-apiserver-tools-k8s-control-3
1993+ name: kube-apiserver-toolsbeta-test-k8s-control-3
1994 namespace: kube-system
1995- resourceVersion: "2554890"
1996- selfLink: /api/v1/namespaces/kube-system/pods/kube-apiserver-tools-k8s-control-3
1997- uid: 5e68e88a-06ea-487a-b9da-49a4b5b92e95
1998+ resourceVersion: "5050708"
1999+ selfLink: /api/v1/namespaces/kube-system/pods/kube-apiserver-toolsbeta-test-k8s-control-3
2000+ uid: f67c541d-cda6-4a13-9724-75c6b35f4ffe
2001 spec:
2002 containers:
2003 - command:
2004 - kube-apiserver
2005- - --advertise-address=172.16.0.96
2006+ - --advertise-address=172.16.0.136
2007 - --allow-privileged=true
2008 - --authorization-mode=Node,RBAC
2009 - --client-ca-file=/etc/kubernetes/pki/ca.crt
2010@@ -3068,7 +2948,7 @@
2011 - --etcd-cafile=/etc/kubernetes/pki/puppet_ca.pem
2012 - --etcd-certfile=/etc/kubernetes/pki/puppet_etcd_client.crt
2013 - --etcd-keyfile=/etc/kubernetes/pki/puppet_etcd_client.key
2014- - --etcd-servers=https://tools-k8s-etcd-4.tools.eqiad.wmflabs:2379,https://tools-k8s-etcd-5.tools.eqiad.wmflabs:2379,https://tools-k8s-etcd-6.tools.eqiad.wmflabs:2379
2015+ - --etcd-servers=https://toolsbeta-test-k8s-etcd-1.toolsbeta.eqiad.wmflabs:2379,https://toolsbeta-test-k8s-etcd-2.toolsbeta.eqiad.wmflabs:2379,https://toolsbeta-test-k8s-etcd-3.toolsbeta.eqiad.wmflabs:2379
2016 - --insecure-port=0
2017 - --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
2018 - --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
2019@@ -3091,7 +2971,7 @@
2020 livenessProbe:
2021 failureThreshold: 8
2022 httpGet:
2023- host: 172.16.0.96
2024+ host: 172.16.0.136
2025 path: /healthz
2026 port: 6443
2027 scheme: HTTPS
2028@@ -3124,7 +3004,7 @@
2029 dnsPolicy: ClusterFirst
2030 enableServiceLinks: true
2031 hostNetwork: true
2032- nodeName: tools-k8s-control-3
2033+ nodeName: toolsbeta-test-k8s-control-3
2034 priority: 2000000000
2035 priorityClassName: system-cluster-critical
2036 restartPolicy: Always
2037@@ -3158,61 +3038,55 @@
2038 status:
2039 conditions:
2040 - lastProbeTime: null
2041- lastTransitionTime: "2019-11-21T11:43:57Z"
2042+ lastTransitionTime: "2019-10-30T17:47:48Z"
2043 status: "True"
2044 type: Initialized
2045 - lastProbeTime: null
2046- lastTransitionTime: "2019-11-21T11:47:30Z"
2047+ lastTransitionTime: "2019-11-21T13:49:34Z"
2048 status: "True"
2049 type: Ready
2050 - lastProbeTime: null
2051- lastTransitionTime: "2019-11-21T11:47:30Z"
2052+ lastTransitionTime: "2019-11-21T13:49:34Z"
2053 status: "True"
2054 type: ContainersReady
2055 - lastProbeTime: null
2056- lastTransitionTime: "2019-11-21T11:43:57Z"
2057+ lastTransitionTime: "2019-10-30T17:47:48Z"
2058 status: "True"
2059 type: PodScheduled
2060 containerStatuses:
2061- - containerID: docker://eec220d285c95d0d3ea741a3ce85d8a94aa59c93bbfb71cd318c57390248b96a
2062+ - containerID: docker://978d7c546c0ba9b01d6b57993956c02fefdd03a02eaf411ba43bba6d6bbc82cb
2063 image: k8s.gcr.io/kube-apiserver:v1.15.6
2064 imageID: docker-pullable://k8s.gcr.io/kube-apiserver@sha256:b50e135bec86da5378ba2f8852d5bb966098d34abcb16510e36150d7b7dfd7b1
2065- lastState:
2066- terminated:
2067- containerID: docker://1f55de19b19c2824c87d9c9eff48cf1ba8a1c5835123e18349c564ee760dbc0b
2068- exitCode: 0
2069- finishedAt: "2019-11-21T12:48:30Z"
2070- reason: Completed
2071- startedAt: "2019-11-21T11:47:29Z"
2072+ lastState: {}
2073 name: kube-apiserver
2074 ready: true
2075- restartCount: 1
2076+ restartCount: 0
2077 state:
2078 running:
2079- startedAt: "2019-11-21T12:48:42Z"
2080- hostIP: 172.16.0.96
2081+ startedAt: "2019-11-21T13:49:34Z"
2082+ hostIP: 172.16.0.136
2083 phase: Running
2084- podIP: 172.16.0.96
2085+ podIP: 172.16.0.136
2086 qosClass: Burstable
2087- startTime: "2019-11-21T11:43:57Z"
2088+ startTime: "2019-10-30T17:47:48Z"
2089 - apiVersion: v1
2090 kind: Pod
2091 metadata:
2092 annotations:
2093 kubernetes.io/config.hash: 5a319af4eacbe155f497ddaddaf59398
2094 kubernetes.io/config.mirror: 5a319af4eacbe155f497ddaddaf59398
2095- kubernetes.io/config.seen: "2019-11-21T10:34:18.536626401Z"
2096+ kubernetes.io/config.seen: "2019-11-21T14:01:08.206837154Z"
2097 kubernetes.io/config.source: file
2098 kubernetes.io/psp: privileged-psp
2099- creationTimestamp: "2019-11-21T10:34:18Z"
2100+ creationTimestamp: "2019-11-21T14:01:08Z"
2101 labels:
2102 component: kube-controller-manager
2103 tier: control-plane
2104- name: kube-controller-manager-tools-k8s-control-1
2105+ name: kube-controller-manager-toolsbeta-test-k8s-control-1
2106 namespace: kube-system
2107- resourceVersion: "2554908"
2108- selfLink: /api/v1/namespaces/kube-system/pods/kube-controller-manager-tools-k8s-control-1
2109- uid: d2e010a4-0129-4cda-a640-2dfb52077a8e
2110+ resourceVersion: "5052442"
2111+ selfLink: /api/v1/namespaces/kube-system/pods/kube-controller-manager-toolsbeta-test-k8s-control-1
2112+ uid: 2c8cda6e-06d4-43e3-a99a-7f5ac0f2a57f
2113 spec:
2114 containers:
2115 - command:
2116@@ -3276,7 +3150,7 @@
2117 dnsPolicy: ClusterFirst
2118 enableServiceLinks: true
2119 hostNetwork: true
2120- nodeName: tools-k8s-control-1
2121+ nodeName: toolsbeta-test-k8s-control-1
2122 priority: 2000000000
2123 priorityClassName: system-cluster-critical
2124 restartPolicy: Always
2125@@ -3318,61 +3192,55 @@
2126 status:
2127 conditions:
2128 - lastProbeTime: null
2129- lastTransitionTime: "2019-11-21T11:43:58Z"
2130+ lastTransitionTime: "2019-11-21T13:11:05Z"
2131 status: "True"
2132 type: Initialized
2133 - lastProbeTime: null
2134- lastTransitionTime: "2019-11-21T11:43:58Z"
2135+ lastTransitionTime: "2019-11-21T14:01:10Z"
2136 status: "True"
2137 type: Ready
2138 - lastProbeTime: null
2139- lastTransitionTime: "2019-11-21T11:43:58Z"
2140+ lastTransitionTime: "2019-11-21T14:01:10Z"
2141 status: "True"
2142 type: ContainersReady
2143 - lastProbeTime: null
2144- lastTransitionTime: "2019-11-21T11:43:58Z"
2145+ lastTransitionTime: "2019-11-21T13:11:05Z"
2146 status: "True"
2147 type: PodScheduled
2148 containerStatuses:
2149- - containerID: docker://e7645727c62a1f6af96668a19d0704e198642d703b27d74c2e8ba3e5785cae3e
2150+ - containerID: docker://7190bb1d73eea121af776961e4cf50f5909b0ae4b5e88bde3b1f524107acce24
2151 image: k8s.gcr.io/kube-controller-manager:v1.15.6
2152 imageID: docker-pullable://k8s.gcr.io/kube-controller-manager@sha256:1a1ccd6546b2149f3ec8ea42608046bdf74da7dc4d46e09e78f832060425b26d
2153- lastState:
2154- terminated:
2155- containerID: docker://7f509a6b02fbb44815fcecd942d64bb099e7fa36899ddd6927de8ce50dee6ebc
2156- exitCode: 2
2157- finishedAt: "2019-11-21T12:48:30Z"
2158- reason: Error
2159- startedAt: "2019-11-21T10:34:19Z"
2160+ lastState: {}
2161 name: kube-controller-manager
2162 ready: true
2163- restartCount: 1
2164+ restartCount: 0
2165 state:
2166 running:
2167- startedAt: "2019-11-21T12:48:45Z"
2168- hostIP: 172.16.0.104
2169+ startedAt: "2019-11-21T14:01:09Z"
2170+ hostIP: 172.16.0.112
2171 phase: Running
2172- podIP: 172.16.0.104
2173+ podIP: 172.16.0.112
2174 qosClass: Burstable
2175- startTime: "2019-11-21T11:43:58Z"
2176+ startTime: "2019-11-21T13:11:05Z"
2177 - apiVersion: v1
2178 kind: Pod
2179 metadata:
2180 annotations:
2181 kubernetes.io/config.hash: 5a319af4eacbe155f497ddaddaf59398
2182 kubernetes.io/config.mirror: 5a319af4eacbe155f497ddaddaf59398
2183- kubernetes.io/config.seen: "2019-11-21T11:47:41.584794569Z"
2184+ kubernetes.io/config.seen: "2019-11-21T13:48:30.075091685Z"
2185 kubernetes.io/config.source: file
2186 kubernetes.io/psp: privileged-psp
2187- creationTimestamp: "2019-11-21T11:47:41Z"
2188+ creationTimestamp: "2019-11-21T13:48:30Z"
2189 labels:
2190 component: kube-controller-manager
2191 tier: control-plane
2192- name: kube-controller-manager-tools-k8s-control-2
2193+ name: kube-controller-manager-toolsbeta-test-k8s-control-2
2194 namespace: kube-system
2195- resourceVersion: "2554909"
2196- selfLink: /api/v1/namespaces/kube-system/pods/kube-controller-manager-tools-k8s-control-2
2197- uid: 821829cb-655b-4182-9e98-0d310d2a6d3e
2198+ resourceVersion: "5050294"
2199+ selfLink: /api/v1/namespaces/kube-system/pods/kube-controller-manager-toolsbeta-test-k8s-control-2
2200+ uid: 5fb61e80-f482-4a3a-9536-8dd7ddc2f78c
2201 spec:
2202 containers:
2203 - command:
2204@@ -3436,7 +3304,7 @@
2205 dnsPolicy: ClusterFirst
2206 enableServiceLinks: true
2207 hostNetwork: true
2208- nodeName: tools-k8s-control-2
2209+ nodeName: toolsbeta-test-k8s-control-2
2210 priority: 2000000000
2211 priorityClassName: system-cluster-critical
2212 restartPolicy: Always
2213@@ -3478,61 +3346,55 @@
2214 status:
2215 conditions:
2216 - lastProbeTime: null
2217- lastTransitionTime: "2019-11-21T12:48:46Z"
2218+ lastTransitionTime: "2019-11-21T13:11:03Z"
2219 status: "True"
2220 type: Initialized
2221 - lastProbeTime: null
2222- lastTransitionTime: "2019-11-21T12:48:48Z"
2223+ lastTransitionTime: "2019-11-21T13:48:31Z"
2224 status: "True"
2225 type: Ready
2226 - lastProbeTime: null
2227- lastTransitionTime: "2019-11-21T12:48:48Z"
2228+ lastTransitionTime: "2019-11-21T13:48:31Z"
2229 status: "True"
2230 type: ContainersReady
2231 - lastProbeTime: null
2232- lastTransitionTime: "2019-11-21T12:48:46Z"
2233+ lastTransitionTime: "2019-11-21T13:11:03Z"
2234 status: "True"
2235 type: PodScheduled
2236 containerStatuses:
2237- - containerID: docker://9554f988f250a028b16d9c2bc4c81d03f65e8fcedfe5d4da8ee42b8e4639e090
2238+ - containerID: docker://8cd9c31046563d74ffe4b1b8a1e6ca267cf51da230783d9d01673321a06fb2a7
2239 image: k8s.gcr.io/kube-controller-manager:v1.15.6
2240 imageID: docker-pullable://k8s.gcr.io/kube-controller-manager@sha256:1a1ccd6546b2149f3ec8ea42608046bdf74da7dc4d46e09e78f832060425b26d
2241- lastState:
2242- terminated:
2243- containerID: docker://606b4a17c5cde78f942bc8ce9f54e118339d623c1afd8028888678add746bb65
2244- exitCode: 2
2245- finishedAt: "2019-11-21T12:48:30Z"
2246- reason: Error
2247- startedAt: "2019-11-21T11:47:42Z"
2248+ lastState: {}
2249 name: kube-controller-manager
2250 ready: true
2251- restartCount: 1
2252+ restartCount: 0
2253 state:
2254 running:
2255- startedAt: "2019-11-21T12:48:47Z"
2256- hostIP: 172.16.0.93
2257+ startedAt: "2019-11-21T13:48:31Z"
2258+ hostIP: 172.16.0.137
2259 phase: Running
2260- podIP: 172.16.0.93
2261+ podIP: 172.16.0.137
2262 qosClass: Burstable
2263- startTime: "2019-11-21T12:48:46Z"
2264+ startTime: "2019-11-21T13:11:03Z"
2265 - apiVersion: v1
2266 kind: Pod
2267 metadata:
2268 annotations:
2269 kubernetes.io/config.hash: 5a319af4eacbe155f497ddaddaf59398
2270 kubernetes.io/config.mirror: 5a319af4eacbe155f497ddaddaf59398
2271- kubernetes.io/config.seen: "2019-11-21T11:47:41.55550058Z"
2272+ kubernetes.io/config.seen: "2019-11-21T13:49:42.600265389Z"
2273 kubernetes.io/config.source: file
2274 kubernetes.io/psp: privileged-psp
2275- creationTimestamp: "2019-11-21T11:47:41Z"
2276+ creationTimestamp: "2019-11-21T13:49:42Z"
2277 labels:
2278 component: kube-controller-manager
2279 tier: control-plane
2280- name: kube-controller-manager-tools-k8s-control-3
2281+ name: kube-controller-manager-toolsbeta-test-k8s-control-3
2282 namespace: kube-system
2283- resourceVersion: "2554898"
2284- selfLink: /api/v1/namespaces/kube-system/pods/kube-controller-manager-tools-k8s-control-3
2285- uid: dc4156b1-04de-43e6-a42e-4e3b388cabc0
2286+ resourceVersion: "5050772"
2287+ selfLink: /api/v1/namespaces/kube-system/pods/kube-controller-manager-toolsbeta-test-k8s-control-3
2288+ uid: 2d24cb65-889d-451f-932e-7aee8ca9d415
2289 spec:
2290 containers:
2291 - command:
2292@@ -3596,7 +3458,7 @@
2293 dnsPolicy: ClusterFirst
2294 enableServiceLinks: true
2295 hostNetwork: true
2296- nodeName: tools-k8s-control-3
2297+ nodeName: toolsbeta-test-k8s-control-3
2298 priority: 2000000000
2299 priorityClassName: system-cluster-critical
2300 restartPolicy: Always
2301@@ -3638,55 +3500,49 @@
2302 status:
2303 conditions:
2304 - lastProbeTime: null
2305- lastTransitionTime: "2019-11-21T11:43:57Z"
2306+ lastTransitionTime: "2019-11-21T13:11:05Z"
2307 status: "True"
2308 type: Initialized
2309 - lastProbeTime: null
2310- lastTransitionTime: "2019-11-21T11:47:43Z"
2311+ lastTransitionTime: "2019-11-21T13:49:44Z"
2312 status: "True"
2313 type: Ready
2314 - lastProbeTime: null
2315- lastTransitionTime: "2019-11-21T11:47:43Z"
2316+ lastTransitionTime: "2019-11-21T13:49:44Z"
2317 status: "True"
2318 type: ContainersReady
2319 - lastProbeTime: null
2320- lastTransitionTime: "2019-11-21T11:43:57Z"
2321+ lastTransitionTime: "2019-11-21T13:11:05Z"
2322 status: "True"
2323 type: PodScheduled
2324 containerStatuses:
2325- - containerID: docker://b05b7f2c0b7ecc12eacfb21b712ff918db7aa61eb0df18f092cf7e112466ad4a
2326+ - containerID: docker://872b43ed8729134f6b0b46f8398c284745ff323133bd69987c662ef883a4c0e8
2327 image: k8s.gcr.io/kube-controller-manager:v1.15.6
2328 imageID: docker-pullable://k8s.gcr.io/kube-controller-manager@sha256:1a1ccd6546b2149f3ec8ea42608046bdf74da7dc4d46e09e78f832060425b26d
2329- lastState:
2330- terminated:
2331- containerID: docker://e4249c0d0e5f96727e599eeb5a77235c8e3a4ee458a84d51b4d553ad863032ac
2332- exitCode: 2
2333- finishedAt: "2019-11-21T12:48:29Z"
2334- reason: Error
2335- startedAt: "2019-11-21T11:47:42Z"
2336+ lastState: {}
2337 name: kube-controller-manager
2338 ready: true
2339- restartCount: 1
2340+ restartCount: 0
2341 state:
2342 running:
2343- startedAt: "2019-11-21T12:48:42Z"
2344- hostIP: 172.16.0.96
2345+ startedAt: "2019-11-21T13:49:44Z"
2346+ hostIP: 172.16.0.136
2347 phase: Running
2348- podIP: 172.16.0.96
2349+ podIP: 172.16.0.136
2350 qosClass: Burstable
2351- startTime: "2019-11-21T11:43:57Z"
2352+ startTime: "2019-11-21T13:11:05Z"
2353 - apiVersion: v1
2354 kind: Pod
2355 metadata:
2356 annotations:
2357 kubernetes.io/psp: privileged-psp
2358- creationTimestamp: "2019-11-21T11:44:17Z"
2359+ creationTimestamp: "2019-11-21T13:49:44Z"
2360 generateName: kube-proxy-
2361 labels:
2362 controller-revision-hash: 74cdb8d98b
2363 k8s-app: kube-proxy
2364 pod-template-generation: "2"
2365- name: kube-proxy-77qj4
2366+ name: kube-proxy-4n59c
2367 namespace: kube-system
2368 ownerReferences:
2369 - apiVersion: apps/v1
2370@@ -3694,10 +3550,10 @@
2371 controller: true
2372 kind: DaemonSet
2373 name: kube-proxy
2374- uid: 450ae8e1-c9ef-415e-a983-ee2180410f3b
2375- resourceVersion: "2554995"
2376- selfLink: /api/v1/namespaces/kube-system/pods/kube-proxy-77qj4
2377- uid: 438403b3-7a57-4cfc-a207-80c3c275db15
2378+ uid: 594edc17-8735-4841-8faf-b01e84811848
2379+ resourceVersion: "5050804"
2380+ selfLink: /api/v1/namespaces/kube-system/pods/kube-proxy-4n59c
2381+ uid: da5af20a-4c20-4610-8331-bee79c383998
2382 spec:
2383 affinity:
2384 nodeAffinity:
2385@@ -3707,7 +3563,7 @@
2386 - key: metadata.name
2387 operator: In
2388 values:
2389- - tools-k8s-worker-2
2390+ - toolsbeta-test-k8s-worker-2
2391 containers:
2392 - command:
2393 - /usr/local/bin/kube-proxy
2394@@ -3736,12 +3592,12 @@
2395 name: lib-modules
2396 readOnly: true
2397 - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
2398- name: kube-proxy-token-ntj7t
2399+ name: kube-proxy-token-2qn5h
2400 readOnly: true
2401 dnsPolicy: ClusterFirst
2402 enableServiceLinks: true
2403 hostNetwork: true
2404- nodeName: tools-k8s-worker-2
2405+ nodeName: toolsbeta-test-k8s-worker-2
2406 nodeSelector:
2407 beta.kubernetes.io/os: linux
2408 priority: 2000001000
2409@@ -3790,62 +3646,56 @@
2410 path: /lib/modules
2411 type: ""
2412 name: lib-modules
2413- - name: kube-proxy-token-ntj7t
2414+ - name: kube-proxy-token-2qn5h
2415 secret:
2416 defaultMode: 420
2417- secretName: kube-proxy-token-ntj7t
2418+ secretName: kube-proxy-token-2qn5h
2419 status:
2420 conditions:
2421 - lastProbeTime: null
2422- lastTransitionTime: "2019-11-21T11:44:17Z"
2423+ lastTransitionTime: "2019-11-21T13:49:44Z"
2424 status: "True"
2425 type: Initialized
2426 - lastProbeTime: null
2427- lastTransitionTime: "2019-11-21T12:49:00Z"
2428+ lastTransitionTime: "2019-11-21T13:49:49Z"
2429 status: "True"
2430 type: Ready
2431 - lastProbeTime: null
2432- lastTransitionTime: "2019-11-21T12:49:00Z"
2433+ lastTransitionTime: "2019-11-21T13:49:49Z"
2434 status: "True"
2435 type: ContainersReady
2436 - lastProbeTime: null
2437- lastTransitionTime: "2019-11-21T11:44:17Z"
2438+ lastTransitionTime: "2019-11-21T13:49:44Z"
2439 status: "True"
2440 type: PodScheduled
2441 containerStatuses:
2442- - containerID: docker://e3089db3bccbfd8a219f16625e46cd91c11cb0aa4cc325036a5ecd1ad31fb866
2443+ - containerID: docker://459769e765b2cec915bfea1b4e288ef2959fb8abbf837b86569cf1950dda8995
2444 image: k8s.gcr.io/kube-proxy:v1.15.6
2445 imageID: docker-pullable://k8s.gcr.io/kube-proxy@sha256:ef245ddefef697c8b42611c237603acf41bfdb2b7ec3e434b7c3592864dcfff8
2446- lastState:
2447- terminated:
2448- containerID: docker://59c46cc257eecca983bb0a0331fda78b6142082855785408c893934fa22af899
2449- exitCode: 2
2450- finishedAt: "2019-11-21T12:48:30Z"
2451- reason: Error
2452- startedAt: "2019-11-21T11:44:20Z"
2453+ lastState: {}
2454 name: kube-proxy
2455 ready: true
2456- restartCount: 1
2457+ restartCount: 0
2458 state:
2459 running:
2460- startedAt: "2019-11-21T12:49:00Z"
2461- hostIP: 172.16.0.103
2462+ startedAt: "2019-11-21T13:49:48Z"
2463+ hostIP: 172.16.0.151
2464 phase: Running
2465- podIP: 172.16.0.103
2466+ podIP: 172.16.0.151
2467 qosClass: BestEffort
2468- startTime: "2019-11-21T11:44:17Z"
2469+ startTime: "2019-11-21T13:49:44Z"
2470 - apiVersion: v1
2471 kind: Pod
2472 metadata:
2473 annotations:
2474 kubernetes.io/psp: privileged-psp
2475- creationTimestamp: "2019-11-21T11:44:22Z"
2476+ creationTimestamp: "2019-11-21T13:49:54Z"
2477 generateName: kube-proxy-
2478 labels:
2479 controller-revision-hash: 74cdb8d98b
2480 k8s-app: kube-proxy
2481 pod-template-generation: "2"
2482- name: kube-proxy-7ssq4
2483+ name: kube-proxy-8c4lp
2484 namespace: kube-system
2485 ownerReferences:
2486 - apiVersion: apps/v1
2487@@ -3853,10 +3703,10 @@
2488 controller: true
2489 kind: DaemonSet
2490 name: kube-proxy
2491- uid: 450ae8e1-c9ef-415e-a983-ee2180410f3b
2492- resourceVersion: "2554873"
2493- selfLink: /api/v1/namespaces/kube-system/pods/kube-proxy-7ssq4
2494- uid: dd970e94-04d7-4b4a-80d1-456eb036149c
2495+ uid: 594edc17-8735-4841-8faf-b01e84811848
2496+ resourceVersion: "5050879"
2497+ selfLink: /api/v1/namespaces/kube-system/pods/kube-proxy-8c4lp
2498+ uid: aec07fc9-1d89-4ca4-b866-78a0ba380f9d
2499 spec:
2500 affinity:
2501 nodeAffinity:
2502@@ -3866,7 +3716,7 @@
2503 - key: metadata.name
2504 operator: In
2505 values:
2506- - tools-k8s-control-2
2507+ - toolsbeta-test-k8s-worker-1
2508 containers:
2509 - command:
2510 - /usr/local/bin/kube-proxy
2511@@ -3895,12 +3745,12 @@
2512 name: lib-modules
2513 readOnly: true
2514 - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
2515- name: kube-proxy-token-ntj7t
2516+ name: kube-proxy-token-2qn5h
2517 readOnly: true
2518 dnsPolicy: ClusterFirst
2519 enableServiceLinks: true
2520 hostNetwork: true
2521- nodeName: tools-k8s-control-2
2522+ nodeName: toolsbeta-test-k8s-worker-1
2523 nodeSelector:
2524 beta.kubernetes.io/os: linux
2525 priority: 2000001000
2526@@ -3949,62 +3799,56 @@
2527 path: /lib/modules
2528 type: ""
2529 name: lib-modules
2530- - name: kube-proxy-token-ntj7t
2531+ - name: kube-proxy-token-2qn5h
2532 secret:
2533 defaultMode: 420
2534- secretName: kube-proxy-token-ntj7t
2535+ secretName: kube-proxy-token-2qn5h
2536 status:
2537 conditions:
2538 - lastProbeTime: null
2539- lastTransitionTime: "2019-11-21T11:44:22Z"
2540+ lastTransitionTime: "2019-11-21T13:50:03Z"
2541 status: "True"
2542 type: Initialized
2543 - lastProbeTime: null
2544- lastTransitionTime: "2019-11-21T12:48:52Z"
2545+ lastTransitionTime: "2019-11-21T13:50:09Z"
2546 status: "True"
2547 type: Ready
2548 - lastProbeTime: null
2549- lastTransitionTime: "2019-11-21T12:48:52Z"
2550+ lastTransitionTime: "2019-11-21T13:50:09Z"
2551 status: "True"
2552 type: ContainersReady
2553 - lastProbeTime: null
2554- lastTransitionTime: "2019-11-21T11:44:22Z"
2555+ lastTransitionTime: "2019-11-21T13:50:03Z"
2556 status: "True"
2557 type: PodScheduled
2558 containerStatuses:
2559- - containerID: docker://bc8a5e2332491355957fb7e6f35304583d7f3658fbbc1dea32f6fb2713e41579
2560+ - containerID: docker://813693b4087c450aef3c8e27f83ef6d6e13a8ba96e638d36e438bd38fcce512b
2561 image: k8s.gcr.io/kube-proxy:v1.15.6
2562 imageID: docker-pullable://k8s.gcr.io/kube-proxy@sha256:ef245ddefef697c8b42611c237603acf41bfdb2b7ec3e434b7c3592864dcfff8
2563- lastState:
2564- terminated:
2565- containerID: docker://ef8d55f3613d4e46d473b9b8f66ddd94474784a2b618f97079b74f15d235dc03
2566- exitCode: 2
2567- finishedAt: "2019-11-21T12:48:30Z"
2568- reason: Error
2569- startedAt: "2019-11-21T11:44:24Z"
2570+ lastState: {}
2571 name: kube-proxy
2572 ready: true
2573- restartCount: 1
2574+ restartCount: 0
2575 state:
2576 running:
2577- startedAt: "2019-11-21T12:48:51Z"
2578- hostIP: 172.16.0.93
2579+ startedAt: "2019-11-21T13:50:08Z"
2580+ hostIP: 172.16.0.138
2581 phase: Running
2582- podIP: 172.16.0.93
2583+ podIP: 172.16.0.138
2584 qosClass: BestEffort
2585- startTime: "2019-11-21T11:44:22Z"
2586+ startTime: "2019-11-21T13:50:03Z"
2587 - apiVersion: v1
2588 kind: Pod
2589 metadata:
2590 annotations:
2591 kubernetes.io/psp: privileged-psp
2592- creationTimestamp: "2019-11-21T10:34:50Z"
2593+ creationTimestamp: "2019-11-21T13:49:14Z"
2594 generateName: kube-proxy-
2595 labels:
2596 controller-revision-hash: 74cdb8d98b
2597 k8s-app: kube-proxy
2598 pod-template-generation: "2"
2599- name: kube-proxy-8fv9z
2600+ name: kube-proxy-frwmb
2601 namespace: kube-system
2602 ownerReferences:
2603 - apiVersion: apps/v1
2604@@ -4012,10 +3856,10 @@
2605 controller: true
2606 kind: DaemonSet
2607 name: kube-proxy
2608- uid: 450ae8e1-c9ef-415e-a983-ee2180410f3b
2609- resourceVersion: "2554911"
2610- selfLink: /api/v1/namespaces/kube-system/pods/kube-proxy-8fv9z
2611- uid: 109a25d6-b7fd-49b4-98b7-48aa8ae9bdf4
2612+ uid: 594edc17-8735-4841-8faf-b01e84811848
2613+ resourceVersion: "5050424"
2614+ selfLink: /api/v1/namespaces/kube-system/pods/kube-proxy-frwmb
2615+ uid: 9b4c7fd0-cc0d-44af-954b-757d41b58816
2616 spec:
2617 affinity:
2618 nodeAffinity:
2619@@ -4025,7 +3869,7 @@
2620 - key: metadata.name
2621 operator: In
2622 values:
2623- - tools-k8s-control-3
2624+ - toolsbeta-test-k8s-control-3
2625 containers:
2626 - command:
2627 - /usr/local/bin/kube-proxy
2628@@ -4054,12 +3898,12 @@
2629 name: lib-modules
2630 readOnly: true
2631 - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
2632- name: kube-proxy-token-ntj7t
2633+ name: kube-proxy-token-2qn5h
2634 readOnly: true
2635 dnsPolicy: ClusterFirst
2636 enableServiceLinks: true
2637 hostNetwork: true
2638- nodeName: tools-k8s-control-3
2639+ nodeName: toolsbeta-test-k8s-control-3
2640 nodeSelector:
2641 beta.kubernetes.io/os: linux
2642 priority: 2000001000
2643@@ -4108,62 +3952,56 @@
2644 path: /lib/modules
2645 type: ""
2646 name: lib-modules
2647- - name: kube-proxy-token-ntj7t
2648+ - name: kube-proxy-token-2qn5h
2649 secret:
2650 defaultMode: 420
2651- secretName: kube-proxy-token-ntj7t
2652+ secretName: kube-proxy-token-2qn5h
2653 status:
2654 conditions:
2655 - lastProbeTime: null
2656- lastTransitionTime: "2019-11-21T10:34:50Z"
2657+ lastTransitionTime: "2019-11-21T13:49:14Z"
2658 status: "True"
2659 type: Initialized
2660 - lastProbeTime: null
2661- lastTransitionTime: "2019-11-21T12:48:57Z"
2662+ lastTransitionTime: "2019-11-21T13:49:17Z"
2663 status: "True"
2664 type: Ready
2665 - lastProbeTime: null
2666- lastTransitionTime: "2019-11-21T12:48:57Z"
2667+ lastTransitionTime: "2019-11-21T13:49:17Z"
2668 status: "True"
2669 type: ContainersReady
2670 - lastProbeTime: null
2671- lastTransitionTime: "2019-11-21T10:34:50Z"
2672+ lastTransitionTime: "2019-11-21T13:49:14Z"
2673 status: "True"
2674 type: PodScheduled
2675 containerStatuses:
2676- - containerID: docker://1692fb3c38857cca618ec0c13bd233d4e0ab2e48da91b5b35ef1be1232dfa978
2677+ - containerID: docker://f26a3e79ee1f7b454e00865cf2c8a29c01d310230cd5cf274c45268ce3036f62
2678 image: k8s.gcr.io/kube-proxy:v1.15.6
2679 imageID: docker-pullable://k8s.gcr.io/kube-proxy@sha256:ef245ddefef697c8b42611c237603acf41bfdb2b7ec3e434b7c3592864dcfff8
2680- lastState:
2681- terminated:
2682- containerID: docker://905536055fd4f87e00cbfd666f410cb5c61c6dac5da5f532acb24bd91d60d8c2
2683- exitCode: 2
2684- finishedAt: "2019-11-21T12:48:29Z"
2685- reason: Error
2686- startedAt: "2019-11-21T11:44:00Z"
2687+ lastState: {}
2688 name: kube-proxy
2689 ready: true
2690- restartCount: 1
2691+ restartCount: 0
2692 state:
2693 running:
2694- startedAt: "2019-11-21T12:48:57Z"
2695- hostIP: 172.16.0.96
2696+ startedAt: "2019-11-21T13:49:17Z"
2697+ hostIP: 172.16.0.136
2698 phase: Running
2699- podIP: 172.16.0.96
2700+ podIP: 172.16.0.136
2701 qosClass: BestEffort
2702- startTime: "2019-11-21T10:34:50Z"
2703+ startTime: "2019-11-21T13:49:14Z"
2704 - apiVersion: v1
2705 kind: Pod
2706 metadata:
2707 annotations:
2708 kubernetes.io/psp: privileged-psp
2709- creationTimestamp: "2019-11-21T11:44:02Z"
2710+ creationTimestamp: "2019-11-21T13:49:30Z"
2711 generateName: kube-proxy-
2712 labels:
2713 controller-revision-hash: 74cdb8d98b
2714 k8s-app: kube-proxy
2715 pod-template-generation: "2"
2716- name: kube-proxy-qvfvp
2717+ name: kube-proxy-jjdnj
2718 namespace: kube-system
2719 ownerReferences:
2720 - apiVersion: apps/v1
2721@@ -4171,10 +4009,10 @@
2722 controller: true
2723 kind: DaemonSet
2724 name: kube-proxy
2725- uid: 450ae8e1-c9ef-415e-a983-ee2180410f3b
2726- resourceVersion: "2554989"
2727- selfLink: /api/v1/namespaces/kube-system/pods/kube-proxy-qvfvp
2728- uid: f44f96a5-dbd3-44c2-9af9-7482471d2536
2729+ uid: 594edc17-8735-4841-8faf-b01e84811848
2730+ resourceVersion: "5050687"
2731+ selfLink: /api/v1/namespaces/kube-system/pods/kube-proxy-jjdnj
2732+ uid: c5d40e17-c95a-4856-bb27-2de0d014fa00
2733 spec:
2734 affinity:
2735 nodeAffinity:
2736@@ -4184,7 +4022,7 @@
2737 - key: metadata.name
2738 operator: In
2739 values:
2740- - tools-k8s-worker-1
2741+ - toolsbeta-test-k8s-control-2
2742 containers:
2743 - command:
2744 - /usr/local/bin/kube-proxy
2745@@ -4213,12 +4051,12 @@
2746 name: lib-modules
2747 readOnly: true
2748 - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
2749- name: kube-proxy-token-ntj7t
2750+ name: kube-proxy-token-2qn5h
2751 readOnly: true
2752 dnsPolicy: ClusterFirst
2753 enableServiceLinks: true
2754 hostNetwork: true
2755- nodeName: tools-k8s-worker-1
2756+ nodeName: toolsbeta-test-k8s-control-2
2757 nodeSelector:
2758 beta.kubernetes.io/os: linux
2759 priority: 2000001000
2760@@ -4267,62 +4105,56 @@
2761 path: /lib/modules
2762 type: ""
2763 name: lib-modules
2764- - name: kube-proxy-token-ntj7t
2765+ - name: kube-proxy-token-2qn5h
2766 secret:
2767 defaultMode: 420
2768- secretName: kube-proxy-token-ntj7t
2769+ secretName: kube-proxy-token-2qn5h
2770 status:
2771 conditions:
2772 - lastProbeTime: null
2773- lastTransitionTime: "2019-11-21T11:44:02Z"
2774+ lastTransitionTime: "2019-11-21T13:49:31Z"
2775 status: "True"
2776 type: Initialized
2777 - lastProbeTime: null
2778- lastTransitionTime: "2019-11-21T12:49:01Z"
2779+ lastTransitionTime: "2019-11-21T13:49:33Z"
2780 status: "True"
2781 type: Ready
2782 - lastProbeTime: null
2783- lastTransitionTime: "2019-11-21T12:49:01Z"
2784+ lastTransitionTime: "2019-11-21T13:49:33Z"
2785 status: "True"
2786 type: ContainersReady
2787 - lastProbeTime: null
2788- lastTransitionTime: "2019-11-21T11:44:02Z"
2789+ lastTransitionTime: "2019-11-21T13:49:30Z"
2790 status: "True"
2791 type: PodScheduled
2792 containerStatuses:
2793- - containerID: docker://90259366689a0acec3fe6f61d4bbb4d08ec6eff385a53db3779ea50166116054
2794+ - containerID: docker://6872ae905e779164965d3450497b30a4103776de63fa5acd20f1034009bae75d
2795 image: k8s.gcr.io/kube-proxy:v1.15.6
2796 imageID: docker-pullable://k8s.gcr.io/kube-proxy@sha256:ef245ddefef697c8b42611c237603acf41bfdb2b7ec3e434b7c3592864dcfff8
2797- lastState:
2798- terminated:
2799- containerID: docker://4b03828b2581f86e3e3e13a4f705657c48e1372df00e1b2c0003a991e33f7841
2800- exitCode: 2
2801- finishedAt: "2019-11-21T12:48:30Z"
2802- reason: Error
2803- startedAt: "2019-11-21T11:44:04Z"
2804+ lastState: {}
2805 name: kube-proxy
2806 ready: true
2807- restartCount: 1
2808+ restartCount: 0
2809 state:
2810 running:
2811- startedAt: "2019-11-21T12:49:00Z"
2812- hostIP: 172.16.0.78
2813+ startedAt: "2019-11-21T13:49:33Z"
2814+ hostIP: 172.16.0.137
2815 phase: Running
2816- podIP: 172.16.0.78
2817+ podIP: 172.16.0.137
2818 qosClass: BestEffort
2819- startTime: "2019-11-21T11:44:02Z"
2820+ startTime: "2019-11-21T13:49:31Z"
2821 - apiVersion: v1
2822 kind: Pod
2823 metadata:
2824 annotations:
2825 kubernetes.io/psp: privileged-psp
2826- creationTimestamp: "2019-11-21T11:44:37Z"
2827+ creationTimestamp: "2019-11-21T13:49:24Z"
2828 generateName: kube-proxy-
2829 labels:
2830 controller-revision-hash: 74cdb8d98b
2831 k8s-app: kube-proxy
2832 pod-template-generation: "2"
2833- name: kube-proxy-r94mq
2834+ name: kube-proxy-wkkdk
2835 namespace: kube-system
2836 ownerReferences:
2837 - apiVersion: apps/v1
2838@@ -4330,10 +4162,10 @@
2839 controller: true
2840 kind: DaemonSet
2841 name: kube-proxy
2842- uid: 450ae8e1-c9ef-415e-a983-ee2180410f3b
2843- resourceVersion: "2554943"
2844- selfLink: /api/v1/namespaces/kube-system/pods/kube-proxy-r94mq
2845- uid: 60edb8e2-cc7b-4fe9-b818-5cf03af82c58
2846+ uid: 594edc17-8735-4841-8faf-b01e84811848
2847+ resourceVersion: "5050523"
2848+ selfLink: /api/v1/namespaces/kube-system/pods/kube-proxy-wkkdk
2849+ uid: d54f00ae-b583-44d0-a5f1-df00e857f51a
2850 spec:
2851 affinity:
2852 nodeAffinity:
2853@@ -4343,7 +4175,7 @@
2854 - key: metadata.name
2855 operator: In
2856 values:
2857- - tools-k8s-control-1
2858+ - toolsbeta-test-k8s-control-1
2859 containers:
2860 - command:
2861 - /usr/local/bin/kube-proxy
2862@@ -4372,12 +4204,12 @@
2863 name: lib-modules
2864 readOnly: true
2865 - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
2866- name: kube-proxy-token-ntj7t
2867+ name: kube-proxy-token-2qn5h
2868 readOnly: true
2869 dnsPolicy: ClusterFirst
2870 enableServiceLinks: true
2871 hostNetwork: true
2872- nodeName: tools-k8s-control-1
2873+ nodeName: toolsbeta-test-k8s-control-1
2874 nodeSelector:
2875 beta.kubernetes.io/os: linux
2876 priority: 2000001000
2877@@ -4426,68 +4258,62 @@
2878 path: /lib/modules
2879 type: ""
2880 name: lib-modules
2881- - name: kube-proxy-token-ntj7t
2882+ - name: kube-proxy-token-2qn5h
2883 secret:
2884 defaultMode: 420
2885- secretName: kube-proxy-token-ntj7t
2886+ secretName: kube-proxy-token-2qn5h
2887 status:
2888 conditions:
2889 - lastProbeTime: null
2890- lastTransitionTime: "2019-11-21T11:44:37Z"
2891+ lastTransitionTime: "2019-11-21T13:49:24Z"
2892 status: "True"
2893 type: Initialized
2894 - lastProbeTime: null
2895- lastTransitionTime: "2019-11-21T12:48:57Z"
2896+ lastTransitionTime: "2019-11-21T13:49:28Z"
2897 status: "True"
2898 type: Ready
2899 - lastProbeTime: null
2900- lastTransitionTime: "2019-11-21T12:48:57Z"
2901+ lastTransitionTime: "2019-11-21T13:49:28Z"
2902 status: "True"
2903 type: ContainersReady
2904 - lastProbeTime: null
2905- lastTransitionTime: "2019-11-21T11:44:37Z"
2906+ lastTransitionTime: "2019-11-21T13:49:24Z"
2907 status: "True"
2908 type: PodScheduled
2909 containerStatuses:
2910- - containerID: docker://86ed2206fc3a8178549cd12955c63fc80c30a2f073ad34d266e8f5000dfb4677
2911+ - containerID: docker://38292187e11bcb01e0adb049f3bbbc45001e555447d243c11a5b2ebefbedf4f5
2912 image: k8s.gcr.io/kube-proxy:v1.15.6
2913 imageID: docker-pullable://k8s.gcr.io/kube-proxy@sha256:ef245ddefef697c8b42611c237603acf41bfdb2b7ec3e434b7c3592864dcfff8
2914- lastState:
2915- terminated:
2916- containerID: docker://ba15226d2e98b9f220baa928e21b7b08f9a766ebeece5a4a218aa70cbf25b173
2917- exitCode: 2
2918- finishedAt: "2019-11-21T12:48:30Z"
2919- reason: Error
2920- startedAt: "2019-11-21T11:44:40Z"
2921+ lastState: {}
2922 name: kube-proxy
2923 ready: true
2924- restartCount: 1
2925+ restartCount: 0
2926 state:
2927 running:
2928- startedAt: "2019-11-21T12:48:56Z"
2929- hostIP: 172.16.0.104
2930+ startedAt: "2019-11-21T13:49:27Z"
2931+ hostIP: 172.16.0.112
2932 phase: Running
2933- podIP: 172.16.0.104
2934+ podIP: 172.16.0.112
2935 qosClass: BestEffort
2936- startTime: "2019-11-21T11:44:37Z"
2937+ startTime: "2019-11-21T13:49:24Z"
2938 - apiVersion: v1
2939 kind: Pod
2940 metadata:
2941 annotations:
2942 kubernetes.io/config.hash: 4c9fe2c16888e009cff100467a01a432
2943 kubernetes.io/config.mirror: 4c9fe2c16888e009cff100467a01a432
2944- kubernetes.io/config.seen: "2019-11-21T10:34:28.372748315Z"
2945+ kubernetes.io/config.seen: "2019-11-21T14:01:11.015115333Z"
2946 kubernetes.io/config.source: file
2947 kubernetes.io/psp: privileged-psp
2948- creationTimestamp: "2019-11-21T10:34:28Z"
2949+ creationTimestamp: "2019-11-21T14:01:11Z"
2950 labels:
2951 component: kube-scheduler
2952 tier: control-plane
2953- name: kube-scheduler-tools-k8s-control-1
2954+ name: kube-scheduler-toolsbeta-test-k8s-control-1
2955 namespace: kube-system
2956- resourceVersion: "2554900"
2957- selfLink: /api/v1/namespaces/kube-system/pods/kube-scheduler-tools-k8s-control-1
2958- uid: 83bce63e-7f73-4784-a129-ed45f4bace4d
2959+ resourceVersion: "5052454"
2960+ selfLink: /api/v1/namespaces/kube-system/pods/kube-scheduler-toolsbeta-test-k8s-control-1
2961+ uid: 6d1aaf58-8165-4a2b-9261-c386daa984f8
2962 spec:
2963 containers:
2964 - command:
2965@@ -4521,7 +4347,7 @@
2966 dnsPolicy: ClusterFirst
2967 enableServiceLinks: true
2968 hostNetwork: true
2969- nodeName: tools-k8s-control-1
2970+ nodeName: toolsbeta-test-k8s-control-1
2971 priority: 2000000000
2972 priorityClassName: system-cluster-critical
2973 restartPolicy: Always
2974@@ -4539,61 +4365,55 @@
2975 status:
2976 conditions:
2977 - lastProbeTime: null
2978- lastTransitionTime: "2019-11-21T11:43:58Z"
2979+ lastTransitionTime: "2019-11-21T13:11:05Z"
2980 status: "True"
2981 type: Initialized
2982 - lastProbeTime: null
2983- lastTransitionTime: "2019-11-21T11:43:58Z"
2984+ lastTransitionTime: "2019-11-21T14:01:12Z"
2985 status: "True"
2986 type: Ready
2987 - lastProbeTime: null
2988- lastTransitionTime: "2019-11-21T11:43:58Z"
2989+ lastTransitionTime: "2019-11-21T14:01:12Z"
2990 status: "True"
2991 type: ContainersReady
2992 - lastProbeTime: null
2993- lastTransitionTime: "2019-11-21T11:43:58Z"
2994+ lastTransitionTime: "2019-11-21T13:11:05Z"
2995 status: "True"
2996 type: PodScheduled
2997 containerStatuses:
2998- - containerID: docker://4200342510d83095ca4bfe59d7a0c881f6592c22bf79d2313ac3de3ac5fe2cec
2999+ - containerID: docker://bc6b2047612e57f07f65f64e897cfaec2c15c70ede645e8ac6b169e612a9e723
3000 image: k8s.gcr.io/kube-scheduler:v1.15.6
3001 imageID: docker-pullable://k8s.gcr.io/kube-scheduler@sha256:73b26c3ab2b80920196b723d86f7a8f698026bfae9808edcec9f1a8b588f30f1
3002- lastState:
3003- terminated:
3004- containerID: docker://6f69e0120e23076879f32f2b2325eb10641babf39afa027015ae2852a0074ea4
3005- exitCode: 2
3006- finishedAt: "2019-11-21T12:48:30Z"
3007- reason: Error
3008- startedAt: "2019-11-21T10:34:29Z"
3009+ lastState: {}
3010 name: kube-scheduler
3011 ready: true
3012- restartCount: 1
3013+ restartCount: 0
3014 state:
3015 running:
3016- startedAt: "2019-11-21T12:48:45Z"
3017- hostIP: 172.16.0.104
3018+ startedAt: "2019-11-21T14:01:12Z"
3019+ hostIP: 172.16.0.112
3020 phase: Running
3021- podIP: 172.16.0.104
3022+ podIP: 172.16.0.112
3023 qosClass: Burstable
3024- startTime: "2019-11-21T11:43:58Z"
3025+ startTime: "2019-11-21T13:11:05Z"
3026 - apiVersion: v1
3027 kind: Pod
3028 metadata:
3029 annotations:
3030 kubernetes.io/config.hash: 4c9fe2c16888e009cff100467a01a432
3031 kubernetes.io/config.mirror: 4c9fe2c16888e009cff100467a01a432
3032- kubernetes.io/config.seen: "2019-11-21T11:47:43.800826801Z"
3033+ kubernetes.io/config.seen: "2019-11-21T13:48:31.872306239Z"
3034 kubernetes.io/config.source: file
3035 kubernetes.io/psp: privileged-psp
3036- creationTimestamp: "2019-11-21T11:47:43Z"
3037+ creationTimestamp: "2019-11-21T13:48:31Z"
3038 labels:
3039 component: kube-scheduler
3040 tier: control-plane
3041- name: kube-scheduler-tools-k8s-control-2
3042+ name: kube-scheduler-toolsbeta-test-k8s-control-2
3043 namespace: kube-system
3044- resourceVersion: "2554901"
3045- selfLink: /api/v1/namespaces/kube-system/pods/kube-scheduler-tools-k8s-control-2
3046- uid: d5ac63ed-e6b3-423f-a222-9f4844939694
3047+ resourceVersion: "5050306"
3048+ selfLink: /api/v1/namespaces/kube-system/pods/kube-scheduler-toolsbeta-test-k8s-control-2
3049+ uid: f3a9ae1d-0eff-4627-b9b7-69003d4e8ad4
3050 spec:
3051 containers:
3052 - command:
3053@@ -4627,7 +4447,7 @@
3054 dnsPolicy: ClusterFirst
3055 enableServiceLinks: true
3056 hostNetwork: true
3057- nodeName: tools-k8s-control-2
3058+ nodeName: toolsbeta-test-k8s-control-2
3059 priority: 2000000000
3060 priorityClassName: system-cluster-critical
3061 restartPolicy: Always
3062@@ -4645,61 +4465,55 @@
3063 status:
3064 conditions:
3065 - lastProbeTime: null
3066- lastTransitionTime: "2019-11-21T12:48:46Z"
3067+ lastTransitionTime: "2019-10-23T10:06:16Z"
3068 status: "True"
3069 type: Initialized
3070 - lastProbeTime: null
3071- lastTransitionTime: "2019-11-21T12:48:48Z"
3072+ lastTransitionTime: "2019-11-21T13:48:33Z"
3073 status: "True"
3074 type: Ready
3075 - lastProbeTime: null
3076- lastTransitionTime: "2019-11-21T12:48:48Z"
3077+ lastTransitionTime: "2019-11-21T13:48:33Z"
3078 status: "True"
3079 type: ContainersReady
3080 - lastProbeTime: null
3081- lastTransitionTime: "2019-11-21T12:48:46Z"
3082+ lastTransitionTime: "2019-10-23T10:06:16Z"
3083 status: "True"
3084 type: PodScheduled
3085 containerStatuses:
3086- - containerID: docker://b0e7db18c93e0428786c9390bdac2566623b7c9b1c2f3f7d03275101e4276900
3087+ - containerID: docker://b7c8d5ec86ff3babb2ef46d515357434f02f5eeb04ecf43d9e79866c6b287745
3088 image: k8s.gcr.io/kube-scheduler:v1.15.6
3089 imageID: docker-pullable://k8s.gcr.io/kube-scheduler@sha256:73b26c3ab2b80920196b723d86f7a8f698026bfae9808edcec9f1a8b588f30f1
3090- lastState:
3091- terminated:
3092- containerID: docker://0a3436abfee871145d3ec6be5619a70f214129aa0192f652bca8bd59024cd776
3093- exitCode: 2
3094- finishedAt: "2019-11-21T12:48:30Z"
3095- reason: Error
3096- startedAt: "2019-11-21T11:47:44Z"
3097+ lastState: {}
3098 name: kube-scheduler
3099 ready: true
3100- restartCount: 1
3101+ restartCount: 0
3102 state:
3103 running:
3104- startedAt: "2019-11-21T12:48:47Z"
3105- hostIP: 172.16.0.93
3106+ startedAt: "2019-11-21T13:48:32Z"
3107+ hostIP: 172.16.0.137
3108 phase: Running
3109- podIP: 172.16.0.93
3110+ podIP: 172.16.0.137
3111 qosClass: Burstable
3112- startTime: "2019-11-21T12:48:46Z"
3113+ startTime: "2019-10-23T10:06:16Z"
3114 - apiVersion: v1
3115 kind: Pod
3116 metadata:
3117 annotations:
3118 kubernetes.io/config.hash: 4c9fe2c16888e009cff100467a01a432
3119 kubernetes.io/config.mirror: 4c9fe2c16888e009cff100467a01a432
3120- kubernetes.io/config.seen: "2019-11-21T11:47:43.794049403Z"
3121+ kubernetes.io/config.seen: "2019-11-21T13:49:45.415422546Z"
3122 kubernetes.io/config.source: file
3123 kubernetes.io/psp: privileged-psp
3124- creationTimestamp: "2019-11-21T11:47:43Z"
3125+ creationTimestamp: "2019-11-21T13:49:45Z"
3126 labels:
3127 component: kube-scheduler
3128 tier: control-plane
3129- name: kube-scheduler-tools-k8s-control-3
3130+ name: kube-scheduler-toolsbeta-test-k8s-control-3
3131 namespace: kube-system
3132- resourceVersion: "2554881"
3133- selfLink: /api/v1/namespaces/kube-system/pods/kube-scheduler-tools-k8s-control-3
3134- uid: 07affe8a-2cbb-4541-847b-d93029bdf431
3135+ resourceVersion: "5050784"
3136+ selfLink: /api/v1/namespaces/kube-system/pods/kube-scheduler-toolsbeta-test-k8s-control-3
3137+ uid: ab9d4782-4e75-4073-9dad-80e7498f5556
3138 spec:
3139 containers:
3140 - command:
3141@@ -4733,7 +4547,7 @@
3142 dnsPolicy: ClusterFirst
3143 enableServiceLinks: true
3144 hostNetwork: true
3145- nodeName: tools-k8s-control-3
3146+ nodeName: toolsbeta-test-k8s-control-3
3147 priority: 2000000000
3148 priorityClassName: system-cluster-critical
3149 restartPolicy: Always
3150@@ -4751,55 +4565,175 @@
3151 status:
3152 conditions:
3153 - lastProbeTime: null
3154- lastTransitionTime: "2019-11-21T11:43:57Z"
3155+ lastTransitionTime: "2019-10-23T10:07:28Z"
3156 status: "True"
3157 type: Initialized
3158 - lastProbeTime: null
3159- lastTransitionTime: "2019-11-21T11:47:45Z"
3160+ lastTransitionTime: "2019-11-21T13:49:46Z"
3161 status: "True"
3162 type: Ready
3163 - lastProbeTime: null
3164- lastTransitionTime: "2019-11-21T11:47:45Z"
3165+ lastTransitionTime: "2019-11-21T13:49:46Z"
3166 status: "True"
3167 type: ContainersReady
3168 - lastProbeTime: null
3169- lastTransitionTime: "2019-11-21T11:43:57Z"
3170+ lastTransitionTime: "2019-10-23T10:07:28Z"
3171 status: "True"
3172 type: PodScheduled
3173 containerStatuses:
3174- - containerID: docker://35fd2475dc68a8a7eade174cb03ddee6ffade96aa108a253b9b7d03e6d5ad5a6
3175+ - containerID: docker://60a3f92d4476a694eba909704eb95d6a710b5ed0bd660e1c27cfeab718e2050d
3176 image: k8s.gcr.io/kube-scheduler:v1.15.6
3177 imageID: docker-pullable://k8s.gcr.io/kube-scheduler@sha256:73b26c3ab2b80920196b723d86f7a8f698026bfae9808edcec9f1a8b588f30f1
3178+ lastState: {}
3179+ name: kube-scheduler
3180+ ready: true
3181+ restartCount: 0
3182+ state:
3183+ running:
3184+ startedAt: "2019-11-21T13:49:46Z"
3185+ hostIP: 172.16.0.136
3186+ phase: Running
3187+ podIP: 172.16.0.136
3188+ qosClass: Burstable
3189+ startTime: "2019-10-23T10:07:28Z"
3190+- apiVersion: v1
3191+ kind: Pod
3192+ metadata:
3193+ annotations:
3194+ cni.projectcalico.org/podIP: 192.168.23.205/32
3195+ kubernetes.io/psp: privileged-psp
3196+ creationTimestamp: "2019-11-06T16:49:37Z"
3197+ generateName: maintain-kubeusers-7b6bb8f79d-
3198+ labels:
3199+ app: maintain-kubeusers
3200+ pod-template-hash: 7b6bb8f79d
3201+ name: maintain-kubeusers-7b6bb8f79d-xc9qb
3202+ namespace: maintain-kubeusers
3203+ ownerReferences:
3204+ - apiVersion: apps/v1
3205+ blockOwnerDeletion: true
3206+ controller: true
3207+ kind: ReplicaSet
3208+ name: maintain-kubeusers-7b6bb8f79d
3209+ uid: 98b0991e-6af8-4086-be56-dda95e4f008d
3210+ resourceVersion: "5036100"
3211+ selfLink: /api/v1/namespaces/maintain-kubeusers/pods/maintain-kubeusers-7b6bb8f79d-xc9qb
3212+ uid: 36775c8c-844b-43b4-9564-685c55d9276c
3213+ spec:
3214+ containers:
3215+ - args:
3216+ - /app/maintain_kubeusers.py
3217+ - --project=toolsbeta
3218+ command:
3219+ - /app/venv/bin/python
3220+ image: docker-registry.tools.wmflabs.org/maintain-kubeusers:beta
3221+ imagePullPolicy: Always
3222+ livenessProbe:
3223+ exec:
3224+ command:
3225+ - find
3226+ - /tmp/run.check
3227+ - -mmin
3228+ - "+5"
3229+ - -exec
3230+ - rm
3231+ - /tmp/run.check
3232+ - ;
3233+ failureThreshold: 3
3234+ initialDelaySeconds: 5
3235+ periodSeconds: 5
3236+ successThreshold: 1
3237+ timeoutSeconds: 1
3238+ name: maintain-kubeusers
3239+ resources: {}
3240+ terminationMessagePath: /dev/termination-log
3241+ terminationMessagePolicy: File
3242+ volumeMounts:
3243+ - mountPath: /data/project
3244+ name: my-host-nfs
3245+ - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
3246+ name: user-maintainer-token-xqk7q
3247+ readOnly: true
3248+ dnsPolicy: ClusterFirst
3249+ enableServiceLinks: true
3250+ nodeName: toolsbeta-test-k8s-worker-2
3251+ priority: 0
3252+ restartPolicy: Always
3253+ schedulerName: default-scheduler
3254+ securityContext: {}
3255+ serviceAccount: user-maintainer
3256+ serviceAccountName: user-maintainer
3257+ terminationGracePeriodSeconds: 30
3258+ tolerations:
3259+ - effect: NoExecute
3260+ key: node.kubernetes.io/not-ready
3261+ operator: Exists
3262+ tolerationSeconds: 300
3263+ - effect: NoExecute
3264+ key: node.kubernetes.io/unreachable
3265+ operator: Exists
3266+ tolerationSeconds: 300
3267+ volumes:
3268+ - hostPath:
3269+ path: /data/project
3270+ type: Directory
3271+ name: my-host-nfs
3272+ - name: user-maintainer-token-xqk7q
3273+ secret:
3274+ defaultMode: 420
3275+ secretName: user-maintainer-token-xqk7q
3276+ status:
3277+ conditions:
3278+ - lastProbeTime: null
3279+ lastTransitionTime: "2019-11-06T16:49:37Z"
3280+ status: "True"
3281+ type: Initialized
3282+ - lastProbeTime: null
3283+ lastTransitionTime: "2019-11-16T11:12:51Z"
3284+ status: "True"
3285+ type: Ready
3286+ - lastProbeTime: null
3287+ lastTransitionTime: "2019-11-16T11:12:51Z"
3288+ status: "True"
3289+ type: ContainersReady
3290+ - lastProbeTime: null
3291+ lastTransitionTime: "2019-11-06T16:49:37Z"
3292+ status: "True"
3293+ type: PodScheduled
3294+ containerStatuses:
3295+ - containerID: docker://9592a0312975f25e54f3c391fa054fb82d36649a337e0a4d1a2993b118cf5c50
3296+ image: docker-registry.tools.wmflabs.org/maintain-kubeusers:beta
3297+ imageID: docker-pullable://docker-registry.tools.wmflabs.org/maintain-kubeusers@sha256:0507770d60cd0b931beaf2ab855dbcfaaee7c8a807dbfef82704ce940f18f742
3298 lastState:
3299 terminated:
3300- containerID: docker://595c3cb40d4b8dc423af9b4597b3e441aac20be02735b7d9520ec43429eb901d
3301- exitCode: 2
3302- finishedAt: "2019-11-21T12:48:29Z"
3303+ containerID: docker://bde0946bfcf9e4dfeef2eeb523bbf5449aec33a2e0a533f5271570a1255f740f
3304+ exitCode: 137
3305+ finishedAt: "2019-11-21T11:55:30Z"
3306 reason: Error
3307- startedAt: "2019-11-21T11:47:44Z"
3308- name: kube-scheduler
3309+ startedAt: "2019-11-16T11:12:51Z"
3310+ name: maintain-kubeusers
3311 ready: true
3312- restartCount: 1
3313+ restartCount: 4
3314 state:
3315 running:
3316- startedAt: "2019-11-21T12:48:42Z"
3317- hostIP: 172.16.0.96
3318+ startedAt: "2019-11-21T11:55:31Z"
3319+ hostIP: 172.16.0.151
3320 phase: Running
3321- podIP: 172.16.0.96
3322- qosClass: Burstable
3323- startTime: "2019-11-21T11:43:57Z"
3324+ podIP: 192.168.23.205
3325+ qosClass: BestEffort
3326+ startTime: "2019-11-06T16:49:37Z"
3327 - apiVersion: v1
3328 kind: Pod
3329 metadata:
3330 annotations:
3331- cni.projectcalico.org/podIP: 192.168.34.136/32
3332+ cni.projectcalico.org/podIP: 192.168.23.201/32
3333 kubernetes.io/psp: privileged-psp
3334- creationTimestamp: "2019-11-07T13:26:04Z"
3335+ creationTimestamp: "2019-10-25T23:39:02Z"
3336 generateName: registry-admission-6f5f6589c5-
3337 labels:
3338 name: registry-admission
3339 pod-template-hash: 6f5f6589c5
3340- name: registry-admission-6f5f6589c5-clj2w
3341+ name: registry-admission-6f5f6589c5-n7r8k
3342 namespace: registry-admission
3343 ownerReferences:
3344 - apiVersion: apps/v1
3345@@ -4807,10 +4741,10 @@
3346 controller: true
3347 kind: ReplicaSet
3348 name: registry-admission-6f5f6589c5
3349- uid: 0f1a22c2-fb9f-498d-95f5-22942abe8b48
3350- resourceVersion: "2555300"
3351- selfLink: /api/v1/namespaces/registry-admission/pods/registry-admission-6f5f6589c5-clj2w
3352- uid: 25aead46-6634-44ec-bb57-eaa9ae0df835
3353+ uid: 04ac00bb-ca33-4163-bc16-310287f38984
3354+ resourceVersion: "429033"
3355+ selfLink: /api/v1/namespaces/registry-admission/pods/registry-admission-6f5f6589c5-n7r8k
3356+ uid: bc2bc159-fd78-4a41-906f-ff2160ee3575
3357 spec:
3358 containers:
3359 - image: docker-registry.tools.wmflabs.org/registry-admission:latest
3360@@ -4832,11 +4766,11 @@
3361 name: webhook-certs
3362 readOnly: true
3363 - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
3364- name: default-token-z7njg
3365+ name: default-token-csfnp
3366 readOnly: true
3367 dnsPolicy: ClusterFirst
3368 enableServiceLinks: true
3369- nodeName: tools-k8s-worker-2
3370+ nodeName: toolsbeta-test-k8s-worker-2
3371 priority: 0
3372 restartPolicy: Always
3373 schedulerName: default-scheduler
3374@@ -4858,62 +4792,56 @@
3375 secret:
3376 defaultMode: 420
3377 secretName: registry-admission-certs
3378- - name: default-token-z7njg
3379+ - name: default-token-csfnp
3380 secret:
3381 defaultMode: 420
3382- secretName: default-token-z7njg
3383+ secretName: default-token-csfnp
3384 status:
3385 conditions:
3386 - lastProbeTime: null
3387- lastTransitionTime: "2019-11-07T13:26:05Z"
3388+ lastTransitionTime: "2019-10-25T23:39:02Z"
3389 status: "True"
3390 type: Initialized
3391 - lastProbeTime: null
3392- lastTransitionTime: "2019-11-21T12:49:42Z"
3393+ lastTransitionTime: "2019-10-25T23:39:06Z"
3394 status: "True"
3395 type: Ready
3396 - lastProbeTime: null
3397- lastTransitionTime: "2019-11-21T12:49:42Z"
3398+ lastTransitionTime: "2019-10-25T23:39:06Z"
3399 status: "True"
3400 type: ContainersReady
3401 - lastProbeTime: null
3402- lastTransitionTime: "2019-11-07T13:26:05Z"
3403+ lastTransitionTime: "2019-10-25T23:39:02Z"
3404 status: "True"
3405 type: PodScheduled
3406 containerStatuses:
3407- - containerID: docker://e3723e5388521099cabe6ac1af2b0ef6ddafccef6ebc5806f1062de879e900a7
3408+ - containerID: docker://d6db5aefba0916a9221bbf2e8b53c88dd092296512a1efdc197e86d5a5c78760
3409 image: docker-registry.tools.wmflabs.org/registry-admission:latest
3410 imageID: docker-pullable://docker-registry.tools.wmflabs.org/registry-admission@sha256:dbabc4475d6a6c4c61938cb91622fd6b1980e8d8bb17cf5b6beb49f2112547d8
3411- lastState:
3412- terminated:
3413- containerID: docker://e84205635a90ded6eabcac4aad4d0f8800ca22b86d3c456cc6521331f2c002d8
3414- exitCode: 2
3415- finishedAt: "2019-11-21T12:48:30Z"
3416- reason: Error
3417- startedAt: "2019-11-07T13:26:07Z"
3418+ lastState: {}
3419 name: webhook
3420 ready: true
3421- restartCount: 1
3422+ restartCount: 0
3423 state:
3424 running:
3425- startedAt: "2019-11-21T12:49:42Z"
3426- hostIP: 172.16.0.103
3427+ startedAt: "2019-10-25T23:39:05Z"
3428+ hostIP: 172.16.0.151
3429 phase: Running
3430- podIP: 192.168.34.136
3431+ podIP: 192.168.23.201
3432 qosClass: Guaranteed
3433- startTime: "2019-11-07T13:26:05Z"
3434+ startTime: "2019-10-25T23:39:02Z"
3435 - apiVersion: v1
3436 kind: Pod
3437 metadata:
3438 annotations:
3439- cni.projectcalico.org/podIP: 192.168.50.11/32
3440+ cni.projectcalico.org/podIP: 192.168.44.206/32
3441 kubernetes.io/psp: privileged-psp
3442- creationTimestamp: "2019-11-07T13:26:05Z"
3443+ creationTimestamp: "2019-10-25T23:39:02Z"
3444 generateName: registry-admission-6f5f6589c5-
3445 labels:
3446 name: registry-admission
3447 pod-template-hash: 6f5f6589c5
3448- name: registry-admission-6f5f6589c5-tzzk5
3449+ name: registry-admission-6f5f6589c5-x6xwp
3450 namespace: registry-admission
3451 ownerReferences:
3452 - apiVersion: apps/v1
3453@@ -4921,10 +4849,10 @@
3454 controller: true
3455 kind: ReplicaSet
3456 name: registry-admission-6f5f6589c5
3457- uid: 0f1a22c2-fb9f-498d-95f5-22942abe8b48
3458- resourceVersion: "2555328"
3459- selfLink: /api/v1/namespaces/registry-admission/pods/registry-admission-6f5f6589c5-tzzk5
3460- uid: 76a204f4-cd22-44bf-b6de-9373472136c3
3461+ uid: 04ac00bb-ca33-4163-bc16-310287f38984
3462+ resourceVersion: "429041"
3463+ selfLink: /api/v1/namespaces/registry-admission/pods/registry-admission-6f5f6589c5-x6xwp
3464+ uid: 4223d6cc-77ef-401f-bc9d-241d6328c7f6
3465 spec:
3466 containers:
3467 - image: docker-registry.tools.wmflabs.org/registry-admission:latest
3468@@ -4946,11 +4874,11 @@
3469 name: webhook-certs
3470 readOnly: true
3471 - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
3472- name: default-token-z7njg
3473+ name: default-token-csfnp
3474 readOnly: true
3475 dnsPolicy: ClusterFirst
3476 enableServiceLinks: true
3477- nodeName: tools-k8s-worker-1
3478+ nodeName: toolsbeta-test-k8s-worker-1
3479 priority: 0
3480 restartPolicy: Always
3481 schedulerName: default-scheduler
3482@@ -4972,62 +4900,236 @@
3483 secret:
3484 defaultMode: 420
3485 secretName: registry-admission-certs
3486- - name: default-token-z7njg
3487+ - name: default-token-csfnp
3488 secret:
3489 defaultMode: 420
3490- secretName: default-token-z7njg
3491+ secretName: default-token-csfnp
3492 status:
3493 conditions:
3494 - lastProbeTime: null
3495- lastTransitionTime: "2019-11-07T13:26:06Z"
3496+ lastTransitionTime: "2019-10-25T23:39:03Z"
3497 status: "True"
3498 type: Initialized
3499 - lastProbeTime: null
3500- lastTransitionTime: "2019-11-21T12:49:44Z"
3501+ lastTransitionTime: "2019-10-25T23:39:07Z"
3502 status: "True"
3503 type: Ready
3504 - lastProbeTime: null
3505- lastTransitionTime: "2019-11-21T12:49:44Z"
3506+ lastTransitionTime: "2019-10-25T23:39:07Z"
3507 status: "True"
3508 type: ContainersReady
3509 - lastProbeTime: null
3510- lastTransitionTime: "2019-11-07T13:26:06Z"
3511+ lastTransitionTime: "2019-10-25T23:39:03Z"
3512 status: "True"
3513 type: PodScheduled
3514 containerStatuses:
3515- - containerID: docker://f8b3f15c44b27775e78e1d2e38deb8e0fd44b6a10207a92bc65a0b9875d4a5e2
3516+ - containerID: docker://9bb1f35378866646e2337f28d637c64b9771f101479a73258a04bb6345fff21e
3517 image: docker-registry.tools.wmflabs.org/registry-admission:latest
3518 imageID: docker-pullable://docker-registry.tools.wmflabs.org/registry-admission@sha256:dbabc4475d6a6c4c61938cb91622fd6b1980e8d8bb17cf5b6beb49f2112547d8
3519- lastState:
3520- terminated:
3521- containerID: docker://9e5ac7d669ea56d94f54fa2c144d34f7a9c2f738e574d7caa00c1745b54c6ce9
3522- exitCode: 2
3523- finishedAt: "2019-11-21T12:48:30Z"
3524- reason: Error
3525- startedAt: "2019-11-07T13:26:08Z"
3526+ lastState: {}
3527 name: webhook
3528 ready: true
3529- restartCount: 1
3530+ restartCount: 0
3531 state:
3532 running:
3533- startedAt: "2019-11-21T12:49:43Z"
3534- hostIP: 172.16.0.78
3535+ startedAt: "2019-10-25T23:39:06Z"
3536+ hostIP: 172.16.0.138
3537 phase: Running
3538- podIP: 192.168.50.11
3539+ podIP: 192.168.44.206
3540 qosClass: Guaranteed
3541- startTime: "2019-11-07T13:26:06Z"
3542+ startTime: "2019-10-25T23:39:03Z"
3543+- apiVersion: v1
3544+ kind: Pod
3545+ metadata:
3546+ annotations:
3547+ cni.projectcalico.org/podIP: 192.168.44.217/32
3548+ kubernetes.io/limit-ranger: 'LimitRanger plugin set: cpu, memory request for
3549+ container webservice; cpu, memory limit for container webservice'
3550+ kubernetes.io/psp: tool-fourohfour-psp
3551+ podpreset.admission.kubernetes.io/podpreset-mount-toolforge-vols: "3257537"
3552+ seccomp.security.alpha.kubernetes.io/pod: runtime/default
3553+ creationTimestamp: "2019-11-11T06:44:13Z"
3554+ generateName: fourohfour-66bf569f4f-
3555+ labels:
3556+ name: fourohfour
3557+ pod-template-hash: 66bf569f4f
3558+ toolforge: tool
3559+ tools.wmflabs.org/webservice: "true"
3560+ tools.wmflabs.org/webservice-version: "1"
3561+ name: fourohfour-66bf569f4f-zkzhg
3562+ namespace: tool-fourohfour
3563+ ownerReferences:
3564+ - apiVersion: apps/v1
3565+ blockOwnerDeletion: true
3566+ controller: true
3567+ kind: ReplicaSet
3568+ name: fourohfour-66bf569f4f
3569+ uid: c36f4507-8e47-4f49-a5a5-112c36af769a
3570+ resourceVersion: "3260770"
3571+ selfLink: /api/v1/namespaces/tool-fourohfour/pods/fourohfour-66bf569f4f-zkzhg
3572+ uid: d790d109-5d15-48c9-b4de-a40d878d6d78
3573+ spec:
3574+ containers:
3575+ - command:
3576+ - /usr/bin/webservice-runner
3577+ - --type
3578+ - uwsgi-python
3579+ - --port
3580+ - "8000"
3581+ env:
3582+ - name: HOME
3583+ value: /data/project/fourohfour
3584+ image: docker-registry.tools.wmflabs.org/toolforge-python35-sssd-web:latest
3585+ imagePullPolicy: Always
3586+ name: webservice
3587+ ports:
3588+ - containerPort: 8000
3589+ name: http
3590+ protocol: TCP
3591+ resources:
3592+ limits:
3593+ cpu: 500m
3594+ memory: 512Mi
3595+ requests:
3596+ cpu: 250m
3597+ memory: 256Mi
3598+ securityContext:
3599+ allowPrivilegeEscalation: false
3600+ runAsGroup: 54201
3601+ runAsUser: 54201
3602+ terminationMessagePath: /dev/termination-log
3603+ terminationMessagePolicy: File
3604+ volumeMounts:
3605+ - mountPath: /public/dumps
3606+ name: dumps
3607+ readOnly: true
3608+ - mountPath: /data/project
3609+ name: home
3610+ - mountPath: /etc/wmcs-project
3611+ name: wmcs-project
3612+ readOnly: true
3613+ - mountPath: /data/scratch
3614+ name: scratch
3615+ - mountPath: /etc/ldap.conf
3616+ name: etcldap-conf
3617+ readOnly: true
3618+ - mountPath: /etc/ldap.yaml
3619+ name: etcldap-yaml
3620+ readOnly: true
3621+ - mountPath: /etc/novaobserver.yaml
3622+ name: etcnovaobserver-yaml
3623+ readOnly: true
3624+ - mountPath: /var/lib/sss/pipes
3625+ name: sssd-pipes
3626+ - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
3627+ name: default-token-67457
3628+ readOnly: true
3629+ workingDir: /data/project/fourohfour/
3630+ dnsPolicy: ClusterFirst
3631+ enableServiceLinks: true
3632+ nodeName: toolsbeta-test-k8s-worker-1
3633+ priority: 0
3634+ restartPolicy: Always
3635+ schedulerName: default-scheduler
3636+ securityContext:
3637+ fsGroup: 54201
3638+ supplementalGroups:
3639+ - 1
3640+ serviceAccount: default
3641+ serviceAccountName: default
3642+ terminationGracePeriodSeconds: 30
3643+ tolerations:
3644+ - effect: NoExecute
3645+ key: node.kubernetes.io/not-ready
3646+ operator: Exists
3647+ tolerationSeconds: 300
3648+ - effect: NoExecute
3649+ key: node.kubernetes.io/unreachable
3650+ operator: Exists
3651+ tolerationSeconds: 300
3652+ volumes:
3653+ - hostPath:
3654+ path: /public/dumps
3655+ type: Directory
3656+ name: dumps
3657+ - hostPath:
3658+ path: /data/project
3659+ type: Directory
3660+ name: home
3661+ - hostPath:
3662+ path: /etc/wmcs-project
3663+ type: File
3664+ name: wmcs-project
3665+ - hostPath:
3666+ path: /data/scratch
3667+ type: Directory
3668+ name: scratch
3669+ - hostPath:
3670+ path: /etc/ldap.conf
3671+ type: File
3672+ name: etcldap-conf
3673+ - hostPath:
3674+ path: /etc/ldap.yaml
3675+ type: File
3676+ name: etcldap-yaml
3677+ - hostPath:
3678+ path: /etc/novaobserver.yaml
3679+ type: File
3680+ name: etcnovaobserver-yaml
3681+ - hostPath:
3682+ path: /var/lib/sss/pipes
3683+ type: Directory
3684+ name: sssd-pipes
3685+ - name: default-token-67457
3686+ secret:
3687+ defaultMode: 420
3688+ secretName: default-token-67457
3689+ status:
3690+ conditions:
3691+ - lastProbeTime: null
3692+ lastTransitionTime: "2019-11-11T06:44:13Z"
3693+ status: "True"
3694+ type: Initialized
3695+ - lastProbeTime: null
3696+ lastTransitionTime: "2019-11-11T06:44:17Z"
3697+ status: "True"
3698+ type: Ready
3699+ - lastProbeTime: null
3700+ lastTransitionTime: "2019-11-11T06:44:17Z"
3701+ status: "True"
3702+ type: ContainersReady
3703+ - lastProbeTime: null
3704+ lastTransitionTime: "2019-11-11T06:44:13Z"
3705+ status: "True"
3706+ type: PodScheduled
3707+ containerStatuses:
3708+ - containerID: docker://a5c096981fb11df5d874dda3c60a07a5087c61468ceb905e40e2e833714ccbc5
3709+ image: docker-registry.tools.wmflabs.org/toolforge-python35-sssd-web:latest
3710+ imageID: docker-pullable://docker-registry.tools.wmflabs.org/toolforge-python35-sssd-web@sha256:5de157b4ec83c060eafd985cdb715242d0beed8f15a30d9fe29aeae617cca3c1
3711+ lastState: {}
3712+ name: webservice
3713+ ready: true
3714+ restartCount: 0
3715+ state:
3716+ running:
3717+ startedAt: "2019-11-11T06:44:16Z"
3718+ hostIP: 172.16.0.138
3719+ phase: Running
3720+ podIP: 192.168.44.217
3721+ qosClass: Burstable
3722+ startTime: "2019-11-11T06:44:13Z"
3723 - apiVersion: v1
3724 kind: Service
3725 metadata:
3726- creationTimestamp: "2019-11-06T14:14:16Z"
3727+ creationTimestamp: "2019-10-23T09:55:18Z"
3728 labels:
3729 component: apiserver
3730 provider: kubernetes
3731 name: kubernetes
3732 namespace: default
3733- resourceVersion: "151"
3734+ resourceVersion: "6199"
3735 selfLink: /api/v1/namespaces/default/services/kubernetes
3736- uid: 7a8a6fd0-37f8-4396-86cf-faf81e4a95aa
3737+ uid: b80c4bd1-847c-412f-900f-ced7a4f64be4
3738 spec:
3739 clusterIP: 10.96.0.1
3740 ports:
3741@@ -5042,16 +5144,19 @@
3742 - apiVersion: v1
3743 kind: Service
3744 metadata:
3745- creationTimestamp: "2019-11-07T13:24:34Z"
3746+ annotations:
3747+ kubectl.kubernetes.io/last-applied-configuration: |
3748+ {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"name":"ingress-admission"},"name":"ingress-admission","namespace":"ingress-admission"},"spec":{"ports":[{"name":"webhook","port":443,"targetPort":8080}],"selector":{"name":"ingress-admission"}}}
3749+ creationTimestamp: "2019-10-25T23:37:48Z"
3750 labels:
3751 name: ingress-admission
3752 name: ingress-admission
3753 namespace: ingress-admission
3754- resourceVersion: "135427"
3755+ resourceVersion: "428787"
3756 selfLink: /api/v1/namespaces/ingress-admission/services/ingress-admission
3757- uid: dde7968d-e10b-48a4-a509-2cd992afbd7c
3758+ uid: d7cc16bd-8e92-4477-8db7-b0193eaf9eec
3759 spec:
3760- clusterIP: 10.97.9.199
3761+ clusterIP: 10.101.150.247
3762 ports:
3763 - name: webhook
3764 port: 443
3765@@ -5069,17 +5174,17 @@
3766 annotations:
3767 kubectl.kubernetes.io/last-applied-configuration: |
3768 {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app.kubernetes.io/name":"ingress-nginx","app.kubernetes.io/part-of":"ingress-nginx"},"name":"ingress-nginx","namespace":"ingress-nginx"},"spec":{"ports":[{"name":"http","nodePort":30000,"port":8080,"protocol":"TCP","targetPort":8080}],"selector":{"app.kubernetes.io/name":"ingress-nginx","app.kubernetes.io/part-of":"ingress-nginx"},"type":"NodePort"}}
3769- creationTimestamp: "2019-11-07T13:11:20Z"
3770+ creationTimestamp: "2019-10-25T11:59:09Z"
3771 labels:
3772 app.kubernetes.io/name: ingress-nginx
3773 app.kubernetes.io/part-of: ingress-nginx
3774 name: ingress-nginx
3775 namespace: ingress-nginx
3776- resourceVersion: "133802"
3777+ resourceVersion: "345454"
3778 selfLink: /api/v1/namespaces/ingress-nginx/services/ingress-nginx
3779- uid: 54962981-4455-4bd3-9692-a8f59643ed23
3780+ uid: b4c22bee-aade-4abc-a55f-446bd5d1f488
3781 spec:
3782- clusterIP: 10.111.217.96
3783+ clusterIP: 10.99.26.164
3784 externalTrafficPolicy: Cluster
3785 ports:
3786 - name: http
3787@@ -5100,14 +5205,14 @@
3788 annotations:
3789 kubectl.kubernetes.io/last-applied-configuration: |
3790 {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"nginx-ingress-metrics","namespace":"ingress-nginx"},"spec":{"ports":[{"port":10254,"protocol":"TCP","targetPort":10254}],"selector":{"app.kubernetes.io/name":"ingress-nginx","app.kubernetes.io/part-of":"ingress-nginx"}}}
3791- creationTimestamp: "2019-11-19T12:47:06Z"
3792+ creationTimestamp: "2019-11-15T12:52:37Z"
3793 name: nginx-ingress-metrics
3794 namespace: ingress-nginx
3795- resourceVersion: "2213463"
3796+ resourceVersion: "4000557"
3797 selfLink: /api/v1/namespaces/ingress-nginx/services/nginx-ingress-metrics
3798- uid: 90ce3155-c9d2-4015-a2f9-3283b7ebe71d
3799+ uid: 0ddcb8dd-99c7-4024-b388-782256a11b6d
3800 spec:
3801- clusterIP: 10.106.68.73
3802+ clusterIP: 10.109.241.53
3803 ports:
3804 - port: 10254
3805 protocol: TCP
3806@@ -5125,16 +5230,16 @@
3807 annotations:
3808 prometheus.io/port: "9153"
3809 prometheus.io/scrape: "true"
3810- creationTimestamp: "2019-11-06T14:14:18Z"
3811+ creationTimestamp: "2019-10-23T09:55:20Z"
3812 labels:
3813 k8s-app: kube-dns
3814 kubernetes.io/cluster-service: "true"
3815 kubernetes.io/name: KubeDNS
3816 name: kube-dns
3817 namespace: kube-system
3818- resourceVersion: "219"
3819+ resourceVersion: "6268"
3820 selfLink: /api/v1/namespaces/kube-system/services/kube-dns
3821- uid: d7dbe458-0993-4c53-9c10-66f01d9f9d22
3822+ uid: 6aa48bfe-e99b-4ad8-92e5-36cff6d03581
3823 spec:
3824 clusterIP: 10.96.0.10
3825 ports:
3826@@ -5159,16 +5264,19 @@
3827 - apiVersion: v1
3828 kind: Service
3829 metadata:
3830- creationTimestamp: "2019-11-07T13:26:04Z"
3831+ annotations:
3832+ kubectl.kubernetes.io/last-applied-configuration: |
3833+ {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"name":"registry-admission"},"name":"registry-admission","namespace":"registry-admission"},"spec":{"ports":[{"name":"webhook","port":443,"targetPort":8080}],"selector":{"name":"registry-admission"}}}
3834+ creationTimestamp: "2019-10-25T23:39:02Z"
3835 labels:
3836 name: registry-admission
3837 name: registry-admission
3838 namespace: registry-admission
3839- resourceVersion: "135659"
3840+ resourceVersion: "428987"
3841 selfLink: /api/v1/namespaces/registry-admission/services/registry-admission
3842- uid: b1529294-de34-4fa4-8ecd-a7627685648a
3843+ uid: d9d9217b-b3bd-48d4-b232-2b36cd68eaac
3844 spec:
3845- clusterIP: 10.99.145.199
3846+ clusterIP: 10.103.3.17
3847 ports:
3848 - name: webhook
3849 port: 443
3850@@ -5180,6 +5288,33 @@
3851 type: ClusterIP
3852 status:
3853 loadBalancer: {}
3854+- apiVersion: v1
3855+ kind: Service
3856+ metadata:
3857+ creationTimestamp: "2019-11-11T06:44:13Z"
3858+ labels:
3859+ name: fourohfour
3860+ toolforge: tool
3861+ tools.wmflabs.org/webservice: "true"
3862+ tools.wmflabs.org/webservice-version: "1"
3863+ name: fourohfour
3864+ namespace: tool-fourohfour
3865+ resourceVersion: "3260744"
3866+ selfLink: /api/v1/namespaces/tool-fourohfour/services/fourohfour
3867+ uid: cf8e6b82-440c-4125-95a7-542a6ab6e8e1
3868+ spec:
3869+ clusterIP: 10.100.6.148
3870+ ports:
3871+ - name: http
3872+ port: 8000
3873+ protocol: TCP
3874+ targetPort: 8000
3875+ selector:
3876+ name: fourohfour
3877+ sessionAffinity: None
3878+ type: ClusterIP
3879+ status:
3880+ loadBalancer: {}
3881 - apiVersion: apps/v1
3882 kind: DaemonSet
3883 metadata:
3884@@ -5187,15 +5322,15 @@
3885 deprecated.daemonset.template.generation: "1"
3886 kubectl.kubernetes.io/last-applied-configuration: |
3887 {"apiVersion":"apps/v1","kind":"DaemonSet","metadata":{"annotations":{},"labels":{"k8s-app":"calico-node"},"name":"calico-node","namespace":"kube-system"},"spec":{"selector":{"matchLabels":{"k8s-app":"calico-node"}},"template":{"metadata":{"annotations":{"scheduler.alpha.kubernetes.io/critical-pod":""},"labels":{"k8s-app":"calico-node"}},"spec":{"containers":[{"env":[{"name":"DATASTORE_TYPE","value":"kubernetes"},{"name":"WAIT_FOR_DATASTORE","value":"true"},{"name":"NODENAME","valueFrom":{"fieldRef":{"fieldPath":"spec.nodeName"}}},{"name":"CALICO_NETWORKING_BACKEND","valueFrom":{"configMapKeyRef":{"key":"calico_backend","name":"calico-config"}}},{"name":"CLUSTER_TYPE","value":"k8s,bgp"},{"name":"IP","value":"autodetect"},{"name":"CALICO_IPV4POOL_IPIP","value":"Always"},{"name":"FELIX_IPINIPMTU","valueFrom":{"configMapKeyRef":{"key":"veth_mtu","name":"calico-config"}}},{"name":"CALICO_IPV4POOL_CIDR","value":"192.168.0.0/16"},{"name":"CALICO_DISABLE_FILE_LOGGING","value":"true"},{"name":"FELIX_DEFAULTENDPOINTTOHOSTACTION","value":"ACCEPT"},{"name":"FELIX_IPV6SUPPORT","value":"false"},{"name":"FELIX_LOGSEVERITYSCREEN","value":"info"},{"name":"FELIX_HEALTHENABLED","value":"true"}],"image":"calico/node:v3.8.0","livenessProbe":{"failureThreshold":6,"httpGet":{"host":"localhost","path":"/liveness","port":9099},"initialDelaySeconds":10,"periodSeconds":10},"name":"calico-node","readinessProbe":{"exec":{"command":["/bin/calico-node","-bird-ready","-felix-ready"]},"periodSeconds":10},"resources":{"requests":{"cpu":"250m"}},"securityContext":{"privileged":true},"volumeMounts":[{"mountPath":"/lib/modules","name":"lib-modules","readOnly":true},{"mountPath":"/run/xtables.lock","name":"xtables-lock","readOnly":false},{"mountPath":"/var/run/calico","name":"var-run-calico","readOnly":false},{"mountPath":"/var/lib/calico","name":"var-lib-calico","readOnly":false},{"mountPath":"/var/run/nodeagent","name":"policysync"}]}],"hostNetwork":true,"initContainers":[{"command":["/opt/cni/bin/calico-ipam","-upgrade"],"env":[{"name":"KUBERNETES_NODE_NAME","valueFrom":{"fieldRef":{"fieldPath":"spec.nodeName"}}},{"name":"CALICO_NETWORKING_BACKEND","valueFrom":{"configMapKeyRef":{"key":"calico_backend","name":"calico-config"}}}],"image":"calico/cni:v3.8.0","name":"upgrade-ipam","volumeMounts":[{"mountPath":"/var/lib/cni/networks","name":"host-local-net-dir"},{"mountPath":"/host/opt/cni/bin","name":"cni-bin-dir"}]},{"command":["/install-cni.sh"],"env":[{"name":"CNI_CONF_NAME","value":"10-calico.conflist"},{"name":"CNI_NETWORK_CONFIG","valueFrom":{"configMapKeyRef":{"key":"cni_network_config","name":"calico-config"}}},{"name":"KUBERNETES_NODE_NAME","valueFrom":{"fieldRef":{"fieldPath":"spec.nodeName"}}},{"name":"CNI_MTU","valueFrom":{"configMapKeyRef":{"key":"veth_mtu","name":"calico-config"}}},{"name":"SLEEP","value":"false"}],"image":"calico/cni:v3.8.0","name":"install-cni","volumeMounts":[{"mountPath":"/host/opt/cni/bin","name":"cni-bin-dir"},{"mountPath":"/host/etc/cni/net.d","name":"cni-net-dir"}]},{"image":"calico/pod2daemon-flexvol:v3.8.0","name":"flexvol-driver","volumeMounts":[{"mountPath":"/host/driver","name":"flexvol-driver-host"}]}],"nodeSelector":{"beta.kubernetes.io/os":"linux"},"priorityClassName":"system-node-critical","serviceAccountName":"calico-node","terminationGracePeriodSeconds":0,"tolerations":[{"effect":"NoSchedule","operator":"Exists"},{"key":"CriticalAddonsOnly","operator":"Exists"},{"effect":"NoExecute","operator":"Exists"}],"volumes":[{"hostPath":{"path":"/lib/modules"},"name":"lib-modules"},
{"hostPath":{"path":"/var/run/calico"},"name":"var-run-calico"},{"hostPath":{"path":"/var/lib/calico"},"name":"var-lib-calico"},{"hostPath":{"path":"/run/xtables.lock","type":"FileOrCreate"},"name":"xtables-lock"},{"hostPath":{"path":"/opt/cni/bin"},"name":"cni-bin-dir"},{"hostPath":{"path":"/etc/cni/net.d"},"name":"cni-net-dir"},{"hostPath":{"path":"/var/lib/cni/networks"},"name":"host-local-net-dir"},{"hostPath":{"path":"/var/run/nodeagent","type":"DirectoryOrCreate"},"name":"policysync"},{"hostPath":{"path":"/usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds","type":"DirectoryOrCreate"},"name":"flexvol-driver-host"}]}},"updateStrategy":{"rollingUpdate":{"maxUnavailable":1},"type":"RollingUpdate"}}}
3888- creationTimestamp: "2019-11-06T14:15:35Z"
3889+ creationTimestamp: "2019-10-23T09:58:21Z"
3890 generation: 1
3891 labels:
3892 k8s-app: calico-node
3893 name: calico-node
3894 namespace: kube-system
3895- resourceVersion: "2555155"
3896+ resourceVersion: "5045361"
3897 selfLink: /apis/apps/v1/namespaces/kube-system/daemonsets/calico-node
3898- uid: 677e71b7-e034-4826-baa2-4fee1de6e3d1
3899+ uid: 5c6b278d-68cd-4a45-913d-941ccd990f3e
3900 spec:
3901 revisionHistoryLimit: 10
3902 selector:
3903@@ -5431,15 +5566,15 @@
3904 metadata:
3905 annotations:
3906 deprecated.daemonset.template.generation: "2"
3907- creationTimestamp: "2019-11-06T14:14:18Z"
3908+ creationTimestamp: "2019-10-23T09:55:20Z"
3909 generation: 2
3910 labels:
3911 k8s-app: kube-proxy
3912 name: kube-proxy
3913 namespace: kube-system
3914- resourceVersion: "2546554"
3915+ resourceVersion: "5050880"
3916 selfLink: /apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxy
3917- uid: 450ae8e1-c9ef-415e-a983-ee2180410f3b
3918+ uid: 594edc17-8735-4841-8faf-b01e84811848
3919 spec:
3920 revisionHistoryLimit: 10
3921 selector:
3922@@ -5523,15 +5658,17 @@
3923 metadata:
3924 annotations:
3925 deployment.kubernetes.io/revision: "1"
3926- creationTimestamp: "2019-11-07T13:24:34Z"
3927+ kubectl.kubernetes.io/last-applied-configuration: |
3928+ {"apiVersion":"apps/v1beta1","kind":"Deployment","metadata":{"annotations":{},"labels":{"name":"ingress-admission"},"name":"ingress-admission","namespace":"ingress-admission"},"spec":{"replicas":2,"template":{"metadata":{"labels":{"name":"ingress-admission"},"name":"ingress-admission"},"spec":{"containers":[{"image":"docker-registry.tools.wmflabs.org/ingress-admission:latest","name":"webhook","resources":{"limits":{"cpu":"300m","memory":"50Mi"},"requests":{"cpu":"300m","memory":"50Mi"}},"securityContext":{"readOnlyRootFilesystem":true},"volumeMounts":[{"mountPath":"/etc/webhook/certs","name":"webhook-certs","readOnly":true}]}],"volumes":[{"name":"webhook-certs","secret":{"secretName":"ingress-admission-certs"}}]}}}}
3929+ creationTimestamp: "2019-10-25T23:37:48Z"
3930 generation: 1
3931 labels:
3932 name: ingress-admission
3933 name: ingress-admission
3934 namespace: ingress-admission
3935- resourceVersion: "2555309"
3936+ resourceVersion: "2674885"
3937 selfLink: /apis/apps/v1/namespaces/ingress-admission/deployments/ingress-admission
3938- uid: 8bb99257-284a-4d48-9b5f-6ea427f005e9
3939+ uid: 890de787-7031-4083-a94e-224826b5f908
3940 spec:
3941 progressDeadlineSeconds: 600
3942 replicas: 2
3943@@ -5583,14 +5720,14 @@
3944 status:
3945 availableReplicas: 2
3946 conditions:
3947- - lastTransitionTime: "2019-11-07T13:24:34Z"
3948- lastUpdateTime: "2019-11-07T13:24:37Z"
3949+ - lastTransitionTime: "2019-10-25T23:37:48Z"
3950+ lastUpdateTime: "2019-10-25T23:37:54Z"
3951 message: ReplicaSet "ingress-admission-55fb8554b5" has successfully progressed.
3952 reason: NewReplicaSetAvailable
3953 status: "True"
3954 type: Progressing
3955- - lastTransitionTime: "2019-11-21T12:49:42Z"
3956- lastUpdateTime: "2019-11-21T12:49:42Z"
3957+ - lastTransitionTime: "2019-11-07T21:52:47Z"
3958+ lastUpdateTime: "2019-11-07T21:52:47Z"
3959 message: Deployment has minimum availability.
3960 reason: MinimumReplicasAvailable
3961 status: "True"
3962@@ -5603,19 +5740,19 @@
3963 kind: Deployment
3964 metadata:
3965 annotations:
3966- deployment.kubernetes.io/revision: "2"
3967+ deployment.kubernetes.io/revision: "9"
3968 kubectl.kubernetes.io/last-applied-configuration: |
3969- {"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{},"labels":{"app.kubernetes.io/name":"ingress-nginx","app.kubernetes.io/part-of":"ingress-nginx"},"name":"nginx-ingress","namespace":"ingress-nginx"},"spec":{"replicas":1,"selector":{"matchLabels":{"app.kubernetes.io/name":"ingress-nginx","app.kubernetes.io/part-of":"ingress-nginx"}},"template":{"metadata":{"annotations":{"prometheus.io/port":"10254","prometheus.io/scrape":"true"},"labels":{"app.kubernetes.io/name":"ingress-nginx","app.kubernetes.io/part-of":"ingress-nginx"}},"spec":{"containers":[{"args":["/nginx-ingress-controller","--http-port=8080","--configmap=$(POD_NAMESPACE)/nginx-configuration","--tcp-services-configmap=$(POD_NAMESPACE)/tcp-services","--udp-services-configmap=$(POD_NAMESPACE)/udp-services","--publish-service=$(POD_NAMESPACE)/ingress-nginx","--annotations-prefix=nginx.ingress.kubernetes.io"],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}},{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}],"image":"docker-registry.tools.wmflabs.org/nginx-ingress-controller:0.25.1","livenessProbe":{"failureThreshold":3,"httpGet":{"path":"/healthz","port":10254,"scheme":"HTTP"},"initialDelaySeconds":10,"periodSeconds":10,"successThreshold":1,"timeoutSeconds":10},"name":"nginx-ingress-controller","ports":[{"containerPort":8080,"name":"http"},{"containerPort":10254,"name":"metrics"}],"readinessProbe":{"failureThreshold":3,"httpGet":{"path":"/healthz","port":10254,"scheme":"HTTP"},"periodSeconds":10,"successThreshold":1,"timeoutSeconds":10},"securityContext":{"allowPrivilegeEscalation":true,"capabilities":{"add":["NET_BIND_SERVICE"],"drop":["ALL"]},"runAsUser":33}}],"serviceAccountName":"nginx-ingress"}}}}
3970- creationTimestamp: "2019-11-07T13:11:20Z"
3971- generation: 2
3972+ {"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{},"labels":{"app.kubernetes.io/name":"ingress-nginx","app.kubernetes.io/part-of":"ingress-nginx"},"name":"nginx-ingress","namespace":"ingress-nginx"},"spec":{"replicas":1,"selector":{"matchLabels":{"app.kubernetes.io/name":"ingress-nginx","app.kubernetes.io/part-of":"ingress-nginx"}},"template":{"metadata":{"annotations":{"prometheus.io/port":"10254","prometheus.io/scrape":"true"},"labels":{"app.kubernetes.io/name":"ingress-nginx","app.kubernetes.io/part-of":"ingress-nginx"}},"spec":{"containers":[{"args":["/nginx-ingress-controller","--http-port=8080","--configmap=$(POD_NAMESPACE)/nginx-configuration","--tcp-services-configmap=$(POD_NAMESPACE)/tcp-services","--udp-services-configmap=$(POD_NAMESPACE)/udp-services","--publish-service=$(POD_NAMESPACE)/ingress-nginx","--annotations-prefix=nginx.ingress.kubernetes.io"],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}},{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}],"image":"docker-registry.tools.wmflabs.org/nginx-ingress-controller:0.25.1","livenessProbe":{"failureThreshold":3,"httpGet":{"path":"/healthz","port":10254,"scheme":"HTTP"},"initialDelaySeconds":10,"periodSeconds":10,"successThreshold":1,"timeoutSeconds":10},"name":"nginx-ingress-controller","ports":[{"containerPort":8080,"name":"http"}],"readinessProbe":{"failureThreshold":3,"httpGet":{"path":"/healthz","port":10254,"scheme":"HTTP"},"periodSeconds":10,"successThreshold":1,"timeoutSeconds":10},"securityContext":{"allowPrivilegeEscalation":true,"capabilities":{"add":["NET_BIND_SERVICE"],"drop":["ALL"]},"runAsUser":33}}],"serviceAccountName":"nginx-ingress"}}}}
3973+ creationTimestamp: "2019-10-25T11:59:09Z"
3974+ generation: 9
3975 labels:
3976 app.kubernetes.io/name: ingress-nginx
3977 app.kubernetes.io/part-of: ingress-nginx
3978 name: nginx-ingress
3979 namespace: ingress-nginx
3980- resourceVersion: "2555248"
3981+ resourceVersion: "5045346"
3982 selfLink: /apis/apps/v1/namespaces/ingress-nginx/deployments/nginx-ingress
3983- uid: 0b412fc9-853d-4291-923d-2786f815687b
3984+ uid: b405e85f-a8d3-4621-bda2-37e3c5f2896a
3985 spec:
3986 progressDeadlineSeconds: 600
3987 replicas: 1
3988@@ -5648,6 +5785,7 @@
3989 - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
3990 - --publish-service=$(POD_NAMESPACE)/ingress-nginx
3991 - --annotations-prefix=nginx.ingress.kubernetes.io
3992+ - --default-backend-service=tool-fourohfour/fourohfour
3993 env:
3994 - name: POD_NAME
3995 valueFrom:
3996@@ -5709,19 +5847,19 @@
3997 status:
3998 availableReplicas: 1
3999 conditions:
4000- - lastTransitionTime: "2019-11-20T04:59:47Z"
4001- lastUpdateTime: "2019-11-20T05:00:05Z"
4002- message: ReplicaSet "nginx-ingress-5dbf7cb65c" has successfully progressed.
4003+ - lastTransitionTime: "2019-10-25T16:15:11Z"
4004+ lastUpdateTime: "2019-11-15T12:27:11Z"
4005+ message: ReplicaSet "nginx-ingress-5d586d964b" has successfully progressed.
4006 reason: NewReplicaSetAvailable
4007 status: "True"
4008 type: Progressing
4009- - lastTransitionTime: "2019-11-21T12:49:39Z"
4010- lastUpdateTime: "2019-11-21T12:49:39Z"
4011+ - lastTransitionTime: "2019-11-21T13:11:18Z"
4012+ lastUpdateTime: "2019-11-21T13:11:18Z"
4013 message: Deployment has minimum availability.
4014 reason: MinimumReplicasAvailable
4015 status: "True"
4016 type: Available
4017- observedGeneration: 2
4018+ observedGeneration: 9
4019 readyReplicas: 1
4020 replicas: 1
4021 updatedReplicas: 1
4022@@ -5732,15 +5870,15 @@
4023 deployment.kubernetes.io/revision: "1"
4024 kubectl.kubernetes.io/last-applied-configuration: |
4025 {"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{},"labels":{"k8s-app":"calico-kube-controllers"},"name":"calico-kube-controllers","namespace":"kube-system"},"spec":{"replicas":1,"selector":{"matchLabels":{"k8s-app":"calico-kube-controllers"}},"strategy":{"type":"Recreate"},"template":{"metadata":{"annotations":{"scheduler.alpha.kubernetes.io/critical-pod":""},"labels":{"k8s-app":"calico-kube-controllers"},"name":"calico-kube-controllers","namespace":"kube-system"},"spec":{"containers":[{"env":[{"name":"ENABLED_CONTROLLERS","value":"node"},{"name":"DATASTORE_TYPE","value":"kubernetes"}],"image":"calico/kube-controllers:v3.8.0","name":"calico-kube-controllers","readinessProbe":{"exec":{"command":["/usr/bin/check-status","-r"]}}}],"nodeSelector":{"beta.kubernetes.io/os":"linux"},"priorityClassName":"system-cluster-critical","serviceAccountName":"calico-kube-controllers","tolerations":[{"key":"CriticalAddonsOnly","operator":"Exists"},{"effect":"NoSchedule","key":"node-role.kubernetes.io/master"}]}}}}
4026- creationTimestamp: "2019-11-06T14:15:35Z"
4027+ creationTimestamp: "2019-10-23T09:58:21Z"
4028 generation: 1
4029 labels:
4030 k8s-app: calico-kube-controllers
4031 name: calico-kube-controllers
4032 namespace: kube-system
4033- resourceVersion: "2555297"
4034+ resourceVersion: "5045350"
4035 selfLink: /apis/apps/v1/namespaces/kube-system/deployments/calico-kube-controllers
4036- uid: b4a16ed2-5d17-404e-939d-27c0c1c3be63
4037+ uid: a636b7f2-62af-4bb0-8fc9-b211d38512a9
4038 spec:
4039 progressDeadlineSeconds: 600
4040 replicas: 1
4041@@ -5799,14 +5937,14 @@
4042 status:
4043 availableReplicas: 1
4044 conditions:
4045- - lastTransitionTime: "2019-11-06T14:15:35Z"
4046- lastUpdateTime: "2019-11-06T14:16:31Z"
4047+ - lastTransitionTime: "2019-10-23T09:58:21Z"
4048+ lastUpdateTime: "2019-10-23T09:58:51Z"
4049 message: ReplicaSet "calico-kube-controllers-59f54d6bbc" has successfully progressed.
4050 reason: NewReplicaSetAvailable
4051 status: "True"
4052 type: Progressing
4053- - lastTransitionTime: "2019-11-21T12:49:42Z"
4054- lastUpdateTime: "2019-11-21T12:49:42Z"
4055+ - lastTransitionTime: "2019-11-21T13:11:19Z"
4056+ lastUpdateTime: "2019-11-21T13:11:19Z"
4057 message: Deployment has minimum availability.
4058 reason: MinimumReplicasAvailable
4059 status: "True"
4060@@ -5820,15 +5958,15 @@
4061 metadata:
4062 annotations:
4063 deployment.kubernetes.io/revision: "1"
4064- creationTimestamp: "2019-11-06T14:14:17Z"
4065+ creationTimestamp: "2019-10-23T09:55:20Z"
4066 generation: 5
4067 labels:
4068 k8s-app: kube-dns
4069 name: coredns
4070 namespace: kube-system
4071- resourceVersion: "2585106"
4072+ resourceVersion: "5052507"
4073 selfLink: /apis/apps/v1/namespaces/kube-system/deployments/coredns
4074- uid: 2c551d5e-387e-4cce-8ba5-1df6e4a5918f
4075+ uid: d1cd11f8-2858-457c-b97b-a8ae3d727a46
4076 spec:
4077 progressDeadlineSeconds: 600
4078 replicas: 2
4079@@ -5929,14 +6067,14 @@
4080 status:
4081 availableReplicas: 2
4082 conditions:
4083- - lastTransitionTime: "2019-11-06T14:14:33Z"
4084- lastUpdateTime: "2019-11-06T14:16:31Z"
4085+ - lastTransitionTime: "2019-10-23T09:55:35Z"
4086+ lastUpdateTime: "2019-10-23T09:58:50Z"
4087 message: ReplicaSet "coredns-5c98db65d4" has successfully progressed.
4088 reason: NewReplicaSetAvailable
4089 status: "True"
4090 type: Progressing
4091- - lastTransitionTime: "2019-11-21T12:49:36Z"
4092- lastUpdateTime: "2019-11-21T12:49:36Z"
4093+ - lastTransitionTime: "2019-11-21T13:49:47Z"
4094+ lastUpdateTime: "2019-11-21T13:49:47Z"
4095 message: Deployment has minimum availability.
4096 reason: MinimumReplicasAvailable
4097 status: "True"
4098@@ -5950,15 +6088,113 @@
4099 metadata:
4100 annotations:
4101 deployment.kubernetes.io/revision: "1"
4102- creationTimestamp: "2019-11-07T13:26:04Z"
4103+ kubectl.kubernetes.io/last-applied-configuration: |
4104+ {"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{},"labels":{"app":"maintain-kubeusers"},"name":"maintain-kubeusers","namespace":"maintain-kubeusers"},"spec":{"replicas":1,"selector":{"matchLabels":{"app":"maintain-kubeusers"}},"template":{"metadata":{"labels":{"app":"maintain-kubeusers"}},"spec":{"containers":[{"args":["/app/maintain_kubeusers.py","--project=toolsbeta"],"command":["/app/venv/bin/python"],"image":"docker-registry.tools.wmflabs.org/maintain-kubeusers:beta","imagePullPolicy":"Always","livenessProbe":{"exec":{"command":["find","/tmp/run.check","-mmin","+5","-exec","rm","/tmp/run.check",";"]},"initialDelaySeconds":5,"periodSeconds":5},"name":"maintain-kubeusers","volumeMounts":[{"mountPath":"/data/project","name":"my-host-nfs"}]}],"serviceAccountName":"user-maintainer","volumes":[{"hostPath":{"path":"/data/project","type":"Directory"},"name":"my-host-nfs"}]}}}}
4105+ creationTimestamp: "2019-11-06T16:49:37Z"
4106+ generation: 1
4107+ labels:
4108+ app: maintain-kubeusers
4109+ name: maintain-kubeusers
4110+ namespace: maintain-kubeusers
4111+ resourceVersion: "4162271"
4112+ selfLink: /apis/apps/v1/namespaces/maintain-kubeusers/deployments/maintain-kubeusers
4113+ uid: c11182a1-4011-42d5-a22e-630925857a8a
4114+ spec:
4115+ progressDeadlineSeconds: 600
4116+ replicas: 1
4117+ revisionHistoryLimit: 10
4118+ selector:
4119+ matchLabels:
4120+ app: maintain-kubeusers
4121+ strategy:
4122+ rollingUpdate:
4123+ maxSurge: 25%
4124+ maxUnavailable: 25%
4125+ type: RollingUpdate
4126+ template:
4127+ metadata:
4128+ creationTimestamp: null
4129+ labels:
4130+ app: maintain-kubeusers
4131+ spec:
4132+ containers:
4133+ - args:
4134+ - /app/maintain_kubeusers.py
4135+ - --project=toolsbeta
4136+ command:
4137+ - /app/venv/bin/python
4138+ image: docker-registry.tools.wmflabs.org/maintain-kubeusers:beta
4139+ imagePullPolicy: Always
4140+ livenessProbe:
4141+ exec:
4142+ command:
4143+ - find
4144+ - /tmp/run.check
4145+ - -mmin
4146+ - "+5"
4147+ - -exec
4148+ - rm
4149+ - /tmp/run.check
4150+ - ;
4151+ failureThreshold: 3
4152+ initialDelaySeconds: 5
4153+ periodSeconds: 5
4154+ successThreshold: 1
4155+ timeoutSeconds: 1
4156+ name: maintain-kubeusers
4157+ resources: {}
4158+ terminationMessagePath: /dev/termination-log
4159+ terminationMessagePolicy: File
4160+ volumeMounts:
4161+ - mountPath: /data/project
4162+ name: my-host-nfs
4163+ dnsPolicy: ClusterFirst
4164+ restartPolicy: Always
4165+ schedulerName: default-scheduler
4166+ securityContext: {}
4167+ serviceAccount: user-maintainer
4168+ serviceAccountName: user-maintainer
4169+ terminationGracePeriodSeconds: 30
4170+ volumes:
4171+ - hostPath:
4172+ path: /data/project
4173+ type: Directory
4174+ name: my-host-nfs
4175+ status:
4176+ availableReplicas: 1
4177+ conditions:
4178+ - lastTransitionTime: "2019-11-06T16:49:37Z"
4179+ lastUpdateTime: "2019-11-06T16:49:41Z"
4180+ message: ReplicaSet "maintain-kubeusers-7b6bb8f79d" has successfully progressed.
4181+ reason: NewReplicaSetAvailable
4182+ status: "True"
4183+ type: Progressing
4184+ - lastTransitionTime: "2019-11-16T11:12:51Z"
4185+ lastUpdateTime: "2019-11-16T11:12:51Z"
4186+ message: Deployment has minimum availability.
4187+ reason: MinimumReplicasAvailable
4188+ status: "True"
4189+ type: Available
4190+ observedGeneration: 1
4191+ readyReplicas: 1
4192+ replicas: 1
4193+ updatedReplicas: 1
4194+- apiVersion: apps/v1
4195+ kind: Deployment
4196+ metadata:
4197+ annotations:
4198+ deployment.kubernetes.io/revision: "1"
4199+ kubectl.kubernetes.io/last-applied-configuration: |
4200+ {"apiVersion":"apps/v1beta1","kind":"Deployment","metadata":{"annotations":{},"labels":{"name":"registry-admission"},"name":"registry-admission","namespace":"registry-admission"},"spec":{"replicas":2,"template":{"metadata":{"labels":{"name":"registry-admission"},"name":"registry-admission"},"spec":{"containers":[{"image":"docker-registry.tools.wmflabs.org/registry-admission:latest","name":"webhook","resources":{"limits":{"cpu":"300m","memory":"50Mi"},"requests":{"cpu":"300m","memory":"50Mi"}},"securityContext":{"readOnlyRootFilesystem":true},"volumeMounts":[{"mountPath":"/etc/webhook/certs","name":"webhook-certs","readOnly":true}]}],"volumes":[{"name":"webhook-certs","secret":{"secretName":"registry-admission-certs"}}]}}}}
4201+ creationTimestamp: "2019-10-25T23:39:02Z"
4202 generation: 1
4203 labels:
4204 name: registry-admission
4205 name: registry-admission
4206 namespace: registry-admission
4207- resourceVersion: "2555331"
4208+ resourceVersion: "429044"
4209 selfLink: /apis/apps/v1/namespaces/registry-admission/deployments/registry-admission
4210- uid: 20f91a23-77d6-46ef-88e8-25e7df8be698
4211+ uid: ede069ee-db94-434a-8a31-d38164a1b791
4212 spec:
4213 progressDeadlineSeconds: 600
4214 replicas: 2
4215@@ -6010,22 +6246,99 @@
4216 status:
4217 availableReplicas: 2
4218 conditions:
4219- - lastTransitionTime: "2019-11-07T13:26:04Z"
4220- lastUpdateTime: "2019-11-07T13:26:08Z"
4221+ - lastTransitionTime: "2019-10-25T23:39:07Z"
4222+ lastUpdateTime: "2019-10-25T23:39:07Z"
4223+ message: Deployment has minimum availability.
4224+ reason: MinimumReplicasAvailable
4225+ status: "True"
4226+ type: Available
4227+ - lastTransitionTime: "2019-10-25T23:39:02Z"
4228+ lastUpdateTime: "2019-10-25T23:39:07Z"
4229 message: ReplicaSet "registry-admission-6f5f6589c5" has successfully progressed.
4230 reason: NewReplicaSetAvailable
4231 status: "True"
4232 type: Progressing
4233- - lastTransitionTime: "2019-11-21T12:49:44Z"
4234- lastUpdateTime: "2019-11-21T12:49:44Z"
4235+ observedGeneration: 1
4236+ readyReplicas: 2
4237+ replicas: 2
4238+ updatedReplicas: 2
4239+- apiVersion: apps/v1
4240+ kind: Deployment
4241+ metadata:
4242+ annotations:
4243+ deployment.kubernetes.io/revision: "1"
4244+ creationTimestamp: "2019-11-11T06:44:13Z"
4245+ generation: 1
4246+ labels:
4247+ name: fourohfour
4248+ toolforge: tool
4249+ tools.wmflabs.org/webservice: "true"
4250+ tools.wmflabs.org/webservice-version: "1"
4251+ name: fourohfour
4252+ namespace: tool-fourohfour
4253+ resourceVersion: "3260773"
4254+ selfLink: /apis/apps/v1/namespaces/tool-fourohfour/deployments/fourohfour
4255+ uid: eb73e8ea-dca5-4201-8fa9-159eb4156230
4256+ spec:
4257+ progressDeadlineSeconds: 2147483647
4258+ replicas: 1
4259+ revisionHistoryLimit: 2147483647
4260+ selector:
4261+ matchLabels:
4262+ name: fourohfour
4263+ toolforge: tool
4264+ tools.wmflabs.org/webservice: "true"
4265+ tools.wmflabs.org/webservice-version: "1"
4266+ strategy:
4267+ rollingUpdate:
4268+ maxSurge: 1
4269+ maxUnavailable: 1
4270+ type: RollingUpdate
4271+ template:
4272+ metadata:
4273+ creationTimestamp: null
4274+ labels:
4275+ name: fourohfour
4276+ toolforge: tool
4277+ tools.wmflabs.org/webservice: "true"
4278+ tools.wmflabs.org/webservice-version: "1"
4279+ spec:
4280+ containers:
4281+ - command:
4282+ - /usr/bin/webservice-runner
4283+ - --type
4284+ - uwsgi-python
4285+ - --port
4286+ - "8000"
4287+ image: docker-registry.tools.wmflabs.org/toolforge-python35-sssd-web:latest
4288+ imagePullPolicy: Always
4289+ name: webservice
4290+ ports:
4291+ - containerPort: 8000
4292+ name: http
4293+ protocol: TCP
4294+ resources: {}
4295+ terminationMessagePath: /dev/termination-log
4296+ terminationMessagePolicy: File
4297+ workingDir: /data/project/fourohfour/
4298+ dnsPolicy: ClusterFirst
4299+ restartPolicy: Always
4300+ schedulerName: default-scheduler
4301+ securityContext: {}
4302+ terminationGracePeriodSeconds: 30
4303+ status:
4304+ availableReplicas: 1
4305+ conditions:
4306+ - lastTransitionTime: "2019-11-11T06:44:13Z"
4307+ lastUpdateTime: "2019-11-11T06:44:13Z"
4308 message: Deployment has minimum availability.
4309 reason: MinimumReplicasAvailable
4310 status: "True"
4311 type: Available
4312 observedGeneration: 1
4313- readyReplicas: 2
4314- replicas: 2
4315- updatedReplicas: 2
4316+ readyReplicas: 1
4317+ replicas: 1
4318+ updatedReplicas: 1
4319 - apiVersion: apps/v1
4320 kind: ReplicaSet
4321 metadata:
4322@@ -6033,7 +6346,7 @@
4323 deployment.kubernetes.io/desired-replicas: "2"
4324 deployment.kubernetes.io/max-replicas: "3"
4325 deployment.kubernetes.io/revision: "1"
4326- creationTimestamp: "2019-11-07T13:24:34Z"
4327+ creationTimestamp: "2019-10-25T23:37:48Z"
4328 generation: 1
4329 labels:
4330 name: ingress-admission
4331@@ -6046,10 +6359,10 @@
4332 controller: true
4333 kind: Deployment
4334 name: ingress-admission
4335- uid: 8bb99257-284a-4d48-9b5f-6ea427f005e9
4336- resourceVersion: "2555306"
4337+ uid: 890de787-7031-4083-a94e-224826b5f908
4338+ resourceVersion: "2674883"
4339 selfLink: /apis/apps/v1/namespaces/ingress-admission/replicasets/ingress-admission-55fb8554b5
4340- uid: 5a38fd74-55c6-40d8-b7ca-62739bd8cafe
4341+ uid: cbff34d9-c890-4184-acac-e8c1648c3d71
4342 spec:
4343 replicas: 2
4344 selector:
4345@@ -6105,14 +6418,14 @@
4346 annotations:
4347 deployment.kubernetes.io/desired-replicas: "1"
4348 deployment.kubernetes.io/max-replicas: "2"
4349- deployment.kubernetes.io/revision: "2"
4350- creationTimestamp: "2019-11-19T12:46:57Z"
4351+ deployment.kubernetes.io/revision: "9"
4352+ creationTimestamp: "2019-11-15T12:26:56Z"
4353 generation: 1
4354 labels:
4355 app.kubernetes.io/name: ingress-nginx
4356 app.kubernetes.io/part-of: ingress-nginx
4357- pod-template-hash: 5dbf7cb65c
4358- name: nginx-ingress-5dbf7cb65c
4359+ pod-template-hash: 5d586d964b
4360+ name: nginx-ingress-5d586d964b
4361 namespace: ingress-nginx
4362 ownerReferences:
4363 - apiVersion: apps/v1
4364@@ -6120,17 +6433,17 @@
4365 controller: true
4366 kind: Deployment
4367 name: nginx-ingress
4368- uid: 0b412fc9-853d-4291-923d-2786f815687b
4369- resourceVersion: "2555247"
4370- selfLink: /apis/apps/v1/namespaces/ingress-nginx/replicasets/nginx-ingress-5dbf7cb65c
4371- uid: 7c4dbeb7-3867-43a6-a21b-0f9fc6fa466e
4372+ uid: b405e85f-a8d3-4621-bda2-37e3c5f2896a
4373+ resourceVersion: "5045343"
4374+ selfLink: /apis/apps/v1/namespaces/ingress-nginx/replicasets/nginx-ingress-5d586d964b
4375+ uid: 6c77c7f1-1a6b-40b2-a79c-71af7f48d089
4376 spec:
4377 replicas: 1
4378 selector:
4379 matchLabels:
4380 app.kubernetes.io/name: ingress-nginx
4381 app.kubernetes.io/part-of: ingress-nginx
4382- pod-template-hash: 5dbf7cb65c
4383+ pod-template-hash: 5d586d964b
4384 template:
4385 metadata:
4386 annotations:
4387@@ -6140,7 +6453,7 @@
4388 labels:
4389 app.kubernetes.io/name: ingress-nginx
4390 app.kubernetes.io/part-of: ingress-nginx
4391- pod-template-hash: 5dbf7cb65c
4392+ pod-template-hash: 5d586d964b
4393 spec:
4394 containers:
4395 - args:
4396@@ -6151,6 +6464,7 @@
4397 - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
4398 - --publish-service=$(POD_NAMESPACE)/ingress-nginx
4399 - --annotations-prefix=nginx.ingress.kubernetes.io
4400+ - --default-backend-service=tool-fourohfour/fourohfour
4401 env:
4402 - name: POD_NAME
4403 valueFrom:
4404@@ -6221,9 +6535,120 @@
4405 annotations:
4406 deployment.kubernetes.io/desired-replicas: "1"
4407 deployment.kubernetes.io/max-replicas: "2"
4408- deployment.kubernetes.io/revision: "1"
4409- creationTimestamp: "2019-11-07T13:11:20Z"
4410- generation: 2
4411+ deployment.kubernetes.io/revision: "8"
4412+ creationTimestamp: "2019-11-11T18:24:26Z"
4413+ generation: 3
4414+ labels:
4415+ app.kubernetes.io/name: ingress-nginx
4416+ app.kubernetes.io/part-of: ingress-nginx
4417+ pod-template-hash: 5df9b77677
4418+ name: nginx-ingress-5df9b77677
4419+ namespace: ingress-nginx
4420+ ownerReferences:
4421+ - apiVersion: apps/v1
4422+ blockOwnerDeletion: true
4423+ controller: true
4424+ kind: Deployment
4425+ name: nginx-ingress
4426+ uid: b405e85f-a8d3-4621-bda2-37e3c5f2896a
4427+ resourceVersion: "3997479"
4428+ selfLink: /apis/apps/v1/namespaces/ingress-nginx/replicasets/nginx-ingress-5df9b77677
4429+ uid: 8423061f-aba4-46a2-be35-03c760938d9b
4430+ spec:
4431+ replicas: 0
4432+ selector:
4433+ matchLabels:
4434+ app.kubernetes.io/name: ingress-nginx
4435+ app.kubernetes.io/part-of: ingress-nginx
4436+ pod-template-hash: 5df9b77677
4437+ template:
4438+ metadata:
4439+ annotations:
4440+ prometheus.io/port: "10254"
4441+ prometheus.io/scrape: "true"
4442+ creationTimestamp: null
4443+ labels:
4444+ app.kubernetes.io/name: ingress-nginx
4445+ app.kubernetes.io/part-of: ingress-nginx
4446+ pod-template-hash: 5df9b77677
4447+ spec:
4448+ containers:
4449+ - args:
4450+ - /nginx-ingress-controller
4451+ - --http-port=8080
4452+ - --configmap=$(POD_NAMESPACE)/nginx-configuration
4453+ - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
4454+ - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
4455+ - --publish-service=$(POD_NAMESPACE)/ingress-nginx
4456+ - --annotations-prefix=nginx.ingress.kubernetes.io
4457+ - --default-backend-service=tool-fourohfour/fourohfour
4458+ env:
4459+ - name: POD_NAME
4460+ valueFrom:
4461+ fieldRef:
4462+ apiVersion: v1
4463+ fieldPath: metadata.name
4464+ - name: POD_NAMESPACE
4465+ valueFrom:
4466+ fieldRef:
4467+ apiVersion: v1
4468+ fieldPath: metadata.namespace
4469+ image: docker-registry.tools.wmflabs.org/nginx-ingress-controller:0.25.1
4470+ imagePullPolicy: IfNotPresent
4471+ livenessProbe:
4472+ failureThreshold: 3
4473+ httpGet:
4474+ path: /healthz
4475+ port: 10254
4476+ scheme: HTTP
4477+ initialDelaySeconds: 10
4478+ periodSeconds: 10
4479+ successThreshold: 1
4480+ timeoutSeconds: 10
4481+ name: nginx-ingress-controller
4482+ ports:
4483+ - containerPort: 8080
4484+ name: http
4485+ protocol: TCP
4486+ readinessProbe:
4487+ failureThreshold: 3
4488+ httpGet:
4489+ path: /healthz
4490+ port: 10254
4491+ scheme: HTTP
4492+ periodSeconds: 10
4493+ successThreshold: 1
4494+ timeoutSeconds: 10
4495+ resources: {}
4496+ securityContext:
4497+ allowPrivilegeEscalation: true
4498+ capabilities:
4499+ add:
4500+ - NET_BIND_SERVICE
4501+ drop:
4502+ - ALL
4503+ runAsUser: 33
4504+ terminationMessagePath: /dev/termination-log
4505+ terminationMessagePolicy: File
4506+ dnsPolicy: ClusterFirst
4507+ restartPolicy: Always
4508+ schedulerName: default-scheduler
4509+ securityContext: {}
4510+ serviceAccount: nginx-ingress
4511+ serviceAccountName: nginx-ingress
4512+ terminationGracePeriodSeconds: 30
4513+ status:
4514+ observedGeneration: 3
4515+ replicas: 0
4516+- apiVersion: apps/v1
4517+ kind: ReplicaSet
4518+ metadata:
4519+ annotations:
4520+ deployment.kubernetes.io/desired-replicas: "1"
4521+ deployment.kubernetes.io/max-replicas: "2"
4522+ deployment.kubernetes.io/revision: "4"
4523+ creationTimestamp: "2019-11-08T11:37:37Z"
4524+ generation: 3
4525 labels:
4526 app.kubernetes.io/name: ingress-nginx
4527 app.kubernetes.io/part-of: ingress-nginx
4528@@ -6236,10 +6661,10 @@
4529 controller: true
4530 kind: Deployment
4531 name: nginx-ingress
4532- uid: 0b412fc9-853d-4291-923d-2786f815687b
4533- resourceVersion: "2323526"
4534+ uid: b405e85f-a8d3-4621-bda2-37e3c5f2896a
4535+ resourceVersion: "3345551"
4536 selfLink: /apis/apps/v1/namespaces/ingress-nginx/replicasets/nginx-ingress-77bfb79f66
4537- uid: 6e7c537d-342e-448a-8165-75d779092c25
4538+ uid: 00e6a385-9707-4b16-97cd-c76eb32760cf
4539 spec:
4540 replicas: 0
4541 selector:
4542@@ -6323,6 +6748,228 @@
4543 serviceAccountName: nginx-ingress
4544 terminationGracePeriodSeconds: 30
4545 status:
4546+ observedGeneration: 3
4547+ replicas: 0
4548+- apiVersion: apps/v1
4549+ kind: ReplicaSet
4550+ metadata:
4551+ annotations:
4552+ deployment.kubernetes.io/desired-replicas: "1"
4553+ deployment.kubernetes.io/max-replicas: "2"
4554+ deployment.kubernetes.io/revision: "3"
4555+ deployment.kubernetes.io/revision-history: "1"
4556+ creationTimestamp: "2019-10-25T11:59:09Z"
4557+ generation: 4
4558+ labels:
4559+ app.kubernetes.io/name: ingress-nginx
4560+ app.kubernetes.io/part-of: ingress-nginx
4561+ pod-template-hash: 789cc967b9
4562+ name: nginx-ingress-789cc967b9
4563+ namespace: ingress-nginx
4564+ ownerReferences:
4565+ - apiVersion: apps/v1
4566+ blockOwnerDeletion: true
4567+ controller: true
4568+ kind: Deployment
4569+ name: nginx-ingress
4570+ uid: b405e85f-a8d3-4621-bda2-37e3c5f2896a
4571+ resourceVersion: "2774480"
4572+ selfLink: /apis/apps/v1/namespaces/ingress-nginx/replicasets/nginx-ingress-789cc967b9
4573+ uid: 3d6ad053-9b67-4d5f-b1ee-c00b8d2b495d
4574+ spec:
4575+ replicas: 0
4576+ selector:
4577+ matchLabels:
4578+ app.kubernetes.io/name: ingress-nginx
4579+ app.kubernetes.io/part-of: ingress-nginx
4580+ pod-template-hash: 789cc967b9
4581+ template:
4582+ metadata:
4583+ annotations:
4584+ prometheus.io/port: "10254"
4585+ prometheus.io/scrape: "true"
4586+ creationTimestamp: null
4587+ labels:
4588+ app.kubernetes.io/name: ingress-nginx
4589+ app.kubernetes.io/part-of: ingress-nginx
4590+ pod-template-hash: 789cc967b9
4591+ spec:
4592+ containers:
4593+ - args:
4594+ - /nginx-ingress-controller
4595+ - --http-port=8080
4596+ - --configmap=$(POD_NAMESPACE)/nginx-configuration
4597+ - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
4598+ - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
4599+ - --publish-service=$(POD_NAMESPACE)/ingress-nginx
4600+ - --annotations-prefix=nginx.ingress.kubernetes.io
4601+ env:
4602+ - name: POD_NAME
4603+ valueFrom:
4604+ fieldRef:
4605+ apiVersion: v1
4606+ fieldPath: metadata.name
4607+ - name: POD_NAMESPACE
4608+ valueFrom:
4609+ fieldRef:
4610+ apiVersion: v1
4611+ fieldPath: metadata.namespace
4612+ image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.25.1
4613+ imagePullPolicy: IfNotPresent
4614+ livenessProbe:
4615+ failureThreshold: 3
4616+ httpGet:
4617+ path: /healthz
4618+ port: 10254
4619+ scheme: HTTP
4620+ initialDelaySeconds: 10
4621+ periodSeconds: 10
4622+ successThreshold: 1
4623+ timeoutSeconds: 10
4624+ name: nginx-ingress-controller
4625+ ports:
4626+ - containerPort: 8080
4627+ name: http
4628+ protocol: TCP
4629+ readinessProbe:
4630+ failureThreshold: 3
4631+ httpGet:
4632+ path: /healthz
4633+ port: 10254
4634+ scheme: HTTP
4635+ periodSeconds: 10
4636+ successThreshold: 1
4637+ timeoutSeconds: 10
4638+ resources: {}
4639+ securityContext:
4640+ allowPrivilegeEscalation: true
4641+ capabilities:
4642+ add:
4643+ - NET_BIND_SERVICE
4644+ drop:
4645+ - ALL
4646+ runAsUser: 33
4647+ terminationMessagePath: /dev/termination-log
4648+ terminationMessagePolicy: File
4649+ dnsPolicy: ClusterFirst
4650+ restartPolicy: Always
4651+ schedulerName: default-scheduler
4652+ securityContext: {}
4653+ serviceAccount: nginx-ingress
4654+ serviceAccountName: nginx-ingress
4655+ terminationGracePeriodSeconds: 30
4656+ status:
4657+ observedGeneration: 4
4658+ replicas: 0
4659+- apiVersion: apps/v1
4660+ kind: ReplicaSet
4661+ metadata:
4662+ annotations:
4663+ deployment.kubernetes.io/desired-replicas: "1"
4664+ deployment.kubernetes.io/max-replicas: "2"
4665+ deployment.kubernetes.io/revision: "5"
4666+ creationTimestamp: "2019-11-11T18:20:37Z"
4667+ generation: 2
4668+ labels:
4669+ app.kubernetes.io/name: ingress-nginx
4670+ app.kubernetes.io/part-of: ingress-nginx
4671+ pod-template-hash: 7f769cfb68
4672+ name: nginx-ingress-7f769cfb68
4673+ namespace: ingress-nginx
4674+ ownerReferences:
4675+ - apiVersion: apps/v1
4676+ blockOwnerDeletion: true
4677+ controller: true
4678+ kind: Deployment
4679+ name: nginx-ingress
4680+ uid: b405e85f-a8d3-4621-bda2-37e3c5f2896a
4681+ resourceVersion: "3345075"
4682+ selfLink: /apis/apps/v1/namespaces/ingress-nginx/replicasets/nginx-ingress-7f769cfb68
4683+ uid: ab271f77-c439-4898-958b-60afcc41a0f4
4684+ spec:
4685+ replicas: 0
4686+ selector:
4687+ matchLabels:
4688+ app.kubernetes.io/name: ingress-nginx
4689+ app.kubernetes.io/part-of: ingress-nginx
4690+ pod-template-hash: 7f769cfb68
4691+ template:
4692+ metadata:
4693+ annotations:
4694+ prometheus.io/port: "10254"
4695+ prometheus.io/scrape: "true"
4696+ creationTimestamp: null
4697+ labels:
4698+ app.kubernetes.io/name: ingress-nginx
4699+ app.kubernetes.io/part-of: ingress-nginx
4700+ pod-template-hash: 7f769cfb68
4701+ spec:
4702+ containers:
4703+ - args:
4704+ - /nginx-ingress-controller
4705+ - --http-port=8080
4706+ - --configmap=$(POD_NAMESPACE)/nginx-configuration
4707+ - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
4708+ - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
4709+ - --publish-service=$(POD_NAMESPACE)/ingress-nginx
4710+ - --annotations-prefix=nginx.ingress.kubernetes.io
4711+ - --default-backend-service tool-fourohfour/fourohfour
4712+ env:
4713+ - name: POD_NAME
4714+ valueFrom:
4715+ fieldRef:
4716+ apiVersion: v1
4717+ fieldPath: metadata.name
4718+ - name: POD_NAMESPACE
4719+ valueFrom:
4720+ fieldRef:
4721+ apiVersion: v1
4722+ fieldPath: metadata.namespace
4723+ image: docker-registry.tools.wmflabs.org/nginx-ingress-controller:0.25.1
4724+ imagePullPolicy: IfNotPresent
4725+ livenessProbe:
4726+ failureThreshold: 3
4727+ httpGet:
4728+ path: /healthz
4729+ port: 10254
4730+ scheme: HTTP
4731+ initialDelaySeconds: 10
4732+ periodSeconds: 10
4733+ successThreshold: 1
4734+ timeoutSeconds: 10
4735+ name: nginx-ingress-controller
4736+ ports:
4737+ - containerPort: 8080
4738+ name: http
4739+ protocol: TCP
4740+ readinessProbe:
4741+ failureThreshold: 3
4742+ httpGet:
4743+ path: /healthz
4744+ port: 10254
4745+ scheme: HTTP
4746+ periodSeconds: 10
4747+ successThreshold: 1
4748+ timeoutSeconds: 10
4749+ resources: {}
4750+ securityContext:
4751+ allowPrivilegeEscalation: true
4752+ capabilities:
4753+ add:
4754+ - NET_BIND_SERVICE
4755+ drop:
4756+ - ALL
4757+ runAsUser: 33
4758+ terminationMessagePath: /dev/termination-log
4759+ terminationMessagePolicy: File
4760+ dnsPolicy: ClusterFirst
4761+ restartPolicy: Always
4762+ schedulerName: default-scheduler
4763+ securityContext: {}
4764+ serviceAccount: nginx-ingress
4765+ serviceAccountName: nginx-ingress
4766+ terminationGracePeriodSeconds: 30
4767+ status:
4768 observedGeneration: 2
4769 replicas: 0
4770 - apiVersion: apps/v1
4771@@ -6330,9 +6977,342 @@
4772 metadata:
4773 annotations:
4774 deployment.kubernetes.io/desired-replicas: "1"
4775+ deployment.kubernetes.io/max-replicas: "2"
4776+ deployment.kubernetes.io/revision: "2"
4777+ creationTimestamp: "2019-10-25T12:52:45Z"
4778+ generation: 2
4779+ labels:
4780+ app.kubernetes.io/name: ingress-nginx
4781+ app.kubernetes.io/part-of: ingress-nginx
4782+ pod-template-hash: 95c8858c9
4783+ name: nginx-ingress-95c8858c9
4784+ namespace: ingress-nginx
4785+ ownerReferences:
4786+ - apiVersion: apps/v1
4787+ blockOwnerDeletion: true
4788+ controller: true
4789+ kind: Deployment
4790+ name: nginx-ingress
4791+ uid: b405e85f-a8d3-4621-bda2-37e3c5f2896a
4792+ resourceVersion: "2774529"
4793+ selfLink: /apis/apps/v1/namespaces/ingress-nginx/replicasets/nginx-ingress-95c8858c9
4794+ uid: 8d64209b-82ff-448d-b604-8aa375cd8f02
4795+ spec:
4796+ replicas: 0
4797+ selector:
4798+ matchLabels:
4799+ app.kubernetes.io/name: ingress-nginx
4800+ app.kubernetes.io/part-of: ingress-nginx
4801+ pod-template-hash: 95c8858c9
4802+ template:
4803+ metadata:
4804+ annotations:
4805+ prometheus.io/port: "10254"
4806+ prometheus.io/scrape: "true"
4807+ creationTimestamp: null
4808+ labels:
4809+ app.kubernetes.io/name: ingress-nginx
4810+ app.kubernetes.io/part-of: ingress-nginx
4811+ pod-template-hash: 95c8858c9
4812+ spec:
4813+ containers:
4814+ - args:
4815+ - /nginx-ingress-controller
4816+ - --http-port=8080
4817+ - --configmap=$(POD_NAMESPACE)/nginx-configuration
4818+ - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
4819+ - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
4820+ - --publish-service=$(POD_NAMESPACE)/ingress-nginx
4821+ - --annotations-prefix=nginx.ingress.kubernetes.io
4822+ - --v=5
4823+ env:
4824+ - name: POD_NAME
4825+ valueFrom:
4826+ fieldRef:
4827+ apiVersion: v1
4828+ fieldPath: metadata.name
4829+ - name: POD_NAMESPACE
4830+ valueFrom:
4831+ fieldRef:
4832+ apiVersion: v1
4833+ fieldPath: metadata.namespace
4834+ image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.25.1
4835+ imagePullPolicy: IfNotPresent
4836+ livenessProbe:
4837+ failureThreshold: 3
4838+ httpGet:
4839+ path: /healthz
4840+ port: 10254
4841+ scheme: HTTP
4842+ initialDelaySeconds: 10
4843+ periodSeconds: 10
4844+ successThreshold: 1
4845+ timeoutSeconds: 10
4846+ name: nginx-ingress-controller
4847+ ports:
4848+ - containerPort: 8080
4849+ name: http
4850+ protocol: TCP
4851+ readinessProbe:
4852+ failureThreshold: 3
4853+ httpGet:
4854+ path: /healthz
4855+ port: 10254
4856+ scheme: HTTP
4857+ periodSeconds: 10
4858+ successThreshold: 1
4859+ timeoutSeconds: 10
4860+ resources: {}
4861+ securityContext:
4862+ allowPrivilegeEscalation: true
4863+ capabilities:
4864+ add:
4865+ - NET_BIND_SERVICE
4866+ drop:
4867+ - ALL
4868+ runAsUser: 33
4869+ terminationMessagePath: /dev/termination-log
4870+ terminationMessagePolicy: File
4871+ dnsPolicy: ClusterFirst
4872+ restartPolicy: Always
4873+ schedulerName: default-scheduler
4874+ securityContext: {}
4875+ serviceAccount: nginx-ingress
4876+ serviceAccountName: nginx-ingress
4877+ terminationGracePeriodSeconds: 30
4878+ status:
4879+ observedGeneration: 2
4880+ replicas: 0
4881+- apiVersion: apps/v1
4882+ kind: ReplicaSet
4883+ metadata:
4884+ annotations:
4885+ deployment.kubernetes.io/desired-replicas: "1"
4886+ deployment.kubernetes.io/max-replicas: "2"
4887+ deployment.kubernetes.io/revision: "6"
4888+ creationTimestamp: "2019-11-11T18:22:06Z"
4889+ generation: 3
4890+ labels:
4891+ app.kubernetes.io/name: ingress-nginx
4892+ app.kubernetes.io/part-of: ingress-nginx
4893+ pod-template-hash: bfbf7c964
4894+ name: nginx-ingress-bfbf7c964
4895+ namespace: ingress-nginx
4896+ ownerReferences:
4897+ - apiVersion: apps/v1
4898+ blockOwnerDeletion: true
4899+ controller: true
4900+ kind: Deployment
4901+ name: nginx-ingress
4902+ uid: b405e85f-a8d3-4621-bda2-37e3c5f2896a
4903+ resourceVersion: "3345266"
4904+ selfLink: /apis/apps/v1/namespaces/ingress-nginx/replicasets/nginx-ingress-bfbf7c964
4905+ uid: e56079d4-d057-441f-b6e0-5d95e4720b64
4906+ spec:
4907+ replicas: 0
4908+ selector:
4909+ matchLabels:
4910+ app.kubernetes.io/name: ingress-nginx
4911+ app.kubernetes.io/part-of: ingress-nginx
4912+ pod-template-hash: bfbf7c964
4913+ template:
4914+ metadata:
4915+ annotations:
4916+ prometheus.io/port: "10254"
4917+ prometheus.io/scrape: "true"
4918+ creationTimestamp: null
4919+ labels:
4920+ app.kubernetes.io/name: ingress-nginx
4921+ app.kubernetes.io/part-of: ingress-nginx
4922+ pod-template-hash: bfbf7c964
4923+ spec:
4924+ containers:
4925+ - args:
4926+ - /nginx-ingress-controller
4927+ - --http-port=8080
4928+ - --configmap=$(POD_NAMESPACE)/nginx-configuration
4929+ - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
4930+ - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
4931+ - --publish-service=$(POD_NAMESPACE)/ingress-nginx
4932+ - --annotations-prefix=nginx.ingress.kubernetes.io
4933+ - --default-backend-service "tool-fourohfour/fourohfour"
4934+ env:
4935+ - name: POD_NAME
4936+ valueFrom:
4937+ fieldRef:
4938+ apiVersion: v1
4939+ fieldPath: metadata.name
4940+ - name: POD_NAMESPACE
4941+ valueFrom:
4942+ fieldRef:
4943+ apiVersion: v1
4944+ fieldPath: metadata.namespace
4945+ image: docker-registry.tools.wmflabs.org/nginx-ingress-controller:0.25.1
4946+ imagePullPolicy: IfNotPresent
4947+ livenessProbe:
4948+ failureThreshold: 3
4949+ httpGet:
4950+ path: /healthz
4951+ port: 10254
4952+ scheme: HTTP
4953+ initialDelaySeconds: 10
4954+ periodSeconds: 10
4955+ successThreshold: 1
4956+ timeoutSeconds: 10
4957+ name: nginx-ingress-controller
4958+ ports:
4959+ - containerPort: 8080
4960+ name: http
4961+ protocol: TCP
4962+ readinessProbe:
4963+ failureThreshold: 3
4964+ httpGet:
4965+ path: /healthz
4966+ port: 10254
4967+ scheme: HTTP
4968+ periodSeconds: 10
4969+ successThreshold: 1
4970+ timeoutSeconds: 10
4971+ resources: {}
4972+ securityContext:
4973+ allowPrivilegeEscalation: true
4974+ capabilities:
4975+ add:
4976+ - NET_BIND_SERVICE
4977+ drop:
4978+ - ALL
4979+ runAsUser: 33
4980+ terminationMessagePath: /dev/termination-log
4981+ terminationMessagePolicy: File
4982+ dnsPolicy: ClusterFirst
4983+ restartPolicy: Always
4984+ schedulerName: default-scheduler
4985+ securityContext: {}
4986+ serviceAccount: nginx-ingress
4987+ serviceAccountName: nginx-ingress
4988+ terminationGracePeriodSeconds: 30
4989+ status:
4990+ observedGeneration: 3
4991+ replicas: 0
4992+- apiVersion: apps/v1
4993+ kind: ReplicaSet
4994+ metadata:
4995+ annotations:
4996+ deployment.kubernetes.io/desired-replicas: "1"
4997+ deployment.kubernetes.io/max-replicas: "2"
4998+ deployment.kubernetes.io/revision: "7"
4999+ creationTimestamp: "2019-11-11T18:23:11Z"
5000+ generation: 3
5001+ labels:
5002+ app.kubernetes.io/name: ingress-nginx
5003+ app.kubernetes.io/part-of: ingress-nginx
5004+ pod-template-hash: f9bdb6597
5005+ name: nginx-ingress-f9bdb6597
5006+ namespace: ingress-nginx
5007+ ownerReferences:
5008+ - apiVersion: apps/v1
5009+ blockOwnerDeletion: true
5010+ controller: true
5011+ kind: Deployment
5012+ name: nginx-ingress
5013+ uid: b405e85f-a8d3-4621-bda2-37e3c5f2896a
5014+ resourceVersion: "3345482"
5015+ selfLink: /apis/apps/v1/namespaces/ingress-nginx/replicasets/nginx-ingress-f9bdb6597
5016+ uid: e842dfcd-65e8-4b84-8d6b-0fb44398bf8c
5017+ spec:
5018+ replicas: 0
5019+ selector:
5020+ matchLabels:
5021+ app.kubernetes.io/name: ingress-nginx
5022+ app.kubernetes.io/part-of: ingress-nginx
5023+ pod-template-hash: f9bdb6597
5024+ template:
5025+ metadata:
5026+ annotations:
5027+ prometheus.io/port: "10254"
5028+ prometheus.io/scrape: "true"
5029+ creationTimestamp: null
5030+ labels:
5031+ app.kubernetes.io/name: ingress-nginx
5032+ app.kubernetes.io/part-of: ingress-nginx
5033+ pod-template-hash: f9bdb6597
5034+ spec:
5035+ containers:
5036+ - args:
5037+ - /nginx-ingress-controller
5038+ - --http-port=8080
5039+ - --configmap=$(POD_NAMESPACE)/nginx-configuration
5040+ - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
5041+ - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
5042+ - --publish-service=$(POD_NAMESPACE)/ingress-nginx
5043+ - --annotations-prefix=nginx.ingress.kubernetes.io
5044+ - --default-backend-service="tool-fourohfour/fourohfour"
5045+ env:
5046+ - name: POD_NAME
5047+ valueFrom:
5048+ fieldRef:
5049+ apiVersion: v1
5050+ fieldPath: metadata.name
5051+ - name: POD_NAMESPACE
5052+ valueFrom:
5053+ fieldRef:
5054+ apiVersion: v1
5055+ fieldPath: metadata.namespace
5056+ image: docker-registry.tools.wmflabs.org/nginx-ingress-controller:0.25.1
5057+ imagePullPolicy: IfNotPresent
5058+ livenessProbe:
5059+ failureThreshold: 3
5060+ httpGet:
5061+ path: /healthz
5062+ port: 10254
5063+ scheme: HTTP
5064+ initialDelaySeconds: 10
5065+ periodSeconds: 10
5066+ successThreshold: 1
5067+ timeoutSeconds: 10
5068+ name: nginx-ingress-controller
5069+ ports:
5070+ - containerPort: 8080
5071+ name: http
5072+ protocol: TCP
5073+ readinessProbe:
5074+ failureThreshold: 3
5075+ httpGet:
5076+ path: /healthz
5077+ port: 10254
5078+ scheme: HTTP
5079+ periodSeconds: 10
5080+ successThreshold: 1
5081+ timeoutSeconds: 10
5082+ resources: {}
5083+ securityContext:
5084+ allowPrivilegeEscalation: true
5085+ capabilities:
5086+ add:
5087+ - NET_BIND_SERVICE
5088+ drop:
5089+ - ALL
5090+ runAsUser: 33
5091+ terminationMessagePath: /dev/termination-log
5092+ terminationMessagePolicy: File
5093+ dnsPolicy: ClusterFirst
5094+ restartPolicy: Always
5095+ schedulerName: default-scheduler
5096+ securityContext: {}
5097+ serviceAccount: nginx-ingress
5098+ serviceAccountName: nginx-ingress
5099+ terminationGracePeriodSeconds: 30
5100+ status:
5101+ observedGeneration: 3
5102+ replicas: 0
5103+- apiVersion: apps/v1
5104+ kind: ReplicaSet
5105+ metadata:
5106+ annotations:
5107+ deployment.kubernetes.io/desired-replicas: "1"
5108 deployment.kubernetes.io/max-replicas: "1"
5109 deployment.kubernetes.io/revision: "1"
5110- creationTimestamp: "2019-11-06T14:15:35Z"
5111+ creationTimestamp: "2019-10-23T09:58:21Z"
5112 generation: 1
5113 labels:
5114 k8s-app: calico-kube-controllers
5115@@ -6345,10 +7325,10 @@
5116 controller: true
5117 kind: Deployment
5118 name: calico-kube-controllers
5119- uid: b4a16ed2-5d17-404e-939d-27c0c1c3be63
5120- resourceVersion: "2555296"
5121+ uid: a636b7f2-62af-4bb0-8fc9-b211d38512a9
5122+ resourceVersion: "5045349"
5123 selfLink: /apis/apps/v1/namespaces/kube-system/replicasets/calico-kube-controllers-59f54d6bbc
5124- uid: 0afe7af0-f000-4c5e-a333-25e7991c6801
5125+ uid: 4173b03e-bd7e-4c9a-b83c-87dbafffb109
5126 spec:
5127 replicas: 1
5128 selector:
5129@@ -6415,7 +7395,7 @@
5130 deployment.kubernetes.io/desired-replicas: "2"
5131 deployment.kubernetes.io/max-replicas: "3"
5132 deployment.kubernetes.io/revision: "1"
5133- creationTimestamp: "2019-11-06T14:14:33Z"
5134+ creationTimestamp: "2019-10-23T09:55:34Z"
5135 generation: 1
5136 labels:
5137 k8s-app: kube-dns
5138@@ -6428,10 +7408,10 @@
5139 controller: true
5140 kind: Deployment
5141 name: coredns
5142- uid: 2c551d5e-387e-4cce-8ba5-1df6e4a5918f
5143- resourceVersion: "2585105"
5144+ uid: d1cd11f8-2858-457c-b97b-a8ae3d727a46
5145+ resourceVersion: "5052506"
5146 selfLink: /apis/apps/v1/namespaces/kube-system/replicasets/coredns-5c98db65d4
5147- uid: 3a55b4be-f788-4173-a21f-b6b2f0e7bcda
5148+ uid: 2bf774e2-a509-4e2f-bbfb-022ebdd290f5
5149 spec:
5150 replicas: 2
5151 selector:
5152@@ -6534,10 +7514,96 @@
5153 kind: ReplicaSet
5154 metadata:
5155 annotations:
5156+ deployment.kubernetes.io/desired-replicas: "1"
5157+ deployment.kubernetes.io/max-replicas: "2"
5158+ deployment.kubernetes.io/revision: "1"
5159+ creationTimestamp: "2019-11-06T16:49:37Z"
5160+ generation: 1
5161+ labels:
5162+ app: maintain-kubeusers
5163+ pod-template-hash: 7b6bb8f79d
5164+ name: maintain-kubeusers-7b6bb8f79d
5165+ namespace: maintain-kubeusers
5166+ ownerReferences:
5167+ - apiVersion: apps/v1
5168+ blockOwnerDeletion: true
5169+ controller: true
5170+ kind: Deployment
5171+ name: maintain-kubeusers
5172+ uid: c11182a1-4011-42d5-a22e-630925857a8a
5173+ resourceVersion: "4162270"
5174+ selfLink: /apis/apps/v1/namespaces/maintain-kubeusers/replicasets/maintain-kubeusers-7b6bb8f79d
5175+ uid: 98b0991e-6af8-4086-be56-dda95e4f008d
5176+ spec:
5177+ replicas: 1
5178+ selector:
5179+ matchLabels:
5180+ app: maintain-kubeusers
5181+ pod-template-hash: 7b6bb8f79d
5182+ template:
5183+ metadata:
5184+ creationTimestamp: null
5185+ labels:
5186+ app: maintain-kubeusers
5187+ pod-template-hash: 7b6bb8f79d
5188+ spec:
5189+ containers:
5190+ - args:
5191+ - /app/maintain_kubeusers.py
5192+ - --project=toolsbeta
5193+ command:
5194+ - /app/venv/bin/python
5195+ image: docker-registry.tools.wmflabs.org/maintain-kubeusers:beta
5196+ imagePullPolicy: Always
5197+ livenessProbe:
5198+ exec:
5199+ command:
5200+ - find
5201+ - /tmp/run.check
5202+ - -mmin
5203+ - "+5"
5204+ - -exec
5205+ - rm
5206+ - /tmp/run.check
5207+ - ;
5208+ failureThreshold: 3
5209+ initialDelaySeconds: 5
5210+ periodSeconds: 5
5211+ successThreshold: 1
5212+ timeoutSeconds: 1
5213+ name: maintain-kubeusers
5214+ resources: {}
5215+ terminationMessagePath: /dev/termination-log
5216+ terminationMessagePolicy: File
5217+ volumeMounts:
5218+ - mountPath: /data/project
5219+ name: my-host-nfs
5220+ dnsPolicy: ClusterFirst
5221+ restartPolicy: Always
5222+ schedulerName: default-scheduler
5223+ securityContext: {}
5224+ serviceAccount: user-maintainer
5225+ serviceAccountName: user-maintainer
5226+ terminationGracePeriodSeconds: 30
5227+ volumes:
5228+ - hostPath:
5229+ path: /data/project
5230+ type: Directory
5231+ name: my-host-nfs
5232+ status:
5233+ availableReplicas: 1
5234+ fullyLabeledReplicas: 1
5235+ observedGeneration: 1
5236+ readyReplicas: 1
5237+ replicas: 1
5238+- apiVersion: apps/v1
5239+ kind: ReplicaSet
5240+ metadata:
5241+ annotations:
5242 deployment.kubernetes.io/desired-replicas: "2"
5243 deployment.kubernetes.io/max-replicas: "3"
5244 deployment.kubernetes.io/revision: "1"
5245- creationTimestamp: "2019-11-07T13:26:04Z"
5246+ creationTimestamp: "2019-10-25T23:39:02Z"
5247 generation: 1
5248 labels:
5249 name: registry-admission
5250@@ -6550,10 +7616,10 @@
5251 controller: true
5252 kind: Deployment
5253 name: registry-admission
5254- uid: 20f91a23-77d6-46ef-88e8-25e7df8be698
5255- resourceVersion: "2555330"
5256+ uid: ede069ee-db94-434a-8a31-d38164a1b791
5257+ resourceVersion: "429043"
5258 selfLink: /apis/apps/v1/namespaces/registry-admission/replicasets/registry-admission-6f5f6589c5
5259- uid: 0f1a22c2-fb9f-498d-95f5-22942abe8b48
5260+ uid: 04ac00bb-ca33-4163-bc16-310287f38984
5261 spec:
5262 replicas: 2
5263 selector:
5264@@ -6603,6 +7669,81 @@
5265 observedGeneration: 1
5266 readyReplicas: 2
5267 replicas: 2
5268+- apiVersion: apps/v1
5269+ kind: ReplicaSet
5270+ metadata:
5271+ annotations:
5272+ deployment.kubernetes.io/desired-replicas: "1"
5273+ deployment.kubernetes.io/max-replicas: "2"
5274+ deployment.kubernetes.io/revision: "1"
5275+ creationTimestamp: "2019-11-11T06:44:13Z"
5276+ generation: 1
5277+ labels:
5278+ name: fourohfour
5279+ pod-template-hash: 66bf569f4f
5280+ toolforge: tool
5281+ tools.wmflabs.org/webservice: "true"
5282+ tools.wmflabs.org/webservice-version: "1"
5283+ name: fourohfour-66bf569f4f
5284+ namespace: tool-fourohfour
5285+ ownerReferences:
5286+ - apiVersion: apps/v1
5287+ blockOwnerDeletion: true
5288+ controller: true
5289+ kind: Deployment
5290+ name: fourohfour
5291+ uid: eb73e8ea-dca5-4201-8fa9-159eb4156230
5292+ resourceVersion: "3260771"
5293+ selfLink: /apis/apps/v1/namespaces/tool-fourohfour/replicasets/fourohfour-66bf569f4f
5294+ uid: c36f4507-8e47-4f49-a5a5-112c36af769a
5295+ spec:
5296+ replicas: 1
5297+ selector:
5298+ matchLabels:
5299+ name: fourohfour
5300+ pod-template-hash: 66bf569f4f
5301+ toolforge: tool
5302+ tools.wmflabs.org/webservice: "true"
5303+ tools.wmflabs.org/webservice-version: "1"
5304+ template:
5305+ metadata:
5306+ creationTimestamp: null
5307+ labels:
5308+ name: fourohfour
5309+ pod-template-hash: 66bf569f4f
5310+ toolforge: tool
5311+ tools.wmflabs.org/webservice: "true"
5312+ tools.wmflabs.org/webservice-version: "1"
5313+ spec:
5314+ containers:
5315+ - command:
5316+ - /usr/bin/webservice-runner
5317+ - --type
5318+ - uwsgi-python
5319+ - --port
5320+ - "8000"
5321+ image: docker-registry.tools.wmflabs.org/toolforge-python35-sssd-web:latest
5322+ imagePullPolicy: Always
5323+ name: webservice
5324+ ports:
5325+ - containerPort: 8000
5326+ name: http
5327+ protocol: TCP
5328+ resources: {}
5329+ terminationMessagePath: /dev/termination-log
5330+ terminationMessagePolicy: File
5331+ workingDir: /data/project/fourohfour/
5332+ dnsPolicy: ClusterFirst
5333+ restartPolicy: Always
5334+ schedulerName: default-scheduler
5335+ securityContext: {}
5336+ terminationGracePeriodSeconds: 30
5337+ status:
5338+ availableReplicas: 1
5339+ fullyLabeledReplicas: 1
5340+ observedGeneration: 1
5341+ readyReplicas: 1
5342+ replicas: 1
5343 kind: List
5344 metadata:
5345 resourceVersion: ""

I don't see anything obvious that may be causing our issue.

Things that we know are legitimately different in the diff:

  • IP addresses, timestamps, UIDs, etc.
  • server names (etcd, naming schemes, etc.)
  • the test pod in tools
  • the maintain-kubeusers and fourohfour deployments in toolsbeta
  • some ingress changes, to integrate fourohfour in toolsbeta

At this point, I think I will just rebuild everything (in both toolsbeta and tools) and see what happens.

I think I found the issue.

TL;DR: we need an additional security group configuration, because neutron blocks the IPIP tunnels if we just use the default security group in the tools project. This is different in toolsbeta for whatever reason.
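(For context: Calico tunnels pod-to-pod traffic between nodes using IPIP, which is IP protocol 4 rather than TCP/UDP/ICMP, so the usual per-port security group rules never match it. A quick way to confirm the cluster is actually running in IPIP mode — a sketch, assuming calicoctl is installed and configured on a control node:)

# sketch: the IPIPMODE column shows whether the pool tunnels traffic
calicoctl get ippool -o wide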

For my future self reference, this is what I did:

  • In toolsbeta, I was able to trace a simple ICMP packet from the host (control-1) to the pod (running in worker-1). This ICMP packet is tunneled using the IPIP protocol.
  • In tools, I saw the ICMP packet leaving the control-1 VM, but it never reached the worker-1 node. Where was the packet being filtered? I checked the most obvious/usual suspects (our own k8s config, the iptables rulesets generated by calico/docker/kube-proxy, neutron security groups; see previous comments).
  • At first sight I didn't find anything interesting, so I tried to pinpoint exactly where the packet was being dropped. This is where things got interesting.
  • Try to see the packet flowing through the virtio interface of the control-1 VM:
aborrero@cloudvirt1017:~ $ sudo virsh dumpxml i-0000e7f1 | egrep nova:name\|tap
      <nova:name>tools-k8s-control-1</nova:name>
      <target dev='tap37b75244-b5'/>
aborrero@cloudvirt1017:~ $ sudo tcpdump -i tap37b75244-b5 ip proto 4
[..]
12:31:28.845005 IP 172.16.0.104 > 172.16.0.78: IP 192.168.48.128 > 192.168.50.9: ICMP echo request, id 31532, seq 1, length 64 (ipip-proto-4)
  • NOTE: figuring out the right tcpdump filter was really challenging. Look how wonderful that packet is. You can see the internal k8s addresses!
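  • (For the record, IPIP is IP protocol number 4, so either of the filters below matches the tunnel traffic; a sketch, reusing the interface name from the capture above:)
# protocol 4 = IPIP; tcpdump accepts the numeric protocol directly
sudo tcpdump -ni tap37b75244-b5 ip proto 4
# equivalent, matching the protocol byte (offset 9) of the IPv4 header
sudo tcpdump -ni tap37b75244-b5 'ip[9] = 4'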
  • It turns out the destination VM is running on the same hypervisor, so I could see the packet in the neutron bridge:
aborrero@cloudvirt1017:~ 31s $ sudo brctl show
bridge name	bridge id		STP enabled	interfaces
brq7425e328-56		8000.f4e9d4bab742	no		enp4s0f1.1105
[..]
aborrero@cloudvirt1017:~ $ sudo tcpdump -i brq7425e328-56 ip proto 4
[..]
12:32:01.922910 IP 172.16.0.103 > 172.16.0.78: IP 192.168.34.128.34720 > 192.168.50.9.http-alt: Flags [S], seq 580761307, win 29200, options [mss 1460,sackOK,TS val 3875236126 ecr 0,nop,wscale 9], length 0 (ipip-proto-4)
12:32:13.514430 IP 172.16.0.103 > 172.16.0.78: IP 192.168.34.128.37378 > 192.168.50.9.http-alt: Flags [S], seq 3822388130, win 29200, options [mss 1460,sackOK,TS val 225313027 ecr 0,nop,wscale 9], length 0 (ipip-proto-4)
12:32:16.147737 IP 172.16.0.104 > 172.16.0.78: IP 192.168.48.128 > 192.168.50.9: ICMP echo request, id 32592, seq 1, length 64 (ipip-proto-4)
  • NOTE how packets other than my ICMP test are being filtered too. The first two are SYNs to :8080 (http-alt), trying to reach the nginx-ingress pod, BTW. Something is really wrong.
  • Packets don't enter the destination virtio interface:
aborrero@cloudvirt1017:~ $ sudo virsh dumpxml i-0000e888 | egrep nova:name\|tap
      <nova:name>tools-k8s-worker-1</nova:name>
      <target dev='tap3f8a199d-19'/>
aborrero@cloudvirt1017:~ $ sudo tcpdump -i tap3f8a199d-19 ip proto 4
[..no packets..]
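  • (One more place worth checking: the neutron linuxbridge firewall driver renders security groups as iptables rules tied to the tap device, so grepping for the interface on the hypervisor shows which rules apply to that VM. A sketch:)
# sketch: list the iptables rules neutron attached to the worker's tap device
sudo iptables-save | grep tap3f8a199d-19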
  • I don't know how, but I remembered that neutron security groups can't easily specify a protocol like this (IPIP is neither TCP, UDP nor ICMP). Even though I had already checked the security groups at T238654#5682004, I decided to create a new security group specifically for the new k8s cluster:
root@cloudcontrol1004:~# neutron security-group-rule-create --tenant-id tools --direction ingress --remote-group-id 8354bcbb-c856-4991-acfe-781c0cd1a230 --description "new-k8s-test-inbound" 8354bcbb-c856-4991-acfe-781c0cd1a230
Created a new security_group_rule:
+-------------------+--------------------------------------+
| Field             | Value                                |
+-------------------+--------------------------------------+
| created_at        | 2019-11-22T12:47:36Z                 |
| description       | new-k8s-test-inbound                 |
| direction         | ingress                              |
| ethertype         | IPv4                                 |
| id                | d865dda5-1b8a-4a9c-8fe5-6c19900757ac |
| port_range_max    |                                      |
| port_range_min    |                                      |
| project_id        | tools                                |
| protocol          |                                      |
| remote_group_id   | 8354bcbb-c856-4991-acfe-781c0cd1a230 |
| remote_ip_prefix  |                                      |
| revision_number   | 1                                    |
| security_group_id | 8354bcbb-c856-4991-acfe-781c0cd1a230 |
| tenant_id         | tools                                |
| updated_at        | 2019-11-22T12:47:36Z                 |
+-------------------+--------------------------------------+
root@cloudcontrol1004:~# neutron security-group-rule-create --tenant-id tools --direction egress --remote-group-id 8354bcbb-c856-4991-acfe-781c0cd1a230 --description "new-k8s-test-outbound" 8354bcbb-c856-4991-acfe-781c0cd1a230
Created a new security_group_rule:
+-------------------+--------------------------------------+
| Field             | Value                                |
+-------------------+--------------------------------------+
| created_at        | 2019-11-22T12:47:59Z                 |
| description       | new-k8s-test-outbound                |
| direction         | egress                               |
| ethertype         | IPv4                                 |
| id                | 644becb8-36a2-4f3c-9d76-1ebac954869b |
| port_range_max    |                                      |
| port_range_min    |                                      |
| project_id        | tools                                |
| protocol          |                                      |
| remote_group_id   | 8354bcbb-c856-4991-acfe-781c0cd1a230 |
| remote_ip_prefix  |                                      |
| revision_number   | 1                                    |
| security_group_id | 8354bcbb-c856-4991-acfe-781c0cd1a230 |
| tenant_id         | tools                                |
| updated_at        | 2019-11-22T12:47:59Z                 |
+-------------------+--------------------------------------+
root@cloudcontrol1004:~# neutron security-group-show new-k8s-test
+----------------------+--------------------------------------------------------------------+
| Field                | Value                                                              |
+----------------------+--------------------------------------------------------------------+
| created_at           | 2019-11-22T12:43:23Z                                               |
| description          |                                                                    |
| id                   | 8354bcbb-c856-4991-acfe-781c0cd1a230                               |
| name                 | new-k8s-test                                                       |
| project_id           | tools                                                              |
| revision_number      | 5                                                                  |
| security_group_rules | {                                                                  |
|                      |      "remote_group_id": "8354bcbb-c856-4991-acfe-781c0cd1a230",    |
|                      |      "direction": "egress",                                        |
|                      |      "protocol": null,                                             |
|                      |      "description": "new-k8s-test-outbound",                       |
|                      |      "ethertype": "IPv4",                                          |
|                      |      "remote_ip_prefix": null,                                     |
|                      |      "port_range_max": null,                                       |
|                      |      "updated_at": "2019-11-22T12:47:59Z",                         |
|                      |      "security_group_id": "8354bcbb-c856-4991-acfe-781c0cd1a230",  |
|                      |      "port_range_min": null,                                       |
|                      |      "revision_number": 1,                                         |
|                      |      "tenant_id": "tools",                                         |
|                      |      "created_at": "2019-11-22T12:47:59Z",                         |
|                      |      "project_id": "tools",                                        |
|                      |      "id": "644becb8-36a2-4f3c-9d76-1ebac954869b"                  |
|                      | }                                                                  |
|                      | {                                                                  |
|                      |      "remote_group_id": "8354bcbb-c856-4991-acfe-781c0cd1a230",    |
|                      |      "direction": "ingress",                                       |
|                      |      "protocol": null,                                             |
|                      |      "description": "new-k8s-test-inbound",                        |
|                      |      "ethertype": "IPv4",                                          |
|                      |      "remote_ip_prefix": null,                                     |
|                      |      "port_range_max": null,                                       |
|                      |      "updated_at": "2019-11-22T12:47:36Z",                         |
|                      |      "security_group_id": "8354bcbb-c856-4991-acfe-781c0cd1a230",  |
|                      |      "port_range_min": null,                                       |
|                      |      "revision_number": 1,                                         |
|                      |      "tenant_id": "tools",                                         |
|                      |      "created_at": "2019-11-22T12:47:36Z",                         |
|                      |      "project_id": "tools",                                        |
|                      |      "id": "d865dda5-1b8a-4a9c-8fe5-6c19900757ac"                  |
|                      | }                                                                  |
| tenant_id            | tools                                                              |
| updated_at           | 2019-11-22T12:47:59Z                                               |
+----------------------+--------------------------------------------------------------------+
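  • (For future reference, a rough equivalent of the two rules above with the unified openstack client; a sketch only — the legacy neutron client is what was actually run:)
# hypothetical equivalent: allow all traffic (any protocol, including IPIP)
# between members of the group, in both directions
openstack security group rule create --ingress --protocol any \
    --remote-group new-k8s-test --description "new-k8s-test-inbound" new-k8s-test
openstack security group rule create --egress --protocol any \
    --remote-group new-k8s-test --description "new-k8s-test-outbound" new-k8s-test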
  • Applied it to the affected VMs in the tools project and magic! It works:
aborrero@tools-k8s-control-1:~$ ping 192.168.50.9 -c1
PING 192.168.50.9 (192.168.50.9) 56(84) bytes of data.
64 bytes from 192.168.50.9: icmp_seq=1 ttl=63 time=0.670 ms

--- 192.168.50.9 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.670/0.670/0.670/0.000 ms
aborrero@cloudvirt1017:~ 1 $ sudo tcpdump -i tap3f8a199d-19 ip proto 4
[..]
13:18:26.726234 IP 172.16.0.104 > 172.16.0.78: IP 192.168.48.128 > 192.168.50.9: ICMP echo request, id 29506, seq 1, length 64 (ipip-proto-4)
13:18:26.726518 IP 172.16.0.78 > 172.16.0.104: IP 192.168.50.9 > 192.168.48.128: ICMP echo reply, id 29506, seq 1, length 64 (ipip-proto-4)
  • all of our previous tests work now:
aborrero@tools-k8s-control-1:~$ sudo -i kubectl exec -it test-shell -- /bin/ash
/app # ping www.google.es -c1
PING www.google.es (172.217.164.131): 56 data bytes
64 bytes from 172.217.164.131: seq=0 ttl=57 time=0.517 ms

--- www.google.es ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max = 0.517/0.517/0.517 ms

/app # ping 192.168.50.9 -c1
PING 192.168.50.9 (192.168.50.9): 56 data bytes
64 bytes from 192.168.50.9: seq=0 ttl=62 time=0.364 ms

--- 192.168.50.9 ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max = 0.364/0.364/0.364 ms
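  • (And since the original symptom was DNS, cluster-internal resolution can be re-checked from the same test pod; a sketch using busybox's nslookup:)
# sketch: this should resolve via coredns at 10.96.0.10
kubectl exec -it test-shell -- nslookup kubernetes.default.svc.cluster.local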

Will create proper config for this and write the docs for https://wikitech.wikimedia.org/wiki/Portal:Toolforge/Admin/Deploying_k8s
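(Attaching the group to each VM can also be done per-instance with the unified CLI; a sketch, with an assumed server name:)

# hypothetical: attach the new security group to one of the k8s VMs
openstack server add security group tools-k8s-worker-1 new-k8s-test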

Mentioned in SAL (#wikimedia-cloud) [2019-11-22T13:32:46Z] <arturo> created security group tools-new-k8s-full-connectivity and add new k8s VMs to it (T238654)

Closing task again. Please reopen if required.