Using webservice 0.47 from inside a pod works as expected:
$ webservice restart
******************************************************************************
Note that access.log is no longer enabled by default (see https://w.wiki/9go)
******************************************************************************
Restarting webservice...
$
Using webservice 0.51 from tools-sgebastion-08:
$ webservice restart
Traceback (most recent call last):
  File "/usr/local/bin/webservice", line 230, in <module>
    start(job, 'Your job is not running, starting')
  File "/usr/local/bin/webservice", line 95, in start
    job.request_start()
  File "/usr/lib/python2.7/dist-packages/toollabs/webservice/backends/kubernetesbackend.py", line 642, in request_start
    pykube.Deployment(self.api, self._get_deployment()).create()
  File "/usr/lib/python2.7/dist-packages/pykube/objects.py", line 76, in create
    self.api.raise_for_status(r)
  File "/usr/lib/python2.7/dist-packages/pykube/http.py", line 104, in raise_for_status
    raise HTTPError(payload["message"])
pykube.exceptions.HTTPError: deployments.extensions "fourohfour" already exists
The pod does actually seem to be restarted as hoped. However, it looks like the delete of the Deployment that is done in KubernetesBackend.request_stop() is failing. The guard that checks for an existing Deployment in KubernetesBackend.request_start() also seems to be failing, so maybe the root problem is that something changed such that KubernetesBackend._find_obj(pykube.Deployment, self.webservice_label_selector) always fails?
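For illustration, here is a rough sketch (not the actual webservice source) of the kind of label-selector lookup that _find_obj presumably does via pykube; the namespace and label values are assumptions made up for the example. If the labels on the existing Deployment no longer match the selector the tool queries with, the lookup returns nothing, the "already exists" guard is skipped, and the subsequent create() fails exactly as in the traceback above.

  # Hypothetical sketch of a label-selector Deployment lookup with pykube.
  # Namespace and selector values are invented for illustration only.
  import pykube

  api = pykube.HTTPClient(pykube.KubeConfig.from_file("~/.kube/config"))

  selector = {"name": "fourohfour"}  # assumed label set
  matches = pykube.Deployment.objects(api).filter(
      namespace="fourohfour", selector=selector
  )
  deployment = next(iter(matches), None)

  if deployment is None:
      # A stale/mismatched selector would land here even though the
      # Deployment exists, so a later create() raises "already exists".
      print("no existing Deployment found; webservice would try to create one")
  else:
      print("found existing Deployment:", deployment.name)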
Related, but separate: the Docker images do not seem to have been updated to the latest webservice package. That is actually kind of nice in this instance, since it let me make this comparison and see that this is a regression in webservice rather than some other problem with the legacy k8s cluster.