#1842002 bug 2 years ago KubePodCrashLooping kube-contoller-manager cluster-policy-controller: 6443: connect: connection refused RELEASE_PENDING
$ curl -s https://storage.googleapis.com/origin-ci-test/logs/release-openshift-origin-installer-e2e-gcp-4.5/2428/artifacts/e2e-gcp/events.json | jq -r '.items[] | select(.metadata.namespace == "openshift-kube-apiserver") | .firstTimestamp + " " + .lastTimestamp + " " + .message' | sort
...
2020-05-30T01:10:53Z 2020-05-30T01:10:53Z All pending requests processed
2020-05-30T01:10:53Z 2020-05-30T01:10:53Z Server has stopped listening
2020-05-30T01:10:53Z 2020-05-30T01:10:53Z The minimal shutdown duration of 1m10s finished
...
2020-05-30T01:11:58Z 2020-05-30T01:11:58Z Created container kube-apiserver-cert-regeneration-controller
2020-05-30T01:11:58Z 2020-05-30T01:11:58Z Created container kube-apiserver-cert-syncer
2020-05-30T01:11:58Z 2020-05-30T01:11:58Z Started container kube-apiserver
...
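A similar query can narrow the same events.json to just the graceful-termination events by filtering on the event reason rather than the message; a minimal jq sketch, assuming the reason field is populated on that artifact:
$ curl -s https://storage.googleapis.com/origin-ci-test/logs/release-openshift-origin-installer-e2e-gcp-4.5/2428/artifacts/e2e-gcp/events.json | jq -r '.items[] | select(.metadata.namespace == "openshift-kube-apiserver" and ((.reason // "") | startswith("Termination"))) | (.firstTimestamp // "") + " " + .reason + " " + .message' | sort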
#1934628 bug 17 months ago API server stopped reporting healthy during upgrade to 4.7.0 ASSIGNED
during that time the API server was restarted by kubelet due to a failed liveness probe
14:18:00	openshift-kube-apiserver	kubelet	kube-apiserver-ip-10-0-159-123.ec2.internal	Killing	Container kube-apiserver failed liveness probe, will be restarted
14:19:17	openshift-kube-apiserver	apiserver	kube-apiserver-ip-10-0-159-123.ec2.internal	TerminationMinimalShutdownDurationFinished	The minimal shutdown duration of 1m10s finished
moving to etcd team to investigate why etcd was unavailable during that time
Comment 15200626 by mfojtik@redhat.com at 2021-06-17T18:29:50Z
The LifecycleStale keyword was removed because the bug got commented on recently.
#1943804 bug 18 months ago API server on AWS takes disruption between 70s and 110s after pod begins termination via external LB RELEASE_PENDING
    "name": "kube-apiserver-ip-10-0-131-183.ec2.internal",
    "namespace": "openshift-kube-apiserver"
  },
  "kind": "Event",
  "lastTimestamp": null,
  "message": "The minimal shutdown duration of 1m10s finished",
  "metadata": {
    "creationTimestamp": "2021-03-29T12:18:04Z",
    "name": "kube-apiserver-ip-10-0-131-183.ec2.internal.1670cf61b0f72d2d",
    "namespace": "openshift-kube-apiserver",
    "resourceVersion": "89139",
#1921157 bug 23 months ago [sig-api-machinery] Kubernetes APIs remain available for new connections ASSIGNED
T2: At 06:45:58: systemd-shutdown was sending SIGTERM to remaining processes...
T3: At 06:45:58: kube-apiserver-ci-op-z52cbzhi-6d7cd-pz2jw-master-0: Received signal to terminate, becoming unready, but keeping serving (TerminationStart event)
T4: At 06:47:08 kube-apiserver-ci-op-z52cbzhi-6d7cd-pz2jw-master-0: The minimal shutdown duration of 1m10s finished (TerminationMinimalShutdownDurationFinished event)
T5: At 06:47:08 kube-apiserver-ci-op-z52cbzhi-6d7cd-pz2jw-master-0: Server has stopped listening (TerminationStoppedServing event)
T5 is the last event reported from that API server. At T5 the server might wait up to 60s for all requests to complete, and then it fires the TerminationGracefulTerminationFinished event.
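The same T2..T5 sequence can be reconstructed for any apiserver pod straight from a CI build log, since each phase is emitted with a distinct reason token; a rough sketch, assuming build-log.txt has been downloaded locally:
$ grep -E 'reason/(TerminationStart|TerminationMinimalShutdownDurationFinished|TerminationStoppedServing|TerminationGracefulTerminationFinished)' build-log.txt | grep 'pod/kube-apiserver-' | sort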
#1932097 bug 18 months ago Apiserver liveness probe is marking it as unhealthy during normal shutdown RELEASE_PENDING
Feb 23 20:18:04.212 - 1s    E kube-apiserver-new-connection kube-apiserver-new-connection is not responding to GET requests
Feb 23 20:18:05.318 I kube-apiserver-new-connection kube-apiserver-new-connection started responding to GET requests
Deeper detail from the node log shows that the error occurs right as one of the instances finishes its connections.
Feb 23 20:18:02.505 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-203-7.us-east-2.compute.internal node/ip-10-0-203-7 reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 1m10s finished
Feb 23 20:18:02.509 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-203-7.us-east-2.compute.internal node/ip-10-0-203-7 reason/TerminationStoppedServing Server has stopped listening
Feb 23 20:18:03.148 I ns/openshift-console-operator deployment/console-operator reason/OperatorStatusChanged Status for clusteroperator/console changed: Degraded message changed from "CustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)\nSyncLoopRefreshDegraded: the server is currently unable to handle the request (get routes.route.openshift.io console)" to "SyncLoopRefreshDegraded: the server is currently unable to handle the request (get routes.route.openshift.io console)" (2 times)
Feb 23 20:18:03.880 E kube-apiserver-reused-connection kube-apiserver-reused-connection started failing: Get "https://api.ci-op-ivyvzgrr-0b477.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/default": dial tcp 3.21.250.132:6443: connect: connection refused
This looks like the load balancer didn't remove the kube-apiserver instance and kept sending traffic, and the connection didn't shut down cleanly - did something regress in how the apiserver handles connections during termination?
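The kube-apiserver-new-connection check above is essentially polling the external API endpoint for new connections; a hand-rolled approximation during a rollout would be a loop like the one below (a sketch only - the URL is the LB address from the log above, /healthz is used as a cheap probe instead of the authenticated GET the monitor performs, and connection refused shows up as 000):
$ while sleep 1; do printf '%s ' "$(date -u +%T)"; curl -sk -o /dev/null -w '%{http_code}\n' https://api.ci-op-ivyvzgrr-0b477.origin-ci-int-aws.dev.rhcloud.com:6443/healthz; done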
#1995804 bug 15 months ago Rewrite carry "UPSTREAM: <carry>: create termination events" to lifecycleEvents RELEASE_PENDING
Use the new lifecycle event names for the events that we generate when an apiserver is gracefully terminating.
Comment 15454963 by kewang@redhat.com at 2021-09-03T09:36:37Z
$ w3m -dump -cols 200 'https://search.ci.openshift.org/?search=The+minimal+shutdown+duration&maxAge=168h&context=5&type=build-log&name=4%5C.9&excludeName=&maxMatches=5&maxBytes=20971520&groupBy=job' | grep -E 'kube-system node\/apiserver|openshift-kube-apiserver|openshift-apiserver' > test.log
$ grep 'The minimal shutdown duration of' test.log | head -2
Sep 03 05:22:37.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-163-71.us-west-1.compute.internal node/ip-10-0-163-71 reason/AfterShutdownDelayDuration The minimal shutdown duration of 3m30s finished
Sep 03 05:22:37.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-163-71.us-west-1.compute.internal node/ip-10-0-163-71 reason/AfterShutdownDelayDuration The minimal shutdown duration of 3m30s finished
$ grep 'Received signal to terminate' test.log | head -2
Sep 03 08:49:11.000 I ns/default namespace/kube-system node/apiserver-75cf4778cb-9zk42 reason/TerminationStart Received signal to terminate, becoming unready, but keeping serving
Sep 03 08:53:40.000 I ns/default namespace/kube-system node/apiserver-75cf4778cb-c8429 reason/TerminationStart Received signal to terminate, becoming unready, but keeping serving
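To get a rough distribution of how often each termination phase appears in that sample, the reason tokens in test.log can be counted directly (a sketch, reusing the file produced by the w3m command above):
$ grep -oE 'reason/[A-Za-z]+' test.log | sort | uniq -c | sort -rn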
#1955333 bug 11 months ago "Kubernetes APIs remain available for new connections" and similar failing on 4.8 Azure updates NEW
  2021-05-01T03:59:42Z 1 kube-apiserver-ip-10-0-189-59.ec2.internal Killing: Stopping container kube-apiserver-check-endpoints
  2021-05-01T03:59:42Z 1 kube-apiserver-ip-10-0-189-59.ec2.internal Killing: Stopping container kube-apiserver-insecure-readyz
  2021-05-01T03:59:43Z null kube-apiserver-ip-10-0-189-59.ec2.internal TerminationPreShutdownHooksFinished: All pre-shutdown hooks have been finished
  2021-05-01T03:59:43Z null kube-apiserver-ip-10-0-189-59.ec2.internal TerminationStart: Received signal to terminate, becoming unready, but keeping serving
  2021-05-01T03:59:49Z 1 cert-regeneration-controller-lock LeaderElection: ip-10-0-239-74_02f2b687-97f4-44c4-9516-e3fb364deb85 became leader
  2021-05-01T04:00:53Z null kube-apiserver-ip-10-0-189-59.ec2.internal TerminationMinimalShutdownDurationFinished: The minimal shutdown duration of 1m10s finished
  2021-05-01T04:00:53Z null kube-apiserver-ip-10-0-189-59.ec2.internal TerminationStoppedServing: Server has stopped listening
  2021-05-01T04:01:53Z null kube-apiserver-ip-10-0-189-59.ec2.internal TerminationGracefulTerminationFinished: All pending requests processed
  2021-05-01T04:01:55Z 1 kube-apiserver-ip-10-0-189-59.ec2.internal Pulling: Pulling image "registry.ci.openshift.org/ocp/4.8-2021-04-30-212732@sha256:e4c7be2f0e8b1e9ef1ad9161061449ec1bdc6953a58f6d456971ee945a8d3197"
  2021-05-01T04:02:05Z 1 kube-apiserver-ip-10-0-189-59.ec2.internal Created: Created container setup
  2021-05-01T04:02:05Z 1 kube-apiserver-ip-10-0-189-59.ec2.internal Pulled: Container image "registry.ci.openshift.org/ocp/4.8-2021-04-30-212732@sha256:e4c7be2f0e8b1e9ef1ad9161061449ec1bdc6953a58f6d456971ee945a8d3197" already present on machine
That really looks like kube-apiserver rolling out a new version, and for some reason we are not getting the graceful LB handoff we need to avoid connection issues. Unifying the two timelines (a quick timestamp check follows the timeline):
* 03:59:43Z TerminationPreShutdownHooksFinished
* 03:59:43Z TerminationStart: Received signal to terminate, becoming unready, but keeping serving
* 04:00:53Z TerminationMinimalShutdownDurationFinished: The minimal shutdown duration of 1m10s finished
* 04:00:53Z TerminationStoppedServing: Server has stopped listening
* 04:00:58.307Z kube-apiserver-new-connection started failing... connection refused
* 04:00:59.314Z kube-apiserver-new-connection started responding to GET requests
* 04:01:03.307Z kube-apiserver-new-connection started failing... connection refused
* 04:01:04.313Z kube-apiserver-new-connection started responding to GET requests
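The interesting window is between TerminationStoppedServing and the first new-connection failure; simple date arithmetic on the two timestamps above puts it at about 5 seconds (GNU date assumed):
$ echo "$(( $(date -ud '2021-05-01T04:00:58Z' +%s) - $(date -ud '2021-05-01T04:00:53Z' +%s) ))s"
5s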
#1979916 bug 18 months ago kube-apiserver constantly receiving signals to terminate after a fresh install, but still keeps serving ASSIGNED
kube-apiserver-master-0-2
Server has stopped listening
kube-apiserver-master-0-2
The minimal shutdown duration of 1m10s finished
redhat-operators-7p4nb
Stopping container registry-server
Successfully pulled image "registry.redhat.io/redhat/redhat-operator-index:v4.8" in 3.09180991s
periodic-ci-openshift-release-master-nightly-4.9-upgrade-from-stable-4.8-e2e-aws-upgrade (all) - 14 runs, 50% failed, 57% of failures match = 29% impact
#1616029442511998976 junit 11 days ago
Jan 19 13:04:35.266 E ns/openshift-cluster-csi-drivers pod/aws-ebs-csi-driver-controller-7d64686c4b-rfb5t node/ip-10-0-154-216.us-west-2.compute.internal container/csi-resizer reason/ContainerExit code/2 cause/Error
Jan 19 13:04:35.266 E ns/openshift-cluster-csi-drivers pod/aws-ebs-csi-driver-controller-7d64686c4b-rfb5t node/ip-10-0-154-216.us-west-2.compute.internal container/csi-driver reason/ContainerExit code/2 cause/Error
Jan 19 13:04:35.266 E ns/openshift-cluster-csi-drivers pod/aws-ebs-csi-driver-controller-7d64686c4b-rfb5t node/ip-10-0-154-216.us-west-2.compute.internal container/csi-liveness-probe reason/ContainerExit code/2 cause/Error
Jan 19 13:04:36.568 E ns/openshift-cluster-storage-operator pod/csi-snapshot-webhook-987f7bc9c-6mlj6 node/ip-10-0-154-216.us-west-2.compute.internal container/webhook reason/ContainerExit code/2 cause/Error
Jan 19 13:04:37.668 E ns/openshift-controller-manager-operator pod/openshift-controller-manager-operator-6f4687459b-8sgpf node/ip-10-0-154-216.us-west-2.compute.internal container/openshift-controller-manager-operator reason/ContainerExit code/1 cause/Error 7: Watch close - *v1.Build total 8 items received\nI0119 13:03:53.802179       1 reflector.go:535] k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167: Watch close - *v1.Secret total 8 items received\nI0119 13:03:55.054769       1 httplog.go:104] "HTTP" verb="GET" URI="/metrics" latency="8.56748ms" userAgent="Prometheus/2.29.2" audit-ID="50efddd7-34c3-4058-b715-40cc9a42e574" srcIP="10.128.2.20:45218" resp=200\nI0119 13:03:59.396294       1 reflector.go:535] k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167: Watch close - *v1.Role total 9 items received\nI0119 13:04:02.256624       1 reflector.go:535] k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167: Watch close - *v1.Proxy total 9 items received\nI0119 13:04:07.883826       1 httplog.go:104] "HTTP" verb="GET" URI="/metrics" latency="6.271536ms" userAgent="Prometheus/2.29.2" audit-ID="9e60af01-2d30-4cb7-86d9-5b7898671698" srcIP="10.129.2.8:53556" resp=200\nI0119 13:04:11.010669       1 reflector.go:535] k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167: Watch close - *v1.Secret total 9 items received\nI0119 13:04:15.256646       1 reflector.go:535] k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167: Watch close - *v1.Secret total 8 items received\nI0119 13:04:21.381291       1 reflector.go:535] k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167: Watch close - *v1.ConfigMap total 25 items received\nI0119 13:04:25.038197       1 httplog.go:104] "HTTP" verb="GET" URI="/metrics" latency="4.136544ms" userAgent="Prometheus/2.29.2" audit-ID="d8a0852e-dc3e-4ce6-afff-42ec49bf6e20" srcIP="10.128.2.20:45218" resp=200\nI0119 13:04:32.896384       1 cmd.go:97] Received SIGTERM or SIGINT signal, shutting down controller.\nI0119 13:04:32.896752       1 genericapiserver.go:386] [graceful-termination] RunPreShutdownHooks has completed\nI0119 13:04:32.896810       1 operator.go:115] Shutting down OpenShiftControllerManagerOperator\nW0119 13:04:32.896850       1 builder.go:101] graceful termination failed, controllers failed with error: stopped\n
Jan 19 13:04:37.693 E ns/openshift-console-operator pod/console-operator-7bddd84bf-2zjlr node/ip-10-0-154-216.us-west-2.compute.internal container/console-operator reason/ContainerExit code/1 cause/Error 1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-7bddd84bf-2zjlr", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationPreShutdownHooksFinished' All pre-shutdown hooks have been finished\nI0119 13:04:32.010421       1 genericapiserver.go:355] "[graceful-termination] shutdown event" name="ShutdownInitiated"\nI0119 13:04:32.010431       1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-7bddd84bf-2zjlr", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStart' Received signal to terminate, becoming unready, but keeping serving\nI0119 13:04:32.010441       1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-7bddd84bf-2zjlr", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationMinimalShutdownDurationFinished' The minimal shutdown duration of 0s finished\nI0119 13:04:32.010456       1 genericapiserver.go:362] "[graceful-termination] shutdown event" name="AfterShutdownDelayDuration"\nI0119 13:04:32.010474       1 dynamic_serving_content.go:144] "Shutting down controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key"\nI0119 13:04:32.010829       1 base_controller.go:167] Shutting down ResourceSyncController ...\nI0119 13:04:32.010845       1 base_controller.go:167] Shutting down ConsoleDownloadsDeploymentSyncController ...\nI0119 13:04:32.010854       1 base_controller.go:167] Shutting down ManagementStateController ...\nI0119 13:04:32.010863       1 base_controller.go:167] Shutting down RemoveStaleConditionsController ...\nW0119 13:04:32.011070       1 builder.go:101] graceful termination failed, controllers failed with error: stopped\nI0119 13:04:32.011071       1 base_controller.go:167] Shutting down ConsoleOperator ...\n
Jan 19 13:04:37.777 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-operator-649ccd8b7b-wbl5x node/ip-10-0-154-216.us-west-2.compute.internal container/csi-snapshot-controller-operator reason/ContainerExit code/1 cause/Error sExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2023-01-19T13:04:31Z","message":"Progressing: Waiting for Deployment to deploy csi-snapshot-controller pods","reason":"_Deploying","status":"True","type":"Progressing"},{"lastTransitionTime":"2023-01-19T11:19:58Z","message":"All is well","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2023-01-19T11:19:24Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"}]}}\nI0119 13:04:31.258654       1 operator.go:159] Finished syncing operator at 170.95745ms\nI0119 13:04:31.260234       1 operator.go:157] Starting syncing operator at 2023-01-19 13:04:31.260227875 +0000 UTC m=+460.761517012\nI0119 13:04:31.276556       1 event.go:282] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-cluster-storage-operator", Name:"csi-snapshot-controller-operator", UID:"39e3ecb2-1d5c-4dd3-bc3c-6f4a036a5360", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/csi-snapshot-controller changed: Progressing message changed from "CSISnapshotWebhookControllerProgressing: 1 out of 2 pods running\nProgressing: Waiting for Deployment to deploy csi-snapshot-controller pods" to "Progressing: Waiting for Deployment to deploy csi-snapshot-controller pods"\nI0119 13:04:31.526352       1 operator.go:159] Finished syncing operator at 266.115299ms\nI0119 13:04:31.572596       1 cmd.go:97] Received SIGTERM or SIGINT signal, shutting down controller.\nI0119 13:04:31.573557       1 base_controller.go:167] Shutting down StaticResourceController ...\nI0119 13:04:31.573834       1 base_controller.go:167] Shutting down CSISnapshotWebhookController ...\nI0119 13:04:31.573899       1 base_controller.go:167] Shutting down ManagementStateController ...\nI0119 13:04:31.573948       1 base_controller.go:167] Shutting down LoggingSyncer ...\nW0119 13:04:31.574005       1 builder.go:101] graceful termination failed, controllers failed with error: stopped\n
Jan 19 13:04:38.879 E ns/openshift-insights pod/insights-operator-79c7d54757-8wm74 node/ip-10-0-154-216.us-west-2.compute.internal container/insights-operator reason/ContainerExit code/2 cause/Error config/node/ip-10-0-151-175.us-west-2.compute.internal with fingerprint=\nI0119 13:04:34.658513       1 recorder.go:55] Recording config/node/ip-10-0-154-216.us-west-2.compute.internal with fingerprint=\nI0119 13:04:34.658644       1 recorder.go:55] Recording config/node/ip-10-0-206-24.us-west-2.compute.internal with fingerprint=\nI0119 13:04:34.658916       1 recorder.go:55] Recording config/node/ip-10-0-225-77.us-west-2.compute.internal with fingerprint=\nI0119 13:04:34.660460       1 gather.go:118] Gather clusterconfig's function nodes took 168.077826ms to process 6 records\nI0119 13:04:34.660514       1 gather.go:118] Gather clusterconfig's function ceph_cluster took 168.3826ms to process 0 records\nI0119 13:04:34.660546       1 gather.go:118] Gather clusterconfig's function openshift_authentication_logs took 940.100319ms to process 0 records\nI0119 13:04:34.660574       1 tasks_processing.go:72] worker 15 working on container_runtime_configs task.\nI0119 13:04:34.630915       1 sap_vsystem_iptables_logs.go:47] SAP resources weren't found\nI0119 13:04:34.677006       1 tasks_processing.go:72] worker 6 working on sap_config task.\nI0119 13:04:34.677434       1 tasks_processing.go:72] worker 0 working on cost_management_metrics_configs task.\nI0119 13:04:34.679572       1 tasks_processing.go:72] worker 7 working on version task.\nI0119 13:04:34.688913       1 tasks_processing.go:72] worker 4 working on metrics task.\nI0119 13:04:34.698877       1 gather.go:118] Gather clusterconfig's function sap_license_management_logs took 574.032618ms to process 0 records\nI0119 13:04:34.699014       1 tasks_processing.go:72] worker 10 working on install_plans task.\nI0119 13:04:34.699752       1 tasks_processing.go:72] worker 5 working on mutating_webhook_configurations task.\nE0119 13:04:34.714442       1 gather.go:101] gatherer clusterconfig's function openshift_apiserver_operator_logs failed with error: container "openshift-apiserver-operator" in pod "openshift-apiserver-operator-59ff5c7cd7-djhhb" is waiting to start: ContainerCreating\n
Jan 19 13:04:42.247 E ns/openshift-machine-config-operator pod/machine-config-operator-66769c8cc4-thhlf node/ip-10-0-154-216.us-west-2.compute.internal container/machine-config-operator reason/ContainerExit code/2 cause/Error I0119 12:49:48.714499       1 start.go:43] Version: 4.9.0-0.nightly-2023-01-18-114336 (Raw: v4.9.0-202212051626.p0.gb2055c0.assembly.stream-dirty, Hash: b2055c07f694f100de0d45cde8e8ca72b661826d)\nI0119 12:49:48.716677       1 leaderelection.go:248] attempting to acquire leader lease openshift-machine-config-operator/machine-config...\nI0119 12:51:44.368228       1 leaderelection.go:258] successfully acquired lease openshift-machine-config-operator/machine-config\nI0119 12:51:44.812102       1 operator.go:262] Starting MachineConfigOperator\nE0119 12:59:27.960046       1 sync.go:626] Error syncingUpgradeableStatus: "rpc error: code = Unknown desc = OK: HTTP status code 200; transport: missing content-type field"\nE0119 13:01:49.976999       1 sync.go:646] Error syncing Required MachineConfigPools: "pool master has not progressed to latest configuration: controller version mismatch for rendered-master-a6f0df413aa64c45405676dee395deef expected b2055c07f694f100de0d45cde8e8ca72b661826d has ace2072d71e56e1e644e9e078bf73f2b8f2875ae: 1 (ready 1) out of 3 nodes are updating to latest configuration rendered-master-99a8a4bf6db42d3a46ee3e857fb29819, retrying"\nI0119 13:01:49.980989       1 event.go:282] Event(v1.ObjectReference{Kind:"", Namespace:"", Name:"machine-config", UID:"38eae726-009f-45f3-89d1-ebed01251fe8", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'OperatorDegraded: RequiredPoolsFailed' Unable to apply 4.9.0-0.nightly-2023-01-18-114336: timed out waiting for the condition during syncRequiredMachineConfigPools: pool master has not progressed to latest configuration: controller version mismatch for rendered-master-a6f0df413aa64c45405676dee395deef expected b2055c07f694f100de0d45cde8e8ca72b661826d has ace2072d71e56e1e644e9e078bf73f2b8f2875ae: 1 (ready 1) out of 3 nodes are updating to latest configuration rendered-master-99a8a4bf6db42d3a46ee3e857fb29819, retrying\n
Jan 19 13:04:42.347 E ns/openshift-machine-api pod/machine-api-operator-66cf7d4f68-mz99x node/ip-10-0-154-216.us-west-2.compute.internal container/machine-api-operator reason/ContainerExit code/2 cause/Error
Jan 19 13:04:58.817 E ns/openshift-console pod/console-6d7c76c588-6l8bp node/ip-10-0-154-216.us-west-2.compute.internal container/console reason/ContainerExit code/2 cause/Error W0119 12:33:09.861124       1 main.go:206] Flag inactivity-timeout is set to less then 300 seconds and will be ignored!\nI0119 12:33:09.861200       1 main.go:278] cookies are secure!\nI0119 12:33:09.907058       1 main.go:660] Binding to [::]:8443...\nI0119 12:33:09.907078       1 main.go:662] using TLS\n
#1617597120171216896 junit 7 days ago
Jan 23 20:53:33.664 E ns/openshift-machine-config-operator pod/machine-config-server-cxh6p node/ip-10-0-207-95.us-west-2.compute.internal container/machine-config-server reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Jan 23 20:53:33.709 E ns/openshift-sdn pod/sdn-controller-5w6tr node/ip-10-0-207-95.us-west-2.compute.internal container/sdn-controller reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Jan 23 20:53:33.741 E ns/openshift-image-registry pod/node-ca-p8sp6 node/ip-10-0-207-95.us-west-2.compute.internal container/node-ca reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Jan 23 20:53:33.744 E ns/openshift-cluster-node-tuning-operator pod/tuned-8vh2g node/ip-10-0-207-95.us-west-2.compute.internal container/tuned reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Jan 23 20:53:33.744 E ns/openshift-dns pod/node-resolver-pfcjn node/ip-10-0-207-95.us-west-2.compute.internal container/dns-node-resolver reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Jan 23 20:53:59.956 E ns/openshift-console-operator pod/console-operator-85775bb4dc-b2b4q node/ip-10-0-153-195.us-west-2.compute.internal container/console-operator reason/ContainerExit code/1 cause/Error eShutdownHooks has completed\nI0123 20:53:58.981380       1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-85775bb4dc-b2b4q", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationPreShutdownHooksFinished' All pre-shutdown hooks have been finished\nI0123 20:53:58.981432       1 genericapiserver.go:355] "[graceful-termination] shutdown event" name="ShutdownInitiated"\nI0123 20:53:58.981458       1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-85775bb4dc-b2b4q", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStart' Received signal to terminate, becoming unready, but keeping serving\nI0123 20:53:58.981494       1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-85775bb4dc-b2b4q", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationMinimalShutdownDurationFinished' The minimal shutdown duration of 0s finished\nI0123 20:53:58.981519       1 genericapiserver.go:362] "[graceful-termination] shutdown event" name="AfterShutdownDelayDuration"\nI0123 20:53:58.981548       1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-85775bb4dc-b2b4q", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStoppedServing' Server has stopped listening\nI0123 20:53:58.981577       1 genericapiserver.go:387] "[graceful-termination] shutdown event" name="InFlightRequestsDrained"\nI0123 20:53:58.981650       1 base_controller.go:167] Shutting down ManagementStateController ...\nI0123 20:53:58.981669       1 base_controller.go:167] Shutting down LoggingSyncer ...\nW0123 20:53:58.981862       1 builder.go:101] graceful termination failed, controllers failed with error: stopped\n
Jan 23 20:54:01.972 E ns/openshift-operator-lifecycle-manager pod/packageserver-5d494f5d8b-nlzz2 node/ip-10-0-153-195.us-west-2.compute.internal container/packageserver reason/ContainerExit code/2 cause/Error server-authentication::client-ca-file"\nI0123 20:54:01.069357       1 requestheader_controller.go:183] Shutting down RequestHeaderAuthRequestController\nI0123 20:54:01.069448       1 requestheader_controller.go:183] Shutting down RequestHeaderAuthRequestController\nI0123 20:54:01.069477       1 configmap_cafile_content.go:222] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"\nI0123 20:54:01.069509       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"\nI0123 20:54:01.069568       1 secure_serving.go:301] Stopped listening on [::]:5443\nI0123 20:54:01.069596       1 genericapiserver.go:363] "[graceful-termination] shutdown event" name="HTTPServerStoppedListening"\nI0123 20:54:01.069643       1 dynamic_serving_content.go:144] "Shutting down controller" name="serving-cert::apiserver.local.config/certificates/apiserver.crt::apiserver.local.config/certificates/apiserver.key"\nI0123 20:54:01.077581       1 reflector.go:225] Stopping reflector *v1.ConfigMap (12h0m0s) from k8s.io/apiserver/pkg/authentication/request/headerrequest/requestheader_controller.go:172\npanic: send on closed channel\n\ngoroutine 97 [running]:\ngithub.com/operator-framework/operator-lifecycle-manager/pkg/lib/queueinformer.(*operator).processNextWorkItem(0xc0005304d0, 0x246fd10, 0xc00022a080, 0xc00030c1e0, 0x1fa0357bf448c00)\n	/build/vendor/github.com/operator-framework/operator-lifecycle-manager/pkg/lib/queueinformer/queueinformer_operator.go:297 +0x6fb\ngithub.com/operator-framework/operator-lifecycle-manager/pkg/lib/queueinformer.(*operator).worker(0xc0005304d0, 0x246fd10, 0xc00022a080, 0xc00030c1e0)\n	/build/vendor/github.com/operator-framework/operator-lifecycle-manager/pkg/lib/queueinformer/queueinformer_operator.go:231 +0x49\ncreated by github.com/operator-framework/operator-lifecycle-manager/pkg/lib/queueinformer.(*operator).start\n	/build/vendor/github.com/operator-framework/operator-lifecycle-manager/pkg/lib/queueinformer/queueinformer_operator.go:221 +0x465\n
Jan 23 20:54:03.846 E ns/openshift-sdn pod/sdn-vz492 node/ip-10-0-207-95.us-west-2.compute.internal container/kube-rbac-proxy reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Jan 23 20:54:03.846 E ns/openshift-sdn pod/sdn-vz492 node/ip-10-0-207-95.us-west-2.compute.internal container/sdn reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Jan 23 20:54:03.873 E ns/openshift-monitoring pod/node-exporter-wdq67 node/ip-10-0-207-95.us-west-2.compute.internal container/node-exporter reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Jan 23 20:54:03.873 E ns/openshift-monitoring pod/node-exporter-wdq67 node/ip-10-0-207-95.us-west-2.compute.internal container/kube-rbac-proxy reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
#1617848799626006528 junit 6 days ago
Jan 24 13:33:10.747 E ns/openshift-cluster-csi-drivers pod/aws-ebs-csi-driver-controller-7bdd8ccdcf-qvvj5 node/ip-10-0-197-41.us-west-2.compute.internal container/csi-provisioner reason/ContainerExit code/1 cause/Error Lost connection to CSI driver, exiting
Jan 24 13:33:10.747 E ns/openshift-cluster-csi-drivers pod/aws-ebs-csi-driver-controller-7bdd8ccdcf-qvvj5 node/ip-10-0-197-41.us-west-2.compute.internal container/csi-driver reason/ContainerExit code/2 cause/Error
Jan 24 13:33:10.747 E ns/openshift-cluster-csi-drivers pod/aws-ebs-csi-driver-controller-7bdd8ccdcf-qvvj5 node/ip-10-0-197-41.us-west-2.compute.internal container/csi-liveness-probe reason/ContainerExit code/2 cause/Error
Jan 24 13:33:12.156 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-758f5b59c5-mbff9 node/ip-10-0-197-41.us-west-2.compute.internal container/snapshot-controller reason/ContainerExit code/2 cause/Error
Jan 24 13:33:12.569 E ns/openshift-cluster-storage-operator pod/csi-snapshot-webhook-987f7bc9c-psz6j node/ip-10-0-197-41.us-west-2.compute.internal container/webhook reason/ContainerExit code/2 cause/Error
Jan 24 13:33:12.759 E ns/openshift-console-operator pod/console-operator-5485c9d7c9-xl8l6 node/ip-10-0-197-41.us-west-2.compute.internal container/console-operator reason/ContainerExit code/1 cause/Error 1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-5485c9d7c9-xl8l6", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationMinimalShutdownDurationFinished' The minimal shutdown duration of 0s finished\nI0124 13:33:09.421944       1 genericapiserver.go:362] "[graceful-termination] shutdown event" name="AfterShutdownDelayDuration"\nI0124 13:33:09.422873       1 base_controller.go:167] Shutting down StatusSyncer_console ...\nI0124 13:33:09.422928       1 base_controller.go:145] All StatusSyncer_console post start hooks have been terminated\nI0124 13:33:09.422967       1 base_controller.go:167] Shutting down ConsoleServiceController ...\nI0124 13:33:09.423000       1 base_controller.go:167] Shutting down UnsupportedConfigOverridesController ...\nI0124 13:33:09.423032       1 base_controller.go:167] Shutting down ConsoleOperator ...\nI0124 13:33:09.423064       1 base_controller.go:167] Shutting down ConsoleRouteController ...\nI0124 13:33:09.423092       1 base_controller.go:167] Shutting down ManagementStateController ...\nI0124 13:33:09.423130       1 base_controller.go:167] Shutting down ConsoleServiceController ...\nI0124 13:33:09.423160       1 base_controller.go:167] Shutting down LoggingSyncer ...\nI0124 13:33:09.423206       1 base_controller.go:167] Shutting down DownloadsRouteController ...\nI0124 13:33:09.423251       1 base_controller.go:167] Shutting down HealthCheckController ...\nI0124 13:33:09.423273       1 base_controller.go:167] Shutting down ConsoleDownloadsDeploymentSyncController ...\nI0124 13:33:09.423294       1 base_controller.go:167] Shutting down ResourceSyncController ...\nI0124 13:33:09.423314       1 base_controller.go:167] Shutting down ConsoleCLIDownloadsController ...\nI0124 13:33:09.423335       1 base_controller.go:167] Shutting down RemoveStaleConditionsController ...\nW0124 13:33:09.423566       1 builder.go:101] graceful termination failed, controllers failed with error: stopped\n
Jan 24 13:33:13.546 E ns/openshift-machine-config-operator pod/machine-config-operator-78b976f87b-zvnqc node/ip-10-0-197-41.us-west-2.compute.internal container/machine-config-operator reason/ContainerExit code/2 cause/Error 4.9.0-202212051626.p0.gb2055c0.assembly.stream-dirty, Hash: b2055c07f694f100de0d45cde8e8ca72b661826d)\nI0124 13:12:19.723721       1 leaderelection.go:248] attempting to acquire leader lease openshift-machine-config-operator/machine-config...\nI0124 13:14:15.377956       1 leaderelection.go:258] successfully acquired lease openshift-machine-config-operator/machine-config\nI0124 13:14:15.788764       1 operator.go:262] Starting MachineConfigOperator\nI0124 13:14:15.793374       1 event.go:282] Event(v1.ObjectReference{Kind:"", Namespace:"", Name:"machine-config", UID:"0063a5ca-f80f-4837-9ef4-54e8e25660f8", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorVersionChanged' clusteroperator/machine-config-operator started a version change from [{operator 4.8.57}] to [{operator 4.9.0-0.nightly-2023-01-24-113444}]\nE0124 13:29:00.298199       1 sync.go:646] Error syncing Required MachineConfigPools: "pool master has not progressed to latest configuration: controller version mismatch for rendered-master-bccb0f9884f53f672846e1db36db3b61 expected b2055c07f694f100de0d45cde8e8ca72b661826d has ace2072d71e56e1e644e9e078bf73f2b8f2875ae: 1 (ready 1) out of 3 nodes are updating to latest configuration rendered-master-a220cca27cc8b20f25e5fe016bf6accf, retrying"\nI0124 13:29:00.341802       1 event.go:282] Event(v1.ObjectReference{Kind:"", Namespace:"", Name:"machine-config", UID:"0063a5ca-f80f-4837-9ef4-54e8e25660f8", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'OperatorDegraded: RequiredPoolsFailed' Unable to apply 4.9.0-0.nightly-2023-01-24-113444: timed out waiting for the condition during syncRequiredMachineConfigPools: pool master has not progressed to latest configuration: controller version mismatch for rendered-master-bccb0f9884f53f672846e1db36db3b61 expected b2055c07f694f100de0d45cde8e8ca72b661826d has ace2072d71e56e1e644e9e078bf73f2b8f2875ae: 1 (ready 1) out of 3 nodes are updating to latest configuration rendered-master-a220cca27cc8b20f25e5fe016bf6accf, retrying\n
Jan 24 13:33:13.668 E ns/openshift-machine-config-operator pod/machine-config-controller-64c44dcd64-x6llf node/ip-10-0-197-41.us-west-2.compute.internal container/machine-config-controller reason/ContainerExit code/2 cause/Error  13:32:52.138232       1 node_controller.go:424] Pool master: node ip-10-0-197-41.us-west-2.compute.internal: changed annotation machineconfiguration.openshift.io/desiredConfig = rendered-master-a220cca27cc8b20f25e5fe016bf6accf\nI0124 13:32:52.138616       1 event.go:282] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"", Name:"master", UID:"e61640b8-e7f2-490c-865e-94108e644fc0", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"75075", FieldPath:""}): type: 'Normal' reason: 'AnnotationChange' Node ip-10-0-197-41.us-west-2.compute.internal now has machineconfiguration.openshift.io/desiredConfig=rendered-master-a220cca27cc8b20f25e5fe016bf6accf\nI0124 13:32:53.148360       1 node_controller.go:424] Pool master: node ip-10-0-197-41.us-west-2.compute.internal: changed annotation machineconfiguration.openshift.io/state = Working\nI0124 13:32:53.148495       1 event.go:282] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"", Name:"master", UID:"e61640b8-e7f2-490c-865e-94108e644fc0", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"80540", FieldPath:""}): type: 'Normal' reason: 'AnnotationChange' Node ip-10-0-197-41.us-west-2.compute.internal now has machineconfiguration.openshift.io/state=Working\nI0124 13:32:53.248999       1 node_controller.go:424] Pool master: node ip-10-0-197-41.us-west-2.compute.internal: Reporting unready: node ip-10-0-197-41.us-west-2.compute.internal is reporting Unschedulable\nE0124 13:32:57.202870       1 render_controller.go:460] Error updating MachineConfigPool master: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "master": the object has been modified; please apply your changes to the latest version and try again\nI0124 13:32:57.202893       1 render_controller.go:377] Error syncing machineconfigpool master: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "master": the object has been modified; please apply your changes to the latest version and try again\n
Jan 24 13:33:15.578 E ns/openshift-insights pod/insights-operator-685d799d8c-gf4mn node/ip-10-0-197-41.us-west-2.compute.internal container/insights-operator reason/ContainerExit code/2 cause/Error 302-6f60682e5529" srcIP="10.131.0.20:59054" resp=200\nI0124 13:31:31.046320       1 reflector.go:535] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Watch close - *v1.ConfigMap total 5 items received\nI0124 13:31:31.046871       1 reflector.go:535] k8s.io/apiserver/pkg/authentication/request/headerrequest/requestheader_controller.go:172: Watch close - *v1.ConfigMap total 7 items received\nI0124 13:31:31.069981       1 reflector.go:535] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Watch close - *v1.ConfigMap total 6 items received\nI0124 13:31:45.911524       1 httplog.go:104] "HTTP" verb="GET" URI="/metrics" latency="13.644788ms" userAgent="Prometheus/2.29.2" audit-ID="74a449b1-8dff-469c-9151-37e061c4eb08" srcIP="10.129.2.10:38540" resp=200\nI0124 13:31:52.275563       1 httplog.go:104] "HTTP" verb="GET" URI="/metrics" latency="8.506321ms" userAgent="Prometheus/2.29.2" audit-ID="03b54646-c945-4f85-a302-87710a9d18e5" srcIP="10.131.0.20:59054" resp=200\nI0124 13:32:05.974528       1 status.go:354] The operator is healthy\nI0124 13:32:05.974573       1 status.go:441] No status update necessary, objects are identical\nI0124 13:32:15.903783       1 httplog.go:104] "HTTP" verb="GET" URI="/metrics" latency="5.376432ms" userAgent="Prometheus/2.29.2" audit-ID="31920689-a002-4c21-9890-fa875a68525b" srcIP="10.129.2.10:38540" resp=200\nI0124 13:32:22.271060       1 httplog.go:104] "HTTP" verb="GET" URI="/metrics" latency="4.162819ms" userAgent="Prometheus/2.29.2" audit-ID="93f366b5-34d4-46ca-bdea-0e0f8d656cbc" srcIP="10.131.0.20:59054" resp=200\nI0124 13:32:45.914936       1 httplog.go:104] "HTTP" verb="GET" URI="/metrics" latency="16.804778ms" userAgent="Prometheus/2.29.2" audit-ID="689d79ad-ba75-4734-bb46-37aa0e3cc058" srcIP="10.129.2.10:38540" resp=200\nI0124 13:32:52.276814       1 httplog.go:104] "HTTP" verb="GET" URI="/metrics" latency="10.310184ms" userAgent="Prometheus/2.29.2" audit-ID="101ee608-9b2e-470d-9193-b4d7a2dfa760" srcIP="10.131.0.20:59054" resp=200\n
Jan 24 13:33:28.092 E ns/openshift-multus pod/multus-additional-cni-plugins-jjst5 node/ip-10-0-170-124.us-west-2.compute.internal container/kube-multus-additional-cni-plugins reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Jan 24 13:33:35.130 E ns/openshift-e2e-loki pod/loki-promtail-g5lx6 node/ip-10-0-170-124.us-west-2.compute.internal container/oauth-proxy reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
#1616532076536795136 junit 10 days ago
Jan 20 22:23:00.801 E ns/openshift-cluster-csi-drivers pod/aws-ebs-csi-driver-controller-7865b6c886-vcbkx node/ip-10-0-238-37.us-west-2.compute.internal container/csi-liveness-probe reason/ContainerExit code/2 cause/Error
Jan 20 22:23:00.801 E ns/openshift-cluster-csi-drivers pod/aws-ebs-csi-driver-controller-7865b6c886-vcbkx node/ip-10-0-238-37.us-west-2.compute.internal container/csi-driver reason/ContainerExit code/2 cause/Error
Jan 20 22:23:00.801 E ns/openshift-cluster-csi-drivers pod/aws-ebs-csi-driver-controller-7865b6c886-vcbkx node/ip-10-0-238-37.us-west-2.compute.internal container/csi-snapshotter reason/ContainerExit code/2 cause/Error
Jan 20 22:23:00.801 E ns/openshift-cluster-csi-drivers pod/aws-ebs-csi-driver-controller-7865b6c886-vcbkx node/ip-10-0-238-37.us-west-2.compute.internal container/csi-resizer reason/ContainerExit code/2 cause/Error
Jan 20 22:23:01.809 E ns/openshift-controller-manager-operator pod/openshift-controller-manager-operator-58d787795b-mmfbl node/ip-10-0-238-37.us-west-2.compute.internal container/openshift-controller-manager-operator reason/ContainerExit code/1 cause/Error  reflector.go:225] Stopping reflector *v1.Secret (10m0s) from k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167\nI0120 22:23:00.089198       1 reflector.go:225] Stopping reflector *v1.Secret (10m0s) from k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167\nI0120 22:23:00.089430       1 base_controller.go:167] Shutting down StatusSyncer_openshift-controller-manager ...\nI0120 22:23:00.089470       1 base_controller.go:145] All StatusSyncer_openshift-controller-manager post start hooks have been terminated\nI0120 22:23:00.089499       1 base_controller.go:167] Shutting down ConfigObserver ...\nI0120 22:23:00.089522       1 base_controller.go:167] Shutting down StaticResourceController ...\nI0120 22:23:00.089544       1 base_controller.go:167] Shutting down UserCAObservationController ...\nI0120 22:23:00.089589       1 base_controller.go:167] Shutting down ResourceSyncController ...\nI0120 22:23:00.089655       1 operator.go:115] Shutting down OpenShiftControllerManagerOperator\nI0120 22:23:00.089702       1 base_controller.go:114] Shutting down worker of ResourceSyncController controller ...\nI0120 22:23:00.089716       1 base_controller.go:104] All ResourceSyncController workers have been terminated\nI0120 22:23:00.089725       1 base_controller.go:114] Shutting down worker of UserCAObservationController controller ...\nI0120 22:23:00.089729       1 base_controller.go:104] All UserCAObservationController workers have been terminated\nI0120 22:23:00.089734       1 base_controller.go:114] Shutting down worker of ConfigObserver controller ...\nI0120 22:23:00.089739       1 base_controller.go:104] All ConfigObserver workers have been terminated\nI0120 22:23:00.089709       1 base_controller.go:114] Shutting down worker of StatusSyncer_openshift-controller-manager controller ...\nI0120 22:23:00.089768       1 base_controller.go:104] All StatusSyncer_openshift-controller-manager workers have been terminated\nW0120 22:23:00.089755       1 builder.go:101] graceful termination failed, controllers failed with error: stopped\n
Jan 20 22:23:01.854 E ns/openshift-console-operator pod/console-operator-6d6b66587c-22x8z node/ip-10-0-238-37.us-west-2.compute.internal container/console-operator reason/ContainerExit code/1 cause/Error 1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-6d6b66587c-22x8z", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationMinimalShutdownDurationFinished' The minimal shutdown duration of 0s finished\nI0120 22:22:59.214343       1 genericapiserver.go:362] "[graceful-termination] shutdown event" name="AfterShutdownDelayDuration"\nI0120 22:22:59.213754       1 base_controller.go:167] Shutting down ConsoleCLIDownloadsController ...\nI0120 22:22:59.213763       1 base_controller.go:167] Shutting down DownloadsRouteController ...\nI0120 22:22:59.213771       1 base_controller.go:167] Shutting down ConsoleRouteController ...\nI0120 22:22:59.213781       1 base_controller.go:167] Shutting down ResourceSyncController ...\nI0120 22:22:59.213788       1 base_controller.go:167] Shutting down ConsoleOperator ...\nI0120 22:22:59.213797       1 base_controller.go:167] Shutting down ConsoleServiceController ...\nI0120 22:22:59.213804       1 base_controller.go:167] Shutting down UnsupportedConfigOverridesController ...\nI0120 22:22:59.213811       1 base_controller.go:167] Shutting down HealthCheckController ...\nI0120 22:22:59.213818       1 base_controller.go:167] Shutting down ConsoleDownloadsDeploymentSyncController ...\nI0120 22:22:59.213825       1 base_controller.go:167] Shutting down ManagementStateController ...\nI0120 22:22:59.213831       1 base_controller.go:167] Shutting down LoggingSyncer ...\nI0120 22:22:59.213840       1 base_controller.go:167] Shutting down ConsoleServiceController ...\nI0120 22:22:59.213849       1 base_controller.go:167] Shutting down StatusSyncer_console ...\nI0120 22:22:59.214379       1 base_controller.go:145] All StatusSyncer_console post start hooks have been terminated\nI0120 22:22:59.213856       1 base_controller.go:167] Shutting down RemoveStaleConditionsController ...\nW0120 22:22:59.213951       1 builder.go:101] graceful termination failed, controllers failed with error: stopped\n
Jan 20 22:23:05.889 E ns/openshift-sdn pod/sdn-7ngb9 node/ip-10-0-130-60.us-west-2.compute.internal container/kube-rbac-proxy reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Jan 20 22:23:05.889 E ns/openshift-sdn pod/sdn-7ngb9 node/ip-10-0-130-60.us-west-2.compute.internal container/sdn reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Jan 20 22:23:19.154 E ns/openshift-network-diagnostics pod/network-check-target-954gf node/ip-10-0-130-60.us-west-2.compute.internal container/network-check-target-container reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Jan 20 22:23:21.162 E ns/openshift-controller-manager pod/controller-manager-j62kw node/ip-10-0-130-60.us-west-2.compute.internal container/controller-manager reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Jan 20 22:23:23.334 E ns/openshift-cluster-csi-drivers pod/aws-ebs-csi-driver-node-pggt2 node/ip-10-0-130-60.us-west-2.compute.internal container/csi-node-driver-registrar reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)

Found in 28.57% of runs (57.14% of failures) across 14 total runs and 1 job (50.00% failed)