Job:
#1921157 (bug, 23 months ago): [sig-api-machinery] Kubernetes APIs remain available for new connections (ASSIGNED)
T2: At 06:45:58: systemd-shutdown was sending SIGTERM to remaining processes...
T3: At 06:45:58: kube-apiserver-ci-op-z52cbzhi-6d7cd-pz2jw-master-0: Received signal to terminate, becoming unready, but keeping serving (TerminationStart event)
T4: At 06:47:08: kube-apiserver-ci-op-z52cbzhi-6d7cd-pz2jw-master-0: The minimal shutdown duration of 1m10s finished (TerminationMinimalShutdownDurationFinished event)
T5: At 06:47:08: kube-apiserver-ci-op-z52cbzhi-6d7cd-pz2jw-master-0: Server has stopped listening (TerminationStoppedServing event)
T5 is the last event reported from that API server. At T5 the server may wait up to a further 60s for all in-flight requests to complete, and then it fires the TerminationGracefulTerminationFinished event.
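The T3→T5 gap above is exactly the reported minimal shutdown duration. A quick sanity check on the quoted timestamps (a sketch; only the times from the events above are used, same day assumed):

```python
from datetime import datetime, timedelta

# Timestamps from the T3 (TerminationStart) and T5 (TerminationStoppedServing) events.
termination_start = datetime.strptime("06:45:58", "%H:%M:%S")
stopped_serving = datetime.strptime("06:47:08", "%H:%M:%S")

# The gap should equal the minimal shutdown duration the server reported at T4.
minimal_shutdown = timedelta(minutes=1, seconds=10)
assert stopped_serving - termination_start == minimal_shutdown

# After T5 the server may wait up to a further 60s for in-flight requests
# before TerminationGracefulTerminationFinished would fire.
latest_graceful_finish = stopped_serving + timedelta(seconds=60)
print(latest_graceful_finish.time())  # 06:48:08
```

So if graceful termination completed, the final event would land no later than about 06:48:08; its absence is what makes T5 the last event seen.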
periodic-ci-openshift-multiarch-master-nightly-4.10-upgrade-from-nightly-4.9-ocp-remote-libvirt-ppc64le (all) - 14 runs, 79% failed, 36% of failures match = 29% impact
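The impact figure on the line above follows from the run counts; a sketch of the arithmetic (the rounding is an assumption about how the search tool derives its percentages):

```python
runs = 14
failed = round(runs * 0.79)      # 79% of 14 runs failed -> 11
matching = round(failed * 0.36)  # 36% of those failures match -> 4
impact = matching / runs         # fraction of all runs showing this symptom
print(failed, matching, f"{impact:.0%}")  # 11 4 29%
```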
#1619999016102137856 (junit, 22 hours ago)
Jan 30 10:51:17.202 E ns/openshift-insights pod/insights-operator-dd97657c9-4fmvv node/libvirt-ppc64le-2-1-7-86b95-master-0 container/insights-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Jan 30 10:51:22.823 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-operator-5b6cd64744-9hxch node/libvirt-ppc64le-2-1-7-86b95-master-0 container/csi-snapshot-controller-operator reason/ContainerExit code/1 cause/Error 76 +0000 UTC m=+2160.994097550\nI0130 10:51:13.085462       1 operator.go:159] Finished syncing operator at 123.785241ms\nI0130 10:51:13.088137       1 operator.go:157] Starting syncing operator at 2023-01-30 10:51:13.088133841 +0000 UTC m=+2161.120560894\nI0130 10:51:13.453536       1 operator.go:159] Finished syncing operator at 365.395663ms\nI0130 10:51:13.453568       1 operator.go:157] Starting syncing operator at 2023-01-30 10:51:13.453565085 +0000 UTC m=+2161.485992140\nI0130 10:51:13.569966       1 operator.go:159] Finished syncing operator at 116.394516ms\nI0130 10:51:19.773919       1 operator.go:157] Starting syncing operator at 2023-01-30 10:51:19.773912699 +0000 UTC m=+2167.806339773\nI0130 10:51:19.814652       1 operator.go:159] Finished syncing operator at 40.733817ms\nI0130 10:51:20.235059       1 operator.go:157] Starting syncing operator at 2023-01-30 10:51:20.235053375 +0000 UTC m=+2168.267480433\nI0130 10:51:20.262107       1 cmd.go:97] Received SIGTERM or SIGINT signal, shutting down controller.\nI0130 10:51:20.262330       1 genericapiserver.go:386] [graceful-termination] RunPreShutdownHooks has completed\nI0130 10:51:20.262349       1 genericapiserver.go:349] "[graceful-termination] shutdown event" name="ShutdownInitiated"\nI0130 10:51:20.262360       1 genericapiserver.go:352] "[graceful-termination] shutdown event" name="AfterShutdownDelayDuration"\nI0130 10:51:20.262376       1 genericapiserver.go:376] "[graceful-termination] shutdown event" name="InFlightRequestsDrained"\nI0130 10:51:20.262575       1 base_controller.go:167] Shutting down ManagementStateController ...\nI0130 10:51:20.262582       1 base_controller.go:167] Shutting down LoggingSyncer ...\nI0130 10:51:20.262593       1 
base_controller.go:167] Shutting down StatusSyncer_csi-snapshot-controller ...\nW0130 10:51:20.262663       1 builder.go:101] graceful termination failed, controllers failed with error: stopped\nI0130 10:51:20.262693       1 base_controller.go:114] Shutting down worker of StatusSyncer_csi-snapshot-controller controller ...\n
Jan 30 10:51:23.736 - 3s    E clusteroperator/csi-snapshot-controller condition/Available status/Unknown reason/CSISnapshotControllerAvailable: Waiting for the initial sync of the operator
Jan 30 10:51:27.319 E ns/openshift-kube-storage-version-migrator pod/migrator-56bcfdc948-llc99 node/libvirt-ppc64le-2-1-7-86b95-master-2 container/migrator reason/ContainerExit code/2 cause/Error I0130 10:15:20.238232       1 migrator.go:18] FLAG: --add_dir_header="false"\nI0130 10:15:20.238330       1 migrator.go:18] FLAG: --alsologtostderr="true"\nI0130 10:15:20.238337       1 migrator.go:18] FLAG: --kube-api-burst="1000"\nI0130 10:15:20.238343       1 migrator.go:18] FLAG: --kube-api-qps="40"\nI0130 10:15:20.238350       1 migrator.go:18] FLAG: --kubeconfig=""\nI0130 10:15:20.238356       1 migrator.go:18] FLAG: --log_backtrace_at=":0"\nI0130 10:15:20.238363       1 migrator.go:18] FLAG: --log_dir=""\nI0130 10:15:20.238369       1 migrator.go:18] FLAG: --log_file=""\nI0130 10:15:20.238373       1 migrator.go:18] FLAG: --log_file_max_size="1800"\nI0130 10:15:20.238377       1 migrator.go:18] FLAG: --logtostderr="true"\nI0130 10:15:20.238381       1 migrator.go:18] FLAG: --one_output="false"\nI0130 10:15:20.238386       1 migrator.go:18] FLAG: --skip_headers="false"\nI0130 10:15:20.238390       1 migrator.go:18] FLAG: --skip_log_headers="false"\nI0130 10:15:20.238394       1 migrator.go:18] FLAG: --stderrthreshold="2"\nI0130 10:15:20.238398       1 migrator.go:18] FLAG: --v="2"\nI0130 10:15:20.238402       1 migrator.go:18] FLAG: --vmodule=""\nI0130 10:15:20.240971       1 reflector.go:219] Starting reflector *v1alpha1.StorageVersionMigration (0s) from k8s.io/client-go@v0.21.0/tools/cache/reflector.go:167\nI0130 10:15:34.565462       1 kubemigrator.go:110] flowcontrol-flowschema-storage-version-migration: migration running\nI0130 10:15:35.622501       1 kubemigrator.go:127] flowcontrol-flowschema-storage-version-migration: migration succeeded\nI0130 10:15:36.643233       1 kubemigrator.go:110] flowcontrol-prioritylevel-storage-version-migration: migration running\nI0130 10:15:36.793723       1 kubemigrator.go:127] 
flowcontrol-prioritylevel-storage-version-migration: migration succeeded\n
Jan 30 10:51:29.329 E ns/openshift-monitoring pod/cluster-monitoring-operator-5c8c6d94d9-2k6v9 node/libvirt-ppc64le-2-1-7-86b95-master-0 container/kube-rbac-proxy reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Jan 30 10:51:29.678 E ns/openshift-console-operator pod/console-operator-5874579dd5-z48jr node/libvirt-ppc64le-2-1-7-86b95-master-2 container/console-operator reason/ContainerExit code/1 cause/Error _controller.go:104] All ConsoleServiceController workers have been terminated\nI0130 10:51:25.461169       1 base_controller.go:114] Shutting down worker of RemoveStaleConditionsController controller ...\nI0130 10:51:25.461309       1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-5874579dd5-z48jr", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationMinimalShutdownDurationFinished' The minimal shutdown duration of 0s finished\nI0130 10:51:25.461447       1 genericapiserver.go:362] "[graceful-termination] shutdown event" name="AfterShutdownDelayDuration"\nI0130 10:51:25.461058       1 base_controller.go:167] Shutting down RemoveStaleConditionsController ...\nI0130 10:51:25.461460       1 base_controller.go:104] All RemoveStaleConditionsController workers have been terminated\nI0130 10:51:25.461330       1 base_controller.go:114] Shutting down worker of ResourceSyncController controller ...\nI0130 10:51:25.461477       1 base_controller.go:104] All ResourceSyncController workers have been terminated\nI0130 10:51:25.461081       1 base_controller.go:114] Shutting down worker of DownloadsRouteController controller ...\nI0130 10:51:25.461468       1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-5874579dd5-z48jr", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStoppedServing' Server has stopped listening\nI0130 10:51:25.461490       1 base_controller.go:104] All DownloadsRouteController workers have been terminated\nI0130 10:51:25.461497       1 genericapiserver.go:387] "[graceful-termination] shutdown event" 
name="InFlightRequestsDrained"\nI0130 10:51:25.461062       1 base_controller.go:114] Shutting down worker of ConsoleDownloadsDeploymentSyncController controller ...\nW0130 10:51:25.461108       1 builder.go:101] graceful termination failed, controllers failed with error: stopped\n
Jan 30 10:51:29.817 E ns/openshift-cluster-storage-operator pod/csi-snapshot-webhook-b8fb4f9b5-r8tlh node/libvirt-ppc64le-2-1-7-86b95-master-2 container/webhook reason/ContainerExit code/2 cause/Error
Jan 30 10:51:29.898 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-5cc49f7c76-qjvxw node/libvirt-ppc64le-2-1-7-86b95-master-2 container/snapshot-controller reason/ContainerExit code/2 cause/Error
Jan 30 10:51:32.393 E ns/openshift-cluster-storage-operator pod/cluster-storage-operator-75fd5bb48c-gbvqr node/libvirt-ppc64le-2-1-7-86b95-master-0 container/cluster-storage-operator reason/ContainerExit code/1 cause/Error il="\"apiserver-loopback-client@1675073781\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1675073780\" (2023-01-30 09:16:20 +0000 UTC to 2024-01-30 09:16:20 +0000 UTC (now=2023-01-30 10:17:39.101198236 +0000 UTC))"\nI0130 10:51:29.708960       1 cmd.go:97] Received SIGTERM or SIGINT signal, shutting down controller.\nI0130 10:51:29.709045       1 genericapiserver.go:386] [graceful-termination] RunPreShutdownHooks has completed\nI0130 10:51:29.709074       1 genericapiserver.go:349] "[graceful-termination] shutdown event" name="ShutdownInitiated"\nI0130 10:51:29.709207       1 base_controller.go:167] Shutting down SnapshotCRDController ...\nI0130 10:51:29.709227       1 base_controller.go:167] Shutting down StatusSyncer_storage ...\nI0130 10:51:29.709233       1 base_controller.go:145] All StatusSyncer_storage post start hooks have been terminated\nI0130 10:51:29.709243       1 base_controller.go:167] Shutting down LoggingSyncer ...\nI0130 10:51:29.709256       1 base_controller.go:167] Shutting down DefaultStorageClassController ...\nI0130 10:51:29.709266       1 base_controller.go:167] Shutting down ConfigObserver ...\nI0130 10:51:29.709271       1 requestheader_controller.go:183] Shutting down RequestHeaderAuthRequestController\nI0130 10:51:29.709275       1 base_controller.go:167] Shutting down CSIDriverStarter ...\nI0130 10:51:29.709286       1 base_controller.go:167] Shutting down ManagementStateController ...\nI0130 10:51:29.709289       1 configmap_cafile_content.go:222] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"\nI0130 10:51:29.709295       1 base_controller.go:167] Shutting down VSphereProblemDetectorStarter 
...\nI0130 10:51:29.709305       1 configmap_cafile_content.go:222] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"\nW0130 10:51:29.709412       1 builder.go:101] graceful termination failed, controllers failed with error: stopped\n
Jan 30 10:51:32.393 E ns/openshift-cluster-storage-operator pod/cluster-storage-operator-75fd5bb48c-gbvqr node/libvirt-ppc64le-2-1-7-86b95-master-0 container/cluster-storage-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Jan 30 10:51:33.507 E ns/openshift-ingress-canary pod/ingress-canary-62rdl node/libvirt-ppc64le-2-1-7-86b95-worker-0-xrssg container/serve-healthcheck-canary reason/ContainerExit code/2 cause/Error serving on 8888\nserving on 8080\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\n
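Several operator logs in this run end with the same `builder.go:101] graceful termination failed, controllers failed with error: stopped` signature. Counting occurrences across a log excerpt can be sketched as below; the sample text is hypothetical, and note that ci-search renders newlines inside excerpts as literal `\n` escapes, so the split is on the two-character escape:

```python
# Hypothetical excerpt in the ci-search rendering, with literal "\n" escapes.
excerpt = (
    r"W0130 10:51:20.262663       1 builder.go:101] graceful termination "
    r"failed, controllers failed with error: stopped\n"
    r"I0130 10:51:20.262693       1 base_controller.go:114] Shutting down worker\n"
    r"W0127 11:00:21.530836       1 builder.go:101] graceful termination failed\n"
)

# Split on the literal backslash-n escape, then count the failure signature.
lines = excerpt.split(r"\n")
failures = [line for line in lines if "graceful termination failed" in line]
print(len(failures))  # 2
```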
#1618911768350822400 (junit, 3 days ago)
Jan 27 11:00:20.410 E ns/openshift-cluster-storage-operator pod/csi-snapshot-webhook-b8fb4f9b5-sjh5f node/libvirt-ppc64le-2-2-7-l2cfl-master-2 container/webhook reason/ContainerExit code/2 cause/Error
Jan 27 11:00:22.230 E ns/openshift-cluster-storage-operator pod/csi-snapshot-webhook-b8fb4f9b5-wskv5 node/libvirt-ppc64le-2-2-7-l2cfl-master-1 container/webhook reason/ContainerExit code/2 cause/Error
Jan 27 11:00:24.026 E ns/openshift-image-registry pod/cluster-image-registry-operator-b5d7c786f-gfxn8 node/libvirt-ppc64le-2-2-7-l2cfl-master-0 container/cluster-image-registry-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Jan 27 11:00:24.100 E ns/openshift-cluster-storage-operator pod/cluster-storage-operator-dfb65465f-h4t94 node/libvirt-ppc64le-2-2-7-l2cfl-master-0 container/cluster-storage-operator reason/ContainerExit code/1 cause/Error watch stream: http2: client connection lost") has prevented the request from succeeding\nW0127 10:20:11.053155       1 reflector.go:441] k8s.io/client-go@v12.0.0+incompatible/tools/cache/reflector.go:167: watch of *v1.RoleBinding ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding\nI0127 11:00:21.529145       1 cmd.go:97] Received SIGTERM or SIGINT signal, shutting down controller.\nI0127 11:00:21.529633       1 genericapiserver.go:386] [graceful-termination] RunPreShutdownHooks has completed\nI0127 11:00:21.529658       1 genericapiserver.go:349] "[graceful-termination] shutdown event" name="ShutdownInitiated"\nI0127 11:00:21.530688       1 base_controller.go:167] Shutting down SnapshotCRDController ...\nI0127 11:00:21.530708       1 base_controller.go:167] Shutting down ManagementStateController ...\nI0127 11:00:21.530721       1 base_controller.go:167] Shutting down DefaultStorageClassController ...\nI0127 11:00:21.530738       1 base_controller.go:167] Shutting down StatusSyncer_storage ...\nI0127 11:00:21.530743       1 base_controller.go:145] All StatusSyncer_storage post start hooks have been terminated\nI0127 11:00:21.530756       1 base_controller.go:167] Shutting down LoggingSyncer ...\nI0127 11:00:21.530767       1 base_controller.go:167] Shutting down ConfigObserver ...\nI0127 11:00:21.530779       1 base_controller.go:167] Shutting down CSIDriverStarter ...\nI0127 11:00:21.530790       1 base_controller.go:167] Shutting down VSphereProblemDetectorStarter ...\nI0127 11:00:21.530826       1 base_controller.go:114] Shutting down worker of SnapshotCRDController controller ...\nI0127 11:00:21.530833       1 base_controller.go:104] 
All SnapshotCRDController workers have been terminated\nW0127 11:00:21.530836       1 builder.go:101] graceful termination failed, controllers failed with error: stopped\nI0127 11:00:21.530842       1 base_controller.go:114] Shutting down worker of ManagementStateController controller ...\n
Jan 27 11:00:24.100 E ns/openshift-cluster-storage-operator pod/cluster-storage-operator-dfb65465f-h4t94 node/libvirt-ppc64le-2-2-7-l2cfl-master-0 container/cluster-storage-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Jan 27 11:00:24.303 E ns/openshift-console-operator pod/console-operator-565785b695-4vcl8 node/libvirt-ppc64le-2-2-7-l2cfl-master-1 container/console-operator reason/ContainerExit code/1 cause/Error "Pod", Namespace:"openshift-console-operator", Name:"console-operator-565785b695-4vcl8", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationMinimalShutdownDurationFinished' The minimal shutdown duration of 0s finished\nI0127 11:00:22.494132       1 genericapiserver.go:362] "[graceful-termination] shutdown event" name="AfterShutdownDelayDuration"\nI0127 11:00:22.494147       1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-565785b695-4vcl8", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStoppedServing' Server has stopped listening\nI0127 11:00:22.494168       1 genericapiserver.go:387] "[graceful-termination] shutdown event" name="InFlightRequestsDrained"\nI0127 11:00:22.495460       1 base_controller.go:167] Shutting down ConsoleOperator ...\nI0127 11:00:22.495487       1 base_controller.go:167] Shutting down StatusSyncer_console ...\nI0127 11:00:22.495503       1 base_controller.go:145] All StatusSyncer_console post start hooks have been terminated\nI0127 11:00:22.495515       1 base_controller.go:167] Shutting down ConsoleDownloadsDeploymentSyncController ...\nI0127 11:00:22.495534       1 base_controller.go:167] Shutting down ConsoleCLIDownloadsController ...\nI0127 11:00:22.495546       1 base_controller.go:167] Shutting down HealthCheckController ...\nI0127 11:00:22.495564       1 base_controller.go:167] Shutting down ResourceSyncController ...\nI0127 11:00:22.495580       1 base_controller.go:167] Shutting down UnsupportedConfigOverridesController ...\nI0127 11:00:22.495593       1 base_controller.go:167] Shutting down ConsoleServiceController ...\nI0127 11:00:22.495602       1 
base_controller.go:167] Shutting down ConsoleRouteController ...\nI0127 11:00:22.495619       1 base_controller.go:167] Shutting down ManagementStateController ...\nW0127 11:00:22.495695       1 builder.go:101] graceful termination failed, controllers failed with error: stopped\n
Jan 27 11:00:26.521 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-5cc49f7c76-hm9rw node/libvirt-ppc64le-2-2-7-l2cfl-master-2 container/snapshot-controller reason/ContainerExit code/2 cause/Error
Jan 27 11:00:27.523 E ns/openshift-ingress-canary pod/ingress-canary-ghcjz node/libvirt-ppc64le-2-2-7-l2cfl-worker-0-s26z8 container/serve-healthcheck-canary reason/ContainerExit code/2 cause/Error serving on 8888\nserving on 8080\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\n
Jan 27 11:00:29.759 E ns/openshift-monitoring pod/alertmanager-main-1 node/libvirt-ppc64le-2-2-7-l2cfl-worker-0-5x6h7 container/alertmanager-proxy reason/ContainerExit code/2 cause/Error 2023/01/27 10:23:25 provider.go:128: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2023/01/27 10:23:25 provider.go:133: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2023/01/27 10:23:25 provider.go:351: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2023/01/27 10:23:25 oauthproxy.go:203: mapping path "/" => upstream "http://localhost:9093/"\n2023/01/27 10:23:25 oauthproxy.go:230: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2023/01/27 10:23:25 oauthproxy.go:240: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> samesite: refresh:disabled\nI0127 10:23:25.093183       1 dynamic_serving_content.go:130] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2023/01/27 10:23:25 http.go:107: HTTPS: listening on [::]:9095\n
Jan 27 11:00:29.759 E ns/openshift-monitoring pod/alertmanager-main-1 node/libvirt-ppc64le-2-2-7-l2cfl-worker-0-5x6h7 container/config-reloader reason/ContainerExit code/2 cause/Error level=info ts=2023-01-27T10:23:24.310884072Z caller=main.go:148 msg="Starting prometheus-config-reloader" version="(version=0.49.0, branch=rhaos-4.9-rhel-8, revision=d709566)"\nlevel=info ts=2023-01-27T10:23:24.311009829Z caller=main.go:149 build_context="(go=go1.16.12, user=root, date=20221205-20:41:36)"\nlevel=info ts=2023-01-27T10:23:24.311191777Z caller=main.go:183 msg="Starting web server for metrics" listen=localhost:8080\nlevel=info ts=2023-01-27T10:23:24.311387006Z caller=reloader.go:219 msg="started watching config file and directories for changes" cfg= out= dirs=/etc/alertmanager/config,/etc/alertmanager/secrets/alertmanager-main-tls,/etc/alertmanager/secrets/alertmanager-main-proxy,/etc/alertmanager/secrets/alertmanager-kube-rbac-proxy\nlevel=info ts=2023-01-27T10:23:26.126862934Z caller=reloader.go:355 msg="Reload triggered" cfg_in= cfg_out= watched_dirs="/etc/alertmanager/config, /etc/alertmanager/secrets/alertmanager-main-tls, /etc/alertmanager/secrets/alertmanager-main-proxy, /etc/alertmanager/secrets/alertmanager-kube-rbac-proxy"\n
Jan 27 11:00:29.834 E ns/openshift-monitoring pod/alertmanager-main-0 node/libvirt-ppc64le-2-2-7-l2cfl-worker-0-5x6h7 container/alertmanager-proxy reason/ContainerExit code/2 cause/Error 2023/01/27 10:23:24 provider.go:128: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2023/01/27 10:23:24 provider.go:133: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2023/01/27 10:23:24 provider.go:351: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2023/01/27 10:23:24 oauthproxy.go:203: mapping path "/" => upstream "http://localhost:9093/"\n2023/01/27 10:23:24 oauthproxy.go:230: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2023/01/27 10:23:24 oauthproxy.go:240: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> samesite: refresh:disabled\n2023/01/27 10:23:24 http.go:107: HTTPS: listening on [::]:9095\nI0127 10:23:24.620489       1 dynamic_serving_content.go:130] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n
#1615650469005234176 (junit, 12 days ago)
Jan 18 11:15:49.596 E ns/openshift-cluster-storage-operator pod/csi-snapshot-webhook-b8fb4f9b5-z6scb node/libvirt-ppc64le-1-0-7-4z2wz-master-0 container/webhook reason/ContainerExit code/2 cause/Error
Jan 18 11:15:52.950 E ns/openshift-monitoring pod/cluster-monitoring-operator-6c59cfc69d-jkq5m node/libvirt-ppc64le-1-0-7-4z2wz-master-2 container/kube-rbac-proxy reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Jan 18 11:15:53.298 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-5cc49f7c76-qp7d9 node/libvirt-ppc64le-1-0-7-4z2wz-master-1 container/snapshot-controller reason/ContainerExit code/2 cause/Error
Jan 18 11:15:54.201 E ns/openshift-cluster-storage-operator pod/cluster-storage-operator-9466d889b-b4pcn node/libvirt-ppc64le-1-0-7-4z2wz-master-2 container/cluster-storage-operator reason/ContainerExit code/1 cause/Error :06.160430       1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1674037879\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1674037879\" (2023-01-18 09:31:18 +0000 UTC to 2024-01-18 09:31:18 +0000 UTC (now=2023-01-18 10:34:06.160413667 +0000 UTC))"\nE0118 10:35:51.475534       1 leaderelection.go:330] error retrieving resource lock openshift-cluster-storage-operator/cluster-storage-operator-lock: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-cluster-storage-operator/configmaps/cluster-storage-operator-lock?timeout=1m47s": read tcp 10.128.0.15:40548->172.30.0.1:443: read: connection timed out\nI0118 11:15:51.338254       1 cmd.go:97] Received SIGTERM or SIGINT signal, shutting down controller.\nI0118 11:15:51.338639       1 genericapiserver.go:386] [graceful-termination] RunPreShutdownHooks has completed\nI0118 11:15:51.338701       1 genericapiserver.go:349] "[graceful-termination] shutdown event" name="ShutdownInitiated"\nI0118 11:15:51.339359       1 base_controller.go:167] Shutting down SnapshotCRDController ...\nI0118 11:15:51.339374       1 base_controller.go:167] Shutting down CSIDriverStarter ...\nI0118 11:15:51.339383       1 base_controller.go:167] Shutting down DefaultStorageClassController ...\nI0118 11:15:51.339392       1 base_controller.go:167] Shutting down ConfigObserver ...\nI0118 11:15:51.339400       1 base_controller.go:167] Shutting down ManagementStateController ...\nI0118 11:15:51.339408       1 base_controller.go:167] Shutting down LoggingSyncer ...\nI0118 11:15:51.339419       1 base_controller.go:167] Shutting down StatusSyncer_storage ...\nI0118 11:15:51.341070       1 
base_controller.go:145] All StatusSyncer_storage post start hooks have been terminated\nI0118 11:15:51.339428       1 base_controller.go:167] Shutting down VSphereProblemDetectorStarter ...\nW0118 11:15:51.339631       1 builder.go:101] graceful termination failed, controllers failed with error: stopped\n
Jan 18 11:15:54.201 E ns/openshift-cluster-storage-operator pod/cluster-storage-operator-9466d889b-b4pcn node/libvirt-ppc64le-1-0-7-4z2wz-master-2 container/cluster-storage-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Jan 18 11:15:55.636 E ns/openshift-console-operator pod/console-operator-58d9b8bd78-qvgdp node/libvirt-ppc64le-1-0-7-4z2wz-master-0 container/console-operator reason/ContainerExit code/1 cause/Error n:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStart' Received signal to terminate, becoming unready, but keeping serving\nI0118 11:15:53.714183       1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-58d9b8bd78-qvgdp", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationMinimalShutdownDurationFinished' The minimal shutdown duration of 0s finished\nI0118 11:15:53.714201       1 genericapiserver.go:362] "[graceful-termination] shutdown event" name="AfterShutdownDelayDuration"\nI0118 11:15:53.714224       1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-58d9b8bd78-qvgdp", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStoppedServing' Server has stopped listening\nI0118 11:15:53.714248       1 genericapiserver.go:387] "[graceful-termination] shutdown event" name="InFlightRequestsDrained"\nI0118 11:15:53.714321       1 base_controller.go:167] Shutting down ResourceSyncController ...\nI0118 11:15:53.714368       1 base_controller.go:167] Shutting down DownloadsRouteController ...\nI0118 11:15:53.714382       1 base_controller.go:167] Shutting down ConsoleServiceController ...\nI0118 11:15:53.714396       1 base_controller.go:167] Shutting down ManagementStateController ...\nI0118 11:15:53.714410       1 base_controller.go:167] Shutting down ConsoleOperator ...\nI0118 11:15:53.714421       1 base_controller.go:167] Shutting down ConsoleRouteController ...\nI0118 11:15:53.714434       1 base_controller.go:167] Shutting down ConsoleServiceController ...\nI0118 11:15:53.714446       1 
base_controller.go:167] Shutting down UnsupportedConfigOverridesController ...\nW0118 11:15:53.714447       1 builder.go:101] graceful termination failed, controllers failed with error: stopped\nI0118 11:15:53.714457       1 base_controller.go:167] Shutting down LoggingSyncer ...\n
Jan 18 11:15:57.342 E ns/openshift-ingress-canary pod/ingress-canary-vlz7p node/libvirt-ppc64le-1-0-7-4z2wz-worker-0-rbqs2 container/serve-healthcheck-canary reason/ContainerExit code/2 cause/Error serving on 8888\nserving on 8080\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\n
Jan 18 11:16:00.341 E ns/openshift-cluster-storage-operator pod/csi-snapshot-webhook-b8fb4f9b5-zfx46 node/libvirt-ppc64le-1-0-7-4z2wz-master-1 container/webhook reason/ContainerExit code/2 cause/Error
Jan 18 11:16:04.421 E ns/openshift-monitoring pod/prometheus-k8s-1 node/libvirt-ppc64le-1-0-7-4z2wz-worker-0-rbqs2 container/prometheus-proxy reason/ContainerExit code/2 cause/Error 2023/01/18 10:38:32 provider.go:128: Defaulting client-id to system:serviceaccount:openshift-monitoring:prometheus-k8s\n2023/01/18 10:38:32 provider.go:133: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2023/01/18 10:38:32 provider.go:351: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2023/01/18 10:38:32 oauthproxy.go:203: mapping path "/" => upstream "http://localhost:9090/"\n2023/01/18 10:38:32 oauthproxy.go:230: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s\n2023/01/18 10:38:32 oauthproxy.go:240: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> samesite: refresh:disabled\n2023/01/18 10:38:32 main.go:156: using htpasswd file /etc/proxy/htpasswd/auth\n2023/01/18 10:38:32 http.go:107: HTTPS: listening on [::]:9091\nI0118 10:38:32.527719       1 dynamic_serving_content.go:130] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n
Jan 18 11:16:04.421 E ns/openshift-monitoring pod/prometheus-k8s-1 node/libvirt-ppc64le-1-0-7-4z2wz-worker-0-rbqs2 container/config-reloader reason/ContainerExit code/2 cause/Error level=info ts=2023-01-18T10:38:31.520259427Z caller=main.go:148 msg="Starting prometheus-config-reloader" version="(version=0.49.0, branch=rhaos-4.9-rhel-8, revision=d709566)"\nlevel=info ts=2023-01-18T10:38:31.520475542Z caller=main.go:149 build_context="(go=go1.16.12, user=root, date=20221205-20:41:36)"\nlevel=info ts=2023-01-18T10:38:31.520652346Z caller=main.go:183 msg="Starting web server for metrics" listen=localhost:8080\nlevel=info ts=2023-01-18T10:38:31.938578588Z caller=reloader.go:355 msg="Reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml watched_dirs=/etc/prometheus/rules/prometheus-k8s-rulefiles-0\nlevel=info ts=2023-01-18T10:38:31.938709711Z caller=reloader.go:219 msg="started watching config file and directories for changes" cfg=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml dirs=/etc/prometheus/rules/prometheus-k8s-rulefiles-0\nlevel=info ts=2023-01-18T10:38:36.290197028Z caller=reloader.go:355 msg="Reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml watched_dirs=/etc/prometheus/rules/prometheus-k8s-rulefiles-0\nlevel=info ts=2023-01-18T10:39:45.106834373Z caller=reloader.go:355 msg="Reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml watched_dirs=/etc/prometheus/rules/prometheus-k8s-rulefiles-0\nlevel=info ts=2023-01-18T10:40:56.279405735Z caller=r
Jan 18 11:16:04.545 E ns/openshift-monitoring pod/alertmanager-main-0 node/libvirt-ppc64le-1-0-7-4z2wz-worker-0-rbqs2 container/alertmanager-proxy reason/ContainerExit code/2 cause/Error 2023/01/18 10:38:27 provider.go:128: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2023/01/18 10:38:27 provider.go:133: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2023/01/18 10:38:27 provider.go:351: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2023/01/18 10:38:27 oauthproxy.go:203: mapping path "/" => upstream "http://localhost:9093/"\n2023/01/18 10:38:27 oauthproxy.go:230: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2023/01/18 10:38:27 oauthproxy.go:240: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> samesite: refresh:disabled\nI0118 10:38:27.710553       1 dynamic_serving_content.go:130] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2023/01/18 10:38:27 http.go:107: HTTPS: listening on [::]:9095\n
#1618549440136613888 junit 4 days ago
Jan 26 10:55:33.850 E ns/openshift-insights pod/insights-operator-646cc8f845-sscbs node/libvirt-ppc64le-1-2-7-55xhw-master-2 container/insights-operator reason/ContainerExit code/2 cause/Error 2372       1 httplog.go:104] "HTTP" verb="GET" URI="/metrics" latency="20.051722ms" userAgent="Prometheus/2.29.2" audit-ID="015e6f7e-e2cd-4176-b680-f3a07eb6f4e3" srcIP="10.131.0.19:56604" resp=200\nI0126 10:53:53.535590       1 httplog.go:104] "HTTP" verb="GET" URI="/metrics" latency="18.103501ms" userAgent="Prometheus/2.29.2" audit-ID="3b1ba488-dad6-4c79-ad2e-b6b5a0866794" srcIP="10.128.2.12:45510" resp=200\nI0126 10:54:21.340210       1 httplog.go:104] "HTTP" verb="GET" URI="/metrics" latency="27.912318ms" userAgent="Prometheus/2.29.2" audit-ID="9a87ba8b-53c1-4c47-bd83-b6aec00cd88c" srcIP="10.131.0.19:56604" resp=200\nI0126 10:54:23.521605       1 httplog.go:104] "HTTP" verb="GET" URI="/metrics" latency="4.736017ms" userAgent="Prometheus/2.29.2" audit-ID="bbcf3303-851d-40b7-8d14-1f78d001c8df" srcIP="10.128.2.12:45510" resp=200\nI0126 10:54:50.694811       1 status.go:354] The operator is healthy\nI0126 10:54:50.694946       1 status.go:441] No status update necessary, objects are identical\nI0126 10:54:51.328802       1 httplog.go:104] "HTTP" verb="GET" URI="/metrics" latency="16.817066ms" userAgent="Prometheus/2.29.2" audit-ID="0b3fed0a-61d2-48a8-821a-842b7239a29c" srcIP="10.131.0.19:56604" resp=200\nI0126 10:54:53.539490       1 httplog.go:104] "HTTP" verb="GET" URI="/metrics" latency="13.836656ms" userAgent="Prometheus/2.29.2" audit-ID="ecb6c7ce-2aeb-455d-872c-e91c6a81bfa3" srcIP="10.128.2.12:45510" resp=200\nI0126 10:55:06.341935       1 reflector.go:535] k8s.io/apiserver/pkg/authentication/request/headerrequest/requestheader_controller.go:172: Watch close - *v1.ConfigMap total 10 items received\nI0126 10:55:21.341152       1 httplog.go:104] "HTTP" verb="GET" URI="/metrics" latency="27.670676ms" userAgent="Prometheus/2.29.2" 
audit-ID="60ecd0c4-55c2-478e-b15d-29b708423d6e" srcIP="10.131.0.19:56604" resp=200\nI0126 10:55:23.522414       1 httplog.go:104] "HTTP" verb="GET" URI="/metrics" latency="4.702347ms" userAgent="Prometheus/2.29.2" audit-ID="babe55bd-cfc8-4c86-8f8c-7635fc7e20f4" srcIP="10.128.2.12:45510" resp=200\n
Jan 26 10:55:33.850 E ns/openshift-insights pod/insights-operator-646cc8f845-sscbs node/libvirt-ppc64le-1-2-7-55xhw-master-2 container/insights-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Jan 26 10:55:35.927 E ns/openshift-authentication-operator pod/authentication-operator-56cf5d89f4-rn624 node/libvirt-ppc64le-1-2-7-55xhw-master-2 container/authentication-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Jan 26 10:55:46.055 E ns/openshift-machine-api pod/cluster-autoscaler-operator-664844cddb-z9tfs node/libvirt-ppc64le-1-2-7-55xhw-master-2 container/cluster-autoscaler-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Jan 26 10:55:46.133 E ns/openshift-monitoring pod/cluster-monitoring-operator-78f9df8d55-ht29n node/libvirt-ppc64le-1-2-7-55xhw-master-2 container/kube-rbac-proxy reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Jan 26 10:55:49.125 E ns/openshift-console-operator pod/console-operator-565785b695-869w5 node/libvirt-ppc64le-1-2-7-55xhw-master-1 container/console-operator reason/ContainerExit code/1 cause/Error ing down ConsoleRouteController ...\nI0126 10:55:47.659894       1 base_controller.go:167] Shutting down ConsoleServiceController ...\nI0126 10:55:47.659908       1 base_controller.go:167] Shutting down DownloadsRouteController ...\nI0126 10:55:47.659925       1 base_controller.go:167] Shutting down ConsoleServiceController ...\nI0126 10:55:47.659938       1 base_controller.go:167] Shutting down ConsoleDownloadsDeploymentSyncController ...\nI0126 10:55:47.659951       1 base_controller.go:167] Shutting down ConsoleCLIDownloadsController ...\nI0126 10:55:47.659965       1 base_controller.go:167] Shutting down HealthCheckController ...\nI0126 10:55:47.659978       1 base_controller.go:167] Shutting down UnsupportedConfigOverridesController ...\nI0126 10:55:47.659991       1 base_controller.go:167] Shutting down ManagementStateController ...\nI0126 10:55:47.660004       1 base_controller.go:167] Shutting down ConsoleOperator ...\nI0126 10:55:47.660017       1 base_controller.go:167] Shutting down LoggingSyncer ...\nI0126 10:55:47.660030       1 base_controller.go:167] Shutting down RemoveStaleConditionsController ...\nI0126 10:55:47.660057       1 genericapiserver.go:355] "[graceful-termination] shutdown event" name="ShutdownInitiated"\nI0126 10:55:47.660096       1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-565785b695-869w5", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStart' Received signal to terminate, becoming unready, but keeping serving\nI0126 10:55:47.660144       1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-565785b695-869w5", UID:"", APIVersion:"v1", 
ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationMinimalShutdownDurationFinished' The minimal shutdown duration of 0s finished\nW0126 10:55:47.660183       1 builder.go:101] graceful termination failed, controllers failed with error: stopped\n
Jan 26 10:55:49.125 E ns/openshift-console-operator pod/console-operator-565785b695-869w5 node/libvirt-ppc64le-1-2-7-55xhw-master-1 container/console-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Jan 26 10:55:49.214 E ns/openshift-cluster-storage-operator pod/cluster-storage-operator-dfb65465f-6dz4c node/libvirt-ppc64le-1-2-7-55xhw-master-2 container/cluster-storage-operator reason/ContainerExit code/1 cause/Error  listed" 19708ms (10:20:33.696)\nTrace[197198973]: [19.70941732s] [19.70941732s] END\nI0126 10:20:33.703871       1 trace.go:205] Trace[179895819]: "Reflector ListAndWatch" name:k8s.io/client-go@v12.0.0+incompatible/tools/cache/reflector.go:167 (26-Jan-2023 10:20:12.605) (total time: 21098ms):\nTrace[179895819]: ---"Objects listed" 21098ms (10:20:33.703)\nTrace[179895819]: [21.098428702s] [21.098428702s] END\nI0126 10:20:34.197019       1 trace.go:205] Trace[117701109]: "Reflector ListAndWatch" name:k8s.io/client-go@v12.0.0+incompatible/tools/cache/reflector.go:167 (26-Jan-2023 10:20:12.164) (total time: 22032ms):\nTrace[117701109]: ---"Objects listed" 22032ms (10:20:34.196)\nTrace[117701109]: [22.032279306s] [22.032279306s] END\nI0126 10:55:47.322461       1 cmd.go:97] Received SIGTERM or SIGINT signal, shutting down controller.\nI0126 10:55:47.322729       1 genericapiserver.go:386] [graceful-termination] RunPreShutdownHooks has completed\nI0126 10:55:47.322744       1 genericapiserver.go:349] "[graceful-termination] shutdown event" name="ShutdownInitiated"\nI0126 10:55:47.323485       1 base_controller.go:167] Shutting down SnapshotCRDController ...\nI0126 10:55:47.323499       1 base_controller.go:167] Shutting down DefaultStorageClassController ...\nI0126 10:55:47.323510       1 base_controller.go:167] Shutting down ConfigObserver ...\nI0126 10:55:47.323520       1 base_controller.go:167] Shutting down ManagementStateController ...\nI0126 10:55:47.323532       1 base_controller.go:167] Shutting down StatusSyncer_storage ...\nI0126 10:55:47.323537       1 base_controller.go:145] All StatusSyncer_storage post start hooks have been terminated\nI0126 10:55:47.323547       1 base_controller.go:167] Shutting down CSIDriverStarter ...\nI0126 
10:55:47.323558       1 base_controller.go:167] Shutting down LoggingSyncer ...\nI0126 10:55:47.323567       1 base_controller.go:167] Shutting down VSphereProblemDetectorStarter ...\nW0126 10:55:47.323857       1 builder.go:101] graceful termination failed, controllers failed with error: stopped\n
Jan 26 10:55:49.214 E ns/openshift-cluster-storage-operator pod/cluster-storage-operator-dfb65465f-6dz4c node/libvirt-ppc64le-1-2-7-55xhw-master-2 container/cluster-storage-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Jan 26 10:55:49.241 E ns/openshift-cluster-node-tuning-operator pod/cluster-node-tuning-operator-6b87cc77d9-kjpgb node/libvirt-ppc64le-1-2-7-55xhw-master-2 container/cluster-node-tuning-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Jan 26 10:55:49.310 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-operator-5cfbb5cc68-874l8 node/libvirt-ppc64le-1-2-7-55xhw-master-2 container/csi-snapshot-controller-operator reason/ContainerExit code/1 cause/Error ncing operator at 58.685344ms\nI0126 10:55:40.002453       1 operator.go:157] Starting syncing operator at 2023-01-26 10:55:40.002450379 +0000 UTC m=+2483.104910973\nI0126 10:55:40.447436       1 operator.go:159] Finished syncing operator at 444.979303ms\nI0126 10:55:45.580431       1 operator.go:157] Starting syncing operator at 2023-01-26 10:55:45.580423372 +0000 UTC m=+2488.682883960\nI0126 10:55:45.666040       1 operator.go:159] Finished syncing operator at 85.610769ms\nI0126 10:55:47.229480       1 operator.go:157] Starting syncing operator at 2023-01-26 10:55:47.229472796 +0000 UTC m=+2490.331933396\nI0126 10:55:47.275223       1 cmd.go:97] Received SIGTERM or SIGINT signal, shutting down controller.\nI0126 10:55:47.275276       1 genericapiserver.go:386] [graceful-termination] RunPreShutdownHooks has completed\nI0126 10:55:47.275297       1 genericapiserver.go:349] "[graceful-termination] shutdown event" name="ShutdownInitiated"\nI0126 10:55:47.275305       1 base_controller.go:167] Shutting down ManagementStateController ...\nI0126 10:55:47.275312       1 genericapiserver.go:352] "[graceful-termination] shutdown event" name="AfterShutdownDelayDuration"\nI0126 10:55:47.275319       1 base_controller.go:167] Shutting down StatusSyncer_csi-snapshot-controller ...\nI0126 10:55:47.275325       1 base_controller.go:145] All StatusSyncer_csi-snapshot-controller post start hooks have been terminated\nI0126 10:55:47.275334       1 genericapiserver.go:376] "[graceful-termination] shutdown event" name="InFlightRequestsDrained"\nI0126 10:55:47.275336       1 base_controller.go:167] Shutting down CSISnapshotWebhookController ...\nI0126 10:55:47.275347       1 base_controller.go:167] Shutting down LoggingSyncer ...\nI0126 
10:55:47.275357       1 base_controller.go:167] Shutting down StaticResourceController ...\nW0126 10:55:47.275433       1 builder.go:101] graceful termination failed, controllers failed with error: stopped\nI0126 10:55:47.275434       1 requestheader_controller.go:183] Shutting down RequestHeaderAuthRequestController\n

Found in 28.57% of runs (36.36% of failures) across 14 total runs and 1 job (78.57% failed)