Job:
#1921157 (bug, 23 months ago) [sig-api-machinery] Kubernetes APIs remain available for new connections (ASSIGNED)
T2: At 06:45:58: systemd-shutdown was sending SIGTERM to remaining processes...
T3: At 06:45:58: kube-apiserver-ci-op-z52cbzhi-6d7cd-pz2jw-master-0: Received signal to terminate, becoming unready, but keeping serving (TerminationStart event)
T4: At 06:47:08: kube-apiserver-ci-op-z52cbzhi-6d7cd-pz2jw-master-0: The minimal shutdown duration of 1m10s finished (TerminationMinimalShutdownDurationFinished event)
T5: At 06:47:08: kube-apiserver-ci-op-z52cbzhi-6d7cd-pz2jw-master-0: Server has stopped listening (TerminationStoppedServing event)
T5 is the last event reported from that API server. At T5 the server may wait up to 60s for in-flight requests to complete, and only then does it fire the TerminationGracefulTerminationFinished event.
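
For readers new to these event names, the sequence from T3 through T5 (plus the final drain) maps onto an ordinary graceful-shutdown loop. Below is a minimal Go sketch of that sequence, assuming a plain net/http server; the port, durations, and log strings simply mirror the events quoted above, and this is not the actual k8s.io/apiserver implementation.

package main

import (
	"context"
	"log"
	"net/http"
	"os"
	"os/signal"
	"syscall"
	"time"
)

func main() {
	srv := &http.Server{Addr: ":8443"} // illustrative port, not the apiserver's
	go func() {
		if err := srv.ListenAndServe(); err != nil && err != http.ErrServerClosed {
			log.Fatal(err)
		}
	}()

	// T3: receive SIGTERM, flip readiness, but keep serving.
	sigCh := make(chan os.Signal, 1)
	signal.Notify(sigCh, syscall.SIGTERM)
	<-sigCh
	log.Println("TerminationStart: received signal to terminate, becoming unready, but keeping serving")

	// T4: keep serving for the minimal shutdown duration (1m10s in the
	// kube-apiserver events above) so clients and load balancers can
	// observe the readiness change before the listener goes away.
	time.Sleep(70 * time.Second)
	log.Println("TerminationMinimalShutdownDurationFinished")

	// T5 and after: close the listener, then give in-flight requests up
	// to 60s to drain before declaring graceful termination finished.
	log.Println("TerminationStoppedServing: server has stopped listening")
	ctx, cancel := context.WithTimeout(context.Background(), 60*time.Second)
	defer cancel()
	if err := srv.Shutdown(ctx); err != nil {
		log.Printf("graceful drain did not finish: %v", err)
	}
	log.Println("TerminationGracefulTerminationFinished")
}
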
periodic-ci-openshift-release-master-nightly-4.10-upgrade-from-stable-4.9-e2e-metal-ipi-bm-upgrade (all) - 18 runs, 56% failed, 30% of failures match = 17% impact
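(For reference, those percentages are consistent with each other: roughly 10 of the 18 runs failed (55.56%), about 3 of those failures matched the search (30%), and 3/18 is approximately 16.67%, which the header rounds to 17% impact.)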
#1619941310431498240 (junit, 25 hours ago)
Jan 30 08:06:38.777 E ns/openshift-machine-api pod/metal3-67565587c7-pwrw7 node/host2.cluster9.ocpci.eng.rdu2.redhat.com container/ironic-deploy-ramdisk-logs reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Jan 30 08:06:38.777 E ns/openshift-machine-api pod/metal3-67565587c7-pwrw7 node/host2.cluster9.ocpci.eng.rdu2.redhat.com container/ironic-inspector-ramdisk-logs reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Jan 30 08:06:40.067 E ns/openshift-machine-api pod/metal3-796ff47d67-zf77p node/host3.cluster9.ocpci.eng.rdu2.redhat.com container/metal3-static-ip-set reason/ContainerExit code/1 cause/Error
Jan 30 08:06:48.073 E ns/openshift-image-registry pod/cluster-image-registry-operator-864d6d8695-27mfs node/host3.cluster9.ocpci.eng.rdu2.redhat.com container/cluster-image-registry-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Jan 30 08:06:48.093 E ns/openshift-monitoring pod/cluster-monitoring-operator-894d44997-75wvb node/host3.cluster9.ocpci.eng.rdu2.redhat.com container/kube-rbac-proxy reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Jan 30 08:06:52.776 E ns/openshift-console-operator pod/console-operator-6bbd4fcc8c-cfdmh node/host4.cluster9.ocpci.eng.rdu2.redhat.com container/console-operator reason/ContainerExit code/1 cause/Error ination] shutdown event" name="ShutdownInitiated"\nI0130 08:06:26.749807       1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-6bbd4fcc8c-cfdmh", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationPreShutdownHooksFinished' All pre-shutdown hooks have been finished\nI0130 08:06:26.749822       1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-6bbd4fcc8c-cfdmh", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStart' Received signal to terminate, becoming unready, but keeping serving\nI0130 08:06:26.749835       1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-6bbd4fcc8c-cfdmh", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationMinimalShutdownDurationFinished' The minimal shutdown duration of 0s finished\nI0130 08:06:26.749839       1 base_controller.go:167] Shutting down ConsoleCLIDownloadsController ...\nI0130 08:06:26.749845       1 genericapiserver.go:362] "[graceful-termination] shutdown event" name="AfterShutdownDelayDuration"\nI0130 08:06:26.749850       1 base_controller.go:167] Shutting down ConsoleOperator ...\nI0130 08:06:26.749861       1 base_controller.go:167] Shutting down StatusSyncer_console ...\nI0130 08:06:26.749858       1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-6bbd4fcc8c-cfdmh", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStoppedServing' Server has stopped listening\nW0130 08:06:26.749865       1 builder.go:101] graceful termination failed, controllers failed with error: stopped\nI0130 08:06:26.749871       1 genericapiserver.go:387] "[graceful-termination] shutdown event" name="InFlightRequestsDrained"\n
Jan 30 08:06:56.743 E ns/openshift-kube-storage-version-migrator pod/migrator-5554c9565f-t7dz5 node/host2.cluster9.ocpci.eng.rdu2.redhat.com container/migrator reason/ContainerExit code/2 cause/Error tools/cache/reflector.go:167: watch of *v1alpha1.StorageVersionMigration ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding\nI0130 06:58:08.474029       1 trace.go:205] Trace[336122540]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.21.0/tools/cache/reflector.go:167 (30-Jan-2023 06:57:08.470) (total time: 60003ms):\nTrace[336122540]: [1m0.003700608s] [1m0.003700608s] END\nE0130 06:58:08.474042       1 reflector.go:138] k8s.io/client-go@v0.21.0/tools/cache/reflector.go:167: Failed to watch *v1alpha1.StorageVersionMigration: failed to list *v1alpha1.StorageVersionMigration: the server was unable to return a response in the time allotted, but may still be processing the request (get storageversionmigrations.migration.k8s.io)\nI0130 06:58:58.544058       1 trace.go:205] Trace[646203300]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.21.0/tools/cache/reflector.go:167 (30-Jan-2023 06:58:10.230) (total time: 48313ms):\nTrace[646203300]: ---"Objects listed" 48313ms (06:58:00.544)\nTrace[646203300]: [48.313936371s] [48.313936371s] END\nE0130 06:58:58.546276       1 reflector.go:138] k8s.io/client-go@v0.21.0/tools/cache/reflector.go:167: Failed to watch *v1alpha1.StorageVersionMigration: the server has received too many requests and has asked us to try again later (get storageversionmigrations.migration.k8s.io)\nI0130 06:59:14.357473       1 trace.go:205] Trace[460128162]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.21.0/tools/cache/reflector.go:167 (30-Jan-2023 06:59:04.350) (total time: 10007ms):\nTrace[460128162]: [10.007057864s] [10.007057864s] END\nE0130 06:59:14.357487       1 reflector.go:138] k8s.io/client-go@v0.21.0/tools/cache/reflector.go:167: Failed to watch *v1alpha1.StorageVersionMigration: failed to list *v1alpha1.StorageVersionMigration: Get "https://172.30.0.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations?resourceVersion=16496": dial tcp 172.30.0.1:443: connect: connection refused\n
Jan 30 08:06:57.500 E ns/openshift-monitoring pod/prometheus-k8s-0 node/host6.cluster9.ocpci.eng.rdu2.redhat.com container/prometheus-proxy reason/ContainerExit code/2 cause/Error 2023/01/30 07:14:59 provider.go:128: Defaulting client-id to system:serviceaccount:openshift-monitoring:prometheus-k8s\n2023/01/30 07:14:59 provider.go:133: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2023/01/30 07:14:59 provider.go:351: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2023/01/30 07:14:59 oauthproxy.go:203: mapping path "/" => upstream "http://localhost:9090/"\n2023/01/30 07:14:59 oauthproxy.go:230: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s\n2023/01/30 07:14:59 oauthproxy.go:240: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> samesite: refresh:disabled\n2023/01/30 07:14:59 main.go:156: using htpasswd file /etc/proxy/htpasswd/auth\nI0130 07:14:59.817271       1 dynamic_serving_content.go:130] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2023/01/30 07:14:59 http.go:107: HTTPS: listening on [::]:9091\n
Jan 30 08:06:57.500 E ns/openshift-monitoring pod/prometheus-k8s-0 node/host6.cluster9.ocpci.eng.rdu2.redhat.com container/config-reloader reason/ContainerExit code/2 cause/Error level=info ts=2023-01-30T07:14:59.436165893Z caller=main.go:148 msg="Starting prometheus-config-reloader" version="(version=0.49.0, branch=rhaos-4.9-rhel-8, revision=d709566)"\nlevel=info ts=2023-01-30T07:14:59.436291214Z caller=main.go:149 build_context="(go=go1.16.12, user=root, date=20221205-20:41:17)"\nlevel=info ts=2023-01-30T07:14:59.436425203Z caller=main.go:183 msg="Starting web server for metrics" listen=localhost:8080\nlevel=info ts=2023-01-30T07:14:59.654796583Z caller=reloader.go:355 msg="Reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml watched_dirs=/etc/prometheus/rules/prometheus-k8s-rulefiles-0\nlevel=info ts=2023-01-30T07:14:59.654854296Z caller=reloader.go:219 msg="started watching config file and directories for changes" cfg=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml dirs=/etc/prometheus/rules/prometheus-k8s-rulefiles-0\nlevel=info ts=2023-01-30T07:15:02.122593468Z caller=reloader.go:355 msg="Reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml watched_dirs=/etc/prometheus/rules/prometheus-k8s-rulefiles-0\nlevel=info ts=2023-01-30T07:15:04.249148147Z caller=reloader.go:355 msg="Reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml watched_dirs=/etc/prometheus/rules/prometheus-k8s-rulefiles-0\nlevel=info ts=2023-01-30T07:16:29.338061238Z caller=r
Jan 30 08:06:57.528 E ns/openshift-monitoring pod/prometheus-k8s-1 node/host6.cluster9.ocpci.eng.rdu2.redhat.com container/prometheus-proxy reason/ContainerExit code/2 cause/Error 2023/01/30 07:14:59 provider.go:128: Defaulting client-id to system:serviceaccount:openshift-monitoring:prometheus-k8s\n2023/01/30 07:14:59 provider.go:133: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2023/01/30 07:14:59 provider.go:351: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2023/01/30 07:14:59 oauthproxy.go:203: mapping path "/" => upstream "http://localhost:9090/"\n2023/01/30 07:14:59 oauthproxy.go:230: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s\n2023/01/30 07:14:59 oauthproxy.go:240: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> samesite: refresh:disabled\n2023/01/30 07:14:59 main.go:156: using htpasswd file /etc/proxy/htpasswd/auth\n2023/01/30 07:14:59 http.go:107: HTTPS: listening on [::]:9091\nI0130 07:14:59.816737       1 dynamic_serving_content.go:130] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n
Jan 30 08:06:57.528 E ns/openshift-monitoring pod/prometheus-k8s-1 node/host6.cluster9.ocpci.eng.rdu2.redhat.com container/config-reloader reason/ContainerExit code/2 cause/Error level=info ts=2023-01-30T07:14:59.435881029Z caller=main.go:148 msg="Starting prometheus-config-reloader" version="(version=0.49.0, branch=rhaos-4.9-rhel-8, revision=d709566)"\nlevel=info ts=2023-01-30T07:14:59.435924531Z caller=main.go:149 build_context="(go=go1.16.12, user=root, date=20221205-20:41:17)"\nlevel=info ts=2023-01-30T07:14:59.436065761Z caller=main.go:183 msg="Starting web server for metrics" listen=localhost:8080\nlevel=info ts=2023-01-30T07:14:59.663932901Z caller=reloader.go:355 msg="Reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml watched_dirs=/etc/prometheus/rules/prometheus-k8s-rulefiles-0\nlevel=info ts=2023-01-30T07:14:59.664055763Z caller=reloader.go:219 msg="started watching config file and directories for changes" cfg=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml dirs=/etc/prometheus/rules/prometheus-k8s-rulefiles-0\nlevel=info ts=2023-01-30T07:15:02.125982753Z caller=reloader.go:355 msg="Reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml watched_dirs=/etc/prometheus/rules/prometheus-k8s-rulefiles-0\nlevel=info ts=2023-01-30T07:15:04.250336902Z caller=reloader.go:355 msg="Reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml watched_dirs=/etc/prometheus/rules/prometheus-k8s-rulefiles-0\nlevel=info ts=2023-01-30T07:16:19.318770429Z caller=r
#1615676802666074112 (junit, 12 days ago)
Jan 18 13:34:47.169 E ns/openshift-machine-api pod/metal3-7b6b47b7ff-ck6mp node/host3.cluster13.ocpci.eng.rdu2.redhat.com container/ironic-deploy-ramdisk-logs reason/ContainerExit code/137 cause/Error
Jan 18 13:34:47.169 E ns/openshift-machine-api pod/metal3-7b6b47b7ff-ck6mp node/host3.cluster13.ocpci.eng.rdu2.redhat.com container/metal3-mariadb reason/ContainerExit code/137 cause/Error
Jan 18 13:34:47.169 E ns/openshift-machine-api pod/metal3-7b6b47b7ff-ck6mp node/host3.cluster13.ocpci.eng.rdu2.redhat.com container/ironic-inspector-ramdisk-logs reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Jan 18 13:34:47.839 - 5s    E clusteroperator/csi-snapshot-controller condition/Available status/Unknown reason/CSISnapshotControllerAvailable: Waiting for the initial sync of the operator
Jan 18 13:34:56.845 E ns/openshift-machine-api pod/metal3-7f9c78bf9c-lt86q node/host4.cluster13.ocpci.eng.rdu2.redhat.com container/metal3-static-ip-set reason/ContainerExit code/1 cause/Error
Jan 18 13:34:56.859 E ns/openshift-console-operator pod/console-operator-6bbd4fcc8c-5kprp node/host4.cluster13.ocpci.eng.rdu2.redhat.com container/console-operator reason/ContainerExit code/1 cause/Error All pre-shutdown hooks have been finished\nI0118 13:34:54.540470       1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-6bbd4fcc8c-5kprp", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStart' Received signal to terminate, becoming unready, but keeping serving\nI0118 13:34:54.540482       1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-6bbd4fcc8c-5kprp", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationMinimalShutdownDurationFinished' The minimal shutdown duration of 0s finished\nI0118 13:34:54.540494       1 genericapiserver.go:362] "[graceful-termination] shutdown event" name="AfterShutdownDelayDuration"\nI0118 13:34:54.540515       1 base_controller.go:167] Shutting down ConsoleServiceController ...\nI0118 13:34:54.540549       1 configmap_cafile_content.go:222] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"\nI0118 13:34:54.540556       1 base_controller.go:167] Shutting down ManagementStateController ...\nI0118 13:34:54.540564       1 base_controller.go:167] Shutting down HealthCheckController ...\nI0118 13:34:54.540565       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"\nI0118 13:34:54.540566       1 base_controller.go:167] Shutting down ConsoleDownloadsDeploymentSyncController ...\nI0118 13:34:54.540573       1 base_controller.go:167] Shutting down StatusSyncer_console ...\nI0118 13:34:54.540577       1 base_controller.go:145] All StatusSyncer_console post start hooks have been terminated\nW0118 13:34:54.540549       1 builder.go:101] graceful termination failed, controllers failed with error: stopped\nI0118 13:34:54.540580       1 configmap_cafile_content.go:222] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"\n
Jan 18 13:35:00.327 E ns/openshift-ingress-canary pod/ingress-canary-njq4p node/host5.cluster13.ocpci.eng.rdu2.redhat.com container/serve-healthcheck-canary reason/ContainerExit code/2 cause/Error serving on 8888\nserving on 8080\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\n
Jan 18 13:35:03.000 - 1s    E ns/openshift-authentication route/oauth-openshift disruption/ingress-to-oauth-server connection/reused ns/openshift-authentication route/oauth-openshift disruption/ingress-to-oauth-server connection/reused stopped responding to GET requests over reused connections: Get "https://oauth-openshift.apps.cluster13.ocpci.eng.rdu2.redhat.com/healthz": net/http: timeout awaiting response headers
Jan 18 13:35:03.000 - 1s    E ns/openshift-console route/console disruption/ingress-to-console connection/reused ns/openshift-console route/console disruption/ingress-to-console connection/reused stopped responding to GET requests over reused connections: Get "https://console-openshift-console.apps.cluster13.ocpci.eng.rdu2.redhat.com/healthz": net/http: timeout awaiting response headers
Jan 18 13:35:03.000 - 1s    E ns/openshift-image-registry route/test-disruption-reused disruption/image-registry connection/reused ns/openshift-image-registry route/test-disruption-reused disruption/image-registry connection/reused stopped responding to GET requests over reused connections: Get "https://test-disruption-reused-openshift-image-registry.apps.cluster13.ocpci.eng.rdu2.redhat.com/healthz": net/http: timeout awaiting response headers
Jan 18 13:35:03.007 E ns/openshift-controller-manager pod/controller-manager-xzjhg node/host4.cluster13.ocpci.eng.rdu2.redhat.com container/controller-manager reason/ContainerExit code/137 cause/Error r.go:247] Caches are synced for service account \nI0118 12:56:22.348174       1 templateinstance_controller.go:297] Starting TemplateInstance controller\nI0118 12:56:22.353534       1 factory.go:85] deploymentconfig controller caches are synced. Starting workers.\nI0118 12:56:22.362713       1 templateinstance_finalizer.go:194] Starting TemplateInstanceFinalizer controller\nI0118 12:56:22.431307       1 deleted_dockercfg_secrets.go:75] caches synced\nI0118 12:56:22.431314       1 deleted_token_secrets.go:70] caches synced\nI0118 12:56:22.431344       1 docker_registry_service.go:156] caches synced\nI0118 12:56:22.431344       1 create_dockercfg_secrets.go:219] urls found\nI0118 12:56:22.431354       1 create_dockercfg_secrets.go:225] caches synced\nI0118 12:56:22.431384       1 docker_registry_service.go:298] Updating registry URLs from map[172.30.122.53:5000:{} image-registry.openshift-image-registry.svc.cluster.local:5000:{} image-registry.openshift-image-registry.svc:5000:{}] to map[172.30.122.53:5000:{} image-registry.openshift-image-registry.svc.cluster.local:5000:{} image-registry.openshift-image-registry.svc:5000:{}]\nI0118 12:56:22.441769       1 build_controller.go:475] Starting build controller\nI0118 12:56:22.441779       1 build_controller.go:477] OpenShift image registry hostname: image-registry.openshift-image-registry.svc:5000\nE0118 13:34:47.387984       1 imagestream_controller.go:136] Error syncing image stream "openshift/ruby": Operation cannot be fulfilled on imagestream.image.openshift.io "ruby": the image stream was updated from "48696" to "48706"\nE0118 13:34:47.388726       1 imagestream_controller.go:136] Error syncing image stream "openshift/redis": Operation cannot be fulfilled on imagestream.image.openshift.io "redis": the image stream was updated from "48692" to "48703"\nE0118 13:34:47.389405       1 imagestream_controller.go:136] Error syncing image stream "openshift/java": Operation cannot be fulfilled on imagestream.image.openshift.io "java": the image stream was updated from "48695" to "48708"\n
#1615864572634206208 (junit, 12 days ago)
Jan 19 02:05:03.411 E ns/openshift-machine-api pod/metal3-f6688b44-wmzrn node/host4.cluster8.ocpci.eng.rdu2.redhat.com container/metal3-static-ip-manager reason/ContainerExit code/137 cause/Error
Jan 19 02:05:03.411 E ns/openshift-machine-api pod/metal3-f6688b44-wmzrn node/host4.cluster8.ocpci.eng.rdu2.redhat.com container/metal3-mariadb reason/ContainerExit code/137 cause/Error
Jan 19 02:05:03.411 E ns/openshift-machine-api pod/metal3-f6688b44-wmzrn node/host4.cluster8.ocpci.eng.rdu2.redhat.com container/ironic-deploy-ramdisk-logs reason/ContainerExit code/137 cause/Error
Jan 19 02:05:03.411 E ns/openshift-machine-api pod/metal3-f6688b44-wmzrn node/host4.cluster8.ocpci.eng.rdu2.redhat.com container/ironic-inspector-ramdisk-logs reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Jan 19 02:05:08.579 E ns/openshift-monitoring pod/cluster-monitoring-operator-894d44997-7kvrq node/host2.cluster8.ocpci.eng.rdu2.redhat.com container/kube-rbac-proxy reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Jan 19 02:05:09.420 E ns/openshift-console-operator pod/console-operator-6bbd4fcc8c-lq78p node/host4.cluster8.ocpci.eng.rdu2.redhat.com container/console-operator reason/ContainerExit code/1 cause/Error Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-6bbd4fcc8c-lq78p", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationMinimalShutdownDurationFinished' The minimal shutdown duration of 0s finished\nI0119 02:05:06.458655       1 genericapiserver.go:362] "[graceful-termination] shutdown event" name="AfterShutdownDelayDuration"\nI0119 02:05:06.458666       1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-6bbd4fcc8c-lq78p", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStoppedServing' Server has stopped listening\nI0119 02:05:06.458676       1 genericapiserver.go:387] "[graceful-termination] shutdown event" name="InFlightRequestsDrained"\nI0119 02:05:06.459036       1 base_controller.go:167] Shutting down ConsoleDownloadsDeploymentSyncController ...\nI0119 02:05:06.459042       1 base_controller.go:167] Shutting down StatusSyncer_console ...\nI0119 02:05:06.459047       1 base_controller.go:167] Shutting down ResourceSyncController ...\nI0119 02:05:06.459052       1 base_controller.go:167] Shutting down RemoveStaleConditionsController ...\nI0119 02:05:06.459063       1 base_controller.go:167] Shutting down HealthCheckController ...\nI0119 02:05:06.459072       1 base_controller.go:167] Shutting down ConsoleRouteController ...\nI0119 02:05:06.459080       1 base_controller.go:167] Shutting down DownloadsRouteController ...\nI0119 02:05:06.459088       1 base_controller.go:167] Shutting down ConsoleServiceController ...\nI0119 02:05:06.459096       1 base_controller.go:167] Shutting down ConsoleCLIDownloadsController ...\nI0119 02:05:06.459104       1 base_controller.go:167] Shutting down ConsoleServiceController ...\nW0119 02:05:06.459106       1 builder.go:101] graceful termination failed, controllers failed with error: stopped\nI0119 02:05:06.459111       1 base_controller.go:167] Shutting down UnsupportedConfigOverridesController ...\n
Jan 19 02:05:09.420 E ns/openshift-console-operator pod/console-operator-6bbd4fcc8c-lq78p node/host4.cluster8.ocpci.eng.rdu2.redhat.com container/console-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Jan 19 02:05:10.644 E ns/openshift-cluster-storage-operator pod/cluster-storage-operator-59456fcf98-ndrrc node/host2.cluster8.ocpci.eng.rdu2.redhat.com container/cluster-storage-operator reason/ContainerExit code/1 cause/Error 023-01-19 00:57:46.641891351 +0000 UTC))"\nE0119 00:59:54.007283       1 leaderelection.go:330] error retrieving resource lock openshift-cluster-storage-operator/cluster-storage-operator-lock: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-cluster-storage-operator/configmaps/cluster-storage-operator-lock?timeout=1m47s": read tcp 10.129.0.13:46702->172.30.0.1:443: read: connection timed out\nI0119 02:05:10.008274       1 cmd.go:97] Received SIGTERM or SIGINT signal, shutting down controller.\nI0119 02:05:10.008539       1 genericapiserver.go:386] [graceful-termination] RunPreShutdownHooks has completed\nI0119 02:05:10.008565       1 genericapiserver.go:349] "[graceful-termination] shutdown event" name="ShutdownInitiated"\nI0119 02:05:10.008623       1 base_controller.go:167] Shutting down ConfigObserver ...\nI0119 02:05:10.008682       1 base_controller.go:167] Shutting down VSphereProblemDetectorStarter ...\nI0119 02:05:10.008708       1 base_controller.go:167] Shutting down DefaultStorageClassController ...\nI0119 02:05:10.008737       1 base_controller.go:167] Shutting down SnapshotCRDController ...\nI0119 02:05:10.008766       1 base_controller.go:167] Shutting down CSIDriverStarter ...\nI0119 02:05:10.008791       1 base_controller.go:167] Shutting down StatusSyncer_storage ...\nI0119 02:05:10.008810       1 base_controller.go:145] All StatusSyncer_storage post start hooks have been terminated\nI0119 02:05:10.008833       1 base_controller.go:167] Shutting down ManagementStateController ...\nI0119 02:05:10.008845       1 base_controller.go:167] Shutting down LoggingSyncer ...\nW0119 02:05:10.008889       1 builder.go:101] graceful termination failed, controllers failed with error: stopped\nI0119 02:05:10.008908       1 base_controller.go:114] Shutting down worker of LoggingSyncer controller ...\nI0119 02:05:10.008915       1 base_controller.go:104] All LoggingSyncer workers have been terminated\nI0119 02:05:10.008922       1 base_controller.go:114] Shutting down worker of VSphereProblemDetectorStarter controller ...\n
Jan 19 02:05:11.968 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-operator-76f948cf74-lmgb5 node/host2.cluster8.ocpci.eng.rdu2.redhat.com container/csi-snapshot-controller-operator reason/ContainerExit code/1 cause/Error T signal, shutting down controller.\nI0119 02:05:10.127331       1 genericapiserver.go:386] [graceful-termination] RunPreShutdownHooks has completed\nI0119 02:05:10.127395       1 base_controller.go:167] Shutting down StaticResourceController ...\nI0119 02:05:10.127406       1 base_controller.go:167] Shutting down ManagementStateController ...\nI0119 02:05:10.127412       1 base_controller.go:167] Shutting down LoggingSyncer ...\nI0119 02:05:10.127419       1 base_controller.go:167] Shutting down CSISnapshotWebhookController ...\nI0119 02:05:10.127424       1 base_controller.go:167] Shutting down StatusSyncer_csi-snapshot-controller ...\nI0119 02:05:10.127908       1 base_controller.go:145] All StatusSyncer_csi-snapshot-controller post start hooks have been terminated\nI0119 02:05:10.127435       1 base_controller.go:114] Shutting down worker of StaticResourceController controller ...\nI0119 02:05:10.128029       1 base_controller.go:104] All StaticResourceController workers have been terminated\nI0119 02:05:10.127439       1 base_controller.go:114] Shutting down worker of ManagementStateController controller ...\nI0119 02:05:10.128121       1 base_controller.go:104] All ManagementStateController workers have been terminated\nI0119 02:05:10.127442       1 base_controller.go:114] Shutting down worker of LoggingSyncer controller ...\nI0119 02:05:10.128195       1 base_controller.go:104] All LoggingSyncer workers have been terminated\nI0119 02:05:10.127446       1 base_controller.go:114] Shutting down worker of CSISnapshotWebhookController controller ...\nI0119 02:05:10.128268       1 base_controller.go:104] All CSISnapshotWebhookController workers have been terminated\nI0119 02:05:10.127449       1 base_controller.go:114] Shutting down worker of StatusSyncer_csi-snapshot-controller controller ...\nI0119 02:05:10.128338       1 base_controller.go:104] All StatusSyncer_csi-snapshot-controller workers have been terminated\nW0119 02:05:10.127552       1 builder.go:101] graceful termination failed, controllers failed with error: stopped\n
Jan 19 02:05:12.023 E ns/openshift-image-registry pod/cluster-image-registry-operator-864d6d8695-lmfcs node/host2.cluster8.ocpci.eng.rdu2.redhat.com container/cluster-image-registry-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Jan 19 02:05:12.063 E ns/openshift-authentication-operator pod/authentication-operator-57868976d6-978x8 node/host2.cluster8.ocpci.eng.rdu2.redhat.com container/authentication-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)

Found in 16.67% of runs (30.00% of failures) across 18 total runs and 1 job (55.56% failed).