Job:
#OCPBUGS-25331 issue (8 weeks ago): some events are missing time related information - POST
Issue 15675601: some events are missing time related information
Description: Description of problem:
 {code:none}
 Some events have time related information set to null (firstTimestamp, lastTimestamp, eventTime)
 {code}
 Version-Release number of selected component (if applicable):
 {code:none}
 cluster-logging.v5.8.0{code}
 How reproducible:
 {code:none}
 100% {code}
 Steps to Reproduce:
 {code:none}
     1. Stop one of the masters
     2. Start the master
     3. Wait until the environment stabilizes
     4. oc get events -A | grep unknown {code}
 Actual results:
 {code:none}
 oc get events -A | grep unknown
 default                                      <unknown>   Normal    TerminationStart                             namespace/kube-system                                                            Received signal to terminate, becoming unready, but keeping serving
 default                                      <unknown>   Normal    TerminationPreShutdownHooksFinished          namespace/kube-system                                                            All pre-shutdown hooks have been finished
 default                                      <unknown>   Normal    TerminationMinimalShutdownDurationFinished   namespace/kube-system                                                            The minimal shutdown duration of 0s finished
 ....
 {code}
 Expected results:
 {code:none}
     All time related information is set correctly{code}
 Additional info:
 {code:none}
    This causes issues with external monitoring systems. Depending on how events are sorted by timestamp, events with no timestamp either never show up or push other events out of view, so the operator of the environment has trouble seeing what is happening there. {code}
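 A more precise check than grepping the CLI output for <unknown> is to list the Event objects whose time fields are all null. A minimal sketch, assuming jq is available on the workstation (the field names firstTimestamp, lastTimestamp and eventTime are the ones from the core/v1 Event API mentioned above):
 {code:none}
 # List events whose time-related fields are all null, printing namespace, reason and message.
 oc get events -A -o json \
   | jq -r '.items[]
       | select(.firstTimestamp == null and .lastTimestamp == null and .eventTime == null)
       | [.metadata.namespace, .reason, .message]
       | @tsv'
 {code}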
Status: POST
Resolution:
Priority: Minor
Creator: Roman Hodain
Assigned To: Abu H Kashem
#OCPBUGS-27075 issue (2 months ago): some events are missing time related information - New
Issue 15715205: some events are missing time related information
Description: Description of problem:
 {code:none}
 Some events have time related information set to null (firstTimestamp, lastTimestamp, eventTime)
 {code}
 Version-Release number of selected component (if applicable):
 {code:none}
 cluster-logging.v5.8.0{code}
 How reproducible:
 {code:none}
 100% {code}
 Steps to Reproduce:
 {code:none}
     1. Stop one of the masters
     2. Start the master
     3. Wait until the environment stabilizes
     4. oc get events -A | grep unknown {code}
 Actual results:
 {code:none}
 oc get events -A | grep unknown
 default                                      <unknown>   Normal    TerminationStart                             namespace/kube-system                                                            Received signal to terminate, becoming unready, but keeping serving
 default                                      <unknown>   Normal    TerminationPreShutdownHooksFinished          namespace/kube-system                                                            All pre-shutdown hooks have been finished
 default                                      <unknown>   Normal    TerminationMinimalShutdownDurationFinished   namespace/kube-system                                                            The minimal shutdown duration of 0s finished
 ....
 {code}
 Expected results:
 {code:none}
     All time related information is set correctly{code}
 Additional info:
 {code:none}
    This causes issues with external monitoring systems. Depending on how events are sorted by timestamp, events with no timestamp either never show up or push other events out of view, so the operator of the environment has trouble seeing what is happening there. {code}
Status: New
Resolution:
Priority: Minor
Creator: Rahul Gangwar
Assigned To:
#OCPBUGS-27074 issue (2 months ago): some events are missing time related information - New
Issue 15715202: some events are missing time related information
Description: Description of problem:
 {code:none}
 Some events have time related information set to null (firstTimestamp, lastTimestamp, eventTime)
 {code}
 Version-Release number of selected component (if applicable):
 {code:none}
 cluster-logging.v5.8.0{code}
 How reproducible:
 {code:none}
 100% {code}
 Steps to Reproduce:
 {code:none}
     1. Stop one of the masters
     2. Start the master
     3. Wait until the environment stabilizes
     4. oc get events -A | grep unknown {code}
 Actual results:
 {code:none}
 oc get events -A | grep unknown
 default                                      <unknown>   Normal    TerminationStart                             namespace/kube-system                                                            Received signal to terminate, becoming unready, but keeping serving
 default                                      <unknown>   Normal    TerminationPreShutdownHooksFinished          namespace/kube-system                                                            All pre-shutdown hooks have been finished
 default                                      <unknown>   Normal    TerminationMinimalShutdownDurationFinished   namespace/kube-system                                                            The minimal shutdown duration of 0s finished
 ....
 {code}
 Expected results:
 {code:none}
     All time related information is set correctly{code}
 Additional info:
 {code:none}
    This causes issues with external monitoring systems. Depending on how events are sorted by timestamp, events with no timestamp either never show up or push other events out of view, so the operator of the environment has trouble seeing what is happening there. {code}
Status: New
Resolution:
Priority: Minor
Creator: Rahul Gangwar
Assigned To: Abu H Kashem
periodic-ci-openshift-release-master-ci-4.10-upgrade-from-stable-4.9-e2e-aws-upgrade (all) - 7 runs, 57% failed, 50% of failures match = 29% impact
#1772175612132200448 junit (3 days ago)
Mar 25 09:28:49.440 E ns/openshift-controller-manager-operator pod/openshift-controller-manager-operator-67986c586c-2qc8b node/ip-10-0-184-158.ec2.internal container/openshift-controller-manager-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Mar 25 09:29:05.021 E ns/openshift-kube-storage-version-migrator pod/migrator-699454c6d8-vvkgr node/ip-10-0-241-143.ec2.internal container/migrator reason/ContainerExit code/2 cause/Error I0325 08:29:15.002883       1 migrator.go:18] FLAG: --add_dir_header="false"\nI0325 08:29:15.002957       1 migrator.go:18] FLAG: --alsologtostderr="true"\nI0325 08:29:15.002961       1 migrator.go:18] FLAG: --kube-api-burst="1000"\nI0325 08:29:15.002965       1 migrator.go:18] FLAG: --kube-api-qps="40"\nI0325 08:29:15.002969       1 migrator.go:18] FLAG: --kubeconfig=""\nI0325 08:29:15.002973       1 migrator.go:18] FLAG: --log_backtrace_at=":0"\nI0325 08:29:15.002978       1 migrator.go:18] FLAG: --log_dir=""\nI0325 08:29:15.002981       1 migrator.go:18] FLAG: --log_file=""\nI0325 08:29:15.002984       1 migrator.go:18] FLAG: --log_file_max_size="1800"\nI0325 08:29:15.002987       1 migrator.go:18] FLAG: --logtostderr="true"\nI0325 08:29:15.002990       1 migrator.go:18] FLAG: --one_output="false"\nI0325 08:29:15.002993       1 migrator.go:18] FLAG: --skip_headers="false"\nI0325 08:29:15.002995       1 migrator.go:18] FLAG: --skip_log_headers="false"\nI0325 08:29:15.002998       1 migrator.go:18] FLAG: --stderrthreshold="2"\nI0325 08:29:15.003001       1 migrator.go:18] FLAG: --v="2"\nI0325 08:29:15.003004       1 migrator.go:18] FLAG: --vmodule=""\nI0325 08:29:15.003937       1 reflector.go:219] Starting reflector *v1alpha1.StorageVersionMigration (0s) from k8s.io/client-go@v0.21.0/tools/cache/reflector.go:167\nI0325 08:29:27.118638       1 kubemigrator.go:110] flowcontrol-flowschema-storage-version-migration: migration running\nI0325 08:29:27.192348       1 kubemigrator.go:127] flowcontrol-flowschema-storage-version-migration: migration succeeded\nI0325 08:29:28.198344       1 kubemigrator.go:110] flowcontrol-prioritylevel-storage-version-migration: migration running\nI0325 08:29:28.240842       1 kubemigrator.go:127] flowcontrol-prioritylevel-storage-version-migration: migration succeeded\nI0325 08:37:11.680384       1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF\n
Mar 25 09:29:09.792 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-operator-5c4c46d757-74mg5 node/ip-10-0-184-158.ec2.internal container/csi-snapshot-controller-operator reason/ContainerExit code/1 cause/Error erator at 78.145269ms\nI0325 09:29:07.182413       1 operator.go:157] Starting syncing operator at 2024-03-25 09:29:07.182386544 +0000 UTC m=+3607.873248154\nI0325 09:29:07.419051       1 operator.go:159] Finished syncing operator at 236.653663ms\nI0325 09:29:07.422281       1 operator.go:157] Starting syncing operator at 2024-03-25 09:29:07.422273701 +0000 UTC m=+3608.113135311\nI0325 09:29:07.483983       1 cmd.go:97] Received SIGTERM or SIGINT signal, shutting down controller.\nI0325 09:29:07.485386       1 genericapiserver.go:386] [graceful-termination] RunPreShutdownHooks has completed\nI0325 09:29:07.485453       1 genericapiserver.go:349] "[graceful-termination] shutdown event" name="ShutdownInitiated"\nI0325 09:29:07.485485       1 genericapiserver.go:352] "[graceful-termination] shutdown event" name="AfterShutdownDelayDuration"\nI0325 09:29:07.485514       1 genericapiserver.go:376] "[graceful-termination] shutdown event" name="InFlightRequestsDrained"\nI0325 09:29:07.485604       1 requestheader_controller.go:183] Shutting down RequestHeaderAuthRequestController\nI0325 09:29:07.485643       1 configmap_cafile_content.go:222] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"\nI0325 09:29:07.485671       1 configmap_cafile_content.go:222] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"\nI0325 09:29:07.485739       1 secure_serving.go:311] Stopped listening on [::]:8443\nI0325 09:29:07.485769       1 genericapiserver.go:363] "[graceful-termination] shutdown event" name="HTTPServerStoppedListening"\nI0325 09:29:07.485789       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"\nI0325 09:29:07.485814       1 dynamic_serving_content.go:144] "Shutting down controller" name="serving-cert::/tmp/serving-cert-044619323/tls.crt::/tmp/serving-cert-044619323/tls.key"\nW0325 09:29:07.485857       1 builder.go:101] graceful termination failed, controllers failed with error: stopped\n
Mar 25 09:29:10.878 E ns/openshift-cluster-storage-operator pod/cluster-storage-operator-6dc5944df7-hh7b2 node/ip-10-0-184-158.ec2.internal container/cluster-storage-operator reason/ContainerExit code/1 cause/Error  gp2 found, reconciling\nI0325 09:17:14.917335       1 controller.go:174] Existing StorageClass gp2 found, reconciling\nI0325 09:27:13.611876       1 controller.go:174] Existing StorageClass gp2 found, reconciling\nI0325 09:29:07.508718       1 cmd.go:97] Received SIGTERM or SIGINT signal, shutting down controller.\nI0325 09:29:07.508768       1 genericapiserver.go:386] [graceful-termination] RunPreShutdownHooks has completed\nI0325 09:29:07.508799       1 genericapiserver.go:349] "[graceful-termination] shutdown event" name="ShutdownInitiated"\nI0325 09:29:07.509088       1 configmap_cafile_content.go:222] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"\nI0325 09:29:07.509102       1 requestheader_controller.go:183] Shutting down RequestHeaderAuthRequestController\nI0325 09:29:07.509117       1 base_controller.go:167] Shutting down SnapshotCRDController ...\nI0325 09:29:07.509129       1 base_controller.go:167] Shutting down CSIDriverStarter ...\nI0325 09:29:07.509136       1 base_controller.go:167] Shutting down AWSEBSCSIDriverOperatorDeployment ...\nI0325 09:29:07.509141       1 base_controller.go:145] All AWSEBSCSIDriverOperatorDeployment post start hooks have been terminated\nI0325 09:29:07.509171       1 base_controller.go:167] Shutting down StatusSyncer_storage ...\nI0325 09:29:07.509232       1 base_controller.go:145] All StatusSyncer_storage post start hooks have been terminated\nI0325 09:29:07.509243       1 base_controller.go:167] Shutting down DefaultStorageClassController ...\nI0325 09:29:07.509253       1 base_controller.go:167] Shutting down ConfigObserver ...\nI0325 09:29:07.509261       1 base_controller.go:167] Shutting down ManagementStateController ...\nI0325 09:29:07.509270       1 base_controller.go:167] Shutting down LoggingSyncer ...\nI0325 09:29:07.509279       1 base_controller.go:167] Shutting down VSphereProblemDetectorStarter ...\nW0325 09:29:07.509403       1 builder.go:101] graceful termination failed, controllers failed with error: stopped\n
Mar 25 09:29:10.878 E ns/openshift-cluster-storage-operator pod/cluster-storage-operator-6dc5944df7-hh7b2 node/ip-10-0-184-158.ec2.internal container/cluster-storage-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Mar 25 09:29:11.104 E ns/openshift-console-operator pod/console-operator-6f97f996df-fb49m node/ip-10-0-241-143.ec2.internal container/console-operator reason/ContainerExit code/1 cause/Error ksFinished' All pre-shutdown hooks have been finished\nI0325 09:29:10.012067       1 genericapiserver.go:355] "[graceful-termination] shutdown event" name="ShutdownInitiated"\nI0325 09:29:10.012103       1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-6f97f996df-fb49m", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStart' Received signal to terminate, becoming unready, but keeping serving\nI0325 09:29:10.012137       1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-6f97f996df-fb49m", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationMinimalShutdownDurationFinished' The minimal shutdown duration of 0s finished\nI0325 09:29:10.012167       1 genericapiserver.go:362] "[graceful-termination] shutdown event" name="AfterShutdownDelayDuration"\nI0325 09:29:10.012215       1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-6f97f996df-fb49m", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStoppedServing' Server has stopped listening\nI0325 09:29:10.012246       1 genericapiserver.go:387] "[graceful-termination] shutdown event" name="InFlightRequestsDrained"\nI0325 09:29:10.012283       1 base_controller.go:167] Shutting down ConsoleOperator ...\nI0325 09:29:10.012309       1 base_controller.go:167] Shutting down ConsoleCLIDownloadsController ...\nI0325 09:29:10.012331       1 base_controller.go:167] Shutting down ConsoleDownloadsDeploymentSyncController ...\nW0325 09:29:10.012358       1 builder.go:101] graceful termination failed, controllers failed with error: stopped\nI0325 09:29:10.012391       1 base_controller.go:167] Shutting down ConsoleServiceController ...\nI0325 09:29:10.012405       1 base_controller.go:167] Shutting down DownloadsRouteController ...\n
Mar 25 09:29:11.211 E ns/openshift-monitoring pod/cluster-monitoring-operator-65c74c8b57-k7w9c node/ip-10-0-184-158.ec2.internal container/kube-rbac-proxy reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Mar 25 09:29:12.976 E ns/openshift-ingress-canary pod/ingress-canary-g9hh5 node/ip-10-0-131-183.ec2.internal container/serve-healthcheck-canary reason/ContainerExit code/2 cause/Error serving on 8888\nserving on 8080\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\n
Mar 25 09:29:21.129 E ns/openshift-controller-manager pod/controller-manager-rvgc7 node/ip-10-0-241-143.ec2.internal container/controller-manager reason/ContainerExit code/137 cause/Error rcfg_secrets.go:225] caches synced\nI0325 08:40:52.279043       1 deleted_token_secrets.go:70] caches synced\nI0325 08:40:52.279173       1 docker_registry_service.go:298] Updating registry URLs from map[172.30.194.189:5000:{} image-registry.openshift-image-registry.svc.cluster.local:5000:{} image-registry.openshift-image-registry.svc:5000:{}] to map[172.30.194.189:5000:{} image-registry.openshift-image-registry.svc.cluster.local:5000:{} image-registry.openshift-image-registry.svc:5000:{}]\nI0325 08:40:52.323268       1 build_controller.go:475] Starting build controller\nI0325 08:40:52.323354       1 build_controller.go:477] OpenShift image registry hostname: image-registry.openshift-image-registry.svc:5000\nE0325 09:29:06.358662       1 imagestream_controller.go:136] Error syncing image stream "openshift/java": Operation cannot be fulfilled on imagestream.image.openshift.io "java": the image stream was updated from "45974" to "45988"\nE0325 09:29:06.694920       1 imagestream_controller.go:136] Error syncing image stream "openshift/rhpam-kieserver-rhel8": Operation cannot be fulfilled on imagestreamimports.image.openshift.io "rhpam-kieserver-rhel8": the object has been modified; please apply your changes to the latest version and try again\nE0325 09:29:06.844229       1 imagestream_controller.go:136] Error syncing image stream "openshift/java": Operation cannot be fulfilled on imagestream.image.openshift.io "java": the image stream was updated from "45988" to "46053"\nE0325 09:29:07.240392       1 imagestream_controller.go:136] Error syncing image stream "openshift/rhpam-kieserver-rhel8": Operation cannot be fulfilled on imagestream.image.openshift.io "rhpam-kieserver-rhel8": the image stream was updated from "45992" to "46141"\nE0325 09:29:07.471523       1 imagestream_controller.go:136] Error syncing image stream "openshift/dotnet-runtime": Operation cannot be fulfilled on imagestreamimports.image.openshift.io "dotnet-runtime": the object has been modified; please apply your changes to the latest version and try again\n
Mar 25 09:29:22.921 E ns/openshift-controller-manager pod/controller-manager-ccdz6 node/ip-10-0-180-199.ec2.internal container/controller-manager reason/ContainerExit code/137 cause/Error I0325 08:38:11.343888       1 controller_manager.go:39] Starting controllers on 0.0.0.0:8443 (4.9.0-202303281553.p0.g79857a3.assembly.stream-79857a3)\nI0325 08:38:11.345638       1 controller_manager.go:50] DeploymentConfig controller using images from "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d7332c3bce3c868a39d04551d01a6654ecc81622d5624d102c804259068fe38f"\nI0325 08:38:11.345655       1 controller_manager.go:56] Build controller using images from "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee93b0b87823944fd0b4192854eefc73a539e5539f20d0de47c907362102ece2"\nI0325 08:38:11.345999       1 standalone_apiserver.go:104] Started health checks at 0.0.0.0:8443\nI0325 08:38:11.346520       1 leaderelection.go:248] attempting to acquire leader lease openshift-controller-manager/openshift-master-controllers...\n
Mar 25 09:29:23.759 E ns/openshift-ingress-canary pod/ingress-canary-4xfk9 node/ip-10-0-140-246.ec2.internal container/serve-healthcheck-canary reason/ContainerExit code/2 cause/Error serving on 8888\nserving on 8080\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\n
#1768676139175251968 junit (13 days ago)
Mar 15 17:29:15.339 E ns/openshift-machine-api pod/machine-api-operator-95484886b-tpplp node/ip-10-0-206-242.us-east-2.compute.internal container/machine-api-operator reason/ContainerExit code/2 cause/Error
Mar 15 17:31:47.926 E ns/openshift-machine-api pod/machine-api-controllers-bdb5d4fb8-546dl node/ip-10-0-175-174.us-east-2.compute.internal container/machineset-controller reason/ContainerExit code/1 cause/Error
Mar 15 17:32:29.022 E ns/openshift-kube-storage-version-migrator-operator pod/kube-storage-version-migrator-operator-6b78958844-2npqx node/ip-10-0-206-242.us-east-2.compute.internal container/kube-storage-version-migrator-operator reason/ContainerExit code/1 cause/Error KubeStorageVersionMigratorStaticResourcesDegraded: \"kube-storage-version-migrator/namespace.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeStorageVersionMigratorStaticResourcesDegraded: \"kube-storage-version-migrator/serviceaccount.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator/serviceaccounts/kube-storage-version-migrator-sa\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeStorageVersionMigratorStaticResourcesDegraded: \"kube-storage-version-migrator/roles.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/storage-version-migration-migrator\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeStorageVersionMigratorStaticResourcesDegraded: " to "All is well"\nE0315 16:54:59.176290       1 leaderelection.go:325] error retrieving resource lock openshift-kube-storage-version-migrator-operator/openshift-kube-storage-version-migrator-operator-lock: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/configmaps/openshift-kube-storage-version-migrator-operator-lock?timeout=1m47s": dial tcp 172.30.0.1:443: connect: connection refused\nI0315 17:32:27.976995       1 cmd.go:97] Received SIGTERM or SIGINT signal, shutting down controller.\nI0315 17:32:27.977184       1 base_controller.go:167] Shutting down RemoveStaleConditionsController ...\nI0315 17:32:27.977201       1 reflector.go:225] Stopping reflector *v1.ClusterOperator (10m0s) from k8s.io/client-go@v0.21.1/tools/cache/reflector.go:167\nI0315 17:32:27.977224       1 base_controller.go:114] Shutting down worker of RemoveStaleConditionsController controller ...\nI0315 17:32:27.977230       1 base_controller.go:104] All RemoveStaleConditionsController workers have been terminated\nW0315 17:32:27.977240       1 builder.go:101] graceful termination failed, controllers failed with error: stopped\n
Mar 15 17:32:40.080 E ns/openshift-insights pod/insights-operator-5b469d8cb6-2ccgn node/ip-10-0-206-242.us-east-2.compute.internal container/insights-operator reason/ContainerExit code/2 cause/Error cIP="10.131.0.25:44252" resp=200\nI0315 17:30:55.802204       1 httplog.go:104] "HTTP" verb="GET" URI="/metrics" latency="3.329533ms" userAgent="Prometheus/2.29.2" audit-ID="166d746c-0465-471f-b4e4-69e1ac687d9c" srcIP="10.128.2.11:41874" resp=200\nI0315 17:30:58.821305       1 reflector.go:535] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Watch close - *v1.ConfigMap total 11 items received\nI0315 17:31:16.993595       1 httplog.go:104] "HTTP" verb="GET" URI="/metrics" latency="3.75701ms" userAgent="Prometheus/2.29.2" audit-ID="02515bcc-7e51-4a35-8036-9d3771b55b14" srcIP="10.131.0.25:44252" resp=200\nI0315 17:31:25.805459       1 httplog.go:104] "HTTP" verb="GET" URI="/metrics" latency="6.577645ms" userAgent="Prometheus/2.29.2" audit-ID="5cd09a43-8a5f-4c87-8da6-2c06e22a889b" srcIP="10.128.2.11:41874" resp=200\nI0315 17:31:46.997341       1 httplog.go:104] "HTTP" verb="GET" URI="/metrics" latency="7.49746ms" userAgent="Prometheus/2.29.2" audit-ID="9f552cc7-a145-412b-b92e-850ce081fe32" srcIP="10.131.0.25:44252" resp=200\nI0315 17:31:47.821949       1 reflector.go:535] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Watch close - *v1.ConfigMap total 10 items received\nI0315 17:31:54.177768       1 status.go:354] The operator is healthy\nI0315 17:31:54.177831       1 status.go:441] No status update necessary, objects are identical\nI0315 17:31:55.802470       1 httplog.go:104] "HTTP" verb="GET" URI="/metrics" latency="3.044689ms" userAgent="Prometheus/2.29.2" audit-ID="8d30b0b8-9f58-44db-96fd-a923d3d0b560" srcIP="10.128.2.11:41874" resp=200\nI0315 17:32:16.993359       1 httplog.go:104] "HTTP" verb="GET" URI="/metrics" latency="3.712889ms" userAgent="Prometheus/2.29.2" audit-ID="b94ae060-dac8-4773-bf9f-57214fd1bf85" srcIP="10.131.0.25:44252" resp=200\nI0315 17:32:25.805579       1 httplog.go:104] "HTTP" verb="GET" URI="/metrics" latency="6.825109ms" userAgent="Prometheus/2.29.2" audit-ID="58b70272-72a6-47a4-b44e-ea25f192d9b7" srcIP="10.128.2.11:41874" resp=200\n
Mar 15 17:32:40.080 E ns/openshift-insights pod/insights-operator-5b469d8cb6-2ccgn node/ip-10-0-206-242.us-east-2.compute.internal container/insights-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Mar 15 17:32:49.001 E ns/openshift-console-operator pod/console-operator-6f97f996df-bq948 node/ip-10-0-133-253.us-east-2.compute.internal container/console-operator reason/ContainerExit code/1 cause/Error "[graceful-termination] shutdown event" name="ShutdownInitiated"\nI0315 17:32:47.070121       1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-6f97f996df-bq948", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStart' Received signal to terminate, becoming unready, but keeping serving\nI0315 17:32:47.070622       1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-6f97f996df-bq948", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationMinimalShutdownDurationFinished' The minimal shutdown duration of 0s finished\nI0315 17:32:47.070953       1 genericapiserver.go:362] "[graceful-termination] shutdown event" name="AfterShutdownDelayDuration"\nI0315 17:32:47.071007       1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-6f97f996df-bq948", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStoppedServing' Server has stopped listening\nI0315 17:32:47.071053       1 genericapiserver.go:387] "[graceful-termination] shutdown event" name="InFlightRequestsDrained"\nI0315 17:32:47.071102       1 base_controller.go:167] Shutting down ConsoleDownloadsDeploymentSyncController ...\nI0315 17:32:47.071150       1 base_controller.go:167] Shutting down ConsoleOperator ...\nI0315 17:32:47.071201       1 base_controller.go:167] Shutting down ConsoleCLIDownloadsController ...\nI0315 17:32:47.071245       1 base_controller.go:167] Shutting down DownloadsRouteController ...\nI0315 17:32:47.071273       1 base_controller.go:167] Shutting down ConsoleServiceController ...\nW0315 17:32:47.071276       1 builder.go:101] graceful termination failed, controllers failed with error: stopped\nI0315 17:32:47.071320       1 base_controller.go:114] Shutting down worker of ConsoleOperator controller ...\n
Mar 15 17:33:14.206 E ns/openshift-controller-manager-operator pod/openshift-controller-manager-operator-67986c586c-w2hs4 node/ip-10-0-206-242.us-east-2.compute.internal container/openshift-controller-manager-operator reason/ContainerExit code/1 cause/Error  17:33:13.349338       1 base_controller.go:145] All StatusSyncer_openshift-controller-manager post start hooks have been terminated\nI0315 17:33:13.349370       1 reflector.go:225] Stopping reflector *v1.Secret (10m0s) from k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167\nI0315 17:33:13.349402       1 reflector.go:225] Stopping reflector *v1.ClusterRoleBinding (10m0s) from k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167\nI0315 17:33:13.349428       1 reflector.go:225] Stopping reflector *v1.OpenShiftControllerManager (10m0s) from k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167\nI0315 17:33:13.349463       1 reflector.go:225] Stopping reflector *v1.ConfigMap (10m0s) from k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167\nI0315 17:33:13.349503       1 reflector.go:225] Stopping reflector *v1.Role (10m0s) from k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167\nI0315 17:33:13.349533       1 reflector.go:225] Stopping reflector *v1.RoleBinding (10m0s) from k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167\nI0315 17:33:13.349556       1 reflector.go:225] Stopping reflector *v1.Network (10m0s) from k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167\nI0315 17:33:13.349595       1 reflector.go:225] Stopping reflector *v1.ClusterRole (10m0s) from k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167\nI0315 17:33:13.349602       1 operator.go:115] Shutting down OpenShiftControllerManagerOperator\nI0315 17:33:13.349680       1 reflector.go:225] Stopping reflector *v1.RoleBinding (10m0s) from k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167\nI0315 17:33:13.349713       1 reflector.go:225] Stopping reflector *v1.Role (10m0s) from k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167\nI0315 17:33:13.349744       1 reflector.go:225] Stopping reflector *v1.ServiceAccount (10m0s) from k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167\nW0315 17:33:13.349757       1 builder.go:101] graceful termination failed, controllers failed with error: stopped\n
Mar 15 17:33:22.215 E ns/openshift-cluster-storage-operator pod/cluster-storage-operator-6dc5944df7-zhd78 node/ip-10-0-206-242.us-east-2.compute.internal container/cluster-storage-operator reason/ContainerExit code/1 cause/Error :21.426797       1 configmap_cafile_content.go:222] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"\nI0315 17:33:21.426812       1 base_controller.go:167] Shutting down SnapshotCRDController ...\nI0315 17:33:21.426814       1 requestheader_controller.go:183] Shutting down RequestHeaderAuthRequestController\nI0315 17:33:21.426822       1 base_controller.go:167] Shutting down VSphereProblemDetectorStarter ...\nI0315 17:33:21.426827       1 configmap_cafile_content.go:222] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"\nI0315 17:33:21.426831       1 base_controller.go:167] Shutting down ConfigObserver ...\nI0315 17:33:21.426843       1 base_controller.go:167] Shutting down ManagementStateController ...\nI0315 17:33:21.426851       1 base_controller.go:167] Shutting down CSIDriverStarter ...\nI0315 17:33:21.426859       1 base_controller.go:167] Shutting down AWSEBSCSIDriverOperatorDeployment ...\nI0315 17:33:21.426863       1 base_controller.go:145] All AWSEBSCSIDriverOperatorDeployment post start hooks have been terminated\nI0315 17:33:21.426871       1 base_controller.go:167] Shutting down LoggingSyncer ...\nI0315 17:33:21.426881       1 base_controller.go:167] Shutting down DefaultStorageClassController ...\nI0315 17:33:21.426890       1 base_controller.go:167] Shutting down StatusSyncer_storage ...\nI0315 17:33:21.426900       1 base_controller.go:145] All StatusSyncer_storage post start hooks have been terminated\nI0315 17:33:21.427040       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"\nI0315 17:33:21.427061       1 dynamic_serving_content.go:144] "Shutting down controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key"\nI0315 17:33:21.427079       1 secure_serving.go:311] Stopped listening on [::]:8443\nW0315 17:33:21.427162       1 builder.go:101] graceful termination failed, controllers failed with error: stopped\n
Mar 15 17:33:25.380 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-operator-5c4c46d757-scq79 node/ip-10-0-206-242.us-east-2.compute.internal container/csi-snapshot-controller-operator reason/ContainerExit code/1 cause/Error rator.go:159] Finished syncing operator at 34.310319ms\nI0315 17:33:21.402917       1 operator.go:157] Starting syncing operator at 2024-03-15 17:33:21.402907632 +0000 UTC m=+3035.995704464\nI0315 17:33:21.475616       1 operator.go:159] Finished syncing operator at 72.701072ms\nI0315 17:33:21.475656       1 operator.go:157] Starting syncing operator at 2024-03-15 17:33:21.475652445 +0000 UTC m=+3036.068449277\nI0315 17:33:21.506423       1 operator.go:159] Finished syncing operator at 30.763702ms\nI0315 17:33:23.978694       1 operator.go:157] Starting syncing operator at 2024-03-15 17:33:23.978683605 +0000 UTC m=+3038.571480437\nI0315 17:33:24.050043       1 operator.go:159] Finished syncing operator at 71.350931ms\nI0315 17:33:24.110904       1 cmd.go:97] Received SIGTERM or SIGINT signal, shutting down controller.\nI0315 17:33:24.111236       1 genericapiserver.go:386] [graceful-termination] RunPreShutdownHooks has completed\nI0315 17:33:24.111255       1 genericapiserver.go:349] "[graceful-termination] shutdown event" name="ShutdownInitiated"\nI0315 17:33:24.111264       1 genericapiserver.go:352] "[graceful-termination] shutdown event" name="AfterShutdownDelayDuration"\nI0315 17:33:24.111280       1 genericapiserver.go:376] "[graceful-termination] shutdown event" name="InFlightRequestsDrained"\nI0315 17:33:24.111616       1 base_controller.go:167] Shutting down CSISnapshotWebhookController ...\nI0315 17:33:24.111633       1 base_controller.go:167] Shutting down StatusSyncer_csi-snapshot-controller ...\nI0315 17:33:24.111638       1 base_controller.go:145] All StatusSyncer_csi-snapshot-controller post start hooks have been terminated\nI0315 17:33:24.111646       1 base_controller.go:167] Shutting down StaticResourceController ...\nI0315 17:33:24.111651       1 base_controller.go:167] Shutting down ManagementStateController ...\nI0315 17:33:24.111657       1 base_controller.go:167] Shutting down LoggingSyncer ...\nW0315 17:33:24.111664       1 builder.go:101] graceful termination failed, controllers failed with error: stopped\n
Mar 15 17:33:26.379 E ns/openshift-monitoring pod/cluster-monitoring-operator-65c74c8b57-s9wq6 node/ip-10-0-206-242.us-east-2.compute.internal container/kube-rbac-proxy reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Mar 15 17:33:26.953 - 4s    E clusteroperator/csi-snapshot-controller condition/Available status/Unknown reason/CSISnapshotControllerAvailable: Waiting for the initial sync of the operator

Found in 28.57% of runs (50.00% of failures) across 7 total runs and 1 job (57.14% failed).