Job:
#OCPBUGS-27075 issue - 3 months ago - some events are missing time related information - CLOSED
Issue 15715205: some events are missing time related information
Description: Description of problem:
 {code:none}
 Some events have time-related information set to null (firstTimestamp, lastTimestamp, eventTime)
 {code}
 Version-Release number of selected component (if applicable):
 {code:none}
 cluster-logging.v5.8.0{code}
 How reproducible:
 {code:none}
 100% {code}
 Steps to Reproduce:
 {code:none}
     1. Stop one of the masters
     2. Start the master
     3. Wait until the environment stabilizes
     4. oc get events -A | grep unknown     {code}
 Actual results:
 {code:none}
 oc get events -A | grep unknown
 default                                      <unknown>   Normal    TerminationStart                             namespace/kube-system                                                            Received signal to terminate, becoming unready, but keeping serving
 default                                      <unknown>   Normal    TerminationPreShutdownHooksFinished          namespace/kube-system                                                            All pre-shutdown hooks have been finished
 default                                      <unknown>   Normal    TerminationMinimalShutdownDurationFinished   namespace/kube-system                                                            The minimal shutdown duration of 0s finished
 ....
 {code}
 Expected results:
 {code:none}
     All time-related information is set correctly{code}
 Additional info:
 {code:none}
    This causes issues with external monitoring systems. Depending on the timestamp sort order, events without time information either never appear or push other events out of view, so the environment's operator has trouble seeing what is happening. {code}
Status: CLOSED
Resolution:
Priority: Minor
Creator: Rahul Gangwar
Assigned To:
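The `grep` check in step 4 above can also be done programmatically. The sketch below (Python, using hypothetical sample data shaped like items from `oc get events -A -o json`; the event names are made up) flags events whose time-related fields are all null:

```python
# Hypothetical sample items, shaped like `oc get events -A -o json` output.
# Events emitted during apiserver graceful termination (TerminationStart,
# TerminationPreShutdownHooksFinished, ...) are the ones observed with
# null time fields in this bug.
events = [
    {"metadata": {"namespace": "default", "name": "apiserver.17a"},
     "reason": "TerminationStart",
     "firstTimestamp": None, "lastTimestamp": None, "eventTime": None},
    {"metadata": {"namespace": "kube-system", "name": "scheduler.17b"},
     "reason": "Scheduled",
     "firstTimestamp": "2024-05-05T02:47:41Z",
     "lastTimestamp": "2024-05-05T02:47:41Z",
     "eventTime": None},
]

def missing_time_info(event):
    """True when every time-related field is absent or null."""
    return all(event.get(field) is None
               for field in ("firstTimestamp", "lastTimestamp", "eventTime"))

broken = [e["reason"] for e in events if missing_time_info(e)]
print(broken)  # ['TerminationStart']
```

A check like this is useful on the monitoring side as a guard before sorting or displaying events.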
#OCPBUGS-27074 issue - 3 weeks ago - some events are missing time related information - New
Issue 15715202: some events are missing time related information
Description: Description of problem:
 {code:none}
 Some events have time-related information set to null (firstTimestamp, lastTimestamp, eventTime)
 {code}
 Version-Release number of selected component (if applicable):
 {code:none}
 cluster-logging.v5.8.0{code}
 How reproducible:
 {code:none}
 100% {code}
 Steps to Reproduce:
 {code:none}
     1. Stop one of the masters
     2. Start the master
     3. Wait until the environment stabilizes
     4. oc get events -A | grep unknown     {code}
 Actual results:
 {code:none}
 oc get events -A | grep unknown
 default                                      <unknown>   Normal    TerminationStart                             namespace/kube-system                                                            Received signal to terminate, becoming unready, but keeping serving
 default                                      <unknown>   Normal    TerminationPreShutdownHooksFinished          namespace/kube-system                                                            All pre-shutdown hooks have been finished
 default                                      <unknown>   Normal    TerminationMinimalShutdownDurationFinished   namespace/kube-system                                                            The minimal shutdown duration of 0s finished
 ....
 {code}
 Expected results:
 {code:none}
     All time-related information is set correctly{code}
 Additional info:
 {code:none}
    This causes issues with external monitoring systems. Depending on the timestamp sort order, events without time information either never appear or push other events out of view, so the environment's operator has trouble seeing what is happening. {code}
Status: New
Resolution:
Priority: Minor
Creator: Rahul Gangwar
Assigned To: Abu H Kashem
#OCPBUGS-25331 issue - 3 weeks ago - some events are missing time related information - POST
Issue 15675601: some events are missing time related information
Description: Description of problem:
 {code:none}
 Some events have time-related information set to null (firstTimestamp, lastTimestamp, eventTime)
 {code}
 Version-Release number of selected component (if applicable):
 {code:none}
 cluster-logging.v5.8.0{code}
 How reproducible:
 {code:none}
 100% {code}
 Steps to Reproduce:
 {code:none}
     1. Stop one of the masters
     2. Start the master
     3. Wait until the environment stabilizes
     4. oc get events -A | grep unknown     {code}
 Actual results:
 {code:none}
 oc get events -A | grep unknown
 default                                      <unknown>   Normal    TerminationStart                             namespace/kube-system                                                            Received signal to terminate, becoming unready, but keeping serving
 default                                      <unknown>   Normal    TerminationPreShutdownHooksFinished          namespace/kube-system                                                            All pre-shutdown hooks have been finished
 default                                      <unknown>   Normal    TerminationMinimalShutdownDurationFinished   namespace/kube-system                                                            The minimal shutdown duration of 0s finished
 ....
 {code}
 Expected results:
 {code:none}
     All time-related information is set correctly{code}
 Additional info:
 {code:none}
    This causes issues with external monitoring systems. Depending on the timestamp sort order, events without time information either never appear or push other events out of view, so the environment's operator has trouble seeing what is happening. {code}
Status: POST
Resolution:
Priority: Minor
Creator: Roman Hodain
Assigned To: Abu H Kashem
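The monitoring-side effect described under "Additional info" can be illustrated with a short sketch (Python; the sample events and the epoch fallback are illustrative assumptions, not taken from the report). When a consumer sorts newest-first and maps null timestamps to a fallback value, the affected events sink out of view:

```python
from datetime import datetime, timezone

# Hypothetical (reason, lastTimestamp-or-None) pairs; TerminationStart is
# one of the events observed with null time fields in this bug.
events = [
    ("TerminationStart", None),
    ("Scheduled", "2024-05-05T02:47:41Z"),
    ("Pulled", "2024-05-05T02:50:33Z"),
]

EPOCH = datetime.fromtimestamp(0, tz=timezone.utc)

def sort_key(event):
    # Null timestamps force the consumer to pick a fallback; using the
    # epoch makes such events sort as the oldest entries, so a bounded
    # newest-first view may never show them at all.
    _, ts = event
    if ts is None:
        return EPOCH
    return datetime.strptime(ts, "%Y-%m-%dT%H:%M:%SZ").replace(tzinfo=timezone.utc)

newest_first = sorted(events, key=sort_key, reverse=True)
print([reason for reason, _ in newest_first])
# ['Pulled', 'Scheduled', 'TerminationStart']
```

The opposite fallback (e.g. "now") has the mirror-image problem: null-timestamp events pin themselves to the top and push real recent events down, which matches the "push other events from the view" symptom described above.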
release-openshift-origin-installer-e2e-aws-disruptive-4.9 (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#1786939454594748416 junit - 2 days ago
May 05 02:47:41.553 E ns/e2e-test-prometheus-zkdlw pod/execpod node/ip-10-0-194-8.us-west-2.compute.internal container/agnhost-container reason/ContainerExit code/137 cause/Error
May 05 02:50:33.599 E clusteroperator/kube-storage-version-migrator condition/Available status/False reason/KubeStorageVersionMigrator_Deploying changed: KubeStorageVersionMigratorAvailable: Waiting for Deployment
May 05 02:50:33.599 - 6s    E clusteroperator/kube-storage-version-migrator condition/Available status/False reason/KubeStorageVersionMigratorAvailable: Waiting for Deployment
May 05 02:50:34.931 E ns/openshift-machine-config-operator pod/machine-config-controller-8565f6548b-7n9b2 node/ip-10-0-172-182.us-west-2.compute.internal container/machine-config-controller reason/ContainerExit code/2 cause/Error  "https://172.30.0.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools/worker/status": unexpected EOF\nI0505 02:31:44.281251       1 node_controller.go:830] Updated controlPlaneTopology annotation of node ip-10-0-194-8.us-west-2.compute.internal from  to \nI0505 02:31:44.289228       1 node_controller.go:830] Updated controlPlaneTopology annotation of node ip-10-0-177-187.us-west-2.compute.internal from  to \nI0505 02:33:21.074460       1 node_controller.go:424] Pool worker: node ip-10-0-177-187.us-west-2.compute.internal: changed annotation machineconfiguration.openshift.io/currentConfig = rendered-worker-fb4518e0e5300ea554cbd758eeed15a0\nI0505 02:33:21.074484       1 node_controller.go:424] Pool worker: node ip-10-0-177-187.us-west-2.compute.internal: changed annotation machineconfiguration.openshift.io/desiredConfig = rendered-worker-fb4518e0e5300ea554cbd758eeed15a0\nI0505 02:33:21.074490       1 node_controller.go:424] Pool worker: node ip-10-0-177-187.us-west-2.compute.internal: changed annotation machineconfiguration.openshift.io/state = Done\nI0505 02:33:25.026477       1 node_controller.go:424] Pool worker: node ip-10-0-194-8.us-west-2.compute.internal: changed annotation machineconfiguration.openshift.io/currentConfig = rendered-worker-fb4518e0e5300ea554cbd758eeed15a0\nI0505 02:33:25.026571       1 node_controller.go:424] Pool worker: node ip-10-0-194-8.us-west-2.compute.internal: changed annotation machineconfiguration.openshift.io/desiredConfig = rendered-worker-fb4518e0e5300ea554cbd758eeed15a0\nI0505 02:33:25.026602       1 node_controller.go:424] Pool worker: node ip-10-0-194-8.us-west-2.compute.internal: changed annotation machineconfiguration.openshift.io/state = Done\nI0505 02:43:55.072166       1 template_controller.go:137] Re-syncing ControllerConfig due to secret pull-secret change\nI0505 02:50:32.708261       1 node_controller.go:424] Pool master: node ip-10-0-172-182.us-west-2.compute.internal: Reporting unready: node ip-10-0-172-182.us-west-2.compute.internal is reporting Unschedulable\n
May 05 02:50:34.967 E ns/openshift-cluster-csi-drivers pod/aws-ebs-csi-driver-operator-5679b7b957-cxlmk node/ip-10-0-172-182.us-west-2.compute.internal container/aws-ebs-csi-driver-operator reason/ContainerExit code/1 cause/Error
May 05 02:50:35.081 E ns/openshift-console-operator pod/console-operator-db5dfd8d9-9b6xh node/ip-10-0-172-182.us-west-2.compute.internal container/console-operator reason/ContainerExit code/1 cause/Error e-operator", Name:"console-operator-db5dfd8d9-9b6xh", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStart' Received signal to terminate, becoming unready, but keeping serving\nI0505 02:50:33.788200       1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-db5dfd8d9-9b6xh", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationMinimalShutdownDurationFinished' The minimal shutdown duration of 0s finished\nI0505 02:50:33.788252       1 genericapiserver.go:362] "[graceful-termination] shutdown event" name="AfterShutdownDelayDuration"\nI0505 02:50:33.788291       1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-db5dfd8d9-9b6xh", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStoppedServing' Server has stopped listening\nI0505 02:50:33.788344       1 genericapiserver.go:387] "[graceful-termination] shutdown event" name="InFlightRequestsDrained"\nI0505 02:50:33.789727       1 base_controller.go:167] Shutting down ConsoleServiceController ...\nI0505 02:50:33.789855       1 base_controller.go:167] Shutting down ResourceSyncController ...\nI0505 02:50:33.789877       1 base_controller.go:167] Shutting down UnsupportedConfigOverridesController ...\nI0505 02:50:33.789888       1 base_controller.go:167] Shutting down ConsoleRouteController ...\nI0505 02:50:33.789899       1 base_controller.go:167] Shutting down ConsoleDownloadsDeploymentSyncController ...\nI0505 02:50:33.789912       1 base_controller.go:167] Shutting down ConsoleCLIDownloadsController ...\nI0505 02:50:33.789926       1 base_controller.go:167] Shutting down HealthCheckController ...\nI0505 02:50:33.789938       1 base_controller.go:167] Shutting down ConsoleOperator ...\nW0505 02:50:33.790081       1 builder.go:101] graceful termination failed, controllers failed with error: stopped\n
May 05 02:50:35.907 E ns/openshift-kube-storage-version-migrator pod/migrator-54878c7746-t4fhv node/ip-10-0-172-182.us-west-2.compute.internal container/migrator reason/ContainerExit code/2 cause/Error I0505 02:21:49.788769       1 migrator.go:18] FLAG: --add_dir_header="false"\nI0505 02:21:49.788863       1 migrator.go:18] FLAG: --alsologtostderr="true"\nI0505 02:21:49.788868       1 migrator.go:18] FLAG: --kube-api-burst="1000"\nI0505 02:21:49.788873       1 migrator.go:18] FLAG: --kube-api-qps="40"\nI0505 02:21:49.788879       1 migrator.go:18] FLAG: --kubeconfig=""\nI0505 02:21:49.788885       1 migrator.go:18] FLAG: --log_backtrace_at=":0"\nI0505 02:21:49.788891       1 migrator.go:18] FLAG: --log_dir=""\nI0505 02:21:49.788896       1 migrator.go:18] FLAG: --log_file=""\nI0505 02:21:49.788900       1 migrator.go:18] FLAG: --log_file_max_size="1800"\nI0505 02:21:49.788905       1 migrator.go:18] FLAG: --logtostderr="true"\nI0505 02:21:49.788909       1 migrator.go:18] FLAG: --one_output="false"\nI0505 02:21:49.788913       1 migrator.go:18] FLAG: --skip_headers="false"\nI0505 02:21:49.788917       1 migrator.go:18] FLAG: --skip_log_headers="false"\nI0505 02:21:49.788922       1 migrator.go:18] FLAG: --stderrthreshold="2"\nI0505 02:21:49.788926       1 migrator.go:18] FLAG: --v="2"\nI0505 02:21:49.788930       1 migrator.go:18] FLAG: --vmodule=""\nI0505 02:21:49.789895       1 reflector.go:219] Starting reflector *v1alpha1.StorageVersionMigration (0s) from k8s.io/client-go@v0.21.0/tools/cache/reflector.go:167\nI0505 02:21:59.919184       1 kubemigrator.go:110] flowcontrol-flowschema-storage-version-migration: migration running\nI0505 02:22:00.033269       1 kubemigrator.go:127] flowcontrol-flowschema-storage-version-migration: migration succeeded\nI0505 02:22:01.042883       1 kubemigrator.go:110] flowcontrol-prioritylevel-storage-version-migration: migration running\nI0505 02:22:01.087487       1 kubemigrator.go:127] flowcontrol-prioritylevel-storage-version-migration: migration succeeded\nI0505 02:29:52.413141       1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF\n
May 05 02:50:36.062 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-6d56686ddb-8c9fj node/ip-10-0-172-182.us-west-2.compute.internal container/snapshot-controller reason/ContainerExit code/2 cause/Error
May 05 02:50:58.224 E ns/openshift-apiserver-operator pod/openshift-apiserver-operator-74499d8bd6-p4s9v node/ip-10-0-206-121.us-west-2.compute.internal container/openshift-apiserver-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
May 05 02:50:59.656 E ns/openshift-service-ca-operator pod/service-ca-operator-5566d9d78c-qv8fc node/ip-10-0-206-121.us-west-2.compute.internal container/service-ca-operator reason/ContainerExit code/1 cause/Error
May 05 02:50:59.656 E ns/openshift-service-ca-operator pod/service-ca-operator-5566d9d78c-qv8fc node/ip-10-0-206-121.us-west-2.compute.internal container/service-ca-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)

Found in 100.00% of runs (100.00% of failures) across 1 total runs and 1 jobs (100.00% failed) in 93ms