#OCPBUGS-25331 | issue | 11 days ago | some events are missing time-related information POST |
Issue 15675601: some events are missing time-related information
Description of problem:
{code:none}
Some events have time-related information set to null (firstTimestamp, lastTimestamp, eventTime)
{code}
Version-Release number of selected component (if applicable):
{code:none}
cluster-logging.v5.8.0
{code}
How reproducible:
{code:none}
100%
{code}
Steps to Reproduce:
{code:none}
1. Stop one of the masters
2. Start the master
3. Wait until the environment stabilizes
4. oc get events -A | grep unknown
{code}
Actual results:
{code:none}
oc get events -A | grep unknown
default   <unknown>   Normal   TerminationStart                             namespace/kube-system   Received signal to terminate, becoming unready, but keeping serving
default   <unknown>   Normal   TerminationPreShutdownHooksFinished          namespace/kube-system   All pre-shutdown hooks have been finished
default   <unknown>   Normal   TerminationMinimalShutdownDurationFinished   namespace/kube-system   The minimal shutdown duration of 0s finished
....
{code}
Expected results:
{code:none}
All time-related information is set correctly
{code}
Additional info:
{code:none}
This causes issues with external monitoring systems. Events with no timestamp either never show up or push other events out of view, depending on the timestamp sort order. The operator of the environment then has trouble seeing what is happening there.
{code}
Status: POST | Resolution: | Priority: Minor | Creator: Roman Hodain | Assigned To: Abu H Kashem
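The `<unknown>` entries in the reproduction above are events whose firstTimestamp, lastTimestamp, and eventTime are all null. A minimal sketch of filtering such events out of `oc get events -A -o json` output — the helper name and sample data are illustrative, not part of any OpenShift tooling:

```python
def events_missing_time(events: list) -> list:
    """Return events where firstTimestamp, lastTimestamp, and eventTime are all null."""
    return [
        e for e in events
        if not any(e.get(k) for k in ("firstTimestamp", "lastTimestamp", "eventTime"))
    ]

# Sample items shaped like the events in the bug report (hypothetical data):
sample = [
    {"reason": "TerminationStart", "firstTimestamp": None,
     "lastTimestamp": None, "eventTime": None},
    {"reason": "Scheduled", "firstTimestamp": "2024-04-23T02:46:04Z",
     "lastTimestamp": "2024-04-23T02:46:04Z", "eventTime": None},
]
print([e["reason"] for e in events_missing_time(sample)])  # -> ['TerminationStart']
```

In real use the list would come from the `items` field of the JSON that `oc get events -A -o json` prints.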
#OCPBUGS-27075 | issue | 3 months ago | some events are missing time-related information CLOSED |
Issue 15715205: some events are missing time-related information. Description, reproduction steps, and results are identical to Issue 15675601 (OCPBUGS-25331) above. Status: CLOSED | Resolution: | Priority: Minor | Creator: Rahul Gangwar | Assigned To:
#OCPBUGS-27074 | issue | 11 days ago | some events are missing time-related information New |
Issue 15715202: some events are missing time-related information. Description, reproduction steps, and results are identical to Issue 15675601 (OCPBUGS-25331) above. Status: New | Resolution: | Priority: Minor | Creator: Rahul Gangwar | Assigned To: Abu H Kashem
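The "Additional info" in these reports notes that external monitoring systems sorting by timestamp either hide these events or let them displace others. One defensive approach on the consumer side — an assumption about how a monitoring tool might cope, not something from the bug reports — is to sort with an explicit fallback key so events without any time field land in a predictable position instead of disappearing:

```python
from datetime import datetime, timezone

# Sentinel that sorts before any real timestamp, so timestamp-less
# events surface at a known position instead of vanishing from view.
EPOCH = datetime.fromtimestamp(0, tz=timezone.utc)

def event_sort_key(event: dict) -> datetime:
    # Prefer lastTimestamp, then eventTime, then firstTimestamp;
    # fall back to the epoch sentinel when all three are null.
    for field in ("lastTimestamp", "eventTime", "firstTimestamp"):
        value = event.get(field)
        if value:
            return datetime.fromisoformat(value.replace("Z", "+00:00"))
    return EPOCH

# Hypothetical sample data shaped like the affected events:
events = [
    {"reason": "TerminationStart"},  # no time fields at all
    {"reason": "Scheduled", "lastTimestamp": "2024-04-23T02:46:04Z"},
]
ordered = sorted(events, key=event_sort_key)
print([e["reason"] for e in ordered])  # -> ['TerminationStart', 'Scheduled']
```

The fix for the bug itself is for the emitting component to set the time fields; this only keeps downstream sorting deterministic in the meantime.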
release-openshift-origin-installer-e2e-aws-disruptive-4.9 (all) - 1 runs, 100% failed, 100% of failures match = 100% impact | |||
#1782590738459004928 | junit | 3 days ago | |
Apr 23 02:43:43.184 E ns/e2e-test-prometheus-8pwwt pod/execpod node/ip-10-0-171-144.us-east-2.compute.internal container/agnhost-container reason/ContainerExit code/137 cause/Error
Apr 23 02:46:04.268 E clusteroperator/kube-storage-version-migrator condition/Available status/False reason/KubeStorageVersionMigrator_Deploying changed: KubeStorageVersionMigratorAvailable: Waiting for Deployment
Apr 23 02:46:04.268 - 4s E clusteroperator/kube-storage-version-migrator condition/Available status/False reason/KubeStorageVersionMigratorAvailable: Waiting for Deployment
Apr 23 02:46:05.897 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-6d56686ddb-lpspq node/ip-10-0-129-23.us-east-2.compute.internal container/snapshot-controller reason/ContainerExit code/2 cause/Error
Apr 23 02:46:06.542 E ns/openshift-cluster-storage-operator pod/csi-snapshot-webhook-57d89449ff-w28bw node/ip-10-0-129-23.us-east-2.compute.internal container/webhook reason/ContainerExit code/2 cause/Error
Apr 23 02:46:06.596 E ns/openshift-console-operator pod/console-operator-db5dfd8d9-wqhvh node/ip-10-0-129-23.us-east-2.compute.internal container/console-operator reason/ContainerExit code/1 cause/Error :03.792402 1 base_controller.go:104] All ManagementStateController workers have been terminated\nI0423 02:46:03.790753 1 base_controller.go:114] Shutting down worker of LoggingSyncer controller ...\nI0423 02:46:03.790759 1 base_controller.go:114] Shutting down worker of RemoveStaleConditionsController controller ...\nW0423 02:46:03.790803 1 builder.go:101] graceful termination failed, controllers failed with error: stopped\nI0423 02:46:03.790845 1 genericapiserver.go:355] "[graceful-termination] shutdown event" name="ShutdownInitiated"\nI0423 02:46:03.792526 1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-db5dfd8d9-wqhvh", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStart' Received signal to terminate, becoming unready, but keeping serving\nI0423 02:46:03.792567 1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-db5dfd8d9-wqhvh", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationMinimalShutdownDurationFinished' The minimal shutdown duration of 0s finished\nI0423 02:46:03.792599 1 genericapiserver.go:362] "[graceful-termination] shutdown event" name="AfterShutdownDelayDuration"\nI0423 02:46:03.792639 1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-db5dfd8d9-wqhvh", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStoppedServing' Server has stopped listening\nI0423 02:46:03.792673 1 genericapiserver.go:387] "[graceful-termination] shutdown event" name="InFlightRequestsDrained"\nI0423 02:46:03.792702 1 base_controller.go:104] All LoggingSyncer workers have been terminated\nI0423 02:46:03.792732 1 base_controller.go:104] All RemoveStaleConditionsController workers have been terminated\n
Apr 23 02:46:06.689 E ns/openshift-cluster-csi-drivers pod/aws-ebs-csi-driver-controller-54d956df5f-l88zm node/ip-10-0-129-23.us-east-2.compute.internal container/csi-provisioner reason/ContainerExit code/1 cause/Error Lost connection to CSI driver, exiting
Apr 23 02:46:06.689 E ns/openshift-cluster-csi-drivers pod/aws-ebs-csi-driver-controller-54d956df5f-l88zm node/ip-10-0-129-23.us-east-2.compute.internal container/csi-snapshotter reason/ContainerExit code/1 cause/Error Lost connection to CSI driver, exiting
Apr 23 02:46:06.689 E ns/openshift-cluster-csi-drivers pod/aws-ebs-csi-driver-controller-54d956df5f-l88zm node/ip-10-0-129-23.us-east-2.compute.internal container/csi-attacher reason/ContainerExit code/1 cause/Error Lost connection to CSI driver, exiting
Apr 23 02:46:06.689 E ns/openshift-cluster-csi-drivers pod/aws-ebs-csi-driver-controller-54d956df5f-l88zm node/ip-10-0-129-23.us-east-2.compute.internal container/csi-driver reason/ContainerExit code/2 cause/Error
Apr 23 02:46:06.689 E ns/openshift-cluster-csi-drivers pod/aws-ebs-csi-driver-controller-54d956df5f-l88zm node/ip-10-0-129-23.us-east-2.compute.internal container/csi-resizer reason/ContainerExit code/2 cause/Error |
Found in 100.00% of runs (100.00% of failures) across 1 total runs and 1 jobs (100.00% failed)