Job:
#OCPBUGS-27075 issue 3 months ago some events are missing time-related information CLOSED
Issue 15715205: some events are missing time-related information
Description: Description of problem:
 {code:none}
 Some events have time-related information set to null (firstTimestamp, lastTimestamp, eventTime)
 {code}
 Version-Release number of selected component (if applicable):
 {code:none}
 cluster-logging.v5.8.0{code}
 How reproducible:
 {code:none}
 100% {code}
 Steps to Reproduce:
 {code:none}
     1. Stop one of the masters
     2. Start the master
     3. Wait until the environment stabilizes
     4. oc get events -A | grep unknown     {code}
 Actual results:
 {code:none}
 oc get events -A | grep unknown
 default                                      <unknown>   Normal    TerminationStart                             namespace/kube-system                                                            Received signal to terminate, becoming unready, but keeping serving
 default                                      <unknown>   Normal    TerminationPreShutdownHooksFinished          namespace/kube-system                                                            All pre-shutdown hooks have been finished
 default                                      <unknown>   Normal    TerminationMinimalShutdownDurationFinished   namespace/kube-system                                                            The minimal shutdown duration of 0s finished
 ....
 {code}
 Expected results:
 {code:none}
     All time related information is set correctly{code}
 Additional info:
 {code:none}
    This causes issues with external monitoring systems. Events with no timestamp either never appear or push other events out of view, depending on the timestamp sort order. The environment's operator then has trouble seeing what is happening. {code}
Status: CLOSED
Resolution:
Priority: Minor
Creator: Rahul Gangwar
Assigned To:
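A quick way to confirm the symptom beyond grepping the table output is to filter the raw JSON for events whose time fields are all null. This is a sketch, not part of the original report, and it assumes jq is available on the client:
{code:none}
# List events whose firstTimestamp, lastTimestamp, and eventTime are all null.
oc get events -A -o json \
  | jq -r '.items[]
           | select(.firstTimestamp == null and .lastTimestamp == null and .eventTime == null)
           | "\(.metadata.namespace)\t\(.reason)\t\(.message)"'
{code}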
#OCPBUGS-27074 issue 10 days ago some events are missing time-related information New
Issue 15715202: some events are missing time-related information
Description: Description of problem:
 {code:none}
 Some events have time-related information set to null (firstTimestamp, lastTimestamp, eventTime)
 {code}
 Version-Release number of selected component (if applicable):
 {code:none}
 cluster-logging.v5.8.0{code}
 How reproducible:
 {code:none}
 100% {code}
 Steps to Reproduce:
 {code:none}
     1. Stop one of the masters
     2. Start the master
     3. Wait until the environment stabilizes
     4. oc get events -A | grep unknown     {code}
 Actual results:
 {code:none}
 oc get events -A | grep unknown
 default                                      <unknown>   Normal    TerminationStart                             namespace/kube-system                                                            Received signal to terminate, becoming unready, but keeping serving
 default                                      <unknown>   Normal    TerminationPreShutdownHooksFinished          namespace/kube-system                                                            All pre-shutdown hooks have been finished
 default                                      <unknown>   Normal    TerminationMinimalShutdownDurationFinished   namespace/kube-system                                                            The minimal shutdown duration of 0s finished
 ....
 {code}
 Expected results:
 {code:none}
     All time related information is set correctly{code}
 Additional info:
 {code:none}
    This causes issues with external monitoring systems. Events with no timestamp either never appear or push other events out of view, depending on the timestamp sort order. The environment's operator then has trouble seeing what is happening. {code}
Status: New
Resolution:
Priority: Minor
Creator: Rahul Gangwar
Assigned To: Abu H Kashem
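The "Additional info" note is the operational crux: any consumer that sorts strictly on one timestamp field will silently drop these events. As a hedged workaround sketch (assuming jq; falling back to metadata.creationTimestamp is my assumption, not something the report prescribes):
{code:none}
# Sort by whichever time field is set, so null-time events still appear
# in a timestamp-ordered view instead of vanishing.
oc get events -A -o json \
  | jq -r '.items
           | sort_by(.lastTimestamp // .eventTime // .metadata.creationTimestamp)
           | .[]
           | "\(.lastTimestamp // .eventTime // .metadata.creationTimestamp)\t\(.reason)\t\(.message)"'
{code}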
#OCPBUGS-25331 issue 10 days ago some events are missing time-related information POST
Issue 15675601: some events are missing time-related information
Description: Description of problem:
 {code:none}
 Some events have time-related information set to null (firstTimestamp, lastTimestamp, eventTime)
 {code}
 Version-Release number of selected component (if applicable):
 {code:none}
 cluster-logging.v5.8.0{code}
 How reproducible:
 {code:none}
 100% {code}
 Steps to Reproduce:
 {code:none}
     1. Stop one of the masters
     2. Start the master
     3. Wait until the environment stabilizes
     4. oc get events -A | grep unknown     {code}
 Actual results:
 {code:none}
 oc get events -A | grep unknown
 default                                      <unknown>   Normal    TerminationStart                             namespace/kube-system                                                            Received signal to terminate, becoming unready, but keeping serving
 default                                      <unknown>   Normal    TerminationPreShutdownHooksFinished          namespace/kube-system                                                            All pre-shutdown hooks have been finished
 default                                      <unknown>   Normal    TerminationMinimalShutdownDurationFinished   namespace/kube-system                                                            The minimal shutdown duration of 0s finished
 ....
 {code}
 Expected results:
 {code:none}
     All time related information is set correctly{code}
 Additional info:
 {code:none}
    This causes issues with external monitoring systems. Events with no timestamp either never appear or push other events out of view, depending on the timestamp sort order. The environment's operator then has trouble seeing what is happening. {code}
Status: POST
Resolution:
Priority: Minor
Creator: Roman Hodain
Assigned To: Abu H Kashem
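Since all three clones describe the same symptom, one verification pass covers them. A sketch using oc's custom-columns output to show the three time fields side by side (the column names are illustrative, not from the report); affected events print <none> in every time column:
{code:none}
# Show all three time fields next to each event so the null ones are visible.
oc get events -A -o custom-columns=\
NAMESPACE:.metadata.namespace,\
REASON:.reason,\
FIRST:.firstTimestamp,\
LAST:.lastTimestamp,\
EVENTTIME:.eventTime
{code}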
periodic-ci-openshift-release-master-ci-4.9-upgrade-from-stable-4.8-e2e-aws-ovn-upgrade (all) - 1 run, 100% failed, 100% of failures match = 100% impact
#1780922810433015808 junit 7 days ago
Apr 18 13:58:56.306 E ns/openshift-cluster-csi-drivers pod/aws-ebs-csi-driver-controller-655bff9cf-d622j node/ip-10-0-230-231.ec2.internal container/csi-attacher reason/ContainerExit code/1 cause/Error Lost connection to CSI driver, exiting
Apr 18 13:58:56.306 E ns/openshift-cluster-csi-drivers pod/aws-ebs-csi-driver-controller-655bff9cf-d622j node/ip-10-0-230-231.ec2.internal container/csi-provisioner reason/ContainerExit code/1 cause/Error Lost connection to CSI driver, exiting
Apr 18 13:58:56.306 E ns/openshift-cluster-csi-drivers pod/aws-ebs-csi-driver-controller-655bff9cf-d622j node/ip-10-0-230-231.ec2.internal container/csi-snapshotter reason/ContainerExit code/1 cause/Error Lost connection to CSI driver, exiting
Apr 18 13:58:56.306 E ns/openshift-cluster-csi-drivers pod/aws-ebs-csi-driver-controller-655bff9cf-d622j node/ip-10-0-230-231.ec2.internal container/csi-liveness-probe reason/ContainerExit code/2 cause/Error
Apr 18 13:58:56.306 E ns/openshift-cluster-csi-drivers pod/aws-ebs-csi-driver-controller-655bff9cf-d622j node/ip-10-0-230-231.ec2.internal container/csi-driver reason/ContainerExit code/2 cause/Error
Apr 18 13:58:57.448 E ns/openshift-console-operator pod/console-operator-db5dfd8d9-9frt2 node/ip-10-0-230-231.ec2.internal container/console-operator reason/ContainerExit code/1 cause/Error art' Received signal to terminate, becoming unready, but keeping serving\nI0418 13:58:50.177939       1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-db5dfd8d9-9frt2", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationMinimalShutdownDurationFinished' The minimal shutdown duration of 0s finished\nI0418 13:58:50.177950       1 genericapiserver.go:362] "[graceful-termination] shutdown event" name="AfterShutdownDelayDuration"\nI0418 13:58:50.177961       1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-db5dfd8d9-9frt2", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStoppedServing' Server has stopped listening\nI0418 13:58:50.177971       1 genericapiserver.go:387] "[graceful-termination] shutdown event" name="InFlightRequestsDrained"\nI0418 13:58:50.178502       1 base_controller.go:167] Shutting down ResourceSyncController ...\nI0418 13:58:50.178516       1 base_controller.go:167] Shutting down StatusSyncer_console ...\nI0418 13:58:50.178521       1 base_controller.go:145] All StatusSyncer_console post start hooks have been terminated\nI0418 13:58:50.178529       1 base_controller.go:167] Shutting down ConsoleCLIDownloadsController ...\nI0418 13:58:50.178538       1 base_controller.go:167] Shutting down ConsoleOperator ...\nE0418 13:58:50.178563       1 status.go:84] OAuthServingCertValidationDegraded FailedGet oauth-serving-cert configmap not found\nE0418 13:58:50.182862       1 base_controller.go:272] ConsoleOperator reconciliation failed: context canceled\nI0418 13:58:50.182887       1 base_controller.go:114] Shutting down worker of ConsoleOperator controller ...\nI0418 13:58:50.182894       1 base_controller.go:104] All ConsoleOperator workers have been terminated\nW0418 13:58:50.183001       1 builder.go:101] graceful termination failed, controllers failed with error: stopped\n
Apr 18 13:58:58.638 E ns/openshift-machine-config-operator pod/machine-config-controller-8565f6548b-hvgnv node/ip-10-0-230-231.ec2.internal container/machine-config-controller reason/ContainerExit code/2 cause/Error 9] Pool master: filtered to 1 candidate nodes for update, capacity: 1\nI0418 13:58:42.742474       1 node_controller.go:419] Pool master: Setting node ip-10-0-230-231.ec2.internal target to rendered-master-166bd35ab6879b853710c3b36e2c10d0\nI0418 13:58:42.767227       1 event.go:282] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"", Name:"master", UID:"be6b4c88-0565-47e1-90c1-3c2009592794", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"85012", FieldPath:""}): type: 'Normal' reason: 'SetDesiredConfig' Targeted node ip-10-0-230-231.ec2.internal to config rendered-master-166bd35ab6879b853710c3b36e2c10d0\nI0418 13:58:42.788562       1 node_controller.go:424] Pool master: node ip-10-0-230-231.ec2.internal: changed annotation machineconfiguration.openshift.io/desiredConfig = rendered-master-166bd35ab6879b853710c3b36e2c10d0\nI0418 13:58:42.788919       1 event.go:282] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"", Name:"master", UID:"be6b4c88-0565-47e1-90c1-3c2009592794", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"85012", FieldPath:""}): type: 'Normal' reason: 'AnnotationChange' Node ip-10-0-230-231.ec2.internal now has machineconfiguration.openshift.io/desiredConfig=rendered-master-166bd35ab6879b853710c3b36e2c10d0\nI0418 13:58:43.846833       1 node_controller.go:424] Pool master: node ip-10-0-230-231.ec2.internal: changed annotation machineconfiguration.openshift.io/state = Working\nI0418 13:58:43.847007       1 event.go:282] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"", Name:"master", UID:"be6b4c88-0565-47e1-90c1-3c2009592794", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"91607", FieldPath:""}): type: 'Normal' reason: 'AnnotationChange' Node ip-10-0-230-231.ec2.internal now has machineconfiguration.openshift.io/state=Working\nI0418 13:58:43.919065       1 node_controller.go:424] Pool master: node ip-10-0-230-231.ec2.internal: Reporting unready: node ip-10-0-230-231.ec2.internal is reporting Unschedulable\n
Apr 18 13:58:59.726 E ns/openshift-insights pod/insights-operator-69b57f8df-lsh2h node/ip-10-0-230-231.ec2.internal container/insights-operator reason/ContainerExit code/2 cause/Error g report\nI0418 13:58:02.662717       1 insightsclient.go:299] Retrieving report for cluster: 3c3f9ff8-83e0-42d0-a890-5ab2625b2bc3\nI0418 13:58:02.662721       1 insightsclient.go:300] Endpoint: https://cloud.redhat.com/api/insights-results-aggregator/v1/clusters/3c3f9ff8-83e0-42d0-a890-5ab2625b2bc3/report\nI0418 13:58:02.670367       1 insightsclient.go:310] Retrieving report from https://cloud.redhat.com/api/insights-results-aggregator/v1/clusters/3c3f9ff8-83e0-42d0-a890-5ab2625b2bc3/report\nE0418 13:58:02.868258       1 insightsreport.go:99] Unexpected error retrieving the report: not found: https://cloud.redhat.com/api/insights-results-aggregator/v1/clusters/3c3f9ff8-83e0-42d0-a890-5ab2625b2bc3/report (request=d6e60a300a3349ae9866da67dab0d36c): {"status":"Item with ID 3c3f9ff8-83e0-42d0-a890-5ab2625b2bc3 was not found in the storage"}\nI0418 13:58:02.868283       1 controllerstatus.go:66] name=insightsreport healthy=false reason=NotAvailable message=Couldn't download the latest report: not found: https://cloud.redhat.com/api/insights-results-aggregator/v1/clusters/3c3f9ff8-83e0-42d0-a890-5ab2625b2bc3/report (request=d6e60a300a3349ae9866da67dab0d36c): {"status":"Item with ID 3c3f9ff8-83e0-42d0-a890-5ab2625b2bc3 was not found in the storage"}\nI0418 13:58:21.932015       1 httplog.go:104] "HTTP" verb="GET" URI="/metrics" latency="7.569134ms" userAgent="Prometheus/2.29.2" audit-ID="693c4b3d-c2d9-49de-ae64-2d94519b7398" srcIP="10.131.0.29:54502" resp=200\nI0418 13:58:26.179818       1 httplog.go:104] "HTTP" verb="GET" URI="/metrics" latency="5.510675ms" userAgent="Prometheus/2.29.2" audit-ID="2bbd75ae-9219-43c0-a814-7dc206e3e5bc" srcIP="10.128.2.6:38434" resp=200\nI0418 13:58:31.402658       1 status.go:178] Failed to download Insights report\nI0418 13:58:31.402693       1 status.go:354] The operator is healthy\nI0418 13:58:51.946036       1 httplog.go:104] "HTTP" verb="GET" URI="/metrics" latency="19.940557ms" userAgent="Prometheus/2.29.2" audit-ID="716d598f-bc8c-445b-bb5a-b36e1ab2d23f" srcIP="10.131.0.29:54502" resp=200\n
Apr 18 13:58:59.758 E ns/openshift-service-ca-operator pod/service-ca-operator-5566d9d78c-jxkwq node/ip-10-0-230-231.ec2.internal container/service-ca-operator reason/ContainerExit code/1 cause/Error
Apr 18 13:59:10.085 E ns/openshift-dns pod/dns-default-fx2zz node/ip-10-0-186-171.ec2.internal container/dns reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Apr 18 13:59:10.085 E ns/openshift-dns pod/dns-default-fx2zz node/ip-10-0-186-171.ec2.internal container/kube-rbac-proxy reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)

Found in 100.00% of runs (100.00% of failures) across 1 total run and 1 job (100.00% failed) in 6.578s