Job:
#OCPBUGS-25331 issue - 8 weeks ago - some events are missing time related information - POST
Issue 15675601: some events are missing time related information
Description: Description of problem:
 {code:none}
 Some events have time-related information set to null (firstTimestamp, lastTimestamp, eventTime)
 {code}
 Version-Release number of selected component (if applicable):
 {code:none}
 cluster-logging.v5.8.0{code}
 How reproducible:
 {code:none}
 100% {code}
 Steps to Reproduce:
 {code:none}
     1. Stop one of the masters
     2. Start the master
     3. Wait until the environment stabilizes
     4. oc get events -A | grep unknown{code}
 Actual results:
 {code:none}
 oc get events -A | grep unknown
 default                                      <unknown>   Normal    TerminationStart                             namespace/kube-system                                                            Received signal to terminate, becoming unready, but keeping serving
 default                                      <unknown>   Normal    TerminationPreShutdownHooksFinished          namespace/kube-system                                                            All pre-shutdown hooks have been finished
 default                                      <unknown>   Normal    TerminationMinimalShutdownDurationFinished   namespace/kube-system                                                            The minimal shutdown duration of 0s finished
 ....
 {code}
 Expected results:
 {code:none}
     All time related information is set correctly{code}
 Additional info:
 {code:none}
    This causes issues with external monitoring systems: events with no timestamp either never appear or push other events out of view, depending on the timestamp sort order, so the operator of the environment has trouble seeing what is happening there. {code}
Status: POST
Resolution:
Priority: Minor
Creator: Roman Hodain
Assigned To: Abu H Kashem
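Grepping the table output for <unknown> works, but a more direct check is to filter Event objects whose time fields are actually null. A minimal sketch, assuming jq is available on the client:
{code:none}
# List events whose firstTimestamp, lastTimestamp and eventTime are all null
oc get events -A -o json \
  | jq -r '.items[]
      | select(.firstTimestamp == null and .lastTimestamp == null and .eventTime == null)
      | [.metadata.namespace, .reason, .message] | @tsv'
{code}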
#OCPBUGS-27075 issue - 2 months ago - some events are missing time related information - New
Issue 15715205: some events are missing time related information
Description: Description of problem:
 {code:none}
 Some events have time-related information set to null (firstTimestamp, lastTimestamp, eventTime)
 {code}
 Version-Release number of selected component (if applicable):
 {code:none}
 cluster-logging.v5.8.0{code}
 How reproducible:
 {code:none}
 100% {code}
 Steps to Reproduce:
 {code:none}
     1. Stop one of the masters
     2. Start the master
     3. Wait until the environment stabilizes
     4. oc get events -A | grep unknown{code}
 Actual results:
 {code:none}
 oc get events -A | grep unknown
 default                                      <unknown>   Normal    TerminationStart                             namespace/kube-system                                                            Received signal to terminate, becoming unready, but keeping serving
 default                                      <unknown>   Normal    TerminationPreShutdownHooksFinished          namespace/kube-system                                                            All pre-shutdown hooks have been finished
 default                                      <unknown>   Normal    TerminationMinimalShutdownDurationFinished   namespace/kube-system                                                            The minimal shutdown duration of 0s finished
 ....
 {code}
 Expected results:
 {code:none}
     All time related information is set correctly{code}
 Additional info:
 {code:none}
    This causes issues with external monitoring systems: events with no timestamp either never appear or push other events out of view, depending on the timestamp sort order, so the operator of the environment has trouble seeing what is happening there. {code}
Status: New
Resolution:
Priority: Minor
Creator: Rahul Gangwar
Assigned To:
#OCPBUGS-27074 issue - 2 months ago - some events are missing time related information - New
Issue 15715202: some events are missing time related information
Description: Description of problem:
 {code:none}
 Some events have time-related information set to null (firstTimestamp, lastTimestamp, eventTime)
 {code}
 Version-Release number of selected component (if applicable):
 {code:none}
 cluster-logging.v5.8.0{code}
 How reproducible:
 {code:none}
 100% {code}
 Steps to Reproduce:
 {code:none}
     1. Stop one of the masters
     2. Start the master
     3. Wait until the environment stabilizes
     4. oc get events -A | grep unknown{code}
 Actual results:
 {code:none}
 oc get events -A | grep unknown
 default                                      <unknown>   Normal    TerminationStart                             namespace/kube-system                                                            Received signal to terminate, becoming unready, but keeping serving
 default                                      <unknown>   Normal    TerminationPreShutdownHooksFinished          namespace/kube-system                                                            All pre-shutdown hooks have been finished
 default                                      <unknown>   Normal    TerminationMinimalShutdownDurationFinished   namespace/kube-system                                                            The minimal shutdown duration of 0s finished
 ....
 {code}
 Expected results:
 {code:none}
     All time related information is set correctly{code}
 Additional info:
 {code:none}
    This causes issues with external monitoring systems: events with no timestamp either never appear or push other events out of view, depending on the timestamp sort order, so the operator of the environment has trouble seeing what is happening there. {code}
Status: New
Resolution:
Priority: Minor
Creator: Rahul Gangwar
Assigned To: Abu H Kashem
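For context on the <unknown> rendering: oc get events prints <unknown> in the LAST SEEN column whenever an Event's lastTimestamp and eventTime are both zero, and the API server accepts such objects. A hypothetical demo (the event name, reason and message below are illustrative only, not taken from the bug reports):
{code:none}
# Create a core/v1 Event with no firstTimestamp/lastTimestamp/eventTime set
cat <<'EOF' | oc create -f -
apiVersion: v1
kind: Event
metadata:
  name: timestampless-demo
  namespace: default
involvedObject:
  apiVersion: v1
  kind: Namespace
  name: kube-system
reason: TimestamplessDemo
message: event created with no time-related fields
type: Normal
EOF

# LAST SEEN shows <unknown>, matching the Termination* events quoted above
oc get events -n default | grep TimestamplessDemo

# Clean up the demo event
oc delete event timestampless-demo -n default
{code}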
periodic-ci-openshift-release-master-ci-4.9-e2e-gcp-upgrade (all) - 5 runs, 100% failed, 20% of failures match = 20% impact
#1772530868556926976 junit - 2 days ago
Mar 26 08:54:53.005 E ns/openshift-controller-manager-operator pod/openshift-controller-manager-operator-67cdf4f944-tlgzb node/ci-op-jrm8w069-875d2-kl678-master-0 container/openshift-controller-manager-operator reason/ContainerExit code/1 cause/Error 0326 08:54:51.967697       1 reflector.go:225] Stopping reflector *v1.Secret (10m0s) from k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167\nI0326 08:54:51.967745       1 reflector.go:225] Stopping reflector *v1.ClusterRole (10m0s) from k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167\nI0326 08:54:51.967764       1 reflector.go:225] Stopping reflector *v1.Image (10m0s) from k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167\nI0326 08:54:51.967788       1 reflector.go:225] Stopping reflector *v1.Secret (10m0s) from k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167\nI0326 08:54:51.967835       1 reflector.go:225] Stopping reflector *v1.Secret (10m0s) from k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167\nI0326 08:54:51.967858       1 reflector.go:225] Stopping reflector *v1.Secret (10m0s) from k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167\nI0326 08:54:51.967928       1 reflector.go:225] Stopping reflector *v1.Build (10m0s) from k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167\nI0326 08:54:51.967991       1 reflector.go:225] Stopping reflector *v1.RoleBinding (10m0s) from k8s.io/client-go@v0.22.0-rc.0/tools/cache/reflector.go:167\nI0326 08:54:51.968217       1 base_controller.go:167] Shutting down StaticResourceController ...\nI0326 08:54:51.968242       1 base_controller.go:167] Shutting down ResourceSyncController ...\nI0326 08:54:51.968252       1 base_controller.go:167] Shutting down UserCAObservationController ...\nI0326 08:54:51.968283       1 base_controller.go:167] Shutting down StatusSyncer_openshift-controller-manager ...\nI0326 08:54:51.968293       1 base_controller.go:145] All StatusSyncer_openshift-controller-manager post start hooks have been terminated\nI0326 08:54:51.968303       1 base_controller.go:167] Shutting down ConfigObserver ...\nI0326 08:54:51.968341       1 operator.go:115] Shutting down OpenShiftControllerManagerOperator\nW0326 08:54:51.968657       1 builder.go:101] graceful termination failed, controllers failed with error: stopped\n
Mar 26 08:54:55.014 E ns/openshift-cluster-storage-operator pod/cluster-storage-operator-db79bb78d-f5mf7 node/ci-op-jrm8w069-875d2-kl678-master-0 container/cluster-storage-operator reason/ContainerExit code/1 cause/Error andard found, reconciling\nI0326 08:33:13.198404       1 controller.go:174] Existing StorageClass standard found, reconciling\nI0326 08:33:34.994613       1 controller.go:174] Existing StorageClass standard found, reconciling\nI0326 08:37:18.352999       1 controller.go:174] Existing StorageClass standard found, reconciling\nI0326 08:43:13.199467       1 controller.go:174] Existing StorageClass standard found, reconciling\nI0326 08:53:13.199766       1 controller.go:174] Existing StorageClass standard found, reconciling\nI0326 08:53:34.996734       1 controller.go:174] Existing StorageClass standard found, reconciling\nI0326 08:54:54.370906       1 cmd.go:97] Received SIGTERM or SIGINT signal, shutting down controller.\nI0326 08:54:54.371433       1 genericapiserver.go:386] [graceful-termination] RunPreShutdownHooks has completed\nI0326 08:54:54.371475       1 genericapiserver.go:349] "[graceful-termination] shutdown event" name="ShutdownInitiated"\nI0326 08:54:54.372260       1 base_controller.go:167] Shutting down SnapshotCRDController ...\nI0326 08:54:54.372285       1 base_controller.go:167] Shutting down DefaultStorageClassController ...\nI0326 08:54:54.372296       1 base_controller.go:167] Shutting down LoggingSyncer ...\nI0326 08:54:54.372308       1 base_controller.go:167] Shutting down StatusSyncer_storage ...\nI0326 08:54:54.372313       1 base_controller.go:145] All StatusSyncer_storage post start hooks have been terminated\nI0326 08:54:54.372322       1 base_controller.go:167] Shutting down ManagementStateController ...\nI0326 08:54:54.372331       1 base_controller.go:167] Shutting down ConfigObserver ...\nI0326 08:54:54.372343       1 base_controller.go:167] Shutting down CSIDriverStarter ...\nI0326 08:54:54.372479       1 base_controller.go:167] Shutting down GCPPDCSIDriverOperator ...\nI0326 08:54:54.373372       1 base_controller.go:145] All GCPPDCSIDriverOperator post start hooks have been terminated\nW0326 08:54:54.372530       1 builder.go:101] graceful termination failed, controllers failed with error: stopped\n
Mar 26 08:55:04.931 E ns/openshift-ingress-canary pod/ingress-canary-cgcc6 node/ci-op-jrm8w069-875d2-kl678-worker-a-bg5hn container/serve-healthcheck-canary reason/ContainerExit code/2 cause/Error serving on 8888\nserving on 8080\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\n
Mar 26 08:55:05.657 E ns/openshift-monitoring pod/alertmanager-main-2 node/ci-op-jrm8w069-875d2-kl678-worker-c-d7mtw container/alertmanager-proxy reason/ContainerExit code/2 cause/Error 2024/03/26 08:12:54 provider.go:128: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2024/03/26 08:12:54 provider.go:133: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2024/03/26 08:12:54 provider.go:351: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2024/03/26 08:12:54 oauthproxy.go:203: mapping path "/" => upstream "http://localhost:9093/"\n2024/03/26 08:12:54 oauthproxy.go:230: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2024/03/26 08:12:54 oauthproxy.go:240: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> samesite: refresh:disabled\n2024/03/26 08:12:54 http.go:107: HTTPS: listening on [::]:9095\nI0326 08:12:54.688318       1 dynamic_serving_content.go:130] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n
Mar 26 08:55:05.657 E ns/openshift-monitoring pod/alertmanager-main-2 node/ci-op-jrm8w069-875d2-kl678-worker-c-d7mtw container/config-reloader reason/ContainerExit code/2 cause/Error level=info ts=2024-03-26T08:12:54.380267453Z caller=main.go:148 msg="Starting prometheus-config-reloader" version="(version=0.49.0, branch=release-4.9, revision=170b0686e)"\nlevel=info ts=2024-03-26T08:12:54.380350881Z caller=main.go:149 build_context="(go=go1.16.12, user=root, date=20220505-09:02:32)"\nlevel=info ts=2024-03-26T08:12:54.380591406Z caller=main.go:183 msg="Starting web server for metrics" listen=localhost:8080\nlevel=info ts=2024-03-26T08:12:54.382329152Z caller=reloader.go:219 msg="started watching config file and directories for changes" cfg= out= dirs=/etc/alertmanager/config,/etc/alertmanager/secrets/alertmanager-main-tls,/etc/alertmanager/secrets/alertmanager-main-proxy,/etc/alertmanager/secrets/alertmanager-kube-rbac-proxy\nlevel=info ts=2024-03-26T08:12:55.647573902Z caller=reloader.go:355 msg="Reload triggered" cfg_in= cfg_out= watched_dirs="/etc/alertmanager/config, /etc/alertmanager/secrets/alertmanager-main-tls, /etc/alertmanager/secrets/alertmanager-main-proxy, /etc/alertmanager/secrets/alertmanager-kube-rbac-proxy"\n
Mar 26 08:55:05.929 E ns/openshift-console-operator pod/console-operator-db5dfd8d9-fhlhs node/ci-op-jrm8w069-875d2-kl678-master-2 container/console-operator reason/ContainerExit code/1 cause/Error lhs", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationPreShutdownHooksFinished' All pre-shutdown hooks have been finished\nI0326 08:54:59.989495       1 genericapiserver.go:355] "[graceful-termination] shutdown event" name="ShutdownInitiated"\nI0326 08:54:59.989514       1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-db5dfd8d9-fhlhs", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStart' Received signal to terminate, becoming unready, but keeping serving\nI0326 08:54:59.989534       1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-db5dfd8d9-fhlhs", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationMinimalShutdownDurationFinished' The minimal shutdown duration of 0s finished\nI0326 08:54:59.989577       1 genericapiserver.go:362] "[graceful-termination] shutdown event" name="AfterShutdownDelayDuration"\nI0326 08:54:59.989600       1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-db5dfd8d9-fhlhs", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStoppedServing' Server has stopped listening\nI0326 08:54:59.989618       1 genericapiserver.go:387] "[graceful-termination] shutdown event" name="InFlightRequestsDrained"\nI0326 08:54:59.990697       1 base_controller.go:167] Shutting down ConsoleCLIDownloadsController ...\nI0326 08:54:59.990841       1 base_controller.go:167] Shutting down ConsoleOperator ...\nI0326 08:54:59.990894       1 base_controller.go:167] Shutting down DownloadsRouteController ...\nI0326 08:54:59.990932       1 base_controller.go:167] Shutting down ConsoleServiceController ...\nW0326 08:54:59.990938       1 builder.go:101] graceful termination failed, controllers failed with error: stopped\n
Mar 26 08:55:06.124 E ns/openshift-service-ca-operator pod/service-ca-operator-5566d9d78c-8d4wr node/ci-op-jrm8w069-875d2-kl678-master-0 container/service-ca-operator reason/ContainerExit code/1 cause/Error
Mar 26 08:55:07.585 E ns/openshift-monitoring pod/prometheus-operator-74ddbc7cdc-p4v9j node/ci-op-jrm8w069-875d2-kl678-master-1 container/kube-rbac-proxy reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Mar 26 08:55:11.376 E ns/openshift-ingress-canary pod/ingress-canary-n8bgm node/ci-op-jrm8w069-875d2-kl678-worker-b-gp78s container/serve-healthcheck-canary reason/ContainerExit code/2 cause/Error serving on 8888\nserving on 8080\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\n
Mar 26 08:55:12.708 E ns/openshift-monitoring pod/alertmanager-main-2 node/ci-op-jrm8w069-875d2-kl678-worker-c-d7mtw container/alertmanager-proxy reason/ContainerExit code/2 cause/Error 2024/03/26 08:55:11 provider.go:128: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2024/03/26 08:55:11 provider.go:133: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2024/03/26 08:55:11 provider.go:351: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2024/03/26 08:55:11 oauthproxy.go:203: mapping path "/" => upstream "http://localhost:9093/"\n2024/03/26 08:55:11 oauthproxy.go:230: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2024/03/26 08:55:11 oauthproxy.go:240: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> samesite: refresh:disabled\n2024/03/26 08:55:11 http.go:107: HTTPS: listening on [::]:9095\nI0326 08:55:11.063310       1 dynamic_serving_content.go:130] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n
Mar 26 08:55:12.708 E ns/openshift-monitoring pod/alertmanager-main-2 node/ci-op-jrm8w069-875d2-kl678-worker-c-d7mtw container/config-reloader reason/ContainerExit code/2 cause/Error level=info ts=2024-03-26T08:55:10.720043489Z caller=main.go:148 msg="Starting prometheus-config-reloader" version="(version=0.49.0, branch=release-4.9, revision=170b0686e)"\nlevel=info ts=2024-03-26T08:55:10.720390071Z caller=main.go:149 build_context="(go=go1.16.12, user=root, date=20220505-09:02:32)"\nlevel=info ts=2024-03-26T08:55:10.720674965Z caller=main.go:183 msg="Starting web server for metrics" listen=localhost:8080\nlevel=info ts=2024-03-26T08:55:10.72141334Z caller=reloader.go:219 msg="started watching config file and directories for changes" cfg= out= dirs=/etc/alertmanager/config,/etc/alertmanager/secrets/alertmanager-main-tls,/etc/alertmanager/secrets/alertmanager-main-proxy,/etc/alertmanager/secrets/alertmanager-kube-rbac-proxy\n

Found in 20.00% of runs (20.00% of failures) across 5 total runs and 1 job (100.00% failed)