Job:
periodic-ci-openshift-release-master-nightly-4.7-e2e-aws-fips-serial (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#1781422873370431488 junit 4 hours ago
[sig-api-machinery][Feature:APIServer][Late] kubelet terminates kube-apiserver gracefully [Suite:openshift/conformance/parallel]
[sig-instrumentation] Prometheus when installed on the cluster should report telemetry if a cloud.openshift.com token is present [Late] [Suite:openshift/conformance/parallel]
#1781422873370431488 junit 4 hours ago
# [sig-api-machinery][Feature:APIServer][Late] kubelet terminates kube-apiserver gracefully [Suite:openshift/conformance/parallel]
fail [github.com/onsi/ginkgo@v4.7.0-origin.0+incompatible/internal/leafnodes/runner.go:64]: kube-apiserver reports a non-graceful termination: v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-apiserver-ip-10-0-158-113.us-east-2.compute.internal.17c7ca6158c9859c", GenerateName:"", Namespace:"openshift-kube-apiserver", SelfLink:"/api/v1/namespaces/openshift-kube-apiserver/events/kube-apiserver-ip-10-0-158-113.us-east-2.compute.internal.17c7ca6158c9859c", UID:"21ef5418-4695-448c-b84e-63f1a936b5bc", ResourceVersion:"26528", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63849157502, loc:(*time.Location)(0x952bda0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"watch-termination", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002425740), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002425760)}}}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-apiserver", Name:"kube-apiserver-ip-10-0-158-113.us-east-2.compute.internal", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}, Reason:"NonGracefulTermination", Message:"Previous pod kube-apiserver-ip-10-0-158-113.us-east-2.compute.internal started at 2024-04-19 21:02:16.657664338 +0000 UTC did not terminate gracefully", Source:v1.EventSource{Component:"apiserver", Host:"ip-10-0-158-113"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63849157502, loc:(*time.Location)(0x952bda0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63849157502, loc:(*time.Location)(0x952bda0)}}, Count:1, Type:"Warning", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}. Probably kubelet or CRI-O is not giving the time to cleanly shut down. This can lead to connection refused and network I/O timeout errors in other components.

Found in 50.00% of runs (50.00% of failures) across 2 total runs and 2 jobs (100.00% failed).
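
For reference, the [Late] check fails when a NonGracefulTermination warning event was recorded against a kube-apiserver pod during the run (the event above was written by the watch-termination wrapper, per its managedFields entry). A minimal client-go sketch, not the actual origin test code, that surfaces the same events on a live cluster; the kubeconfig path and cluster access are assumptions:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes the default kubeconfig location and cluster-admin access.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// List events in openshift-kube-apiserver with reason NonGracefulTermination,
	// i.e. the events the conformance check treats as failures.
	events, err := client.CoreV1().Events("openshift-kube-apiserver").List(
		context.TODO(),
		metav1.ListOptions{FieldSelector: "reason=NonGracefulTermination"},
	)
	if err != nil {
		panic(err)
	}
	for _, e := range events.Items {
		fmt.Printf("%s %s: %s\n", e.LastTimestamp, e.InvolvedObject.Name, e.Message)
	}
}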