#OCPBUGS-32517 | issue | 42 hours ago | Missing worker nodes on metal | Verified
Mon 2024-04-22 05:33:53 UTC localhost.localdomain master-bmh-update.service[12603]: Unpause all baremetal hosts
Mon 2024-04-22 05:33:53 UTC localhost.localdomain master-bmh-update.service[18264]: E0422 05:33:53.630867 18264 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused
Mon 2024-04-22 05:33:53 UTC localhost.localdomain master-bmh-update.service[18264]: E0422 05:33:53.631351 18264 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused
... 4 lines not shown
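The service's kubectl calls fail as soon as the apiserver refuses connections on localhost:6443, even though the refusal is transient while kube-apiserver comes up. Below is a minimal sketch, in Go, of the kind of bounded retry that would ride out that window; the endpoint, timeout, and backoff cap are illustrative assumptions, not values taken from master-bmh-update.service:
{code:go}
// Probe the apiserver's /api endpoint with capped exponential backoff
// until it answers or a deadline passes. Illustrative sketch only.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"os"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The probe only checks reachability, so certificate verification
		// is skipped here; a real client would trust the kubeconfig CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(10 * time.Minute) // assumed budget
	for backoff := time.Second; time.Now().Before(deadline); backoff *= 2 {
		resp, err := client.Get("https://localhost:6443/api")
		if err == nil {
			resp.Body.Close()
			fmt.Println("apiserver reachable, status:", resp.Status)
			return
		}
		fmt.Fprintln(os.Stderr, "apiserver not ready:", err)
		if backoff > time.Minute {
			backoff = time.Minute // cap the sleep between attempts
		}
		time.Sleep(backoff)
	}
	fmt.Fprintln(os.Stderr, "gave up waiting for the apiserver")
	os.Exit(1)
}
{code}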
#OCPBUGS-27755 | issue | 9 days ago | openshift-kube-apiserver down and is not being restarted | New
Issue 15736514: openshift-kube-apiserver down and is not being restarted
Description: Description of problem:
{code:none}
SNO cluster; this is the second time the issue has happened. Errors like the following are reported:
~~~
failed to fetch token: Post "https://api-int.<cluster>:6443/api/v1/namespaces/openshift-cluster-storage-operator/serviceaccounts/cluster-storage-operator/token": dial tcp <ip>:6443: connect: connection refused
~~~
Checking the pod logs, the kube-apiserver pod was terminated and is not being restarted:
~~~
2024-01-13T09:41:40.931716166Z I0113 09:41:40.931584 1 main.go:213] Received signal terminated. Forwarding to sub-process "hyperkube".
~~~{code}
Version-Release number of selected component (if applicable):
{code:none}
4.13.13
{code}
How reproducible:
{code:none}
Not reproducible, but it has happened twice{code}
Steps to Reproduce:
{code:none}
1.
2.
3.
{code}
Actual results:
{code:none}
API is not available and kube-apiserver is not being restarted{code}
Expected results:
{code:none}
We would expect kube-apiserver to restart{code}
Additional info:
{code:none}
{code}
Status: New
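The symptom above is observable from pod status: a kube-apiserver container sitting in a Terminated state with no restart following. A hedged client-go sketch that surfaces exactly that state; the namespace is the one from the report, while the kubeconfig handling and output format are assumptions:
{code:go}
// List kube-apiserver pods and flag containers that exited and were
// never restarted. Diagnostic sketch, not the operator's own logic.
package main

import (
	"context"
	"fmt"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes KUBECONFIG points at an admin kubeconfig; with the SNO
	// apiserver itself down, this check would have to run against a
	// recovered endpoint.
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("openshift-kube-apiserver").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		for _, c := range p.Status.ContainerStatuses {
			// A container stuck in Terminated (neither Running nor
			// Waiting on a restart) matches the reported symptom.
			if c.State.Terminated != nil {
				fmt.Printf("%s/%s exited at %s, restarts so far: %d\n",
					p.Name, c.Name, c.State.Terminated.FinishedAt, c.RestartCount)
			}
		}
	}
}
{code}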
#OCPBUGS-30631 | issue | 2 weeks ago | SNO (RT kernel) sosreport crashes the SNO node | CLOSED
Issue 15865131: SNO (RT kernel) sosreport crashes the SNO node
Description: Description of problem:
{code:none}
sosreport collection causes an SNO XR11 node crash.
{code}
Version-Release number of selected component (if applicable):
{code:none}
- RHOCP : 4.12.30
- kernel : 4.18.0-372.69.1.rt7.227.el8_6.x86_64
- platform : x86_64{code}
How reproducible:
{code:none}
sh-4.4# chrt -rr 99 toolbox
.toolboxrc file detected, overriding defaults...
Checking if there is a newer version of ocpdalmirror.xxx.yyy:8443/rhel8/support-tools-zzz-feb available...
Container 'toolbox-root' already exists. Trying to start...
(To remove the container and start with a fresh toolbox, run: sudo podman rm 'toolbox-root')
toolbox-root
Container started successfully. To exit, type 'exit'.
[root@node /]# which sos
/usr/sbin/sos
logger: socket /dev/log: No such file or directory
[root@node /]# taskset -c 29-31,61-63 sos report --batch -n networking,kernel,processor -k crio.all=on -k crio.logs=on -k podman.all=on -k podman.logs=on

sosreport (version 4.5.6)

This command will collect diagnostic and configuration information from this Red Hat CoreOS system.

An archive containing the collected information will be generated in /host/var/tmp/sos.c09e4f7z and may be provided to a Red Hat support representative.

Any information provided to Red Hat will be treated in accordance with the published support policies at:

        Distribution Website : https://www.redhat.com/
        Commercial Support : https://access.redhat.com/

The generated archive may contain data considered sensitive and its content should be reviewed by the originating organization before being passed to any third party.

No changes will be made to system configuration.

Setting up archive ...
Setting up plugins ...
[plugin:auditd] Could not open conf file /etc/audit/auditd.conf: [Errno 2] No such file or directory: '/etc/audit/auditd.conf'
caught exception in plugin method "system.setup()"
writing traceback to sos_logs/system-plugin-errors.txt
[plugin:systemd] skipped command 'resolvectl status': required services missing: systemd-resolved.
[plugin:systemd] skipped command 'resolvectl statistics': required services missing: systemd-resolved.
Running plugins. Please wait ...
Starting 1/91 alternatives [Running: alternatives]
Starting 2/91 atomichost [Running: alternatives atomichost]
Starting 3/91 auditd [Running: alternatives atomichost auditd]
... 46 similar plugin progress lines not shown ...
Starting 50/91 networkmanager [Running: cgroups crio logs networkmanager]

Removing debug pod ...
error: unable to delete the debug pod "ransno1ransnomavdallabcom-debug": Delete "https://api.ransno.mavdallab.com:6443/api/v1/namespaces/openshift-debug-mt82m/pods/ransno1ransnomavdallabcom-debug": dial tcp 10.71.136.144:6443: connect: connection refused
{code}
Steps to Reproduce:
{code:none}
Launch a debug pod and run the procedure above; it crashes the node{code}
Actual results:
{code:none}
Node crash{code}
Expected results:
{code:none}
Node does not crash{code}
Additional info:
{code:none}
We have two vmcores on the associated SFDC ticket.

This system uses an RT kernel.

It uses an out-of-tree ice driver 1.13.7 (probably from 22 Dec 2023):
[  103.681608] ice: module unloaded
[  103.830535] ice: loading out-of-tree module taints kernel.
[  103.831106] ice: module verification failed: signature and/or required key missing - tainting kernel
[  103.841005] ice: Intel(R) Ethernet Connection E800 Series Linux Driver - version 1.13.7
[  103.841017] ice: Copyright (C) 2018-2023 Intel Corporation

With the following kernel command line:
Command line: BOOT_IMAGE=(hd0,gpt3)/ostree/rhcos-f2c287e549b45a742b62e4f748bc2faae6ca907d24bb1e029e4985bc01649033/vmlinuz-4.18.0-372.69.1.rt7.227.el8_6.x86_64 ignition.platform.id=metal ostree=/ostree/boot.1/rhcos/f2c287e549b45a742b62e4f748bc2faae6ca907d24bb1e029e4985bc01649033/0 root=UUID=3e8bda80-5cf4-4c46-b139-4c84cb006354 rw rootflags=prjquota boot=UUID=1d0512c2-3f92-42c5-b26d-709ff9350b81 intel_iommu=on iommu=pt firmware_class.path=/var/lib/firmware skew_tick=1 nohz=on rcu_nocbs=3-31,35-63 tuned.non_isolcpus=00000007,00000007 systemd.cpu_affinity=0,1,2,32,33,34 intel_iommu=on iommu=pt isolcpus=managed_irq,3-31,35-63 nohz_full=3-31,35-63 tsc=nowatchdog nosoftlockup nmi_watchdog=0 mce=off rcutree.kthread_prio=11 default_hugepagesz=1G rcupdate.rcu_normal_after_boot=0 efi=runtime module_blacklist=irdma intel_pstate=passive intel_idle.max_cstate=0 crashkernel=256M

vmcore-1 shows an issue with the ice driver:

crash vmcore tmp/vmlinux

      KERNEL: tmp/vmlinux  [TAINTED]
    DUMPFILE: vmcore  [PARTIAL DUMP]
        CPUS: 64
        DATE: Thu Mar 7 17:16:57 CET 2024
      UPTIME: 02:44:28
LOAD AVERAGE: 24.97, 25.47, 25.46
       TASKS: 5324
    NODENAME: aaa.bbb.ccc
     RELEASE: 4.18.0-372.69.1.rt7.227.el8_6.x86_64
     VERSION: #1 SMP PREEMPT_RT Fri Aug 4 00:21:46 EDT 2023
     MACHINE: x86_64  (1500 Mhz)
      MEMORY: 127.3 GB
       PANIC: "Kernel panic - not syncing:"
         PID: 693
     COMMAND: "khungtaskd"
        TASK: ff4d1890260d4000  [THREAD_INFO: ff4d1890260d4000]
         CPU: 0
       STATE: TASK_RUNNING (PANIC)

crash> ps|grep sos
 449071 363440  31 ff4d189005f68000 IN  0.2 506428 314484 sos
 451043 363440  63 ff4d188943a9c000 IN  0.2 506428 314484 sos
 494099 363440  29 ff4d187f941f4000 UN  0.2 506428 314484 sos

[ 8457.517696] ------------[ cut here ]------------
[ 8457.517698] NETDEV WATCHDOG: ens3f1 (ice): transmit queue 35 timed out
[ 8457.517711] WARNING: CPU: 33 PID: 349 at net/sched/sch_generic.c:472 dev_watchdog+0x270/0x300
[ 8457.517718] Modules linked in: binfmt_misc macvlan pci_pf_stub iavf vfio_pci vfio_virqfd vfio_iommu_type1 vfio vhost_net vhost vhost_iotlb tap tun xt_addrtype nf_conntrack_netlink ip6t_REJECT nf_reject_ipv6 ipt_REJECT nf_reject_ipv4 xt_nat xt_CT tcp_diag inet_diag ip6t_MASQUERADE xt_mark ice(OE) xt_conntrack ipt_MASQUERADE nft_counter xt_comment nft_compat veth nft_chain_nat nf_tables overlay bridge 8021q garp mrp stp llc nfnetlink_cttimeout nfnetlink openvswitch nf_conncount nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 ext4 mbcache jbd2 intel_rapl_msr iTCO_wdt iTCO_vendor_support dell_smbios wmi_bmof dell_wmi_descriptor dcdbas kvm_intel kvm irqbypass
intel_rapl_common i10nm_edac nfit libnvdimm x86_pkg_temp_thermal intel_powerclamp coretemp rapl ipmi_ssif intel_cstate intel_uncore dm_thin_pool pcspkr isst_if_mbox_pci dm_persistent_data dm_bio_prison dm_bufio isst_if_mmio isst_if_common mei_me i2c_i801 joydev mei intel_pmt wmi acpi_ipmi ipmi_si acpi_power_meter sctp ip6_udp_tunnel
[ 8457.517770] udp_tunnel ip_tables xfs libcrc32c i40e sd_mod t10_pi sg bnxt_re ib_uverbs ib_core crct10dif_pclmul crc32_pclmul crc32c_intel ghash_clmulni_intel bnxt_en ahci libahci libata dm_multipath dm_mirror dm_region_hash dm_log dm_mod ipmi_devintf ipmi_msghandler fuse [last unloaded: ice]
[ 8457.517784] Red Hat flags: eBPF/rawtrace
[ 8457.517787] CPU: 33 PID: 349 Comm: ktimers/33 Kdump: loaded Tainted: G OE --------- - - 4.18.0-372.69.1.rt7.227.el8_6.x86_64 #1
[ 8457.517789] Hardware name: Dell Inc. PowerEdge XR11/0P2RNT, BIOS 1.12.1 09/13/2023
[ 8457.517790] RIP: 0010:dev_watchdog+0x270/0x300
[ 8457.517793] Code: 17 00 e9 f0 fe ff ff 4c 89 e7 c6 05 c6 03 34 01 01 e8 14 43 fa ff 89 d9 4c 89 e6 48 c7 c7 90 37 98 9a 48 89 c2 e8 1d be 88 ff <0f> 0b eb ad 65 8b 05 05 13 fb 65 89 c0 48 0f a3 05 1b ab 36 01 73
[ 8457.517795] RSP: 0018:ff7aeb55c73c7d78 EFLAGS: 00010286
[ 8457.517797] RAX: 0000000000000000 RBX: 0000000000000023 RCX: 0000000000000001
[ 8457.517798] RDX: 0000000000000000 RSI: ffffffff9a908557 RDI: 00000000ffffffff
[ 8457.517799] RBP: 0000000000000021 R08: ffffffff9ae6b3a0 R09: 00080000000000ff
[ 8457.517800] R10: 000000006443a462 R11: 0000000000000036 R12: ff4d187f4d1f4000
[ 8457.517801] R13: ff4d187f4d20df00 R14: ff4d187f4d1f44a0 R15: 0000000000000080
[ 8457.517803] FS: 0000000000000000(0000) GS:ff4d18967a040000(0000) knlGS:0000000000000000
[ 8457.517804] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 8457.517805] CR2: 00007fc47c649974 CR3: 00000019a441a005 CR4: 0000000000771ea0
[ 8457.517806] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ 8457.517807] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[ 8457.517808] PKRU: 55555554
[ 8457.517810] Call Trace:
[ 8457.517813]  ? test_ti_thread_flag.constprop.50+0x10/0x10
[ 8457.517816]  ? test_ti_thread_flag.constprop.50+0x10/0x10
[ 8457.517818]  call_timer_fn+0x32/0x1d0
[ 8457.517822]  ? test_ti_thread_flag.constprop.50+0x10/0x10
[ 8457.517825]  run_timer_softirq+0x1fc/0x640
[ 8457.517828]  ? _raw_spin_unlock_irq+0x1d/0x60
[ 8457.517833]  ? finish_task_switch+0xea/0x320
[ 8457.517836]  ? __switch_to+0x10c/0x4d0
[ 8457.517840]  __do_softirq+0xa5/0x33f
[ 8457.517844]  run_timersd+0x61/0xb0
[ 8457.517848]  smpboot_thread_fn+0x1c1/0x2b0
[ 8457.517851]  ? smpboot_register_percpu_thread_cpumask+0x140/0x140
[ 8457.517853]  kthread+0x151/0x170
[ 8457.517856]  ? set_kthread_struct+0x50/0x50
[ 8457.517858]  ret_from_fork+0x1f/0x40
[ 8457.517861] ---[ end trace 0000000000000002 ]---
[ 8458.520445] ice 0000:8a:00.1 ens3f1: tx_timeout: VSI_num: 14, Q 35, NTC: 0x99, HW_HEAD: 0x14, NTU: 0x15, INT: 0x0
[ 8458.520451] ice 0000:8a:00.1 ens3f1: tx_timeout recovery level 1, txqueue 35
[ 8506.139246] ice 0000:8a:00.1: PTP reset successful
[ 8506.437047] ice 0000:8a:00.1: VSI rebuilt. VSI index 0, type ICE_VSI_PF
[ 8506.445482] ice 0000:8a:00.1: VSI rebuilt. VSI index 1, type ICE_VSI_CTRL
[ 8540.459707] ice 0000:8a:00.1 ens3f1: tx_timeout: VSI_num: 14, Q 35, NTC: 0xe3, HW_HEAD: 0xe7, NTU: 0xe8, INT: 0x0
[ 8540.459714] ice 0000:8a:00.1 ens3f1: tx_timeout recovery level 1, txqueue 35
[ 8563.891356] ice 0000:8a:00.1: PTP reset successful
~~~

The second vmcore on the same node shows an issue with the SSD drive:

$ crash vmcore-2 tmp/vmlinux

      KERNEL: tmp/vmlinux  [TAINTED]
    DUMPFILE: vmcore-2  [PARTIAL DUMP]
        CPUS: 64
        DATE: Thu Mar 7 14:29:31 CET 2024
      UPTIME: 1 days, 07:19:52
LOAD AVERAGE: 25.55, 26.42, 28.30
       TASKS: 5409
    NODENAME: aaa.bbb.ccc
     RELEASE: 4.18.0-372.69.1.rt7.227.el8_6.x86_64
     VERSION: #1 SMP PREEMPT_RT Fri Aug 4 00:21:46 EDT 2023
     MACHINE: x86_64  (1500 Mhz)
      MEMORY: 127.3 GB
       PANIC: "Kernel panic - not syncing:"
         PID: 696
     COMMAND: "khungtaskd"
        TASK: ff2b35ed48d30000  [THREAD_INFO: ff2b35ed48d30000]
         CPU: 34
       STATE: TASK_RUNNING (PANIC)

crash> ps |grep sos
 719784 718369  62 ff2b35ff00830000 IN  0.4 1215636 563388 sos
 721740 718369  61 ff2b3605579f8000 IN  0.4 1215636 563388 sos
 721742 718369  63 ff2b35fa5eb9c000 IN  0.4 1215636 563388 sos
 721744 718369  30 ff2b3603367fc000 IN  0.4 1215636 563388 sos
 721746 718369  29 ff2b360557944000 IN  0.4 1215636 563388 sos
 743356 718369  62 ff2b36042c8e0000 IN  0.4 1215636 563388 sos
 743818 718369  29 ff2b35f6186d0000 IN  0.4 1215636 563388 sos
 748518 718369  61 ff2b3602cfb84000 IN  0.4 1215636 563388 sos
 748884 718369  62 ff2b360713418000 UN  0.4 1215636 563388 sos

crash> dmesg
[111871.309883] ata3.00: exception Emask 0x0 SAct 0x3ff8 SErr 0x0 action 0x6 frozen
[111871.309889] ata3.00: failed command: WRITE FPDMA QUEUED
[111871.309891] ata3.00: cmd 61/40:18:28:47:4b/00:00:00:00:00/40 tag 3 ncq dma 32768 out res 40/00:00:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
[111871.309895] ata3.00: status: { DRDY }
... 10 more WRITE FPDMA QUEUED timeout entries (tags 4-13) not shown ...
[111871.309953] ata3: hard resetting link
...
...
...
[112789.787310] INFO: task sos:748884 blocked for more than 600 seconds.
[112789.787314] Tainted: G OE --------- - - 4.18.0-372.69.1.rt7.227.el8_6.x86_64 #1
[112789.787316] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[112789.787316] task:sos state:D stack: 0 pid:748884 ppid:718369 flags:0x00084080
[112789.787320] Call Trace:
[112789.787323]  __schedule+0x37b/0x8e0
[112789.787330]  schedule+0x6c/0x120
[112789.787333]  schedule_timeout+0x2b7/0x410
[112789.787336]  ? enqueue_entity+0x130/0x790
[112789.787340]  wait_for_completion+0x84/0xf0
[112789.787343]  flush_work+0x120/0x1d0
[112789.787347]  ? flush_workqueue_prep_pwqs+0x130/0x130
[112789.787350]  schedule_on_each_cpu+0xa7/0xe0
[112789.787353]  vmstat_refresh+0x22/0xa0
[112789.787357]  proc_sys_call_handler+0x174/0x1d0
[112789.787361]  vfs_read+0x91/0x150
[112789.787364]  ksys_read+0x52/0xc0
[112789.787366]  do_syscall_64+0x87/0x1b0
[112789.787369]  entry_SYSCALL_64_after_hwframe+0x61/0xc6
[112789.787372] RIP: 0033:0x7f2dca8c2ab4
[112789.787378] Code: Unable to access opcode bytes at RIP 0x7f2dca8c2a8a.
[112789.787378] RSP: 002b:00007f2dbbffc5e0 EFLAGS: 00000246 ORIG_RAX: 0000000000000000
[112789.787380] RAX: ffffffffffffffda RBX: 0000000000000008 RCX: 00007f2dca8c2ab4
[112789.787382] RDX: 0000000000004000 RSI: 00007f2db402b5a0 RDI: 0000000000000008
[112789.787383] RBP: 00007f2db402b5a0 R08: 0000000000000000 R09: 00007f2dcace27bb
[112789.787383] R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000004000
[112789.787384] R13: 0000000000000008 R14: 00007f2db402b5a0 R15: 00007f2da4001a90
[112789.787418] NMI backtrace for cpu 34
{code}
Status: CLOSED
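The hung-task trace is informative: sos is blocked in a read() that went through proc_sys_call_handler into vmstat_refresh and then schedule_on_each_cpu, i.e. a read of the vm.stat_refresh sysctl (/proc/sys/vm/stat_refresh) that waits for per-CPU work to complete on every CPU, which may never happen when isolated RT CPUs are saturated. A minimal Go sketch of that trigger, assuming root on an affected node (running it there may hang the same way the trace shows):
{code:go}
// Exercise the kernel path seen in the hung-task trace: reading
// /proc/sys/vm/stat_refresh invokes vmstat_refresh(), which calls
// schedule_on_each_cpu() and blocks until per-CPU work has run on
// every CPU, including isolated RT CPUs.
package main

import (
	"fmt"
	"os"
)

func main() {
	// The read returns no payload; the side effect is the per-CPU
	// vmstat flush that sos was blocked in.
	if _, err := os.ReadFile("/proc/sys/vm/stat_refresh"); err != nil {
		fmt.Fprintln(os.Stderr, "stat_refresh read failed:", err)
		os.Exit(1)
	}
	fmt.Println("vmstat refresh completed on all CPUs")
}
{code}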
#OCPBUGS-33157 | issue | 42 hours ago | IPv6 metal-ipi jobs: master-bmh-update losing access to API | Verified
Issue 15978085: IPv6 metal-ipi jobs: master-bmh-update losing access to API
Description: The last 4 IPv6 jobs are failing on the same error:
https://prow.ci.openshift.org/job-history/gs/test-platform-results/logs/periodic-ci-openshift-release-master-nightly-4.16-e2e-metal-ipi-ovn-ipv6
master-bmh-update.log loses access to the API when trying to get/update the BMH details:
https://prow.ci.openshift.org/view/gs/test-platform-results/logs/periodic-ci-openshift-release-master-nightly-4.16-e2e-metal-ipi-ovn-ipv6/1785492737169035264
{noformat}
May 01 03:32:23 localhost.localdomain master-bmh-update.sh[4663]: Waiting for 3 masters to become provisioned
May 01 03:32:23 localhost.localdomain master-bmh-update.sh[24484]: E0501 03:32:23.531242 24484 memcache.go:265] couldn't get current server API group list: Get "https://api-int.ostest.test.metalkube.org:6443/api?timeout=32s": dial tcp [fd2e:6f44:5dd8:c956::5]:6443: connect: connection refused
... 4 further identical memcache.go:265 "connection refused" entries at 03:32:23 not shown ...
May 01 03:32:23 localhost.localdomain master-bmh-update.sh[24484]: The connection to the server api-int.ostest.test.metalkube.org:6443 was refused - did you specify the right host or port?
{noformat}
A separate occurrence of the same error:
{noformat}
May 01 02:49:40 localhost.localdomain master-bmh-update.sh[12448]: E0501 02:49:40.429468 12448 memcache.go:265] couldn't get current server API group list: Get "https://api-int.ostest.test.metalkube.org:6443/api?timeout=32s": dial tcp [fd2e:6f44:5dd8:c956::5]:6443: connect: connection refused
{noformat}
Status: Verified
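The memcache.go:265 lines are client-go's cached discovery failing its server-groups call while the API VIP flaps. If the script is meant to tolerate that window, the probe it needs looks roughly like the sketch below, retrying exactly that discovery call; the poll interval and time budget are assumptions:
{code:go}
// Retry the discovery ServerGroups call (the one memcache.go caches)
// until the apiserver answers or the budget runs out. Illustrative sketch.
package main

import (
	"context"
	"fmt"
	"os"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/discovery"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	dc, err := discovery.NewDiscoveryClientForConfig(cfg)
	if err != nil {
		panic(err)
	}
	err = wait.PollUntilContextTimeout(context.Background(), 10*time.Second, 15*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			// Returning (false, nil) on error keeps polling through the
			// transient "connection refused" window instead of aborting.
			if _, err := dc.ServerGroups(); err != nil {
				fmt.Fprintln(os.Stderr, "API not ready:", err)
				return false, nil
			}
			return true, nil
		})
	if err != nil {
		panic(err)
	}
	fmt.Println("server API group list available")
}
{code}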
#OCPBUGS-32375 | issue | 10 days ago | Unsuccessful cluster installation with 4.15 nightlies on s390x using ABI | CLOSED
Issue 15945005: Unsuccessful cluster installation with 4.15 nightlies on s390x using ABI
Description: When using the latest s390x release builds from the 4.15 nightly stream for an Agent-Based Installation of SNO on IBM Z KVM, the installation fails at the end while watching cluster operators, even though the DNS and HAProxy configurations are correct; the same setup works with 4.15.x stable release images.

Below is the error encountered multiple times when the "release:s390x-latest" image was used to boot the cluster. The image is passed at boot through OPENSHIFT_INSTALL_RELEASE_IMAGE_OVERRIDE, while the binary is fetched from the latest stable builds here: [https://mirror.openshift.com/pub/openshift-v4/s390x/clients/ocp/latest/], for which the version would be around 4.15.x.

*release-image:*
{code:java}
registry.build01.ci.openshift.org/ci-op-cdkdqnqn/release@sha256:c6eb4affa5c44d2ad220d7064e92270a30df5f26d221e35664f4d5547a835617
{code}
*PROW CI Build:* [https://prow.ci.openshift.org/view/gs/test-platform-results/pr-logs/pull/openshift_release/47965/rehearse-47965-periodic-ci-openshift-multiarch-master-nightly-4.15-e2e-agent-ibmz-sno/1780162365824700416]

*Error:*
{code:java}
'/root/agent-sno/openshift-install wait-for install-complete --dir /root/agent-sno/ --log-level debug'
Warning: Permanently added '128.168.142.71' (ED25519) to the list of known hosts.
level=debug msg=OpenShift Installer 4.15.8
level=debug msg=Built from commit f4f5d0ee0f7591fd9ddf03ac337c804608102919
level=debug msg=Loading Install Config...
level=debug msg= Loading SSH Key...
level=debug msg= Loading Base Domain...
level=debug msg= Loading Platform...
level=debug msg= Loading Cluster Name...
level=debug msg= Loading Base Domain...
level=debug msg= Loading Platform...
level=debug msg= Loading Pull Secret...
level=debug msg= Loading Platform...
level=debug msg=Loading Agent Config...
level=debug msg=Using Agent Config loaded from state file
level=warning msg=An agent configuration was detected but this command is not the agent wait-for command
level=info msg=Waiting up to 40m0s (until 10:15AM UTC) for the cluster at https://api.agent-sno.abi-ci.com:6443 to initialize...
W0416 09:35:51.793770 1589 reflector.go:535] k8s.io/client-go/tools/watch/informerwatcher.go:146: failed to list *v1.ClusterVersion: Get "https://api.agent-sno.abi-ci.com:6443/apis/config.openshift.io/v1/clusterversions?fieldSelector=metadata.name%3Dversion&limit=500&resourceVersion=0": dial tcp 10.244.64.4:6443: connect: connection refused
E0416 09:35:51.793827 1589 reflector.go:147] k8s.io/client-go/tools/watch/informerwatcher.go:146: Failed to watch *v1.ClusterVersion: failed to list *v1.ClusterVersion: Get "https://api.agent-sno.abi-ci.com:6443/apis/config.openshift.io/v1/clusterversions?fieldSelector=metadata.name%3Dversion&limit=500&resourceVersion=0": dial tcp 10.244.64.4:6443: connect: connection refused
W0416 09:35:53.127917 1589 reflector.go:535] k8s.io/client-go/tools/watch/informerwatcher.go:146: failed to list *v1.ClusterVersion: Get "https://api.agent-sno.abi-ci.com:6443/apis/config.openshift.io/v1/clusterversions?fieldSelector=metadata.name%3Dversion&limit=500&resourceVersion=0": dial tcp 10.244.64.4:6443: connect: connection refused
E0416 09:35:53.127946 1589 reflector.go:147] k8s.io/client-go/tools/watch/informerwatcher.go:146: Failed to watch *v1.ClusterVersion: failed to list *v1.ClusterVersion: Get "https://api.agent-sno.abi-ci.com:6443/apis/config.openshift.io/v1/clusterversions?fieldSelector=metadata.name%3Dversion&limit=500&resourceVersion=0": dial tcp 10.244.64.4:6443: connect: connection refused
... the identical W/E reflector pair repeats with increasing backoff until the 40m0s deadline; the final visible entry is W0416 10:15:17.227351, remaining lines not shown ...
{code}
Status: CLOSED
connection refused E0416 10:15:17.227424 1589 reflector.go:147] k8s.io/client-go/tools/watch/informerwatcher.go:146: Failed to watch *v1.ClusterVersion: failed to list *v1.ClusterVersion: Get "https://api.agent-sno.abi-ci.com:6443/apis/config.openshift.io/v1/clusterversions?fieldSelector=metadata.name%3Dversion&limit=500&resourceVersion=0": dial tcp 10.244.64.4:6443: connect: connection refused level=error msg=Attempted to gather ClusterOperator status after wait failure: listing ClusterOperator objects: Get "https://api.agent-sno.abi-ci.com:6443/apis/config.openshift.io/v1/clusteroperators": dial tcp 10.244.64.4:6443: connect: connection refused level=error msg=Cluster initialization failed because one or more operators are not functioning properly. level=error msg=The cluster should be accessible for troubleshooting as detailed in the documentation linked below, level=error msg=https://docs.openshift.com/container-platform/latest/support/troubleshooting/troubleshooting-installations.html level=error msg=The 'wait-for install-complete' subcommand can then be used to continue the installation level=error msg=failed to initialize the cluster: timed out waiting for the condition {"component":"entrypoint","error":"wrapped process failed: exit status 6","file":"k8s.io/test-infra/prow/entrypoint/run.go:84","func":"k8s.io/test-infra/prow/entrypoint.Options.internalRun","level":"error","msg":"Error executing test process","severity":"error","time":"2024-04-16T10:15:51Z"} error: failed to execute wrapped command: exit status 6 {code} Status: CLOSED | |||
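The repeating W/E pairs above are client-go's reflector: each cycle it re-runs its initial LIST of ClusterVersion (reflector.go:535), never gets far enough to start a WATCH (reflector.go:147), and backs off while the apiserver endpoint refuses connections. Below is a minimal sketch of the same list-with-retry against the clusterversions resource using the dynamic client; the kubeconfig path and the fixed 30-second retry interval are illustrative assumptions (the real reflector uses jittered backoff).
{code:go}
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig path, for illustration only.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	dyn, err := dynamic.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// ClusterVersion is cluster-scoped: config.openshift.io/v1, resource "clusterversions".
	gvr := schema.GroupVersionResource{
		Group: "config.openshift.io", Version: "v1", Resource: "clusterversions",
	}

	// Retry the same list the reflector issues (fieldSelector metadata.name=version);
	// while the endpoint is down, "connect: connection refused" surfaces as err here.
	for {
		_, err := dyn.Resource(gvr).List(context.TODO(), metav1.ListOptions{
			FieldSelector: "metadata.name=version",
		})
		if err == nil {
			fmt.Println("apiserver reachable, ClusterVersion listed")
			return
		}
		fmt.Println("list failed, retrying:", err)
		time.Sleep(30 * time.Second) // fixed sleep for brevity; the reflector backs off with jitter
	}
}
{code}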
#OCPBUGS-31763 | issue | 10 days ago | gcp install cluster creation fails after 30-40 minutes New |
Issue 15921939: gcp install cluster creation fails after 30-40 minutes
Description: Component Readiness has found a potential regression in "install should succeed: overall". I see this on various platforms, but started by digging into the GCP failures. No installer log bundle is created, which seriously hinders further debugging. Bootstrap succeeds, and then, after 30-40 minutes of waiting for cluster creation, the install dies. From [https://prow.ci.openshift.org/view/gs/test-platform-results/logs/periodic-ci-openshift-release-master-nightly-4.16-e2e-gcp-sdn-serial/1775871000018161664]
search.ci tells me this affects nearly 10% of jobs on GCP: [https://search.dptools.openshift.org/?search=Attempted+to+gather+ClusterOperator+status+after+installation+failure%3A+listing+ClusterOperator+objects.*connection+refused&maxAge=168h&context=1&type=bug%2Bissue%2Bjunit&name=.*4.16.*gcp.*&excludeName=&maxMatches=5&maxBytes=20971520&groupBy=job]
{code:java}
time="2024-04-04T13:27:50Z" level=info msg="Waiting up to 40m0s (until 2:07PM UTC) for the cluster at https://api.ci-op-n3pv5pn3-4e5f3.XXXXXXXXXXXXXXXXXXXXXX:6443 to initialize..."
time="2024-04-04T14:07:50Z" level=error msg="Attempted to gather ClusterOperator status after installation failure: listing ClusterOperator objects: Get \"https://api.ci-op-n3pv5pn3-4e5f3.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/config.openshift.io/v1/clusteroperators\": dial tcp 35.238.130.20:6443: connect: connection refused"
time="2024-04-04T14:07:50Z" level=error msg="Cluster initialization failed because one or more operators are not functioning properly.\nThe cluster should be accessible for troubleshooting as detailed in the documentation linked below,\nhttps://docs.openshift.com/container-platform/latest/support/troubleshooting/troubleshooting-installations.html\nThe 'wait-for install-complete' subcommand can then be used to continue the installation"
time="2024-04-04T14:07:50Z" level=error msg="failed to initialize the cluster: timed out waiting for the condition"
{code}
Probability of significant regression: 99.44%
Sample (being evaluated) Release: 4.16
Start Time: 2024-03-29T00:00:00Z
End Time: 2024-04-04T23:59:59Z
Success Rate: 68.75%
Successes: 11
Failures: 5
Flakes: 0
Base (historical) Release: 4.15
Start Time: 2024-02-01T00:00:00Z
End Time: 2024-02-28T23:59:59Z
Success Rate: 96.30%
Successes: 52
Failures: 2
Flakes: 0
View the test details report at [https://sippy.dptools.openshift.org/sippy-ng/component_readiness/test_details?arch=amd64&arch=amd64&baseEndTime=2024-02-28%2023%3A59%3A59&baseRelease=4.15&baseStartTime=2024-02-01%2000%3A00%3A00&capability=Other&component=Installer%20%2F%20openshift-installer&confidence=95&environment=sdn%20upgrade-micro%20amd64%20gcp%20standard&excludeArches=arm64%2Cheterogeneous%2Cppc64le%2Cs390x&excludeClouds=openstack%2Cibmcloud%2Clibvirt%2Covirt%2Cunknown&excludeVariants=hypershift%2Cosd%2Cmicroshift%2Ctechpreview%2Csingle-node%2Cassisted%2Ccompact&groupBy=cloud%2Carch%2Cnetwork&ignoreDisruption=true&ignoreMissing=false&minFail=3&network=sdn&network=sdn&pity=5&platform=gcp&platform=gcp&sampleEndTime=2024-04-04%2023%3A59%3A59&sampleRelease=4.16&sampleStartTime=2024-03-29%2000%3A00%3A00&testId=cluster%20install%3A0cb1bb27e418491b1ffdacab58c5c8c0&testName=install%20should%20succeed%3A%20overall&upgrade=upgrade-micro&upgrade=upgrade-micro&variant=standard&variant=standard]
Status: New | |||
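For reference, the quoted success rates follow directly from the counts above; a minimal sketch of the arithmetic (the 99.44% regression probability comes from Component Readiness's statistical comparison of sample against base and is not reproduced here):
{code:go}
package main

import "fmt"

// successRate returns the percentage of successful runs.
func successRate(successes, failures int) float64 {
	return 100 * float64(successes) / float64(successes+failures)
}

func main() {
	// Counts quoted in the report above.
	fmt.Printf("sample (4.16): %.2f%%\n", successRate(11, 5)) // 68.75%
	fmt.Printf("base   (4.15): %.2f%%\n", successRate(52, 2)) // 96.30%
}
{code}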
#OCPBUGS-17183 | issue | 2 days ago | [BUG] Assisted installer fails to create bond with active backup for single node installation New |
Issue 15401516: [BUG] Assisted installer fails to create bond with active backup for single node installation
Description: Description of problem:
{code:none}
The assisted installer always fails to create a bond in active-backup mode from the nmstate YAML, and the errors are:
~~~
Jul 26 07:11:47 <hostname> bootkube.sh[8366]: Unable to reach API_URL's https endpoint at https://xx.xx.32.40:6443/version
Jul 26 07:11:47 <hostname> bootkube.sh[8366]: Checking validity of <hostname> of type API_INT_URL
Jul 26 07:11:47 <hostname> bootkube.sh[8366]: Successfully resolved API_INT_URL <hostname>
Jul 26 07:11:47 <hostname> bootkube.sh[8366]: Unable to reach API_INT_URL's https endpoint at https://xx.xx.32.40:6443/version
Jul 26 07:12:23 <hostname> bootkube.sh[12960]: Still waiting for the Kubernetes API: Get "https://localhost:6443/readyz": dial tcp [::1]:6443: connect: connection refused
Jul 26 07:15:15 <hostname> bootkube.sh[15706]: The connection to the server <hostname>:6443 was refused - did you specify the right host or port?
Jul 26 07:15:15 <hostname> bootkube.sh[15706]: The connection to the server <hostname>:6443 was refused - did you specify the right host or port?
~~~
Where <hostname> is the actual hostname of the node. Sosreport and nmstate YAML file: https://drive.google.com/drive/u/0/folders/19dNzKUPIMmnUls2pT_stuJxr2Dxdi5eb{code}
Version-Release number of selected component (if applicable):
{code:none}
4.12
Dell 16g PowerEdge R660{code}
How reproducible:
{code:none}
Always at the customer side{code}
Steps to Reproduce:
{code:none}
1. Open the Assisted Installer UI (console.redhat.com -> assisted installer).
2. Add the network config below for host1:
-----------
interfaces:
- name: bond99
  type: bond
  state: up
  ipv4:
    address:
    - ip: xx.xx.32.40
      prefix-length: 24
    enabled: true
  link-aggregation:
    mode: active-backup
    options:
      miimon: '140'
    port:
    - eno12399
    - eno12409
dns-resolver:
  config:
    search:
    - xxxx
    server:
    - xx.xx.xx.xx
routes:
  config:
  - destination: 0.0.0.0/0
    metric: 150
    next-hop-address: xx.xx.xx.xx
    next-hop-interface: bond99
    table-id: 254
-----------
3. Enter the MAC addresses of the interfaces in the fields.
4. Generate the ISO and boot the node. The node cannot be reached by ping or SSH. This happens every time and is reproducible.
5. With SSH unavailable there was no way to inspect the node directly, so the root password was reset; the IP address was present on the bond, yet ping/SSH still did not work.
6. After multiple reboots the customer was able to ssh/ping and provided a sosreport, whose journal logs show the errors above.
{code}
Actual results:
{code:none}
Installation fails; there appears to be a networking issue.{code}
Expected results:
{code:none}
Installation proceeds without the issues above{code}
Additional info:
{code:none}
- The installation works with round-robin bond mode in 4.12.
- The installation also works with active-backup in 4.10.
- Only an active-backup bond with 4.12 fails.{code}
Status: New | |||
#OCPBUGS-32091 | issue | 4 weeks ago | CAPI-Installer leaks processes during unsuccessful installs MODIFIED |
ERROR Attempted to gather debug logs after installation failure: failed to create SSH client: ssh: handshake failed: ssh: disconnect, reason 2: Too many authentication failures ERROR Attempted to gather ClusterOperator status after installation failure: listing ClusterOperator objects: Get "https://api.gpei-0515.qe.devcluster.openshift.com:6443/apis/config.openshift.io/v1/clusteroperators": dial tcp 3.134.9.157:6443: connect: connection refused ERROR Bootstrap failed to complete: Get "https://api.gpei-0515.qe.devcluster.openshift.com:6443/version": dial tcp 18.222.8.23:6443: connect: connection refused ... 1 lines not shown | |||
periodic-ci-openshift-release-master-ci-4.16-e2e-aws-ovn-upgrade (all) - 40 runs, 5% failed, 1650% of failures match = 83% impact | |||
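The header's percentages are self-consistent if "impact" is read as matching runs over total runs; a sketch under that assumption: 5% of 40 runs is 2 failed runs, 1650% of those 2 failures is 33 matching runs (presumably matches are counted across all runs, including ones that flaked or later passed, which is how the figure exceeds 100%), and 33/40 rounds to 83%.
{code:go}
package main

import "fmt"

func main() {
	runs := 40
	failedPct := 0.05      // "5% failed"
	matchOfFailPct := 16.5 // "1650% of failures match"

	failures := int(failedPct * float64(runs))         // 2 failed runs
	matches := int(matchOfFailPct * float64(failures)) // 33 matching runs
	impact := 100 * float64(matches) / float64(runs)   // 82.5, reported as 83%

	fmt.Printf("failures=%d matches=%d impact=%.1f%%\n", failures, matches, impact)
}
{code}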
#1791734353705832448 | junit | 17 hours ago | |
namespace/openshift-cloud-controller-manager node/ip-10-0-115-35.us-east-2.compute.internal pod/aws-cloud-controller-manager-655c9dfd6-5xv4j uid/8e6bcd27-470c-4d54-9476-ca80f029c6d5 container/cloud-controller-manager restarted 1 times: cause/Error code/2 reason/ContainerExit er-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.48.220:6443: connect: connection refused I0518 08:02:19.902893 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206 | |||
#1791734353705832448 | junit | 17 hours ago | |
I0518 08:05:56.039201 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/authentication/request/headerrequest/requestheader_controller.go:172 E0518 08:05:56.994692 1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-rpy1kgkl-4b138.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.48.220:6443: connect: connection refused I0518 08:06:00.863788 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206 | |||
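The leaderelection.go:332 lines above show the cloud-controller-manager failing to renew its lease while the api-int endpoint refuses connections; once the renew deadline passes without a successful update, the process gives up leadership and exits, which is why the container shows a restart. Below is a minimal sketch of the same client-go leader-election pattern with a Lease lock; the kubeconfig path and the durations are illustrative assumptions (the 53.5s in the logs is the per-request timeout, not the lease duration).
{code:go}
package main

import (
	"context"
	"log"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // assumed path
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	id, _ := os.Hostname()
	lock := &resourcelock.LeaseLock{
		LeaseMeta: metav1.ObjectMeta{
			Name:      "cloud-controller-manager",
			Namespace: "openshift-cloud-controller-manager",
		},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: id},
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 137 * time.Second, // illustrative values, not taken from the job
		RenewDeadline: 107 * time.Second,
		RetryPeriod:   26 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				log.Println("became leader; run controllers here")
			},
			OnStoppedLeading: func() {
				// Renewals failed (e.g. connection refused) past RenewDeadline.
				log.Fatal("lost lease, exiting so the pod restarts")
			},
		},
	})
}
{code}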
#1791773509198811136 | junit | 14 hours ago | |
namespace/openshift-cloud-controller-manager node/ip-10-0-58-194.ec2.internal pod/aws-cloud-controller-manager-655c9dfd6-h8whn uid/209cf403-39f7-481f-9e55-e6f14e74e966 container/cloud-controller-manager restarted 1 times: cause/Error code/2 reason/ContainerExit =53.5s": dial tcp 10.0.37.22:6443: connect: connection refused I0518 10:39:01.814602 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/authentication/request/headerrequest/requestheader_controller.go:172 | |||
#1791773509198811136 | junit | 14 hours ago | |
I0518 10:39:05.218249 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206 E0518 10:42:04.026525 1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-r047nvdn-4b138.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.106.19:6443: connect: connection refused E0518 10:42:12.730517 1 reflector.go:147] k8s.io/apiserver/pkg/authentication/request/headerrequest/requestheader_controller.go:172: Failed to watch *v1.ConfigMap: unknown (get configmaps) | |||
#1791851890212868096 | junit | 9 hours ago | |
I0518 15:42:07.343926 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1716046614\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1716046614\" (2024-05-18 14:36:54 +0000 UTC to 2025-05-18 14:36:54 +0000 UTC (now=2024-05-18 15:42:07.343907984 +0000 UTC))" E0518 15:47:39.442697 1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-23ify9d4-4b138.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.34.210:6443: connect: connection refused I0518 15:47:47.458432 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/authentication/request/headerrequest/requestheader_controller.go:172 | |||
#1791851890212868096 | junit | 9 hours ago | |
I0518 16:30:14.546124 1 observer_polling.go:159] Starting file observer W0518 16:30:14.565617 1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-101-87.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused I0518 16:30:14.565759 1 builder.go:299] check-endpoints version 4.16.0-202405161711.p0.g313bc06.assembly.stream.el9-313bc06-313bc06912503443d3422da07b4ea4ea57b9fc84 | |||
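The builder.go:267 warning above comes from the check-endpoints startup path: it GETs its own pod so objects it creates can carry an owner reference, and when localhost:6443 refuses the connection it falls back to recording only the namespace. A minimal sketch of that lookup, assuming the pod name and namespace are injected through the downward API:
{code:go}
package main

import (
	"context"
	"fmt"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// POD_NAME/POD_NAMESPACE assumed to be injected via the downward API.
	name, ns := os.Getenv("POD_NAME"), os.Getenv("POD_NAMESPACE")

	pod, err := client.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		// This is the failure in the logs: localhost:6443 refuses the connection,
		// so the caller proceeds with only the namespace as its reference.
		fmt.Println("unable to get owner reference (falling back to namespace):", err)
		return
	}
	fmt.Printf("pod UID %s, %d owner references\n", pod.UID, len(pod.OwnerReferences))
}
{code}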
#1791693027660533760 | junit | 19 hours ago | |
1 tests failed during this blip (2024-05-18 06:13:24.565423661 +0000 UTC m=+1790.083326910 to 2024-05-18 06:13:24.565423661 +0000 UTC m=+1790.083326910): [sig-arch][Feature:ClusterUpgrade] Cluster should remain functional during upgrade [Disruptive] [Serial] (exception: We are not worried about Degraded=True blips for update tests yet.) May 18 06:13:54.842 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-118-72.ec2.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0518 06:13:47.075550 1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0518 06:13:47.075828 1 crypto.go:601] Generating new CA for check-endpoints-signer@1716012827 cert, and key in /tmp/serving-cert-1191486825/serving-signer.crt, /tmp/serving-cert-1191486825/serving-signer.key\nStaticPodsDegraded: I0518 06:13:47.285742 1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0518 06:13:47.287457 1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-118-72.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0518 06:13:47.287660 1 builder.go:299] check-endpoints version 4.16.0-202405161711.p0.g313bc06.assembly.stream.el9-313bc06-313bc06912503443d3422da07b4ea4ea57b9fc84\nStaticPodsDegraded: I0518 06:13:47.288347 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1191486825/tls.crt::/tmp/serving-cert-1191486825/tls.key"\nStaticPodsDegraded: F0518 06:13:47.569243 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready 1 tests failed during this blip (2024-05-18 06:13:54.842279208 +0000 UTC m=+1820.360182457 to 2024-05-18 06:13:54.842279208 +0000 UTC m=+1820.360182457): [sig-arch][Feature:ClusterUpgrade] Cluster should remain functional during upgrade [Disruptive] [Serial] (exception: Degraded=False is the happy case) | |||
#1791693027660533760 | junit | 19 hours ago | |
I0518 06:13:45.685223 1 observer_polling.go:159] Starting file observer W0518 06:13:45.702488 1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-118-72.ec2.internal": dial tcp [::1]:6443: connect: connection refused I0518 06:13:45.702689 1 builder.go:299] check-endpoints version 4.16.0-202405161711.p0.g313bc06.assembly.stream.el9-313bc06-313bc06912503443d3422da07b4ea4ea57b9fc84 | |||
#1791307597584797696 | junit | 44 hours ago | |
May 17 04:53:28.645 - 27s E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-101-76.us-west-1.compute.internal" not ready since 2024-05-17 04:53:23 +0000 UTC because KubeletNotReady ([container runtime status check may not have completed yet, container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?]) (exception: We are not worried about Degraded=True blips for update tests yet.) May 17 04:53:56.573 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-101-76.us-west-1.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0517 04:53:48.924339 1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0517 04:53:48.924542 1 crypto.go:601] Generating new CA for check-endpoints-signer@1715921628 cert, and key in /tmp/serving-cert-1242917750/serving-signer.crt, /tmp/serving-cert-1242917750/serving-signer.key\nStaticPodsDegraded: I0517 04:53:49.174403 1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0517 04:53:49.175801 1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-101-76.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0517 04:53:49.175928 1 builder.go:299] check-endpoints version 4.16.0-202405161711.p0.g313bc06.assembly.stream.el9-313bc06-313bc06912503443d3422da07b4ea4ea57b9fc84\nStaticPodsDegraded: I0517 04:53:49.176517 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1242917750/tls.crt::/tmp/serving-cert-1242917750/tls.key"\nStaticPodsDegraded: F0517 04:53:49.480061 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case) May 17 04:58:16.611 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-66-117.us-west-1.compute.internal" not ready since 2024-05-17 04:56:16 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.) | |||
#1791307597584797696 | junit | 44 hours ago | |
May 17 05:03:34.078 - 4s E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-46-115.us-west-1.compute.internal" not ready since 2024-05-17 05:03:04 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.) May 17 05:03:39.034 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-46-115.us-west-1.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0517 05:03:31.506772 1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0517 05:03:31.507062 1 crypto.go:601] Generating new CA for check-endpoints-signer@1715922211 cert, and key in /tmp/serving-cert-1269245982/serving-signer.crt, /tmp/serving-cert-1269245982/serving-signer.key\nStaticPodsDegraded: I0517 05:03:31.875884 1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0517 05:03:31.877313 1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-46-115.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0517 05:03:31.877428 1 builder.go:299] check-endpoints version 4.16.0-202405161711.p0.g313bc06.assembly.stream.el9-313bc06-313bc06912503443d3422da07b4ea4ea57b9fc84\nStaticPodsDegraded: I0517 05:03:31.877968 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1269245982/tls.crt::/tmp/serving-cert-1269245982/tls.key"\nStaticPodsDegraded: F0517 05:03:32.072420 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case) | |||
#1791442514641686528 | junit | 36 hours ago | |
I0517 13:57:20.885688 1 observer_polling.go:159] Starting file observer W0517 13:57:20.895624 1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-113-42.ec2.internal": dial tcp [::1]:6443: connect: connection refused I0517 13:57:20.895754 1 builder.go:299] check-endpoints version 4.16.0-202405161711.p0.g313bc06.assembly.stream.el9-313bc06-313bc06912503443d3422da07b4ea4ea57b9fc84 ... 3 lines not shown | |||
#1791467013210640384 | junit | 34 hours ago | |
I0517 15:22:23.619476 1 observer_polling.go:159] Starting file observer W0517 15:22:23.628726 1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-10-194.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused I0517 15:22:23.629015 1 builder.go:299] check-endpoints version 4.16.0-202405140946.p0.g313bc06.assembly.stream.el9-313bc06-313bc06912503443d3422da07b4ea4ea57b9fc84 ... 3 lines not shown | |||
#1791531426236076032 | junit | 29 hours ago | |
I0517 18:42:25.076220 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206 E0517 18:45:07.842256 1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-qfgfx8j9-4b138.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.101.60:6443: connect: connection refused I0517 18:45:23.739788 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206 | |||
#1791531426236076032 | junit | 29 hours ago | |
I0517 19:55:26.372710 1 observer_polling.go:159] Starting file observer W0517 19:55:26.391251 1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-102-150.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused I0517 19:55:26.391440 1 builder.go:299] check-endpoints version 4.16.0-202405161711.p0.g313bc06.assembly.stream.el9-313bc06-313bc06912503443d3422da07b4ea4ea57b9fc84 | |||
#1791037614342541312 | junit | 2 days ago | |
I0516 11:15:04.415496 1 observer_polling.go:159] Starting file observer W0516 11:15:04.432181 1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-24-133.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused I0516 11:15:04.432323 1 builder.go:299] check-endpoints version 4.16.0-202405151511.p0.g313bc06.assembly.stream.el9-313bc06-313bc06912503443d3422da07b4ea4ea57b9fc84 ... 3 lines not shown | |||
#1790538781527379968 | junit | 3 days ago | |
I0515 01:58:07.029743 1 observer_polling.go:159] Starting file observer W0515 01:58:07.042070 1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-25-198.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused I0515 01:58:07.042167 1 builder.go:299] check-endpoints version 4.16.0-202405140946.p0.g313bc06.assembly.stream.el9-313bc06-313bc06912503443d3422da07b4ea4ea57b9fc84 ... 3 lines not shown | |||
#1790941113515773952 | junit | 2 days ago | |
I0516 04:39:52.584142 1 observer_polling.go:159] Starting file observer W0516 04:39:52.614292 1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-46-132.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused I0516 04:39:52.614469 1 builder.go:299] check-endpoints version 4.16.0-202405151511.p0.g313bc06.assembly.stream.el9-313bc06-313bc06912503443d3422da07b4ea4ea57b9fc84 ... 3 lines not shown | |||
#1790320898838892544 | junit | 4 days ago | |
E0514 10:27:20.909660 1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-i8btbql2-4b138.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) E0514 10:28:11.394933 1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-i8btbql2-4b138.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.78.118:6443: connect: connection refused I0514 10:28:19.302432 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/authentication/request/headerrequest/requestheader_controller.go:172 | |||
#1790320898838892544 | junit | 4 days ago | |
I0514 11:34:31.531401 1 observer_polling.go:159] Starting file observer W0514 11:34:31.539452 1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-49-36.ec2.internal": dial tcp [::1]:6443: connect: connection refused I0514 11:34:31.539579 1 builder.go:299] check-endpoints version 4.16.0-202405110441.p0.gb352992.assembly.stream.el9-b352992-b352992903d6644a22d2618d640479633d9901a3 | |||
#1789891892738002944 | junit | 5 days ago | |
I0513 07:09:10.516846 1 observer_polling.go:159] Starting file observer W0513 07:09:10.527589 1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-125-255.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused I0513 07:09:10.527840 1 builder.go:299] check-endpoints version 4.16.0-202405110441.p0.gb352992.assembly.stream.el9-b352992-b352992903d6644a22d2618d640479633d9901a3 ... 3 lines not shown | |||
#1789969333141639168 | junit | 5 days ago | |
May 13 12:06:52.628 - 7s E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-21-54.us-west-1.compute.internal" not ready since 2024-05-13 12:06:36 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.) May 13 12:07:00.480 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-21-54.us-west-1.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0513 12:06:51.451568 1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0513 12:06:51.451978 1 crypto.go:601] Generating new CA for check-endpoints-signer@1715602011 cert, and key in /tmp/serving-cert-764641913/serving-signer.crt, /tmp/serving-cert-764641913/serving-signer.key\nStaticPodsDegraded: I0513 12:06:52.278527 1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0513 12:06:52.294438 1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-21-54.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0513 12:06:52.294624 1 builder.go:299] check-endpoints version 4.16.0-202405110441.p0.gb352992.assembly.stream.el9-b352992-b352992903d6644a22d2618d640479633d9901a3\nStaticPodsDegraded: I0513 12:06:52.307859 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-764641913/tls.crt::/tmp/serving-cert-764641913/tls.key"\nStaticPodsDegraded: F0513 12:06:52.531774 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case) May 13 12:11:24.664 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-78-225.us-west-1.compute.internal" not ready since 2024-05-13 12:09:24 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.) | |||
#1789969333141639168 | junit | 5 days ago | |
I0513 12:06:52.278527 1 observer_polling.go:159] Starting file observer W0513 12:06:52.294438 1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-21-54.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused I0513 12:06:52.294624 1 builder.go:299] check-endpoints version 4.16.0-202405110441.p0.gb352992.assembly.stream.el9-b352992-b352992903d6644a22d2618d640479633d9901a3 | |||
#1789828563029987328 | junit | 5 days ago | |
May 13 02:57:44.510 - 6s E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-118-67.us-east-2.compute.internal" not ready since 2024-05-13 02:57:31 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.) May 13 02:57:51.482 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-118-67.us-east-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0513 02:57:45.069644 1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0513 02:57:45.070008 1 crypto.go:601] Generating new CA for check-endpoints-signer@1715569065 cert, and key in /tmp/serving-cert-3307720750/serving-signer.crt, /tmp/serving-cert-3307720750/serving-signer.key\nStaticPodsDegraded: I0513 02:57:45.405375 1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0513 02:57:45.406818 1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-118-67.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0513 02:57:45.406928 1 builder.go:299] check-endpoints version 4.16.0-202405110441.p0.gb352992.assembly.stream.el9-b352992-b352992903d6644a22d2618d640479633d9901a3\nStaticPodsDegraded: I0513 02:57:45.407520 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3307720750/tls.crt::/tmp/serving-cert-3307720750/tls.key"\nStaticPodsDegraded: F0513 02:57:45.517669 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case) May 13 03:02:11.456 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-21-193.us-east-2.compute.internal" not ready since 2024-05-13 03:02:04 +0000 UTC because KubeletNotReady ([container runtime status check may not have completed yet, container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?]) (exception: We are not worried about Degraded=True blips for update tests yet.) | |||
#1789828563029987328 | junit | 5 days ago | |
I0513 02:57:44.001093 1 observer_polling.go:159] Starting file observer W0513 02:57:44.011199 1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-118-67.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused I0513 02:57:44.011474 1 builder.go:299] check-endpoints version 4.16.0-202405110441.p0.gb352992.assembly.stream.el9-b352992-b352992903d6644a22d2618d640479633d9901a3 | |||
#1789589378847215616 | junit | 6 days ago | |
May 12 11:02:45.544 - 26s E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-118-66.us-west-1.compute.internal" not ready since 2024-05-12 11:00:45 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.) May 12 11:03:11.719 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-118-66.us-west-1.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0512 11:03:04.060643 1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0512 11:03:04.061011 1 crypto.go:601] Generating new CA for check-endpoints-signer@1715511784 cert, and key in /tmp/serving-cert-992216729/serving-signer.crt, /tmp/serving-cert-992216729/serving-signer.key\nStaticPodsDegraded: I0512 11:03:04.353385 1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0512 11:03:04.354530 1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-118-66.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0512 11:03:04.354633 1 builder.go:299] check-endpoints version 4.16.0-202405110441.p0.gb352992.assembly.stream.el9-b352992-b352992903d6644a22d2618d640479633d9901a3\nStaticPodsDegraded: I0512 11:03:04.355232 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-992216729/tls.crt::/tmp/serving-cert-992216729/tls.key"\nStaticPodsDegraded: F0512 11:03:04.712404 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case) May 12 11:07:27.090 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-70-92.us-west-1.compute.internal" not ready since 2024-05-12 11:07:26 +0000 UTC because KubeletNotReady ([container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful, container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?]) (exception: We are not worried about Degraded=True blips for update tests yet.) | |||
#1789589378847215616 | junit | 6 days ago | |
May 12 11:08:00.794 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-70-92.us-west-1.compute.internal container "kube-apiserver-check-endpoints" is waiting: CrashLoopBackOff: back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-ip-10-0-70-92.us-west-1.compute.internal_openshift-kube-apiserver(571c0794430d1541facd5cf19b67fe3a) (exception: Degraded=False is the happy case) May 12 11:13:26.736 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady::StaticPods_Error status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-10-222.us-west-1.compute.internal" not ready since 2024-05-12 11:12:59 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?)\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-10-222.us-west-1.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0512 11:13:23.631293 1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0512 11:13:23.631594 1 crypto.go:601] Generating new CA for check-endpoints-signer@1715512403 cert, and key in /tmp/serving-cert-3423121857/serving-signer.crt, /tmp/serving-cert-3423121857/serving-signer.key\nStaticPodsDegraded: I0512 11:13:24.109502 1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0512 11:13:24.118993 1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-10-222.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0512 11:13:24.119139 1 builder.go:299] check-endpoints version 4.16.0-202405110441.p0.gb352992.assembly.stream.el9-b352992-b352992903d6644a22d2618d640479633d9901a3\nStaticPodsDegraded: I0512 11:13:24.136662 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3423121857/tls.crt::/tmp/serving-cert-3423121857/tls.key"\nStaticPodsDegraded: F0512 11:13:24.313363 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: We are not worried about Degraded=True blips for update tests yet.) May 12 11:13:26.736 - 6s E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady::StaticPods_Error status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-10-222.us-west-1.compute.internal" not ready since 2024-05-12 11:12:59 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?)\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-10-222.us-west-1.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0512 11:13:23.631293 1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0512 11:13:23.631594 1 crypto.go:601] Generating new CA for check-endpoints-signer@1715512403 cert, and key in /tmp/serving-cert-3423121857/serving-signer.crt, /tmp/serving-cert-3423121857/serving-signer.key\nStaticPodsDegraded: I0512 11:13:24.109502 1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0512 11:13:24.118993 1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-10-222.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0512 11:13:24.119139 1 builder.go:299] check-endpoints version 4.16.0-202405110441.p0.gb352992.assembly.stream.el9-b352992-b352992903d6644a22d2618d640479633d9901a3\nStaticPodsDegraded: I0512 11:13:24.136662 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3423121857/tls.crt::/tmp/serving-cert-3423121857/tls.key"\nStaticPodsDegraded: F0512 11:13:24.313363 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: We are not worried about Degraded=True blips for update tests yet.) ... 1 lines not shown | |||
#1789756025381851136 | junit | 6 days ago | |
I0512 22:05:56.810570 1 observer_polling.go:159] Starting file observer W0512 22:05:56.821223 1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-107-208.ec2.internal": dial tcp [::1]:6443: connect: connection refused I0512 22:05:56.821315 1 builder.go:299] check-endpoints version 4.16.0-202405110441.p0.gb352992.assembly.stream.el9-b352992-b352992903d6644a22d2618d640479633d9901a3 ... 3 lines not shown | |||
#1789655229122220032 | junit | 6 days ago | |
I0512 15:18:26.578167 1 observer_polling.go:159] Starting file observer W0512 15:18:26.600108 1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-31-88.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused I0512 15:18:26.600306 1 builder.go:299] check-endpoints version 4.16.0-202405110441.p0.gb352992.assembly.stream.el9-b352992-b352992903d6644a22d2618d640479633d9901a3 ... 3 lines not shown | |||
#1789507109004513280 | junit | 6 days ago | |
May 12 05:44:31.471 - 8s E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-68-190.ec2.internal" not ready since 2024-05-12 05:44:20 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.) May 12 05:44:40.230 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-68-190.ec2.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0512 05:44:33.804008 1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0512 05:44:33.804289 1 crypto.go:601] Generating new CA for check-endpoints-signer@1715492673 cert, and key in /tmp/serving-cert-1355690364/serving-signer.crt, /tmp/serving-cert-1355690364/serving-signer.key\nStaticPodsDegraded: I0512 05:44:34.213426 1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0512 05:44:34.214768 1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-68-190.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0512 05:44:34.214915 1 builder.go:299] check-endpoints version 4.16.0-202405110441.p0.gb352992.assembly.stream.el9-b352992-b352992903d6644a22d2618d640479633d9901a3\nStaticPodsDegraded: I0512 05:44:34.215524 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1355690364/tls.crt::/tmp/serving-cert-1355690364/tls.key"\nStaticPodsDegraded: F0512 05:44:34.646395 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case) May 12 05:49:18.328 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-102-249.ec2.internal" not ready since 2024-05-12 05:49:13 +0000 UTC because KubeletNotReady ([container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful, container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?]) (exception: We are not worried about Degraded=True blips for update tests yet.) | |||
#1789507109004513280 | junit | 6 days ago | |
cause/Error code/2 reason/ContainerExit " (2024-05-12 03:22:47 +0000 UTC to 2025-05-12 03:22:47 +0000 UTC (now=2024-05-12 04:28:01.712377437 +0000 UTC))" E0512 04:31:33.255861 1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-8dh363gg-4b138.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.90.182:6443: connect: connection refused I0512 04:31:52.678383 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/authentication/request/headerrequest/requestheader_controller.go:172 | |||
#1789453221400416256 | junit | 6 days ago | |
I0512 01:55:09.132667 1 observer_polling.go:159] Starting file observer W0512 01:55:09.147415 1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-27-13.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused I0512 01:55:09.147602 1 builder.go:299] check-endpoints version 4.16.0-202405110441.p0.gb352992.assembly.stream.el9-b352992-b352992903d6644a22d2618d640479633d9901a3 ... 3 lines not shown | |||
#1789333580145496064 | junit | 7 days ago | |
May 11 18:08:09.914 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-66-30.ec2.internal container "kube-apiserver-check-endpoints" is waiting: CrashLoopBackOff: back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-ip-10-0-66-30.ec2.internal_openshift-kube-apiserver(70fe151558637fcee2dafefe256a3776)
NodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
May 11 18:13:03.613 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady::StaticPods_Error status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-10-180.ec2.internal" not ready since 2024-05-11 18:12:47 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?)
StaticPodsDegraded: pod/kube-apiserver-ip-10-0-10-180.ec2.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0511 18:13:00.284119 1 cmd.go:245] Using insecure, self-signed certificates
StaticPodsDegraded: I0511 18:13:00.284588 1 crypto.go:601] Generating new CA for check-endpoints-signer@1715451180 cert, and key in /tmp/serving-cert-2887510196/serving-signer.crt, /tmp/serving-cert-2887510196/serving-signer.key
StaticPodsDegraded: I0511 18:13:00.881394 1 observer_polling.go:159] Starting file observer
StaticPodsDegraded: W0511 18:13:00.893813 1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-10-180.ec2.internal": dial tcp [::1]:6443: connect: connection refused
StaticPodsDegraded: I0511 18:13:00.893932 1 builder.go:299] check-endpoints version 4.16.0-202405110441.p0.gb352992.assembly.stream.el9-b352992-b352992903d6644a22d2618d640479633d9901a3
StaticPodsDegraded: I0511 18:13:00.910538 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2887510196/tls.crt::/tmp/serving-cert-2887510196/tls.key"
StaticPodsDegraded: F0511 18:13:01.112643 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused
StaticPodsDegraded: (exception: We are not worried about Degraded=True blips for update tests yet.)
May 11 18:13:03.613 - 5s E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady::StaticPods_Error status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-10-180.ec2.internal" not ready since 2024-05-11 18:12:47 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?)
StaticPodsDegraded: pod/kube-apiserver-ip-10-0-10-180.ec2.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0511 18:13:00.284119 1 cmd.go:245] Using insecure, self-signed certificates
StaticPodsDegraded: I0511 18:13:00.284588 1 crypto.go:601] Generating new CA for check-endpoints-signer@1715451180 cert, and key in /tmp/serving-cert-2887510196/serving-signer.crt, /tmp/serving-cert-2887510196/serving-signer.key
StaticPodsDegraded: I0511 18:13:00.881394 1 observer_polling.go:159] Starting file observer
StaticPodsDegraded: W0511 18:13:00.893813 1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-10-180.ec2.internal": dial tcp [::1]:6443: connect: connection refused
StaticPodsDegraded: I0511 18:13:00.893932 1 builder.go:299] check-endpoints version 4.16.0-202405110441.p0.gb352992.assembly.stream.el9-b352992-b352992903d6644a22d2618d640479633d9901a3
StaticPodsDegraded: I0511 18:13:00.910538 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2887510196/tls.crt::/tmp/serving-cert-2887510196/tls.key"
StaticPodsDegraded: F0511 18:13:01.112643 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused
StaticPodsDegraded: (exception: We are not worried about Degraded=True blips for update tests yet.)
... 1 lines not shown
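The Degraded transitions recorded in these intervals are the kube-apiserver ClusterOperator's Degraded condition changing over time. A hedged sketch of reading the same condition from a live cluster with the generated OpenShift config client follows; the kubeconfig path is a placeholder.

package main

import (
	"context"
	"fmt"
	"log"

	configv1 "github.com/openshift/api/config/v1"
	configclient "github.com/openshift/client-go/config/clientset/versioned"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		log.Fatal(err)
	}
	client := configclient.NewForConfigOrDie(cfg)

	co, err := client.ConfigV1().ClusterOperators().Get(context.Background(), "kube-apiserver", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	// Reasons such as NodeController_MasterNodesReady::StaticPods_Error in the
	// intervals above come straight from this condition's Reason field.
	for _, c := range co.Status.Conditions {
		if c.Type == configv1.OperatorDegraded {
			fmt.Printf("Degraded=%s reason=%s\n%s\n", c.Status, c.Reason, c.Message)
		}
	}
}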
#1788969212580990976 | junit | 8 days ago | |
E0510 16:49:35.509931 1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-9zt33h60-4b138.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
E0510 16:51:14.378148 1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-9zt33h60-4b138.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.119.76:6443: connect: connection refused
I0510 16:51:41.899664 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206
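Leader-election errors like these are expected to be transient during an API-server rollout: client-go retries the lease GET every RetryPeriod and only surrenders leadership once RenewDeadline expires, so a few refused or timed-out requests do not cost the lease. A minimal sketch of that wiring against the same namespace and lease name, with illustrative durations and identity (not the cloud-controller-manager's actual configuration):

package main

import (
	"context"
	"log"
	"time"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Lease lock in the namespace seen in the logs above.
	lock, err := resourcelock.New(
		resourcelock.LeasesResourceLock,
		"openshift-cloud-controller-manager", "cloud-controller-manager",
		client.CoreV1(), client.CoordinationV1(),
		resourcelock.ResourceLockConfig{Identity: "example-holder"}, // placeholder identity
	)
	if err != nil {
		log.Fatal(err)
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 137 * time.Second, // illustrative values; the real
		RenewDeadline: 107 * time.Second, // operator's settings may differ
		RetryPeriod:   26 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) { log.Print("started leading") },
			// Only reached after RenewDeadline of consecutive failures,
			// not after a single "error retrieving resource lock".
			OnStoppedLeading: func() { log.Print("lost lease") },
		},
	})
}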
#1788969212580990976 | junit | 8 days ago | |
I0510 17:59:34.280278 1 observer_polling.go:159] Starting file observer
W0510 17:59:34.294649 1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-117-15.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0510 17:59:34.294888 1 builder.go:299] check-endpoints version 4.16.0-202405110441.p0.gb352992.assembly.stream.el9-b352992-b352992903d6644a22d2618d640479633d9901a3
#1788335473702211584 | junit | 10 days ago | |
I0509 00:00:33.380798 1 observer_polling.go:159] Starting file observer
W0509 00:00:33.407938 1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-121-66.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0509 00:00:33.408114 1 builder.go:299] check-endpoints version 4.16.0-202405080039.p0.g8fae6b5.assembly.stream.el9-8fae6b5-8fae6b51944c1f90e6875f55072a3d85db40e743
... 3 lines not shown
#1788251339445243904 | junit | 10 days ago | |
May 08 18:28:36.878 - 8s E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-2-14.us-west-1.compute.internal" not ready since 2024-05-08 18:28:11 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 08 18:28:45.757 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready
StaticPodsDegraded: pod/kube-apiserver-ip-10-0-2-14.us-west-1.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0508 18:28:39.091908 1 cmd.go:245] Using insecure, self-signed certificates
StaticPodsDegraded: I0508 18:28:39.092185 1 crypto.go:601] Generating new CA for check-endpoints-signer@1715192919 cert, and key in /tmp/serving-cert-2881158848/serving-signer.crt, /tmp/serving-cert-2881158848/serving-signer.key
StaticPodsDegraded: I0508 18:28:39.496711 1 observer_polling.go:159] Starting file observer
StaticPodsDegraded: W0508 18:28:39.498229 1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-2-14.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused
StaticPodsDegraded: I0508 18:28:39.498365 1 builder.go:299] check-endpoints version 4.16.0-202405080039.p0.g8fae6b5.assembly.stream.el9-8fae6b5-8fae6b51944c1f90e6875f55072a3d85db40e743
StaticPodsDegraded: I0508 18:28:39.498924 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2881158848/tls.crt::/tmp/serving-cert-2881158848/tls.key"
StaticPodsDegraded: F0508 18:28:39.673137 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused
StaticPodsDegraded: (exception: Degraded=False is the happy case)
May 08 18:33:33.185 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-78-48.us-west-1.compute.internal" not ready since 2024-05-08 18:33:17 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1788251339445243904 | junit | 10 days ago | |
May 08 18:38:23.465 - 10s E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-105-164.us-west-1.compute.internal" not ready since 2024-05-08 18:38:00 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 08 18:38:34.374 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready
StaticPodsDegraded: pod/kube-apiserver-ip-10-0-105-164.us-west-1.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0508 18:38:26.542497 1 cmd.go:245] Using insecure, self-signed certificates
StaticPodsDegraded: I0508 18:38:26.542700 1 crypto.go:601] Generating new CA for check-endpoints-signer@1715193506 cert, and key in /tmp/serving-cert-1087420003/serving-signer.crt, /tmp/serving-cert-1087420003/serving-signer.key
StaticPodsDegraded: I0508 18:38:26.974046 1 observer_polling.go:159] Starting file observer
StaticPodsDegraded: W0508 18:38:26.976093 1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-105-164.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused
StaticPodsDegraded: I0508 18:38:26.976345 1 builder.go:299] check-endpoints version 4.16.0-202405080039.p0.g8fae6b5.assembly.stream.el9-8fae6b5-8fae6b51944c1f90e6875f55072a3d85db40e743
StaticPodsDegraded: I0508 18:38:26.976901 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1087420003/tls.crt::/tmp/serving-cert-1087420003/tls.key"
StaticPodsDegraded: F0508 18:38:27.254872 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused
StaticPodsDegraded: (exception: Degraded=False is the happy case)
#1787997218616119296 | junit | 11 days ago | |
I0508 01:33:58.697161 1 observer_polling.go:159] Starting file observer
W0508 01:33:58.705214 1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-108-52.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0508 01:33:58.705340 1 builder.go:299] check-endpoints version 4.16.0-202405041048.p0.g8fae6b5.assembly.stream.el9-8fae6b5-8fae6b51944c1f90e6875f55072a3d85db40e743
... 3 lines not shown
#1787828459129540608 | junit | 11 days ago | |
I0507 14:07:36.766173 1 observer_polling.go:159] Starting file observer
W0507 14:07:36.777148 1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-31-75.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0507 14:07:36.777282 1 builder.go:299] check-endpoints version 4.16.0-202405041048.p0.g8fae6b5.assembly.stream.el9-8fae6b5-8fae6b51944c1f90e6875f55072a3d85db40e743
... 3 lines not shown
#1787678615438102528 | junit | 11 days ago | |
May 07 04:30:04.588 - 36s E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-106-200.ec2.internal" not ready since 2024-05-07 04:28:04 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 07 04:30:40.828 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready
StaticPodsDegraded: pod/kube-apiserver-ip-10-0-106-200.ec2.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0507 04:30:33.290673 1 cmd.go:245] Using insecure, self-signed certificates
StaticPodsDegraded: I0507 04:30:33.290978 1 crypto.go:601] Generating new CA for check-endpoints-signer@1715056233 cert, and key in /tmp/serving-cert-4069730169/serving-signer.crt, /tmp/serving-cert-4069730169/serving-signer.key
StaticPodsDegraded: I0507 04:30:33.549085 1 observer_polling.go:159] Starting file observer
StaticPodsDegraded: W0507 04:30:33.550665 1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-106-200.ec2.internal": dial tcp [::1]:6443: connect: connection refused
StaticPodsDegraded: I0507 04:30:33.550816 1 builder.go:299] check-endpoints version 4.16.0-202405041048.p0.g8fae6b5.assembly.stream.el9-8fae6b5-8fae6b51944c1f90e6875f55072a3d85db40e743
StaticPodsDegraded: I0507 04:30:33.551446 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-4069730169/tls.crt::/tmp/serving-cert-4069730169/tls.key"
StaticPodsDegraded: F0507 04:30:33.818555 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused
StaticPodsDegraded: (exception: Degraded=False is the happy case)
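Two distinct kubelet states show up in the NodeControllerDegraded messages: KubeletNotReady with a CNI error while the node's network plugin comes up, and NodeStatusUnknown once the kubelet stops posting status altogether. Both surface on the node's Ready condition, which the sketch below reads with client-go; the kubeconfig path is a placeholder and the node name is taken from the excerpt above.

package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	node, err := client.CoreV1().Nodes().Get(context.Background(), "ip-10-0-106-200.ec2.internal", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			// Status=False with reason KubeletNotReady, or Status=Unknown
			// with NodeStatusUnknown, matches the intervals above.
			fmt.Printf("Ready=%s reason=%s since=%s\n", c.Status, c.Reason, c.LastTransitionTime)
		}
	}
}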
#1787678615438102528 | junit | 11 days ago | |
I0507 04:30:31.417832 1 observer_polling.go:159] Starting file observer
W0507 04:30:31.429547 1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-106-200.ec2.internal": dial tcp [::1]:6443: connect: connection refused
I0507 04:30:31.429714 1 builder.go:299] check-endpoints version 4.16.0-202405041048.p0.g8fae6b5.assembly.stream.el9-8fae6b5-8fae6b51944c1f90e6875f55072a3d85db40e743
#1787582247583354880 | junit | 12 days ago | |
I0506 22:35:16.061981 1 observer_polling.go:159] Starting file observer
W0506 22:35:16.077440 1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-107-16.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0506 22:35:16.077553 1 builder.go:299] check-endpoints version 4.16.0-202404292211.p0.g1d9a2d0.assembly.stream.el9-1d9a2d0-1d9a2d0d604601640d318ea0cf973d23d711e970
... 3 lines not shown
#1787551776526831616 | junit | 12 days ago | |
May 06 19:40:20.317 - 14s E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-47-27.us-east-2.compute.internal" not ready since 2024-05-06 19:40:00 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 06 19:40:34.431 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-47-27.us-east-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0506 19:40:25.479899 1 cmd.go:245] Using insecure, self-signed certificates
StaticPodsDegraded: I0506 19:40:25.480124 1 crypto.go:601] Generating new CA for check-endpoints-signer@1715024425 cert, and key in /tmp/serving-cert-970059265/serving-signer.crt, /tmp/serving-cert-970059265/serving-signer.key
StaticPodsDegraded: I0506 19:40:25.743048 1 observer_polling.go:159] Starting file observer
StaticPodsDegraded: W0506 19:40:25.744590 1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-47-27.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
StaticPodsDegraded: I0506 19:40:25.744728 1 builder.go:299] check-endpoints version 4.16.0-202405041048.p0.g8fae6b5.assembly.stream.el9-8fae6b5-8fae6b51944c1f90e6875f55072a3d85db40e743
StaticPodsDegraded: I0506 19:40:25.745325 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-970059265/tls.crt::/tmp/serving-cert-970059265/tls.key"
StaticPodsDegraded: F0506 19:40:25.964804 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused
StaticPodsDegraded: 
NodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
May 06 19:46:00.322 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-115-72.us-east-2.compute.internal" not ready since 2024-05-06 19:45:38 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1787551776526831616 | junit | 12 days ago | |
May 06 19:51:17.147 - 32s E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-80-127.us-east-2.compute.internal" not ready since 2024-05-06 19:49:17 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 06 19:51:49.716 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-80-127.us-east-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0506 19:51:40.983207 1 cmd.go:245] Using insecure, self-signed certificates
StaticPodsDegraded: I0506 19:51:40.983617 1 crypto.go:601] Generating new CA for check-endpoints-signer@1715025100 cert, and key in /tmp/serving-cert-694515385/serving-signer.crt, /tmp/serving-cert-694515385/serving-signer.key
StaticPodsDegraded: I0506 19:51:41.396391 1 observer_polling.go:159] Starting file observer
StaticPodsDegraded: W0506 19:51:41.401493 1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-80-127.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
StaticPodsDegraded: I0506 19:51:41.401807 1 builder.go:299] check-endpoints version 4.16.0-202405041048.p0.g8fae6b5.assembly.stream.el9-8fae6b5-8fae6b51944c1f90e6875f55072a3d85db40e743
StaticPodsDegraded: I0506 19:51:41.402435 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-694515385/tls.crt::/tmp/serving-cert-694515385/tls.key"
StaticPodsDegraded: F0506 19:51:41.931509 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused
StaticPodsDegraded: 
NodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
#1787462255198081024 | junit | 12 days ago | |
namespace/openshift-cloud-controller-manager node/ip-10-0-15-216.us-west-2.compute.internal pod/aws-cloud-controller-manager-769df595b9-n9zlf uid/3b40137e-f2de-44da-92f8-0f5fa5e90b6f container/cloud-controller-manager restarted 1 times: cause/Error code/2 reason/ContainerExit s://api-int.ci-op-bmxqijy9-4b138.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.94.19:6443: connect: connection refused
I0506 13:14:32.516776 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/authentication/request/headerrequest/requestheader_controller.go:172
#1787462255198081024 | junit | 12 days ago | |
I0506 13:15:10.410549 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206
E0506 13:18:16.275629 1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-bmxqijy9-4b138.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.19.182:6443: connect: connection refused
I0506 13:18:28.654380 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/authentication/request/headerrequest/requestheader_controller.go:172
#1787347345910796288 | junit | 12 days ago | |
I0506 05:32:28.403482 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1714973548\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1714973547\" (2024-05-06 04:32:27 +0000 UTC to 2025-05-06 04:32:27 +0000 UTC (now=2024-05-06 05:32:28.403467821 +0000 UTC))"
E0506 05:37:41.963716 1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-m4qqpntk-4b138.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.104.194:6443: connect: connection refused
I0506 05:38:11.161828 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/authentication/request/headerrequest/requestheader_controller.go:172
#1787347345910796288 | junit | 12 days ago | |
I0506 06:09:18.979363 1 observer_polling.go:159] Starting file observer
W0506 06:09:18.994440 1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-104-183.ec2.internal": dial tcp [::1]:6443: connect: connection refused
I0506 06:09:18.994572 1 builder.go:299] check-endpoints version 4.16.0-202405041048.p0.g8fae6b5.assembly.stream.el9-8fae6b5-8fae6b51944c1f90e6875f55072a3d85db40e743
#1787262700577886208 | junit | 13 days ago | |
May 06 00:38:42.389 - 22s E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-2-147.us-west-1.compute.internal" not ready since 2024-05-06 00:36:42 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 06 00:39:04.794 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready
StaticPodsDegraded: pod/kube-apiserver-ip-10-0-2-147.us-west-1.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0506 00:38:57.605403 1 cmd.go:245] Using insecure, self-signed certificates
StaticPodsDegraded: I0506 00:38:57.605632 1 crypto.go:601] Generating new CA for check-endpoints-signer@1714955937 cert, and key in /tmp/serving-cert-390941607/serving-signer.crt, /tmp/serving-cert-390941607/serving-signer.key
StaticPodsDegraded: I0506 00:38:57.985120 1 observer_polling.go:159] Starting file observer
StaticPodsDegraded: W0506 00:38:57.986555 1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-2-147.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused
StaticPodsDegraded: I0506 00:38:57.986698 1 builder.go:299] check-endpoints version 4.16.0-202405041048.p0.g8fae6b5.assembly.stream.el9-8fae6b5-8fae6b51944c1f90e6875f55072a3d85db40e743
StaticPodsDegraded: I0506 00:38:57.987333 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-390941607/tls.crt::/tmp/serving-cert-390941607/tls.key"
StaticPodsDegraded: F0506 00:38:58.255545 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused
StaticPodsDegraded: (exception: Degraded=False is the happy case)
May 06 00:44:42.327 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-65-207.us-west-1.compute.internal" not ready since 2024-05-06 00:44:32 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1787262700577886208 | junit | 13 days ago | |
May 06 00:50:46.517 - 12s E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-18-202.us-west-1.compute.internal" not ready since 2024-05-06 00:50:37 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 06 00:50:59.510 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready
StaticPodsDegraded: pod/kube-apiserver-ip-10-0-18-202.us-west-1.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0506 00:50:51.088744 1 cmd.go:245] Using insecure, self-signed certificates
StaticPodsDegraded: I0506 00:50:51.089056 1 crypto.go:601] Generating new CA for check-endpoints-signer@1714956651 cert, and key in /tmp/serving-cert-2261654140/serving-signer.crt, /tmp/serving-cert-2261654140/serving-signer.key
StaticPodsDegraded: I0506 00:50:51.437283 1 observer_polling.go:159] Starting file observer
StaticPodsDegraded: W0506 00:50:51.438601 1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-18-202.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused
StaticPodsDegraded: I0506 00:50:51.438711 1 builder.go:299] check-endpoints version 4.16.0-202405041048.p0.g8fae6b5.assembly.stream.el9-8fae6b5-8fae6b51944c1f90e6875f55072a3d85db40e743
StaticPodsDegraded: I0506 00:50:51.439303 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2261654140/tls.crt::/tmp/serving-cert-2261654140/tls.key"
StaticPodsDegraded: F0506 00:50:51.677344 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused
StaticPodsDegraded: (exception: Degraded=False is the happy case)
#1787177542059298816 | junit | 13 days ago | |
May 05 18:54:49.282 - 14s E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-52-150.us-west-2.compute.internal" not ready since 2024-05-05 18:54:41 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 05 18:55:03.400 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-52-150.us-west-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0505 18:54:54.940599 1 cmd.go:245] Using insecure, self-signed certificates
StaticPodsDegraded: I0505 18:54:54.940812 1 crypto.go:601] Generating new CA for check-endpoints-signer@1714935294 cert, and key in /tmp/serving-cert-2097075674/serving-signer.crt, /tmp/serving-cert-2097075674/serving-signer.key
StaticPodsDegraded: I0505 18:54:55.207567 1 observer_polling.go:159] Starting file observer
StaticPodsDegraded: W0505 18:54:55.209281 1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-52-150.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
StaticPodsDegraded: I0505 18:54:55.209404 1 builder.go:299] check-endpoints version 4.16.0-202405041048.p0.g8fae6b5.assembly.stream.el9-8fae6b5-8fae6b51944c1f90e6875f55072a3d85db40e743
StaticPodsDegraded: I0505 18:54:55.210489 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2097075674/tls.crt::/tmp/serving-cert-2097075674/tls.key"
StaticPodsDegraded: F0505 18:54:55.310013 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused
StaticPodsDegraded: 
NodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
May 05 19:00:26.299 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-69-6.us-west-2.compute.internal" not ready since 2024-05-05 18:58:26 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1787177542059298816 | junit | 13 days ago | |
May 05 19:06:36.158 - 12s E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-127-107.us-west-2.compute.internal" not ready since 2024-05-05 19:06:14 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 05 19:06:48.890 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-127-107.us-west-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0505 19:06:41.361610 1 cmd.go:245] Using insecure, self-signed certificates
StaticPodsDegraded: I0505 19:06:41.361841 1 crypto.go:601] Generating new CA for check-endpoints-signer@1714936001 cert, and key in /tmp/serving-cert-13527111/serving-signer.crt, /tmp/serving-cert-13527111/serving-signer.key
StaticPodsDegraded: I0505 19:06:41.577531 1 observer_polling.go:159] Starting file observer
StaticPodsDegraded: W0505 19:06:41.579104 1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-127-107.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
StaticPodsDegraded: I0505 19:06:41.579214 1 builder.go:299] check-endpoints version 4.16.0-202405041048.p0.g8fae6b5.assembly.stream.el9-8fae6b5-8fae6b51944c1f90e6875f55072a3d85db40e743
StaticPodsDegraded: I0505 19:06:41.579801 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-13527111/tls.crt::/tmp/serving-cert-13527111/tls.key"
StaticPodsDegraded: F0505 19:06:41.812453 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused
StaticPodsDegraded: 
NodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
Found in 82.50% of runs (1650.00% of failures) across 40 total runs and 1 job (5.00% failed) in 2.047s
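For reference, those percentages reconcile as follows: 82.50% of the 40 runs is 33 matching runs, 5.00% of 40 is 2 failed runs, and 33 / 2 = 1650.00%, so the parenthesized figure expresses matches relative to failures. A value over 100% means the symptom appears in many runs that nevertheless pass, consistent with transient connection-refused blips during control-plane rollouts.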