#OCPBUGS-32517 | issue | 42 hours ago | Missing worker nodes on metal | Verified |
Mon 2024-04-22 05:33:53 UTC localhost.localdomain master-bmh-update.service[12603]: Unpause all baremetal hosts
Mon 2024-04-22 05:33:53 UTC localhost.localdomain master-bmh-update.service[18264]: E0422 05:33:53.630867 18264 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused
Mon 2024-04-22 05:33:53 UTC localhost.localdomain master-bmh-update.service[18264]: E0422 05:33:53.631351 18264 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused
... 4 lines not shown
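As a triage aid, here is a minimal sketch of how the unpause step could wait until the local apiserver stops refusing connections on :6443 before touching the BareMetalHosts. This is not taken from master-bmh-update.sh itself; the /readyz endpoint choice, retry interval, and attempt cap are illustrative assumptions.
{code:bash}
#!/usr/bin/env bash
# Hypothetical wait loop: poll the local kube-apiserver readiness endpoint so the
# BMH unpause step only runs once :6443 accepts connections.
# The 5s interval and 60-attempt cap are illustrative values.
for i in $(seq 1 60); do
  if curl -ksf https://localhost:6443/readyz >/dev/null; then
    echo "kube-apiserver is answering on :6443"
    break
  fi
  echo "attempt ${i}: :6443 still refusing connections, retrying in 5s"
  sleep 5
done
{code}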
#OCPBUGS-27755 | issue | 9 days ago | openshift-kube-apiserver down and is not being restarted | New |
Issue 15736514: openshift-kube-apiserver down and is not being restarted
Description:
Description of problem: {code:none}
SNO cluster; this is the second time the issue has happened. Errors like the following are reported:
~~~
failed to fetch token: Post "https://api-int.<cluster>:6443/api/v1/namespaces/openshift-cluster-storage-operator/serviceaccounts/cluster-storage-operator/token": dial tcp <ip>:6443: connect: connection refused
~~~
Checking the pod logs, the kube-apiserver pod is terminated and is not being restarted:
~~~
2024-01-13T09:41:40.931716166Z I0113 09:41:40.931584 1 main.go:213] Received signal terminated. Forwarding to sub-process "hyperkube".
~~~{code}
Version-Release number of selected component (if applicable): {code:none}
4.13.13 {code}
How reproducible: {code:none}
Not reproducible, but it has happened twice{code}
Steps to Reproduce: {code:none}
1.
2.
3.
{code}
Actual results: {code:none}
The API is not available and kube-apiserver is not being restarted{code}
Expected results: {code:none}
We would expect to see kube-apiserver restart{code}
Additional info: {code:none}
{code}
Status: New
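For a report like this, a hedged sketch of how one might confirm from the affected SNO node whether kubelet ever restarted the kube-apiserver static pod container after the termination above. The node access path and timestamps come from the description; the exact log filters are assumptions.
{code:bash}
# Run from the affected node, e.g. via: oc debug node/<node> -- chroot /host
# List all kube-apiserver containers, including exited ones, to see whether a
# restart was attempted after the "Received signal terminated" message.
crictl ps -a --name kube-apiserver

# Check kubelet logs around the termination time (2024-01-13 09:41 from the pod log)
# for static pod sync or restart errors.
journalctl -u kubelet --since "2024-01-13 09:30" --until "2024-01-13 10:00" | grep -i kube-apiserver
{code}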
#OCPBUGS-30631 | issue | 2 weeks ago | SNO (RT kernel): sosreport crashes the SNO node | CLOSED |
Issue 15865131: SNO (RT kernel): sosreport crashes the SNO node
Description:
Description of problem: {code:none}
sosreport collection causes an SNO XR11 node crash. {code}
Version-Release number of selected component (if applicable): {code:none}
- RHOCP : 4.12.30
- kernel : 4.18.0-372.69.1.rt7.227.el8_6.x86_64
- platform : x86_64{code}
How reproducible: {code:none}
sh-4.4# chrt -rr 99 toolbox
.toolboxrc file detected, overriding defaults...
Checking if there is a newer version of ocpdalmirror.xxx.yyy:8443/rhel8/support-tools-zzz-feb available...
Container 'toolbox-root' already exists. Trying to start...
(To remove the container and start with a fresh toolbox, run: sudo podman rm 'toolbox-root')
toolbox-root
Container started successfully. To exit, type 'exit'.
[root@node /]# which sos
/usr/sbin/sos
logger: socket /dev/log: No such file or directory
[root@node /]# taskset -c 29-31,61-63 sos report --batch -n networking,kernel,processor -k crio.all=on -k crio.logs=on -k podman.all=on -kpodman.logs=on

sosreport (version 4.5.6)

This command will collect diagnostic and configuration information from this Red Hat CoreOS system.

An archive containing the collected information will be generated in /host/var/tmp/sos.c09e4f7z and may be provided to a Red Hat support representative.

Any information provided to Red Hat will be treated in accordance with the published support policies at:

  Distribution Website : https://www.redhat.com/
  Commercial Support : https://access.redhat.com/

The generated archive may contain data considered sensitive and its content should be reviewed by the originating organization before being passed to any third party.

No changes will be made to system configuration.

Setting up archive ...
Setting up plugins ...
[plugin:auditd] Could not open conf file /etc/audit/auditd.conf: [Errno 2] No such file or directory: '/etc/audit/auditd.conf'
caught exception in plugin method "system.setup()"
writing traceback to sos_logs/system-plugin-errors.txt
[plugin:systemd] skipped command 'resolvectl status': required services missing: systemd-resolved.
[plugin:systemd] skipped command 'resolvectl statistics': required services missing: systemd-resolved.
Running plugins. Please wait ...
Starting 1/91 alternatives [Running: alternatives] Starting 2/91 atomichost [Running: alternatives atomichost] Starting 3/91 auditd [Running: alternatives atomichost auditd] Starting 4/91 block [Running: alternatives atomichost auditd block] Starting 5/91 boot [Running: alternatives auditd block boot] Starting 6/91 cgroups [Running: auditd block boot cgroups] Starting 7/91 chrony [Running: auditd block cgroups chrony] Starting 8/91 cifs [Running: auditd block cgroups cifs] Starting 9/91 conntrack [Running: auditd block cgroups conntrack] Starting 10/91 console [Running: block cgroups conntrack console] Starting 11/91 container_log [Running: block cgroups conntrack container_log] Starting 12/91 containers_common [Running: block cgroups conntrack containers_common] Starting 13/91 crio [Running: block cgroups conntrack crio] Starting 14/91 crypto [Running: cgroups conntrack crio crypto] Starting 15/91 date [Running: cgroups conntrack crio date] Starting 16/91 dbus [Running: cgroups conntrack crio dbus] Starting 17/91 devicemapper [Running: cgroups conntrack crio devicemapper] Starting 18/91 devices [Running: cgroups conntrack crio devices] Starting 19/91 dracut [Running: cgroups conntrack crio dracut] Starting 20/91 ebpf [Running: cgroups conntrack crio ebpf] Starting 21/91 etcd [Running: cgroups crio ebpf etcd] Starting 22/91 filesys [Running: cgroups crio ebpf filesys] Starting 23/91 firewall_tables [Running: cgroups crio filesys firewall_tables] Starting 24/91 fwupd [Running: cgroups crio filesys fwupd] Starting 25/91 gluster [Running: cgroups crio filesys gluster] Starting 26/91 grub2 [Running: cgroups crio filesys grub2] Starting 27/91 gssproxy [Running: cgroups crio grub2 gssproxy] Starting 28/91 hardware [Running: cgroups crio grub2 hardware] Starting 29/91 host [Running: cgroups crio hardware host] Starting 30/91 hts [Running: cgroups crio hardware hts] Starting 31/91 i18n [Running: cgroups crio hardware i18n] Starting 32/91 iscsi [Running: cgroups crio hardware iscsi] Starting 33/91 jars [Running: cgroups crio hardware jars] Starting 34/91 kdump [Running: cgroups crio hardware kdump] Starting 35/91 kernelrt [Running: cgroups crio hardware kernelrt] Starting 36/91 keyutils [Running: cgroups crio hardware keyutils] Starting 37/91 krb5 [Running: cgroups crio hardware krb5] Starting 38/91 kvm [Running: cgroups crio hardware kvm] Starting 39/91 ldap [Running: cgroups crio kvm ldap] Starting 40/91 libraries [Running: cgroups crio kvm libraries] Starting 41/91 libvirt [Running: cgroups crio kvm libvirt] Starting 42/91 login [Running: cgroups crio kvm login] Starting 43/91 logrotate [Running: cgroups crio kvm logrotate] Starting 44/91 logs [Running: cgroups crio kvm logs] Starting 45/91 lvm2 [Running: cgroups crio logs lvm2] Starting 46/91 md [Running: cgroups crio logs md] Starting 47/91 memory [Running: cgroups crio logs memory] Starting 48/91 microshift_ovn [Running: cgroups crio logs microshift_ovn] Starting 49/91 multipath [Running: cgroups crio logs multipath] Starting 50/91 networkmanager [Running: cgroups crio logs networkmanager] Removing debug pod ... 
error: unable to delete the debug pod "ransno1ransnomavdallabcom-debug": Delete "https://api.ransno.mavdallab.com:6443/api/v1/namespaces/openshift-debug-mt82m/pods/ransno1ransnomavdallabcom-debug": dial tcp 10.71.136.144:6443: connect: connection refused {code} Steps to Reproduce: {code:none} Launch a debug pod and the procedure above and it crash the node{code} Actual results: {code:none} Node crash{code} Expected results: {code:none} Node does not crash{code} Additional info: {code:none} We have two vmcore on the associated SFDC ticket. This system use a RT kernel. Using an out of tree ice driver 1.13.7 (probably from 22 dec 2023) [ 103.681608] ice: module unloaded [ 103.830535] ice: loading out-of-tree module taints kernel. [ 103.831106] ice: module verification failed: signature and/or required key missing - tainting kernel [ 103.841005] ice: Intel(R) Ethernet Connection E800 Series Linux Driver - version 1.13.7 [ 103.841017] ice: Copyright (C) 2018-2023 Intel Corporation With the following kernel command line Command line: BOOT_IMAGE=(hd0,gpt3)/ostree/rhcos-f2c287e549b45a742b62e4f748bc2faae6ca907d24bb1e029e4985bc01649033/vmlinuz-4.18.0-372.69.1.rt7.227.el8_6.x86_64 ignition.platform.id=metal ostree=/ostree/boot.1/rhcos/f2c287e549b45a742b62e4f748bc2faae6ca907d24bb1e029e4985bc01649033/0 root=UUID=3e8bda80-5cf4-4c46-b139-4c84cb006354 rw rootflags=prjquota boot=UUID=1d0512c2-3f92-42c5-b26d-709ff9350b81 intel_iommu=on iommu=pt firmware_class.path=/var/lib/firmware skew_tick=1 nohz=on rcu_nocbs=3-31,35-63 tuned.non_isolcpus=00000007,00000007 systemd.cpu_affinity=0,1,2,32,33,34 intel_iommu=on iommu=pt isolcpus=managed_irq,3-31,35-63 nohz_full=3-31,35-63 tsc=nowatchdog nosoftlockup nmi_watchdog=0 mce=off rcutree.kthread_prio=11 default_hugepagesz=1G rcupdate.rcu_normal_after_boot=0 efi=runtime module_blacklist=irdma intel_pstate=passive intel_idle.max_cstate=0 crashkernel=256M vmcore1 show issue with the ice driver crash vmcore tmp/vmlinux KERNEL: tmp/vmlinux [TAINTED] DUMPFILE: vmcore [PARTIAL DUMP] CPUS: 64 DATE: Thu Mar 7 17:16:57 CET 2024 UPTIME: 02:44:28 LOAD AVERAGE: 24.97, 25.47, 25.46 TASKS: 5324 NODENAME: aaa.bbb.ccc RELEASE: 4.18.0-372.69.1.rt7.227.el8_6.x86_64 VERSION: #1 SMP PREEMPT_RT Fri Aug 4 00:21:46 EDT 2023 MACHINE: x86_64 (1500 Mhz) MEMORY: 127.3 GB PANIC: "Kernel panic - not syncing:" PID: 693 COMMAND: "khungtaskd" TASK: ff4d1890260d4000 [THREAD_INFO: ff4d1890260d4000] CPU: 0 STATE: TASK_RUNNING (PANIC) crash> ps|grep sos 449071 363440 31 ff4d189005f68000 IN 0.2 506428 314484 sos 451043 363440 63 ff4d188943a9c000 IN 0.2 506428 314484 sos 494099 363440 29 ff4d187f941f4000 UN 0.2 506428 314484 sos 8457.517696] ------------[ cut here ]------------ [ 8457.517698] NETDEV WATCHDOG: ens3f1 (ice): transmit queue 35 timed out [ 8457.517711] WARNING: CPU: 33 PID: 349 at net/sched/sch_generic.c:472 dev_watchdog+0x270/0x300 [ 8457.517718] Modules linked in: binfmt_misc macvlan pci_pf_stub iavf vfio_pci vfio_virqfd vfio_iommu_type1 vfio vhost_net vhost vhost_iotlb tap tun xt_addrtype nf_conntrack_netlink ip6t_REJECT nf_reject_ipv6 ipt_REJECT nf_reject_ipv4 xt_nat xt_CT tcp_diag inet_diag ip6t_MASQUERADE xt_mark ice(OE) xt_conntrack ipt_MASQUERADE nft_counter xt_comment nft_compat veth nft_chain_nat nf_tables overlay bridge 8021q garp mrp stp llc nfnetlink_cttimeout nfnetlink openvswitch nf_conncount nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 ext4 mbcache jbd2 intel_rapl_msr iTCO_wdt iTCO_vendor_support dell_smbios wmi_bmof dell_wmi_descriptor dcdbas kvm_intel kvm irqbypass 
intel_rapl_common i10nm_edac nfit libnvdimm x86_pkg_temp_thermal intel_powerclamp coretemp rapl ipmi_ssif intel_cstate intel_uncore dm_thin_pool pcspkr isst_if_mbox_pci dm_persistent_data dm_bio_prison dm_bufio isst_if_mmio isst_if_common mei_me i2c_i801 joydev mei intel_pmt wmi acpi_ipmi ipmi_si acpi_power_meter sctp ip6_udp_tunnel [ 8457.517770] udp_tunnel ip_tables xfs libcrc32c i40e sd_mod t10_pi sg bnxt_re ib_uverbs ib_core crct10dif_pclmul crc32_pclmul crc32c_intel ghash_clmulni_intel bnxt_en ahci libahci libata dm_multipath dm_mirror dm_region_hash dm_log dm_mod ipmi_devintf ipmi_msghandler fuse [last unloaded: ice] [ 8457.517784] Red Hat flags: eBPF/rawtrace [ 8457.517787] CPU: 33 PID: 349 Comm: ktimers/33 Kdump: loaded Tainted: G OE --------- - - 4.18.0-372.69.1.rt7.227.el8_6.x86_64 #1 [ 8457.517789] Hardware name: Dell Inc. PowerEdge XR11/0P2RNT, BIOS 1.12.1 09/13/2023 [ 8457.517790] RIP: 0010:dev_watchdog+0x270/0x300 [ 8457.517793] Code: 17 00 e9 f0 fe ff ff 4c 89 e7 c6 05 c6 03 34 01 01 e8 14 43 fa ff 89 d9 4c 89 e6 48 c7 c7 90 37 98 9a 48 89 c2 e8 1d be 88 ff <0f> 0b eb ad 65 8b 05 05 13 fb 65 89 c0 48 0f a3 05 1b ab 36 01 73 [ 8457.517795] RSP: 0018:ff7aeb55c73c7d78 EFLAGS: 00010286 [ 8457.517797] RAX: 0000000000000000 RBX: 0000000000000023 RCX: 0000000000000001 [ 8457.517798] RDX: 0000000000000000 RSI: ffffffff9a908557 RDI: 00000000ffffffff [ 8457.517799] RBP: 0000000000000021 R08: ffffffff9ae6b3a0 R09: 00080000000000ff [ 8457.517800] R10: 000000006443a462 R11: 0000000000000036 R12: ff4d187f4d1f4000 [ 8457.517801] R13: ff4d187f4d20df00 R14: ff4d187f4d1f44a0 R15: 0000000000000080 [ 8457.517803] FS: 0000000000000000(0000) GS:ff4d18967a040000(0000) knlGS:0000000000000000 [ 8457.517804] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [ 8457.517805] CR2: 00007fc47c649974 CR3: 00000019a441a005 CR4: 0000000000771ea0 [ 8457.517806] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 [ 8457.517807] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400 [ 8457.517808] PKRU: 55555554 [ 8457.517810] Call Trace: [ 8457.517813] ? test_ti_thread_flag.constprop.50+0x10/0x10 [ 8457.517816] ? test_ti_thread_flag.constprop.50+0x10/0x10 [ 8457.517818] call_timer_fn+0x32/0x1d0 [ 8457.517822] ? test_ti_thread_flag.constprop.50+0x10/0x10 [ 8457.517825] run_timer_softirq+0x1fc/0x640 [ 8457.517828] ? _raw_spin_unlock_irq+0x1d/0x60 [ 8457.517833] ? finish_task_switch+0xea/0x320 [ 8457.517836] ? __switch_to+0x10c/0x4d0 [ 8457.517840] __do_softirq+0xa5/0x33f [ 8457.517844] run_timersd+0x61/0xb0 [ 8457.517848] smpboot_thread_fn+0x1c1/0x2b0 [ 8457.517851] ? smpboot_register_percpu_thread_cpumask+0x140/0x140 [ 8457.517853] kthread+0x151/0x170 [ 8457.517856] ? set_kthread_struct+0x50/0x50 [ 8457.517858] ret_from_fork+0x1f/0x40 [ 8457.517861] ---[ end trace 0000000000000002 ]--- [ 8458.520445] ice 0000:8a:00.1 ens3f1: tx_timeout: VSI_num: 14, Q 35, NTC: 0x99, HW_HEAD: 0x14, NTU: 0x15, INT: 0x0 [ 8458.520451] ice 0000:8a:00.1 ens3f1: tx_timeout recovery level 1, txqueue 35 [ 8506.139246] ice 0000:8a:00.1: PTP reset successful [ 8506.437047] ice 0000:8a:00.1: VSI rebuilt. VSI index 0, type ICE_VSI_PF [ 8506.445482] ice 0000:8a:00.1: VSI rebuilt. 
VSI index 1, type ICE_VSI_CTRL [ 8540.459707] ice 0000:8a:00.1 ens3f1: tx_timeout: VSI_num: 14, Q 35, NTC: 0xe3, HW_HEAD: 0xe7, NTU: 0xe8, INT: 0x0 [ 8540.459714] ice 0000:8a:00.1 ens3f1: tx_timeout recovery level 1, txqueue 35 [ 8563.891356] ice 0000:8a:00.1: PTP reset successful ~~~ Second vmcore on the same node show issue with the SSD drive $ crash vmcore-2 tmp/vmlinux KERNEL: tmp/vmlinux [TAINTED] DUMPFILE: vmcore-2 [PARTIAL DUMP] CPUS: 64 DATE: Thu Mar 7 14:29:31 CET 2024 UPTIME: 1 days, 07:19:52 LOAD AVERAGE: 25.55, 26.42, 28.30 TASKS: 5409 NODENAME: aaa.bbb.ccc RELEASE: 4.18.0-372.69.1.rt7.227.el8_6.x86_64 VERSION: #1 SMP PREEMPT_RT Fri Aug 4 00:21:46 EDT 2023 MACHINE: x86_64 (1500 Mhz) MEMORY: 127.3 GB PANIC: "Kernel panic - not syncing:" PID: 696 COMMAND: "khungtaskd" TASK: ff2b35ed48d30000 [THREAD_INFO: ff2b35ed48d30000] CPU: 34 STATE: TASK_RUNNING (PANIC) crash> ps |grep sos 719784 718369 62 ff2b35ff00830000 IN 0.4 1215636 563388 sos 721740 718369 61 ff2b3605579f8000 IN 0.4 1215636 563388 sos 721742 718369 63 ff2b35fa5eb9c000 IN 0.4 1215636 563388 sos 721744 718369 30 ff2b3603367fc000 IN 0.4 1215636 563388 sos 721746 718369 29 ff2b360557944000 IN 0.4 1215636 563388 sos 743356 718369 62 ff2b36042c8e0000 IN 0.4 1215636 563388 sos 743818 718369 29 ff2b35f6186d0000 IN 0.4 1215636 563388 sos 748518 718369 61 ff2b3602cfb84000 IN 0.4 1215636 563388 sos 748884 718369 62 ff2b360713418000 UN 0.4 1215636 563388 sos crash> dmesg [111871.309883] ata3.00: exception Emask 0x0 SAct 0x3ff8 SErr 0x0 action 0x6 frozen [111871.309889] ata3.00: failed command: WRITE FPDMA QUEUED [111871.309891] ata3.00: cmd 61/40:18:28:47:4b/00:00:00:00:00/40 tag 3 ncq dma 32768 out res 40/00:00:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout) [111871.309895] ata3.00: status: { DRDY } [111871.309897] ata3.00: failed command: WRITE FPDMA QUEUED [111871.309904] ata3.00: cmd 61/40:20:68:47:4b/00:00:00:00:00/40 tag 4 ncq dma 32768 out res 40/00:00:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout) [111871.309908] ata3.00: status: { DRDY } [111871.309909] ata3.00: failed command: WRITE FPDMA QUEUED [111871.309910] ata3.00: cmd 61/40:28:a8:47:4b/00:00:00:00:00/40 tag 5 ncq dma 32768 out res 40/00:ff:00:00:00/00:00:00:00:00/40 Emask 0x4 (timeout) [111871.309913] ata3.00: status: { DRDY } [111871.309914] ata3.00: failed command: WRITE FPDMA QUEUED [111871.309915] ata3.00: cmd 61/40:30:e8:47:4b/00:00:00:00:00/40 tag 6 ncq dma 32768 out res 40/00:01:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout) [111871.309918] ata3.00: status: { DRDY } [111871.309919] ata3.00: failed command: WRITE FPDMA QUEUED [111871.309919] ata3.00: cmd 61/70:38:48:37:2b/00:00:1c:00:00/40 tag 7 ncq dma 57344 out res 40/00:00:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout) [111871.309922] ata3.00: status: { DRDY } [111871.309923] ata3.00: failed command: WRITE FPDMA QUEUED [111871.309924] ata3.00: cmd 61/20:40:78:29:0c/00:00:19:00:00/40 tag 8 ncq dma 16384 out res 40/00:ff:00:00:00/00:00:00:00:00/40 Emask 0x4 (timeout) [111871.309927] ata3.00: status: { DRDY } [111871.309928] ata3.00: failed command: WRITE FPDMA QUEUED [111871.309929] ata3.00: cmd 61/08:48:08:0c:c0/00:00:1c:00:00/40 tag 9 ncq dma 4096 out res 40/00:ff:00:00:00/00:00:00:00:00/40 Emask 0x4 (timeout) [111871.309932] ata3.00: status: { DRDY } [111871.309933] ata3.00: failed command: WRITE FPDMA QUEUED [111871.309934] ata3.00: cmd 61/40:50:28:48:4b/00:00:00:00:00/40 tag 10 ncq dma 32768 out res 40/00:01:00:00:00/00:00:00:00:00/40 Emask 0x4 (timeout) [111871.309937] ata3.00: status: { DRDY } 
[111871.309938] ata3.00: failed command: WRITE FPDMA QUEUED [111871.309939] ata3.00: cmd 61/40:58:68:48:4b/00:00:00:00:00/40 tag 11 ncq dma 32768 out res 40/00:00:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout) [111871.309942] ata3.00: status: { DRDY } [111871.309943] ata3.00: failed command: WRITE FPDMA QUEUED [111871.309944] ata3.00: cmd 61/40:60:a8:48:4b/00:00:00:00:00/40 tag 12 ncq dma 32768 out res 40/00:00:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout) [111871.309946] ata3.00: status: { DRDY } [111871.309947] ata3.00: failed command: WRITE FPDMA QUEUED [111871.309948] ata3.00: cmd 61/40:68:e8:48:4b/00:00:00:00:00/40 tag 13 ncq dma 32768 out res 40/00:01:00:00:00/00:00:00:00:00/40 Emask 0x4 (timeout) [111871.309951] ata3.00: status: { DRDY } [111871.309953] ata3: hard resetting link ... ... ... [112789.787310] INFO: task sos:748884 blocked for more than 600 seconds. [112789.787314] Tainted: G OE --------- - - 4.18.0-372.69.1.rt7.227.el8_6.x86_64 #1 [112789.787316] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [112789.787316] task:sos state:D stack: 0 pid:748884 ppid:718369 flags:0x00084080 [112789.787320] Call Trace: [112789.787323] __schedule+0x37b/0x8e0 [112789.787330] schedule+0x6c/0x120 [112789.787333] schedule_timeout+0x2b7/0x410 [112789.787336] ? enqueue_entity+0x130/0x790 [112789.787340] wait_for_completion+0x84/0xf0 [112789.787343] flush_work+0x120/0x1d0 [112789.787347] ? flush_workqueue_prep_pwqs+0x130/0x130 [112789.787350] schedule_on_each_cpu+0xa7/0xe0 [112789.787353] vmstat_refresh+0x22/0xa0 [112789.787357] proc_sys_call_handler+0x174/0x1d0 [112789.787361] vfs_read+0x91/0x150 [112789.787364] ksys_read+0x52/0xc0 [112789.787366] do_syscall_64+0x87/0x1b0 [112789.787369] entry_SYSCALL_64_after_hwframe+0x61/0xc6 [112789.787372] RIP: 0033:0x7f2dca8c2ab4 [112789.787378] Code: Unable to access opcode bytes at RIP 0x7f2dca8c2a8a. [112789.787378] RSP: 002b:00007f2dbbffc5e0 EFLAGS: 00000246 ORIG_RAX: 0000000000000000 [112789.787380] RAX: ffffffffffffffda RBX: 0000000000000008 RCX: 00007f2dca8c2ab4 [112789.787382] RDX: 0000000000004000 RSI: 00007f2db402b5a0 RDI: 0000000000000008 [112789.787383] RBP: 00007f2db402b5a0 R08: 0000000000000000 R09: 00007f2dcace27bb [112789.787383] R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000004000 [112789.787384] R13: 0000000000000008 R14: 00007f2db402b5a0 R15: 00007f2da4001a90 [112789.787418] NMI backtrace for cpu 34 {code} Status: CLOSED | |||
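Since the reproducer pins sos with taskset, a small sketch of how to check which CPUs are isolated versus reserved for housekeeping before re-running the collection. The commands are standard; the pinned CPU list at the end is only an illustration mirroring the systemd.cpu_affinity value quoted in the report, and should be read from the node's own configuration.
{code:bash}
# Show the isolated CPU set and the relevant kernel command-line options.
cat /sys/devices/system/cpu/isolated
tr ' ' '\n' < /proc/cmdline | grep -E 'isolcpus|systemd.cpu_affinity|nohz_full'

# Example: pin sos to the housekeeping CPUs reported above rather than isolated cores
# (0-2,32-34 here only echoes the systemd.cpu_affinity value from the kernel cmdline).
taskset -c 0-2,32-34 sos report --batch -n networking,kernel,processor
{code}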
#OCPBUGS-33157 | issue | 42 hours ago | IPv6 metal-ipi jobs: master-bmh-update losing access to API | Verified |
Issue 15978085: IPv6 metal-ipi jobs: master-bmh-update losing access to API
Description:
The last 4 IPv6 jobs are failing on the same error: https://prow.ci.openshift.org/job-history/gs/test-platform-results/logs/periodic-ci-openshift-release-master-nightly-4.16-e2e-metal-ipi-ovn-ipv6
master-bmh-update.log loses access to the API when trying to get/update the BMH details: https://prow.ci.openshift.org/view/gs/test-platform-results/logs/periodic-ci-openshift-release-master-nightly-4.16-e2e-metal-ipi-ovn-ipv6/1785492737169035264
{noformat}
May 01 03:32:23 localhost.localdomain master-bmh-update.sh[4663]: Waiting for 3 masters to become provisioned
May 01 03:32:23 localhost.localdomain master-bmh-update.sh[24484]: E0501 03:32:23.531242 24484 memcache.go:265] couldn't get current server API group list: Get "https://api-int.ostest.test.metalkube.org:6443/api?timeout=32s": dial tcp [fd2e:6f44:5dd8:c956::5]:6443: connect: connection refused
May 01 03:32:23 localhost.localdomain master-bmh-update.sh[24484]: E0501 03:32:23.531808 24484 memcache.go:265] couldn't get current server API group list: Get "https://api-int.ostest.test.metalkube.org:6443/api?timeout=32s": dial tcp [fd2e:6f44:5dd8:c956::5]:6443: connect: connection refused
May 01 03:32:23 localhost.localdomain master-bmh-update.sh[24484]: E0501 03:32:23.533281 24484 memcache.go:265] couldn't get current server API group list: Get "https://api-int.ostest.test.metalkube.org:6443/api?timeout=32s": dial tcp [fd2e:6f44:5dd8:c956::5]:6443: connect: connection refused
May 01 03:32:23 localhost.localdomain master-bmh-update.sh[24484]: E0501 03:32:23.533630 24484 memcache.go:265] couldn't get current server API group list: Get "https://api-int.ostest.test.metalkube.org:6443/api?timeout=32s": dial tcp [fd2e:6f44:5dd8:c956::5]:6443: connect: connection refused
May 01 03:32:23 localhost.localdomain master-bmh-update.sh[24484]: E0501 03:32:23.535180 24484 memcache.go:265] couldn't get current server API group list: Get "https://api-int.ostest.test.metalkube.org:6443/api?timeout=32s": dial tcp [fd2e:6f44:5dd8:c956::5]:6443: connect: connection refused
May 01 03:32:23 localhost.localdomain master-bmh-update.sh[24484]: The connection to the server api-int.ostest.test.metalkube.org:6443 was refused - did you specify the right host or port?
{noformat}
Another occurrence of the same error:
{noformat}
May 01 02:49:40 localhost.localdomain master-bmh-update.sh[12448]: E0501 02:49:40.429468 12448 memcache.go:265] couldn't get current server API group list: Get "https://api-int.ostest.test.metalkube.org:6443/api?timeout=32s": dial tcp [fd2e:6f44:5dd8:c956::5]:6443: connect: connection refused
{noformat}
Status: Verified
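When api-int starts refusing connections like this, a few hedged checks can narrow down whether the name still resolves and whether anything is listening behind the VIP. The hostname is copied from the log above; nothing here is part of the CI job itself.
{code:bash}
# From the host running master-bmh-update.sh:
dig +short AAAA api-int.ostest.test.metalkube.org        # does the internal API name still resolve?
curl -gks https://api-int.ostest.test.metalkube.org:6443/readyz ; echo   # is anything answering on the VIP?

# From a control-plane node: is haproxy / kube-apiserver still listening on 6443?
ss -ltnp | grep 6443
{code}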
#OCPBUGS-32375 | issue | 10 days ago | Unsuccessful cluster installation with 4.15 nightlies on s390x using ABI | CLOSED |
Issue 15945005: Unsuccessful cluster installation with 4.15 nightlies on s390x using ABI
Description:
When using the latest s390x release builds from the 4.15 nightly stream for Agent Based Installation of SNO on IBM Z KVM, the installation fails at the end while waiting for cluster operators, even though the DNS and HAProxy configurations are correct; the same setup works with 4.15.x stable release image builds.
Below is the error encountered multiple times when the "release:s390x-latest" image is used while booting the cluster. This image is supplied at boot time through OPENSHIFT_INSTALL_RELEASE_IMAGE_OVERRIDE, while the installer binary is fetched from the latest stable builds here: [https://mirror.openshift.com/pub/openshift-v4/s390x/clients/ocp/latest/], so its version is around 4.15.x.

*release-image:*
{code:java}
registry.build01.ci.openshift.org/ci-op-cdkdqnqn/release@sha256:c6eb4affa5c44d2ad220d7064e92270a30df5f26d221e35664f4d5547a835617
{code}

*PROW CI Build :* [https://prow.ci.openshift.org/view/gs/test-platform-results/pr-logs/pull/openshift_release/47965/rehearse-47965-periodic-ci-openshift-multiarch-master-nightly-4.15-e2e-agent-ibmz-sno/1780162365824700416]

*Error:*
{code:java}
'/root/agent-sno/openshift-install wait-for install-complete --dir /root/agent-sno/ --log-level debug'
Warning: Permanently added '128.168.142.71' (ED25519) to the list of known hosts.
level=debug msg=OpenShift Installer 4.15.8
level=debug msg=Built from commit f4f5d0ee0f7591fd9ddf03ac337c804608102919
level=debug msg=Loading Install Config...
level=debug msg= Loading SSH Key...
level=debug msg= Loading Base Domain...
level=debug msg= Loading Platform...
level=debug msg= Loading Cluster Name...
level=debug msg= Loading Base Domain...
level=debug msg= Loading Platform...
level=debug msg= Loading Pull Secret...
level=debug msg= Loading Platform...
level=debug msg=Loading Agent Config...
level=debug msg=Using Agent Config loaded from state file
level=warning msg=An agent configuration was detected but this command is not the agent wait-for command
level=info msg=Waiting up to 40m0s (until 10:15AM UTC) for the cluster at https://api.agent-sno.abi-ci.com:6443 to initialize...
W0416 09:35:51.793770 1589 reflector.go:535] k8s.io/client-go/tools/watch/informerwatcher.go:146: failed to list *v1.ClusterVersion: Get "https://api.agent-sno.abi-ci.com:6443/apis/config.openshift.io/v1/clusterversions?fieldSelector=metadata.name%3Dversion&limit=500&resourceVersion=0": dial tcp 10.244.64.4:6443: connect: connection refused
E0416 09:35:51.793827 1589 reflector.go:147] k8s.io/client-go/tools/watch/informerwatcher.go:146: Failed to watch *v1.ClusterVersion: failed to list *v1.ClusterVersion: Get "https://api.agent-sno.abi-ci.com:6443/apis/config.openshift.io/v1/clusterversions?fieldSelector=metadata.name%3Dversion&limit=500&resourceVersion=0": dial tcp 10.244.64.4:6443: connect: connection refused
W0416 09:35:53.127917 1589 reflector.go:535] k8s.io/client-go/tools/watch/informerwatcher.go:146: failed to list *v1.ClusterVersion: Get "https://api.agent-sno.abi-ci.com:6443/apis/config.openshift.io/v1/clusterversions?fieldSelector=metadata.name%3Dversion&limit=500&resourceVersion=0": dial tcp 10.244.64.4:6443: connect: connection refused
E0416 09:35:53.127946 1589 reflector.go:147] k8s.io/client-go/tools/watch/informerwatcher.go:146: Failed to watch *v1.ClusterVersion: failed to list *v1.ClusterVersion: Get "https://api.agent-sno.abi-ci.com:6443/apis/config.openshift.io/v1/clusterversions?fieldSelector=metadata.name%3Dversion&limit=500&resourceVersion=0": dial tcp 10.244.64.4:6443: connect: connection refused
... identical reflector warning/error pairs repeating every 30-60 seconds for the whole 40-minute wait not shown ...
W0416 10:14:25.018278 1589 reflector.go:535] k8s.io/client-go/tools/watch/informerwatcher.go:146: failed to list *v1.ClusterVersion: Get "https://api.agent-sno.abi-ci.com:6443/apis/config.openshift.io/v1/clusterversions?fieldSelector=metadata.name%3Dversion&limit=500&resourceVersion=0": dial tcp 10.244.64.4:6443: connect: connection refused
E0416 10:14:25.018404 1589 reflector.go:147] k8s.io/client-go/tools/watch/informerwatcher.go:146: Failed to watch *v1.ClusterVersion: failed to list *v1.ClusterVersion: Get "https://api.agent-sno.abi-ci.com:6443/apis/config.openshift.io/v1/clusterversions?fieldSelector=metadata.name%3Dversion&limit=500&resourceVersion=0": dial tcp 10.244.64.4:6443: connect: connection refused
W0416 10:15:17.227351 1589 reflector.go:535] k8s.io/client-go/tools/watch/informerwatcher.go:146: failed to list *v1.ClusterVersion: Get "https://api.agent-sno.abi-ci.com:6443/apis/config.openshift.io/v1/clusterversions?fieldSelector=metadata.name%3Dversion&limit=500&resourceVersion=0": dial tcp 10.244.64.4:6443: connect:
connection refused E0416 10:15:17.227424 1589 reflector.go:147] k8s.io/client-go/tools/watch/informerwatcher.go:146: Failed to watch *v1.ClusterVersion: failed to list *v1.ClusterVersion: Get "https://api.agent-sno.abi-ci.com:6443/apis/config.openshift.io/v1/clusterversions?fieldSelector=metadata.name%3Dversion&limit=500&resourceVersion=0": dial tcp 10.244.64.4:6443: connect: connection refused level=error msg=Attempted to gather ClusterOperator status after wait failure: listing ClusterOperator objects: Get "https://api.agent-sno.abi-ci.com:6443/apis/config.openshift.io/v1/clusteroperators": dial tcp 10.244.64.4:6443: connect: connection refused level=error msg=Cluster initialization failed because one or more operators are not functioning properly. level=error msg=The cluster should be accessible for troubleshooting as detailed in the documentation linked below, level=error msg=https://docs.openshift.com/container-platform/latest/support/troubleshooting/troubleshooting-installations.html level=error msg=The 'wait-for install-complete' subcommand can then be used to continue the installation level=error msg=failed to initialize the cluster: timed out waiting for the condition {"component":"entrypoint","error":"wrapped process failed: exit status 6","file":"k8s.io/test-infra/prow/entrypoint/run.go:84","func":"k8s.io/test-infra/prow/entrypoint.Options.internalRun","level":"error","msg":"Error executing test process","severity":"error","time":"2024-04-16T10:15:51Z"} error: failed to execute wrapped command: exit status 6 {code} Status: CLOSED | |||
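The closing messages above point at the installer's standard recovery path: probe the API endpoint that keeps refusing connections, and once it answers, resume monitoring instead of restarting the install. A minimal sketch, assuming an agent-based install; the assets directory is a placeholder and the host name is taken from the log above:
{code:none}
# Probe the endpoint the reflector could not reach:
curl -k https://api.agent-sno.abi-ci.com:6443/version

# Resume waiting for cluster initialization (placeholder assets dir):
openshift-install agent wait-for install-complete --dir <assets-dir> --log-level=debug
{code}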
#OCPBUGS-31763 | issue | 10 days ago | gcp install cluster creation fails after 30-40 minutes New |
Issue 15921939: gcp install cluster creation fails after 30-40 minutes Description: Component Readiness has found a potential regression in install should succeed: overall. I see this on various different platforms, but I started digging into GCP failures. No installer log bundle is created, which seriously hinders my ability to dig further. Bootstrap succeeds, and then 30 minutes after waiting for cluster creation, it dies. From [https://prow.ci.openshift.org/view/gs/test-platform-results/logs/periodic-ci-openshift-release-master-nightly-4.16-e2e-gcp-sdn-serial/1775871000018161664] search.ci tells me this affects nearly 10% of jobs on GCP: [https://search.dptools.openshift.org/?search=Attempted+to+gather+ClusterOperator+status+after+installation+failure%3A+listing+ClusterOperator+objects.*connection+refused&maxAge=168h&context=1&type=bug%2Bissue%2Bjunit&name=.*4.16.*gcp.*&excludeName=&maxMatches=5&maxBytes=20971520&groupBy=job] {code:java} time="2024-04-04T13:27:50Z" level=info msg="Waiting up to 40m0s (until 2:07PM UTC) for the cluster at https://api.ci-op-n3pv5pn3-4e5f3.XXXXXXXXXXXXXXXXXXXXXX:6443 to initialize..." time="2024-04-04T14:07:50Z" level=error msg="Attempted to gather ClusterOperator status after installation failure: listing ClusterOperator objects: Get \"https://api.ci-op-n3pv5pn3-4e5f3.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/config.openshift.io/v1/clusteroperators\": dial tcp 35.238.130.20:6443: connect: connection refused" time="2024-04-04T14:07:50Z" level=error msg="Cluster initialization failed because one or more operators are not functioning properly.\nThe cluster should be accessible for troubleshooting as detailed in the documentation linked below,\nhttps://docs.openshift.com/container-platform/latest/support/troubleshooting/troubleshooting-installations.html\nThe 'wait-for install-complete' subcommand can then be used to continue the installation" time="2024-04-04T14:07:50Z" level=error msg="failed to initialize the cluster: timed out waiting for the condition" {code} Probability of significant regression: 99.44% Sample (being evaluated) Release: 4.16 Start Time: 2024-03-29T00:00:00Z End Time: 2024-04-04T23:59:59Z Success Rate: 68.75% Successes: 11 Failures: 5 Flakes: 0 Base (historical) Release: 4.15 Start Time: 2024-02-01T00:00:00Z End Time: 2024-02-28T23:59:59Z Success Rate: 96.30% Successes: 52 Failures: 2 Flakes: 0 View the test details report at [https://sippy.dptools.openshift.org/sippy-ng/component_readiness/test_details?arch=amd64&arch=amd64&baseEndTime=2024-02-28%2023%3A59%3A59&baseRelease=4.15&baseStartTime=2024-02-01%2000%3A00%3A00&capability=Other&component=Installer%20%2F%20openshift-installer&confidence=95&environment=sdn%20upgrade-micro%20amd64%20gcp%20standard&excludeArches=arm64%2Cheterogeneous%2Cppc64le%2Cs390x&excludeClouds=openstack%2Cibmcloud%2Clibvirt%2Covirt%2Cunknown&excludeVariants=hypershift%2Cosd%2Cmicroshift%2Ctechpreview%2Csingle-node%2Cassisted%2Ccompact&groupBy=cloud%2Carch%2Cnetwork&ignoreDisruption=true&ignoreMissing=false&minFail=3&network=sdn&network=sdn&pity=5&platform=gcp&platform=gcp&sampleEndTime=2024-04-04%2023%3A59%3A59&sampleRelease=4.16&sampleStartTime=2024-03-29%2000%3A00%3A00&testId=cluster%20install%3A0cb1bb27e418491b1ffdacab58c5c8c0&testName=install%20should%20succeed%3A%20overall&upgrade=upgrade-micro&upgrade=upgrade-micro&variant=standard&variant=standard] Status: New | |||
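Because no installer log bundle was produced, the usual fallback is to gather one manually and then look at which operators never converged. A rough sketch with placeholder directories, assuming the API eventually answers; nothing here comes from the job artifacts themselves:
{code:none}
# Attempt to collect a bootstrap log bundle by hand:
openshift-install gather bootstrap --dir <install-dir>

# If the API becomes reachable, list the operators blocking initialization:
oc get clusterversion
oc get clusteroperators | grep -v 'True.*False.*False'   # hide healthy operators
{code}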
#OCPBUGS-17183 | issue | 2 days ago | [BUG] Assisted installer fails to create bond with active backup for single node installation New |
Issue 15401516: [BUG] Assisted installer fails to create bond with active backup for single node installation
Description: Description of problem:
{code:none}
The assisted installer always fails to create a bond with active-backup mode using nmstate YAML; the errors are:
~~~
Jul 26 07:11:47 <hostname> bootkube.sh[8366]: Unable to reach API_URL's https endpoint at https://xx.xx.32.40:6443/version
Jul 26 07:11:47 <hostname> bootkube.sh[8366]: Checking validity of <hostname> of type API_INT_URL
Jul 26 07:11:47 <hostname> bootkube.sh[8366]: Successfully resolved API_INT_URL <hostname>
Jul 26 07:11:47 <hostname> bootkube.sh[8366]: Unable to reach API_INT_URL's https endpoint at https://xx.xx.32.40:6443/version
Jul 26 07:12:23 <hostname> bootkube.sh[12960]: Still waiting for the Kubernetes API: Get "https://localhost:6443/readyz": dial tcp [::1]:6443: connect: connection refused
Jul 26 07:15:15 <hostname> bootkube.sh[15706]: The connection to the server <hostname>:6443 was refused - did you specify the right host or port?
Jul 26 07:15:15 <hostname> bootkube.sh[15706]: The connection to the server <hostname>:6443 was refused - did you specify the right host or port?
~~~
Here <hostname> is the actual hostname of the node. sosreport and nmstate YAML file: https://drive.google.com/drive/u/0/folders/19dNzKUPIMmnUls2pT_stuJxr2Dxdi5eb{code}
Version-Release number of selected component (if applicable):
{code:none}
4.12
Dell 16g PowerEdge R660{code}
How reproducible:
{code:none}
Always at the customer side{code}
Steps to Reproduce:
{code:none}
1. Open the Assisted Installer UI (console.redhat.com -> assisted installer).
2. Add the network config below for host1:
-----------
interfaces:
- name: bond99
  type: bond
  state: up
  ipv4:
    address:
    - ip: xx.xx.32.40
      prefix-length: 24
    enabled: true
  link-aggregation:
    mode: active-backup
    options:
      miimon: '140'
    port:
    - eno12399
    - eno12409
dns-resolver:
  config:
    search:
    - xxxx
    server:
    - xx.xx.xx.xx
routes:
  config:
  - destination: 0.0.0.0/0
    metric: 150
    next-hop-address: xx.xx.xx.xx
    next-hop-interface: bond99
    table-id: 254
-----------
3. Enter the MAC addresses of the interfaces in the fields.
4. Generate the ISO and boot the node. The node cannot be pinged or reached over SSH. This happens every time and is reproducible.
5. Because SSH was not working, the root password was reset; the IP address was present on the bond, yet ping/SSH still did not work.
6. After multiple reboots the customer was able to SSH/ping and provided a sosreport, where the errors above appear in the journal logs.
{code}
Actual results:
{code:none}
Fails to install. There appears to be a networking issue.{code}
Expected results:
{code:none}
Able to proceed with the installation without the issues above{code}
Additional info:
{code:none}
- The installation works with round-robin bond mode in 4.12.
- The installation also works with active-backup in 4.10.
- An active-backup bond with 4.12 fails.{code}
Status: New | |||
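Once the node is reachable again over console or SSH, the rendered bond state can be compared with the intended nmstate config. Illustrative checks only; bond99 matches the config above, the rest is generic:
{code:none}
nmstatectl show bond99                          # state nmstate actually rendered for the bond
cat /proc/net/bonding/bond99                    # bonding mode, active slave, per-port link status
ip -br addr show bond99                         # confirm the address landed on the bond
journalctl -u NetworkManager --no-pager | tail -n 100   # activation errors around boot
{code}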
#OCPBUGS-32091 | issue | 4 weeks ago | CAPI-Installer leaks processes during unsuccessful installs MODIFIED |
ERROR Attempted to gather debug logs after installation failure: failed to create SSH client: ssh: handshake failed: ssh: disconnect, reason 2: Too many authentication failures ERROR Attempted to gather ClusterOperator status after installation failure: listing ClusterOperator objects: Get "https://api.gpei-0515.qe.devcluster.openshift.com:6443/apis/config.openshift.io/v1/clusteroperators": dial tcp 3.134.9.157:6443: connect: connection refused ERROR Bootstrap failed to complete: Get "https://api.gpei-0515.qe.devcluster.openshift.com:6443/version": dial tcp 18.222.8.23:6443: connect: connection refused ... 1 lines not shown | |||
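The SSH error here, "Too many authentication failures", usually means the local ssh-agent offered more keys than the server accepts before the right one was tried, which then also blocks the installer's debug-log gathering. A hedged sketch of pulling bootstrap logs with a single pinned key; the key path and host are placeholders:
{code:none}
ssh -o IdentitiesOnly=yes -i ~/.ssh/<installer-key> core@<bootstrap-ip> \
  'journalctl -b -u bootkube.service --no-pager | tail -n 200'
{code}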
periodic-ci-openshift-release-master-nightly-4.13-upgrade-from-stable-4.12-e2e-aws-sdn-upgrade (all) - 17 runs, 29% failed, 320% of failures match = 94% impact | |||
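The headline figures expand as follows: about 5 of the 17 runs failed outright, while the matching symptom appears in about 16 runs, which is how "320% of failures match" and "94% impact" fit together. Reconstructed arithmetic, not data pulled from the jobs themselves:
{code:none}
failed runs:    17 * 0.29 ~ 5
matching runs:  5 * 3.20  ~ 16   (matches also occur in runs that passed or flaked)
impact:         16 / 17   ~ 94%
{code}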
#1791795629408653312 | junit | 13 hours ago | |
May 18 13:11:55.525 E ns/openshift-sdn pod/sdn-controller-mvcz9 node/ip-10-0-172-53.us-west-1.compute.internal uid/13100494-1372-49d2-b8d6-8711cc861f0e container/kube-rbac-proxy reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar) May 18 13:12:02.291 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-172-53.us-west-1.compute.internal node/ip-10-0-172-53.us-west-1.compute.internal uid/14658f0f-66d2-4bd6-b771-c2b9c73a6a1f container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0518 13:12:00.584366 1 cmd.go:216] Using insecure, self-signed certificates\nI0518 13:12:00.590211 1 crypto.go:601] Generating new CA for check-endpoints-signer@1716037920 cert, and key in /tmp/serving-cert-3859821764/serving-signer.crt, /tmp/serving-cert-3859821764/serving-signer.key\nI0518 13:12:00.819385 1 observer_polling.go:159] Starting file observer\nW0518 13:12:00.846852 1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-172-53.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0518 13:12:00.846977 1 builder.go:271] check-endpoints version 4.13.0-202405141537.p0.g4d70179.assembly.stream.el8-4d70179-4d70179045c6a9c1e73f9b7ab22590c7e16efca9\nI0518 13:12:00.860514 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3859821764/tls.crt::/tmp/serving-cert-3859821764/tls.key"\nF0518 13:12:01.177265 1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n May 18 13:12:03.332 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-172-53.us-west-1.compute.internal node/ip-10-0-172-53.us-west-1.compute.internal uid/14658f0f-66d2-4bd6-b771-c2b9c73a6a1f container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0518 13:12:00.584366 1 cmd.go:216] Using insecure, self-signed certificates\nI0518 13:12:00.590211 1 crypto.go:601] Generating new CA for check-endpoints-signer@1716037920 cert, and key in /tmp/serving-cert-3859821764/serving-signer.crt, /tmp/serving-cert-3859821764/serving-signer.key\nI0518 13:12:00.819385 1 observer_polling.go:159] Starting file observer\nW0518 13:12:00.846852 1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-172-53.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0518 13:12:00.846977 1 builder.go:271] check-endpoints version 4.13.0-202405141537.p0.g4d70179.assembly.stream.el8-4d70179-4d70179045c6a9c1e73f9b7ab22590c7e16efca9\nI0518 13:12:00.860514 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3859821764/tls.crt::/tmp/serving-cert-3859821764/tls.key"\nF0518 13:12:01.177265 1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n ... 1 lines not shown | |||
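The check-endpoints container above exits because it cannot read the extension-apiserver-authentication configmap while the local kube-apiserver is itself restarting, so the exit is typically a symptom of the apiserver restart rather than an independent failure. If it persists after the rollout settles, a few illustrative checks (resource names taken from the log, commands not from this job):
{code:none}
oc -n kube-system get configmap extension-apiserver-authentication -o yaml | head -n 20
oc -n openshift-kube-apiserver get pods -o wide
# On the affected node, if the local API stays down:
crictl ps -a | grep kube-apiserver
{code}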
#1791536106190147584 | junit | 30 hours ago | |
May 17 19:43:17.579 E ns/openshift-multus pod/multus-additional-cni-plugins-qjm9x node/ip-10-0-152-232.ec2.internal uid/ee5bd270-3d39-46e8-9fba-2fa77a652d44 container/kube-multus-additional-cni-plugins reason/ContainerExit code/143 cause/Error May 17 19:43:33.730 E ns/openshift-sdn pod/sdn-controller-kplcl node/ip-10-0-152-232.ec2.internal uid/76399ed4-19f4-41dd-8724-15d2aebc5a46 container/sdn-controller reason/ContainerExit code/2 cause/Error I0517 18:42:27.844765 1 server.go:27] Starting HTTP metrics server\nI0517 18:42:27.844980 1 leaderelection.go:248] attempting to acquire leader lease openshift-sdn/openshift-network-controller...\nE0517 18:49:30.825798 1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps openshift-network-controller)\nE0517 18:51:21.387968 1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: Get "https://api-int.ci-op-7binvghc-769db.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/openshift-sdn/leases/openshift-network-controller": dial tcp 10.0.155.11:6443: connect: connection refused\n May 17 19:43:42.549 E ns/openshift-multus pod/multus-additional-cni-plugins-h2dhd node/ip-10-0-254-77.ec2.internal uid/496f4c75-89e3-48c8-83bd-9a85bbcadccd container/kube-multus-additional-cni-plugins reason/ContainerExit code/143 cause/Error | |||
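The sdn-controller exits in these runs look like leader-election timeouts against the internal API endpoint during the control-plane rollout rather than crashes in the controller itself. If they persist outside an upgrade window, the lock object and controller pods can be inspected directly; a sketch with names taken from the log, not commands from this job:
{code:none}
oc -n openshift-sdn get configmap openshift-network-controller -o yaml
oc -n openshift-sdn get lease openshift-network-controller -o yaml
oc -n openshift-sdn get pods -o wide | grep sdn-controller
{code}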
#1791536106190147584 | junit | 30 hours ago | |
May 17 19:43:47.699 E ns/openshift-multus pod/cni-sysctl-allowlist-ds-cgbg4 node/ip-10-0-184-1.ec2.internal uid/2b151378-094f-4e2c-8c9a-325f5ed14269 container/kube-multus-additional-cni-plugins reason/ContainerExit code/137 cause/Error May 17 19:43:50.848 E ns/openshift-sdn pod/sdn-controller-jcxkt node/ip-10-0-158-244.ec2.internal uid/dacb8c6a-acb8-4791-acf3-b2e31e983f93 container/sdn-controller reason/ContainerExit code/2 cause/Error I0517 18:41:41.982313 1 server.go:27] Starting HTTP metrics server\nI0517 18:41:41.982649 1 leaderelection.go:248] attempting to acquire leader lease openshift-sdn/openshift-network-controller...\nE0517 18:49:03.530919 1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps openshift-network-controller)\nE0517 18:50:58.724669 1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps openshift-network-controller)\nE0517 18:51:42.919085 1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: Get "https://api-int.ci-op-7binvghc-769db.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller": dial tcp 10.0.155.11:6443: connect: connection refused\n May 17 19:43:58.589 E ns/openshift-multus pod/multus-admission-controller-79c6c86dcc-gg5mf node/ip-10-0-215-5.ec2.internal uid/361e4c59-86b1-44d4-9f3a-2e372ae44e61 container/multus-admission-controller reason/ContainerExit code/137 cause/Error | |||
#1791445441863225344 | junit | 36 hours ago | |
May 17 13:50:57.227 E ns/openshift-multus pod/multus-additional-cni-plugins-zpk55 node/ip-10-0-134-7.ec2.internal uid/14057eb8-ff93-420d-bfdd-c297faaf6908 container/kube-multus-additional-cni-plugins reason/ContainerExit code/143 cause/Error May 17 13:50:57.908 E ns/openshift-sdn pod/sdn-controller-9kfz5 node/ip-10-0-149-205.ec2.internal uid/59a2bdc9-f6a6-46c6-887c-9578121d45f1 container/sdn-controller reason/ContainerExit code/2 cause/Error I0517 12:55:59.480223 1 server.go:27] Starting HTTP metrics server\nI0517 12:55:59.480316 1 leaderelection.go:248] attempting to acquire leader lease openshift-sdn/openshift-network-controller...\nE0517 12:55:59.483844 1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: Get "https://api-int.ci-op-z59dt1rp-769db.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller": dial tcp 10.0.253.27:6443: connect: connection refused\nE0517 12:56:38.520042 1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: Get "https://api-int.ci-op-z59dt1rp-769db.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller": dial tcp 10.0.156.167:6443: connect: connection refused\nE0517 12:57:27.223651 1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: Get "https://api-int.ci-op-z59dt1rp-769db.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller": dial tcp 10.0.156.167:6443: connect: connection refused\nE0517 12:57:55.366344 1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: Get "https://api-int.ci-op-z59dt1rp-769db.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller": dial tcp 10.0.156.167:6443: connect: connection refused\n May 17 13:50:57.908 E ns/openshift-sdn pod/sdn-controller-9kfz5 node/ip-10-0-149-205.ec2.internal uid/59a2bdc9-f6a6-46c6-887c-9578121d45f1 container/sdn-controller reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar) | |||
#1791445441863225344 | junit | 36 hours ago | |
May 17 14:04:17.026 E ns/openshift-multus pod/network-metrics-daemon-cgknm node/ip-10-0-240-182.ec2.internal uid/c9599a3a-41fa-4a3e-9c66-f5f41a1266a9 container/kube-rbac-proxy reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar) May 17 14:04:18.001 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-240-182.ec2.internal node/ip-10-0-240-182.ec2.internal uid/be8e0a83-ad05-4b69-97d6-4a5b60bb857b container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0517 14:04:16.614206 1 cmd.go:216] Using insecure, self-signed certificates\nI0517 14:04:16.614432 1 crypto.go:601] Generating new CA for check-endpoints-signer@1715954656 cert, and key in /tmp/serving-cert-3559104819/serving-signer.crt, /tmp/serving-cert-3559104819/serving-signer.key\nI0517 14:04:17.040381 1 observer_polling.go:159] Starting file observer\nW0517 14:04:17.119008 1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-240-182.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nI0517 14:04:17.119258 1 builder.go:271] check-endpoints version 4.13.0-202405141537.p0.g4d70179.assembly.stream.el8-4d70179-4d70179045c6a9c1e73f9b7ab22590c7e16efca9\nI0517 14:04:17.172936 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3559104819/tls.crt::/tmp/serving-cert-3559104819/tls.key"\nF0517 14:04:17.584907 1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n May 17 14:04:19.062 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-240-182.ec2.internal node/ip-10-0-240-182.ec2.internal uid/be8e0a83-ad05-4b69-97d6-4a5b60bb857b container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0517 14:04:16.614206 1 cmd.go:216] Using insecure, self-signed certificates\nI0517 14:04:16.614432 1 crypto.go:601] Generating new CA for check-endpoints-signer@1715954656 cert, and key in /tmp/serving-cert-3559104819/serving-signer.crt, /tmp/serving-cert-3559104819/serving-signer.key\nI0517 14:04:17.040381 1 observer_polling.go:159] Starting file observer\nW0517 14:04:17.119008 1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-240-182.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nI0517 14:04:17.119258 1 builder.go:271] check-endpoints version 4.13.0-202405141537.p0.g4d70179.assembly.stream.el8-4d70179-4d70179045c6a9c1e73f9b7ab22590c7e16efca9\nI0517 14:04:17.172936 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3559104819/tls.crt::/tmp/serving-cert-3559104819/tls.key"\nF0517 14:04:17.584907 1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n ... 2 lines not shown | |||
#1791185857131057152 | junit | 2 days ago | |
May 16 20:34:48.621 E ns/openshift-multus pod/multus-additional-cni-plugins-2kk9g node/ip-10-0-203-49.us-west-1.compute.internal uid/1bee29aa-bbbd-4558-a1d2-fcfea1545e1a container/kube-multus-additional-cni-plugins reason/ContainerExit code/143 cause/Error May 16 20:35:04.640 E ns/openshift-sdn pod/sdn-controller-cznzl node/ip-10-0-203-49.us-west-1.compute.internal uid/d33feb0a-4844-4621-9952-1b65d6232928 container/sdn-controller reason/ContainerExit code/2 cause/Error I0516 19:32:59.934090 1 server.go:27] Starting HTTP metrics server\nI0516 19:32:59.934165 1 leaderelection.go:248] attempting to acquire leader lease openshift-sdn/openshift-network-controller...\nE0516 19:39:29.973564 1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps openshift-network-controller)\nE0516 19:40:56.764753 1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: Get "https://api-int.ci-op-4xt69x0h-769db.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller": dial tcp 10.0.148.132:6443: connect: connection refused\n May 16 20:35:20.117 E ns/openshift-sdn pod/sdn-controller-v9r8q node/ip-10-0-185-66.us-west-1.compute.internal uid/a8e14c1f-ce8e-497b-b631-7607fb7c727c container/sdn-controller reason/ContainerExit code/2 cause/Error Allocated netid 4494217 for namespace "e2e-check-for-dns-availability-3719"\nI0516 19:54:04.657973 1 vnids.go:105] Allocated netid 1365650 for namespace "e2e-k8s-service-load-balancer-with-pdb-reused-5189"\nI0516 19:54:04.673765 1 vnids.go:105] Allocated netid 2895312 for namespace "e2e-k8s-sig-apps-deployment-upgrade-3645"\nI0516 19:54:04.682617 1 vnids.go:105] Allocated netid 8885161 for namespace "e2e-image-registry-new-5897"\nI0516 19:54:04.721936 1 vnids.go:105] Allocated netid 10265089 for namespace "e2e-k8s-sig-apps-job-upgrade-991"\nI0516 19:54:04.747125 1 vnids.go:105] Allocated netid 5188961 for namespace "e2e-k8s-sig-apps-replicaset-upgrade-3353"\nI0516 19:54:04.770711 1 vnids.go:105] Allocated netid 2017368 for namespace "e2e-k8s-sig-storage-sig-api-machinery-secret-upgrade-6693"\nI0516 19:54:04.778162 1 vnids.go:105] Allocated netid 4527013 for namespace "e2e-k8s-sig-storage-sig-api-machinery-configmap-upgrade-8052"\nI0516 19:54:04.787696 1 vnids.go:105] Allocated netid 6406886 for namespace "e2e-k8s-sig-apps-daemonset-upgrade-6018"\nI0516 19:54:04.811735 1 vnids.go:105] Allocated netid 3560240 for namespace "e2e-image-pulls-are-fast-4107"\nI0516 19:54:04.834707 1 vnids.go:105] Allocated netid 1588474 for namespace "e2e-check-for-alerts-499"\nI0516 19:54:04.858630 1 vnids.go:105] Allocated netid 3020625 for namespace "e2e-check-for-admin-acks-7867"\nI0516 19:54:05.065763 1 vnids.go:105] Allocated netid 14272395 for namespace "e2e-k8s-service-load-balancer-with-pdb-new-7974"\nI0516 19:54:05.256860 1 vnids.go:105] Allocated netid 13463296 for namespace "e2e-prometheus-metrics-available-after-upgrade-6598"\nI0516 19:54:05.489751 1 vnids.go:105] Allocated netid 11119586 for namespace "e2e-image-registry-reused-8900"\nI0516 19:54:05.650584 1 vnids.go:105] Allocated netid 16751671 for namespace "e2e-check-for-deletes-1976"\nI0516 19:54:09.276417 1 vnids.go:127] Released netid 11661822 for namespace "e2e-test-prometheus-wt47q"\n | |||
#1791185857131057152 | junit | 2 days ago | |
May 16 20:35:23.172 E ns/openshift-multus pod/cni-sysctl-allowlist-ds-pphwz node/ip-10-0-141-124.us-west-1.compute.internal uid/ef9b0b34-76bc-4124-b4e9-7ef8c1932f66 container/kube-multus-additional-cni-plugins reason/ContainerExit code/137 cause/Error May 16 20:35:23.732 E ns/openshift-sdn pod/sdn-controller-gppd9 node/ip-10-0-140-208.us-west-1.compute.internal uid/5d7ce257-48f8-4855-92c3-c2fbee4bcf8f container/sdn-controller reason/ContainerExit code/2 cause/Error I0516 19:32:59.279653 1 server.go:27] Starting HTTP metrics server\nI0516 19:32:59.279735 1 leaderelection.go:248] attempting to acquire leader lease openshift-sdn/openshift-network-controller...\nE0516 19:39:37.761755 1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps openshift-network-controller)\nE0516 19:40:24.953674 1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: Get "https://api-int.ci-op-4xt69x0h-769db.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller": dial tcp 10.0.193.199:6443: connect: connection refused\nE0516 19:40:52.245717 1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: Get "https://api-int.ci-op-4xt69x0h-769db.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller": dial tcp 10.0.148.132:6443: connect: connection refused\n May 16 20:35:24.051 E ns/openshift-multus pod/cni-sysctl-allowlist-ds-6fw5l node/ip-10-0-141-221.us-west-1.compute.internal uid/d70a0e3b-53cd-4cb7-80ad-be9c2fe7258e container/kube-multus-additional-cni-plugins reason/ContainerExit code/137 cause/Error | |||
#1791096308015042560 | junit | 2 days ago | |
May 16 14:40:25.344 E ns/openshift-multus pod/multus-additional-cni-plugins-ptlnn node/ip-10-0-140-130.us-west-2.compute.internal uid/af61208b-4c71-4ffd-b2c7-af7106922b93 container/kube-multus-additional-cni-plugins reason/ContainerExit code/143 cause/Error May 16 14:40:41.338 E ns/openshift-sdn pod/sdn-controller-2wnwp node/ip-10-0-138-1.us-west-2.compute.internal uid/b082833b-783e-4a2f-936f-2e80bcf897f8 container/sdn-controller reason/ContainerExit code/2 cause/Error I0516 13:35:22.763751 1 server.go:27] Starting HTTP metrics server\nI0516 13:35:22.763839 1 leaderelection.go:248] attempting to acquire leader lease openshift-sdn/openshift-network-controller...\nE0516 13:35:22.790865 1 leaderelection.go:334] error initially creating leader election record: leases.coordination.k8s.io "openshift-network-controller" already exists\nE0516 13:42:26.227846 1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: Get "https://api-int.ci-op-4ggcvy1s-769db.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller": dial tcp 10.0.245.237:6443: connect: connection refused\nE0516 13:44:03.434996 1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps openshift-network-controller)\nE0516 13:44:45.974468 1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: Get "https://api-int.ci-op-4ggcvy1s-769db.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller": dial tcp 10.0.149.158:6443: connect: connection refused\n May 16 14:40:45.616 E ns/openshift-sdn pod/sdn-6xkgp node/ip-10-0-253-234.us-west-2.compute.internal uid/48ec545b-6cbb-4d75-9252-24288a841ca6 container/sdn reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar) | |||
#1791096308015042560 | junit | 2 days ago | |
May 16 14:57:00.744 - 999ms E ns/openshift-console route/console disruption/ingress-to-console connection/new reason/DisruptionBegan ns/openshift-console route/console disruption/ingress-to-console connection/new stopped responding to GET requests over new connections: Get "https://console-openshift-console.apps.ci-op-4ggcvy1s-769db.aws-2.ci.openshift.org/healthz": read tcp 10.131.64.250:47426->35.81.212.17:443: read: connection reset by peer May 16 14:57:03.869 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-138-1.us-west-2.compute.internal node/ip-10-0-138-1.us-west-2.compute.internal uid/da6e8517-8983-4df6-95d2-ad334d061413 container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0516 14:57:02.032601 1 cmd.go:216] Using insecure, self-signed certificates\nI0516 14:57:02.043561 1 crypto.go:601] Generating new CA for check-endpoints-signer@1715871422 cert, and key in /tmp/serving-cert-923366319/serving-signer.crt, /tmp/serving-cert-923366319/serving-signer.key\nI0516 14:57:02.542133 1 observer_polling.go:159] Starting file observer\nW0516 14:57:02.553277 1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-138-1.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0516 14:57:02.553406 1 builder.go:271] check-endpoints version 4.13.0-202405141537.p0.g4d70179.assembly.stream.el8-4d70179-4d70179045c6a9c1e73f9b7ab22590c7e16efca9\nI0516 14:57:02.579976 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-923366319/tls.crt::/tmp/serving-cert-923366319/tls.key"\nF0516 14:57:03.162804 1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n May 16 14:57:04.878 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-138-1.us-west-2.compute.internal node/ip-10-0-138-1.us-west-2.compute.internal uid/da6e8517-8983-4df6-95d2-ad334d061413 container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0516 14:57:02.032601 1 cmd.go:216] Using insecure, self-signed certificates\nI0516 14:57:02.043561 1 crypto.go:601] Generating new CA for check-endpoints-signer@1715871422 cert, and key in /tmp/serving-cert-923366319/serving-signer.crt, /tmp/serving-cert-923366319/serving-signer.key\nI0516 14:57:02.542133 1 observer_polling.go:159] Starting file observer\nW0516 14:57:02.553277 1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-138-1.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0516 14:57:02.553406 1 builder.go:271] check-endpoints version 4.13.0-202405141537.p0.g4d70179.assembly.stream.el8-4d70179-4d70179045c6a9c1e73f9b7ab22590c7e16efca9\nI0516 14:57:02.579976 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-923366319/tls.crt::/tmp/serving-cert-923366319/tls.key"\nF0516 14:57:03.162804 1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: 
connect: connection refused\n ... 2 lines not shown | |||
#1791626581211353088 | junit | 24 hours ago | |
May 18 02:00:56.000 - 1s E ns/openshift-image-registry route/test-disruption-new disruption/image-registry connection/new reason/DisruptionBegan ns/openshift-image-registry route/test-disruption-new disruption/image-registry connection/new stopped responding to GET requests over new connections: Get "https://test-disruption-new-openshift-image-registry.apps.ci-op-mr8112q2-769db.aws-2.ci.openshift.org/healthz": read tcp 10.129.18.246:45854->54.235.236.214:443: read: connection reset by peer May 18 02:00:57.220 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-233-194.ec2.internal node/ip-10-0-233-194.ec2.internal uid/f74c058c-294e-4afd-8aa9-582ac16c612a container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0518 02:00:55.328728 1 cmd.go:216] Using insecure, self-signed certificates\nI0518 02:00:55.337606 1 crypto.go:601] Generating new CA for check-endpoints-signer@1715997655 cert, and key in /tmp/serving-cert-3864645229/serving-signer.crt, /tmp/serving-cert-3864645229/serving-signer.key\nI0518 02:00:56.343229 1 observer_polling.go:159] Starting file observer\nW0518 02:00:56.379468 1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-233-194.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nI0518 02:00:56.379595 1 builder.go:271] check-endpoints version 4.13.0-202405141537.p0.g4d70179.assembly.stream.el8-4d70179-4d70179045c6a9c1e73f9b7ab22590c7e16efca9\nI0518 02:00:56.417417 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3864645229/tls.crt::/tmp/serving-cert-3864645229/tls.key"\nF0518 02:00:56.941372 1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n May 18 02:00:58.226 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-233-194.ec2.internal node/ip-10-0-233-194.ec2.internal uid/f74c058c-294e-4afd-8aa9-582ac16c612a container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0518 02:00:55.328728 1 cmd.go:216] Using insecure, self-signed certificates\nI0518 02:00:55.337606 1 crypto.go:601] Generating new CA for check-endpoints-signer@1715997655 cert, and key in /tmp/serving-cert-3864645229/serving-signer.crt, /tmp/serving-cert-3864645229/serving-signer.key\nI0518 02:00:56.343229 1 observer_polling.go:159] Starting file observer\nW0518 02:00:56.379468 1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-233-194.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nI0518 02:00:56.379595 1 builder.go:271] check-endpoints version 4.13.0-202405141537.p0.g4d70179.assembly.stream.el8-4d70179-4d70179045c6a9c1e73f9b7ab22590c7e16efca9\nI0518 02:00:56.417417 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3864645229/tls.crt::/tmp/serving-cert-3864645229/tls.key"\nF0518 02:00:56.941372 1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: 
connection refused\n ... 1 lines not shown | |||
#1790977364742639616 | junit | 2 days ago | |
May 16 06:43:32.582 E ns/openshift-multus pod/multus-additional-cni-plugins-bgcwx node/ip-10-0-145-170.us-east-2.compute.internal uid/939a592a-4711-4d98-9acd-e916377bf32e container/kube-multus-additional-cni-plugins reason/ContainerExit code/143 cause/Error May 16 06:43:48.336 E ns/openshift-sdn pod/sdn-controller-w29c9 node/ip-10-0-176-1.us-east-2.compute.internal uid/9aaf97fc-9404-4635-af6f-96c1632f9025 container/sdn-controller reason/ContainerExit code/2 cause/Error I0516 05:40:28.319925 1 server.go:27] Starting HTTP metrics server\nI0516 05:40:28.320002 1 leaderelection.go:248] attempting to acquire leader lease openshift-sdn/openshift-network-controller...\nE0516 05:49:40.203611 1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: Get "https://api-int.ci-op-hhrw74q2-769db.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller": dial tcp 10.0.129.77:6443: connect: connection refused\nE0516 05:50:32.529981 1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: Get "https://api-int.ci-op-hhrw74q2-769db.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller": dial tcp 10.0.195.160:6443: connect: connection refused\n May 16 06:43:52.300 E ns/openshift-multus pod/multus-additional-cni-plugins-tn8mn node/ip-10-0-166-253.us-east-2.compute.internal uid/664baf93-b7de-4260-afbb-52a29f9b768c container/kube-multus-additional-cni-plugins reason/ContainerExit code/143 cause/Error | |||
#1790977364742639616 | junit | 2 days ago | |
May 16 06:43:59.319 E ns/openshift-sdn pod/sdn-vhmct node/ip-10-0-166-253.us-east-2.compute.internal uid/3d354f0d-7a4e-4c16-ae00-1e2c38ddd726 container/sdn reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar) May 16 06:44:03.372 E ns/openshift-sdn pod/sdn-controller-qtwgn node/ip-10-0-212-228.us-east-2.compute.internal uid/378cb56c-28e5-41e5-9838-7fe4e3025207 container/sdn-controller reason/ContainerExit code/2 cause/Error I0516 05:40:27.933124 1 server.go:27] Starting HTTP metrics server\nI0516 05:40:27.933229 1 leaderelection.go:248] attempting to acquire leader lease openshift-sdn/openshift-network-controller...\nE0516 05:47:46.011427 1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps openshift-network-controller)\nE0516 05:48:34.386712 1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: Get "https://api-int.ci-op-hhrw74q2-769db.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller": dial tcp 10.0.129.77:6443: connect: connection refused\nE0516 05:49:30.595563 1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: Get "https://api-int.ci-op-hhrw74q2-769db.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller": dial tcp 10.0.195.160:6443: connect: connection refused\nE0516 05:50:05.828400 1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: Get "https://api-int.ci-op-hhrw74q2-769db.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller": dial tcp 10.0.195.160:6443: connect: connection refused\nE0516 05:53:54.692544 1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: Get "https://api-int.ci-op-hhrw74q2-769db.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller": dial tcp 10.0.129.77:6443: connect: connection refused\n May 16 06:44:03.749 E ns/openshift-multus pod/cni-sysctl-allowlist-ds-t8zrc node/ip-10-0-145-170.us-east-2.compute.internal uid/a908e4b8-077d-49be-8749-07202c414964 container/kube-multus-additional-cni-plugins reason/ContainerExit code/137 cause/Error | |||
#1790788596756647936 | junit | 3 days ago | |
May 15 18:23:37.180 E ns/openshift-network-diagnostics pod/network-check-target-9pjcb node/ip-10-0-164-31.us-east-2.compute.internal uid/088d2198-55a4-4d8d-9e59-bb39669f1a6e container/network-check-target-container reason/ContainerExit code/2 cause/Error May 15 18:23:40.883 E ns/openshift-sdn pod/sdn-controller-dnfxr node/ip-10-0-141-39.us-east-2.compute.internal uid/de1882f9-e52a-4337-a9a8-af20e42a43b0 container/sdn-controller reason/ContainerExit code/2 cause/Error I0515 17:10:26.695977 1 server.go:27] Starting HTTP metrics server\nI0515 17:10:26.696090 1 leaderelection.go:248] attempting to acquire leader lease openshift-sdn/openshift-network-controller...\nE0515 17:19:09.428484 1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps openshift-network-controller)\nE0515 17:20:47.443230 1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps openshift-network-controller)\nE0515 17:21:42.394770 1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: Get "https://api-int.ci-op-z6h58nck-769db.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller": dial tcp 10.0.239.175:6443: connect: connection refused\n May 15 18:23:44.348 E ns/openshift-sdn pod/sdn-t6km5 node/ip-10-0-227-26.us-east-2.compute.internal uid/2b55cd64-bfc8-4939-8e35-1f23629204c8 container/sdn reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar) ... 2 lines not shown | |||
#1790292729373134848 | junit | 4 days ago | |
May 14 09:40:55.186 E ns/openshift-cluster-csi-drivers pod/aws-ebs-csi-driver-node-6gfw7 node/ip-10-0-130-205.us-west-2.compute.internal uid/91aa5dcb-1522-4b1e-a4c7-fc91c1f767bd container/csi-node-driver-registrar reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar) May 14 09:41:01.968 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-130-205.us-west-2.compute.internal node/ip-10-0-130-205.us-west-2.compute.internal uid/d0d9f6cb-117d-4c1d-9f41-caeb853a05b8 container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0514 09:41:00.530405 1 cmd.go:216] Using insecure, self-signed certificates\nI0514 09:41:00.539878 1 crypto.go:601] Generating new CA for check-endpoints-signer@1715679660 cert, and key in /tmp/serving-cert-3649952109/serving-signer.crt, /tmp/serving-cert-3649952109/serving-signer.key\nI0514 09:41:00.998403 1 observer_polling.go:159] Starting file observer\nW0514 09:41:01.015504 1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-130-205.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0514 09:41:01.015693 1 builder.go:271] check-endpoints version 4.13.0-202405091442.p0.g4d70179.assembly.stream.el8-4d70179-4d70179045c6a9c1e73f9b7ab22590c7e16efca9\nI0514 09:41:01.023641 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3649952109/tls.crt::/tmp/serving-cert-3649952109/tls.key"\nF0514 09:41:01.349084 1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n May 14 09:41:06.042 E ns/openshift-network-diagnostics pod/network-check-target-mwxv5 node/ip-10-0-130-205.us-west-2.compute.internal uid/5f5d9a22-18c3-4ad8-978a-fd73f081d209 container/network-check-target-container reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar) ... 2 lines not shown | |||
#1790384984171745280 | junit | 4 days ago | |
May 14 15:44:42.000 - 1s E disruption/service-load-balancer-with-pdb connection/new reason/DisruptionBegan disruption/service-load-balancer-with-pdb connection/new stopped responding to GET requests over new connections: Get "http://ae22357a7c2e346629f638056d2bc8fe-379737057.us-west-2.elb.amazonaws.com:80/echo?msg=Hello": read tcp 10.129.244.177:50112->54.69.235.105:80: read: connection reset by peer May 14 15:44:45.988 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-164-34.us-west-2.compute.internal node/ip-10-0-164-34.us-west-2.compute.internal uid/559a2ff9-8b01-41c4-841f-466f81218d3c container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0514 15:44:44.461960 1 cmd.go:216] Using insecure, self-signed certificates\nI0514 15:44:44.471968 1 crypto.go:601] Generating new CA for check-endpoints-signer@1715701484 cert, and key in /tmp/serving-cert-3940328288/serving-signer.crt, /tmp/serving-cert-3940328288/serving-signer.key\nI0514 15:44:45.290860 1 observer_polling.go:159] Starting file observer\nW0514 15:44:45.305701 1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-164-34.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0514 15:44:45.305829 1 builder.go:271] check-endpoints version 4.13.0-202405091442.p0.g4d70179.assembly.stream.el8-4d70179-4d70179045c6a9c1e73f9b7ab22590c7e16efca9\nI0514 15:44:45.320192 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3940328288/tls.crt::/tmp/serving-cert-3940328288/tls.key"\nF0514 15:44:45.666306 1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n May 14 15:44:50.185 E ns/e2e-k8s-sig-apps-daemonset-upgrade-3678 pod/ds1-l4htw node/ip-10-0-164-34.us-west-2.compute.internal uid/81204efc-36dd-4311-8c63-a19fb4e31742 container/app reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar) | |||
#1790384984171745280 | junit | 4 days ago | |
May 14 15:44:50.318 E ns/openshift-multus pod/network-metrics-daemon-h6tgj node/ip-10-0-164-34.us-west-2.compute.internal uid/cb6ce915-e4c3-4f57-a6d4-224efb908117 container/network-metrics-daemon reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar) May 14 15:44:50.371 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-164-34.us-west-2.compute.internal node/ip-10-0-164-34.us-west-2.compute.internal uid/559a2ff9-8b01-41c4-841f-466f81218d3c container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0514 15:44:44.461960 1 cmd.go:216] Using insecure, self-signed certificates\nI0514 15:44:44.471968 1 crypto.go:601] Generating new CA for check-endpoints-signer@1715701484 cert, and key in /tmp/serving-cert-3940328288/serving-signer.crt, /tmp/serving-cert-3940328288/serving-signer.key\nI0514 15:44:45.290860 1 observer_polling.go:159] Starting file observer\nW0514 15:44:45.305701 1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-164-34.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0514 15:44:45.305829 1 builder.go:271] check-endpoints version 4.13.0-202405091442.p0.g4d70179.assembly.stream.el8-4d70179-4d70179045c6a9c1e73f9b7ab22590c7e16efca9\nI0514 15:44:45.320192 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3940328288/tls.crt::/tmp/serving-cert-3940328288/tls.key"\nF0514 15:44:45.666306 1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n May 14 15:44:55.271 E ns/openshift-multus pod/multus-additional-cni-plugins-hml7p node/ip-10-0-164-34.us-west-2.compute.internal uid/1530a845-f512-419c-a0c3-ed0da8b98959 container/kube-multus-additional-cni-plugins reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar) | |||
#1790698025480359936 | junit | 3 days ago | |
May 15 12:28:48.221 E ns/openshift-dns pod/dns-default-8lp9w node/ip-10-0-244-226.us-west-1.compute.internal uid/eb469fd7-6b8f-4d9d-b909-3156a940708a container/dns reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar) May 15 12:28:48.240 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-244-226.us-west-1.compute.internal node/ip-10-0-244-226.us-west-1.compute.internal uid/e69fd18a-da13-4232-9ed4-6b5cfd025bbf container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0515 12:28:45.863159 1 cmd.go:216] Using insecure, self-signed certificates\nI0515 12:28:45.878845 1 crypto.go:601] Generating new CA for check-endpoints-signer@1715776125 cert, and key in /tmp/serving-cert-3392994934/serving-signer.crt, /tmp/serving-cert-3392994934/serving-signer.key\nI0515 12:28:46.496632 1 observer_polling.go:159] Starting file observer\nW0515 12:28:46.522567 1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-244-226.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0515 12:28:46.525710 1 builder.go:271] check-endpoints version 4.13.0-202405141537.p0.g4d70179.assembly.stream.el8-4d70179-4d70179045c6a9c1e73f9b7ab22590c7e16efca9\nI0515 12:28:46.551669 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3392994934/tls.crt::/tmp/serving-cert-3392994934/tls.key"\nF0515 12:28:47.239478 1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n May 15 12:28:49.251 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-244-226.us-west-1.compute.internal node/ip-10-0-244-226.us-west-1.compute.internal uid/e69fd18a-da13-4232-9ed4-6b5cfd025bbf container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0515 12:28:45.863159 1 cmd.go:216] Using insecure, self-signed certificates\nI0515 12:28:45.878845 1 crypto.go:601] Generating new CA for check-endpoints-signer@1715776125 cert, and key in /tmp/serving-cert-3392994934/serving-signer.crt, /tmp/serving-cert-3392994934/serving-signer.key\nI0515 12:28:46.496632 1 observer_polling.go:159] Starting file observer\nW0515 12:28:46.522567 1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-244-226.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0515 12:28:46.525710 1 builder.go:271] check-endpoints version 4.13.0-202405141537.p0.g4d70179.assembly.stream.el8-4d70179-4d70179045c6a9c1e73f9b7ab22590c7e16efca9\nI0515 12:28:46.551669 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3392994934/tls.crt::/tmp/serving-cert-3392994934/tls.key"\nF0515 12:28:47.239478 1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n ... 2 lines not shown | |||
#1788928652671455232 | junit | 8 days ago | |
May 10 15:03:25.009 E ns/openshift-multus pod/multus-additional-cni-plugins-4znbs node/ip-10-0-197-1.us-west-2.compute.internal uid/76c08032-8052-412d-9f93-1c5dab742923 container/kube-multus-additional-cni-plugins reason/ContainerExit code/143 cause/Error May 10 15:03:39.602 E ns/openshift-sdn pod/sdn-controller-9b9s2 node/ip-10-0-130-115.us-west-2.compute.internal uid/012130b5-5e04-4b38-8ba0-c1ccc2ad6138 container/sdn-controller reason/ContainerExit code/2 cause/Error 1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: Get "https://api-int.ci-op-gh85ymzf-769db.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller": dial tcp 10.0.231.97:6443: connect: connection refused - error from a previous attempt: read tcp 10.0.130.115:60686->10.0.173.93:6443: read: connection reset by peer\nE0510 14:08:52.503744 1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: Get "https://api-int.ci-op-gh85ymzf-769db.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller": dial tcp 10.0.173.93:6443: connect: connection refused\nE0510 14:09:19.814599 1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: Get "https://api-int.ci-op-gh85ymzf-769db.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller": dial tcp 10.0.173.93:6443: connect: connection refused\nE0510 14:09:57.788784 1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: Get "https://api-int.ci-op-gh85ymzf-769db.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller": dial tcp 10.0.173.93:6443: connect: connection refused\nE0510 14:10:45.498349 1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: Get "https://api-int.ci-op-gh85ymzf-769db.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller": dial tcp 10.0.173.93:6443: connect: connection refused\nE0510 14:17:56.328258 1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: Get "https://api-int.ci-op-gh85ymzf-769db.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller": dial tcp 10.0.231.97:6443: connect: connection refused\n May 10 15:03:48.193 E ns/openshift-network-diagnostics pod/network-check-target-jnjf4 node/ip-10-0-179-222.us-west-2.compute.internal uid/c4962d37-ab34-4943-90d6-85e7165b0e37 container/network-check-target-container reason/ContainerExit code/2 cause/Error | |||
#1788928652671455232 | junit | 8 days ago | |
May 10 15:19:14.291 E ns/openshift-sdn pod/sdn-controller-jr7jg node/ip-10-0-137-115.us-west-2.compute.internal uid/925e7c47-8477-4bd9-923a-ab6910cef3ec container/sdn-controller reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar) May 10 15:19:20.250 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-137-115.us-west-2.compute.internal node/ip-10-0-137-115.us-west-2.compute.internal uid/b2720df0-8acb-4448-8ccd-5f88f2f6dbb3 container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0510 15:19:18.662853 1 cmd.go:216] Using insecure, self-signed certificates\nI0510 15:19:18.666579 1 crypto.go:601] Generating new CA for check-endpoints-signer@1715354358 cert, and key in /tmp/serving-cert-4131837214/serving-signer.crt, /tmp/serving-cert-4131837214/serving-signer.key\nI0510 15:19:19.051775 1 observer_polling.go:159] Starting file observer\nW0510 15:19:19.070924 1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-137-115.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0510 15:19:19.071246 1 builder.go:271] check-endpoints version 4.13.0-202405091442.p0.g4d70179.assembly.stream.el8-4d70179-4d70179045c6a9c1e73f9b7ab22590c7e16efca9\nI0510 15:19:19.082361 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-4131837214/tls.crt::/tmp/serving-cert-4131837214/tls.key"\nF0510 15:19:19.302726 1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n May 10 15:19:23.338 E ns/openshift-network-diagnostics pod/network-check-target-dbxtv node/ip-10-0-137-115.us-west-2.compute.internal uid/fefbac51-32de-4a88-a87b-07f717ec20f4 container/network-check-target-container reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar) | |||
#1788556470334263296 | junit | 9 days ago | |
May 09 14:38:20.396 - 999ms E ns/openshift-authentication route/oauth-openshift disruption/ingress-to-oauth-server connection/new reason/DisruptionBegan ns/openshift-authentication route/oauth-openshift disruption/ingress-to-oauth-server connection/new stopped responding to GET requests over new connections: Get "https://oauth-openshift.apps.ci-op-pcq8thjp-769db.aws-2.ci.openshift.org/healthz": read tcp 10.129.128.157:46382->35.82.165.158:443: read: connection reset by peer May 09 14:38:21.064 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-140-200.us-west-2.compute.internal node/ip-10-0-140-200.us-west-2.compute.internal uid/54fa0c39-feed-4478-8621-51786f92b330 container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0509 14:38:19.762602 1 cmd.go:216] Using insecure, self-signed certificates\nI0509 14:38:19.769924 1 crypto.go:601] Generating new CA for check-endpoints-signer@1715265499 cert, and key in /tmp/serving-cert-4058159586/serving-signer.crt, /tmp/serving-cert-4058159586/serving-signer.key\nI0509 14:38:20.232481 1 observer_polling.go:159] Starting file observer\nW0509 14:38:20.253494 1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-140-200.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0509 14:38:20.253847 1 builder.go:271] check-endpoints version 4.13.0-202405070739.p0.g4d70179.assembly.stream.el8-4d70179-4d70179045c6a9c1e73f9b7ab22590c7e16efca9\nI0509 14:38:20.301619 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-4058159586/tls.crt::/tmp/serving-cert-4058159586/tls.key"\nF0509 14:38:20.614180 1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n May 09 14:38:23.078 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-140-200.us-west-2.compute.internal node/ip-10-0-140-200.us-west-2.compute.internal uid/54fa0c39-feed-4478-8621-51786f92b330 container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0509 14:38:19.762602 1 cmd.go:216] Using insecure, self-signed certificates\nI0509 14:38:19.769924 1 crypto.go:601] Generating new CA for check-endpoints-signer@1715265499 cert, and key in /tmp/serving-cert-4058159586/serving-signer.crt, /tmp/serving-cert-4058159586/serving-signer.key\nI0509 14:38:20.232481 1 observer_polling.go:159] Starting file observer\nW0509 14:38:20.253494 1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-140-200.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0509 14:38:20.253847 1 builder.go:271] check-endpoints version 4.13.0-202405070739.p0.g4d70179.assembly.stream.el8-4d70179-4d70179045c6a9c1e73f9b7ab22590c7e16efca9\nI0509 14:38:20.301619 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-4058159586/tls.crt::/tmp/serving-cert-4058159586/tls.key"\nF0509 14:38:20.614180 1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get 
"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n ... 1 lines not shown | |||
#1788233164573904896 | junit | 10 days ago | |
May 08 17:02:02.569 E ns/openshift-network-diagnostics pod/network-check-target-2ljlf node/ip-10-0-142-198.us-west-1.compute.internal uid/06739b75-2057-402e-9d94-5e4bad6ac563 container/network-check-target-container reason/ContainerExit code/2 cause/Error May 08 17:02:04.974 E ns/openshift-sdn pod/sdn-controller-kk2tk node/ip-10-0-212-20.us-west-1.compute.internal uid/dad12d00-2756-491f-9c55-cf3cc41d5122 container/sdn-controller reason/ContainerExit code/2 cause/Error I0508 15:57:42.363354 1 server.go:27] Starting HTTP metrics server\nI0508 15:57:42.363442 1 leaderelection.go:248] attempting to acquire leader lease openshift-sdn/openshift-network-controller...\nE0508 16:04:59.935316 1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps openshift-network-controller)\nE0508 16:05:29.156127 1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: Get "https://api-int.ci-op-zxqj48nz-769db.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller": dial tcp 10.0.244.79:6443: connect: connection refused\nE0508 16:08:59.825106 1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: Get "https://api-int.ci-op-zxqj48nz-769db.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller": dial tcp 10.0.244.79:6443: connect: connection refused\n May 08 17:02:08.638 E ns/openshift-multus pod/cni-sysctl-allowlist-ds-l26wl node/ip-10-0-151-250.us-west-1.compute.internal uid/71b79648-126f-4d63-b8de-0f6d19189a43 container/kube-multus-additional-cni-plugins reason/ContainerExit code/137 cause/Error | |||
#1788233164573904896 | junit | 10 days ago | |
May 08 17:02:17.649 E ns/openshift-multus pod/multus-admission-controller-76cf6864fb-nsd4d node/ip-10-0-142-198.us-west-1.compute.internal uid/0fc7a8df-da6c-40ae-9d85-c16c747ef6c5 container/multus-admission-controller reason/ContainerExit code/137 cause/Error May 08 17:02:19.076 E ns/openshift-sdn pod/sdn-controller-tmssj node/ip-10-0-158-215.us-west-1.compute.internal uid/58d96b62-c0ce-4e2c-b66b-a033d25458c9 container/sdn-controller reason/ContainerExit code/2 cause/Error I0508 15:57:42.393306 1 server.go:27] Starting HTTP metrics server\nI0508 15:57:42.393396 1 leaderelection.go:248] attempting to acquire leader lease openshift-sdn/openshift-network-controller...\nE0508 16:05:59.443313 1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: Get "https://api-int.ci-op-zxqj48nz-769db.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller": dial tcp 10.0.244.79:6443: connect: connection refused\nE0508 16:06:32.313445 1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: Get "https://api-int.ci-op-zxqj48nz-769db.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller": dial tcp 10.0.160.185:6443: connect: connection refused\nE0508 16:08:58.676706 1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: Get "https://api-int.ci-op-zxqj48nz-769db.aws-2.ci.openshift.org:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller": dial tcp 10.0.160.185:6443: connect: connection refused\n May 08 17:02:29.202 E ns/openshift-sdn pod/sdn-mxqkn node/ip-10-0-212-20.us-west-1.compute.internal uid/7a04a2a5-82ea-4132-8073-1167de26d12a container/sdn reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar) | |||
#1788135193639391232 | junit | 10 days ago | |
May 08 10:44:25.224 E ns/openshift-monitoring pod/node-exporter-g7h4f node/ip-10-0-129-56.us-west-1.compute.internal uid/977a9439-c22b-4dc0-ac3e-449f5b40cf5b container/kube-rbac-proxy reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar) May 08 10:44:33.159 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-129-56.us-west-1.compute.internal node/ip-10-0-129-56.us-west-1.compute.internal uid/649b7054-eff0-450a-90e5-88b6d86dfc77 container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0508 10:44:31.337067 1 cmd.go:216] Using insecure, self-signed certificates\nI0508 10:44:31.344167 1 crypto.go:601] Generating new CA for check-endpoints-signer@1715165071 cert, and key in /tmp/serving-cert-82386919/serving-signer.crt, /tmp/serving-cert-82386919/serving-signer.key\nI0508 10:44:31.957630 1 observer_polling.go:159] Starting file observer\nW0508 10:44:31.972840 1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-129-56.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0508 10:44:31.973015 1 builder.go:271] check-endpoints version 4.13.0-202405070739.p0.g4d70179.assembly.stream.el8-4d70179-4d70179045c6a9c1e73f9b7ab22590c7e16efca9\nI0508 10:44:31.990272 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-82386919/tls.crt::/tmp/serving-cert-82386919/tls.key"\nF0508 10:44:32.288600 1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n May 08 10:44:38.397 E ns/openshift-network-diagnostics pod/network-check-target-fldxt node/ip-10-0-129-56.us-west-1.compute.internal uid/9bca7147-b349-423d-a475-bb24c89a6748 container/network-check-target-container reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar) ... 2 lines not shown | |||
#1788039938395082752 | junit | 10 days ago | |
May 08 04:25:17.485 E ns/openshift-sdn pod/sdn-controller-dz66v node/ip-10-0-141-253.us-east-2.compute.internal uid/2f15b4a5-ec07-4a59-b40d-ca640bb578c3 container/sdn-controller reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar) May 08 04:25:25.139 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-141-253.us-east-2.compute.internal node/ip-10-0-141-253.us-east-2.compute.internal uid/1b39be3a-a706-4e3f-b76f-f642c53df734 container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0508 04:25:22.897851 1 cmd.go:216] Using insecure, self-signed certificates\nI0508 04:25:22.906221 1 crypto.go:601] Generating new CA for check-endpoints-signer@1715142322 cert, and key in /tmp/serving-cert-1930163580/serving-signer.crt, /tmp/serving-cert-1930163580/serving-signer.key\nI0508 04:25:23.561110 1 observer_polling.go:159] Starting file observer\nW0508 04:25:23.572015 1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-141-253.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0508 04:25:23.572208 1 builder.go:271] check-endpoints version 4.13.0-202405070739.p0.g4d70179.assembly.stream.el8-4d70179-4d70179045c6a9c1e73f9b7ab22590c7e16efca9\nI0508 04:25:23.586308 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1930163580/tls.crt::/tmp/serving-cert-1930163580/tls.key"\nF0508 04:25:24.060914 1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n May 08 04:25:30.232 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-141-253.us-east-2.compute.internal node/ip-10-0-141-253.us-east-2.compute.internal uid/1b39be3a-a706-4e3f-b76f-f642c53df734 container/kube-apiserver-check-endpoints reason/ContainerExit code/255 cause/Error W0508 04:25:22.897851 1 cmd.go:216] Using insecure, self-signed certificates\nI0508 04:25:22.906221 1 crypto.go:601] Generating new CA for check-endpoints-signer@1715142322 cert, and key in /tmp/serving-cert-1930163580/serving-signer.crt, /tmp/serving-cert-1930163580/serving-signer.key\nI0508 04:25:23.561110 1 observer_polling.go:159] Starting file observer\nW0508 04:25:23.572015 1 builder.go:239] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-141-253.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nI0508 04:25:23.572208 1 builder.go:271] check-endpoints version 4.13.0-202405070739.p0.g4d70179.assembly.stream.el8-4d70179-4d70179045c6a9c1e73f9b7ab22590c7e16efca9\nI0508 04:25:23.586308 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1930163580/tls.crt::/tmp/serving-cert-1930163580/tls.key"\nF0508 04:25:24.060914 1 cmd.go:141] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\n ... 1 lines not shown |
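Many of the entries above carry the marker "lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)", i.e. the test monitor observed that a container's recorded termination state disappeared from the pod status. Purely as an illustration (not part of the CI tooling that produced the output above), a minimal sketch of reading that field with the Kubernetes Python client follows; the pod and namespace names are copied from one of the excerpts and are only examples.

    # Minimal sketch (assumption: the 'kubernetes' Python client is installed
    # and a kubeconfig for the cluster is configured locally).
    from kubernetes import client, config

    config.load_kube_config()
    v1 = client.CoreV1Api()

    # Pod/namespace taken from one of the excerpts above, purely as an example.
    pod = v1.read_namespaced_pod("sdn-controller-dz66v", "openshift-sdn")
    for cs in pod.status.container_statuses or []:
        # lastState.terminated holds the previous termination record; the monitor
        # flags pods where this record was cleared unexpectedly.
        print(cs.name, cs.last_state.terminated)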
Found in 94.12% of runs (320.00% of failures) across 17 total runs and 1 job (29.41% failed) in 122ms
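The summary figures are internally consistent: with 17 total runs, 94.12% matched corresponds to 16 runs, 29.41% failed corresponds to 5 runs, and 16 matching runs against 5 failed runs gives the 320.00% "of failures" figure. A small sketch, with the 16 and 5 counts inferred from the percentages rather than read from the raw data:

    # Recompute the search summary percentages from (inferred) run counts.
    total_runs = 17
    matching_runs = 16   # runs containing this failure signature (inferred from 94.12%)
    failed_runs = 5      # runs that failed overall (inferred from 29.41%)

    print(f"matched: {matching_runs / total_runs:.2%}")     # 94.12% of runs
    print(f"failed:  {failed_runs / total_runs:.2%}")       # 29.41% failed
    print(f"ratio:   {matching_runs / failed_runs:.2%}")    # 320.00% of failures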