System Load

  • #1

    Hello, I'm not sure what to read from this, but my server has been under unusually high load today. It does not go below 4.0.

    13:14:29 up 23:28, 1 user, load average: 4.36, 4.20, 4.18

    Linux 2.6.32-379.14.1.lve1.1.9.9.el6.x86_64 (xx.xx.xx) 12/08/2012 _x86_64_ (8 CPU)

    01:23:03 PM CPU %usr %nice %sys %iowait %irq %soft %steal %guest %idle
    01:23:03 PM all 2.63 0.39 0.53 1.89 0.00 0.11 0.00 0.00 94.45
    01:23:03 PM 0 8.74 0.39 1.36 9.46 0.01 0.81 0.00 0.00 79.21
    01:23:03 PM 1 2.78 0.83 0.56 0.85 0.00 0.01 0.00 0.00 94.96
    01:23:03 PM 2 1.58 0.70 0.37 0.49 0.00 0.01 0.00 0.00 96.85
    01:23:03 PM 3 1.00 0.53 0.21 0.30 0.00 0.00 0.00 0.00 97.95
    01:23:03 PM 4 3.78 0.23 0.74 2.99 0.00 0.01 0.00 0.00 92.24
    01:23:03 PM 5 1.39 0.19 0.37 0.37 0.00 0.01 0.00 0.00 97.66
    01:23:03 PM 6 1.19 0.17 0.37 0.37 0.00 0.01 0.00 0.00 97.89
    01:23:03 PM 7 0.58 0.10 0.22 0.27 0.00 0.00 0.00 0.00 98.84

    Any ideas?
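
    For reference, my understanding is that the load average counts tasks that are either running or in uninterruptible (usually disk-wait) sleep, so something like the following should list whatever is contributing; a rough check, assuming standard procps tools:

    ps -eo stat,pid,comm | awk '$1 ~ /^[RD]/'    # tasks in running (R) or uninterruptible (D) state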

  • #2
    Noticed the same here on our pre-production server, which should be running almost idle.

    The new kernel has increased disk performance but has also increased the load value.

    Total disk read/write : 0B/s

    top - 15:32:28 up 5:39, 1 user, load average: 1.45, 1.32, 1.26
    Tasks: 775 total, 1 running, 774 sleeping, 0 stopped, 0 zombie
    Cpu(s): 0.2%us, 0.1%sy, 0.0%ni, 99.7%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
    Mem: 264095116k total, 5113052k used, 258982064k free, 219724k buffers
    Swap: 3879924k total, 0k used, 3879924k free, 2606040k cached

    top - 15:32:49 up 5:39, 1 user, load average: 1.55, 1.34, 1.26
    Tasks: 769 total, 1 running, 768 sleeping, 0 stopped, 0 zombie
    Cpu0 : 1.3%us, 0.3%sy, 0.0%ni, 98.4%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
    Cpu1 : 0.3%us, 1.0%sy, 0.0%ni, 98.7%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
    Cpu2 : 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
    Cpu3 : 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
    Cpu4 : 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
    Cpu5 : 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
    Cpu6 : 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
    Cpu7 : 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
    Cpu8 : 3.3%us, 1.0%sy, 0.0%ni, 95.7%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
    Cpu9 : 1.3%us, 0.3%sy, 0.0%ni, 98.3%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
    Cpu10 : 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
    Cpu11 : 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
    Cpu12 : 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
    Cpu13 : 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
    Cpu14 : 4.6%us, 0.0%sy, 0.0%ni, 95.4%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
    Cpu15 : 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
    Cpu16 : 0.3%us, 0.3%sy, 0.0%ni, 99.3%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
    Cpu17 : 2.0%us, 2.3%sy, 0.0%ni, 95.7%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
    Cpu18 : 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
    Cpu19 : 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
    Cpu20 : 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
    Cpu21 : 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
    Cpu22 : 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
    Cpu23 : 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
    Cpu24 : 1.3%us, 0.7%sy, 0.0%ni, 98.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
    Cpu25 : 11.2%us, 3.0%sy, 0.0%ni, 85.9%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
    Cpu26 : 0.0%us, 0.3%sy, 0.0%ni, 99.7%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
    Cpu27 : 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
    Cpu28 : 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
    Cpu29 : 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
    Cpu30 : 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
    Cpu31 : 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
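
    A quick cross-check, assuming nothing unusual about procfs: the fourth field of /proc/loadavg is runnable tasks / total tasks, so it should roughly agree with the averages above:

    cat /proc/loadavg    # e.g. '1.45 1.32 1.26 1/775 ...' - with 1 runnable task out of 775, a load near 1.5 looks inflated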

    • #3
      CloudLinux support confirmed that there is still a bug in the latest beta kernel that should be fixed soon.

      • #4
        Thanks for taking the time to post here. Nice server you have there.

        • #5
          This is getting a bit confusing now. http://cloudlinux.com/blog/clnews/cl...ages-fixes.php

          Which kmod-lve is stable if kmod-lve.x86_64 0:1.1-9.25 is not stable?

          "We removed faulty module from our repository, and installing new kernel will bring stable module."

          Are you suggesting downgrading to 1.1-9.23, which caused the heavy loads?

          • #6
            1.1-9.23 doesn't cause heavy load. It just reports load average values higher than they should be, given the actual load on the server.
            It is just a reporting issue.
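
            If anyone wants to confirm that on their own box, comparing the reported averages against a count of actually runnable plus uninterruptible tasks should show the gap; a crude loop, assuming a standard ps:

            while sleep 5; do
                printf '%s  R+D tasks: %s\n' "$(cat /proc/loadavg)" "$(ps -eo stat= | grep -c '^[RD]')"
            done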

            • #7
              I downgraded as the blog post said and rebooted. Now, ~3 hrs later, my server rebooted itself. I can't see a reason for it in /var/log/messages.
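
              The only other place I know to look for an unexpected reboot is the wtmp record, e.g.:

              last -x reboot shutdown | head    # recent reboot/shutdown entries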

              • #8
                Could you give me the output of:
                rpm -qa|grep kmod-lve

                • #9
                  kmod-lve-2.6.32-379.11.1.lve1.1.9.8.2.el6.x86_64-1.1-9.20.el6.x86_64
                  kmod-lve-1.1-9.23.el6.x86_64
                  kmod-lve-2.6.32-379.5.1.lve1.1.9.6.1.el6.x86_64-1.1-9.8.el6.x86_64
                  kmod-lve-2.6.32-379.14.1.lve1.1.9.9.el6.x86_64-1.1-9.25.el6.x86_64

                  • #10
                    Strange. You still have the old kmod-lve.
                    Try doing:
                    yum downgrade kmod-lve-2.6.32-379.14.1.lve1.1.9.9.el6.x86_64-1.1-9.23.el6.x86_64 --enablerepo=cloudlinux-updates-testing

                    That should downgrade the dependent module as well.

                    • #11
                      Ok, now it looks like this:

                      kmod-lve-2.6.32-379.11.1.lve1.1.9.8.2.el6.x86_64-1.1-9.20.el6.x86_64
                      kmod-lve-2.6.32-379.14.1.lve1.1.9.9.el6.x86_64-1.1-9.23.el6.x86_64
                      kmod-lve-1.1-9.23.el6.x86_64
                      kmod-lve-2.6.32-379.5.1.lve1.1.9.6.1.el6.x86_64-1.1-9.8.el6.x86_64

                      • #12
                        Now it is good. It should say:

                        kmod-lve-2.6.32-379.14.1.lve1.1.9.9.el6.x86_64-1.1-9.23.el6.x86_64

                        Note the 1.1-9.23 in there. Strange that the old command didn't downgrade it. I will need to retest it.

                        • #13
                          Thank you Igor.

                          • #14
                            I have changed the command in the blog to force a downgrade of the dependent module as well:
                            $ yum downgrade kmod-lve-1.1-9.23.el6 kmod-lve-2.6.32-379.14.1.lve1.1.9.9.el6.`uname -i`-1.1-9.23.el6 --enablerepo=cloudlinux-updates-testing

                            It should work for anyone who experienced a similar issue.
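
                            After running it, rechecking the installed packages against the running kernel should confirm the fix, e.g.:

                            rpm -qa | grep kmod-lve    # expect the 1.1-9.23 builds listed
                            uname -r                   # the module's kernel version should match this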
