LVE CPU Limits don't appear to work

  • LVE CPU Limits don't appear to work

    I have a CloudLinux server. It is running as a virtual machine on VMware ESXi 5.5. The virtual machine has 8 logical processors; the physical CPU is a Xeon 5570 quad core. I have the LVE packages set to a SPEED of 25% to limit the CPU usage on accounts (a sketch for double-checking the applied limits is at the end of this post). Running kernel: 2.6.32-531.17.1.lve1.2.60.el6.x86_64

    When I benchmark the server from an external server, the server load goes up over 50.00. Is this normal behaviour?

    I would have expected the LVE speed limits to limit the server load, keeping it within a normal range. (Without the benchmark, the server normally runs at approximately a 1.20 load.)

    The benchmark command I am running on the other server is: ./ab -n 3000 -c 100 -v5 http://www.domainoncloudlinuxserver.com/
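
    For reference, here is a minimal way to confirm which limits are actually applied to the account's LVE and to watch the fault counters while the benchmark runs. This is only a sketch and assumes the stock CloudLinux tooling (the lvectl utility and the /proc/lve/list interface):

        # Show the limits configured for each LVE (the LVE ID is the account's UID):
        lvectl list
        # Watch live usage and fault counters while ab is running:
        watch -n1 cat /proc/lve/list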

  • #2
    It depends:
    1. Does domainoncloudlinuxserver.com belong to an end user, or is it the default server domain? If it is the default server domain, it is served by the apache user and is not limited.
    2. Is the page you are hitting, http://www.domainoncloudlinuxserver.com/, a static page? By default we limit only PHP/CGI pages. If it is a static page, you are just DoSing your server by sending too many requests.
    If the domain belongs to an end user and it is serving PHP, contact support -- something is wrong with your settings. (Both points can be checked from the shell, as sketched below.)
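
    For reference, a quick way to check both points from the shell. This is only a sketch -- the URL is taken from the post above, and the header names depend on the server configuration:

        # 1. Does the response look like PHP output or a static file?
        curl -sI http://www.domainoncloudlinuxserver.com/ | grep -iE 'content-type|x-powered-by'
        # 2. While the benchmark runs, which user owns the PHP processes?
        #    Requests limited by LVE run as the account user, not as apache/nobody.
        ps -eo user,comm | grep -i php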

    • #3
      Another thing I have just noticed: it seems that when a website reaches its EP limit of 20 and starts faulting, that is when the server load goes way up over 50.00. If I run fewer than 20 concurrent connections in the Apache Benchmark test, the server load stays stable and does not increase. So it appears as though the entry process faults are what is causing the server load to go out of control (see the comparison runs below).

      1. The domain I am testing against is an end-user domain.
      2. The site I am testing against is a Drupal 7 CMS, which is PHP. (I'm not sure if this makes a difference, but I have made sure to disable caching in the CMS.)
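
      For reference, these are the two runs being compared (the same command as in the first post; the value 15 is just an arbitrary example below the EP limit of 20):

          # Below the EP limit -- load stays near normal:
          ./ab -n 3000 -c 15 http://www.domainoncloudlinuxserver.com/
          # Above the EP limit -- the extra connections fault and the load climbs:
          ./ab -n 3000 -c 100 -v5 http://www.domainoncloudlinuxserver.com/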

      • #4
        Then everything is correct.

        Here is what happens: for ab, if you set 100 concurrent connections, it will establish those connections and re-send requests as fast as it can. So if you run ab -c 100 and each page is answered in 1 ms, you will be processing 100*1000 requests per second -- basically creating a DoS on your server.

        And this is what happens with EP 20 and ab -c 100: the first 20 connections are throttled and take a long time to serve, but the other 80 are returned right away with a 508 error (resource limit reached), and ab reconnects/sends a new request. So your server is handling roughly 80*1000 requests per second... which is enough to create such a load (rough arithmetic is sketched below).

        This is completely normal, and such DoS attacks should be stopped at the network level, before they reach Apache, since you are simply dealing with a flood of HTTP requests (one example of a network-level measure is sketched below).
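
        For reference, a rough back-of-envelope version of the arithmetic above; the ~1 ms response time for a 508 is an assumption used purely for illustration:

            # 80 connections get an almost instant 508 (~1 ms assumed) and re-send
            # immediately, so each sustains roughly 1000 requests per second:
            echo $(( 80 * 1000 ))   # about 80000 requests/second reaching Apache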
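
        As one example of a network-level measure (not a CloudLinux-specific recommendation, and the threshold of 25 connections per source IP is an assumption), concurrent connections per client IP can be capped with the iptables connlimit match:

            # Reject new HTTP connections from any single IP that already has
            # more than 25 connections open to port 80:
            iptables -I INPUT -p tcp --syn --dport 80 -m connlimit --connlimit-above 25 -j REJECT --reject-with tcp-reset

        A rate limiter in front of Apache (firewall, load balancer, or upstream filter) achieves the same goal: the flood is dropped before it ever reaches the web server.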

        • #5
          OK. Thanks a lot for the additional info on how this works, Igor.
