maxentryprocs with mod_fcgid

  • #16
    I got several requests on how to update mod_fcgid to the new version.

    If you have cPanel, run:


    # install-lve -c

    If you are running it with ISPManager, or another control panel that uses RPMs:
    # yum update mod_fcgid

    If you are running it with H-Sphere:
    # yum update hsphere_mod_fcgid

    If you are installing it from source, the updated sources are available here:
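
    After the update, you can confirm which build you ended up with by querying the package (use hsphere_mod_fcgid instead of mod_fcgid on H-Sphere):

    # rpm -q mod_fcgid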

    • #17
      Thanks for making the changes. This works exactly how I wanted.



      Regards

      • #18
        Hello,

        I reverted to CentOS using the migration script and then migrated it to CloudLinux again, just for testing. And since I migrated, mod_fcgid is not working with maxentryprocs.

        I have specified below in ve.cfg.

        <other maxentryprocs="2"></other>

        It works fine for browsing HTML files, downloading zip files, etc. But with mod_fcgid, it doesn't limit concurrent requests to 2. No matter what my maxentryprocs setting is, it only denies the 6th concurrent connection. (There is no mod_fcgid setting of 5 or 6 in httpd.conf either.) Is the value of 5 allowed concurrent mod_fcgid requests hard-coded or something in the new version?

        Below are the error logs:

        Code:
        [Mon Jun 14 05:44:30 2010] [warn] [client 172.16.140.48] Timeout waiting for output from CGI script /var/www/vhost1/index1.php
        
        [Mon Jun 14 05:44:30 2010] [error] [client 172.16.140.48] (70007)The timeout specified has expired: ap_content_length_filter: apr_bucket_read() failed
        
        [Mon Jun 14 05:44:32 2010] [warn] [client 172.16.140.48] Timeout waiting for output from CGI script /var/www/vhost1/index1.php
        
        [Mon Jun 14 05:44:32 2010] [error] [client 172.16.140.48] (70007)The timeout specified has expired: ap_content_length_filter: apr_bucket_read() failed
        
        [Mon Jun 14 05:44:39 2010] [warn] [client 172.16.140.48] Timeout waiting for output from CGI script /var/www/vhost1/index1.php
        
        [Mon Jun 14 05:44:39 2010] [error] [client 172.16.140.48] (70007)The timeout specified has expired: ap_content_length_filter: apr_bucket_read() failed

        • #19
          A maxEntryProcs of 2 is too little for mod_fcgid to work. I think 2 is just too low; I am surprised it is working with 2 at all.

          Regarding the number of concurrent connections -- how do you count them?
          maxEntryProcs only acts on connections that are being served simultaneously. If you are using some kind of statistics package to measure it -- it will not work.
          The ab -c tool will not give you exact results either, unless the script you are hitting takes long enough.
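
          If you do want to measure it with ab, point it at a script that takes a few seconds to finish, something like this (the URL is just a placeholder for your slow test script):

          # ab -n 50 -c 10 http://vhost1.example.com/index1.php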

          Also, how do you reset maxEntryProcs? It is not enough to just change it in ve.cfg, as the file is only read on restart.
          You have to use the lvectl tool to change the limit on the fly.

          • #20
            The maxEntryProcs setting of 2 was just an example I gave. I actually tried setting it to 7, 8, etc. Every time, it only allowed 5 concurrent PHP FastCGI processes, and the 6th concurrent process got a 503 error.

            Also, a maxEntryProcs setting of 2 works fine with HTML pages, zip downloads, etc. And it was working well with mod_fcgid before the re-migration. So I am sure it must be a mod_fcgid problem with the new version of LVE.

            And this is how I count concurrent connections: I have a simple PHP script that runs for 10 seconds, so for those 10 seconds each request keeps one Apache child process busy. That gives me enough time to send 5-10 concurrent requests to the server and check the server-status output, LVE stats, etc. I always see "503: Service Temporarily Unavailable" in the browser for the 6th concurrent request, every time. I used to test it the same way successfully before the re-migration.
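
            In other words, something along these lines while watching server-status in another window (the hostname and script name are just placeholders for my test setup):

            Code:
            # fire 8 requests in parallel at the slow test script and print only the HTTP status codes
            for i in `seq 1 8`; do
                curl -s -o /dev/null -w "%{http_code}\n" http://vhost1.example.com/index1.php &
            done
            wait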

            Also, this is how I change LVE settings: I simply edit the /etc/container/ve.cfg file, change the CPU and/or maxentryprocs settings, and then run the /etc/init.d/lvectl restart command. I am sure this applies the new changes to the system, as I can see the new per-vhost CPU limit in effect in top and in the lveps stats. I also test the new maxentryprocs with html/zip files, which confirms the new setting has taken effect.

            Regards.

            • #21
              Unless you are running a beta version of LVE, when you do lvectl restart you should see:

              Stopping lvectl: not supported in this version

              The way to change LVE limits for all customers without rebooting the server right now is:

              # lvectl --set --ve default --cpu 30 --maxEntryProcs 20

              That will set new default values in memory.

              Then do:
              # for i in `cat /proc/lve/list | cut -f1 | grep -v veid`; do
                    lvectl --set --ve $i --cpu 30 --maxEntryProcs 20
                done

              Though, given that you are testing only one site, you can just do:

              # lvectl --set --ve SITEID --cpu LIMIT --maxEntryProcs LIMIT
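
              To double-check that the new values took effect, you can pull that LVE's row straight out of /proc/lve/list (the column layout differs between LVE versions, so just eyeball the numbers):

              # grep -w SITEID /proc/lve/list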

              • #22
                Thank you very much for the reply.

                Yes, I am seeing "Stopping lvectl: not supported in this version" while restarting lvectl. Also, I tried the commands you showed (to set the default LVE policy and each vhost's LVE policy individually), and it worked this time. I wasn't aware of the new procedure in the upgraded version.

                So the issue was that I was only setting the default policy, not setting the policy individually per vhost. Anyway, it's working fine now.

                PS: There is a trivial thing you might want to rectify. When I set maxentryprocs to N, the actual number of concurrent processes allowed is N + 1. Does it have to be this way?

                Regards,

                • #23
                  Ok, great.
                  Regarding the issue with maxEntryProcs being off by one -- yes, we need to correct that, and it is on the to-do list. Thank you for pointing it out.
                  We also report it incorrectly in lveps.
