Can't cage some accounts

  • Can't cage some accounts

    Hi,
    For some reason there are about 20 accounts we can't enable CageFS on. We also can't enable it on the default user; it says the default user's package is restricted. Any idea what's going on?
    [Screenshot: Capture.png]

    Thanks in advance.

  • #2
    Hello,

    Is this on cPanel/WHM? Is this a regular hosting account or a reseller? Are there any files/users listed in /etc/cagefs/exclude/*?
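    If it helps, a quick way to dump everything in that directory at once (path assuming a standard CloudLinux install):
    Code:
    # print each exclude file's entries, prefixed with the filename
    grep -r . /etc/cagefs/exclude/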



    • #3
      Hi,

      Thanks for the reply. Sorry, I guess that would help. Yes, cPanel/WHM, both reseller and regular accounts. There are 2 files in /etc/cagefs/exclude/*; neither lists the affected user accounts.
      cpaneluserlist:
      Code:
      cpanel
      cpanelphpmyadmin
      cpanelphppgadmin
      cpanelroundcube
      mailman
      cpaneleximfilter
      cpanellogin
      cpaneleximscanner
      cpses
      cpanelconnecttrack

      systemuserlist:
      Code:
      saslauth
      mysql
      polkitd
      firebird
      dovecot
      dovenull
      chrony
      nobody



      • #4
        On the screenshot from the first post, the icon shows that user is a reseller 'instance', so CageFS is not applicable to it. However, above it a user with the same name should exist as a 'user' instance, where you should be able to enable/disable CageFS.
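        The same can be done from the command line with cagefsctl (exampleuser is a placeholder):
        Code:
        # enable CageFS for a single user, then confirm it is listed
        cagefsctl --enable exampleuser
        cagefsctl --list-enabled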



        • #5
          Okay, thanks for explaining that. Any idea why CageFS would become unavailable, get removed daily, and need to be reinstalled?



          • #6
            That sounds really odd. Do you mean the rpm package is uninstalled completely? If so, please check /var/log/cagefs.log and /var/log/yum.log. Also review the entries in /var/log/messages in the following way:

            Code:
            grep -i cagefs /var/log/messages
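            You can also confirm whether the package itself is still installed:
            Code:
            # query the rpm database for the cagefs package
            rpm -q cagefs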



            • #7
              Something weird is going on. There isn't a yum log, and grep -i cagefs /var/log/messages returns:

              [root@host ~]# grep -i cagefs /var/log/messages
              grep: /var/log/messages: No such file or directory

              I re-installed CageFS and enabled it on all accounts; then about 3 hours later CageFS was completely removed again:

              2022.12.16 20:12:46: Parent process: cloudlinux-cli. (PID: 983394): Args: ['--list-enabled']
              2022.12.16 20:20:18: Parent process: cloudlinux-cli. (PID: 986761): Args: ['--list-enabled']
              2022.12.16 20:21:43: Parent process: cloudlinux-cli. (PID: 987331): Args: ['--list-enabled']
              2022.12.16 20:42:01: Parent process: queueprocd (PID: 999463): Args: ['--cagefs-status']
              2022.12.16 20:42:02: Parent process: queueprocd (PID: 999463): Args: ['--list-enabled']
              2022.12.16 23:51:35: Parent process: rpm_preun.sh (PID: 1079646): Args: ['--hook-remove']
              2022.12.16 23:51:36: Parent process: cagefs (PID: 1079667): Args: ['--unmount-skel']
              2022.12.16 23:51:45: Parent process: cagefs (PID: 1079667): Args: ['--remove-unused-mount-points']
              2022.12.16 23:51:52: Parent process: rpm_preun.sh (PID: 1079646): Args: ['--do-not-ask', '--remove-all']

              What the heck could be going on? Has the server been hacked, or could a script in a user's account be doing this?



              • #8
                It does not look like the server has been hacked. You provided a good snippet here, and the main entry point is this:

                2022.12.16 23:51:35: Parent process: rpm_preun.sh (PID: 1079646): Args: ['--hook-remove']
                This means that a process named rpm_preun.sh was launched with the --hook-remove argument and started the uninstallation. The rpm_preun.sh script itself is part of the cagefs package, so most likely something is triggering the uninstall process. Are any special cron jobs running near 23:50?
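                To survey scheduled jobs around that window:
                Code:
                # root's crontab plus the system-wide cron locations
                crontab -l -u root
                cat /etc/crontab
                ls /etc/cron.d/ /etc/cron.hourly/ /etc/cron.daily/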

                I am assuming it's CL8, so check for entries in journalctl:
                Code:
                journalctl | grep cagefs
                Or this way:
                Code:
                journalctl --since "2022-12-16 23:30" --until "2022-12-16 23:55"
                Also review the main log, /var/log/dnf.rpm.log.
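                The dnf transaction history can also show exactly which transaction removed the package (replace <ID> with the number from the list):
                Code:
                # find the transaction that touched cagefs, then inspect it
                dnf history list cagefs
                dnf history info <ID>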



                • #9
                  Hello,

                  I don't see any cron jobs running around that time. Take a look at the log output below. Can I just remove rpm_preun.sh?

                  Thanks!


                  2022-12-17T01:18:49-0500 INFO --- logging initialized ---
                  2022-12-17T01:45:16-0500 INFO --- logging initialized ---
                  2022-12-17T02:21:48-0500 INFO --- logging initialized ---
                  2022-12-17T03:23:29-0500 INFO --- logging initialized ---
                  2022-12-17T03:50:27-0500 INFO --- logging initialized ---
                  2022-12-17T03:51:38-0500 INFO --- logging initialized ---
                  2022-12-17T04:00:05-0500 INFO --- logging initialized ---
                  2022-12-17T05:10:12-0500 INFO --- logging initialized ---
                  2022-12-17T05:11:25-0500 INFO --- logging initialized ---
                  2022-12-17T06:24:08-0500 INFO --- logging initialized ---
                  2022-12-17T07:36:20-0500 INFO --- logging initialized ---
                  2022-12-17T08:55:26-0500 INFO --- logging initialized ---
                  2022-12-17T10:24:05-0500 INFO --- logging initialized ---
                  2022-12-17T11:46:21-0500 INFO --- logging initialized ---
                  2022-12-17T12:09:02-0500 INFO --- logging initialized ---
                  2022-12-17T13:18:29-0500 INFO --- logging initialized ---
                  2022-12-17T14:43:04-0500 INFO --- logging initialized ---
                  2022-12-17T16:11:29-0500 INFO --- logging initialized ---
                  2022-12-17T16:24:05-0500 INFO --- logging initialized ---
                  2022-12-17T17:48:44-0500 INFO --- logging initialized ---
                  2022-12-17T18:48:42-0500 INFO --- logging initialized ---
                  2022-12-17T19:24:05-0500 INFO --- logging initialized ---
                  2022-12-17T20:04:17-0500 INFO --- logging initialized ---
                  2022-12-17T21:21:46-0500 INFO --- logging initialized ---
                  2022-12-17T21:22:31-0500 INFO --- logging initialized ---
                  2022-12-17T23:16:01-0500 INFO --- logging initialized ---
                  2022-12-17T23:16:04-0500 INFO --- logging initialized ---
                  2022-12-17T23:31:02-0500 INFO --- logging initialized ---
                  2022-12-17T23:31:07-0500 INFO --- logging initialized ---
                  2022-12-17T23:31:10-0500 INFO --- logging initialized ---
                  2022-12-17T23:31:15-0500 INFO --- logging initialized ---
                  2022-12-17T23:31:18-0500 INFO --- logging initialized ---
                  2022-12-17T23:31:21-0500 INFO --- logging initialized ---
                  2022-12-17T23:31:25-0500 INFO --- logging initialized ---
                  2022-12-17T23:31:28-0500 INFO --- logging initialized ---
                  2022-12-17T23:31:31-0500 INFO --- logging initialized ---
                  2022-12-17T23:31:34-0500 INFO --- logging initialized ---
                  2022-12-17T23:31:37-0500 INFO --- logging initialized ---
                  2022-12-17T23:31:40-0500 INFO --- logging initialized ---
                  2022-12-17T23:31:43-0500 INFO --- logging initialized ---
                  2022-12-17T23:31:46-0500 INFO --- logging initialized ---
                  2022-12-17T23:31:49-0500 INFO --- logging initialized ---
                  2022-12-17T23:31:53-0500 INFO --- logging initialized ---
                  2022-12-17T23:31:57-0500 INFO --- logging initialized ---
                  2022-12-17T23:31:59-0500 INFO --- logging initialized ---
                  2022-12-17T23:36:00-0500 INFO --- logging initialized ---
                  2022-12-17T23:49:24-0500 INFO --- logging initialized ---
                  2022-12-17T23:50:00-0500 SUBDEBUG Upgrade: accelerate-wp-1.1-3.el8.cloudlinux.x86_64
                  2022-12-17T23:50:02-0500 SUBDEBUG Upgrade: lvemanager-7.8.1-1.el8.cloudlinux.noarch
                  2022-12-17T23:51:31-0500 SUBDEBUG Upgraded: lvemanager-7.6.5-1.el8.cloudlinux.noarch
                  2022-12-17T23:51:35-0500 SUBDEBUG Upgraded: accelerate-wp-1.0-7.el8.cloudlinux.x86_64
                  2022-12-17T23:51:36-0500 SUBDEBUG Erase: cagefs-7.4.14-1.el8.cloudlinux.x86_64
                  2022-12-17T23:52:32-0500 INFO error reading information on service cagefs: No such file or directory
                  error reading information on service proxyexecd: No such file or directory
                  WARNING: If you continue, CageFS will be disabled, and all related files and directories will be removed. Do you want to continue (yes/no)? yes
                  Disabling CageFS [DONE]
                  Unmounting skeleton [DONE]
                  Unmounting users [DONE]
                  Removing /var/cagefs [DONE]
                  Removing /usr/share/cagefs-skeleton [DONE]
                  cpanel-delete-cagefs: [##### ] (16%)
                  cpanel-delete-cagefs: [########## ] (33%)
                  cpanel-delete-cagefs: [############### ] (50%)
                  cpanel-delete-cagefs: [#################### ] (66%)
                  cpanel-delete-cagefs: [######################### ] (83%)
                  cpanel-delete-cagefs: [##############################] (100%)
                  Rebuilding Apache's suexec...
                  Rebuilding suphp...
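                  One thing I notice: lvemanager and accelerate-wp get upgraded at 23:50, right before cagefs is erased at 23:51, which looks like the nightly cPanel update (upcp) run. If so, its log should mention it (path assuming a standard cPanel layout; if the last symlink isn't there, use the newest file in that directory):
                  Code:
                  # search the most recent cPanel update log for cagefs activity
                  grep -i cagefs /var/cpanel/updatelogs/last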




                  • #10
                    Removing the script will not help; it's better to find the root cause. What is your WHM version? They recently fixed a related bug in v108.0.2:
                    Fixed case CPANEL-41773: Do not pass --allowerasing, stop requiring glibc-static, and enable powertools and epel on C8+ systems.
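                    You can confirm the installed version from the shell (standard cPanel path):
                    Code:
                    # print the cPanel/WHM version
                    /usr/local/cpanel/cpanel -V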



                    • #11
                      Still having this issue. Any suggestions to resolve it?
                      Thanks



                      • #12
                        Hmm, I am out of ideas. It's time to open a support ticket with us so we can investigate directly on the server.

