CloudLinuxOS

This topic has been answered.

  • CloudLinuxOS

    Hello,
    I installed CloudLinuxOS, but during installation this error occurred:
    WARNING! Your /etc/fstab configuration is not explicit about device mounting on /boot/efi.
    This is not a CL software issue but this potentially can cause problems with booting the CL kernel.
    It is strongly recommended to contact your hosting provider before proceeding.
    I don't know what to do.
    Where can I find a solution to this problem and the correct way to install?
    Thanks to all
  • Answer selected by bogdan.sh at 07-04-2024, 08:04 AM (the full answer is quoted in post #8 below).


    • #2
      Hello,

      There could be multiple cases to this issue, and here is the first step to start with: https://cloudlinux.zendesk.com/hc/en...-raid-1-server

      Overall, you would need a good system administrator to help, since this is more a matter of general administration.



      • #3
        Hello,
        Thank you and best regards.
        I followed that explanation, but unfortunately the server has stopped working. Is there another method or any step I can take now?
        Thanks to all



        • #4
          This issue requires some in-depth investigation, and it is better done by your local admins. Another option is to create a support ticket with us, and we will try to help.

          We can try doing the same here on the forum, but it will take longer. If you are interested, please provide the output of:

          Code:
          cat /etc/fstab
          blkid
          cat /proc/mounts
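
          If it is easier, the three outputs can be collected into one file and pasted here in a single go. This is just a convenience sketch, not part of the original request; it assumes a root shell and the file name is arbitrary:
          Code:
          # Gather the requested diagnostics into one file for posting
          { cat /etc/fstab; echo; blkid; echo; cat /proc/mounts; } > /root/cl-diag.txt
          cat /root/cl-diag.txt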



          • #5
            Thank you and best regards.
            Could you give me a link to open a ticket?
            Below are the outputs.


            cat /etc/fstab


            UUID=2bdb5535-a80d-4a53-a234-0f74b20f33b7 / xfs defaults,uquota 0 1
            UUID=39c21e91-fc9e-4702-b77e-0a7a3c580eb8 /boot xfs defaults 0 0
            LABEL=EFI_SYSPART /boot/efi vfat defaults 0 1
            UUID=5471188d-e667-4ec0-87c3-ad5d9142f80d /tmp ext4 defaults 0 0
            UUID=c0b77a97-0f36-40fc-9b24-fd1f326f42b3 swap swap defaults 0 0
            UUID=cd6529f9-a0b2-4000-bf7e-81c7c510b663 swap swap defaults 0 0



            [root@server ~]# cat /proc/mounts
            sysfs /sys sysfs rw,nosuid,nodev,noexec,relatime 0 0
            proc /proc proc rw,nosuid,nodev,noexec,relatime 0 0
            devtmpfs /dev devtmpfs rw,nosuid,size=131886064k,nr_inodes=32971516,mode=755 0 0
            securityfs /sys/kernel/security securityfs rw,nosuid,nodev,noexec,relatime 0 0
            tmpfs /dev/shm tmpfs rw,nosuid,nodev 0 0
            devpts /dev/pts devpts rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000 0 0
            tmpfs /run tmpfs rw,nosuid,nodev,mode=755 0 0
            tmpfs /sys/fs/cgroup tmpfs ro,nosuid,nodev,noexec,mode=755 0 0
            cgroup /sys/fs/cgroup/systemd cgroup rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd 0 0
            pstore /sys/fs/pstore pstore rw,nosuid,nodev,noexec,relatime 0 0
            efivarfs /sys/firmware/efi/efivars efivarfs rw,nosuid,nodev,noexec,relatime 0 0
            bpf /sys/fs/bpf bpf rw,nosuid,nodev,noexec,relatime,mode=700 0 0
            cgroup /sys/fs/cgroup/hugetlb cgroup rw,nosuid,nodev,noexec,relatime,hugetlb 0 0
            cgroup /sys/fs/cgroup/freezer cgroup rw,nosuid,nodev,noexec,relatime,freezer 0 0
            cgroup /sys/fs/cgroup/net_cls,net_prio cgroup rw,nosuid,nodev,noexec,relatime,net_cls,net_prio 0 0
            cgroup /sys/fs/cgroup/blkio cgroup rw,nosuid,nodev,noexec,relatime,blkio 0 0
            cgroup /sys/fs/cgroup/cpuset cgroup rw,nosuid,nodev,noexec,relatime,cpuset 0 0
            cgroup /sys/fs/cgroup/cpu,cpuacct cgroup rw,nosuid,nodev,noexec,relatime,cpu,cpuacct 0 0
            cgroup /sys/fs/cgroup/pids cgroup rw,nosuid,nodev,noexec,relatime,pids 0 0
            cgroup /sys/fs/cgroup/rdma cgroup rw,nosuid,nodev,noexec,relatime,rdma 0 0
            cgroup /sys/fs/cgroup/memory cgroup rw,nosuid,nodev,noexec,relatime,memory 0 0
            cgroup /sys/fs/cgroup/perf_event cgroup rw,nosuid,nodev,noexec,relatime,perf_event 0 0
            cgroup /sys/fs/cgroup/devices cgroup rw,nosuid,nodev,noexec,relatime,devices 0 0
            none /sys/kernel/tracing tracefs rw,relatime 0 0
            configfs /sys/kernel/config configfs rw,relatime 0 0
            /dev/md3 / xfs rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,usrquota 0 0
            systemd-1 /proc/sys/fs/binfmt_misc autofs rw,relatime,fd=36,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=54453 0 0
            mqueue /dev/mqueue mqueue rw,relatime 0 0
            debugfs /sys/kernel/debug debugfs rw,relatime 0 0
            hugetlbfs /dev/hugepages hugetlbfs rw,relatime,pagesize=2M 0 0
            /dev/md2 /boot xfs rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota 0 0
            /dev/md5 /tmp ext4 rw,nosuid,noexec,relatime 0 0
            /dev/sdb1 /boot/efi vfat rw,relatime,fmask=0022,dmask=0022,codepage=437,iocharset=ascii,shortname=mixed,errors=remount-ro 0 0
            sunrpc /var/lib/nfs/rpc_pipefs rpc_pipefs rw,relatime 0 0
            /dev/md5 /var/tmp ext4 rw,nosuid,noexec,relatime 0 0



            • #6
              Please also show the output of:
              Code:
              blkid



              • #7
                [root@server ~]# blkid
                /dev/sda1: SEC_TYPE="msdos" LABEL="EFI_SYSPART" UUID="1A9D-AF27" BLOCK_SIZE="512" TYPE="vfat" PARTLABEL="primary" PARTUUID="0c86efa8-4778-40db-8c7f-2ed7f0656fdb"
                /dev/sda2: UUID="79fd2074-e2cc-9ab3-9063-c7482eb34cc9" UUID_SUB="d0a313ba-2b2f-5fa9-14d8-c1e90f780cd7" LABEL="md2" TYPE="linux_raid_member" PARTLABEL="primary" PARTUUID="5cc10211-caa4-48b7-b845-0c038936dafb"
                /dev/sda3: UUID="c26dc21e-17e2-2226-a99b-e5511949172a" UUID_SUB="923234f3-bc46-5746-863c-d6b3f60c9144" LABEL="md3" TYPE="linux_raid_member" PARTLABEL="primary" PARTUUID="3f6c0c53-d431-415d-8679-039f98ced95c"
                /dev/sda4: LABEL="swap-sda4" UUID="c0b77a97-0f36-40fc-9b24-fd1f326f42b3" TYPE="swap" PARTLABEL="primary" PARTUUID="7f73b97d-ebb3-479a-a8ea-128a56448fe0"
                /dev/sda5: UUID="7b99d53e-e3ee-a32e-40c1-27059f147914" UUID_SUB="6ff54017-5bf0-42ab-3aec-29e371b632fc" LABEL="md5" TYPE="linux_raid_member" PARTLABEL="logical" PARTUUID="2ebf0eb3-e5ad-49f9-9b81-91d97e88096b"
                /dev/sda6: BLOCK_SIZE="2048" UUID="2024-07-01-10-15-37-00" LABEL="config-2" TYPE="iso9660" PARTLABEL="logical" PARTUUID="1227e559-9c1e-4bed-b081-ec640492f0c9"
                /dev/sdb1: SEC_TYPE="msdos" LABEL="EFI_SYSPART" UUID="1ABC-8005" BLOCK_SIZE="512" TYPE="vfat" PARTLABEL="primary" PARTUUID="c6a1d3b4-2605-41f5-a11f-404c056efa95"
                /dev/sdb2: UUID="79fd2074-e2cc-9ab3-9063-c7482eb34cc9" UUID_SUB="049b7d37-c5c3-cede-67ed-baf7a1883cf6" LABEL="md2" TYPE="linux_raid_member" PARTLABEL="primary" PARTUUID="1408abdb-af9c-401c-938d-ca83af796b3d"
                /dev/sdb3: UUID="c26dc21e-17e2-2226-a99b-e5511949172a" UUID_SUB="9e0cae4a-d554-f404-e057-12f68e08dbaf" LABEL="md3" TYPE="linux_raid_member" PARTLABEL="primary" PARTUUID="7a475e88-4d3f-4a44-82ce-cf287bb1e23b"
                /dev/sdb4: LABEL="swap-sdb4" UUID="cd6529f9-a0b2-4000-bf7e-81c7c510b663" TYPE="swap" PARTLABEL="primary" PARTUUID="9eb00c55-136c-4e26-89b0-db4bb0052f48"
                /dev/sdb5: UUID="7b99d53e-e3ee-a32e-40c1-27059f147914" UUID_SUB="e3bdd8f0-b366-3869-b62e-e59353fdc89a" LABEL="md5" TYPE="linux_raid_member" PARTLABEL="logical" PARTUUID="3120e222-83dd-40bd-8aae-ec0a8a0f8a7a"
                /dev/md2: LABEL="boot" UUID="39c21e91-fc9e-4702-b77e-0a7a3c580eb8" BLOCK_SIZE="4096" TYPE="xfs"
                /dev/md3: LABEL="root" UUID="2bdb5535-a80d-4a53-a234-0f74b20f33b7" BLOCK_SIZE="4096" TYPE="xfs"
                /dev/md5: LABEL="tmp" UUID="5471188d-e667-4ec0-87c3-ad5d9142f80d" BLOCK_SIZE="4096" TYPE="ext4"



                • #8
                  Is it a production server or not yet? Just wanted to be sure before we move forward.

                  From the information provided, and in line with the article you followed, the situation is the following:

                  According to /etc/fstab, /boot/efi is looked up by the label EFI_SYSPART, while you have two partitions with that label, /dev/sda1 and /dev/sdb1:

                  Code:
                  /dev/sda1: SEC_TYPE="msdos" LABEL="EFI_SYSPART" UUID="1A9D-AF27" BLOCK_SIZE="512" TYPE="vfat" PARTLABEL="primary" PARTUUID="0c86efa8-4778-40db-8c7f-2ed7f0656fdb"
                  /dev/sdb1: SEC_TYPE="msdos" LABEL="EFI_SYSPART" UUID="1ABC-8005" BLOCK_SIZE="512" TYPE="vfat" PARTLABEL="primary" PARTUUID="c6a1d3b4-2605-41f5-a11f-404c056efa95"

                  The partition actually mounted after the system boots is /dev/sdb1:

                  Code:
                  /dev/sdb1 /boot/efi vfat rw,relatime,fmask=0022,dmask=0022,codepage=437,iocharset=ascii,shortname=mixed,errors=remount-ro 0 0
                  Meanwhile, sda1 is the partition actually used for booting.
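
                  If you want to double-check which ESP the firmware actually boots from (my own suggestion, not part of the original reply), you can compare the PARTUUIDs in the UEFI boot entries against the blkid output above:

                  Code:
                  # Suggested verification step (assumes the efibootmgr package is installed):
                  # list the UEFI boot entries together with the partition UUIDs they point to
                  efibootmgr -v
                  # Compare the PARTUUID of the active boot entry with those of /dev/sda1 and /dev/sdb1 above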


                  Basically, this confirms the article. Now, to fix the problem you have to:

                  1. Edit the /etc/fstab file and replace LABEL=EFI_SYSPART with the device path of sda1, so the line reads:
                  Code:
                  /dev/sda1 /boot/efi vfat defaults 0 1
                  2. Unmount previous partition:
                  Code:
                  umount /boot/efi
                  3. Mount new /boot/efi:
                  Code:
                  mount -a
                  4. Reinstall CloudLinux kernel and modules:
                  Code:
                  yum reinstall $(rpm -qa | grep kernel | grep lve)
                  5. Reboot the server. (A consolidated sketch of steps 1-5 follows below.)
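
                  For convenience, here is the whole procedure as a single shell sketch. It is only a consolidation of steps 1-5 above, assuming /dev/sda1 is indeed the ESP your server boots from (adjust the device path otherwise); the sed call is just an illustration of the manual /etc/fstab edit:
                  Code:
                  # 1. Back up /etc/fstab, then point /boot/efi at /dev/sda1 instead of the ambiguous label
                  cp /etc/fstab /etc/fstab.bak
                  sed -i 's|^LABEL=EFI_SYSPART.*|/dev/sda1 /boot/efi vfat defaults 0 1|' /etc/fstab
                  # 2. Unmount the currently mounted ESP (/dev/sdb1)
                  umount /boot/efi
                  # 3. Mount /boot/efi again from the updated /etc/fstab
                  mount -a
                  # 4. Reinstall the CloudLinux kernel packages so the boot files land on the correct ESP
                  yum reinstall $(rpm -qa | grep kernel | grep lve)
                  # 5. Reboot the server
                  reboot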



                  • #9
                    Can you please explain to me, for this step, which value I should replace with which?

                    1. Edit the /etc/fstab file and replace LABEL=EFI_SYSPART with the device path of sda1, so the line reads:



                    • #10
                      Thank you!
                      The problem was solved thanks to your guidance and solutions. You are a genius, truly excellent. Congratulations!
                      However, the edition that got installed has a lower hosting-account limit than I need.
                      This message appears when I open CloudLinux:
                      Hosting accounts have exceeded the allowed limit.
                      Review your hosting accounts to ensure compliance with the allowed limit for your edition. If you want to use the capabilities of the CloudLinux on a server with more accounts, you may consider using CloudLinux OS Shared PRO.
                      Should I remove this edition and install one without the account limit, and if so, how?



                      • #11
                        Great to hear the issue has been fixed!

                        Which edition have you installed? CloudLinux OS Shared (now called Legacy) should not show such a warning.



                        • #12
                          Thank you, my friend.
                          I have installed this edition:
                          CloudLinux OS Solo
                          How do I remove it and install this other edition instead?
                          CloudLinux OS Shared Pro



                          • #13
                            I believe you do not need to reinstall a different version; you just have to cancel the Solo license, purchase the CloudLinux Shared Pro license, and reactivate it per https://cloudlinux.zendesk.com/hc/en...based-Licenses
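
                            As a rough sketch only (the authoritative steps are in the linked article, and <activation_key> below is a placeholder): re-registration of a key-based license is typically done with rhnreg_ks, or with clnreg_ks for IP-based licenses:
                            Code:
                            # Sketch: reactivate a key-based CloudLinux license (<activation_key> is a placeholder)
                            /usr/sbin/rhnreg_ks --activationkey=<activation_key> --force
                            # For an IP-based license the equivalent is:
                            /usr/sbin/clnreg_ks --force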



                            • #14
                              Thank you.
                              I followed the explanation, but unfortunately it did not work.
                              Is there a way to completely remove CloudLinux from the server and start over from the beginning?
                              Thanks



                              • #15
                                I believe it should be doable from the license perspective. But yes, it seems it will be easier for you to uninstall per https://docs.cloudlinux.com/shared/c.../#uninstalling and then do the conversion again. Please just leave /etc/fstab unchanged.
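
                                If you do go the uninstall-and-reconvert route, the conversion itself is normally re-run with the cldeploy script. This is a sketch only, assuming a key-based license; <activation_key> is a placeholder and the exact procedure is in the linked documentation:
                                Code:
                                # Re-run the CloudLinux conversion after uninstalling (key-based license assumed)
                                wget https://repo.cloudlinux.com/cloudlinux/sources/cln/cldeploy
                                sh cldeploy -k <activation_key>    # or: sh cldeploy -i   for an IP-based license
                                reboot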

