
MariaDB 5.5 on CloudLinux 7 hard lock ups


  • MariaDB 5.5 on CloudLinux 7 hard lock ups

    From time to time the native MariaDB server (Server version: 5.5.68-MariaDB-cll-lve MariaDB Server) locks up with "too many connections" on most of our servers.
    I can't figure out why. The only way to recover is to kill the mysqld processes with "killall -9 mysqld" and let systemctl restart the service.

    When it locks up, it uses every connection available to the database server (i.e. if I set max connections to 150, all 150 are in use during the lockup) and queries sit in a waiting state. I cannot even kill specific connections to free them up; I have to kill the entire server.

    I tweaked the InnoDB cache size and related settings, but it did not help at all. If I set max connections to 800, then all 800 will be in use when the lockup happens.

    I have not experienced anything like this on other OSes. I tried to find bugs reported against this specific version.

    Please help me fix this issue, it is getting really annoying.

  • #2
    Hey Dima,

    Hope you're well! :-)

    Have you checked which processes are using those connections?

    Maybe it'd be worth opening a ticket with us in our support portal so we can investigate this properly.



    • #3

      Well, this is a pretty hard question; in most cases it is related to a specific website or to the configuration of a server.

      The investigation path is the same for servers with or without CloudLinux. Start with a simple "SHOW PROCESSLIST;" to understand what those connections are and whether they all come from a single user/website. Pay attention to the State and Info columns in the resulting table.
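      Once the processlist output is saved to a file, a quick way to see whether a single user dominates is to group it by the User column. A minimal sketch; the tab-separated sample written below is illustrative, not real data from this thread:

```shell
# In practice you would capture the real dump first, e.g.:
#   mysql -e 'SHOW FULL PROCESSLIST' > processlist.txt
# Here we fabricate a tiny tab-separated sample so the sketch is self-contained.
printf 'Id\tUser\tHost\tdb\tCommand\tTime\tState\tInfo\n'                                  >  processlist.txt
printf '12\tshop\tlocalhost\tshopdb\tQuery\t1679\tFilling schema table\tSELECT ...\n'      >> processlist.txt
printf '13\tblog\tlocalhost\tblogdb\tQuery\t45\tCopying to tmp table\tSELECT ...\n'        >> processlist.txt
printf '14\tshop\tlocalhost\tshopdb\tSleep\t3\t\tNULL\n'                                   >> processlist.txt

# Count connections per user (column 2), skipping the header line,
# and list the heaviest users first.
awk -F'\t' 'NR > 1 { n[$2]++ } END { for (u in n) print n[u], u }' processlist.txt | sort -rn
```

      If one user owns most of the 150 (or 800) connections, the problem is almost certainly that site's queries rather than the server as a whole.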

      Here is a good explanation of how it works:


      • #4

        Sorry for the late response on this topic.

        The stuck processes come from different users and different queries; I cannot find anything in common.
        It has happened on both physical and virtual installations.

        We have migrated the problematic servers from CloudLinux OS 7 to 8. On the new OS we have not experienced this yet.

        I opened a ticket about this back in 2018 and was told it might be caused by high SLAB usage. That was an issue on one server, but not on all of them. For some reason we did not follow up on that ticket, or I just can't find the replies in my mailbox.


        • #5
          Have you tried running it on a non-CL kernel? It could be kernel-related, or, more likely, a bug in that MariaDB version.
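          A quick way to check which kernel a box is actually running: CloudLinux kernels carry an "lve" tag in the release string, while a stock EL7 kernel does not (the version shown in the comment is only an example, not taken from this thread):

```shell
# Print the running kernel release. On CL7 it looks something like
# 3.10.0-962.3.2.lve1.5.26.7.el7.x86_64 (note the "lve" tag);
# a stock kernel release has no "lve" in it.
uname -r
```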


          • #6
            This issue has only come up on CL7 with MariaDB installed from the default repo. In Docker we run a newer MariaDB and that one is fine.
            Exact MariaDB version: mysqld 5.5.68-MariaDB-cll-lve

            The issue just happened again. I saved the "SHOW PROCESSLIST" output; I'll attach some examples. 801 rows.
            The first few:

            (Attached screenshot: Screenshot 2022-10-21 124836.png — first rows of the processlist)


            • #7
              The longest and most suspicious query is in the following state:
              1679 Filling schema table
              My guess is that this is why the deadlocks are happening. There is not much information about it on the internet; the documentation only says "A table in the information_schema database is being built."

              And from 5.5.38 changelog:
              Added state "Filling schema table" when we generate temporary table for SHOW commands and information schema.
              At the same time I see a few connections in "Copying to tmp table", so my first suggestion is to change tmpdir to /dev/shm, so that work on temporary tables goes through memory and is faster.
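              The tmpdir change is a one-line my.cnf edit. A sketch, assuming /dev/shm is large enough to hold your biggest on-disk temporary tables (they will now consume RAM):

```ini
[mysqld]
# Assumption: /dev/shm has enough free space for concurrent tmp tables;
# anything placed here lives in memory, so watch overall RAM usage.
tmpdir = /dev/shm
```

              The server needs a restart for this to take effect; /dev/shm being cleared on reboot is harmless for tmpdir, since its contents are transient anyway.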

              And for next time,
              SHOW ENGINE INNODB STATUS\G
              could provide some useful information.
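              To make sure that evidence survives the "killall -9" next time, it may be worth snapshotting both outputs to timestamped files before restarting. A minimal sketch; the output paths and the assumption that credentials live in a root ~/.my.cnf are mine, not from the thread:

```shell
# Snapshot InnoDB status and the full processlist before killing mysqld.
ts=$(date +%Y%m%d-%H%M%S)
out=/var/tmp/lockup-$ts
# Guarded so the sketch is a no-op on hosts without the mysql client;
# on the database server itself the client is always present.
if command -v mysql >/dev/null 2>&1; then
    mysql -e 'SHOW ENGINE INNODB STATUS\G' > "$out-innodb.txt"
    mysql -e 'SHOW FULL PROCESSLIST'       > "$out-processlist.txt"
fi
echo "$out"
```

              The InnoDB status section "LATEST DETECTED DEADLOCK" is the part worth reading first if deadlocks are the suspicion.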

              Some interesting bug reports with 5.5 affected that could be related: