Imunify360 should improve bot detection and mitigation

  • Imunify360 should improve bot detection and mitigation

    We operate a large number of servers running Imunify360.

    As every website and server operator is well aware, there has been a huge increase in unwanted bot activity, and Imunify360 is not very effective at identifying and blocking bots. As a multi-layered security solution, we expect Imunify360 to keep websites safe, but it completely misses this threat.

    Today's bad bots do not respect robots.txt or even identify themselves as bots. Their behaviour often mimics a distributed denial-of-service attack because of the following (a rough log-analysis sketch follows the list):
    • A high request rate
    • Requests only for website pages, not static assets; in our case that typically means PHP/SQL-generated pages, which are resource-heavy to generate
    • Only one request per IP (even if you detect an IP as malicious and block it, that IP is never seen again)
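
    For illustration, here is roughly how we surface this pattern from an access log. This is a hedged sketch, not an Imunify360 feature; the log path and the static-asset heuristic are assumptions to adjust for your server:
    Code:
    import re
    from collections import Counter

    LOG = "/var/log/apache2/access.log"  # example path, adjust for your server
    line_re = re.compile(r'^(\S+) \S+ \S+ \[[^\]]+\] "(?:GET|POST) (\S+)')
    static_re = re.compile(r'\.(css|js|png|jpe?g|gif|ico|svg|woff2?)(\?|$)')

    hits, dynamic, total = Counter(), 0, 0
    with open(LOG) as f:
        for line in f:
            m = line_re.match(line)
            if not m:
                continue
            ip, path = m.groups()
            total += 1
            hits[ip] += 1
            if not static_re.search(path):
                dynamic += 1  # likely a PHP/SQL-generated page

    single = sum(1 for c in hits.values() if c == 1)
    print(f"requests: {total}, unique IPs: {len(hits)}")
    print(f"IPs seen exactly once: {single} ({single / max(len(hits), 1):.0%})")
    print(f"dynamic-page requests: {dynamic} ({dynamic / max(total, 1):.0%})")
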
    But Imunify360 has a great opportunity to handle this problem:
    1. It already has an RBL feature, so bot IPs identified on one server could be propagated effectively to every other server (see the lookup sketch after this list)
    2. It already has a CAPTCHA feature (but it's almost impossible to trigger it from just one request!)
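
    For reference, RBL/DNSBL propagation generally works by reversing the IP's octets and querying a blocklist zone, where any A record means "listed". A minimal sketch (the zone name is hypothetical; Imunify360's actual RBL protocol isn't public):
    Code:
    import socket

    def rbl_listed(ip: str, zone: str = "rbl.example.com") -> bool:
        query = ".".join(reversed(ip.split("."))) + "." + zone
        try:
            socket.gethostbyname(query)  # an A record means the IP is listed
            return True
        except socket.gaierror:
            return False

    print(rbl_listed("203.0.113.7"))  # TEST-NET example address
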
    Our customers would pay extra for functionality to properly manage bots.

  • #2
    Hi dls,

    Thank you for taking the time to share your detailed feedback — we truly appreciate your insight, especially coming from someone managing a large-scale environment.

    You’ve raised a very valid and increasingly common concern in today’s landscape. The evolution of bot behaviour — especially those mimicking legitimate user traffic and DDoS-like patterns — poses challenges for any security solution, and we agree that Imunify360 has a strong foundation to help mitigate such threats more effectively.

    Imunify360 already includes several layers of bot protection, including reputation-based blocking (RBL), mod_security rules, and CAPTCHA-based challenges — however, as you mentioned, a single, low-frequency request per IP can make detection and mitigation more complex.

    To better understand and address the type of traffic you're seeing, we'd appreciate it if you could share a sample access log showing the reported activity, or a traffic snapshot (in plain text format) from your web server that highlights the suspicious behaviour. This would help us:
    • Review how these requests behave,
    • Check if they should be blocked by our current mod_sec rules or heuristics,
    • And explore whether additional detection logic or rule tuning is needed.

    To move forward effectively, we recommend opening a support ticket and attaching the relevant logs there. This will allow us to escalate the case and involve our Web Protection Team (WPT) for a deeper investigation. With real-world examples from your environment, we’ll be able to evaluate the traffic pattern and determine whether adjustments to our existing detection rules are needed.
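
    If it helps, here is a minimal way to carve out a plain-text snapshot to attach to the ticket. The log path, format, and time window below are examples only, not something Imunify360 requires:
    Code:
    LOG = "/var/log/apache2/access.log"   # example path
    WINDOW = "16/Apr/2025:12"             # Apache timestamp prefix: day and hour

    with open(LOG) as f, open("snapshot.txt", "w") as out:
        for line in f:
            if WINDOW in line:
                out.write(line)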

    Looking forward to your example — and thank you again for helping us improve Imunify360.
    Last edited by alevchenko; 04-16-2025, 12:29 PM.

    • #3
      Thank you for reaching out and sharing your detailed feedback regarding bot detection. We understand your concerns completely – the rise in aggressive and unwanted AI bot traffic is a growing challenge for website hosters, and we recognize the need for further improvements.

      Since the AI race began, even bots known to behave well have been ignoring robots.txt and 429 responses, not to mention others that hide behind their users' legitimate traffic and headless browsers while rapidly rotating IP addresses from huge pools, making traditional IP blocking far less effective. We agree that effectively identifying and managing these bots, especially resource-intensive AI crawlers, is crucial for a multi-layered security solution like Imunify360. This is an area we are actively working on and improving.
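
      For contrast, here is roughly what a well-behaved crawler is expected to do: consult robots.txt before fetching and back off on a 429 response. A standard-library Python sketch; the site URL and bot name are placeholders:
      Code:
      import time
      import urllib.error
      import urllib.request
      import urllib.robotparser

      BASE = "https://example.com"   # placeholder site
      UA = "ExampleBot/1.0"          # placeholder bot name

      rp = urllib.robotparser.RobotFileParser(BASE + "/robots.txt")
      rp.read()

      url = BASE + "/some-page"
      if rp.can_fetch(UA, url):      # honour robots.txt
          try:
              req = urllib.request.Request(url, headers={"User-Agent": UA})
              urllib.request.urlopen(req)
          except urllib.error.HTTPError as e:
              if e.code == 429:      # honour rate limiting
                  time.sleep(int(e.headers.get("Retry-After", "60")))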

      Here is an overview of our current efforts and planned enhancements based on the points you raised and our internal developments:

      Our recent splash-screen improvements and "cool-down" ModSecurity rules have successfully blocked bots like Bytespider and Claudebot based purely on their User-Agent strings. We are expanding these checks to more User-Agents. We are also preparing new ModSecurity rules specifically designed to track requests from a list of known AI bots and possibly penalize them for non-compliance with robots.txt. This list includes bots like Perplexity-User, anthropic-ai, Claude-Web, cohere-ai, Applebot, and others. The rule is scheduled for one of the next releases; initially it will track and pass requests to gather data, without yet introducing traffic thresholds. The gist of the User-Agent matching is sketched below.
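
      Purely for illustration (the real ModSecurity rule logic is not public), the tracking amounts to a case-insensitive match against the known-bot list, with matches tracked rather than blocked at first:
      Code:
      import re

      # Bots named in this thread; the real list is longer and maintained by us
      KNOWN_AI_BOTS = [
          "Bytespider", "Claudebot", "Perplexity-User", "anthropic-ai",
          "Claude-Web", "cohere-ai", "Applebot",
      ]
      bot_re = re.compile("|".join(map(re.escape, KNOWN_AI_BOTS)), re.IGNORECASE)

      def classify(user_agent: str) -> str:
          m = bot_re.search(user_agent)
          return f"track:{m.group(0)}" if m else "pass"

      print(classify("Mozilla/5.0 (compatible; Bytespider)"))  # track:Bytespider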

      We are exploring more sophisticated rate-limiting and blocking mechanisms tied to bot identification, as sketched below. The goal is to move beyond simple IP blocks or CAPTCHA challenges triggered by single requests; for less aggressive, specific bots there will be an option to block them via custom rules, while we analyse incoming data to find suitable thresholds.
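
      A sliding-window sketch of the direction we are exploring: budget requests per bot identity rather than per IP, so rotating IP pools share one limit. The window and threshold below are placeholders, not Imunify360 defaults:
      Code:
      import time
      from collections import defaultdict, deque

      WINDOW = 60.0   # seconds (placeholder)
      LIMIT = 120     # requests per window per bot identity (placeholder)
      seen = defaultdict(deque)

      def allow(bot_id, now=None):
          now = time.monotonic() if now is None else now
          q = seen[bot_id]
          while q and now - q[0] > WINDOW:
              q.popleft()        # drop requests outside the window
          if len(q) >= LIMIT:
              return False       # over budget: block or challenge
          q.append(now)
          return True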

      We are actively discussing how best to implement an opt-in system to block specific categories of bots, such as AI crawlers, providing more granular control based on specific needs (a conceptual sketch follows). This aligns with your suggestion and is a priority for future development.
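
      Conceptually, the opt-in could look like a per-category policy that hosts enable themselves. The categories and defaults here are illustrative, not a finished Imunify360 configuration:
      Code:
      POLICY = {
          "ai_crawlers": "block",       # opted in by the host
          "seo_crawlers": "ratelimit",
          "search_engines": "allow",    # never blocked by default
      }

      def decide(category):
          return POLICY.get(category, "allow")

      print(decide("ai_crawlers"))  # -> block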
