How to detect user agent on SSL in iptables?
I use nginx, and I was able to detect and block browser user agents through iptables on a plain HTTP site, but that trick isn't working for HTTPS sites. Any idea how to do it?
iptables -N Wordpress-PingBacks
iptables -I INPUT -p tcp --dport 80 -m string --to 70 --algo bm --string 'GET /' -j Wordpress-PingBacks
iptables -A Wordpress-PingBacks -p tcp --dport 80 -m string --to 80 --algo bm ! --string 'User-Agent: WordPress/' -j RETURN
iptables -A Wordpress-PingBacks -p tcp --dport 80 -j DROP
iptables -A Wordpress-PingBacks -j RETURN
This code was working fine for HTTP sites, but simply replacing port 80 with 443 doesn't work because the data is encrypted. Any idea how to achieve this for SSL sites?
Comments
The initial SSL handshake isn't encrypted. I'm not bored enough to check myself, but I bet there's something in the ClientHello that can easily be used to block current incarnations of XML-RPC abuse.
That won't work, since the packet could be split, or there could be more headers, etc.
Just do a catch in NGINX and you'll be able to process it. If you're really getting slogged, I recommend just parsing the access log for WordPress and shoving the offenders into an ipset.
Francisco
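Francisco's log-parsing idea could be sketched roughly like this in shell. The log path, the set name `wp_pingbacks`, and the helper `extract_wp_ips` are my own assumptions for illustration, not anything from the thread:

```shell
#!/bin/sh
# Hypothetical sketch: pull the client IPs of requests whose User-Agent
# contains "WordPress/" out of the nginx access log, so they can be fed
# into an ipset that a single iptables rule then drops.

# One-time setup (requires root; shown only as comments here):
#   ipset create wp_pingbacks hash:ip timeout 3600
#   iptables -I INPUT -m set --match-set wp_pingbacks src -j DROP

# In the default "combined" log format the client IP is the first field.
extract_wp_ips() {
    grep 'WordPress/' "$1" 2>/dev/null | awk '{print $1}' | sort -u
}

LOG="${LOG:-/var/log/nginx/access.log}"
extract_wp_ips "$LOG" | while read -r ip; do
    echo "would ban: $ip"   # replace with: ipset add wp_pingbacks "$ip" -exist
done
```

Run from cron every minute or so; with a `timeout` on the set, bans expire on their own instead of piling up.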
Honestly, filtering strings in iptables should be a last-resort scenario. All this burden just to filter a user-agent string simply isn't worth it. I see jobs like this as a better fit for a filtering proxy, HAProxy, or even the webserver itself.
I am already doing it in nginx and returning 444 to those user agents, but even then it sometimes creates huge load on the VPS when many of these requests arrive every second. When I was detecting and blocking the agent through iptables on HTTP, before moving to an SSL site, it worked smoothly without any issue.
I tried to block all those thousands of IP addresses and their ASNs from the access log, using both iptables and nginx deny, but that too creates huge load on the VPS.
Currently I am getting two types of attacks regularly: one comes with a WordPress user agent and another comes with a blank user agent.
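The nginx-side blocking described here (returning 444 for both attack patterns) might look something like this; a sketch with assumed names, not the OP's actual config:

```nginx
# Map the User-Agent to a flag: blank agents and WordPress agents get 1.
map $http_user_agent $bad_agent {
    default          0;
    ""               1;   # blank user agent
    "~*WordPress"    1;   # anything containing "WordPress"
}

server {
    listen 443 ssl;
    server_name example.com;

    # 444 is nginx's "close the connection without sending a response".
    if ($bad_agent) {
        return 444;
    }
}
```

The `map` is evaluated lazily per request, which is cheaper than chained `if` blocks, but nginx still has to complete the TLS handshake before it ever sees the User-Agent, which is why this costs more CPU than the old port-80 iptables trick.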
I will check it, thanks for the recommendation.
I am currently returning 444 to these agents through the webserver (nginx), but it still doesn't work that smoothly; when the attack is really big, it creates huge load on the VPS.
That is probably the first time autocomplete screwed up because my fingers were wrong. I meant to say the ClientHello handshake/data is not encrypted. It's getting low level; we're talking protocol levels.
Sounds like a complicated fix
It'd still be a complicated workaround, and as @Francisco noted, it can be spread between packets, so it's even more of a mess than just banning WordPress.
Filtering this way will still create a load on the VPS, a small but permanent one, and it's an easy block to circumvent. I don't know the nature of your attacks, but this could do the job: drop known abusive/compromised IPs (using Spamhaus's DROP/EDROP and/or other common lists) in the raw table, to minimize processing on packets from those sources; add specific rate-limiting plus reject/drop rules; and eventually add specific offending IPs to ipsets themselves (per Francisco's suggestion, assuming those IPs don't belong to the aforementioned lists). If it's still so difficult to filter the DDoS attempts, and the attackers use non-blacklisted IPs, keep changing them, and are too difficult to catch, my answer would still be HAProxy rather than the string module.
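The raw-table drop plus rate-limiting combination described here might be sketched as follows; the set names, limits, and hashlimit parameters are example assumptions, and fetching the DROP/EDROP lists is not shown:

```shell
# Blocklist set; populate it from Spamhaus DROP/EDROP or similar lists.
ipset create blocklist hash:net -exist

# Drop blocklisted sources in the raw table, before conntrack spends
# any cycles on them.
iptables -t raw -I PREROUTING -m set --match-set blocklist src -j DROP

# Rate-limit new HTTPS connections per source IP; numbers are examples.
iptables -I INPUT -p tcp --dport 443 -m conntrack --ctstate NEW \
    -m hashlimit --hashlimit-above 20/second --hashlimit-burst 40 \
    --hashlimit-mode srcip --hashlimit-name https-flood -j DROP

# Per-IP offender set with automatic expiry (Francisco's ipset idea).
ipset create offenders hash:ip timeout 3600 -exist
iptables -I INPUT -m set --match-set offenders src -j DROP
```

One set-match rule scales to tens of thousands of entries, which is exactly why this beats generating one iptables rule per banned IP.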
I'm curious to know which topics your WordPress site covers in order to get so much flattering attention.
This is generally something I'd let fail2ban take care of. But I'm not sure in this case how the traffic rises to the level of abuse. If the URL is valid and they're not trying to DDoS you, just let the web server do its thing. Maybe a rewrite/redirect to a static page for that particular user agent, or a caching proxy as others have suggested.

Sounds like a job for using multiple A records and putting several nginx/HAProxy machines in front of your actual app to catch that specific load. Should not be too expensive...
Or if you don't want to rely on DNS round robin, pass through from one nginx to some "filtering" nginxes, then on to your app?
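The fail2ban approach mentioned above could look roughly like this; the filter name, log path, and thresholds are assumptions for illustration:

```ini
; /etc/fail2ban/filter.d/nginx-wp-agent.conf  (hypothetical filter name)
[Definition]
; Match access-log lines whose User-Agent field contains "WordPress/"
failregex = ^<HOST> .* "WordPress/[^"]*"$

; /etc/fail2ban/jail.local excerpt
[nginx-wp-agent]
enabled  = true
port     = http,https
filter   = nginx-wp-agent
logpath  = /var/log/nginx/access.log
maxretry = 5
findtime = 60
bantime  = 3600
```

fail2ban then inserts the ban itself (iptables by default), so the webserver only has to absorb the first few requests per offender.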
Before even bigger guns get on the table (maybe a cloud-based agent guardian), the OP might want to tell us more about his scenario: what he wants to protect (a blog, a DB-driven web shop, ...), what he wants to protect it from (we want some kind of reasonable balance, not howitzers protecting against the occasional pocket thief, right?), why he chose the user agent to filter by, etc.
To expand on @WSS: yes, an SSL/TLS session starts unencrypted, with both sides exchanging version numbers, some crypto details (like accepted/available ciphers/algorithms), and some random data; nothing, however, that would be reasonably usable the way the OP wants.
Generally: do not abuse the firewall to do all kinds of blocking. Keep in mind that you want your firewall to be tight, lean, and mean. If it's clogged by zillions of IPs and whatnot to be blocked, it'll choke and become a bottleneck. At the very minimum, put those IP lists into tables (rather than into individual rules).
Anything more application-specific than very basic packet checks should go into an "app wall". I don't know much about nginx, as I don't use it, but from what I know it's quite a flexible server, with even some scripting built in.
Installing a proxy for only a handful of offenders seems like expensive overkill to me.