
Bitninja Abuse Reports


Comments

  • AnthonySmith Member, Patron Provider

    bitninja_george said: Could you please tell me more about this web crawler?

    Well done for once again showing you have absolutely no idea how things actually work. Of course I can't; I am not the one running it.

    bitninja_george said: Crawling for forum registrations and wp-login pages is a suspicious activity I think

    I don't care what you think. If they do not want them accessed, block access; don't generate abuse reports because an IP accesses a publicly available page.

    bitninja_george said: but what about the rest of the Internet? They are not protected, so we warn the host owners to stop the malware on their hosts. You really can't see any value in it? What is actionable if it isn't?

    What the f**k has the rest of the internet got to do with you? We did not ask for your help; stop being the answer to the question no one asked, please.

    You do not warn: you generate an abuse report, and if it is not actioned you send reminders. You have no idea if something is infected or not; you just use the words malware and botnet to get a reaction.

    bitninja_george said: This is what most server owners/DC owners do to prevent infections:

    How arrogant are you... it is beyond belief at this point.

    Again, you still offer no de-listing process; you're a commercial scum list. Go away, you horrible little man.

  • @AnthonySmith said:

    You do not warn: you generate an abuse report, and if it is not actioned you send reminders. You have no idea if something is infected or not; you just use the words malware and botnet to get a reaction.

    Listen, I know it is an infection, as we see this malware every day and the damage it does. I can even send you some sample code. If you don't warn your user, that's up to you; it is your responsibility.

    From your last answer, I think you don't really care about others: other hosts, other networks. I think this is not how the Internet should work. And that's why we send an abuse report when we find an infected host.

    Anthony, I think any other argument is completely pointless as we think differently about security and responsibility.

  • Lol this is getting boring.
    I'm just glad that most of the hosts here will not take your reports seriously.

  • inthecloudblog

    You've made me lose 4 SoYouStart crawlers @bitninja_george
    Luckily most providers (good ones) are not taking you seriously.
    We've crawled over 6 trillion URLs btw, and the project is legit for those who are curious (10+ years and counting).

    If you were at all transparent you would not ask to be called George when that's not your name, and you've admitted it.

  • AnthonySmith Member, Patron Provider

    @bitninja_george yes, we see things very differently. I respect the work of anyone trying to make the internet a better place; my issue is that I see a spade and call it a spade, while you see a piece of wood, a handle of any description, a sliver of metal... and you call it a spade.

    I don't think this needs to go any further. Not one person in a thread with 3500+ views agrees with you or is even sympathetic to your methods; you just see all those people as wrong, while I have common sense.

  • stefeman Member

    @AnthonySmith said:
    @bitninja_george yes, we see things very differently. I respect the work of anyone trying to make the internet a better place; my issue is that I see a spade and call it a spade, while you see a piece of wood, a handle of any description, a sliver of metal... and you call it a spade.

    I don't think this needs to go any further. Not one person in a thread with 3500+ views agrees with you or is even sympathetic to your methods; you just see all those people as wrong, while I have common sense.

    I do agree with his ambition for a more secure internet. However, the way he's executing it is horrible. Rather than a business collecting money via extortion, this kind of thing should be donation-based and funded jointly by the big players in the industry.

    I hate the very idea of a private company, with every intention of maximizing its own profit, being able to dictate what counts as abuse and whose hosting gets suspended for not complying with whatever demands it hands out.

    Look at Spamhaus. It's the biggest "blacklist provider" for all threats and spam. Being blacklisted by it, even by accident, would result in total loss of all business. They even ruled Ecatel and other providers unsafe and blocked their IP ranges. They have insane control over the internet, because being blacklisted by Spamhaus is catastrophic for anyone. Yet I have no problem with them having that power. Why? Because they are non-profit and trustworthy.

  • I'm sure no one would ever have heard of bitninja if not for the advertisements they send as abuse reports.

  • AnthonySmith Member, Patron Provider

    stefeman said: I do agree with his ambition for a more secure internet.

    I agree with this in principle; I just question the idea that this is actually the real aim of bitninja.

  • ricardo Member

    I've come across their 'reports' a few times. The report contains nothing useful and comes across as a marketing email.

    I'd triggered an 'abuse' report by sending requests to my own hosting package on a shared server from one of my own VPS containers.

    Apparently bitninja decides to display a captcha instead of the resource you are actually requesting and if you don't solve that captcha, they'll sling out an email to the abuse@ contact.

    Apart from them being awkwardly in the way, the fact that their software decides to display captchas and intercept HTTP requests to my own resources seems a bit inane.
    I'd lean towards avoiding shared hosting providers who use bitninja; however, I do see some potential in what they're trying to do, it's just not that good at it yet. Any software that decides for me who can/can't view my stuff without my say is poor software.

  • Most of the bitninja abuse reports I've had are from people running VPNs; however, I have had one that was for some guy running a WP brute-force thing, so their blacklist does get things right at times.

    What is interesting, and what I don't think they should send abuse reports about, is when IPs are manually reported by users with no additional info.

  • ricardo said: Any software that decides for me who can/can't view my stuff without my say is poor software.

    So is CloudFlare because that's exactly what CloudFlare does.

  • jar Patron Provider, Top Host, Veteran

    AnthonySmith said: What the f**k has the rest of the internet got to do with you? We did not ask for your help; stop being the answer to the question no one asked, please.

    I understand your frustration and I agree with you, but I would ask you to keep it civil. I'm fully aware how hypocritical that is coming from me. I'm trying.

  • AnthonySmith Member, Patron Provider

    jarland said: I'm fully aware how hypocritical that is coming from me. I'm trying.

    Fair one, my apologies.

  • Master_Bo Member

    @bitninja_george said:
    This is what most server owners/DC owners do to prevent infections

    O'RLY?

    I read the thread and see that you genuinely (assuming your mind is crystal clear and virtuously just) assume you possess the absolute knowledge of how things should work. The rest of us either don't, or do it wrong.

    I assume that if you offered assistance, having explained it, and only did what you do after being asked to, the situation could be different.

    Have you done a proper study of how server owners view and handle vulnerabilities and the "pwned" state of their assets? Just curious.

  • Nexeon Member, Patron Provider

    We've received a good number of bitninja reports; we forward them to the customer but of course won't axe someone because of them.

  • jar Patron Provider, Top Host, Veteran

    Master_Bo said: Have you done a proper study of how server owners view and handle vulnerabilities and the "pwned" state of their assets? Just curious.

    It doesn't sound like they have, based on this thread. I'm totally on board with making the internet a better place. I'm not on board with spamming people's abuse@ because you don't understand how the internet works.

  • @jarland said:
    It doesn't sound like they have, based on this thread. I'm totally on board with making the internet a better place. I'm not on board with spamming people's abuse@ because you don't understand how the internet works.

    Right, we are trying to find a way to reduce the number of reports and aggregate them somehow.

    Regarding the study of how server owners handle vulnerabilities: the parent company of BitNinja is a shared web hosting provider with 40+ servers, and we had a lot of issues with different attacks, infections, and botnets. Our shared hosting users were constantly complaining about their hacked CMSes and slow servers, and our IPs were constantly on different mail blacklists because of spamming. BitNinja was born as an in-house project to find a solution for this problem. We attended many exhibitions like HostingCon and WHD as exhibitors, and many visitors (shared providers, VPS providers, and DC owners) were complaining about similar issues.

    We have done a lot of research on different malware and botnets. Here is my CodeMash presentation about how hackers abuse servers.

    Sorry if my replies were arrogant; it was not my intention. I really just want to help make the Internet better, and I think there is a lot we, server providers, can do.

    I'm happy to help anyone with server security.

  • time4vps Member, Host Rep

    bitninja_george said: I really just want to help make the Internet better <...>

    Hm, somehow I know this phrase... Oh wait, here it is:

    "Let's make <...insert_your_profit_plan...> great again!" Mr. D. Trump.

  • @inthecloudblog said:
    You've made me lose 4 SoYouStart crawlers @bitninja_george
    Luckily most providers (good ones) are not taking you seriously.
    We've crawled over 6 trillion URLs btw, and the project is legit for those who are curious (10+ years and counting).

    We do crawler whitelisting, so if you have a legit crawler and you set up the reverse DNS correctly, we can whitelist it based on that. Please contact us and we are happy to solve it.

    If you were at all transparent you would not ask to be called George when that's not your name, and you've admitted it.

    My real name is Zsolt Egri; I live in Debrecen, Hungary. Most native English speakers can't pronounce my name, as 'Zs' is a letter and sound that doesn't exist in English, so I use George as the closest equivalent. We attend exhibitions regularly, and most people know me as George from those events. But I don't think your name is 'inthecloudblog' either :-)

  • hostingwizard_net

    A few remarks...

    1) Abuse reports should always be in the ARF format, correctly categorized, and only one per serious incident.
    2) Max one per month for non-opt-in recipients. For opt-ins, let them choose the frequency.
    3) Crawlers should not be blacklisted. Accessing wp-login is fine. Accessing wp-login with user data is not, and is also illegal in many countries. Only the last case should be considered harmful. Only the last case should be reported to people who are not your customers.
    4) George is György. Zsolt is Sultan, so maybe Brian would be appropriate. But I would use Zoli ;)

  • AnthonySmith Member, Patron Provider

    bitninja_george said: if you have a legit crawler and you set up the reverse DNS correctly, we can whitelist it based on that. Please contact us and we are happy to solve it.

    sigh...


  • ricardo Member

    hostingwizard_net said: So is CloudFlare because that's exactly what CloudFlare does.

    That's a standalone service with at least an on/off switch, and probably more granularity than a simple 'deal with it'.

    bitninja_george said: We do crawler whitelisting, so if you have a legit crawler and you set up the reverse DNS correctly, we can whitelist it based on that. Please contact us and we are happy to solve it.

    Please make sure no one else uses bitninja outside your company. Normal providers would let their users use robots.txt and 403 anything they don't like. Do you whitelist Yandex? Majestic? Ahrefs? Not quite sure what your definition of 'legitimate' is. Your captchas I mentioned earlier return an HTTP 200 response, which is a pretty damn poor attempt at 'legit' itself.

  • HostSlick Member, Patron Provider

    Nah we don't give a shit about them either

  • ricardo said: Majestic

    That's us... and he has sent several hundred emails taking down servers. He'd never provide IPs to block, single IPs or ranges, domains, whatever...

    The owner of the project contacted BitNinja.

    We still receive the same number of pointless complaints and takedown requests.

  • @ricardo said:
    Do you whitelist Yandex? Majestic? Ahrefs? Not quite sure what your definition of 'legitimate' is. Your captchas I mentioned earlier return an HTTP 200 response, which is a pretty damn poor attempt at 'legit' itself.

    Yes, we whitelist all major search bots. All of them have reverse DNS set up, so it is easy to identify them on the agent side:

        > host 66.249.66.1
        1.66.249.66.in-addr.arpa domain name pointer crawl-66-249-66-1.googlebot.com
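
    For the curious, the check boils down to forward-confirmed reverse DNS: resolve the PTR record, resolve the resulting hostname forward, and make sure the original IP comes back. A minimal Python sketch (not our actual module):

        import socket

        def fcrdns_hostname(ip):
            """Forward-confirmed reverse DNS: PTR lookup for the IP, then a
            forward lookup of that hostname; the original IP must be among
            the results. Returns the verified hostname, or None."""
            try:
                host, _, _ = socket.gethostbyaddr(ip)        # PTR lookup
                _, _, addrs = socket.gethostbyname_ex(host)  # forward lookup
            except OSError:                                  # no PTR / no A record
                return None
            return host if ip in addrs else None

        # fcrdns_hostname("66.249.66.1") -> "crawl-66-249-66-1.googlebot.com"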

    With Majestic it was a bit different, because they didn't provide correct reverse DNS, but as far as I know we eventually whitelisted them too. Correct me if I'm wrong.

    Not quite sure what your definition of 'legitimate' is

    I mean that the bot won't overload the server: it doesn't use too many concurrent requests and doesn't crawl too fast. I couldn't find a good definition, but here is a blog post about this topic: http://blog.mischel.com/2011/12/20/writing-a-web-crawler-politeness/
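
    To illustrate, here is a rough sketch of what that politeness looks like in a crawler loop; the 2-second delay is just an example, not one of our thresholds:

        import time
        import urllib.request
        from urllib.parse import urlparse

        CRAWL_DELAY = 2.0  # seconds between hits to the same host (illustrative)
        last_hit = {}      # host -> timestamp of the most recent request

        def polite_fetch(url):
            """Fetch a URL, sleeping first if this host was hit too recently."""
            host = urlparse(url).netloc
            wait = CRAWL_DELAY - (time.time() - last_hit.get(host, 0))
            if wait > 0:
                time.sleep(wait)
            last_hit[host] = time.time()
            with urllib.request.urlopen(url, timeout=10) as resp:
                return resp.read()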

    Also, a crawler is obviously not legit in our terms if it tries to probe for vulnerabilities, like scanning for wp-login on different domains/hosts:

        2015-06-22 19:15:49 | Url: [http://xx.169.56.201:80/admin/pMA/]
        2015-06-22 19:15:49 | Url: [http://xx.169.56.201:80/admin/]
        2015-06-22 19:15:49 | Url: [http://xx.169.56.200:80/PMA2015/]
        2015-06-22 19:15:49 | Url: [http://xx.169.56.201:80/PMA2013/]
        2015-06-22 19:15:49 | Url: [http://xx.169.56.200:80/PMA2011/]
        2015-06-22 19:15:49 | Url: [http://xx.169.56.203:80/1phpmyadmin/]
        2015-06-22 19:15:49 | Url: [http://xx.169.56.200:80/MyAdmin/]
    

    or if it tries to discover other sensitive paths, like the existence of xmlrpc.php or other easy-to-abuse software components.

    Your captchas I mentioned earlier return an HTTP 200 response, which is a pretty damn poor attempt at 'legit' itself.

    The reason we return an HTTP 200 is to render the CAPTCHA page for human visitors, and also to give some decoy links to robots which do not respect robots.txt. If the robot leaves the page and doesn't follow the links, as instructed in robots.txt, we don't generate an incident. Malicious or poorly implemented robots will follow the links and generate an incident.
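
    The idea, roughly sketched with hypothetical paths and a toy handler (not our actual module):

        # robots.txt disallows a decoy path that the CAPTCHA page links to;
        # anything that fetches the decoy anyway is ignoring robots.txt.
        TRAP_PATH = "/bn-decoy/do-not-follow.html"  # hypothetical decoy URL

        ROBOTS_TXT = "User-agent: *\nDisallow: " + TRAP_PATH + "\n"

        def render_captcha_page():
            # Stub: the real page would embed the CAPTCHA plus a decoy link.
            return '<a href="%s"></a> CAPTCHA challenge goes here' % TRAP_PATH

        def handle_request(path, client_ip, incidents):
            """Return (status, body); flag IPs that follow the decoy link."""
            if path == "/robots.txt":
                return 200, ROBOTS_TXT         # never an incident
            if path == TRAP_PATH:
                incidents.append(client_ip)    # robot ignored robots.txt
                return 403, "Forbidden"
            return 200, render_captcha_page()  # HTTP 200 so humans see the CAPTCHA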

  • Don't get me wrong, I understand your attempts to improve performance and protect end users, but surely you realise your own implementation goes against the grain of how a lot of people expect the web to work.

    bitninja_george said: Yes, we whitelist all major search bots

    Major is another subjective term. Your definition of major isn't conclusive to anyone, unless you have a list?

    bitninja_george said: With Majestic it was a bit different, because they didn't provide correct reverse DNS, but as far as I know we eventually whitelisted them too. Correct me if I'm wrong.

    The only identifier of a Majestic bot is its user agent, which of course can easily be spoofed. I suppose that means your product is either a) going to allow majestic to crawl, b) providing an easy circumvention to your product or c) somewhere in the middle involving more guesswork

    bitninja_george said: The reason we return an HTTP 200 is to render the CAPTCHA page for human visitors, and also to give some decoy links to robots which do not respect robots.txt. If the robot leaves the page and doesn't follow the links, as instructed in robots.txt, we don't generate an incident. Malicious or poorly implemented robots will follow the links and generate an incident.

    I somewhat get the logic but don't think that's very sophisticated or effective. And the links don't have to be followed to 'generate an incident'; your implementation sends out the abuse emails without such links being followed.

  • @hostingwizard_net said:
    A few remarks...

    1) Abuse reports should always be in the ARF format, correctly categorized, and only one per serious incident.

    The ARF format is quite focused on email abuse. Anyway, we are happy to implement it if there is widespread use of it. Are there other providers using it too? Also, I can find plenty of examples of spam reports in ARF, but not a single one reporting anything else; for example, how would I generate a report about an IP participating in a distributed wp-login brute-force attack?
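
    For reference, this is roughly what building an ARF (RFC 5965) report looks like with Python's email package; every address and value below is made up, and the registered Feedback-Types are, as I said, very email-centric:

        from email.mime.multipart import MIMEMultipart
        from email.mime.text import MIMEText
        from email.mime.base import MIMEBase

        # Minimal ARF skeleton: multipart/report with report-type=feedback-report.
        msg = MIMEMultipart("report")
        msg.set_param("report-type", "feedback-report")
        msg["From"] = "reporter@example.com"
        msg["To"] = "abuse@example.net"
        msg["Subject"] = "Abuse report for 203.0.113.5"

        # First part: human-readable description of the incident.
        msg.attach(MIMEText("203.0.113.5 took part in a wp-login brute force.", "plain"))

        # Second part: the machine-readable feedback report.
        report = MIMEBase("message", "feedback-report")
        report.set_payload(
            "Feedback-Type: abuse\r\n"
            "User-Agent: ExampleReporter/1.0\r\n"
            "Version: 1\r\n"
            "Source-IP: 203.0.113.5\r\n"
        )
        msg.attach(report)

        # A third message/rfc822 part with the offending mail usually follows,
        # which is exactly what doesn't map onto HTTP attacks.
        print(msg.as_string())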

    2) Max one per month for non-opt-in recipients. For opt-ins, let them choose the frequency.

    We will consider this one. Seems reasonable.

    3) Crawlers should not be blacklisted. Accessing wp-login is fine. Accessing wp-login with user data is not, and is also illegal in many countries. Only the last case should be considered harmful. Only the last case should be reported to people who are not your customers.

    Yes, I think here we are on the same page. We never greylisted any IP just for accessing wp-login once or twice; our threshold is about 15 hits within 10 minutes, if I remember correctly. But when it is clearly a search for vulnerable sites, I think there's a good reason to greylist/report it (a rough sketch of that kind of threshold check follows the log excerpt below).

    For example this one:

        2015-06-04 03:40:29 | Url: [fl###en.com/wp/wp-admin/]
        2015-05-20 02:14:43 | Url: [op###tk.com/wp-admin/]  
        2015-05-18 21:36:45 | Url: [co###rs.com/wordpress/wp-admin/]    
        2015-05-18 14:21:46 | Url: [sk###on.com/wp-admin/]  
        2015-05-18 14:03:43 | Url: [pi###ia.###.uk/old/wp-admin/]   
        2015-05-18 13:48:17 | Url: [in###ws.org/wp-comments-post.php]   
        2015-05-18 13:36:26 | Url: [##pi.or.id/wordpress/wp-admin/] 
        2015-05-18 13:23:23 | Url: [th###ep.net/wp/wp-admin/]
    
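    As promised above, here is a rough sketch of that kind of sliding-window threshold check, using the roughly-15-hits-in-10-minutes numbers I quoted (not our production code):

        import time
        from collections import defaultdict, deque

        WINDOW = 600     # seconds, i.e. 10 minutes
        THRESHOLD = 15   # wp-login hits within the window before greylisting

        hits = defaultdict(deque)  # ip -> timestamps of recent wp-login hits

        def record_wp_login_hit(ip, now=None):
            """Record a wp-login access; return True if the IP should be greylisted."""
            now = time.time() if now is None else now
            q = hits[ip]
            q.append(now)
            while q and now - q[0] > WINDOW:
                q.popleft()  # forget hits older than the window
            return len(q) >= THRESHOLD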

    4) George is György. Zsolt is Sultan, so maybe Brian would be appropriate. But I would use Zoli ;)

    So I'm not the only one from Hungary here :-) I like George, thank you anyway :-)

  • @ricardo said:
    Major is another subjective term. Your definition of major isn't conclusive to anyone, unless you have a list?

    'googlebot.com', 'msn.com', 'ahrefs.com', 'cloudflare.com',
    'yandex.ru', 'yandex.net', 'yandex.com', 'google.com', 'opera-mini.net',
    'facebook.com', 'w3c.org', 'w3.org', 'crawl.baidu.com',
    'netcraft.com', 'wesee.com', 'phishtank.com', 'yahoo.net', 'yahoo.com', 'yahoodns.net',
    'protection.outlook.com', 'sitelock.com', '1e100.net', 'mx.aol.com', 'core.woorank.com', 'hotmail.com',
    'majestic12.co.uk', 'qualys.com', 'archive.org', 'alexa.com',
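
    The check runs on the hostname only after it has been verified with the reverse DNS lookup I described earlier. Roughly like this (a sketch, not our exact code; the dot boundary matters so 'evilgooglebot.com' can't match):

        WHITELIST = ("googlebot.com", "msn.com", "ahrefs.com", "yandex.com")  # excerpt

        def is_whitelisted(verified_hostname):
            """True if a reverse-DNS-verified hostname belongs to a whitelisted
            domain; requiring a leading dot blocks lookalikes such as
            'evilgooglebot.com'."""
            h = verified_hostname.rstrip(".").lower()
            return any(h == d or h.endswith("." + d) for d in WHITELIST)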

    The only identifier of a Majestic bot is its user agent, which of course can easily be spoofed. I suppose that means your product is either a) going to allow majestic to crawl, b) providing an easy circumvention to your product or c) somewhere in the middle involving more guesswork

    Yes, that's exactly the problem with Majestic. Unfortunately we can't simply allow access based on the user-agent field. If anyone has an idea how to do it properly, I'm happy to discuss. We have an idea in our backlog to label all incidents using AI and auto-remove good bot traffic, but it is not too high a priority at the moment.

    I somewhat get the logic but don't think that's very sophisticated or effective. And the links don't have to be followed to 'generate an incident'; your implementation sends out the abuse emails without such links being followed.

    Well, I tried to insert the code we use here, but CloudFlare decided to deny it, so I had to rewrite this post :-) So here is the code snippet:

    http://pastebin.com/qzkscCVQ

    So visiting the front page, robots.txt, and the captcha verification page won't generate an incident. I'm not sure whether Majestic fetches robots.txt first or the actual page? I think robots.txt should be fetched first and then the content, if allowed. I know there was some strange implementation in Majestic where, for example, in the case of /some/subdir/hello they loaded /some/subdir/robots.txt and not /robots.txt. We fixed it on our side, so now our module serves the same content in both cases.
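
    For what it's worth, the robots exclusion standard only ever puts robots.txt at the host root, so the correct URL can be derived from any page URL (a quick sketch):

        from urllib.parse import urlparse, urlunparse

        def robots_txt_url(page_url):
            """There is exactly one robots.txt per host, at the root:
            /some/subdir/hello -> /robots.txt."""
            p = urlparse(page_url)
            return urlunparse((p.scheme, p.netloc, "/robots.txt", "", "", ""))

        # robots_txt_url("http://example.com/some/subdir/hello")
        #   -> "http://example.com/robots.txt"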

  • ricardo Member

    My point about your whitelist is that the list is arbitrary: what about uptime monitors? Cron jobs? Callbacks and APIs? Your point about things visiting robots.txt first is fair, WRT bots, but your definition of a bot would also be arbitrary. Headless browsers and browser automation are a thing.

    Yes, that's exactly the problem with Majestic

    It isn't really a problem. If someone wanted to block Majestic, they can. The problem is you believing it's your choice, hidden behind the obfuscation that is BitNinja :)

    Anyway, rant over for me. I wouldn't recommend using hosting services that use such a thing. I think it's fine to block brute force, known attack vectors, or impolite clients, but you're overstepping the remit in a few areas, IMO. Also, the emails you send (including many false positives) are just marketing spam and waste people's time, seemingly only to promote your product.

  • @ricardo said:

    It isn't really a problem. If someone wanted to block Majestic, they can.

    Yes, I mean that's our problem with Majestic. A shared IP database or some other mechanism to authenticate a Majestic bot could solve this issue. But I think sites protected by CF, Incapsula, and similar cloud services suffer from the same problem with Majestic.

    My point about your whitelist is that the list is arbitrary: what about uptime monitors? Cron jobs? Callbacks and APIs?

    As long as they don't hit the thresholds, they won't be greylisted. And every BitNinja user has their own whitelist, so they can whitelist any number of IPs, even via the API or from the command line, so monitors, cron jobs, callbacks, and APIs don't cause any problem.
