[Linux] ~4500 threads limit when running script from crontab, no limits when running from terminal

Hey guys

I'm seeing something strange on my VPS, a Debian 10 KVM guest on my own Proxmox node.
I run a simple Python script on it that does some network operations in many threads (50 processes, 200 threads per process, 10k threads total).

When I run this script from a terminal after logging into the server via SSH, everything works perfectly fine.
When I reboot the server and run the same script via @reboot in crontab, the server caps the threads at around ~4500.

cat /proc/sys/kernel/threads-max
117321

/etc/security/limits.conf:
root soft nproc 655350
root hard nproc 655350
root soft nofile 655350
root hard nofile 655350

root soft stack 256
root hard stack 256
root soft sigpending 120000
root hard sigpending 120000
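One way to see which limits a process actually inherits is to dump them from inside Python; running this both from an SSH session and from the cron job makes any difference visible (a minimal sketch, Linux-only):

```python
# Print the resource limits the current process actually sees.
# Run it from a terminal and from cron to compare the two environments.
import resource

def show_limits():
    limits = {}
    for name in ("RLIMIT_NPROC", "RLIMIT_NOFILE", "RLIMIT_STACK", "RLIMIT_SIGPENDING"):
        soft, hard = resource.getrlimit(getattr(resource, name))
        limits[name] = (soft, hard)
    with open("/proc/sys/kernel/threads-max") as f:
        limits["threads-max"] = int(f.read())
    return limits

for key, value in show_limits().items():
    print(key, value)
```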

Any hints on where to look for the cause of this?

Comments

  • jackb Member, Host Rep

    I think you're going about this the wrong way. Why do you need 10,000 threads?

    Thanked by: vimalware
  • dodheimsgard said: simple python script

    dodheimsgard said: 10k threads total.

    Something wrong here.

  • Why do you need 10,000 threads?

    Because I'm building the second Google ;p
    It's not about how many threads I need; as long as I start everything via a terminal, or via a script that logs into the server over SSH, everything works fine.
    It's about why it works that way, and why threads are throttled when I start everything via crontab.

  • jackb Member, Host Rep
    edited October 2019

    @dodheimsgard said:

    Why do you need 10,000 threads?

    Because I'm building the second Google ;p
    It's not about how many threads I need; as long as I start everything via a terminal, or via a script that logs into the server over SSH, everything works fine.
    It's about why it works that way, and why threads are throttled when I start everything via crontab.

    The point I'm making is that it is pointless. If what you're doing is computationally expensive, you're making things slower by using more threads than your CPU has. If what you're doing is not computationally expensive, create new threads as required (e.g. when serving web requests) rather than creating 10,000 at once.

    There is no need to find the answer to a question that doesn't need to be answered.

    Thanked by: vimalware, Daniel15
  • The point I'm making is it is pointless.

    You think it's pointless without knowing what my code does... really? That sounds more like a joke.
    I need to make as many requests as I can, and every request has to wait 60-120 seconds before getting what I need; no way around that.
    There's no other way than creating a shitload of requests.

    It's not my problem that you can't imagine a situation where 10k threads need to sit and wait for something.

    There is no need to find the answer to a question that doesn't need to be answered.

    How about staying on topic and not posting answers that don't add any value to the thread?

  • Try "ulimit -T 10000" or whatever. You might have to sudo that. Other posters are right that if you are using that many threads, things are likely to bog down, and you should consider other approaches, like libevent or whatever.

  • Dude, with 10K threads and 1MB stack size per thread, you are sitting at a minimum of 10GB of RAM, just for thread stacks. A much more sane approach would be to make all the requests async...
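The async approach suggested above can be sketched with asyncio: thousands of concurrent waits share one OS thread and cost a small coroutine object each, instead of a full thread stack (illustrative sketch with shortened wait times, not the OP's code):

```python
import asyncio

async def fake_request(i):
    # stand-in for a request that waits 60-120 s; shortened for the demo
    await asyncio.sleep(0.01)
    return i

async def main():
    # 10,000 concurrent waits, all on a single OS thread:
    # each pending request is a coroutine object, not a 256 KB+ stack
    results = await asyncio.gather(*(fake_request(i) for i in range(10_000)))
    return len(results)

count = asyncio.run(main())
print(count)  # 10000
```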

  • Yeah, that happens on StackOverflow a lot: you ask a question and folks come in with a bunch of sidebars. So, basically: your script runs fine in the terminal but fails under crontab as the same user... hmm, I can't think of a specific reason, assuming there aren't any SELinux or cgroup profiles preventing crontab children from spawning that many threads. The system log or dmesg should show why it failed. What I would try is starting the script via systemd instead.

    Thanked by: dodheimsgard
  • One thing which could cause a difference is the delay you get while writing to stdout (AFAIK buffering differs when output goes to a cron job rather than a terminal).
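That buffering difference is easy to check from Python: stdout is line-buffered when attached to a terminal but block-buffered when redirected to a pipe or file, as under cron, so output can lag. A small sketch:

```python
import sys

# Under cron, stdout is a pipe/file, so isatty() is False and output is
# block-buffered; in an interactive SSH session it is line-buffered.
interactive = sys.stdout.isatty()
print("stdout is a tty:", interactive)

# Forcing a flush (or running python with -u) removes the delay either way.
print("progress update", flush=True)
```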

  • dodheimsgard Member
    edited October 2019

    Thanks guys for the input; my special thanks go to @joshb.
    You were right about cgroups :) "Cgroup fork rejected" found in syslog. I was looking for some logs before posting here, but I didn't know what to look for ;p
    Solution:
    After modifying /etc/systemd/system.conf with:
    DefaultTasksMax=30000
    the problem is solved; the script runs from crontab without problems.
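For reference, that setting lives in the [Manager] section of /etc/systemd/system.conf; systemd's pids controller counts each thread as a task, which is why the script hit the cap. A per-unit TasksMax= override (e.g. in a drop-in for the cron service) would be a narrower alternative to raising the global default:

```
# /etc/systemd/system.conf (excerpt)
[Manager]
DefaultTasksMax=30000
```

The new value takes effect after a reboot or `systemctl daemon-reexec`.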

    @nulldev Btw, I'm at a 256 KB stack size as mentioned in the first post, not 1 MB. At 10k threads I'm at ~3 GB RAM usage total, so perfectly fine for the given task. Anyway, with 256 GB of RAM on the master node it's not important at all. You're right that async could be better, but for a reason unknown to me, async gave worse performance than the standard threading and multiprocessing libraries.

    @jar
    Please close this thread.

  • Thread summary: building the second Google with cron and a simple Python script.

    Thanked by: nulldev, gazmull
  • I remember hitting some issues with running out of network ports, and also with select()'s 1024 file descriptor limit, when I did something like this a while back. It's not just having too many threads. Switching from select to epoll wasn't a big deal, but I don't remember what the solution for the ports was.
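The 1024-descriptor ceiling mentioned here comes from select()'s fixed FD_SETSIZE; in Python, the selectors module sidesteps it automatically by choosing the best mechanism available (epoll on Linux). A minimal sketch:

```python
import selectors
import socket

# DefaultSelector picks epoll on Linux, avoiding select()'s hard-coded
# 1024 file-descriptor ceiling (FD_SETSIZE).
sel = selectors.DefaultSelector()

a, b = socket.socketpair()
sel.register(a, selectors.EVENT_READ)

b.send(b"ping")                        # makes `a` readable
events = sel.select(timeout=1)
was_ready = any(key.fileobj is a for key, _ in events)
print(was_ready)  # True

sel.unregister(a)
for s in (a, b):
    s.close()
sel.close()
```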

  • jar Patron Provider, Top Host, Veteran

    dodheimsgard said: Please close this thread.

    Ain't me.

  • This is a nice post
