
How much RAM do I need on a Windows FTP server for 330 connections at once?

I'm going to run a script on around 330 servers, collecting around 8 MB of data from each and sending it via FTP to my FTP server.
The FTP server is a Windows Server 2016 machine running FileZilla Server.

The script will start on all 330 servers at once, so it will open 330 connections to my FTP server and start transferring data.

How much RAM do you think I need on this server? It has a 250 Mbit connection. And are there other settings in FileZilla Server that I need to change before I start?

Comments

  • I have an Ubuntu server that gets about 100-120 SFTP connections per minute and rarely goes over 350 MB of used memory. I would benchmark with 33 connections, then again at 66, to see how it scales; that should give you a good idea.
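    A rough sketch of such a benchmark, assuming Python's standard-library ftplib and a client machine to generate the load from; the host, credentials and connection counts are placeholders, not anything from this thread. Run it while watching RAM/CPU on the FileZilla box.

        import io
        import time
        from concurrent.futures import ThreadPoolExecutor
        from ftplib import FTP

        HOST = "ftp.example.com"             # placeholder FTP server
        USER, PASSWORD = "bench", "bench"    # placeholder test account
        PAYLOAD = b"x" * (8 * 1024 * 1024)   # ~8 MB, matching the expected per-server upload

        def upload(i):
            # one connection + one ~8 MB upload, like each remote server would do
            ftp = FTP(HOST, timeout=60)
            ftp.login(USER, PASSWORD)
            ftp.storbinary(f"STOR bench_{i}.bin", io.BytesIO(PAYLOAD))
            ftp.quit()

        for workers in (33, 66):             # benchmark at 33, then 66 parallel connections
            start = time.time()
            with ThreadPoolExecutor(max_workers=workers) as pool:
                list(pool.map(upload, range(workers)))
            print(f"{workers} connections took {time.time() - start:.1f}s")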

    Thanked by: myhken, yomero, Clouvider
  • What do you need something like that for?

  • qtwrk Member
    edited July 2017

    @fourzerofour said:
    I have an Ubuntu server that gets about 100-120 SFTP connections per minute and rarely goes over 350 MB of used memory. I would benchmark with 33 connections, then again at 66, to see how it scales; that should give you a good idea.

    I think in that case, CPU resources are more crucial than RAM.

    I once opened about 10 SFTP connections to upload files, and 4 cores were at 100%.

    But those were big files, over 2 GB each; that could be the reason for the high CPU usage, but I don't know...

  • You shouldn't have a problem with that little data. FileZilla Server will handle it on its own.

    Thanked by: myhken
  • Falzo Member

    If you use a bash script running via cron for this, you might want to add a small randomized sleep to it to spread the connections across a few seconds.
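    A minimal illustration of that idea, sketched in Python rather than bash since the same randomized-sleep trick applies either way; the FTP host, credentials and file path are made-up placeholders:

        import random
        import socket
        import time
        from ftplib import FTP

        # sleep a random 0-30 s so 330 servers don't all connect in the same second
        time.sleep(random.uniform(0, 30))

        ftp = FTP("ftp.example.com", timeout=120)   # placeholder FTP host
        ftp.login("collector", "secret")            # placeholder credentials
        with open(r"C:\temp\report.zip", "rb") as fh:
            ftp.storbinary(f"STOR {socket.gethostname()}_report.zip", fh)
        ftp.quit()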

  • myhken Member

    @Falzo said:
    If you use a bash script running via cron for this, you might want to add a small randomized sleep to it to spread the connections across a few seconds.

    In this case, it's related to my day job. We manage around 330 Windows servers and need to get some information out of each server that we can put into a searchable database. We are using Ncentral from N-Able.com (SolarWinds) and will push the script out from Ncentral to all customers at once. This is a one-time event; we just need to get this information fast.

  • Clouvider Member, Patron Provider

    300 writes/reads at the same time - I'd probably worry more about IOPS / IO in general than about memory, especially if you're using spinning drives.

    Thanked by: myhken, mehargags
  • GamerTech24 Member
    edited July 2017

    Yeah, I know businesses that have served 500 or more SMB connections at once from old 2003-era Dell PowerEdge servers with around 8 GB of DDR2 RAM, with modern Windows 10/8 clients logging in and storing their %APPDATA% and entire user profile on that SMB share. In the one instance I know of, they eventually switched to a Synology DiskStation NAS, but that's beside the point.

    I don't think RAM would be a big deal; 4 GB of total system memory would probably cut it.

  • @myhken 330 SFTP connections will eat your CPU on both ends. Each server uses the CPU to encrypt the data transfers, so a faster CPU will give you better speed. On the other hand, the machine that handles connections from 330 servers will also use CPU for each one of those connections.

    Furthermore, connecting to one server is not necessarily a single connection. You can connect to one server and have, for example, 10 simultaneous transfers.

    Those 250 Mbps also have to be considered, depending on how much data you want to fetch.
    If it's only a few KB, never mind, but note that 250 Mbps divided by 330 servers is roughly 750 Kbps per server (how much data can you transfer with that? Roughly 90 KB/s).

    IO is also important; an SSD is highly recommended on the main server.

    I would suggest, rather than SFTP, exposing some small JSON API over HTTPS.
    It's much cheaper to loop over and fetch 330 JSON files (if time is not critical) than to handle 330 simultaneous connections.
    If you have several cores, you can use GNU Parallel (not sure about Windows) or something similar to parallelize the job; see the sketch below.
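    A rough sketch of that polling approach, written in Python instead of GNU Parallel; the server list, URL path and output file are made up for illustration only, and each server is assumed to expose its report as JSON over HTTPS:

        import json
        import urllib.request
        from concurrent.futures import ThreadPoolExecutor

        # hypothetical list of the 330 servers, each assumed to serve /report.json
        servers = ["server%03d.example.local" % i for i in range(1, 331)]

        def fetch(host):
            with urllib.request.urlopen(f"https://{host}/report.json", timeout=30) as resp:
                return host, json.load(resp)

        # a few workers parallelize the job while keeping load modest; max_workers=1 is a plain loop
        with ThreadPoolExecutor(max_workers=8) as pool:
            results = dict(pool.map(fetch, servers))

        with open("reports.json", "w") as out:
            json.dump(results, out)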

    You can probably get by with 1 GB of RAM, but you might need 4-8 CPU cores and a 1 Gbps port.

  • myhken Member

    Just an update: it all went very well. I started the script and had monitoring of CPU, RAM, IO, bandwidth and so on running, but there was nothing to report on any of it. The script did what it was meant to do; each server uploaded three files with the info, with no heavy usage of anything. The total data size, which we had calculated could be up to 2.4 GB, was only around 150 MB, and the total file count was just under 1,000.

    Thank you for all your input.

    Thanked by: Falzo, daxterfellowes