
Need shared hosting on the West Coast with unlimited inode/file limit

CoastHosting Member, Host Rep

Hey guys, I need either a shared web hosting platform or some pre-set-up option on the West Coast, preferably with an unlimited inode/file limit and add-on domains.


Comments

  • donli Member

    What limit of the unlimited Inodes do you need?

  • omelas Member

    infinite file storage?

  • CoastHosting Member, Host Rep

    @omelas said:
    infinite file storage?

    I've been asked to develop a dating site, and users are uploading photos, so there will be a shit load of files... I just don't want to run out, so no limit would be nice haha

  • deank Member, Troll
    edited April 2018

    Shared hosting, unlimited storage, and a dating site.

    That's one good recipe for disaster.

  • donli Member

    @davenz said:

    I've been asked to develop a dating site and users are uploading photos so there will be shit load of files.. just don't want to run out so... a no limit would be nice haha

    Is this a serious project with a budget - like $100/month at least?

  • YokedEgg Member
    edited April 2018

    @davenz said:
    [...] just don't want to run out so... a no limit would be nice haha

    Best to go with a VPS. Use a panel like centminmod.

  • willie Member
    edited April 2018

    This sounds like you want an object store rather than file hosting. DO Spaces maybe? @BunnyCDN?

  • Your filesystem is going to have some limit baked in; that's just the nature of the beast. You can get info about the inodes available on ext2 / ext3 / ext4 filesystems using the tune2fs utility. For example:

    sudo /sbin/tune2fs -l /dev/sda2 | grep -i inode # replace "/dev/sda2" with correct filesystem path
    

    Also, "ls -i" prints the inode number of each specified file or directory entry (it doesn't count inodes in use; "df -i" reports per-filesystem inode usage).

    You'll want to figure out anticipated usage, add a reasonable margin of error, and confirm with your prospective provider that their system can support what you need. (And I would be especially skeptical of any provider offering "unlimited" inodes...)
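    To see how close a filesystem is to its inode ceiling, the quickest check is `df -i` (a generic sketch; the mount points on your box will differ):

    ```shell
    # Report inode totals, usage, and headroom for every mounted filesystem
    df -i
    # To watch a single mount point, e.g. the root filesystem:
    df -i /
    ```

    Unlike tune2fs, this needs no root and works regardless of filesystem type, so it's a handy first sanity check before arguing with a provider about limits.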

  • Harambe Member, Host Rep

    If you have a real budget ($60+/mo), skip the shared hosting and talk to @Francisco about a managed cPanel VPS in Vegas.

    Thanked by 1Francisco
  • AnthonySmith Member, Patron Provider

    Technically, what you want does not exist. Maybe it would be better if you told us what you want to pay; then we can tell you what to expect.

  • Francisco Top Host, Host Rep, Veteran

    No one is going to allow this due to the headaches it causes.

    If the host ever has to migrate you, back you up, or things like that, it'll take a heck of a long time.

    Get a good storage VPS and you'll be fine for millions of files.

    Francisco

  • WHT Member

    If you get a shared account with unlimited inodes, you will be asking for an alternative very soon. Every disk has a limited number of inodes, and the more inodes used, the slower the responses.

  • CoastHosting Member, Host Rep

    Thanks for the advice. Let me go back to the planning table; I'll talk to my developer and see what the realistic numbers are.

  • msg7086 Member
    edited April 2018

    Only if you pay an unlimited monthly fee.

    If your needs are close to 100k files, you should stop thinking about shared hosting.

    A decent VPS or a dedicated server would be a better bet.

  • willie Member
    edited April 2018

    Yes, it's best if you post your actual requirements, including accurate numbers. If you need to manage millions of small pieces of data, the software you want is a "database", not a file system. If it's media files (photos etc.), then a file system or object store can be OK. Again, though, we need numbers.

    100k files is OK for an SSD VPS. After doing a Debian install and rebooting a VPS to clear its cache, I like to run "du -a /usr | wc -l", which shows around 60k files and takes a few seconds on a decent SSD VPS. It's much slower on HDD.
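    The benchmark above can be reproduced like this (a sketch; counting entries under /usr approximates the file count, and dropping the page cache first requires root):

    ```shell
    # Optional, root only: drop the page cache so the run actually hits the disk
    #   echo 3 | sudo tee /proc/sys/vm/drop_caches
    # Count every file and directory under /usr and time the traversal
    time du -a /usr | wc -l
    ```

    The elapsed time is dominated by metadata (inode) lookups rather than data reads, which is exactly why it separates SSD from HDD boxes so clearly.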

  • Zerpy Member

    @msg7086 said:
    If your needs are close to 100k files, you should stop thinking about shared hosting.

    So most websites these days should stop thinking about shared hosting?
    I think people sometimes assume "shared hosting" is generally bad - maybe it's the low price point people assume on this forum - but 100k files is easy to reach, to be honest. About 30% of my customers exceed 100k files, and about 25% exceed 150k.

    Most shared hosting servers these days run SSDs or NVMe anyway, so migrating 500k files isn't exactly a lot and can easily be managed.

    Yesterday I had to migrate an account that was 22.2 GiB in size and 897,711 files; it took 5.5 minutes to move the files.

    Now, that could have been lowered by doing a pre-sync of all the files, but for a full account transfer with 900k files, that's fine.

    Sure - inode limits make sense, and it's quite possible for the OP to limit the number of files stored on the hosting account, e.g. by utilizing Amazon S3 or another object storage solution. However, saying that shared hosting isn't a solution for anyone near 100k files is just living 10 years behind.

    Take a system like Prestashop: when you upload a product image, it will store 6-7 images on the system.

    If you use its file cache, you'll likewise store modules × (number of products + categories) files - so with 30 modules, 1k categories, and 10k products, you'll store roughly 330k cache files plus 70k product images.

    That's 400k files for a shop with 10k products, and 30 modules is rather easy to hit in Prestashop.
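    The estimate above checks out if you run the numbers (shell arithmetic; the 7-images figure stands in for Prestashop's typical set of resized copies per upload):

    ```shell
    # Back-of-the-envelope file count for the Prestashop example above
    modules=30; categories=1000; products=10000
    images_per_upload=7   # assumed: ~6-7 resized copies kept per product image
    cache_files=$(( modules * (categories + products) ))   # 330000
    product_images=$(( products * images_per_upload ))     # 70000
    echo $(( cache_files + product_images ))               # prints 400000
    ```

    Plugging in your own module and catalog counts is a quick way to sanity-check an inode quota before signing up.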

  • YokedEgg Member

    @Zerpy said:
    So most websites these days should stop thinking about shared hosting? [...]

    Just because you can doesn't mean you should.

  • Zerpy Member

    @YokedEgg said:
    Just because you can doesn't mean you should.

    Shouldn't do what? Have 100k files?

    There are many things people shouldn't do, but most people do anyway - like buying $15/yr plans, expecting everything, and if not, going on LET and completely raging about it because it doesn't live up to the unrealistic "standards" people have.

    If you pay more than just a dime, then 500k files on a shared plan should be completely fine.
    The hosting provider should be competent enough to handle migrations with ease, or restore accounts in an optimal fashion in case of a DR.

    Using a VPS wouldn't really solve the issue anyway - a VPS doesn't magically solve inode performance "issues". You can back up the whole VM, but that isn't exactly an optimal way to restore a partial set of files, if that's what you want.

    Using a VPS also doesn't guarantee more IOPS anyway, so we're recommending something that most likely won't solve the problem? Wauw.

  • @Zerpy said:
    Shouldn't do what? Have 100k files? [...]

    Just because you can use shared hosting doesn't mean you should.

    Several reasons: less isolation, usually fewer resources to share, no control over the web stack, etc.

  • msg7086 Member

    @Zerpy said:
    So most websites these days should stop thinking about shared hosting?

    If a website grows that big, people should start thinking about a more dedicated environment so they can have more control over (as said above) resources, web stacks, security, customization, etc., you name it.

    Suppose the provider has 100 customers on a shared hosting server, each with 100k files; that's 10 million files in total on a couple of SSDs, on a single file system, say an ext4 volume. Does that still work? Probably yes. Is it an optimal solution for your website? Not in my opinion.

    Yes, there are companies like DreamHost that allocate 50 million or even 100 million inodes on a single file system. Then you really have to rely on how good the server management is, and you pay a premium for all of that.

    By the way, I meant to say "start to stop thinking", if only my English skills were serving me.

  • I can offer you something like that in London, but for the West Coast I suggest you go with a VPS...

  • WHT Member
    edited April 2018

    I have seen an offer on the Digital Point forum; I think the provider's name was SmileHosting. They were offering unlimited inodes. Do some research before you sign up.

    Edit: it's SmileServ: https://forums.digitalpoint.com/threads/smileserve-insanely-cheap-unlimited-shared-alpha-reseller-hosting-vps-dedicated-servers.2794155/

  • Zerpy Member

    @msg7086 said:
    If a website grows that big, people should start thinking about a more dedicated environment so they could have more control on (like above said) resources, web stacks, security, customization, etc., you name them.

    A shared provider can have plenty of resources to handle a good amount of traffic, and you often don't have to customize your web stack; most performance gains come from within the application itself.

    From a security perspective, the application code generally tends to be the weak point, so you're assuming security while leaving the biggest security hole open by running the (most likely vulnerable) application in the first place.

    The majority of shared providers today use systems such as CloudLinux that offer a caged environment, pretty much preventing cross-account defacing.

    I've yet to see a website that couldn't be optimized in some way and absolutely required an expensive VPS to be able to handle its traffic (low-cost VPSes tend to be fairly oversold anyway).

    @msg7086 said:
    Suppose the provider has 100 customers on a shared hosting server, each having 100k files, now that's 10mln files in total on couple SSDs, on a single file system, say an ext4 volume. Does that still work? Probably yes. Is that an optimal solution for your website? Not in my opinion.

    If we're being realistic, that's not actually the case - most customers tend to have a lot less, and even 10 million inodes on a couple of SSDs isn't exactly an issue.

    @msg7086 said:
    Yes, there are companies like dreamhost that allocates 50mln or even 100mln inodes on a single file system. Then you really have to rely on how superior the server management is, and you have to pay the premium for all these.

    That doesn't exactly speak to quality - but a decent host should have system engineers, or systems, smart enough that in a disaster recovery they would restore the accounts with the fewest files / least disk space first, to get the highest percentage of customers back online as soon as possible, and handle the "heavy" accounts later.

    And again, if you care to explain: how does using a VM mitigate the issue of inode count? If you have to restore or migrate your site, you might still have to restore 100k+ files if you want to do a DR - unless you restore your whole VM snapshot, but maybe that's not ideal.

    A VM doesn't magically solve inodes.

  • @Zerpy said:
    Majority of the shared providers today use systems such as CloudLinux or similar that offers a caged environment, pretty much preventing cross-account defacing.

    My knowledge of shared hosting is still stuck at a shared Apache running shared PHP instances out of hundreds of users' home directories. Using a caged environment essentially makes it a "VM" (a container, to be precise), and I'm OK with that.

    @Zerpy said:
    How does using a VM mitigate the issue of inode count? [...] A VM doesn't magically solve inodes.

    You can choose a different file system according to your needs. For example, back in the days before ext4, dealing with huge files (Linux ISOs, for example) could be a pain due to slow allocation and deallocation. If I have special needs, I have the freedom to meet them.

    On inode count: for millions of files, the ReiserFS family comes to mind, and you are not stuck with the vendor-provided ext4. Even with ext4, your directories and files sit closer together if your storage is properly provisioned, and the file system has fewer files to manage, so it's safer and faster. When your system crashes, you don't have to risk fsck-ing a huge file system holding 100 other users' files. And even if your file system is broken, you can boot into a rescue CD and image the storage for further recovery, which I believe is not possible if the same thing happens on a shared hosting server.

    Again, we are talking about edge cases. People should keep proper backups and should never end up in this DR situation.

    A VM doesn't magically solve problems, but it gives you the freedom to use those technologies to solve them.
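    One concrete example of that freedom: on your own VPS you control the inode budget at mkfs time. A sketch, assuming ext4 and a hypothetical spare volume /dev/vdb1 (double-check the device name, as formatting is destructive):

    ```shell
    # ext4 defaults to roughly one inode per 16 KiB of space (-i 16384).
    # For a many-small-files workload, halving bytes-per-inode roughly
    # doubles the inode count at the cost of some usable space.
    sudo mkfs.ext4 -i 8192 /dev/vdb1   # /dev/vdb1 is hypothetical -- destructive!
    sudo tune2fs -l /dev/vdb1 | grep -i 'inode count'
    ```

    Shared hosting never exposes this knob; on a VPS it's a one-line decision made when the storage volume is formatted.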

  • Zerpy Member

    @msg7086 said:

    My knowledge about shared hosting still stays at shared apache running shared PHP instances from hundreds of users' home directory. Using a caged environment essentially make it a "VM" (container if precisely) and I'm OK with that.

    You'll have the httpd (or nginx) process running as one user, and in any decent hosting environment the individual PHP processes run under suEXEC, so each account runs its own set of PHP processes.

    You still have the shared httpd/nginx processes that route requests to the PHP processes and take care of delivering static files - but to compromise those, you'd have to find a pretty big exploit in the web server to be able to read files across accounts. That's very unlikely, and there are protections such as symlink protection in place in e.g. CageFS.

    @msg7086 said:
    When your system crashes, you don't have to take the risk to fsck the huge file system with other 100 users' files.

    The question is more whether, as a hosting provider, you'd even have fsck enabled - in the time an fsck would take, you might as well just restore backups; that recovery procedure would probably be faster anyway. In a VM scenario you might not be sharing a file system with 100 other users' files, but you can be sharing a host with 100 other VMs. If that host crashed, many VMs might have to run fsck at once, and the resulting disk I/O could leave the storage array so utilized that your fsck would end up taking the same amount of time anyway.

    Sure, there are so many factors involved that performance can take a hit in any environment, but we have to be realistic: most VM providers these days provision a lot of VMs per hypervisor, because people don't want to pay much for their VMs. So, for the most part, there's no guarantee you'll have a good amount of resources allocated, or not "stolen" by abusers - and that goes for any "shared" environment, whether shared hosting or containers/VMs where resources are not guaranteed in any way.

    @msg7086 said:
    Even if your file system is broken, you can boot into a rescue cd, and image the storage for further recovery -- which I believe is not possible if same thing happens on the shared hosting server.

    True - but ideally any customer should have backups anyway right? :P

    @msg7086 said:
    A VM doesn't magically solve problems, but it provides the freedom for you to use those technologies to solve problems.

    And at the same time it can bring a lot of hassle and easily introduce such problems if people don't know what they're doing.

    Sadly, I've seen plenty of people running their own VMs who eventually required external help because they did something silly that could easily have been avoided with a bit more knowledge of systems management.

    We can always find "bad" things in every type of environment; there will always be downsides to going one way or the other. However, I do feel that people tend to say "shared hosting is shit" just because 80% of the providers might be shit for a particular customer - that doesn't mean it sucks for every provider, or every customer for that matter.

    There are plenty of websites that don't need more than shared hosting, even for handling high traffic (600k+ pageviews per day, for example). Sure, they would also run fine on a VM or a dedicated server - but if a shared hosting provider does the job, and there's only a minor risk of the above scenarios, then why not use the shared provider?

  • Look at DigitalOcean (or any VPS) plus Spaces (an S3 alternative) as an option if your project is serious.

  • raindog308 Administrator, Veteran

    deank said: Shared hosting, unlimited storage, and a dating site.

    A dating site hosted on shared storage...

    LowEndMatch.com?

    Lowendr.com?

    AshleyMadisonRejects.com?

  • deank Member, Troll

    No, WSS.com

  • donli Member
    edited April 2018

    @raindog308 said:

    deank said: Shared hosting, unlimited storage, and a dating site.

    A dating site hosted on shared storage...

    LowEndMatch.com?

    Lowendmatch.com is perfect and unregistered even.

    Sadly someone already registered loweredexpectations.com

  • @Zerpy said:
    There's plenty of websites that doesn't need more than shared hosting [...] why not use the shared provider.

    This is more of a "do you trust yourself / your friend, or some random shared hosting provider" problem. For 600k+ pv/day I'd expect some professionals around to help with maintenance.

    It's true that there are providers who can handle this very well, but I'd still choose someone to customize the environment instead of using general-purpose shared hosting. Do they give you Redis access? Do they install Elasticsearch for you? Do they change MySQL parameters for you just because you want 2-character search instead of the minimum 3 characters? Once I had to ask BuyVM to change group_concat_max_len to a bigger value because I relied on it in our programs - but I finally moved to a VPS for better flexibility.

    Also, I have a story to share. I once (experimentally) set up a small API-like site that did barely 100 million requests/day on BuyVM's OpenVZ plan. On its first day running, I crashed the host server 3 to 5 times and woke their staff up at midnight (sorry, BuyVM). I had no choice but to stop the service immediately. It now runs on BuyVM's $7 Slice, with a dedicated kernel, and handles one to two hundred million requests per day just fine. Lesson learned: containers and shared environments cannot always handle as much load as individual kernels.
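    For reference, the group_concat_max_len change mentioned above is a one-liner once you run your own server. A config sketch; the 1000000 value and the my.cnf path are illustrative, and SET GLOBAL requires sufficient privileges:

    ```shell
    # Per-session override via the mysql client (no restart needed)
    mysql -e "SET SESSION group_concat_max_len = 1000000"
    # To make it permanent, add under the [mysqld] section of your my.cnf
    # (commonly /etc/mysql/my.cnf):
    #   group_concat_max_len = 1000000
    # or apply it live for all new sessions:
    mysql -e "SET GLOBAL group_concat_max_len = 1000000"
    ```

    On shared hosting this kind of tweak means a support ticket; on your own VPS it's a config line.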
