

Looking for database advice.

edoarudo5edoarudo5 Member

Hello,

So one of the servers I manage (Moodle for a certain university) has a database of 88.56 GB. Is this already considered big? Performance seems to be affected... Is there an effective way to optimize the database safely and maybe gain some more space and speed? (I'm using MariaDB.)

Comments

  • ArirangArirang Member

    I don't think it's that big. Check your SQL queries first with EXPLAIN (e.g. `EXPLAIN SELECT ...`); a full table scan results in slow execution.
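    The point above can be demonstrated in a self-contained way. Here's a sketch using SQLite's EXPLAIN QUERY PLAN (from Python's stdlib) as a stand-in for MariaDB's EXPLAIN; the `grades` table and column names are hypothetical, not Moodle's actual schema.

```python
import sqlite3

# Illustration of the "full table scan vs. index" point, using SQLite's
# EXPLAIN QUERY PLAN as a stand-in for MariaDB's EXPLAIN.
# The table and column names are hypothetical, not Moodle's schema.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE grades (userid INTEGER, grade REAL)")
con.executemany("INSERT INTO grades VALUES (?, ?)",
                [(i % 100, i * 0.5) for i in range(1000)])

query = "SELECT grade FROM grades WHERE userid = 42"

# Without an index, the planner must scan every row.
plan_before = con.execute("EXPLAIN QUERY PLAN " + query).fetchone()[3]
print(plan_before)  # e.g. "SCAN grades" -- a full table scan

# With an index on the filtered column, it can seek straight to the rows.
con.execute("CREATE INDEX idx_grades_userid ON grades (userid)")
plan_after = con.execute("EXPLAIN QUERY PLAN " + query).fetchone()[3]
print(plan_after)  # e.g. "SEARCH grades USING INDEX idx_grades_userid (userid=?)"
```

    On MariaDB the equivalent check is `EXPLAIN SELECT ...`, where a full scan shows up as `type: ALL` with no `key` used.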

    Thanked by: edoarudo5, tjn
  • Daniel15Daniel15 Member
    edited June 8

    MySQL should easily handle an 89 GB database, even on a VPS. I've dealt with ~2.5 TB MySQL databases at my day job, albeit on dedicated hardware (and the DBAs recommend starting to split the DB into separate shards once it's around 3 TB)

    Some ideas:

    • Turn on slow query logging. Profile the queries using EXPLAIN (like @Arirang suggested above). If you're having performance issues, one of the most common reasons is that an index is missing, resulting in full table scans
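    A minimal my.cnf sketch for enabling the slow query log (the variable names are real MariaDB/MySQL server variables; the file path and one-second threshold are just example values):

```ini
# Example /etc/mysql/my.cnf fragment -- adjust path and threshold to taste
[mysqld]
slow_query_log                = 1
slow_query_log_file           = /var/log/mysql/slow.log
long_query_time               = 1    # seconds; log anything slower than this
log_queries_not_using_indexes = 1    # optional, can be noisy
```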

    If indexes are insufficient and you still have perf issues:

    • Run MySQL Tuner and see if it has any recommendations (but actually read about the optimizations it mentions, rather than blindly following a tutorial)
    • If you're running other stuff on the database server (for example, a web server), split them onto two separate servers. Reducing context switching will improve performance.

      • Along the same lines, if you're using a VPS for MySQL, ideally use a VPS with dedicated CPU, and get the host to configure the CPU affinity such that your VPS is constantly running on the same host CPU
    • If the app is very read-heavy, you could use replication to have multiple replicas. Replicas are generally read-only, meaning reads can go to replicas but writes must go to the master.

      • Some database client libraries (and thus some apps) have the built-in ability to send reads to a replica but writes to the master, by specifying whether a connection should be "read-only" or "read-write" in code
      • For apps that don't support this, you can use something like ProxySQL to load balance between multiple replicas and ensure writes always go to the master.
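    The client-side read/write split described above can be sketched roughly like this. The "connections" here are stand-in stubs so the example is self-contained; a real app would hold connections from its MySQL/MariaDB driver instead, and the naive SQL-prefix check is only illustrative (ProxySQL does this properly at the proxy layer with query rules).

```python
import itertools

class Connection:
    """Stub standing in for a real database connection."""
    def __init__(self, host):
        self.host = host
    def execute(self, sql):
        # A real connection would run the query; we just report where it went.
        return (self.host, sql)

class ReadWriteRouter:
    """Send writes to the master, round-robin reads across replicas."""
    WRITE_PREFIXES = ("INSERT", "UPDATE", "DELETE", "REPLACE",
                      "ALTER", "CREATE", "DROP")

    def __init__(self, master, replicas):
        self.master = master
        self.replicas = itertools.cycle(replicas)

    def execute(self, sql):
        if sql.lstrip().upper().startswith(self.WRITE_PREFIXES):
            return self.master.execute(sql)
        return next(self.replicas).execute(sql)

router = ReadWriteRouter(
    master=Connection("db-master"),
    replicas=[Connection("db-replica1"), Connection("db-replica2")],
)
print(router.execute("UPDATE users SET name = 'x'"))  # goes to db-master
print(router.execute("SELECT * FROM users"))          # goes to a replica
```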
  • raindog308raindog308 Administrator, Moderator

    @edoarudo5 said: So one of the servers I manage (Moodle for a certain university) has a database of 88.56 GB. Is this already considered big? Performance seems to be affected... Is there an effective way to optimize the database safely and maybe gain some more space and speed? (I'm using MariaDB.)

    It's not DB size as much as concurrent usage. One user on a 1TB database doing mainly reads is not that demanding...1,000 users slamming a single 10MB table with nonstop updates can be glacial.

    Is Moodle read-heavy? If so, @Daniel15's suggestion of adding read-only replicas is great. DBs are also I/O monsters, so it's worth making sure you're running on the fastest possible storage. Definitely worth looking at indexing... the problem with code you don't control is that you're stuck with their queries, but you can still add indexes and the code will benefit.

    Thanked by: edoarudo5, webcraft, tjn


  • edoarudo5edoarudo5 Member
    edited June 10

    Thank you for the advice; it helped a lot. In this use case, Moodle is quite read-heavy. Concurrent (logged-in) users range from a minimum of 250 to about 18,000 at peak traffic. Everything feels better now.

    Thanked by: tjn
  • KassemKassem Member

    @Daniel15 How do you back up a 2.5 TB MySQL database? mysqldump from a replica? How long does that usually take?


  • Daniel15Daniel15 Member
    edited June 11

    @Kassem said:
    @Daniel15 How do you back up a 2.5 TB MySQL database? mysqldump from a replica? How long does that usually take?

    I'm not sure. I'm a developer - backing up the database is the DBA's problem :smiley: The DBs my team usually deals with are much smaller (most are less than 50 GB, and some tables I maintain are only a few MB).

    Did a bit of digging and it looks like an older backup system (that doesn't appear to be used any more) backed up deltas using a custom system with binlogs. Given an initial full backup, you can get the GTID for the last committed transaction that was part of the backup (I guess it's in the backup itself), then parse all binlogs after that GTID.
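    As an illustrative sketch of what binlog-based incremental backup can look like on MySQL (hostnames and GTID values invented; MariaDB's GTID format and tooling differ, so check the docs for your server):

```sh
# 1. Take a full backup and record the GTID set it covers. Percona
#    XtraBackup, for instance, writes this to xtrabackup_binlog_info.
FULL_BACKUP_GTIDS="3e11fa47-71ca-11e1-9e33-c80aa9429562:1-5000"

# 2. Stream every transaction committed after that GTID set from the binlogs.
mysqlbinlog --read-from-remote-server --host=db-master \
    --exclude-gtids="$FULL_BACKUP_GTIDS" \
    mysql-bin.000042 > incremental.sql
```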

    Seems like they might just use mysqldump for most databases now, but I'm not sure how long it takes.

  • KassemKassem Member

    Hopefully you weren't just dumping images/media as blobs in the db. :)

    For a DB that huge, continuous incremental backups make sense. Otherwise, maybe shut down a replica server, copy the files, and start it again.


  • Daniel15Daniel15 Member
    edited June 11

    @Kassem said: Hopefully you weren't just dumping images/media as blobs in the db.

    Not blobs :) In my case it's things like code quality metrics, e.g. imagine periodically running various linters (such as ESLint) and other metrics tools (such as sloccount, Python Lizard, etc.) across millions of source code files and logging the results to a database to keep track of code quality over time. All that data adds up!

    All the "huge" data goes into HDFS and is queried via Presto, which I think can be easily snapshotted and backed up, but stuff that has to be queried from a UI is usually in MySQL.

  • eva2000eva2000 Member

    @Kassem said: How do you back up a 2.5 TB MySQL database? mysqldump from a replica? How long does that usually take?

    Many ways to do it, with single-threaded or multi-threaded backup tools like mysqldump, mydumper, a dedicated binlog server/proxy, MariaDB Backup (mariabackup), and Percona XtraBackup. With the right software tools and hardware, I can usually back up large MySQL databases at 200-600 MB/s and even restore at 100-300 MB/s.
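    For reference, typical invocations of two of the tools mentioned look roughly like this (hostnames and paths invented; flag details vary by version, so check each tool's docs):

```sh
# Multi-threaded logical dump with mydumper, taken from a replica:
mydumper --host=db-replica1 --threads=8 --compress --outputdir=/backups/dump

# Physical (file-level) hot backup with Percona XtraBackup:
xtrabackup --backup --target-dir=/backups/full

# MariaDB's equivalent physical backup tool:
mariabackup --backup --target-dir=/backups/full
```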

  • edoarudo5edoarudo5 Member

    @Daniel15 said:

    @edoarudo5 said: Everything feels better now.

    What did you end up doing to improve it?

    I ran MySQL Tuner and followed its recommendations. Once I did, performance improved, so I didn't need to follow your other suggestions. Thanks! I've noted all your advice and will split the web server and the DB once it reaches 3 TB.

  • aliletalilet Member

    @Kassem said:
    Hopefully you weren't just dumping images/media as blobs in the db. :)

    SharePoint has entered the chat.

  • mywebhostingmywebhosting Member

    @edoarudo5 said:
    Hello,

    So one of the servers I manage (Moodle for a certain university) has a database of 88.56 GB. Is this already considered big? Performance seems to be affected... Is there an effective way to optimize the database safely and maybe gain some more space and speed? (I'm using MariaDB.)

    Are any slow queries being generated on the server? Another major factor in database performance is the version of MySQL you're currently running. Staying up to date with the latest version of your database can have a significant impact on overall performance.

  • edoarudo5edoarudo5 Member

    @mywebhosting said:

    @edoarudo5 said:
    Hello,

    So one of the servers I manage (Moodle for a certain university) has a database of 88.56 GB. Is this already considered big? Performance seems to be affected... Is there an effective way to optimize the database safely and maybe gain some more space and speed? (I'm using MariaDB.)

    Are any slow queries being generated on the server? Another major factor in database performance is the version of MySQL you're currently running. Staying up to date with the latest version of your database can have a significant impact on overall performance.

    There were; more than one, actually, but I already forgot what they were... MariaDB is always kept up to date.
