mySQL from RAM Drive ? is it possible ?

Ok, this is a continuation thread of https://www.lowendtalk.com/discussion/131929/what-server-specification-should-i-get

@wss suggested that I rewrite the code, and I did. Now, instead of 100% CPU on 8 cores and 16 GB of RAM, I managed to get it down to only 10% CPU and 3 GB of RAM.

Now I have another bottleneck.

MySQL has a constant read/write load of 30 MB/s, which is bad. The queries are mostly UPDATEs, about 100 per second. The table has about 8,000 rows, which I don't think is that big.

However, once I emptied the table (it now contains about 1,000 rows), the read/write speed dropped significantly, to about 3 MB/s, which is totally fine. Since the bottleneck is disk I/O speed, is it possible to run MySQL entirely in RAM? I have about 5 GB of unused RAM.


Comments

  • vmhausvmhaus Member, Top Host, Host Rep

    Have you considered NVMe? We would be out of your price range judging by that thread but I'm sure others will go that low.

  • @vmhaus said:
    Have you considered NVMe? We would be out of your price range judging by that thread but I'm sure others will go that low.

    I am trying to use what I have at the moment. I have 16 GB of RAM and only 3 GB used. Is there a way to make MySQL use more RAM and less disk I/O?

  • @yokowasis said:

    @vmhaus said:
    Have you considered NVMe? We would be out of your price range judging by that thread but I'm sure others will go that low.

    I am trying to use what I have at the moment. I have 16 GB of RAM and only 3 GB used. Is there a way to make MySQL use more RAM and less disk I/O?

    Are you using InnoDB?
    What is innodb_buffer_pool_size set to in your my.cnf file?

    Try increasing it.
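For reference, this is the my.cnf knob being discussed; a hedged sketch, with a purely illustrative value (size it to your own free RAM):

```ini
[mysqld]
# InnoDB caches data and index pages in the buffer pool. The historical
# default is tiny (128M, or even less on old builds), which forces
# constant disk I/O. With ~5 GB of RAM to spare, a couple of gigabytes
# is a plausible starting point; this figure is only an illustration.
innodb_buffer_pool_size = 2G
```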

  • kasslekassle Member
    edited December 2017

    Sure, just mount a tmpfs and relocate the MySQL database there.

    But make sure you have replication and a plan to restore in case the server restarts.

    Too much hassle, though.
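A rough sketch of what that looks like on a typical Linux box, assuming the data directory is /var/lib/mysql (the paths and the 4G size are placeholders, and remember that everything in a tmpfs is lost on reboot):

```shell
# Sketch only: requires root, and take a full backup first.
systemctl stop mysql

# Create a RAM-backed filesystem (contents vanish on reboot!)
mkdir -p /mnt/mysql-ram
mount -t tmpfs -o size=4G tmpfs /mnt/mysql-ram

# Copy the data directory into RAM, then point MySQL at it
# by setting  datadir = /mnt/mysql-ram  in my.cnf.
cp -a /var/lib/mysql/. /mnt/mysql-ram/
systemctl start mysql

# Periodically sync the RAM copy back to disk (e.g. from cron),
# so a restart only loses changes since the last sync:
rsync -a /mnt/mysql-ram/ /var/backups/mysql-ram/
```

(Note that the OP later says they are on Windows/XAMPP, where this exact recipe does not apply; a RAM-disk driver would be needed instead.)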

  • BecomeWebHostBecomeWebHost Member
    edited December 2017

    yokowasis said: MySQL has a constant read/write load of 30 MB/s.

    If the buffer pool size is already set correctly, then I think you're not using good indexing.

    If you use an index, it will be kept in memory (for search operations).

    == from your other thread => using XAMPP is not recommended in production.

  • Firstly, 8,000 rows is a very small table by general DB standards. I don't know the "width" of the table in terms of columns and average size per row. If you're updating only a few columns every so often, you can easily restructure the tables (split them) so that only the heavily updated columns sit in a separate table, joined to the main table.

    You should also look at what indexes you have and whether they are also a reason for the heavy writes on updates. Sometimes the updates themselves may cause fewer writes than the resulting index updates (due to many suboptimal indexes).

    Depending on the DB size, you can easily move the actual DB file to a ramdrive/tmpfs mount (via symlink etc.), but beware that if/when you have any crash that results in an OS issue, you'll lose all DB changes since the last backup. Of course, you could periodically take a backup from the ramdrive to disk to avoid loss, based on how often your data changes and how often you need it.

    As pointed out in the other thread, without really getting into the details of the DB schema/app etc., it's not easy to give you (good) advice on how to improve things.

    Hope this helps.

  • nullnothere said: You should also look at what indexes you have and whether they are also a reason for the heavy writes on updates. Sometimes the updates themselves may cause fewer writes than the resulting index updates (due to many suboptimal indexes).

    this!

    @yokowasis does your table use proper indexing at all?

    If you make heavy use of temporary tables, have a look at whether your cache for them is big enough, and/or use a tmpfs dir as the working dir for MySQL so that those temp tables get created in RAM instead of on disk...

  • @Falzo said:

    nullnothere said: You should also look at what indexes you have and whether they are also a reason for the heavy writes on updates. Sometimes the updates themselves may cause fewer writes than the resulting index updates (due to many suboptimal indexes).

    this!

    @yokowasis does your table use proper indexing at all?

    If you make heavy use of temporary tables, have a look at whether your cache for them is big enough, and/or use a tmpfs dir as the working dir for MySQL so that those temp tables get created in RAM instead of on disk...

    I don't know what kind of index is considered proper.

    Here is what I have.

    A table with 120 columns.
    I put an index on 3 columns, 2 of which are used in the WHERE clause of the UPDATE query. Is this considered a proper index?

  • @nullnothere said:
    Firstly, 8000 .....

    The width of the table is 105 columns, with 100 of them being regularly updated. 3 of them are indexed.

    To summarize: columns 1 to 5 hold the student's identity (id, username, name of the test, time the test started) and columns 6 to 105 hold the student's answers to each question. There are 100 questions.

    Maybe there is a better way to approach this? I am open to suggestions.

    @BecomeWebHost said:

    yokowasis said: MySQL has a constant read/write load of 30 MB/s.

    If the buffer pool size is already set correctly, then I think you're not using good indexing.

    If you use an index, it will be kept in memory (for search operations).

    == from your other thread => using XAMPP is not recommended in production.

    Yes, I am using InnoDB (it was MyISAM). I am using XAMPP on Windows; I don't think it has a my.cnf. I have tried a few LAMP stacks and control panels, and XAMPP surprisingly performs better than the others (in my case, that is).

  • teamaccteamacc Member
    edited December 2017

    @yokowasis said:

    To summarize: columns 1 to 5 hold the student's identity (id, username, name of the test, time the test started) and columns 6 to 105 hold the student's answers to each question. There are 100 questions.

    Maybe there is a better way to approach this? I am open to suggestions.

    Flatten your mysql tables.

    Have 1 table with student name/number/etc. (primary key studentnumber)

    Have 1 table with "first test", "second test" etc. (primary key testnumber)

    Have 1 table with "first test question 1", "second test question 2" etc (primary key questionnumber, foreign key for testnumber, unique key combo "testnumber+questionnumber")

    Have 1 table with "first test question 1 student 1 answer: A" (no real primary key (although some sources advise you to do so, might wanna look into that), but unique key combo "questionnumber+studentnumber" and foreign key answernumber)

    Then, instead of having to update a 105-column row, you only insert/update a 2-column row: INSERT INTO testanswers (questionnumber, answer) VALUES (124, 2) ON DUPLICATE KEY UPDATE answer=2.
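teamacc's layout can be sketched as a runnable toy. This uses SQLite through Python only to keep the example self-contained; in MySQL the upsert is spelled with ON DUPLICATE KEY UPDATE as in the snippet above, and every table and column name here is invented:

```python
import sqlite3

# In-memory toy schema; in MySQL these would be ordinary InnoDB tables.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE students (studentnumber INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE answers (
        questionnumber INTEGER,
        studentnumber  INTEGER REFERENCES students(studentnumber),
        answer         TEXT,
        UNIQUE (questionnumber, studentnumber)  -- one answer per student per question
    );
""")
db.execute("INSERT INTO students VALUES (1, 'alice')")

def save_answer(questionnumber, studentnumber, answer):
    # Upsert a tiny 3-column row instead of rewriting a 105-column one.
    # (SQLite spelling; MySQL uses ON DUPLICATE KEY UPDATE answer = VALUES(answer).)
    db.execute(
        "INSERT INTO answers (questionnumber, studentnumber, answer) VALUES (?, ?, ?) "
        "ON CONFLICT (questionnumber, studentnumber) DO UPDATE SET answer = excluded.answer",
        (questionnumber, studentnumber, answer),
    )

save_answer(124, 1, "A")
save_answer(124, 1, "B")  # the student changes their answer; the same row is updated
row = db.execute("SELECT answer FROM answers WHERE questionnumber = 124").fetchone()
print(row[0])  # -> B
```

Each answer becomes one small row touched in place, which is the point of the suggestion.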

  • BecomeWebHostBecomeWebHost Member
    edited December 2017

    yokowasis said: Yes, I am using InnoDB (it was MyISAM). I am using XAMPP on Windows; I don't think it has a my.cnf. I have tried a few LAMP stacks and control panels, and XAMPP surprisingly performs better than the others (in my case, that is).

    That file. By default it only gives 16 MB for that!

    And don't use XAMPP in production. It's for development only.

  • @teamacc said:

    >
    So, instead of having a table with 105 columns x 8,000 rows, you are suggesting 4 columns x 800,000 rows.

    Makes me wonder: which one is faster?

    Searching the index in 8,000 rows and updating it (only 1 column to update; 105 columns total)

    Or

    Finding which row to update in an 800,000-row table (1 column to update; 4 columns total).

  • @BecomeWebHost said:

    yokowasis said: Yes, I am using InnoDB (it was MyISAM). I am using XAMPP on Windows; I don't think it has a my.cnf. I have tried a few LAMP stacks and control panels, and XAMPP surprisingly performs better than the others (in my case, that is).

    That file. By default it only gives 16 MB for that!

    And don't use XAMPP in production. It's for development only.

    What is your suggestion? 64 MB? 128? I have 3 GB of RAM to spare.

  • ArirangArirang Member
    edited December 2017

    An UPDATE will eventually result in disk I/O. How about changing innodb_flush_log_at_trx_commit from 1 to 2? That makes I/O faster, but if the server crashes, up to one second of updates may be lost.
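In my.cnf that looks like the following (a sketch; the trade-off in the comments is the important part):

```ini
[mysqld]
# 1 (default): flush the redo log to disk at every commit. Safest, most I/O.
# 2: write the log at commit but flush to disk only about once per second;
#    an OS or power crash can lose up to ~1 second of committed transactions.
innodb_flush_log_at_trx_commit = 2
```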

  • BecomeWebHostBecomeWebHost Member
    edited December 2017

    @yokowasis said:

    @BecomeWebHost said:

    yokowasis said: Yes, I am using InnoDB (it was MyISAM). I am using XAMPP on Windows; I don't think it has a my.cnf. I have tried a few LAMP stacks and control panels, and XAMPP surprisingly performs better than the others (in my case, that is).

    That file. By default it only gives 16 MB for that!

    And don't use XAMPP in production. It's for development only.

    What is your suggestion? 64 MB? 128? I have 3 GB of RAM to spare.

    Assign as much as you can, but make sure the server doesn't run out of RAM... add some swap for safety :) Restart MySQL after the change.

  • If you could send me some of the data you use and some test queries, I could run a few tests for you.

  • @yokowasis said:

    @teamacc said:


    So, instead of having a table with 105 columns x 8,000 rows, you are suggesting 4 columns x 800,000 rows.

    No, he suggests splitting your one big table into multiple tables and joining only the ones needed per request. With the given indexes those joins will be fast, and as he suggested, the updates may become much smaller.

  • HarzemHarzem Member
    edited December 2017

    Why do you need 105 different columns for 105 different answers? Any reason for not using a single column and storing the answers as strings?

    Do you require sorting the results by individual answers?

    Also, what is the table structure like? Do you have lots of varchars?
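Harzem's single-column idea can be illustrated in a few lines of Python, under the assumption (not stated in the thread) that every answer is a single character, so position N in the string is the answer to question N:

```python
NUM_QUESTIONS = 100

def blank_answers():
    # One character per question; '-' marks "not answered yet".
    return "-" * NUM_QUESTIONS

def set_answer(answers, question_index, choice):
    # Overwrite the single character for that question (0-based index).
    return answers[:question_index] + choice + answers[question_index + 1:]

row = blank_answers()
row = set_answer(row, 0, "A")
row = set_answer(row, 5, "C")
print(row[:8])  # -> A----C--
```

On the SQL side this turns 100 varchar columns into a single roughly 100-byte string updated with one statement, e.g. UPDATE tests SET answers = ? WHERE id = ? (table and column names hypothetical).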

  • AuroraZAuroraZ Barred
    edited December 2017

    Nvm, not going to mention the obvious flaw in this plan.

  • Please look at normalizing your database. More info can be had here -> https://www.guru99.com/database-normalization.html

  • Thread is derailed, from how to run MySQL from RAM into how to refactor the OP's code :)

    @AuroraZ said:
    Nvm, not going to mention the obvious flaw in this plan.

    Why do you keep following me?

  • @kassle said:
    Thread is derailed, from how to run MySQL from RAM into how to refactor the OP's code :)

    @AuroraZ said:
    Nvm, not going to mention the obvious flaw in this plan.

    Why do you keep following me?

    Are you still here?

  • @teamacc

    There are actually about 205 columns instead of 105.

    Which one is better:
    1. Using 1 field as JSON for the answers and updating ALL of the answers every time a student answers, or
    2. Using 100 fields and updating only 1 answer?

  • @AuroraZ said:

    Are you still here?

    shadow only

  • @Harzem said:
    Why do you need 105 different columns for 105 different answers? Any reason for not using a single column and storing the answers as strings?

    No, it is actually a pretty good idea. I never thought of it.

    Do you require sorting the results by individual answers?

    No

    Also, what is the table structure like? Do you have lots of varchar's?

    Yes. All of the answer fields (100 of them) are varchar.

  • kasslekassle Member
    edited December 2017

    @yokowasis said:
    @teamacc

    There are actually about 205 columns instead of 105.

    Which one is better:
    1. Using 1 field as JSON for the answers and updating ALL of the answers every time a student answers, or
    2. Using 100 fields and updating only 1 answer?

    There is a reason why MySQL is categorized as a relational database.

    @MCHPhil gave you a nice place to learn about it.

  • MCHPhilMCHPhil Member
    edited December 2017

    I would honestly just start over. That table is bad: varchar(512) all over the place. I'd suggest starting with some basic database books / resources.

    Edit:

    You have a clear misunderstanding of how a database works.

    Put EXPLAIN in front of all your SQL statements and you will see how each query works internally. If you run EXPLAIN and see that every part of your query touches every bit of the database... that is wrong.
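To make the EXPLAIN habit concrete, here is a self-contained toy in Python using SQLite's EXPLAIN QUERY PLAN (MySQL's EXPLAIN prints a different table, but the lesson carries over: an unindexed WHERE clause shows up as a full scan, while an indexed one names the index; the table and column names below are invented):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE answers (student_id INTEGER, question INTEGER, answer TEXT)")

def plan(sql):
    # EXPLAIN QUERY PLAN returns rows whose last column describes each step.
    rows = db.execute("EXPLAIN QUERY PLAN " + sql).fetchall()
    return " ".join(r[-1] for r in rows)

# No index on student_id yet: the plan reports a full table scan.
print(plan("SELECT * FROM answers WHERE student_id = 7"))

db.execute("CREATE INDEX idx_student ON answers (student_id)")

# The same query now resolves through the index instead of scanning every row.
print(plan("SELECT * FROM answers WHERE student_id = 7"))
```

The exact wording of the plan lines varies by SQLite version, but the before/after difference (scan versus index lookup) is what to look for.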

  • I think I am going to go with @Harzem's idea, unless anyone has a better suggestion.

    @MCHPhil said:

    I would honestly just start over. That table is bad: varchar(512) all over the place. I'd suggest starting with some basic database books / resources.

    That is the point of this thread: to find a better way to do this. If it weren't for the LET folks, the app would still easily max out the 8 cores and 16 GB of RAM of a dedicated server.
