I have an 83 GB compressed .sql.gz file that I'm importing. The database on the source server took 240 GB on disk (the size of the data directory).
I'm importing it on an Amazon cloud server with 2 CPUs, 4 GB RAM and an SSD disk, using an InnoDB buffer pool of 2.2 GB and a max_allowed_packet of 128 MB. The provisioned IOPS on the server are 1200. I'm running version 5.5.63 for "legacy reasons".
The import command is like this:
zcat dump.sql.gz | mysql -u root -ppasswd DB_NAME
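In case it matters, I also tried a variant of the same pipeline that reports progress through the compressed file (this assumes the pv utility is installed; dump.sql.gz and DB_NAME are the same placeholders as above):

```shell
# Same import, but pv shows throughput and an ETA based on how far
# through the compressed file we are (pv must be installed).
pv dump.sql.gz | zcat | mysql -u root -ppasswd DB_NAME
```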
Most of the volume is in four huge row-compressed tables that are also partitioned by timestamp range. For the compression I'm using KEY_BLOCK_SIZE=4.
The import is going extremely slowly. The .ibd files for the compressed tables (or rather, the one that is receiving data at any given time) grow at a crawl of about 40 MB per 5-minute period, which I measure by running "ls -l" 5 minutes apart. The intriguing thing is that Amazon's monitoring utilities, as well as iostat, report that between 50 MB and 140 MB are written to disk every 5 seconds. Here is an example:
avg-cpu:  %user  %nice  %system  %iowait  %steal  %idle
          33.00   0.00     0.99     2.37   11.86  51.78

Device:     tps  kB_read/s  kB_wrtn/s  kB_read  kB_wrtn
xvda     657.60       0.00   13877.70        0    69388
So, bottom line: the operating system reports that the disk is seeing a lot of traffic, but the InnoDB table files are growing at a snail's pace.
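The growth measurement described above can be made a bit more rigorous than eyeballing "ls -l" twice; this is a sketch of the same idea as a shell function (the .ibd path in the usage comment is hypothetical; substitute the partition currently receiving data):

```shell
# Sample a file's size twice, INTERVAL seconds apart, and print the
# growth rate in bytes per second. Uses GNU stat (-c %s prints size).
growth_rate() {
    file=$1
    interval=$2
    before=$(stat -c %s "$file")
    sleep "$interval"
    after=$(stat -c %s "$file")
    echo $(( (after - before) / interval ))
}

# Example (hypothetical partition file name):
# growth_rate '/var/lib/mysql/DB_NAME/big_table#P#p0.ibd' 300
```

40 MB per 300 seconds works out to roughly 140 kB/s of net file growth, versus the ~14 MB/s that iostat reports being written.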
A MySQL guru recommended that I set innodb_flush_log_at_trx_commit=2 and disable the query cache, using SET GLOBAL. I did that, and I see no improvement.
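Concretely, this is my reading of that advice as I applied it (variable names as in MySQL 5.5; SET GLOBAL changes last only until the server restarts):

```shell
# Relax durability: flush the redo log to disk once per second
# instead of at every commit.
mysql -u root -ppasswd -e "SET GLOBAL innodb_flush_log_at_trx_commit = 2;"

# Disable the query cache entirely.
mysql -u root -ppasswd -e "SET GLOBAL query_cache_size = 0;"
mysql -u root -ppasswd -e "SET GLOBAL query_cache_type = 0;"
```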
Thanks for any clues as to what is going on and to how I can speed this up.
At this rate the import will take weeks! After about 36 hours the DB occupies 27 GB on disk, out of the 240 GB it took on the source server.