Channel: MySQL Forums - Backup

'ROW' Binary log format (2 replies)

Hi

After changing the binary log format from 'STATEMENT' to 'ROW', I have problems applying the binary log to a newly restored database.

It seems it is possible for MySQL to write an entry to the binary log that is so huge that it cannot be read back.

When you try to apply the binary log to the database, you get: 'ERROR 1153 (08S01) at line 1074175: Got a packet bigger than 'max_allowed_packet' bytes'. I have tried increasing max_allowed_packet to 1 GB, which is the maximum, and it is still not enough to process all binlog entries.
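For reference, this is how I check and raise the server-side limit (a sketch; 1073741824 bytes is the documented 1 GB maximum for max_allowed_packet, and the table name aside, nothing here is specific to my setup):

```sql
-- Check the current server limit
SHOW VARIABLES LIKE 'max_allowed_packet';

-- Raise it to the 1 GB maximum; this only affects sessions opened
-- after the statement runs, so reconnect before retrying the restore.
-- Add max_allowed_packet=1G under [mysqld] in my.cnf to persist it.
SET GLOBAL max_allowed_packet = 1073741824;
```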

I have a slave that has no problems streaming the same binary logs, so it seems odd to me that restoring from a backup would require so much memory, and that you still risk hitting oversized entries.

A simple way to generate such an entry is to delete every row from a big table. In the binary log this is translated into a delete for each row, with a where clause describing every column in the table, and as far as I can tell all of these single deletes need to fit in one packet.
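A workaround I am considering (a sketch, not a confirmed fix, with `big_table` standing in for any large table) is to avoid producing one enormous event group in the first place by deleting in bounded batches, or by truncating:

```sql
-- Delete in batches of 10,000 rows so each transaction produces a
-- bounded number of row events in the binary log, instead of one
-- huge event group for the whole table.
-- Repeat this statement until ROW_COUNT() reports 0 affected rows.
DELETE FROM big_table LIMIT 10000;

-- If the table should simply be emptied, TRUNCATE is DDL and is
-- logged as a single statement even under ROW format.
TRUNCATE TABLE big_table;
```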

I hope I'm missing something, and that someone can tell me it's possible to get MySQL to write smaller chunks to the binary log, or to get mysqlbinlog to produce output that the mysql client can process more easily.

I use mysqlbinlog to decode the binary log and pipe the output to the mysql client. I have tried increasing max_allowed_packet on both the mysql client and the server.
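Concretely, the pipeline looks like this (binlog file, host, user, and database names are placeholders; --max-allowed-packet is the documented client option, though it only helps if the server-side limit is raised as well):

```shell
# Decode the binary log and replay it on the restored server,
# passing a larger packet limit to the mysql client too.
mysqlbinlog mysql-bin.000042 \
  | mysql --max-allowed-packet=1G -h restored-host -u root -p mydb
```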
