By keeping tables that contain LOB columns separate from other tables, you can use BATCHSQL efficiently. Otherwise, the LOB data forces BATCHSQL to flush its batches and temporarily fall back to normal (non-batched) processing.
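One way to isolate LOB tables is to run them in their own Replicat without BATCHSQL. A minimal sketch, where the group names (rep1, rep2) and schema/table names (hr.employees, hr.docs, etc.) are hypothetical:

```
-- Replicat rep1: non-LOB tables, BATCHSQL enabled
REPLICAT rep1
BATCHSQL
MAP hr.employees, TARGET hr.employees;
MAP hr.departments, TARGET hr.departments;

-- Replicat rep2: LOB table only, normal processing (no BATCHSQL)
REPLICAT rep2
MAP hr.docs, TARGET hr.docs;
```

Splitting the workload this way keeps the LOB traffic from repeatedly interrupting batching for the other tables.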
- Groups similar SQL statements (DML statements that affect the same table) across multiple transactions and keeps them in arrays
- Puts the arrays into memory queues
- Applies the arrays in one database operation
- MAXSQLSTATEMENTS: Controls the number of prepared SQL statements that Replicat can use, both in regular processing mode and in BATCHSQL mode. The default is 250.
- BATCHESPERQUEUE: Specifies the maximum number of batches per memory queue (1-1000). The default is 50.
- BATCHTRANSOPS: Controls the maximum number of batch operations that can be grouped into a transaction before a commit is required (1-100,000). The default is 1000.
- BYTESPERQUEUE: Specifies the maximum number of bytes per queue (1 MB-1 GB). The default is 20 MB.
- OPSPERBATCH: Specifies the maximum number of row operations per batch (1-100,000). The default is 1200.
- OPSPERQUEUE: Specifies the maximum number of row operations a memory queue can contain (1-100,000). The default is also 1200.
BATCHSQL BATCHTRANSOPS 50, BATCHESPERQUEUE 100, OPSPERBATCH 1200
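A BATCHSQL entry like this sits in the Replicat parameter file before the MAP statements. A minimal sketch, where the group name, credential alias, and object names are hypothetical:

```
REPLICAT rep1
USERIDALIAS ggadmin
BATCHSQL BATCHTRANSOPS 50, BATCHESPERQUEUE 100, OPSPERBATCH 1200
MAP src.orders, TARGET tgt.orders;
```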
Question: Does BATCHSQL preserve the original execution order of the SQL statements?
Answer: No, BATCHSQL grouping changes the execution order of the SQL statements.
- LOB and LONG data
- Rows larger than 25 KB
- Tables having more than one unique key besides the primary key
- (SQL Server) The target table has a trigger
- Errors
BATCHSQL works best for a large number of small row changes. For large row changes, the performance gain is not significant; you normally would not use BATCHSQL when the row size exceeds 5 KB.
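For rows that are larger than typical but still under the 5 KB guideline, one illustrative tuning (the values here are assumptions for the sketch, not a recommendation from the sources below) is to lower OPSPERBATCH and raise BYTESPERQUEUE so each batch stays within a predictable memory budget:

```
-- Illustrative values only: smaller batches, larger per-queue byte budget
BATCHSQL OPSPERBATCH 400, BYTESPERQUEUE 100000000
```

As always, validate any such change against the actual workload before relying on it.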
- Stephan Haisley, Lawrence To, Oracle GoldenGate Performance Best Practices, November 2014
- John P. Jeffries, Oracle GoldenGate 12c Implementer's Guide, July 2015
- Oracle GoldenGate Documentation (12.2) - 3.15 BATCHSQL