database design - Creating a table with volatile columns first to reduce log size


I am not sure about this. I think I read it somewhere before, but I would like to know whether it is true or false:

When creating tables, it is better to put the volatile columns first and the static columns after them. I mean, put the columns that can be updated at the beginning, and the ones that are not updated at the end. This reduces the size of the transaction log, because each time a row is modified the log writes the old row, plus the columns of the new row up to the last updated one.

Row:

id (PK), code, name, message
1, 10, "John Doe", "A funny message"

Updated to:

1, 10, "John Doe", "The message changed."

In this case, the log will write the whole new row. However, if we change the order of the columns:

Row:

id (PK), message, code, name
1, "A funny message", 10, "John Doe"

Updated to:

1, "The message changed.", 10, "John Doe"

Then the transaction log only writes up to the last modified column (1, "The message changed."), which improves performance when writing the logs and when shipping them to the other machine in an HADR setup.

I would like to know whether this is true, and where I can find more information about it.
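To make the comparison concrete, here is a minimal sketch of the two layouts being compared. The table and column names, as well as the data types, are hypothetical; only the column order differs:

-- Layout A: the frequently updated column defined last (hypothetical names/types)
CREATE TABLE messages_a (
    id      INTEGER NOT NULL PRIMARY KEY,
    code    INTEGER,
    name    VARCHAR(50),
    message VARCHAR(200)   -- frequently updated
);

-- Layout B: the frequently updated column moved right after the key
CREATE TABLE messages_b (
    id      INTEGER NOT NULL PRIMARY KEY,
    message VARCHAR(200),  -- frequently updated
    code    INTEGER,
    name    VARCHAR(50)
);

-- The update under discussion: only MESSAGE changes
UPDATE messages_a SET message = 'The message changed.' WHERE id = 1;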

When full data change capture is not enabled, DB2 LUW logs update records beginning with the first byte of data that changes and continuing through the last byte of data that changes. IBM offers the following recommendation in the section of its online documentation titled "Ordering columns to minimize update logging":

Columns that are updated frequently should be grouped together and defined towards or at the end of the table definition. This results in better performance, fewer bytes logged, and fewer log pages written, as well as a smaller active log space requirement for transactions performing a large number of updates.
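If you want to check the effect on your own system, one rough way, assuming a DB2 LUW release recent enough to provide the MON_GET_UNIT_OF_WORK table function (and that I am recalling the monitor element names correctly), is to run the same UPDATE against both layouts inside separate uncommitted units of work and compare the log space each one has used:

-- Hypothetical check: log bytes used by the currently active units of work
SELECT application_handle,
       uow_id,
       uow_log_space_used   -- bytes of log space used by this unit of work
FROM   TABLE(MON_GET_UNIT_OF_WORK(NULL, -1)) AS t
ORDER  BY uow_log_space_used DESC;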

