You can have multiple database block sizes in the same database. You can have a mixture of the “default” block size (the block size you used when you initially created the database; the size that is used for the SYSTEM and all TEMPORARY tablespaces) and up to four other block sizes. Each unique block size must have its own buffer cache area. The default, keep, and recycle pools will cache only blocks of the default size. In order to have a nondefault block size in your database, you must first configure a buffer cache to hold blocks of that size.
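If you’re not sure what your database’s default block size is, a quick check from SQL*Plus is all it takes (this is just an illustrative check, not part of the example that follows):
$ sqlplus / as sysdba
SQL> show parameter db_block_size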
In this example, my default block size is 8KB. I will attempt to create a tablespace with a 16KB block size:
$ sqlplus / as sysdba
SQL> create tablespace ts_16k datafile '/tmp/ts_16k.dbf' size 5m blocksize 16k;
create tablespace ts_16k
*
ERROR at line 1:
ORA-29339: tablespace block size 16384 does not match configured block sizes
SQL> show parameter 16k
db_16k_cache_size big integer 0
Right now, since I have not configured a 16KB cache, I can’t create such a tablespace. I could do one of a few things to rectify this situation. I could set the DB_16K_CACHE_SIZE parameter and restart the database. I could shrink one of my other SGA components in order to make room for a 16KB cache in the existing SGA. Or, I might be able to just allocate a 16KB cache if the SGA_MAX_SIZE parameter were larger than my current SGA size.
Note It is possible to resize various SGA components while the database is up and running. If you want to be able to “grow” the size of the SGA beyond its initial allocation, you must have set the SGA_MAX_SIZE parameter to some value larger than the allocated SGA. For example, if after startup your SGA size was 800MB and you wanted to add an additional 200MB to the buffer cache, you would have had to set the SGA_MAX_SIZE to 1GB or larger to allow for the growth.
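For completeness, the dynamic route (the one I’m not taking here) would look something like the following sketch. Whether the ALTER SYSTEM succeeds depends entirely on SGA_MAX_SIZE and on how much memory is currently free or can be shrunk away from the other components; if there is no room to grow into, the statement simply fails with an error:
SQL> show parameter sga_max_size
SQL> -- attempt to carve out a 16KB cache in the running instance (may fail if no memory is available)
SQL> alter system set db_16k_cache_size = 16m scope=memory;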
In this example, I will set the DB_16K_CACHE_SIZE and restart, since I’m using ASMM and don’t wish to set any of the other caches manually:
SQL> alter system set db_16k_cache_size = 16m scope=spfile;
SQL> startup force
SQL> show parameter 16k
db_16k_cache_size big integer 16M
So, now I have another buffer cache set up: one to cache any blocks that are 16KB in size. The default pool will consume the rest of the buffer cache space, as you can see by querying V$SGASTAT. These two buffer caches are mutually exclusive; if one “fills up,” it can’t use space in the other.
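A query along these lines is one way to see that (the exact row names in V$SGASTAT vary a bit by release; V$BUFFER_POOL is another view that breaks the caches out by block size):
SQL> select name, bytes from v$sgastat where lower(name) like '%buffer_cache%';
SQL> select name, block_size, current_size from v$buffer_pool;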
This gives the DBA a very fine degree of control over memory use, but it comes at a price. That price is complexity and management. These multiple block sizes were not intended as a performance or tuning feature (if you need multiple caches, you have the default, keep, and recycle pools already), but rather came about in support of transportable tablespaces—the ability to take formatted datafiles from one database and transport or attach them to another database.
They were implemented in order to take datafiles from a transactional system that was using an 8KB block size and transport that information to a data warehouse using a 16KB or 32KB block size.
The multiple block sizes do serve a good purpose, however, in testing theories. If you want to see how your database would operate with a different block size—how much space, for example, a certain table would consume if you used a 4KB block instead of an 8KB block—you can now test that easily without having to create an entirely new database instance.
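For example, assuming a DB_4K_CACHE_SIZE had been configured in the same fashion as the 16KB cache above, a quick space comparison might look something like this (TS_4K, T_4K, T_8K, and BIG_TABLE are simply names I’m making up for the illustration):
SQL> create tablespace ts_4k datafile '/tmp/ts_4k.dbf' size 100m blocksize 4k;
SQL> create table t_8k as select * from big_table;
SQL> create table t_4k tablespace ts_4k as select * from big_table;
SQL> select segment_name, blocks, bytes from user_segments where segment_name in ('T_4K','T_8K');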
You may also be able to use multiple block sizes as a very finely focused tuning tool for a specific set of segments, by giving them their own private buffer pools. Or, in a hybrid system, transactional users could use one set of data, and reporting/warehouse users could query a separate set of data.
The transactional data would benefit from the smaller block sizes due to less contention on the blocks (fewer rows per block means that, in general, fewer sessions go after the same block at the same time) as well as better buffer cache utilization (users read into the cache only the data they are interested in: the single row or small set of rows).
The reporting/warehouse data, which might be based on the transactional data, would benefit from the larger block sizes due in part to less block overhead (the same data takes less storage overall) and, perhaps, larger logical I/O sizes.
And since reporting/warehouse data does not have the same update contention issues, the fact that there are more rows per block is not a concern but a benefit. Moreover, the transactional users get their own buffer cache in effect; they don’t have to worry about the reporting queries overrunning their cache.
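As a sketch of what that separation might look like (ORDERS, ORDERS_REPORTING, and the USERS tablespace are just placeholders for the illustration), the transactional segment lives in a default-block-size tablespace while the reporting copy lives in the 16KB tablespace, and is therefore cached in the 16KB pool:
SQL> create table orders ( order_id number, customer_id number, order_date date ) tablespace users;
SQL> create table orders_reporting tablespace ts_16k as select * from orders;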
But in general, the default, keep, and recycle pools should be sufficient for fine-tuning the block buffer cache, and multiple block sizes would be used primarily for transporting data from database to database and perhaps for a hybrid reporting/transactional system.