Reduces the Memory Needed on the System

This is one of the most highly touted reasons for using a shared server: it reduces the amount of required memory. It does, but not as significantly as you might think, especially given the automatic PGA memory management discussed in Chapter 4, where work areas are allocated to a process, used, and released—and their size varies based on the concurrent workload.

This claim was truer in older releases of Oracle and is less meaningful today. Also, remember that with a shared server, the UGA is located in the SGA. This means that when switching over to a shared server, you must be able to accurately estimate your expected UGA memory needs and allocate space for them in the SGA via the LARGE_POOL_SIZE parameter.
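Before setting that parameter, it helps to measure what your sessions actually use today. A minimal sketch, assuming you can query V$SESSTAT and V$STATNAME (the statistic names are standard; the 512M figure at the end is purely illustrative, not a recommendation):

-- UGA memory currently in use across all sessions
select sum(s.value)/1024/1024 as current_uga_mb
  from v$sesstat s, v$statname n
 where n.statistic# = s.statistic#
   and n.name = 'session uga memory';

-- Per-session high-water marks, summed; a more conservative estimate
select sum(s.value)/1024/1024 as max_uga_mb
  from v$sesstat s, v$statname n
 where n.statistic# = s.statistic#
   and n.name = 'session uga memory max';

-- With an estimate in hand, size the large pool accordingly, for example:
-- alter system set large_pool_size = 512M scope=spfile;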

The SGA requirements for a shared server configuration are typically very large. This memory must be preallocated up front and thus can be used only by the database instance.

Note  It is true that with a resizable SGA, you may grow and shrink this memory over time, but for the most part, it will be owned by the database instance and will not be usable by other processes.
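If you want to see how that preallocated memory is being consumed once the shared server is running, V$SGASTAT breaks the large pool down by component; a minimal sketch (the view and its POOL, NAME, and BYTES columns are standard; your output will vary):

select pool, name, round(bytes/1024/1024) as mb
  from v$sgastat
 where pool = 'large pool'
 order by bytes desc;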

Contrast this with a dedicated server, where anyone can use any memory not allocated to the SGA. If the SGA is much larger due to the UGA being located in it, where do the memory savings come from? They come from having that many fewer PGAs allocated.

Each dedicated/shared server has a PGA. This is process information: sort areas, hash areas, and other process-related structures. It is this memory need that you remove from the system by using a shared server. If you go from using 5,000 dedicated servers to 100 shared servers, the savings are the cumulative sizes of the 4,900 PGAs (excluding their UGAs) you no longer need.
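To put a rough number on that, you can sum the per-process PGA figures from V$PROCESS while still running dedicated servers; a minimal sketch (the columns are standard, but the UGA portion is not broken out here, so treat the result as an upper bound on what a shared server would save):

select count(*)                            as processes,
       round(sum(pga_used_mem) /1024/1024) as used_mb,
       round(sum(pga_alloc_mem)/1024/1024) as alloc_mb,
       round(sum(pga_max_mem)  /1024/1024) as max_mb
  from v$process;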

DRCP

So, what about the DRCP feature? It has many of the benefits of a shared server, such as fewer processes (we are pooling) and possible memory savings, without the drawbacks.

There is no chance of artificial deadlock; for example, the session holding the lock on the resource in the earlier example would have its own dedicated server from the pool, and that session would be able to release the lock eventually.

It doesn’t have the multithreading capability of a shared server; when a client process gets a dedicated server from the pool, it owns that process until that client process releases it.

Therefore, it is best suited for client applications that frequently connect, do some relatively short operation, and disconnect, over and over and over again; in short, for client applications whose API does not provide an efficient connection pool of its own.
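For completeness, enabling DRCP itself is a small amount of work using the documented DBMS_CONNECTION_POOL package; a minimal sketch run as a suitably privileged user (the pool settings shown are illustrative values, not recommendations):

-- Start the default pool
exec dbms_connection_pool.start_pool;

-- Optionally adjust its size and timeouts
begin
   dbms_connection_pool.configure_pool(
      pool_name          => 'SYS_DEFAULT_CONNECTION_POOL',
      minsize            => 5,
      maxsize            => 40,
      inactivity_timeout => 300 );
end;
/

Client processes then request a pooled server in their connect string, for example hostname/service:POOLED with easy connect, or (SERVER=POOLED) in the CONNECT_DATA section of a TNS entry.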
