Row/Resultset Buffers - Prefer larger buffer sizes when fetching many rows; a larger row buffer can greatly improve the speed of creating extracts. This setting is sometimes called the cache size or response size.
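As a rough, non-authoritative illustration of the same idea in a client API, the python-oracledb driver exposes the row buffer through cursor.arraysize and cursor.prefetchrows. The connection details and table name below are placeholders, not taken from the quoted documentation.

import oracledb

# Placeholder credentials and DSN; substitute your own.
conn = oracledb.connect(user="scott", password="tiger", dsn="localhost/orclpdb1")
cur = conn.cursor()

# Buffer 1,000 rows per round trip instead of the default 100, and prefetch
# the same number on the first round trip. Larger values help when an
# extract has to pull many rows.
cur.arraysize = 1000
cur.prefetchrows = 1001

cur.execute("select * from big_table")   # big_table is a hypothetical table
for row in cur:
    pass  # rows are delivered from the client-side buffer, arraysize at a time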
[Screenshot: Oracle ODBC driver configuration dialog, fetch buffer size setting]
The MEMLIM parameter sets the size of the buffer the ODBC driver uses for data retrieval, specified in bytes. The minimum buffer size is 32 KB; a setting of 2,000,000 bytes (approximately 2 MB) is the recommended value when fetching large result sets.
For Oracle, the Block parameter can be used in conjunction with the MaxFetchBuffer database parameter to improve performance when rows are very large. MaxFetchBuffer has a default value of 5,000,000 bytes, which is sufficient for most applications. The size of the actual fetch buffer is the product of the blocking factor and the row size.
If the fetch buffer required by the blocking factor and the row size is greater than MaxFetchBuffer, the blocking factor is adjusted downward so that the buffer limit is not exceeded. For example, if Block=500 and the row size is 10 KB, the fetch buffer is 5,000 KB, which is right at the default maximum buffer size, so the blocking factor is left unchanged.
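The adjustment rule above is simple arithmetic; here is a small, purely illustrative Python sketch (the helper name is made up, and 5,000,000 bytes is the default cap quoted above):

def effective_block(block, row_size_bytes, max_fetch_buffer=5_000_000):
    # If block * row_size would exceed MaxFetchBuffer, the blocking factor
    # is reduced so the fetch buffer stays within the limit.
    if block * row_size_bytes > max_fetch_buffer:
        block = max_fetch_buffer // row_size_bytes
    return max(block, 1)

print(effective_block(500, 10_000))   # 500: 500 x 10 KB is right at the 5 MB default cap
print(effective_block(500, 20_000))   # 250: 500 x 20 KB would need 10 MB, so the factor is halved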
Sets the maximum size of the buffer into which the DataWindow object can fetch rows from the database. Using the MaxFetchBuffer parameter together with the Block parameter can improve performance when accessing a database in PowerBuilder.
Used with the fetchmany method, this setting specifies the internal buffer size, which is also the number of rows actually fetched from the server at a time. The default value is 10,000. For narrow results (rows that do not contain much data), increasing this value can improve performance.
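The parameter described above belongs to a specific client library; as an analogous sketch only, here is the same batch-at-a-time pattern with fetchmany in python-oracledb (fetchmany without an argument returns up to cursor.arraysize rows; the table name and connection details are hypothetical):

import oracledb

conn = oracledb.connect(user="scott", password="tiger", dsn="localhost/orclpdb1")
cur = conn.cursor()
cur.arraysize = 10_000          # mirrors the default buffer size mentioned above

cur.execute("select id, payload from narrow_table")
while True:
    rows = cur.fetchmany()      # one batch of up to cur.arraysize rows
    if not rows:
        break
    for row in rows:
        pass                    # process each row in the batch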
With large record sets it is best not to attempt to go to the last record, as this may take some time; a large buffer size might even slow down the fetch. If you must get the number of rows in a large record set, you might try a few large OCI_FETCH_ABSOLUTE calls followed by an OCI_FETCH_LAST, which can save some time. For example, with a record set of 10,000 rows and a buffer of 5,000, an OCI_FETCH_LAST would fetch the first 5,000 rows into the buffer and then the next 5,000. If you require only the first few rows, there is no need to set a large prefetch value.
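The OCI_FETCH_* constants above belong to the OCI layer. As a loose analogue (an assumption on my part, not something stated in the quoted text), python-oracledb offers scrollable cursors, which support the same jump-ahead-then-go-to-last pattern; the table name and connection details are placeholders.

import oracledb

conn = oracledb.connect(user="scott", password="tiger", dsn="localhost/orclpdb1")

# A scrollable cursor permits absolute positioning and jumping to the last row.
cur = conn.cursor(scrollable=True)
cur.execute("select id from big_table order by id")

cur.scroll(5000, mode="absolute")   # jump most of the way through the result set
cur.scroll(mode="last")             # then position on the last row
last_row = cur.fetchone()           # fetch the row at the current position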
You Asked (AskTom):

My development environment: I have an NT development box with 256 MB on it. There are two Oracle databases on it, with at least two identical schema owners on each. Each schema has on average about 10 PL/SQL packages of about 1,000 lines each. I have separated the two databases as D (Development) and Q (for our own internal testing). There is also a JRun server and a Netscape Enterprise Server running on the same box.

My SGA:

Total System Global Area  24899532 bytes
Fixed Size                   65484 bytes
Variable Size              7983104 bytes
Database Buffers          16777216 bytes
Redo Buffers                 73728 bytes

Some of the pfile parameters:

db_block_size              integer  8192
db_block_buffers           integer  2048
shared_pool_size           string   5000000
shared_pool_reserved_size  string   250000

Problems encountered: Our developers are using servlet applications, and our testers are banging against the database pretty hard. I have to restart the server at least once or twice every day due to shared memory errors such as the ones below. Our QC and production environments will be dedicated Oracle servers running on Solaris boxes with much better resource allocations. Any help on how to eliminate or mitigate this problem will be greatly appreciated. Thanks, Khalid

Sample errors:

Error: SQLException java.sql.SQLException: ORA-04031: unable to allocate 4096 bytes of shared memory ("shared pool","GF","PL/SQL MPCODE","BAMIMA: Bam Buffer")
ORA-06508: PL/SQL: could not find program unit being called
ORA-06512: at line 1
begin :1 := gfx.insrt_coach('RonJennings',5,'04172001','','','',''); end;

Error: SQLException java.sql.SQLException: ORA-00604: error occurred at recursive SQL level 1
ORA-04031: unable to allocate 4216 bytes of shared memory ("shared pool","select con#,obj#,rcon#,enabl...","sga heap","library cache")
INSERT INTO gfx_suggestion (suggestion, suggestion_id, timestamp, suggestion_type_fl, name, email, business_unit_key)
select 'test suggestion. RJ 04/19/01', max(suggestion_id)+1, sysdate, 'T', 'Ron Jennings', 'rj@rwd.com', '5' from gf_suggestion

and Tom said...

#1 -- You are NOT USING BIND VARIABLES. For example, I clearly see:

INSERT INTO gfx_suggestion (suggestion, suggestion_id, timestamp, suggestion_type_fl, name, email, business_unit_key)
select 'test suggestion. RJ 04/19/01', max(suggestion_id)+1, sysdate, 'T', 'Ron Jennings', 'rj@rwd.com', '5' from gf_suggestion

That MUST be rewritten as:

INSERT INTO gfx_suggestion (suggestion, suggestion_id, timestamp, suggestion_type_fl, name, email, business_unit_key)
select :1, max(suggestion_id)+1, sysdate, :2, :3, :4, :5 from gf_suggestion

or you will not go ANYWHERE with this application. Bind variables are SO MASSIVELY important -- I cannot in any way, shape or form overstate their importance. Same with the PL/SQL call I see there:

begin :1 := gfx.insrt_coach('RonJennings',5,'04172001','','','',''); end;

That MUST be coded as:

begin :1 := gfx.insrt_coach(:2,:3,:4,:5,:6,:7,:8); end;

If you do not fix this, your application is doomed to utter and total failure from day one.

#2 -- I see the query:

INSERT INTO gfx_suggestion (suggestion, suggestion_id, timestamp, suggestion_type_fl, name, email, business_unit_key)
select 'test suggestion. RJ 04/19/01', max(suggestion_id)+1, sysdate, 'T', 'Ron Jennings', 'rj@rwd.com', '5' from gf_suggestion

and shudder -- that is frightening! What happens when two people insert at about the same time?
Answer: both get the SAME suggestion_id. That is a horrible programming practice your "database" developers have (I quote "database" because I don't think they are database developers; I think they are Java programmers trying to use a database -- these code snippets must just be the tiny tip of a really big iceberg). They NEED to read about sequences. That insert should be inserting:

... SUGGESTION_SEQ.NEXTVAL, ...

A sequence is a highly scalable, non-blocking ID generator.

Java supports bind variables; your developers must start using prepared statements and binding inputs into them. If you want your system to ultimately scale beyond, say, about 3 or 4 users, you will do this right now (fix the code). It is not something to think about, it is something you MUST do. A side effect of this: your shared pool problems will pretty much disappear. That is the root cause.

If I were to write a book on how to build non-scalable applications in Oracle, this would be the first and last chapter. This is a major cause of performance issues and a major inhibitor of scalability in Oracle. The way the Oracle shared pool (a very important shared memory data structure) operates is predicated on developers using bind variables. If you want to make Oracle run slowly, even grind to a total halt, just refuse to use them.

For those that do not know, a bind variable is a placeholder in a query. For example, to retrieve the record for employee 1234, I can either query:

SELECT * FROM EMP WHERE EMPNO = 1234;

or I can query:

SELECT * FROM EMP WHERE EMPNO = :empno;

and supply the value for :empno at query execution time. The difference between the two is huge, dramatic even. In a typical system, you would query up employee 1234 maybe once and then never again. Later, you would query up employee 456, then 789, and so on. If you use literals (constants) in the query, each and every query is a brand new query, never before seen by the database. It will have to be parsed, qualified (names resolved), security checked, optimized and so on. In short, it will be compiled. Every unique statement you execute will have to be compiled every time.

This would be like shipping your customers Java source code and, before calling a method in a class, invoking the Java compiler, compiling the class, running the method, and then throwing away the byte code. The next time you wanted to execute the same exact method, you would do the same thing: compile it, run it, and throw it away. Executing SQL statements without bind variables is very much the same thing as compiling a subroutine before each and every call. You would never consider doing that in your application; you should never consider doing that to your database either.

Not only will parsing a statement like that (a HARD parse) consume many more resources and more time than reusing an already parsed query plan found in the shared pool (a SOFT parse), it will limit your scalability. We can see it will obviously take longer; what is not obvious is that it will reduce the number of users your system can support. This is due in part to the increased resource consumption, but mainly to the latching mechanisms for the library cache where these plans are stored after they are compiled. When you hard parse a query, we will spend more time holding certain low-level serialization devices called latches.
These latches protect the data structures in Oracle's shared memory from concurrent modification by two sessions (otherwise Oracle would end up with corrupt data structures) and from someone reading a data structure while it is being modified. The longer and more often we have to latch these data structures, the longer the queue to get these latches becomes. Similar to the MTS architecture issue described above, we start to monopolize scarce resources. Your machine may appear to be underutilized at times, and yet everyone in the database is running very slowly. This is because someone is holding one of these serialization mechanisms and a line is forming. You are not able to run at top speed.

The second query above, on the other hand (the one with :empno), is compiled once and stored in the shared pool (the library cache). Everyone who submits the same exact query that references the same object will use that compiled plan (the SOFT parse). You compile your subroutine once and use it over and over again. This is very efficient and the way the database intends you to do your work. Not only will you use fewer resources (a SOFT parse is much less resource intensive), you will hold latches for less time and need them less frequently. This increases your performance and greatly increases your scalability. Just to give you a tiny idea of how huge a difference this can make performance-wise, you only need to run a very small test:

tkyte@TKYTE816> alter system flush shared_pool;
System altered.

tkyte@TKYTE816> declare
    type rc is ref cursor;
    l_rc    rc;
    l_dummy all_objects.object_name%type;
    l_start number default dbms_utility.get_time;
begin
    for i in 1 .. 1000
    loop
        open l_rc for
            'select object_name
               from all_objects
              where object_id = ' || i;
        fetch l_rc into l_dummy;
        close l_rc;
    end loop;
    dbms_output.put_line
        ( round( (dbms_utility.get_time - l_start)/100, 2 ) ||
          ' seconds...' );
end;
/
14.86 seconds...
PL/SQL procedure successfully completed.

tkyte@TKYTE816> declare
    type rc is ref cursor;
    l_rc    rc;
    l_dummy all_objects.object_name%type;
    l_start number default dbms_utility.get_time;
begin
    for i in 1 .. 1000
    loop
        open l_rc for
            'select object_name
               from all_objects
              where object_id = :x'
            using i;
        fetch l_rc into l_dummy;
        close l_rc;
    end loop;
    dbms_output.put_line
        ( round( (dbms_utility.get_time - l_start)/100, 2 ) ||
          ' seconds...' );
end;
/
1.27 seconds...
PL/SQL procedure successfully completed.

That is pretty dramatic. The fact is that not only does this execute much faster (we spent more time PARSING our queries than actually EXECUTING them!), it will let more users use your system simultaneously.
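Pulling the two fixes together in client code: a minimal sketch with python-oracledb (the original application used Java servlets and prepared statements, so this is only an analogous illustration; the sequence name suggestion_seq and the connection details are assumptions, while the table and column names come from the thread):

import oracledb

# Placeholder connection details.
conn = oracledb.connect(user="gfx", password="changeme", dsn="localhost/orclpdb1")
cur = conn.cursor()

# One statement text with bind placeholders: hard parsed once, then soft
# parsed (reused from the shared pool) on every subsequent execution.
# suggestion_seq is an assumed sequence that generates suggestion_id.
sql = """insert into gfx_suggestion
           (suggestion, suggestion_id, timestamp, suggestion_type_fl,
            name, email, business_unit_key)
         values (:sugg, suggestion_seq.nextval, sysdate,
                 :type_fl, :name, :email, :bu)"""

cur.execute(sql, {"sugg": "test suggestion. RJ 04/19/01",
                  "type_fl": "T",
                  "name": "Ron Jennings",
                  "email": "rj@rwd.com",
                  "bu": "5"})
conn.commit()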