Usage of FOR ALL ENTRIES (FAE) in ABAP

Updated: 2022-08-22 12:48:42

In IBASE we have some places that use OPEN CURSOR … FETCH. For example, in the method CL_IBCOMPTOCOMADV_IL->IF_IBASE_IL_SEARCH~SEARCH_DYNAMIC:


OPEN CURSOR WITH HOLD lv_cursor FOR
  SELECT DISTINCT (lt_select_cond)
    FROM (lt_from_cond)
    FOR ALL ENTRIES IN gt_type_maint
    WHERE (lt_where_cond)
    AND   (gv_type_maint_where_cond).

FETCH NEXT CURSOR lv_cursor
  INTO CORRESPONDING FIELDS OF TABLE lt_select_parameters
  PACKAGE SIZE gc_package_size.

The gc_package_size is set to 100.
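For context, a complete packaged fetch usually runs in a loop until the cursor is exhausted. The snippet above shows only a single FETCH; the following is a minimal sketch of the full loop (the DO/ENDDO frame, the sy-subrc check, and the processing comment are assumptions for illustration, not part of the original method):

```abap
OPEN CURSOR WITH HOLD lv_cursor FOR
  SELECT DISTINCT (lt_select_cond)
    FROM (lt_from_cond)
    FOR ALL ENTRIES IN gt_type_maint
    WHERE (lt_where_cond)
    AND   (gv_type_maint_where_cond).

DO.
  " Each FETCH should deliver at most gc_package_size rows
  " into the internal table.
  FETCH NEXT CURSOR lv_cursor
    INTO CORRESPONDING FIELDS OF TABLE lt_select_parameters
    PACKAGE SIZE gc_package_size.
  IF sy-subrc <> 0.
    EXIT.  " no more data
  ENDIF.
  " ... process the current package in lt_select_parameters ...
ENDDO.

CLOSE CURSOR lv_cursor.
```

With a plain WHERE condition this loop keeps the memory footprint at roughly one package; the problem described below is that the FOR ALL ENTRIES addition breaks this expectation.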


I tested this code on a HANA and on a non-HANA system. On both systems, the SQL trace shows that far more than 100 records were selected from the database. Is this the standard behavior?


Also, on the HANA system the performance is much worse than on the non-HANA system.




# Answer

The documentation says: "The addition PACKAGE SIZE does not influence the size of the packages (configured in the profile parameters) used to transport data between the database server and the application server." PACKAGE SIZE influences the communication between the DBIF and ABAP (you only get the number of records specified by PACKAGE SIZE into the internal table), but not the communication of the DBIF with the database. This is similar to the situation with SELECT … ENDSELECT versus SELECT … INTO TABLE.

OPEN CURSOR handling from within ABAP works on HANA the same way as on AnyDB. The problem you describe is caused by the combination of cursor handling with the FOR ALL ENTRIES addition. In that combination, package processing on moderately sized tables appears to the end user to work perfectly fine; on huge tables, memory dumps occur, as you reported.

Why is that the case? Simply because in this special combination the package processing happens on the application server only. All the data is loaded into SAP memory at the first FETCH statement, and the packaging is then applied only to this application-server buffer. The memory dump on huge tables is therefore easy to understand. Nevertheless, this is not the behavior one would expect, since OPEN CURSOR with packaging is exactly the approach typically used when working with massive amounts of data.

How to resolve this? Use a RANGE TABLE together with the IN operator. With that, packaging works as it should: on the database level. We plan to replace FOR ALL ENTRIES with a RANGE TABLE in our programs. I tested it and it works as expected.

Please be aware that statements with range tables may have other problems. In particular, if the range table is huge, the statement size may exceed predefined limits. With FOR ALL ENTRIES this cannot happen, because the DBIF in the ABAP work process splits the internal driver table into smaller chunks.
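A minimal sketch of the suggested rewrite. The range table name lt_type_range, the component <ls_maint>-type of gt_type_maint, and the column name type in the WHERE clause are illustrative assumptions; the actual field names depend on the real structure of gt_type_maint:

```abap
" Build a range table from the former FOR ALL ENTRIES driver table.
DATA lt_type_range TYPE RANGE OF string.

LOOP AT gt_type_maint ASSIGNING FIELD-SYMBOL(<ls_maint>).
  APPEND VALUE #( sign   = 'I'
                  option = 'EQ'
                  low    = <ls_maint>-type ) TO lt_type_range.
ENDLOOP.

" Same cursor logic, but with IN <range> instead of FOR ALL ENTRIES:
" the database now sees one statement and packaging happens on DB level.
OPEN CURSOR WITH HOLD lv_cursor FOR
  SELECT DISTINCT (lt_select_cond)
    FROM (lt_from_cond)
    WHERE (lt_where_cond)
    AND   type IN lt_type_range.

FETCH NEXT CURSOR lv_cursor
  INTO CORRESPONDING FIELDS OF TABLE lt_select_parameters
  PACKAGE SIZE gc_package_size.
```

As noted above, this trades one limit for another: a very large lt_type_range can exceed the maximum statement size, whereas FOR ALL ENTRIES avoids that by letting the DBIF split the driver table into chunks.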