Drastic Reduction of Possible Rows in a Table with TEXT and BYTE in the Tablespace
(Article written by CURSOR employee Andreas Seifert)
Usually, the table limits in an Informix instance are high enough to ensure smooth operation. When using the old simple large objects TEXT and BYTE, however, this is not always the case. Unlike the newer smart large objects, these blobs cannot be stored in a smart blobspace and still have to be stored in a conventional blobspace.
The problem is that blobs (data types TEXT and BYTE) stored in blobspaces are not logged in the logical log. This is a serious disadvantage if you want to use a high-availability solution such as HDR, RSS, or SDS: since all of these solutions work by shipping logical-log records, objects that do not appear in the logical log are not replicated.
The only way to log simple large objects in the logical log is to forego blobspaces and to store the blobs in the tablespace.
This is where the Informix table limits come into play. Informix allows rows up to 32 KB wide. This limit is not a problem in our case, since a row containing simple-large-object data only stores a 56-byte simple-large-object descriptor per blob column. What is problematic in our case is the limit of 16,777,215 data pages per table fragment. The question is whether the blob pages needed for a simple large object count towards this number.
The Informix documentation says nothing about this. To find out whether blob pages count as data pages, we created a 100 GB chunk. As platform, we used Windows with a 4 KB page size. Into a table with two TEXT columns we inserted 300 KB rows in a loop. The question was: how many rows can we insert?
Scheme of Our Test Table
create table "informix".testtable
(
f01 char(64) not null ,
f02 char(4),
f03 char(30),
f04 char(2),
f05 char(1),
f06 char(5),
f07 char(5),
f08 char(1),
f09 text ,
f10 text ,
f11 char(8),
f12 date,
f13 date
);
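The rows were inserted in a loop via a load file. A minimal sketch of generating such a pipe-delimited dbaccess load file follows; the file name, the row count, and the way the 300 KB are split evenly across the two TEXT columns are illustrative assumptions, not details from the original test:

```python
# Sketch: generate a pipe-delimited load file for dbaccess whose rows
# carry ~300 KB of TEXT data each (split across columns f09 and f10).
# File name, row count, and payload contents are assumptions.
blob = "x" * (150 * 1024)  # 150 KB per TEXT column -> ~300 KB per row

with open("load.unl", "w") as f:
    for i in range(10):  # the real test looped until the insert failed
        # 13 fields matching f01..f13 of the test table; unused fields empty
        fields = [f"key-{i:06d}", "", "", "", "", "", "", "",
                  blob, blob, "", "", ""]
        # dbaccess LOAD format: fields joined by "|", row ends with "|"
        f.write("|".join(fields) + "|\n")
```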
If the blob pages do count as data pages, we should be able to reach the page limit. The number of rows needed for this can be calculated as follows:
300 KB of data per row / 4 KB page size = 75 pages per row
16,777,215-page limit per fragment / 75 pages per row = 223,696.2 rows
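The arithmetic above can be checked with a short script (a sketch; the 300 KB row size and 4 KB page size are the values from the test setup described earlier):

```python
# Estimate how many 300 KB rows fit before a fragment hits the
# 16,777,215-page limit, assuming 4 KB pages as in the test setup.
PAGE_LIMIT = 16_777_215   # maximum pages per table fragment
PAGE_SIZE_KB = 4          # dbspace page size
ROW_SIZE_KB = 300         # payload per row (mostly TEXT data)

pages_per_row = ROW_SIZE_KB / PAGE_SIZE_KB   # 75 pages per row
max_rows = PAGE_LIMIT / pages_per_row        # ~223,696 rows

print(f"{pages_per_row:.0f} pages per row, limit ~{max_rows:,.1f} rows")
```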
Thus, it can be expected that an error occurs once roughly 223,696 rows have been inserted. Indeed, this is what happened: after a while, the load script reported the following error at 222,493 rows. The error arrives slightly earlier than calculated because the home data pages and bitmap pages also count towards the limit:
Error Message of dbaccess While Data Is Loaded
D:\tmp\test>dbaccess testdb load
Database selected.
605: Cannot write blob.
136: ISAM error: no more extents
847: Error in load file row 1.
Error in line 2
Near character position 52
Database closed.
At first, this seems misleading, since the table had only seven extents. However, this error message covers several conditions. The detailed error text advises running onstat -t and examining the output:
Output of onstat -t
IBM Informix Dynamic Server Version 12.10.FC3 -- On-Line -- Up 1 days 00:37:15 -- 74880 Kbytes
Tblspaces
n address flgs ucnt tblnum physaddr npages nused npdata nrows nextns
.
.
.
1010 8135d030 0 1 500001 5:4 50 6 0 4 1
1015 81369030 2 1 600001 6:4 50 4 0 0 1
1016 811e3030 0 1 700001 7:4 100 70 0 68 2
1017 811e3ae0 0 1 700002 7:5 8 6 3 71 1
1018 8119c030 0 1 700003 7:6 16 12 6 557 1
1019 81213030 0 1 700004 7:7 64 43 38 114 1
1028 81452030 0 1 70000d 7:16 8 3 0 0 1
1032 81857030 0 1 700011 7:20 128 70 53 513 5
1046 8185b030 0 1 70001f 7:34 8 3 0 0 1
1067 81933060 0 1 700034 7:511 8 3 0 0 1
1074 81902980 0 1 70003b 7:518 8 2 0 0 1
1082 81830680 2 1 700043 7:526 16777215 16777215 13906 222493 7
1083 81927030 40000002 1 700044 7:527 384 280 0 0 8
1084 8121aa70 40000002 1 700045 7:528 384 278 0 0 8
76 active, 1084 total
An extract from the error text:
"If nused is close to 0xFFFFFF (16,777,215), the maximum number of pages has been reached for the table. Reallocate the table in a dbspace with a larger page size as described above."
This shows that at this point, the page limit for the table fragment has been reached.
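Following the advice from the error text, one can watch for fragments approaching the limit before inserts start failing. A monitoring sketch that scans onstat -t style output for tblspaces whose nused is close to 0xFFFFFF (the column layout is as shown above; the 95% threshold is an arbitrary choice):

```python
PAGE_LIMIT = 16_777_215  # 0xFFFFFF, maximum pages per fragment

def near_limit(onstat_t_output, threshold=0.95):
    """Return (tblnum, nused) for tblspaces close to the page limit."""
    hits = []
    for line in onstat_t_output.splitlines():
        cols = line.split()
        # Data rows look like:
        # n address flgs ucnt tblnum physaddr npages nused npdata nrows nextns
        if len(cols) >= 8 and cols[0].isdigit():
            nused = int(cols[7])
            if nused >= PAGE_LIMIT * threshold:
                hits.append((cols[4], nused))
    return hits
```

Run against the output above, this flags tblspace 700043, the fragment that has exhausted its pages.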
The situation is explained more clearly by oncheck -pT testdb:testtable:
Output of oncheck -pT testdb:testtable
TBLspace Report for testdb:informix.testtable
Physical Address 7:526
Creation date 06/26/2014 09:59:08
TBLspace Flags c01 Page Locking
TBLspace contains TBLspace BLOBs
TBLspace use 4 bit bit-maps
Maximum row size 240
Number of special columns 2
Number of keys 0
Number of extents 7
Current serial value 1
Current SERIAL8 value 1
Current BIGSERIAL value 1
Current REFID value 1
Pagesize (k) 4
First extent size 8
Next extent size 2097152
Number of pages allocated 16777215
Number of pages used 16777215
Number of data pages 13906
Number of rows 222493
Partition partnum 7340099
Partition lockid 7340099
Extents
Logical Page Physical Page Size Physical Pages
0 7:1271 65536 65536
65536 7:66815 262144 262144
327680 7:328975 589824 589824
917504 7:918831 917504 917504
1835008 7:1836399 2359296 2359296
4194304 7:4195823 4194304 4194304
8388608 7:8390383 8388607 8388607
TBLspace Usage Report for testdb:informix.testtable
Type Pages Empty Semi-Full Full Very-Full
---------------- ---------- ---------- ---------- ---------- ----------
Free 59
Bit-Map 2065
Index 0
Data (Home) 13906
TBLspace BLOBs 16761185 0 184 352182 16408819
----------
Total Pages 16777215
Unused Space Summary
Unused data slots 3
Unused bytes per data page 164
Total unused bytes in data pages 2280584
Unused bytes in TBLspace Blob pages 456503339
Home Data Page Version Summary
Version Count
0 (current) 13906
This output also shows that the actual table data required only 13,906 pages; all other pages are blob pages.
The most fitting error message is hidden in the file $DBTEMP/$INFORMIXSERVER.alarm:
Text of the File $DBTEMP/$INFORMIXSERVER.alarm
2014-06-26 12:22:19 WARNING: partition 'testdb:informix.testtable': no more pages
This message states exactly what happened: the page limit of a single partition has been reached. Unfortunately, this is not recorded in the log files, the first place one looks when diagnosing a problem. At least an email is sent if automatic alerting has been activated in the alarm program.
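Since the warning only surfaces in $DBTEMP/$INFORMIXSERVER.alarm, a small watcher can scan that file directly. A sketch, assuming the path is assembled from the two environment variables named above; the function name and threshold-free "no more pages" match are assumptions:

```python
import os

def no_more_pages_warnings(alarm_path=None):
    """Return alarm-file lines reporting an exhausted partition."""
    if alarm_path is None:
        # Path as described in the article: $DBTEMP/$INFORMIXSERVER.alarm
        alarm_path = os.path.join(os.environ["DBTEMP"],
                                  os.environ["INFORMIXSERVER"] + ".alarm")
    with open(alarm_path) as f:
        return [ln.rstrip("\n") for ln in f if "no more pages" in ln]
```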
Should you reach this page limit, you have a number of options:
1. Transferring the table to a dbspace with a bigger page size
Considering the calculation above, doubling the page size only roughly doubles the number of rows that can be inserted. The maximum page size is 16 KB, which in our example means that still only about 890,000 rows can be inserted.
2. Using fragmentation
This is an easy way to solve the problem. Up to 2,047 dbspaces are available, and the table can be fragmented across them, so the page limit is effectively out of reach.
Disadvantage: fragmentation is only allowed in the Enterprise and Advanced Enterprise editions; all other editions may not use it.
3. Using smart large objects
This is a handy solution, since smart large objects can be stored in smart blobspaces, and you can choose whether or not the data in a smart blobspace is included in the logical log. However, smart large objects are handled differently from simple large objects in application development. Switching from simple to smart large objects may therefore require considerable effort to adjust the application, or, with a purchased application, may not be possible at all because you cannot change the application yourself.
Unfortunately, you cannot work around this with a view that casts the smart large objects accordingly: IBM only provides casts that convert TEXT and BYTE objects to the CLOB and BLOB data types, but none that work the other way around. In our case, the problem could therefore only be solved with a self-written UDR that converts TEXT <--> CLOB and BYTE <--> BLOB in both directions.
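To compare the options, the earlier arithmetic can be repeated for each dbspace page size available on a 4 KB-base platform. This sketch shows why even the maximum 16 KB page size (option 1) only raises the row limit by a small factor:

```python
# Row limit per fragment for different dbspace page sizes,
# assuming the 300 KB rows from the test setup.
PAGE_LIMIT = 16_777_215  # maximum pages per fragment
ROW_SIZE_KB = 300

for page_kb in (4, 8, 12, 16):  # page sizes on a 4 KB-base platform
    pages_per_row = ROW_SIZE_KB / page_kb
    print(f"{page_kb:2d} KB pages -> ~{PAGE_LIMIT / pages_per_row:,.0f} rows")
```

At 16 KB pages the limit is roughly 894,785 rows, matching the "about 890,000" figure above, while fragmentation or smart large objects remove the bottleneck entirely.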
Dipl. Wirtschaftsinformatiker Andreas Seifert
(business division IBM Distribution)
CURSOR Software AG
Friedrich-List-Straße 31
D-35398 Gießen