
Informix - Problem description

Problem IT06393 Status: Closed

READ AHEAD THREAD CAN FREE RPARTN MEMORY AFTER HDR PRIMARY SERVER PUTS
ITSELF INTO LOGICAL RECOVERY MODE

product:
INFORMIX SERVER / 5725A3900 / C10 - IDS 12.10
Problem description:
In an HDR pair, if the servers disconnect and reconnect, and at 
the reconnect point the primary determines it needs to perform 
logical recovery from the secondary server (which has switched up 
into standard mode), then under some conditions the read-ahead 
thread, if it runs at that point, can free the memory for rpartn 
structures while leaving pointers to that memory behind. This can 
lead to various memory corruption issues in the RSAM pool, or to 
"Invalid Mutex Type" assertion failures when the leftover pointers 
are dereferenced as rpartn structures although the memory is now 
in use as something else. 
 
Two different invalid mutex assertion failures (AFs) that were encountered are shown below: 
 
1) 
 
10:02:40  DR: Turned off on primary server 
10:02:40  DR: Cannot connect to secondary server 
10:03:31  DR: Primary server connected 
10:03:31  DR: Primary server needs failure recovery 
 
10:03:43  Physical Recovery Started at Page (1:1126). 
10:03:43  Recovery Mode 
10:03:43  Physical Recovery Complete: 0 Pages Examined, 0 Pages Restored. 
10:03:44  DR: Failure recovery from disk in progress ... 
10:03:44  Logical Recovery Started. 
10:03:44  10 recovery worker threads will be started. 
10:03:44  Start Logical Recovery - Start Log 4, End Log ? 
10:03:44  Starting Log Position - 4 0x5d018 
... 
10:13:55  Assert Failed: Invalid Mutex Type 
10:13:55  IBM Informix Dynamic Server Version 12.10.F 
 
10:13:55   Who: Session(5, informix@vox, 0, 0x44d79148) 
                Thread(19, btscanner_0, 44d357a8, 1) 
 
10:13:55  Stack for thread: 19 btscanner_0 
 
afstack 
afhandler 
afcrash_interface 
mt_slock 
btc_create_hot_list 
btscanner_loop 
th_init_initgls 
startup 
 
2) 
 
03:43:36  DR_ERR set to -1 
03:43:39  DR: Turned off on primary server 
03:43:39  DR: Cannot connect to secondary server 
 
03:44:04  DR: Primary server connected 
03:44:04  SCHAPI: thread dbWorker2 task post_alarm_message(19-30087) shutting down 
03:44:04  SCHAPI: thread dbWorker1 task post_alarm_message(19-30088) shutting down 
03:44:04  SCHAPI: thread dbScheduler(116) shutting down 
03:44:04  DR: Primary server needs failure recovery 
 
03:44:04  Physical Recovery Started at Page (2:92306). 
03:44:05  Physical Recovery Complete: 593 Pages Examined, 593 Pages Restored. 
03:44:05  Recovery Mode 
03:44:05  DR: Failure recovery from disk in progress ... 
03:44:06  Logical Recovery Started. 
03:44:06  10 recovery worker threads will be started. 
03:44:06  Start Logical Recovery - Start Log 14455, End Log ? 
03:44:06  Starting Log Position - 14455 0x7a9018 
03:44:06  DR: Cleared 6076 KB of logical log in 0 seconds. 
... 
03:44:13  Assert Failed: Invalid Mutex Type 
03:44:13   Who: Session(94022, informix@machine, 0, 0x1523b46d8) 
        Thread(44159, xchg_1.3, 14db3bdf8, 1) 
03:44:13  Stack for thread: 44159 xchg_1.3 
 
 base: 0x0000000152bb1000 
  len:   69632 
   pc: 0x0000000001369653 
  tos: 0x0000000152bbfa70 
state: running 
   vp: 1 
 
afstack 
afhandler 
afcrash_interface 
mt_lock 
ptalloc 
flalloc 
rspnopen 
pntorsfd 
plogredo 
rlogm_redo 
next_recvr 
producer_thread 
startup
Problem Summary:
**************************************************************** 
* USERS AFFECTED:                                              * 
* Users with HDR servers                                       * 
**************************************************************** 
* PROBLEM DESCRIPTION:                                         * 
* See Error Description                                        * 
**************************************************************** 
* RECOMMENDATION:                                              * 
* Update to IBM Informix Server 12.10.xC5                      * 
****************************************************************
Local Fix:
not known
Solution:
Problem fixed in IBM Informix Server 12.10.xC5
Workaround:
not known / see Local Fix
Timestamps
Date - problem reported  : 09.01.2015
Date - problem closed    : 16.10.2015
Date - last modified     : 16.10.2015
Problem solved at the following versions (IBM BugInfos)
12.10.xC5
Problem solved according to the fixlist(s) of the following version(s)
12.10.xC5 FixList
12.10.xC5.W1 FixList