DUMP utility bug (AREV Specific)
At 02 FEB 2000 03:49:49PM Bruce D. Williams wrote:
We use AREV 2.12 with the AREV NLM on a NetWare 4.11 server.
We have several large (> 2 GB) linear hash files which show GFEs when examined using DUMP. The forward and skip values for the last frame of the group having a GFE are non-zero, but DUMP appears unable to read the frame referenced by the forward value.
Directly reading the record which is split across the frame boundary shows no problems (implying the file is really OK). In addition, the system routine LH_VERIFY_SUB does not report these groups as having a GFE (again implying the file is really OK).
It appears that DUMP is unable to read the frame pointed to by the forward value for some groups in large linear hash files. A cursory inspection shows that most of these forward values are very large (> 2,000,000); does DUMP perhaps limit frame values to a 21-bit integer? (2^21 = 2,097,152, so 2,097,151 is the largest unsigned value representable in 21 bits.)
Has anyone else encountered this behavior? Is there an improved DUMP which doesn't exhibit this problem?
At 03 FEB 2000 04:50AM Steve Smith wrote:
Bruce,
I doubt that there are many people here who would run DUMP in circumstances like yours. DUMP is (in my experience) not that good on exceptionally large files. Because the source code is not available (that I know of), there is no way to tell what constraints it has. Please understand that DOS is really bad at handling files over 2 GB, but maybe NTFS or Novell is better. Some GFEs elude DUMP altogether, and you have to patch first, then run DUMP, especially on weird forward-pointer problems traversing the OV and LK parts of the file. I recall one case where a New Zealand developer of immaculate technical ability was unable to solve a GFE problem in Australia with a local client because DUMP would not behave. It took several hours with a hex editor to edit the file before DUMP would begin to work.
Steve
At 03 FEB 2000 10:24AM [email protected] wrote:
Steve,
Just as a side note, I recall that many years ago we "lost" a critical file. (The details are not fresh, but the anxiety is still remembered.) It happened only once, and there was no current backup. We had to edit the hex values in DUMP. Scary for us, since we had never done anything like this before or since. In any case, using an RTI tech bulletin, we were able to muddle through (with luck) and resurrect our lost file.
"Hex" is not a four-letter word, but it make me shiver.
Ray Chan ~ Symmetry Info
At 03 FEB 2000 11:28AM Victor Engel wrote:
>We use AREV 2.12 with the AREV NLM on a NetWare 4.11 server.
>We have several large (> 2 GB) linear hash files which show GFEs when examined using DUMP. The forward and skip values for the last frame of the group having a GFE are non-zero, but DUMP appears unable to read the frame referenced by the forward value.
>Directly reading the record which is split across the frame boundary shows no problems (implying the file is really OK). In addition, the system routine LH_VERIFY_SUB does not report these groups as having a GFE (again implying the file is really OK).
>It appears that DUMP is unable to read the frame pointed to by the forward value for some groups in large linear hash files.
Have you verified by independent means that that location in the file is a valid one? Sometimes a frame format error or group format error will cause these pointers to get out of whack. If such a pointer gets corrupted, it is possible (barring other, cleaner recovery methods) to scan the file using OSBREAD to find all frames associated with the group; a rough sketch follows. However, be aware that just because a frame is found that appears to be in a group doesn't mean it actually is. It could simply be unused (or no longer used) space.
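Off the top of my head, something along these lines would do the scan (untested; FWD.POS is a placeholder for the real offset of the 4-byte forward pointer within the frame header, which you would have to confirm for your LH version, along with the byte order; little-endian is assumed):

* Sketch: walk the file frame by frame with OSBREAD and print every
* non-zero forward pointer, so a group's chain can be pieced
* together by hand.
FRAME.SIZE = 1024
FWD.POS = 1            ;* PLACEHOLDER - verify against your frame layout
OSOPEN "BIGFILE.LK" TO F.VAR ELSE STOP
FRAME.NO = 0
LOOP
   OSBREAD BUF FROM F.VAR AT FRAME.NO * FRAME.SIZE LENGTH FRAME.SIZE
WHILE LEN(BUF) = FRAME.SIZE DO
   * assemble the 4 pointer bytes, low byte first (an assumption)
   PTR = SEQ(BUF[FWD.POS,1])
   PTR = PTR + (SEQ(BUF[FWD.POS+1,1]) * 256)
   PTR = PTR + (SEQ(BUF[FWD.POS+2,1]) * 65536)
   PTR = PTR + (SEQ(BUF[FWD.POS+3,1]) * 16777216)
   IF PTR THEN PRINT FRAME.NO : " -> " : PTR
   FRAME.NO = FRAME.NO + 1
REPEAT
OSCLOSE F.VAR

Note that on a file over 2 GB this loop may run into whatever 32-bit offset limit OSBREAD itself has, which is the same class of problem you are chasing in DUMP.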
>A cursory inspection shows that most of these forward values are very large (> 2,000,000);
Again, does a comparison to the file size indicate this is reasonable? If not, it would appear you have a corrupted file.
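For instance, if a forward pointer reads, say, 2,500,000 and the frame size is the default 1024, the frame it points at would start at byte 2,500,000 * 1024, about 2.56 GB into the file; if the file is smaller than that, the pointer cannot be valid.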
>does DUMP perhaps limit frame values to a 21-bit integer? (2^21 = 2,097,152, so 2,097,151 is the largest unsigned value representable in 21 bits.)
Not sure what you mean by frame values. I assume you mean frame position values. If I recall correctly, the file format uses 4 bytes to encode this, so the maximum position would be 256^4 - 1, or 4,294,967,295, times the frame size (default 1024).
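At the default 1024-byte frame size, that works out to a ceiling of 4,294,967,295 * 1024 bytes, roughly 4 terabytes, so the on-disk pointer format itself should be nowhere near its limit on a file of 2 GB or so.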
>Has anyone else encountered this behavior? Is there an improved DUMP which doesn't exhibit this problem?
Are you sure DUMP is at fault and not your file?
At 03 FEB 2000 04:01PM Warren wrote:
Try octal if hex makes you shiver… your skin will probably fall off. The system programmers I worked with could breathe that stuff.
At 03 FEB 2000 06:43PM [email protected] wrote:
Warren,
When I was much, much younger, I worked on some CDC computers and octal was used. Then I switched to IBM mainframes, where hex was the norm. Of course, I was smoking 5 packs a day then. Now that I'm working mostly with PCs (with Windows), I have quit smoking and am much healthier.
I'm afraid that if I went back to that kind of environment where octal/hex was the norm, I would start smoking unhealthy things once again.
Therefore, the surgeon general should recommend Windows for better health.
Ray Chan ~ Symmetry Info
At 03 FEB 2000 07:49PM Bruce D. Williams wrote:
I have verified the record using an R/Basic READ statement. The problem is definitely in DUMP.
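The check was nothing more elaborate than this (paraphrasing from memory; the file and key names are stand-ins):

OPEN "BIGFILE" TO F.VAR ELSE STOP
* read the record that straddles the frame boundary of the "bad" group
READ REC FROM F.VAR, "PROBLEM.KEY" THEN
   PRINT "read OK - " : LEN(REC) : " bytes"
END ELSE
   PRINT "read failed"
END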
The file size is consistent with the modulo and frame size (set to the default of 1024). However, the record count in frame 0 is screwed up in this file (fairly simple to fix and I don't think this is the source of the problems with DUMP).
My observation has been that all the groups incorrectly reported by DUMP to have GFEs have a forward value in the last frame displayed by DUMP greater than 2^21 (i.e., the position of the beginning of the next frame in the group, as indicated by the forward pointer, is beyond the 1024 * 2^21st byte). This is not an absolute statement, just an observation based on inspecting 10 or so groups in DUMP, plus an educated guess. I have no idea what the significance of 2^21 is (it doesn't seem to correspond to any multiple of 8-bit bytes)…
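(Although, multiplying it out as I type: 2^21 frames * 1024 bytes per frame = 2^31 bytes = 2,147,483,648, which is exactly where a signed 32-bit byte offset overflows. So if DUMP computes frame byte offsets in a signed 32-bit variable, any frame number past 2^21 would go negative at the default 1 KB frame size. Pure speculation on my part, but it would at least explain the boundary.)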
This problem with DUMP concerns me for two reasons:
1) DUMP supposedly fixes *some* GFEs (I know it can't fix everything, but something is better than nothing)
2) DUMP supposedly displays the content of any frame in an LH file. Right now, it won't display any frame numbered greater than 2^21. This prevents even manual attempts at fixing an LH file with a GFE, since you have no idea of the contents of frames past 2^21.
I suppose I'll end up coding my own version of a hex editor for LH files if no working version of DUMP exists…
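Something like this would be the starting point (untested; the file name and frame size are stand-ins, and note that OSBREAD may suffer the same signed 32-bit offset limit past 2 GB, in which case frames beyond 2^21 would still be out of reach):

* Sketch: hex-dump a single frame of an LH file via OSBREAD.
FRAME.SIZE = 1024
OSOPEN "BIGFILE.LK" TO F.VAR ELSE STOP
PRINT "Frame number " :
INPUT FRAME.NO
OSBREAD BUF FROM F.VAR AT FRAME.NO * FRAME.SIZE LENGTH FRAME.SIZE
HEX = ""
FOR I = 1 TO LEN(BUF)
   * one hex value per byte (values under 16 print unpadded)
   HEX = HEX : OCONV(SEQ(BUF[I,1]), "MX") : " "
NEXT I
PRINT HEX
OSCLOSE F.VAR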