LH file with locked SIZELOCK (AREV Specific)
At 09 DEC 1997 10:40:11AM Per Vestböstad, NCCH, Univ. of Bergen wrote:
Running Arev 3.12 on a Win95 client over NetBEUI from an NT server has somehow remade the main data table (name=OBJEKT) and locked it at modulo=1 in the .LK file (1,024 bytes), so the .OV file has to hold the rest of the data (2,068,480 bytes). This causes some problems for the utilities when trying to fix it:
SELECT OBJEKT gives the message 'RTP57A' Line 215 B706 String space format error.
DUMPLH OBJEKT gives the message 'LHDUMP' Line 2 B703 Variable exceeds max. length
The correct data is probably there - we are able to use single rows - but COUNT, LIST, and LABELS break Arev and send us back into DOS.
How can I change the Sizelock manually? Any other suggestions from experienced people out there?
At 09 DEC 1997 02:00PM John Duquette wrote:
Per,
You can manually change the sizelock in the dump window. Bring up a TCL window (F5) and type DUMP OBJEKT. The sizelock for the file will be displayed in the upper right-hand corner of the screen. You can use the + and - keys to adjust the sizelock to zero.
After the sizelock is reset the file should resize under normal usage, and you should be back in business.
John, Revelation Software
At 10 DEC 1997 06:23AM Per Vestböstad, NCCH, University of Bergen wrote:
Thanks for your swift answer, John.
DUMP OBJEKT gives the same result as DUMPLH OBJEKT (see my first message):
'LHDUMP' Line 2 B703 Variable exceeds max. length
Line 2 'LHDUMP' broke because a runtime error was encountered.
Arev is running in a 4DOS window under Win95.
Any further suggestions will be appreciated.
At 10 DEC 1997 09:59AM Ed Mantz wrote:
If you cannot use DUMP to access the LH file, I have a program that I downloaded from the REV CompuServe area that claims to be able to change the sizelock from TCL. If you would like it I can email it to you - let me know.
ed
At 10 DEC 1997 10:21AM Aaron Kaplan wrote:
See this message for a program that might help.
After you change the sizelock, just start reading and writing records and sooner or later it will start to hash better.
Then again, it might be best to just copy all the records into a temp file, make a new one, then copy the records back.
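If your keys are the sort that can be generated instead of SELECTed (sequential object numbers, say), a small R/BASIC loop can do the copy without a working SELECT. A rough sketch only - the temp table name, key range, and key format are assumptions you would adapt:

    OPEN 'OBJEKT' TO SRC ELSE STOP
    OPEN 'OBJEKT.TMP' TO DEST ELSE STOP
    * Walk a generated key range instead of a select list;
    * keys that do not exist are simply skipped.
    FOR I = 1 TO 6876
       READ REC FROM SRC, I THEN
          WRITE REC ON DEST, I
       END
    NEXT I

(OBJEKT.TMP would have to be created first, and the same loop run in the other direction once OBJEKT has been rebuilt.)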
At 10 DEC 1997 11:28AM Victor Engel wrote:
I've run into this problem before. Arev seems to have no limit when it comes to writing records to a group - new records are simply concatenated to the end of the group. However, reading or resizing the group requires that the entire list of keys be read into a single variable, and with some two megabytes of data in one group, the key list is too long to fit in one variable. That's why you're getting the error message: DUMP (DUMPLH) checks for GFEs immediately, and that check requires reading all the keys.
To resolve this problem, I created a utility to read the file directly one frame at a time using OSBREAD and write the records to a new file. AFAIK, this is the only solution available to you. I'm at work right now, but when I get home, I'll poke around and see if I still have a copy of this routine.
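In the meantime, the general shape is roughly this - a sketch only, where the 1,024-byte frame size is taken from your .LK, the names are placeholders, I'm assuming a read past end-of-file returns nothing, and the frame parsing (the hard part - each frame has a header and delimited records) is left as a comment:

    OSOPEN 'OBJEKT.OV' TO OVFILE ELSE STOP
    OPEN 'OBJEKT.NEW' TO DEST ELSE STOP
    FRAMESIZE = 1024
    OFFSET = 0
    LOOP
       OSBREAD FRAME FROM OVFILE AT OFFSET LENGTH FRAMESIZE
    WHILE LEN(FRAME) DO
       * Parse the keys and records out of FRAME here and
       * WRITE REC ON DEST, KEY for each record recovered.
       OFFSET = OFFSET + FRAMESIZE
    REPEAT
    OSCLOSE OVFILE

The single frame in OBJEKT.LK needs the same treatment, since the first part of the group lives there.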
At 11 DEC 1997 04:23AM Per Vestböstad, NCCH, Univ. of Bergen wrote:
Thanks for sharing your experiences with a part-time programmer!
Victor, I think your explanation best describes what is happening, and I would really like to have a copy of your routine - if you can find it.
Aaron, I will try the sizelock routine, but the normal COPY command only works on specific keys - and that is no solution for 6876 rows.
Ed, if your program is the same one Aaron refers to, I already have it - otherwise I would like to look into it.
Yours
At 11 DEC 1997 09:17AM Aaron Kaplan wrote:
The copy might only work for some keys, true. But as you copy the stuff out, the file should start hashing, leaving more keys available for the copy. Each deletion from the file should, in theory at least, split a group in two, until the file gets close to balance.
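For scale: everything is in one group right now, so the 2,068,480 bytes in the .OV have to spread out across 1,024-byte frames - on the order of 2,000 splits before the file comes anywhere near balance. Reading and rewriting records will get there, but slowly.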
At 11 DEC 1997 12:39PM Victor wrote:
Interesting:
I decided to try creating such a file to test a few things. The results were surprising. I could not even create such a file with the IPX/Advanced NetWare driver - I got an error on write that the group was too large. I changed the driver to Non-Networking, and this time I was able to create an oversized file. I tried DUMPing it and was successful. I changed the sizelock to 0 and wrote a record to the file. It resized appropriately. I didn't try again with a different driver, but I know that I had problems in the past.
Perhaps the simplest thing would be to change to the Non-Networking driver (making sure nobody else is logged on, of course) and make the change there.
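Once the sizelock is back at zero, any write should kick off the resize, as my test showed. Something as small as rewriting one existing row from R/BASIC would do (the key below is just a placeholder for any row known to exist):

    OPEN 'OBJEKT' TO F ELSE STOP
    * Rewrite one existing row in place; with the sizelock
    * at 0, the write itself triggers the resize.
    READ REC FROM F, 'SOMEKEY' THEN
       WRITE REC ON F, 'SOMEKEY'
    END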
Per Vestböstad,
I haven't had a chance to look for my utility yet. If this tip doesn't work, email me and I'll try to find it.
Victor
At 14 DEC 1997 12:11PM Aaron Kaplan wrote:
Well, the system goes out of its way to ensure that you can't create these types of files, so I'm not really all that surprised you had trouble doing it.
The problem generally occurs when the file's alpha value gets corrupted, which tells the system that the file is really big, so it starts compacting groups.
FWIW...