
At 20 APR 1998 02:08:23PM Ginny Soyars wrote:

My customer is using AREV 3.03. A table that is normally about 25 meg in size has suddenly grown to over 100 meg, and a window that edits or adds data to this table has started running extremely slowly. If I try to Verify the table, the PC locks up. If I try to Edit the table at TCL, it exits back to DOS and displays the following message:

  Your program has caused a divide overflow error. If this problem persists, contact your vendor.

The same error occurs if I try to use CopyRow or Export the data to an ASCII file. If I try to Dump the table, it returns a "Variable exceeds maximum length" error. I can add and edit data in this table through the data entry window, but I can't do anything else with it or get the data out of it. I've moved this table to a clean system completely off of the network and get the same results. Does anyone have any ideas or suggestions? I really just want to get the data out of the table into a good clean table if possible.


At 21 APR 1998 03:55AM Egbert Poell wrote:

Try accessing and repairing the table in a newly built AREV directory.

I think something in your AREV files is damaged.

Good luck,

Egbert Poell

Mecomp Automatisering


At 21 APR 1998 11:39AM Victor Engel wrote:

Do a LISTMEDIA of the volume on which the file is located to find the name of the DOS file. Then do a directory listing for that file. What are the sizes of the .LK and .OV portions of the file? I suspect that you may have run into a situation where the sizelock was set on the file so that it was not able to expand when needed. This would cause excessive overflow to accumulate (the .OV file is huge compared to the .LK file). Eventually, one of the groups in your file will contain more than 64K worth of keys, and much of the core functionality will no longer work. Even RTP57, the base filing system, gathers a list of keys for the group when doing a read, for example. I have found that writes continue to work, but reads fail. DUMP will also fail for the same reason.
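The 64K ceiling Victor describes is the kind of limit imposed by a 16-bit length field. A minimal sketch of the arithmetic (pure illustration, not AREV code; the key-list sizing below is an assumption, not AREV's actual group layout):

```python
# Illustration only: why one group's key list can outgrow a 64K limit
# when a sizelocked file cannot expand. AREV's real internals differ.

MAX_GROUP_BYTES = 64 * 1024  # 65,536-byte ceiling, as with a 16-bit length

def key_list_size(num_keys: int, avg_key_len: int) -> int:
    """Approximate bytes needed for one group's key list,
    assuming one delimiter byte per key (hypothetical layout)."""
    return num_keys * (avg_key_len + 1)

# A healthy file spreads keys across many groups:
print(key_list_size(200, 12))  # -> 2600, well under the ceiling

# With sizelock preventing resizing, one group keeps absorbing keys
# until reads (and DUMP) start failing:
print(key_list_size(6000, 12) > MAX_GROUP_BYTES)  # -> True
```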

If this is the case, you may be able to do a copytable, although I think this will fail also. I wrote a utility a few years ago to get around this exact problem. What it does is to OSBREAD through the file, making sure to account for the linear hash formatting and then writing to a different file. If I get some time I'll try to find it and post it to my web site.
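Victor's utility itself isn't posted here, but the general shape of the approach, reading the damaged OS file sequentially in frames and rewriting whatever records survive to a new file, can be sketched in Python. The frame size and the record parser below are placeholder assumptions, not AREV's actual linear-hash formatting, which a faithful port would have to decode:

```python
# Hedged sketch of an OSBREAD-style recovery pass: scan the damaged
# file frame by frame and append recovered records to a new file.
# FRAME_SIZE and extract_records() are placeholders; the real AREV
# linear-hash frame layout must be honored for a faithful tool, and
# records spanning a frame boundary would be split by this sketch.

FRAME_SIZE = 1024  # assumed frame size; AREV's actual value may differ

def extract_records(frame: bytes):
    """Placeholder parser: yield printable runs as candidate records.
    A real tool would parse each frame's linear-hash header instead."""
    current = bytearray()
    for b in frame:
        if 32 <= b < 127:
            current.append(b)
        elif current:
            yield bytes(current)
            current = bytearray()
    if current:
        yield bytes(current)

def recover(src_path: str, dst_path: str) -> int:
    """Scan src frame by frame, writing recovered records to dst.
    Returns the number of records written."""
    count = 0
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        while True:
            frame = src.read(FRAME_SIZE)
            if not frame:
                break
            for rec in extract_records(frame):
                dst.write(rec + b"\n")
                count += 1
    return count
```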

Victor


At 21 APR 1998 12:37PM Joe Nirta wrote:

Victor,

I'm experiencing the same problem as Ginny: tiny .LK and huge .OV.

Please let me know if you post your utility program as I would like to use it as well.

Thanks

Joe Nirta

[email protected]


At 22 APR 1998 01:27PM Ginny Soyars wrote:

Victor,

It sounds like the utility you wrote may be my only way of recovering the data from this corrupted file. I would appreciate it VERY much if you could share that with me. I think it's my only hope of salvaging the data from the file (my customer has a backup, but it is rather old and they don't want to go that route, of course). Would you please eMail me directly at [email protected], or just write a response indicating what your web site address is so I can check for the program to download? Thanks a million!

Ginny Soyars

[email protected]


At 23 APR 1998 09:23AM Victor Engel wrote:

My utility is only on a zip disk at the moment, and my zip drive is unavailable because of problems relating to a recent installation of a film scanner. However, I just remembered something. The last time I ran into this problem, I tried duplicating the problem under different conditions. A result of my investigation showed that there is a way you can recover from this sort of situation without a utility. Follow these steps:

  • Ensure that no other user is logged in to Arev.
  • Change your network driver to non-networked. With the non-networked driver, you will be able to resize your file.
  • Dump the file in question, and change the sizelock to 0.
  • Perform some I/O to the file to force it to resize. I like to COPY filename * (O) TO: (filename. Make sure that whatever I/O process you use does not set the sizelock to 2; in that case, no resizing would occur.
  • Dump the file again and check that the overflow is distributed reasonably. If not, repeat the previous step.
  • Change your network driver back to what it was.
  • Allow your users back on.
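The "distributed reasonably" check in the steps above can be approximated from outside AREV by comparing the sizes of the table's .LK and .OV files on disk, as described earlier in the thread. A small sketch (the ratio threshold is an arbitrary rule of thumb, not a Revelation-documented value):

```python
# Sketch: flag the "tiny .LK, huge .OV" symptom by comparing the two
# DOS files that back an AREV table. The 1.0 threshold is arbitrary.
import os

def overflow_ratio(table_base: str) -> float:
    """Return .OV size divided by .LK size for an AREV table's DOS
    files; table_base is the path without extension."""
    lk = os.path.getsize(table_base + ".LK")
    ov = os.path.getsize(table_base + ".OV")
    return ov / lk if lk else float("inf")

def looks_healthy(table_base: str, max_ratio: float = 1.0) -> bool:
    """Rule of thumb: a table whose overflow dwarfs its primary .LK
    file likely needs another resize pass."""
    return overflow_ratio(table_base) <= max_ratio
```

Run it after each resize pass; if the ratio is still large, repeat the I/O step.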

Let me know if this works for you.


At 23 APR 1998 09:28AM Victor wrote:

Whoops! It looks like my HTML didn't work for some reason. Sorry about that.


At 25 APR 1998 10:26AM Aaron Kaplan wrote:

I've had very good luck solving these when there's a listing of keys somewhere, or the keys can be retrieved somehow: BTREE.EXTRACT, an old saved list, or sequential keys.

Anyway, create a new file and start reading from the old, writing to the new, deleting from the old as you go. Sooner or later the file will get to a point where it's small enough to manipulate in DUMP; then drop the sizelock back to 0 and it will start hashing again (which makes the process go quicker). Once you're satisfied the file is accessible again, you can either continue with the deletion and just work with the new copy, or copy all the stuff back.
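Aaron's read/write/delete migration can be sketched generically. Dictionaries stand in for the old and new tables here, and the key list is assumed to come from one of the sources he names (BTREE.EXTRACT, a saved list, or sequential keys); this is an illustration of the loop, not AREV code:

```python
# Sketch of the incremental migration: copy each record to the new
# table, then delete it from the damaged one so the old file shrinks
# until DUMP can manage it again. Dicts stand in for AREV tables.

def migrate(old: dict, new: dict, keys) -> int:
    """Move records keyed by `keys` from old to new; returns the
    number of records moved."""
    moved = 0
    for key in list(keys):
        if key not in old:   # skip keys the list has but the table lost
            continue
        new[key] = old[key]  # write to the clean table first...
        del old[key]         # ...then delete from the damaged one
        moved += 1
    return moved
```

Writing before deleting means an interruption loses no data; at worst a record exists in both tables and the key list can simply be replayed.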

[email protected]

Sprezzatura, Inc.

www.sprezzatura.com
