
At 30 NOV 1998 11:21:20AM Jeff Reiter wrote:

Anyone have a memory of working with Rev vG2b?

I have a client who has a billing system running in this old, non-supported version of Rev with a corrupt Invoice file – due to a group format error.

We can't re-create the file (which often fixes this problem with minimal data loss) because the file is so badly corrupted. Can't find the "fix" programs from this old version … would they work anyway? … don't know.

Is anyone capable of, or can you provide a utility for, dumping the data out of this file in ASCII format??? We could then recreate the file from backup.

…and, no, there is not a good, recent backup.

Any suggestions are appreciated. Page me at 312.656.5698 or e-mail me at [email protected]

Thanks in advance.

Jeff


At 30 NOV 1998 12:42PM Victor Engel wrote:

If your file is so badly corrupted that remaking the file won't work, most likely other utilities won't work either. What we used to do was create a new file and then copy from the corrupted file into it. You may want to set a macro on whatever keystroke it takes to bypass the error message.

Of course, with a file corrupted this badly, you realize that even with a "fixed" file you will be missing a lot of data. That's one of the reasons going back to a backup is frequently the best thing to do.


At 30 NOV 1998 12:50PM Jeff Reiter wrote:

Thanks Victor…

Yes, I frequently use the re-create scheme, but this time no luck. The NetWare 3.12 OS reports that the file is - get this - 3.3 GB. That's right, GB! Naturally, Rev can't recognize a file of that size, so re-create will never work.

Barring finding a good backup (still working on that), I think I may tap into the file with a C program because once I get through the dictionary portion of the .LNK file, I should be able to see the data and parse it with char(255)'s, etc.
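Jeff's plan — skip past the dictionary portion, then split the remaining bytes on char(255) record marks — can be sketched in Python instead of C. This is only a sketch under those assumptions: the file name, the flat `skip` offset, and the printable-ASCII filter are illustrative, not part of any Rev utility.

```python
# Sketch of the dump-to-ASCII idea. In Rev, char(255) is the record
# mark (@RM) and char(254) the field mark (@FM); the fixed "skip" for
# the dictionary portion is hypothetical -- adjust to the real file.

RM = bytes([255])  # record mark
FM = bytes([254])  # field mark

def dump_ascii(path, skip=0):
    with open(path, "rb") as f:
        f.seek(skip)           # jump past the dictionary portion
        raw = f.read()
    for rec in raw.split(RM):
        fields = rec.split(FM)
        # keep only printable ASCII so the output can be inspected/reloaded
        cleaned = [bytes(b for b in fld if 32 <= b <= 126).decode("ascii")
                   for fld in fields]
        if any(cleaned):
            print("|".join(cleaned))
```

Output like this could then be massaged and reloaded into a freshly created file, which is exactly the "dump to ASCII, then recreate" route the post asks about.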

Jeff


At 30 NOV 1998 01:10PM Victor Engel wrote:

I've got a program somewhere that'll read through the file. However, it assumes the file is basically intact. It was designed for use on uncorrupted files that just have too many records in a single group.

Parsing by FF's will not be enough. You will have to link to the correct overflow frames. That's where the real trick will be, because with group format errors, the links stored in the file will not be correct.

Have you tried using COPY to a clean file yet?


At 30 NOV 1998 03:54PM Jeff Reiter wrote:

Good point about the overflow sections of the file Victor…

Yes, tried COPY and RE-CREATE… no luck.

Jeff


At 30 NOV 1998 07:51PM Chris Vaughan wrote:

Why use C when you can OSBREAD the ROS file in 1024-byte chunks (frame by frame)?

I've had to do quite a few 'manual' file recovery/rebuilds in the past and would recommend Rbasic over C (but wouldn't that be true for just about anything?)
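For reference, OSBREAD reads raw bytes from a file at the operating-system level, which is what makes the frame-by-frame loop possible. A rough Python equivalent of that loop (the 1024-byte frame size is taken from the post; the file name is hypothetical):

```python
FRAME = 1024  # RevG frame size, per the post

def frames(path):
    """Yield (frame_number, frame_bytes) one 1024-byte frame at a time --
    the moral equivalent of an OSBREAD loop over the whole file."""
    with open(path, "rb") as f:
        n = 0
        while True:
            chunk = f.read(FRAME)
            if not chunk:
                break
            yield n, chunk
            n += 1
```

Scanning each frame independently is what sidesteps the corrupt group links: you never have to trust a stored pointer to find the next chunk of data.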


At 01 DEC 1998 03:10AM Charles Schmidling wrote:

Okay, after reading everybody else's ideas, I'll throw mine into the ring.

First, somewhere around there is the LINK.RECOVER routine that was on the RevG2 Utilities disk. Don't know if it works or not; I have not had your problem … yet.

Second, RECREATE-ing the file will attempt to create another LNK file, and I have never liked LNK files. Hence, I suggest you create a dummy file in the ROS format, allowing for a very large modulo. Try this:

CREATE-FILE dummy 10 64000 C (or whatever drive) and say "No" to a link file.

This will create a file with 11 "buckets" to start. Copy the DICT and DATA from the Invoices to the Dummy and see what you get.
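The modulo matters because a hashed file assigns each key to a group ("bucket") by hashing the key and taking it modulo the group count, so a larger modulo spreads the same records over more groups and keeps each one out of overflow. A toy illustration — the hash here is a stand-in, not Rev's actual algorithm:

```python
def bucket(key: str, modulo: int) -> int:
    # Stand-in hash: sum of the key's character codes mod the group count.
    # Rev's real hashing differs; this only shows the principle.
    return sum(key.encode("ascii")) % modulo

# Every key lands in exactly one of `modulo` groups; raising the modulo
# means fewer records per group and therefore less frame overflow.
```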

Third, the OSBREAD idea in 1024-byte chunks would work pretty well too. It may even work better, since you're not locked into the individual record lengths.

Much luck,

Charles Schmidling

DATASCAN Systems, Inc.

[email protected]


At 01 DEC 1998 11:38AM Victor Engel wrote:

Using a ROS file is a bad idea if the file is being shared or if the file is large. The whole idea of the ROS FS is that it is RAM-based whenever possible. This makes it incompatible with multi-user updates and inefficient for large files. Actually, given the structure of the extension for ROS files, there is also a significant size limitation.


At 01 DEC 1998 06:15PM Chris Vaughan wrote:

You may not like link files, but I can assure you that they are absolutely essential for anything but trivial applications. Record locking is not possible with ROS files, and the file structure collapses when the sum of the lengths of all the record keys exceeds 64K bytes.

ROS files were designed to give high performance when READING small tables that are NEVER updated in multi-user mode.


At 02 DEC 1998 02:30AM Charles Schmidling wrote:

Ah, so true. Try that same set of records and sort them by an MV field. Major-scale pain in the keester. Since I never quite figured out that SORT-FILE that's supposed to handle the overflow from these kinds of SSELECTs, I have had to create my own sort methods that allow for the overflow. True, you lose access to the READNEXT functions, but one's own sorts, even insertion sorts, run faster than SELECT sentences.

The LNK file schema was expressly designed for network applications. It would be next to impossible to track the locking logic necessary for ROS files. However, when LNK files were first introduced, my own tests found that they operated SLOWER than ROS files in smaller applications and were disk-space pigs. That was in XT days. Now the speeds are blazingly fast; my reports that once took 4 hours to start now take 5 minutes.

Because RevG only does hash coding, I have found that ROS files still work efficiently up to a modulo of 18-20 and around 6K or so transaction records. Doing my own indexing helps a lot. The slowdown comes when your modulo is not high enough to prevent frame overflow.

True, this doesn't allow for multiple users, and gets converted in network applications, but should something get corrupted, it usually only involves one frame and can be recovered with relative ease.

Just some thoughts,

Charles Schmidling

DATASCAN Systems, Inc.

[email protected]


At 02 DEC 1998 02:53AM Charles Schmidling wrote:

Sorry Victor, I seem to have abbreviated my response a bit. What you say, of course, is true to a point.

The suggestion was only designed to resolve the original problem: lost stuff.

My intent was to suggest he start from scratch and set up a temporary ROS file to accept the recovered data. It is not meant for multiple access. RevG will allow both types to be attached at the same time. Since the frames and file sizes grow with each record, it could help determine the actual size of his data. It would be important to select a sufficiently large modulo to keep some kind of file efficiency alive.

I would prefer creating or modifying the LINK.RECOVER routine to search the file, seek out each record by some sort of pattern, write the recovered records to the ROS file, and set the weird stuff aside to come back to manually.

When done, either generate a new file or do a RECREATE-FILE using the LNK format. Since ROS files can grow surprisingly large, you can get a handle on how big the LNK file should have been in the first place.

Just a clarification,

Charles Schmidling

DATASCAN Systems, Inc.

[email protected]


At 02 DEC 1998 03:12AM Anonymous wrote:

The technique I remember using is the DUMP utility. Skip to the first bad frame and put an FF right after the frame header. What I believe this does is remove any pointers from that frame to overflow frames. If you are lucky, there are only a few primary frames that point to corrupted frames. It is similar to an F in the Arev dump, and it is what recover.link is supposed to do programmatically.

Eventually you get a clean enough file, so that you can copy the good records to another table. If you really want to try and get all of it, take screen shots of the frames you mess with and any good overflow frames.
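The DUMP edit described above — dropping an FF just past a bad frame's header to cut its overflow pointers — amounts to a one-byte patch at a computed offset. A Python sketch of the same patch (the 1024-byte frame size comes from earlier in the thread; the header length here is a hypothetical placeholder):

```python
FRAME = 1024   # frame size, per the thread
HEADER = 12    # hypothetical header length -- verify against a real frame

def sever_overflow(path, frame_no):
    """Write a single 0xFF just after a frame's header, as the DUMP
    technique does by hand. Run this only on a COPY of the file."""
    with open(path, "r+b") as f:
        f.seek(frame_no * FRAME + HEADER)
        f.write(bytes([255]))
```

As with the manual method, the point is to lose a frame's overflow chain deliberately rather than let bad pointers corrupt the copy pass.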


At 17 JAN 1999 01:00AM Susan Misnick ([email protected]) wrote:

We also use the DUMP facility, but we have found that the group format error usually occurs while a frame is being rewritten. Because of this, it is usually a 5-digit record length that is incorrect in the frame. By actually counting the length of each record and fixing the 5-digit lengths, we have fixed many group format errors. If data is duplicated, as it will be if the frame is in the process of being rewritten, make the duplication look like one long record by removing the end-of-record indicators and fixing the 5-digit length.

When done, you can access the records, and perhaps TEXT into the bogus one and fix it that way, or delete it if necessary.
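The repair described here — counting a record's actual bytes and rewriting its 5-digit length field — reduces to recomputing a zero-padded decimal count. A small sketch (whether the count includes the length field itself or any delimiters depends on the file format, so treat the convention below as an assumption):

```python
def fixed_length_prefix(record: bytes) -> bytes:
    # Recompute the 5-digit, zero-padded length field from the record's
    # actual byte count -- the by-hand counting step, automated.
    # Assumption: the count covers the record body only.
    if len(record) > 99999:
        raise ValueError("record too long for a 5-digit length field")
    return b"%05d" % len(record)
```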
