
At 12 JAN 2000 08:59:37PM e. miller wrote:

We are experiencing a problem in REVG with what acts like it should be a group format error, yet isn't. When selecting certain records from a file, the select randomly does what I refer to as a 'loop': it never finishes. You can't Ctrl-Break it, and you can't access the record at all. Listing out the file locks up after that record (which I've seen with GFEs), but there is no error message. Generally, the record releases within the next few hours. Is it possible to have an error, or corrupt data in a frame, and not get the actual error message?


At 13 JAN 2000 06:01AM Warren wrote:

Most likely the file is in desperate need of a resizing. If it is a linked (LNK) file, do a DUMP on it and peek at the number of overflow frames. If it is a ROS file, do a DIR on the native ROSxxxxx files and look at the number of overflow files there.


At 13 JAN 2000 11:29AM Victor Engel wrote:

If it is bloated so much that the list of keys in the first group exceeds 64K, then I think DUMP will fail. In that case, the only recourse I know of is to use a utility to copy the records to a new file. I used to have such a utility for REVG for exactly this purpose, but I don't think I have it anymore. What it does is traverse the file a frame at a time using OSBREAD, piecing the records together and writing them out to the new file. Knowledge of the REVG file header structure is required to do this.
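The frame-walking idea can be sketched generically. This is a minimal Python illustration, assuming only the 1K frame size; the real REVG header layout (forward/backward pointers, key lists) is deliberately not modeled here, since that is the part requiring internals knowledge.

```python
# Sketch of the OSBREAD-style read loop: pull the OS-level file in
# fixed-size frames instead of going through the database layer.
# FRAME_SIZE = 1024 matches LNK frames; parsing records out of the
# frames (the hard part) is omitted.
import os
import tempfile

FRAME_SIZE = 1024

def walk_frames(path, frame_size=FRAME_SIZE):
    """Yield the raw frames of a file, one frame at a time."""
    with open(path, "rb") as f:
        while True:
            frame = f.read(frame_size)
            if not frame:
                break
            yield frame

# Demo on a scratch file: three 1K frames of repeated byte values.
tmp = tempfile.NamedTemporaryFile(delete=False)
tmp.write(b"A" * 1024 + b"B" * 1024 + b"C" * 1024)
tmp.close()
frames = list(walk_frames(tmp.name))
print(len(frames))    # 3
print(frames[0][:1])  # b'A'
os.unlink(tmp.name)
```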


At 13 JAN 2000 02:36PM Warren wrote:

Sounds like a Larry Coon powertool.


At 13 JAN 2000 02:49PM Warren wrote:

As long as a COPY works, it shouldn't be much of a problem resizing the file. One can do manually what RECREATE-FILE does: create a temp file with the new modulo, copy from the source file to the temp, delete the source file, and rename the temp.
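The create/copy/delete/rename sequence is a general pattern. Here is a sketch using plain OS files as a stand-in (in REVG itself you would use CREATE-FILE with the new modulo, COPY, and the delete/rename tools, not these Python calls):

```python
# Resize-by-copy pattern: create a temp file, copy the source into it,
# delete the source, then rename the temp into the source's place.
import os
import shutil
import tempfile

def recreate(path):
    """Rebuild `path` via a temporary copy, then swap it into place."""
    fd, tmp_path = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    os.close(fd)
    shutil.copyfile(path, tmp_path)  # copy source -> temp
    os.remove(path)                  # delete source
    os.replace(tmp_path, path)       # rename temp to the source name

# Demo on a scratch file: contents survive, file is freshly written.
with open("demo.dat", "wb") as f:
    f.write(b"payload")
recreate("demo.dat")
with open("demo.dat", "rb") as f:
    data = f.read()
print(data)  # b'payload'
os.remove("demo.dat")
```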

If DUMP chokes because of the size of the file, one can always guesstimate the number of overflow frames by calculating the size of the base file: (dict modulo * 1024) + (data modulo * 1024).

Subtract that from the size of the LNK file as reported by DIR, then divide by 1024; that should give you a figure close to the number of overflow frames (dict and data combined).
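The arithmetic above, in code form (the modulo and file-size values in the example are made up, purely for illustration):

```python
# Estimate overflow frames in a LNK file: the base file occupies
# (dict modulo * 1024) + (data modulo * 1024) bytes; whatever the
# OS-level file size exceeds that by, divided by 1024, is roughly
# the overflow frame count (dict and data combined).
FRAME_BYTES = 1024

def estimate_overflow_frames(lnk_file_bytes, dict_modulo, data_modulo):
    base_bytes = (dict_modulo * FRAME_BYTES) + (data_modulo * FRAME_BYTES)
    return (lnk_file_bytes - base_bytes) // FRAME_BYTES

# Example: a 5000K LNK file with dict modulo 1 and data modulo 999.
print(estimate_overflow_frames(5000 * 1024, 1, 999))  # -> 4000
```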

Then there's always my FILE-FACTS utility, which does some frame walking based on the header info and counts the overflow frames.


At 13 JAN 2000 03:33PM Victor Engel wrote:

I think both RECREATE-FILE and COPY also choke under these circumstances. I'm fairly confident about the COPY command choking, but less confident about the RECREATE-FILE choking.

Look at it this way. Every time a record is read, the entire group is read. How else does the system know if there is a group format error? I don't remember how REVG works, but in the Pick system I used to use, the entire group was also written back when writing a record. That would seem necessary unless the record size before and after is identical, or unless slack is introduced between records (which it isn't).


At 13 JAN 2000 07:33PM Warren wrote:

I worked on Microdata systems with 64K or less of CORE memory and some of the people I worked with were very heavy into the internals of the Pick System.

If a key is specified the group is only read until a hit is made. Thus if a group has 32 overflow frames and the record is in overflow frames 10 thru 20 the group is not read past overflow frame 20. That's why the best way to detect a GFE was to do a COUNT on the file.

On writes, unless the record size changed, the record would be written out to the same area in the logical group, thus in the above example only overflow frames 10 thru 20 would get updated if the record did not change size.

In both cases (read and write) a GFE in any overflow frame in the group past overflow frame 20 would not get detected.
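The read-until-hit behaviour, and why it leaves later corruption undetected, can be modelled with a toy group. Frames here are just lists of key/payload pairs, nothing like real Pick frame layout:

```python
# Toy model of "read until a hit": scanning stops at the frame that
# contains the key, so a corrupt frame later in the group is never
# read and its GFE goes undetected. A COUNT, which touches every
# frame of every group, would find it.
def read_record(group_frames, key):
    for frames_read, frame in enumerate(group_frames, start=1):
        for k, payload in frame:
            if k == key:
                return payload, frames_read
    raise KeyError(key)

group = [
    [("A", "record A")],
    [("B", "record B")],
    [("X", "corrupt frame: would raise a GFE if ever read")],
]
payload, frames_read = read_record(group, "B")
print(frames_read)  # 2 -- the third (corrupt) frame was never touched
```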

Pick was designed from the ground up as a virtual system and *had* to keep disk access to a minimum for speed's sake (how else can you efficiently handle 32K records with only 16K of memory on a multiuser system?).

Revelation had much more leeway for disk access because of the 640K memory maximum of the IBM PC. To limit disk access, Revelation tries to cache as much as possible into memory, which potentially means reading a file in 64K-byte chunks (which is exactly what it does with ROS files). During the summer I posted some benchmarks on the comp.databases.revelation newsgroup comparing RList COUNT vs. frame walking (OSBREADs in 1K chunks) vs. large chunking (OSBREADs in 64K chunks). I don't recall the figures, but there were orders of magnitude of difference between the methods. You might be able to find the messages on Deja News, but I normally post with the X-NOARCHIVE flag on, which Deja News honors. In terms of speed the methods ranked as listed above, slowest first. This gives some idea of the internals of Revelation file accessing. Larry Coon or Aaron Kaplan could probably give more details (as far as their non-disclosure agreements allow) on the internals of Revelation file accessing.
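The chunk-size comparison is easy to reproduce in outline. This sketch only counts the OS-level reads each chunk size needs on the same file; actual timings depend on the OS, network, and caching, which is where the order-of-magnitude differences came from.

```python
# Read the same file in 1K chunks (frame-walking style) versus 64K
# chunks (large chunking) and count the OS-level reads each requires.
import os
import tempfile

def count_reads(path, chunk_size):
    reads = 0
    with open(path, "rb") as f:
        while f.read(chunk_size):
            reads += 1
    return reads

# 128K scratch file.
tmp = tempfile.NamedTemporaryFile(delete=False)
tmp.write(b"\x00" * (128 * 1024))
tmp.close()
small = count_reads(tmp.name, 1024)       # 128 reads
large = count_reads(tmp.name, 64 * 1024)  # 2 reads
print(small, large)
os.unlink(tmp.name)
```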

However, as I recall from hacking LNK files with DUMP to create GFEs while developing FILE-FACTS, it is still possible to have a GFE in a group that will not be detected on a key-specified READ/WRITE as outlined above. So in some situations it would seem that the group is not read in its entirety, but it is entirely possible that Revelation will read an entire group given the chance.

Besides, I've never let a LINK file get so badly oversized that it choked DUMP. That's why I wrote FILE-FACTS. The only time I've seen DUMP choke on a file is when the headers were corrupted with illegal forward and backward pointers.


At 14 JAN 2000 01:40PM Victor Engel wrote:

Regarding benchmarks, I never did benchmarks with RevG, but when I benchmarked Arev, 4K block sizes were the most efficient: faster than 1K, 16K, or even 64K blocks (sixteen 4K blocks were faster than one 64K block). This really surprised me at the time. The reason probably has more to do with the network than anything else.

My position on oversized files is based on personal experience. There was a file that had been written to for a long time without ever being recreated. Writes never had a problem, probably because the LNK file system went directly to the end of the group to write the record. However, when it came time to do something with the contents of the file, no amount of jumping through hoops would extract any data from it. The only solution that worked was to create a program that extracted the records using OSBREAD, as I described earlier.


At 14 FEB 2000 03:54PM Susan Misnick wrote:

Estelle, my friend!

Perhaps you should check for non-ASCII characters in the record. I've had that problem before.

Susan


