At 18 OCT 2000 03:14:48PM Michael Gwinnell wrote:

Using Arev 2.12, NLM 1.5, Novell 4.1x…

I'd like to create a file with an 8192 byte frame size. Are there any considerations I need to be aware of with the NLM or otherwise?

This is in an effort to allow Arev to handle a file with 3 million sequential records more efficiently.

To answer questions ahead of time-

No, I cannot renumber these records.

No, I cannot purge or delete these records.

These records average 890 bytes each (400 bytes minimum, 3000 peak).

These records are accessed many times during the workday, and the existing file is severely mis-sized (the OV is 15x larger than the LK). Many LK frames are empty, while overflow chains reach 30+ frames.


At 18 OCT 2000 03:38PM Don Miller - C3 Inc. wrote:

Aaaaaaaaaaaggggghhhhhhhhhh. Bummer in the extreme! The large frame size usually won't cause problems other than hogging memory and maybe slowing down I/O, but sequential key numbering is going to hurt you big time, since the "sparse" placing algorithm will cause huge inefficiency. Don't think RTI will EVER fix this!! Maybe Aaron Kaplan @ Sprezz can give you some ideas. I hate the handling of sequential counter keys with a passion and won't use it if I can come up with ANY other solution!

:-(……. (that's a drool because it makes me senile!)

Don Miller

C3 Inc.


At 19 OCT 2000 08:55AM The Sprezzatura Group (http://www.sprezzatura.com) wrote:

There is nothing wrong with an 8192-byte frame size, in theory. The NLM should read the entire frame in one go, saving you extra disk reads.

However, given your record size, you will need to fill the entire frame before the system begins to resize. Resizing is triggered by the percentage of primary frames filled, not by the amount of primary space used.

You will end up with a large number of records inside a single frame. To read the last record in a full frame, the system will have to parse through an average of about 10 records. You will also have fewer frames, since it will take longer for the threshold value to be reached, and that in turn means even larger overflow.

Increasing frame size really only works with large records, meaning records larger than the frame size.
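
To put rough numbers on that, here is a minimal sketch in Python comparing the two frame sizes. The per-frame header size is an assumption for illustration; the real LH frame layout differs in detail.

    # Back-of-envelope comparison of 1024- vs 8192-byte frames for this file.
    AVG_RECORD = 890     # average record size from the original post (bytes)
    FRAME_HEADER = 12    # assumed per-frame overhead -- illustrative only

    for frame_size in (1024, 8192):
        usable = frame_size - FRAME_HEADER
        recs_per_frame = usable // AVG_RECORD
        # A lookup reads the group and scans it sequentially, so on
        # average half the records in a full frame are parsed per hit.
        avg_scan = recs_per_frame / 2
        print(f"{frame_size:5d}-byte frame: ~{recs_per_frame} records/frame, "
              f"~{avg_scan:.1f} parsed per lookup on average")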

Most likely, the issues you have with file size have to do with the nature of LH's expansion. The hash algorithm tries to ensure that, for any given modulo, any record has an equal chance of hashing to any group. As a file grows larger, a new record is more likely to hash to an occupied group than to an empty one, and as more records hash to occupied groups, the overflow size begins to increase.
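
Purely as a toy illustration of that effect, the sketch below distributes sequential keys over a fixed modulo and counts how many records land in overflow. It uses crc32 as a stand-in hash and never resizes, neither of which matches ARev's actual LH implementation, but the same mechanism drives overflow growth whenever resizing lags behind record growth.

    import zlib

    MODULO = 10_000   # primary groups, held fixed for the illustration
    CAPACITY = 9      # ~890-byte records that fit one 8192-byte frame

    def group_of(key: int) -> int:
        # crc32 as a repeatable stand-in for LH's hash function
        return zlib.crc32(str(key).encode()) % MODULO

    for n in (50_000, 100_000, 200_000):
        counts = [0] * MODULO
        for key in range(n):
            counts[group_of(key)] += 1
        in_overflow = sum(max(0, c - CAPACITY) for c in counts)
        print(f"{n:7,d} records: {in_overflow:,} ({in_overflow / n:.0%}) in overflow")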

Resizing a file amounts to creating a new file with the new frame size and copying each record in, one by one. Since there is no way around this, I would propose a better method: create a new 1024-byte-frame file, pre-sized to accommodate all the records you will need about 18 months from now, set the sizelock to 1, then copy in all the records.

When the copy is finished, leave the sizelock at 1. This will keep the file from shrinking. It will also leave enough empty space, and enough groups, for the records to distribute evenly as time passes.

Sooner or later you will have to go through the process again, but by creating the file for optimal hashing with another 18 months' worth of data in mind, you should have another year after that before you start to hit the current situation again.
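
For what it's worth, here is a back-of-envelope pre-sizing sketch along those lines. The growth rate, frame overhead, and target fill are illustrative assumptions, not figures from this thread, so substitute your own numbers before creating the file.

    CURRENT_RECORDS = 3_000_000
    MONTHLY_GROWTH = 0.01    # assumed 1% per month -- use your own figure
    MONTHS_AHEAD = 18
    AVG_RECORD = 890         # bytes, from the original post
    FRAME_SIZE = 1024
    FRAME_HEADER = 12        # assumed per-frame overhead
    TARGET_FILL = 0.80       # aim to fill primary space to ~80%

    projected = int(CURRENT_RECORDS * (1 + MONTHLY_GROWTH) ** MONTHS_AHEAD)
    usable = FRAME_SIZE - FRAME_HEADER
    # With 890-byte average records, roughly one record fits per
    # 1024-byte frame, so the modulo lands near the projected count.
    recs_per_frame = max(1, int(usable * TARGET_FILL) // AVG_RECORD)
    modulo = -(-projected // recs_per_frame)   # ceiling division

    print(f"projected records: {projected:,}")
    print(f"suggested modulo : {modulo:,} primary frames "
          f"(~{modulo * FRAME_SIZE / 2**20:,.0f} MB of LK)")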

The Sprezzatura Group

World Leaders in all things RevSoft
