
At 19 APR 2000 04:09:17PM MGwinnell wrote:

I have a file containing 3 million records, average size 850 bytes, sequentially keyed. Records range in size from around 500 bytes up to perhaps 1,500 bytes.

The file sizing comes out VERY badly: a tiny LK and a massive OV, with many items per group (163MB LK, 2.2GB OV).
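
For scale, here is a rough back-of-envelope in Python; the 1024-byte frame size and the per-frame overhead are assumptions on my part, not figures pulled from DUMPLH:

  records  = 3000000       # rows in the file
  avg_size = 850           # average record size in bytes
  frame    = 1024          # assumed LK frame size
  overhead = 24            # assumed per-frame header overhead (a guess)

  data_bytes = records * avg_size                      # 2,550,000,000 bytes, call it 2.4GB
  per_frame  = max(1, (frame - overhead) // avg_size)  # about one record per frame
  lk_needed  = (records // per_frame) * frame          # on the order of 3GB of LK
  print(data_bytes, per_frame, lk_needed)

So the raw data alone is roughly 2.4GB, and at about one 850-byte record per frame an LK that held most of the records in primary frames would need to be in the gigabyte range, not 163MB; nearly everything else ends up in OV.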

Granted, someone chose to use sequential keys, and I can't fix that. However, is there any trick to making the system use a larger (pre-sized) LK portion and distribute the records more evenly across the file (without changing the record keys to alphanumeric)?

Any attempt to use a pre-sized LK now merely results in many, many empty frames in the LK, with nearly the same amount of overflow still in use.

This file is heavily accessed, and the added stress of accessing so many overflow frames is degrading file access immensely.

I have tried several DEFINEFILE settings (items, size, threshold, etc.) with no particular success. I even tried an experiment using only prime numbers as record keys (just for grins), and it made essentially no difference to the record distribution. HELP!

Your assistance is, as always, appreciated.

MEG


At 19 APR 2000 06:48PM Warren wrote:

MAKETABLE and CREATEFILE have parameters to set both the base frame size and the resize/split threshold.

If using MAKETABLE, see the help in the Table Attributes section.

See the manual for the CREATEFILE syntax, as I don't remember it exactly and don't have a manual handy.

Unfortunately, ARev doesn't have the 20+ hashing algorithms that Prime Information/Universe has to tweak the hashing distribution of files.


At 19 APR 2000 07:19PM Larry Wilson - TARDIS Systems, Inc. wrote:

If you run REMAKETABLE (you might have the VOC item as RECREATE-FILE), it should create a large LK and a very small OV, thus making additions to the file quicker.

REMAKETABLE {table ID} {avg.size} {# of records}
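
With the numbers from your original post that would be something along the lines of REMAKETABLE MYTABLE 850 3000000, with MYTABLE just standing in for your real table name.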

Be sure everyone else is logged off when you do this.

You might also check the ! file to see if you need to run REMAKETABLE on that as well (a much-overlooked speed problem).

tardis_systems@yahoo.com


At 19 APR 2000 11:42PM Richard Hunt wrote:

Hmmm… I am thinking that maybe your problem is a "size lock" problem. If the size lock is less than 1, the file will grow and shrink automatically as needed. If the size lock is 1, the file will grow automatically when needed (but will not shrink). If the size lock is greater than 1, the file will NOT grow or shrink. That might be your problem.

Before checking the "size lock", have all users log off. This process will take less than one minute. It is not disk intensive, no matter what the file size is.

To check the "size lock", do this: at the command prompt, type DUMPLH filename, where "filename" is the name of your file. Be very, very careful what you do in this screen. First, just look for the "size lock" value; it will be on the far right side, three lines down from the top. If the size lock is greater than 1, hit the "-" (minus) key; that will reduce the value of the size lock. Keep doing this until the value is 1 or 0. Once this is done, hit the Escape key to exit.

I created a file and filled it with 100,000 rows, all the same size of 850 bytes. The file sizing was acceptable, nowhere near the ratios you have mentioned. I did a VERIFYLH AAA (F), with AAA being my table name, and it did suggest a REMAKETABLE to 2048 bytes. Not too critical.


At 20 APR 2000 01:11AM Warren wrote:

Changing the base frame size can have a very adverse effect on file retrieval time, so use caution. Too large a base frame can cause excessive disk reads and network traffic.
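
For example (assuming I remember the read mechanics correctly), with a 4096-byte base frame a request for a single 850-byte record still pulls the whole 4K frame, plus any overflow frames hanging off that group, across the wire.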

I'm thinking along the same lines as Richard: that you have a sizelock set. The hashing algorithm should work well on sequential numeric keys.


At 21 APR 2000 07:17AM MGwinnell wrote:

I have looked at the sizelock, and it is set to zero. I also tried REMAKETABLE and several variations of DEFINEFILE with various base file (LK) sizes.

I also tried the good old select-and-list process to allow ARev to resize the file.

I can really agree with the sentiment about the multiple-hashing algorithm choices available in other systems. I never ran into this kind of problem with Pick systems…

Any other suggestions?

Thanks.


At 21 APR 2000 10:51AM Warren wrote:

I'm afraid you're at an impasse. There is no real equivalent to Fitzgerald & Long's FAST, or whatever it was/is called (it has been some time since I worked with Prime Info, but the tool is still available for UniData and Universe). LHVERIFY/VERIFYLH has a symbolic that suggests a base frame size and so on, but I never checked how accurate the suggestions were.

Other than bumbling around trying to find a modulo (with the size lock set) that gives a fairly even distribution, I'm out of ideas.
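
If you do try that, this is the sort of arithmetic I would start from; the 2048-byte frame and the 80% load target are assumptions to illustrate the idea, not ARev internals, so treat DUMPLH/VERIFYLH as the real check:

  import math

  def suggest_modulo(records, avg_size, frame_size=2048, target_load=0.80):
      data_bytes   = records * avg_size        # total data to spread across groups
      usable_bytes = frame_size * target_load  # aim to fill each frame to about 80%
      return int(math.ceil(data_bytes / usable_bytes))

  # Figures from the original post: 3,000,000 rows averaging 850 bytes.
  print(suggest_modulo(3000000, 850))          # on the order of 1.5 million primary frames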

BTW: Informix and Ardent have merged. UniData/Universe keeps getting bigger and bigger…

