
At 31 JAN 2005 04:04:34PM Paxton Scott wrote:

Greetings,

The other day I saw Don say: "… some here would argue that your linear hash tables would become quite large and unwieldy over a significant period of time…"

I'd like to know what is large and unwieldy.

I appreciate any thoughts or experiences.

arcs@arcscustomsoftware.com

[url=http://www.arcscustomsoftware.com]ARCS, Inc.[/url]


At 31 JAN 2005 04:13PM John Bouley wrote:

I guess it depends on how the table is used and/or accessed. Can you put the original quote in context, since I missed it?

If you access by key then there is no real problem. But if you routinely need to sort or select on un-indexed fields, then it depends on how many records you have. Even with indexes, they occasionally have to be rebuilt.
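
To put the distinction in concrete terms, here is a minimal sketch in Python (used here as a neutral stand-in for BASIC+; the table, keys, and CITY field are all hypothetical): a keyed read is a single hash probe no matter how big the table gets, while a select on an un-indexed field has to visit every row.

    # Minimal sketch (Python as a neutral stand-in for BASIC+): a keyed
    # read is a single hash probe regardless of table size, while a
    # select on an un-indexed field must visit every row.
    table = {
        "ROW%d" % i: {"NAME": "Name %d" % i,
                      "CITY": "Reno" if i % 1000 == 0 else "Elsewhere"}
        for i in range(100_000)
    }

    # Keyed access: one probe, independent of row count.
    row = table["ROW42"]

    # Un-indexed select: cost scales with the number of records,
    # not the number of hits.
    hits = [key for key, rec in table.items() if rec["CITY"] == "Reno"]
    print(row["NAME"], len(hits))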

John


At 31 JAN 2005 04:54PM support@sprezzatura.com wrote:

We don't know :) We have clients with over 10,000,000 rows and gigabytes of info. We've just done some initial testing with frame sizes of 30K, and all indications are that it is actually quicker byte for byte to read and write the larger records, presumably (amongst other reasons) because the frame information overhead is proportionally smaller for a larger record/frame.

For initial testing we created a new table with a 1K frame size and another with a 36K frame size. We wrote a thousand records to each and read them back, using a record size of approximately 80% of the frame size. If speed scaled linearly with record size we'd expect the larger file to be 36 times slower, but the results were as follows:

Writing - 1K 0.465 secs, 36K 2.23 secs

Reading - 1K 0.04 secs, 36K 0.35 secs
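
Taken byte for byte, those numbers already make the point: the 36K records carry 36 times the data, yet the writes took only about 4.8 times as long (2.23/0.465) and the reads about 8.75 times as long (0.35/0.04).

For the curious, here is a rough re-creation of that methodology in Python (flat files standing in for LH tables, since the point is the harness rather than the engine; the 1K/36K frame sizes and the 80%-of-frame record size are from the post above, the rest is assumed):

    # Rough re-creation of the methodology above (Python; one flat file
    # per record stands in for an LH table). The 1K/36K frame sizes and
    # the 80%-of-frame record size come from the post; file layout,
    # counts, and paths are otherwise assumptions.
    import os, tempfile, time

    def bench(frame_size, n=1000):
        rec = b"X" * int(frame_size * 0.8)   # record ~80% of frame size
        with tempfile.TemporaryDirectory() as d:
            t0 = time.perf_counter()
            for i in range(n):               # write pass
                with open(os.path.join(d, "ROW%d" % i), "wb") as f:
                    f.write(rec)
            t_write = time.perf_counter() - t0
            t0 = time.perf_counter()
            for i in range(n):               # read pass
                with open(os.path.join(d, "ROW%d" % i), "rb") as f:
                    f.read()
            t_read = time.perf_counter() - t0
        return t_write, t_read

    for frame in (1024, 36 * 1024):
        w, r = bench(frame)
        print("%dK: write %.3f secs, read %.3f secs" % (frame // 1024, w, r))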

If we get the time we'll try to look at doing an SENL on this.

support@sprezzatura.com

The Sprezzatura Group Web Site

World Leaders in all things RevSoft


At 31 JAN 2005 05:13PM Paxton Scott wrote:

Really interesting. I don't expect to have an issue with record size, but it's sure nice to know some parameters. The original remark was in response to a discussion of the pros and cons of storing images directly in the LH table.

I'm thinking more of issues at the scale of 100 million records, with 99.999% of retrieval being view-only via cross-reference indexes.
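
For readers outside the Revelation world, a cross-reference index is essentially an inverted list: each word in the indexed field points back to the keys of the records containing it. A minimal sketch of the idea in Python (hypothetical data; not Revelation's actual index internals):

    # Minimal sketch of the cross-reference idea (Python; hypothetical
    # data, not Revelation's actual !file internals). An inverted index
    # maps each word of an indexed field to the record keys containing
    # it, so a keyword retrieval never scans the main table.
    from collections import defaultdict

    rows = {
        "1001": "ACME CUSTOM SOFTWARE",
        "1002": "ARCS SOFTWARE INC",
        "1003": "ACME WIDGETS",
    }

    xref = defaultdict(list)        # word -> list of record keys
    for key, name in rows.items():
        for word in name.split():
            xref[word].append(key)

    # View-only retrieval via the index: one probe plus keyed reads,
    # with cost tied to the number of hits, not the table size.
    print(xref["ACME"])             # ['1001', '1003']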

I'm not concerned about disk space, and I hope to keep index rebuilds to a minimum, but so far they have been really fast.

On rare occasions I've had to remove all indexes and manually delete the Bang! file in order to get the system to rebuild indexes properly.

So, I'm glad to hear it is still kind of an unknown limit.

arcs@arcscustomsoftware.com

[url=http://www.arcscustomsoftware.com]ARCS, Inc.[/url]
