LH file sizes (OpenInsight 32-Bit)
At 31 JAN 2005 04:04:34PM Paxton Scott wrote:
Greetings,
The other day I saw Don say: "… some here would argue that your linear hash tables would become quite large and unwieldy over a significant period of time…"
I'd like to know what is large and unwieldy.
I appreciate any thoughts or experiences.
arcs@arcscustomsoftware.com
ARCS, Inc. - http://www.arcscustomsoftware.com
At 31 JAN 2005 04:13PM John Bouley wrote:
I guess it depends on how the table is used and/or accessed. Can you put the original quote in context, since I missed it?
If you access by key then there is no real problem. But if you routinely need to sort/select on un-indexed fields, then performance depends on how many rows you have. Even with indexes, they occasionally have to be rebuilt.
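To illustrate the difference (the table, column and variable names here are only placeholders), a keyed read goes straight to the row via the hash, while a select on an un-indexed column has to scan the whole table:

   * Keyed access - cost stays roughly constant however big the table gets
   Open "CUSTOMERS" To hCustomers Else Return
   Read custRec From hCustomers, "12345" Then
      * row retrieved directly via the hashed key
   End Else
      * row not found
   End

   * Selecting on an un-indexed column forces a scan of the table
   Declare Subroutine RList
   $Insert RLIST_EQUATES
   RList("SELECT CUSTOMERS WITH CITY EQ 'PORTLAND'", TARGET_ACTIVELIST$, "", "", "")
   done = 0
   Loop
      ReadNext custId Else done = 1
   Until done Do
      * process custId
   Repeat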
John
At 31 JAN 2005 04:54PM support@sprezzatura.com wrote:
We don't know :) We have clients with over 10,000,000 rows and gigabytes of info. We've just done some initial testing with frame sizes of 30K, and all indications are that it is actually quicker, byte for byte, to read and write the larger records, presumably because (among other reasons) the frame information overhead is proportionally less for a larger record/frame.
For initial testing we created a new table with a 1K frame size and another with a 36K frame size. We wrote a thousand records to each and read them back, using a record size of approximately 80% of the frame size. If the speed were directly linear we'd expect the larger file to be 36 times slower, but the results were as follows:
Writing - 1K 0.465 secs, 36K 2.23 secs
Reading - 1K 0.04 secs, 36K 0.35 secs
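For anyone wanting to reproduce it, a minimal BASIC+ sketch of that kind of test might look like the following (the table name is a placeholder, the test tables need to be pre-created with the required frame sizes, and the timing calls - for example a GetTickCount wrapper - are left as comments):

   * record at roughly 80% of a 36K frame
   testRec = Str("X", 29000)
   Open "TEST_36K" To hTest Else Return

   * start timer
   For i = 1 To 1000
      Write testRec On hTest, "ROW" : i Else Null
   Next i
   * stop timer and note the elapsed write time

   * start timer
   For i = 1 To 1000
      Read testRec From hTest, "ROW" : i Else Null
   Next i
   * stop timer and note the elapsed read time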
If we get the time we'll try to look at doing an SENL on this.
support@sprezzatura.com
The Sprezzatura Group Web Site
World Leaders in all things RevSoft
At 31 JAN 2005 05:13PM Paxton Scott wrote:
Real interesting. I don't expect to have an issue with record size, but it's sure nice to know some parameters. The original remark was in response to a discussion of the pros and cons of storing images directly in the LH table.
I'm thinking more of issues with 100 million records, with 99.999% of retrievals being view-only lookups via cross-reference indexes.
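For that kind of view-only access via the cross-reference indexes, a Btree.Extract style lookup (the table and column names are invented for the example) would pull the keys straight from the index:

   Declare Subroutine Btree.Extract
   Open "DICT.ORDERS" To hDictOrders Else Return
   * look up one value in the indexed CUST_NO column
   searchString = "CUST_NO" : @VM : "12345" : @FM
   srchOption = ""
   srchFlag = ""
   Btree.Extract(searchString, "ORDERS", hDictOrders, keys, srchOption, srchFlag)
   * keys now holds the ids of the matching rows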
I'm not concerned about disk space, and I hope to keep index rebuilds to a minimum, but so far they have been real fast.
On rare occasions I've had to remove all indexes and manually delete the Bang! file in order to get the system to rebuild indexes properly.
So, I'm glad to hear it is still kind of an unknown limit.
arcs@arcscustomsoftware.com
ARCS, Inc. - http://www.arcscustomsoftware.com