LH Frame Size and Performance (OpenInsight Specific)
At 24 DEC 1998 01:37:52AM Nick Stevenson wrote:
Further to earlier postings, here are results of varying the frame size on NT Service'd LH databases.
I took two files, FOO_1024 and FOO_32768. The former has a frame size of 1024 and the latter 32768 (the recommended size for NLM/NT Service backends). Both tables are identical and both were recreated with the Remake_Table SP as published by Cameron Purdy on the Works site. There are no indexes on the tables.
                        FOO_1024        FOO_32768
Rows in table           54759           54759
Avge row length         150.64          150.64
Rows per group          7.88            55.42
Data size (LH+OV)       12,982,272      12,170,000
Avge disk I/O           1.394           1.046
LH Verify process       25:30 (MM:SS)   3:45 (MM:SS)
Seq read all rows       2:30 (MM:SS)    3:30 (MM:SS)
Select (FIELDn=VALUE)   3:00 (MM:SS)    3:50 (MM:SS)
Environment:
OI3.6.1, NT Service 1.5, NT 4.0 SP2, Windows '95
IBM PC720 Server, 384Mb RAM, Dual Pentium 100, 24Gb Disk
Pentium 230MMX Workstation, 64Mb RAM
Comment:
This test is not exhaustive. After the initial elation of seeing the LH Verify time comparison, it was very disappointing to see that the Select and Sequential Read times were poorer, and we lost a bit of interest.
Although Remake_Table set the frame size to 32768, OI (LHVerify) reports the frame size to be 10,000. Not quite sure what is happening there.
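A rough sanity check on that 10,000 figure, using only the measured numbers above (an illustrative calculation, not from the original post): multiplying rows per group by the average row length estimates the data bytes held per group, and for FOO_32768 that lands much nearer a 10,000-byte frame than a 32,768-byte one.

```python
# Back-of-envelope check (assumption: rows_per_group * avg_row_length
# approximates the data bytes per group, ignoring LH header overhead).
avg_row_len = 150.64

# FOO_1024: 7.88 rows/group -> more bytes than a 1024 frame holds,
# consistent with rows spilling into overflow (OV) frames.
print(round(7.88 * avg_row_len))   # ~1187 bytes vs a 1024 frame

# FOO_32768: 55.42 rows/group -> ~8.3K of data per group, which fits a
# ~10,000-byte frame but would badly underfill a 32,768-byte one.
print(round(55.42 * avg_row_len))  # ~8348 bytes
```

If the frames really were 32,768 bytes, one would expect something closer to 200 rows per group, so the measured 55.42 does seem to back up the 10,000 that LHVerify reports.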
There may be some packet size / protocol issue that is interfering with our figures. We need to repeat this exercise on Netware with the NLM to see if the results are consistent there.
At 28 DEC 1998 01:05PM Cameron Revelation wrote:
Hi Nick,
Sequential reads (select/readnext/read) are slower as frame sizes get bigger. That has always been true. What should be faster is multiple workstations doing mixed reads/writes/deletes against a table. In other words, most real-world situations should be faster, with the exception of non-indexed selects.
With regards to the 10000, I am wondering if the table creation algorithm max'd out the frame size. As a test, try 8192.
Cameron Purdy
Revelation Software