Performance reports on big LH files (AREV Specific)
At 03 AUG 1999 01:27:19PM Wilhelm Schmitt wrote:
Recently, I ran into a discussion about performance issues on big files. I know that there is no limit on the number of records in a file, nor on the file size itself.
My experience with a 300MB+ file (500,000+ records, with up to 22.0K on a single record) has been very good over several years.
I would like to hear comments on performance from somebody with experience on big files (let's say file size > 1GB, several million records).
Any comment will be appreciated.
Wilhelm Schmitt
At 03 AUG 1999 04:31PM Mike Ruane, WinWin Solutions Inc. wrote:
Wilhelm-
We have a system with 2,000,000+ records, file size 1G+, 15 indexes.
NT Workstations, Server, and Service.
Performance is great.
Mike Ruane
WinWin Solutions Inc.
WWW.WinWinSol.Com
At 03 AUG 1999 05:12PM Steve Smith wrote:
The issue is not that reading/writing big files is slow, but that DOS is slower at resizing large files (> 50 MB).
Steve
At 03 AUG 1999 05:37PM Victor wrote:
Let me guess … W is your favorite letter, right?
At 03 AUG 1999 06:00PM akaplan@sprezzatura.com - [url=http://www.sprezzatura.com]Sprezzatura Group[/url] wrote:
Performance really depends on what you are doing. Assuming your files are sizing properly, a large file is really no different from a small file. LH is a hashed filing system designed so that, no matter how large the file, a read by key is just as fast on a file with 100 records as on one with 1,000,000.
The speed hit comes when you start doing selects. When you don't use indexes, you have to scan the entire file, and obviously that takes far longer on a large file than on a small one.
Indexes help here because index nodes are themselves found by single-key reads, and reading by key is exactly where LH is fast.
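For illustration, here is a rough sketch of that difference in Python. This is not AREV's actual LH implementation (the bucket layout and hash here are made up), but it shows why a keyed read costs the same at any size while an unindexed select touches everything:

[code]
def group_for_key(key, group_count):
    # One hash computation picks the group (bucket) directly,
    # no matter how many records the file holds.
    return hash(key) % group_count

def keyed_read(groups, key):
    # Touches exactly one group: cost is independent of file size.
    for k, record in groups[group_for_key(key, len(groups))]:
        if k == key:
            return record
    return None

def unindexed_select(groups, predicate):
    # Touches every record in every group: cost grows with the file,
    # so 1,000,000 records cost 10,000 times what 100 records do.
    return [rec for grp in groups for (k, rec) in grp if predicate(rec)]

# Example: four groups, each a list of (key, record) pairs.
groups = [[] for _ in range(4)]
for key, rec in [("1001", "SMITH"), ("1002", "JONES"), ("1003", "RUANE")]:
    groups[group_for_key(key, len(groups))].append((key, rec))

print(keyed_read(groups, "1002"))                   # JONES
print(unindexed_select(groups, lambda r: r > "R"))  # SMITH and RUANE (order varies)
[/code]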
As for whether the system can handle it: yes, it can. I know of sites with 5,000,000+ records. Using the NLM or the NT service would help; however, when dealing with files this size, your best bet is Novell. NT might choke under the heavy load.
akaplan@sprezzatura.com
At 04 AUG 1999 01:18AM Curt Putnam wrote:
A couple of years ago I did a 6 million name & address list without a hiccup.
At 04 AUG 1999 11:17AM Mike Ruane, WinWin Solutions Inc. wrote:
I really like V, and W's give me more bang for the buck….
At 06 AUG 1999 11:06AM Ron Wielage wrote:
We didn't find a true solution to this problem. First we created an MFS to distribute records from one logical file across a number of physical files; there was still a single physical index file, however. We addressed indexing times by increasing the record size in indexes to 4K, which helped a great deal. Finally, we ported the application to Universe, where there are no such limits.
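The record-distribution idea can be sketched roughly as follows (the names and the partition count are invented for illustration, not Ron's actual MFS code): the MFS is essentially a router that hashes each key to one of several physical files, while callers keep addressing a single logical file.

[code]
PART_COUNT = 8  # hypothetical number of physical files

def physical_file(key):
    # Map a logical record key to one physical file name.
    return "BIGFILE_PART%d" % (hash(key) % PART_COUNT)

# 'store' stands in for the disk: a dict of physical file -> records.
def write_record(store, key, record):
    store.setdefault(physical_file(key), {})[key] = record

def read_record(store, key):
    return store.get(physical_file(key), {}).get(key)

store = {}
write_record(store, "1001", "SMITH")
print(read_record(store, "1001"))  # SMITH
[/code]

Note this only spreads the data: as described above, the single physical index file remained, which is why raising the index record size to 4K is what helped the indexing times.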
Ron Wielage
At 09 AUG 1999 10:31AM Ron Wielage wrote:
FWIW, our largest file is 3 gig with just under 9.5 million records. Universe performance comes from data transfer at hard disk/internal bus speeds rather than network speeds. If you access Universe files a record at a time from, say, a VB app across the network, you are back to Arev-like performance.
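Some rough arithmetic makes the point (only the 3 gig / 9.5 million figures come from above; the timing numbers are assumptions for illustration):

[code]
records = 9500000         # from above
file_size_mb = 3 * 1024   # from above: ~3 gig

round_trip_s = 0.001      # assumed ~1 ms network round trip per record
disk_mb_per_s = 10.0      # assumed sustained disk transfer rate

print("record-at-a-time over the network: ~%.1f hours"
      % (records * round_trip_s / 3600))
print("scanned server-side at disk speed: ~%.1f minutes"
      % (file_size_mb / disk_mb_per_s / 60))
[/code]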
BTW, using Universe daily reminds us of the features we loved in Arev.
Ron
At 09 AUG 1999 10:30PM akaplan@sprezzatura.com - [url=http://www.sprezzatura.com]Sprezzatura Group[/url] wrote:
My VW gives me a huge bang, but usually only when I buy the gas that costs less than a buck.
akaplan@sprezzatura.com