CREATING TABLE & system not resizing it (OpenInsight 32-Bit)
At 13 OCT 2010 08:08:38AM Martin Drenovac wrote:
In attempting to speed up our large system installations, I thought I'd take the following approach (chosen more or less at random).
We study file growth and decide to expand the file space allocation by 4x (an arbitrary factor), so that the system does not have to do much file resizing, in the hope that this will speed OI up (my presumption being that doing this makes any difference to OI).
So, I create a table using TABLE builder
10,000 records, 400 bytes, 4096 blocks
OI creates a table of 5,070,848 bytes for the *.LK and 0 bytes for the *.OV.
I add a record to the table (file size stays the same). I select the table. I list the table. The file size has dropped to 4096 bytes for the .LK part. Q:
A) am I kidding myself that I can pre-allocate space for a table?
B) if (A) is false, what flag do I set to tell OI to leave my file alone (at least don't downsize)
C) what do I have to do to get a system speed up?
At 13 OCT 2010 08:59AM Jared Bratu wrote:
You must set the size lock setting on the table after you create it. See the Fix_LH function for information about the size lock settings. I often use this when testing the impact of record size/frame size on tables.
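Something along these lines in Basic+, as a sketch only: the exact Fix_LH argument list should be taken from its documentation (the call below is illustrative, not the real signature), and the size lock values shown (0 = resize freely, 1 = grow but don't shrink, 2 = no resizing) are quoted from memory.
[code]
* Sketch only - check the Fix_LH documentation for the real argument list.
Declare Function Fix_LH

table    = "BIGTABLE"   ;* hypothetical table name
sizeLock = 1            ;* assumed meaning: allow growth, prevent contraction

* Create BIGTABLE several times larger than it needs to be (e.g. via Table
* Builder), then lock the size so the LH service never shrinks it back down.
result = Fix_LH(table, sizeLock)
[/code]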
What RAID level is in use?
Is the RAID controller set to write-through or write-back cache?
Have you verified the file system is the bottleneck and not the network?
At 13 OCT 2010 09:14AM Martin Drenovac wrote:
Jared, thank you. I will read the size lock info.
Disk - I don't know the detail at the customer end.
What should the values you mention here be?
Based on conversations with yourself and David, I've told them Server 2008 and 4GB RAM as a minimum configuration. They have 85 directly connected users on our system, and it often grinds to 100% CPU.
At 13 OCT 2010 11:21AM [url=http://www.sprezzatura.com]The Sprezzatura Group[/url] wrote:
We at Sprezz Towers believe that there is a "feature" of the various services whereby, at a critical size (normally in the GBs), the service will decide to do an expansion and a lot of copies, and we've seen this cripple the server. We were able to show this to be the case using an internal LH network analysis tool. The fix is, as you have rightly surmised, to create the file a few times larger than required and lock it.
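As a rough illustration using your own numbers (10,000 records of 400 bytes, 4096-byte frames) and an arbitrary 4x growth factor, a Basic+ sketch of working out the pre-allocation target before creating and locking the table:
[code]
* Rough pre-allocation estimate - the 4x growth factor is arbitrary,
* so substitute whatever your own file growth study suggests.
recordCount  = 10000
avgRecSize   = 400      ;* bytes per record
frameSize    = 4096     ;* bytes per frame
growthFactor = 4

dataBytes   = recordCount * avgRecSize * growthFactor   ;* 16,000,000 bytes
frameCount  = Int(dataBytes / frameSize) + 1            ;* roughly 3,907 frames
targetBytes = frameCount * frameSize                    ;* about 16 MB for the .LK

* Create the table at around targetBytes, then apply the size lock so the
* service never shrinks it back down to a single 4096-byte frame.
[/code]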
World leaders in all things RevSoft
At 13 OCT 2010 11:26AM Jared Bratu wrote:
100% CPU usage? Is this a terminal server? If not, where is all the CPU time going?
I would need more information about the system usage before suggesting a raid configuration. Is this a read/write, read, or write volume? Different configurations benefit different purposes. Some raid controllers allow different partitions/volumes to be cached depending on their purpose (or not at all).
Write caching can be controversial, but read caching is generally safe and useful, because files that don't hash well can benefit from faster access to the .OV file. If you've ever used Process Monitor to watch the LinearHash service access a badly hashed table, you would see a read to the .LK and then multiple reads to the .OV file while it locates the frame containing the record. If that sector of the .OV file is cached, the read access time decreases.
At 13 OCT 2010 11:36AM Richard Hunt wrote:
There is an art to having an efficient database. Some seem to miss the most important points.
1) I have my virtual memory (paging file) on a separate physical drive from the database, and I do not have it on a mirrored volume.
Having your database on a mirrored drive will slow it down considerably, although you should consider that trade-off very carefully before you act on it.
2) I have my processor scheduling set for "programs", and the memory usage set for "system cache".
3) I program my indexing to be very efficient. I also have my selects programmed efficiently.
4) When you consider manually setting the table configuration, be sure to test it and see if you get the desired results. Most of the time it is index updating and selecting that is the problem, unless your tables are horribly oversized.
5) Try not to make your selects go through the entire table when selecting. Use indexes when possible, but too many indexes are even worse! Basically, what you need to do is gather up all your selects and see what common criteria they have and how often they are used. That is how to determine which indexes to have and not have.
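To give point 5 a concrete shape, here is a small Basic+ sketch of an indexed select; the ORDERS table and CUST_ID column are made-up examples, and it assumes a Btree index already exists on CUST_ID so the WITH clause resolves through the index instead of scanning every frame of the table.
[code]
Declare Subroutine RList
$Insert RLIST_EQUATES

* Resolve the select through the (assumed) Btree index on CUST_ID.
RList("SELECT ORDERS WITH CUST_ID = '12345'", TARGET_ACTIVELIST$, "", "", "")

Open "ORDERS" To hOrders Then
   done = 0
   Loop
      ReadNext id Else done = 1
   Until done
      Read rec From hOrders, id Then
         * ... process rec here ...
      End
   Repeat
End
[/code]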
I did some very serious testing before I released my product. I found that I was able to speed things up by about 600% just by making my selects and indexing more efficient. Programming techniques play a part as well. I do not have performance issues.
There are so many ways to tune up your data base. Without knowing what your data base does, it is very hard to know how to respond to you.
You really need to be a software developer, designer, and tune up (doctor) expert to get or have a very efficient system.
Your problem could be something very simple, something like how your users access the server and the data base.
Remember that data transferred from memory (cached) is the quickest. Data transferred from physical disk is second quickest. Data transferred via network is way slower. And data transferred via the internet is the slowest.
If your database's total size is around two gigabytes or so, you might consider more memory so that, if you choose, you can have more caching.
Windows Server 2008 has a program called "perfmon". Set it up to monitor cache fast reads (reads per second), network activity (bytes per second), paging file (% usage), each physical disk (% disk time), and each CPU (% processor time). This might help you see what needs to be looked at.
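One way to capture those counters from the command line is the built-in typeperf utility; the counter names below can vary slightly by OS version and locale, so verify them in perfmon first.
[code]
typeperf "\Processor(_Total)\% Processor Time" ^
         "\PhysicalDisk(_Total)\% Disk Time" ^
         "\Paging File(_Total)\% Usage" ^
         "\Network Interface(*)\Bytes Total/sec" ^
         "\Cache\Fast Reads/sec" ^
         -si 15 -sc 240 -f CSV -o oi_server_counters.csv
[/code]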
My opinion… Windows defragging does not help! It might save you one or two seconds a day. Using the OpenInsight "compress" utility does not help! Compressing only removes unused overflow frames. The unused overflow frames are queued to be used again when needed.
At 13 OCT 2010 12:13PM Mike Ruane wrote:
I would call this more of a compromise than a 'feature'. If the files were not self-resizing, there would be more of a need for DBA-type activities to maintain, configure, and tune the data tables, correct?
At 13 OCT 2010 12:25PM [url=http://www.sprezzatura.com]The Sprezzatura Group[/url] wrote:
We simply did not wish to use the word "bug" as it is working as intended. We would be happy to replace "feature" with "compromise" but did not feel it was our place to do so initially.
Personally we love not having to employ DBAs, and we have had first-hand experience of entire departments of hundreds of people being brought to a grinding halt because the DBA did not proactively increase the size of the SQL tables. Long may linear hash prosper :)
World leaders in all things RevSoft
At 13 OCT 2010 05:11PM Barry Stevens wrote:
Great info, thanks
At 13 OCT 2010 05:11PM Richard Hunt wrote:
I remember the time, in the 1980s, when linear hashed files were not "dynamic" (self-resizing), and you had to test to find the proper hashing algorithm and the proper modulo, then resize the file.
Once dynamic (self-resizing) files came out, it was a very big blessing.
Also, if I remember correctly, before dynamic (self-resizing) files you also had the issue that deleted records were not really deleted. They were flagged as deleted until a resize was done. A lot of dead space existed.
At 14 OCT 2010 07:40AM Martin Drenovac wrote:
Thank you very much Richard, great info - I will review it.
How can I get some info from you on indexing, as it's my bugbear? It takes up to 6 hours to re-index 5 indices on our main table, and on a business that's 24x7 that's a show stopper.
At 14 OCT 2010 09:23AM The Sprezzatura Group wrote:
The only real way to speed up indexing is to resize the indexing file as well. Depending on what the system is doing now, removing all indexes might delete the !file at the OS level, so you'll need to remove each index one by one, keeping a stray one around to keep the size up.
There's been talk of putting the faster ARev multipass rebuild into OI, which will increase build speeds.
There's also been talk of being able to modify the node size. That will also increase build speed, though the trade off will be incompatibility with older systems.
World leaders in all things RevSoft
At 14 OCT 2010 09:57AM Martin Drenovac wrote:
Cheers for that.
What 'compatibility' with old systems? Unless I'm misreading, Rev is keenly ushering us into OI 9.x.
Yes, I've often wondered how to manually make the index files, and the sooner we get some of the ARev stuff happening the better. Though I guess I could just shut up and move over to MSSQL, as I'm sure Mike's had enough of my whining…
At 15 OCT 2010 09:03AM [url=http://www.sprezzatura.com]The Sprezzatura Group[/url] wrote:
Just meant if you're sharing data.
Right now, if you're sharing data between ARev and OI, I'd rebuild the indexes in ARev.
World leaders in all things RevSoft