Novell Server Optimization For Arev (AREV Specific)
At 13 FEB 1998 12:51:39PM Jim Dierking wrote:
We are upgrading our Pentium Pro 150 server. Tentatively, we had planned on using a dual-processor Pentium II 233. The folks at Novell indicated that the Novell server itself can only utilize one processor, but that a database such as Oracle could utilize both. Will Arev be able to use both processors?
We are also upgrading from 96 MB of 60ns EDO RAM to 128 MB of 10ns SRAM, and we have considered going to 64 MB SRAM modules in order to put 256 MB of RAM into the server. Any ideas on how this might increase performance as opposed to a super-fast SCSI drive?
We are adding an additional NIC, hoping it will also increase performance. However, it will not be serving a separate segment, just providing another connection to the stacked hubs. Feedback about where the bottlenecks are best attacked is appreciated.
TIA, Jim Dierking
At 13 FEB 1998 03:30PM Victor Engel wrote:
Here is a list of some Arev bottlenecks that you may be able to improve on, depending on your system:
* Make sure you don't have any undersized files (LH Verify can detect this condition but sometimes makes poor recommendations, in my opinion).
* Sort DOS pathname: make this point to a local drive if possible. If you have a large enough RAM disk and the files you sort are not too big, you might be able to point it there. I think the maximum RAM disk size is 32 MB in DOS; I don't know about Windows 95 or other OSs.
* Move system files and other static files local. This may require periodic updates. It relieves a bottleneck in two ways: 1) the files are more readily available to the local workstation in most cases, and 2) reduced load on the server frees up cache memory there, making I/O more efficient for everyone else.
* Change number of saved queries to 0. If you have a number other than 0, then the system maintains that many old lists (accessible using CTRL-F10) producing the associated overhead.
* If you have programs with loops containing a call to MSG, make sure to set up the MAP variable so that your message is treated as a literal if it is, in fact, a literal. Otherwise, every time you put the message up there is I/O to both MESSAGES and SYSMESSAGES.
* If the design of your system warrants it, presize your files and set the sizelock. Resizing of files can be more I/O intensive than anything else with linear hash files.
* I'm not sure if this is still true in 3.12, but I think it probably is: if I remember correctly, XLATE caches up to 4 records. If you use XLATE heavily, try not to cycle through more than 4 records between XLATEs of a record you will need again; once the cached copy is pushed out, the next XLATE costs another I/O operation (see the XLATE sketch after this list).
* Make good use of indexes, but do not overuse them.
* Cut down on the use of {} dictionary calls when extracting directly from a variable containing the record will suffice. Using {}, while convenient, results in increased I/O and also clutters up your program stack (see the {} sketch after this list).
* When appropriate, restructure SELECT statements. Take advantage of the intrinsic sort order of BTREE indexes, and for nested selects do as much of the reduction as possible at the outset, so later passes work on a smaller list (see the SELECT sketch after this list).
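A small illustration of the XLATE point above. This is only a sketch; the CUSTOMERS file and its field positions are invented for the example, so check the details against your own system:

   * Hypothetical file/fields: two XLATEs against the same CUSTOMERS
   * record.  If the code in between touches more than about four other
   * records, the cached copy is flushed and the second XLATE goes back
   * to disk:
   CUST_NAME = XLATE('CUSTOMERS', CUST_ID, 1, 'X')
   * ... XLATEs or reads against several other files here ...
   CUST_CITY = XLATE('CUSTOMERS', CUST_ID, 3, 'X')

   * Reading the record once and extracting fields from the variable
   * does not depend on the XLATE cache at all:
   OPEN 'CUSTOMERS' TO CUST_FILE ELSE STOP
   READ CUST_REC FROM CUST_FILE, CUST_ID THEN
      CUST_NAME = CUST_REC<1>
      CUST_CITY = CUST_REC<3>
   END ELSE
      CUST_NAME = '' ; CUST_CITY = ''
   END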
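The same idea applies to the {} point. Another sketch, assuming @DICT, @ID and @RECORD are set for the current row, and that CUST_NAME is a made-up dictionary item sitting over field 1:

   * Dictionary call (CUST_NAME is a hypothetical dictionary item):
   * each evaluation goes through the dictionary, with the extra I/O
   * and program-stack overhead mentioned above:
   NAME = {CUST_NAME}

   * Direct extraction from the record variable already in hand:
   NAME = @RECORD<1>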
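And a SELECT sketch. The ORDERS file and the STATUS, REGION and SHIP_DATE fields are hypothetical; the idea is to apply the most restrictive criterion in the first pass so the nested select works a much smaller list, and to put the BY on a BTREE-indexed field so the keys come back in the index's own sort order instead of needing a separate sort:

   :SELECT ORDERS WITH STATUS = "OPEN"
   :SELECT ORDERS WITH REGION = "WEST" BY SHIP_DATE

This assumes STATUS = "OPEN" cuts the list down more than REGION = "WEST" does; if it is the other way around, swap them.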
That's all for now. Can anyone else expand the list?
At 13 FEB 1998 05:09PM Jim Dierking wrote:
Victor, thanks for the nice list. I had a question about taking the system files local. Does this include arev.exe? Has this been done when the NLM has been deployed, and does it cause any problems with indexing or GFEs? Thanks again, Jim.
At 16 FEB 1998 10:54AM Victor wrote:
No, that does NOT include AREV.EXE, which, unless you have a special license, must exist as only a single copy.
At 16 FEB 1998 11:06AM Victor Engel wrote:
I had questions about taking the system files local. Does this include arev.exe? Has this been done when the NLM has been deployed and does it cause any problems with indexing or gfe's?
Look at it this way: any files that do not need to be shared, or that never change, can be brought local (I'm talking about *.LK and *.OV files here). Index files, for example, would NOT be an option, because they would be updated, albeit indirectly, by other users. The NLM is involved only for files located on the file server; there is no problem with accessing local files, and I do it all the time. Just make sure you don't have a write-back cache enabled. While that normally works out OK, it can sometimes cause corrupted data.