
At 05 MAR 2004 09:50:54AM Jim E ONeill wrote:

We have a customer with a large table, 1.4 million rows, that has recently seen a slowdown in secondary searches, and we are looking for ideas. They are on a Novell network and are using NLM version 5.5. The workstation we are using is Windows 98, but the problem occurs on W2K and XP machines as well.

The problem occurs when they conduct an initial select using a btree index and then select against another field. The initial indexed select is quick, taking several seconds, and returns approximately 13,000 rows. The second select originally took about 10 minutes. To try to resolve this issue we have done several things.
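
At TCL the pattern looks roughly like this (table and field names here are invented for illustration, not the customer's actual ones):

   SELECT ORDERS WITH CUST_ID = "12345"
   SELECT ORDERS WITH STATUS = "OPEN"

The first select resolves from the btree index in a few seconds and leaves roughly 13,000 rows active; the second select refines that active list and is the slow step.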

1.) We copied all rows from the original table to a new table. This gave us a clean file with no indexes. We added one index and tested. The second select was quicker but still took over 3 minutes.

2.) We copied the test file to our office. Both selects completed in about 20 seconds. We are on a Windows 2000 server with the NT service installed.

3.) We have replaced the entire SYSOBJ table and installed a fresh AREV.EXE. Same results. We also ran LH_VERIFY, and no corruption was reported.

4.) We have set up the application on a local hard drive and run the same tests off the network. Same results.

5.) The original file had a sizelock of 2 when we dumped it. The test file after the copy had no sizelock. We think this may have something to do with the improved performance after the copy. Also, the original file had about 67 MB in the LK portion and 370 MB in the OV portion. The test file has about 150 MB in the LK and about 340 MB in the OV portion.

Any thoughts or suggestions would be appreciated.

Thanks


At 05 MAR 2004 12:01PM Richard Hunt wrote:

A sizelock of 2 states that you do not want the file to expand or contract. That means that when additional items are added, the file will not automatically resize itself to maintain quick access to the items in it.

Your file is badly overflowed, so reading one item can take much longer than normal. That would explain the second select taking way too long.

Reset the sizelock to 0 and then run your copy routine again so that the file gets resized to its optimum.
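
A minimal R/BASIC sketch of such a copy routine, with made-up table names (create BIG_TABLE_NEW first, with no sizelock, so linear hash is free to resize it as rows arrive):

   SUBROUTINE COPY_BIG_TABLE
   * Copy every row into a freshly created, unlocked table so that
   * linear hash can grow it to its optimum sizing as rows are written.
   OPEN "BIG_TABLE" TO SRC ELSE STOP
   OPEN "BIG_TABLE_NEW" TO DEST ELSE STOP
   SELECT SRC
   DONE = 0
   LOOP
      READNEXT ID ELSE DONE = 1
   UNTIL DONE
      READ REC FROM SRC, ID THEN
         WRITE REC ON DEST, ID
      END
   REPEAT
   RETURN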


At 05 MAR 2004 07:46PM Steve Smith wrote:

Thinking that indexing may be your friend, as may several smaller files.

It all depends on your data. There are several techniques.

Try creating two tables: one large historical table full of static data, and one ever-changing table with recent transactions. The historic/static table is btree-indexed (every which way), and the search on the recent file is conducted in the normal fashion.

Two selects are saved into separate LISTS file records; the smaller set is then renamed to follow the larger set, making them contiguous. Then a GET-LIST is performed to merge the two.
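
Rather than renaming records, the same merge can be done directly in R/BASIC. A hedged sketch (it assumes each saved list fits in a single LISTS record keyed by the list name; very large lists are split across several records, which this ignores, and the list names are invented):

   * Append the smaller saved list to the larger one and save the
   * result under a new name for a single GET-LIST.
   OPEN "LISTS" TO LISTS.FILE ELSE STOP
   READ BIG FROM LISTS.FILE, "HIST.HITS" ELSE BIG = ""
   READ SMALL FROM LISTS.FILE, "RECENT.HITS" ELSE SMALL = ""
   WRITE BIG : @FM : SMALL ON LISTS.FILE, "ALL.HITS"
   * then, at TCL:  GET-LIST ALL.HITS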

From time to time recent transactions can be archived off into the historic transactions.

Another option is to run your own indexing scheme, especially if your data is largely numeric.
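
A hand-rolled index can be as simple as a cross-reference table whose key is the data value and whose record is the list of row IDs carrying that value. A sketch of the update side, with invented names (MY_XREF is a hypothetical index table):

   SUBROUTINE UPDATE_XREF(VALUE, ROW_ID)
   * Maintain our own index: key = the indexed value,
   * record = @VM-delimited list of row IDs having that value.
   OPEN "MY_XREF" TO XREF ELSE STOP
   READ IDS FROM XREF, VALUE ELSE IDS = ""
   LOCATE ROW_ID IN IDS USING @VM SETTING POS ELSE
      IF IDS = "" THEN IDS = ROW_ID ELSE IDS := @VM : ROW_ID
      WRITE IDS ON XREF, VALUE
   END
   RETURN

A select on the value then becomes a single read of one MY_XREF record instead of a pass over the whole file.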

Steve


At 05 MAR 2004 10:18PM Curt Putnam wrote:

I'm confused. Are you saying that the process runs in seconds in your office and that all the things you've tried have been on the client hardware?


At 06 MAR 2004 06:47AM [url=http://www.sprezzatura.com]The Sprezzatura Group[/url] wrote:

Have you put the new copy back at the customer's site?

How long does a simple file pass take at the various places?

After the index select is performed, the system will go through a basic read process, reading each record in turn to determine if it belongs in the select list. If read access is generally slower at one site, then selects of this nature will also be slower.

One tip for selects like this, especially if the data is not symbolic, is to create a symbolic that joins both criteria and to index on that.
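
For example (field names are invented), the dictionary formula for the joined symbolic might be:

   * Symbolic field STATE_TYPE in the dictionary: join the two
   * criteria fields into one indexable value.
   @ANS = {STATE} : "*" : {CUST_TYPE}

Btree-index STATE_TYPE, and the two-pass select collapses into one indexed lookup:

   SELECT CUSTOMERS WITH STATE_TYPE = "CA*RETAIL"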

The Sprezzatura Group

World Leaders in all Things RevSoft


At 08 MAR 2004 09:06AM Warren wrote:

Is the slowdown confined to the one file or overall?

Has the customer updated the Novell client on their workstations? Client v4.9 for W2K and XP seems to have speed issues; I'm not sure about v3.4 for Win9x.

I had a report that went from under 15 minutes to over 45 after switching to the v4.9 client on W2K and XP. Switching back to v4.8 SP2 returned it to its former time to completion.


At 09 MAR 2004 08:24AM Jim E ONeill wrote:

Yes, the files are identical: one copy there and one copy here. They are on NetWare and we are on a Windows server. We have removed the sizelock and copied the table to a new file, but the second search remains slow.


At 09 MAR 2004 08:28AM Jim E ONeill wrote:

Yes, we put the copy back at the customer site. We have thought of creating a symbolic made of the two fields and then adding an index to that, and we will do so if we cannot solve this. If possible, though, we would like to solve the root problem so that we can be aware of it going forward.


At 09 MAR 2004 08:37AM Jim E ONeill wrote:

We did that, and the file was more evenly distributed between the LK and OV files. The process was quicker but still slow: over 3 minutes to process 12,000 rows. We tried setting the resize threshold to a lower percentage and the file was redistributed more evenly, but the process was still slow. Any other ideas?


At 09 MAR 2004 08:42AM The Sprezzatura Group wrote:

What state is your LISTS file in?

The Sprezzatura Group

World Leaders in all things RevSoft


At 09 MAR 2004 11:30AM Warren Auyong wrote:

Where are the temp sort files and rollout files pathed to? The network or local drives?

When was the last time a PURGE /ALL was run on the Novell volumes?

Is there disk mirroring on the Novell server? Are the mirrors out of sync?

I've had slowdowns like you describe because the temp files were being created and destroyed on the network, and eventually all the deleted entries clog up the FAT.

You can see if this is the problem by running a PURGE /ALL on the root of each of the disk volumes, either from a command prompt or through the Novell Client. This should be run from a user with full admin rights so as to purge deleted files from all users. NOTE: This process can take hours depending on the number of files and speed of the system.
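
For example, from a DOS prompt on a drive mapped to the volume (the drive letter is illustrative, and the exact syntax can vary a little between client versions):

   PURGE F:\ /ALL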

The solution to this problem is to either move the sort/rollout files to local drives or set the folders/directories where these files are being created to "purge immediately". This can be set with the Novell client.

You might consider trying a third-party Novell FAT defragmentation tool as these are helpful in restoring file system speed.


At 09 MAR 2004 11:48AM Jim E ONeill wrote:

We cleared it a week ago.


At 09 MAR 2004 12:11PM Warren Auyong wrote:

Is this ARev v3.12? How about the SYSTEMP file?


At 09 MAR 2004 12:14PM Warren Auyong wrote:

Clarification:

Drive, file and path names for the rollout and temp sort files are set in ARev.

The "purge immediately" attribute is set via Novell.


At 10 MAR 2004 12:56PM Hippo wrote:

For regular processing I recommend the btree on a joined symbolic, as Sprezzatura suggests.

(I've rewritten a lot of programs based on double selection this way.)

There is a big speedup even on small tables.

You need not count the background indexing toward the select's complexity when the source tables are mostly edited rather than computed.

The problem may appear with tables computed just before select time, as the background indexing does not have enough time to catch up.

(Combining the results of two indexes takes time proportional to the smaller index's result set (+ log #records). Using a joined index takes time proportional to the final result size (+ log #records).)
