Thanks all .. (General)
At 17 FEB 2003 03:28:56PM [email protected] wrote:
Just wanted to say thanks to everyone who commented on the last couple of issues. The user and his system had lost their developer and were in a real bind.
Turns out that the problem was a combination of files greatly out of size (OV portions many times larger than LK), hardware issues (this points towards the drives), and finally we found that the LISTS file was something like 100 MB of LK and over 1 GB of OV ..
And as for the tip on using Recreate File .. that was great.
Thanks again everyone .. this is a great User Environment
[email protected]
David Tod Sigafoos ~ SigSolutions
Phone: 971-570-2005
OS: Win2k sp2 (5.00.2195)
OI: 4.1.2
At 18 FEB 2003 09:20AM Dave Harmacek wrote:
One of the worst ideas of the original ARev was the "Saved Queries" feature. This feature automatically maintains an index and a saved list for every SELECT command issued, per user. Normally 40 list items are maintained.
Upon normal logout of ARev, lists that are no longer in that index are supposed to be deleted. This fails if the user has an abnormal exit, leaving a number of these lists behind as orphans.
So, in Environment, General, Number of Saved Queries, just enter a zero!!!!
Dave Harmacek
At 18 FEB 2003 09:53AM Don Miller - C3 Inc. wrote:
Dave ..
I wrote a little utility years ago that does a clean-up of the LISTS file. The only tricky thing about it was a parser to isolate the date from the name so that a cutoff date can be user-specified. It also looks in the environment to see how many saved lists are to be maintained and retains only that number. It works on SYSTEMP too.
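The logic Don describes can be sketched in a few lines. This is a generic illustration, not his actual utility: the key layout is a hypothetical assumption (a `*`-delimited key whose last field is an ISO date), since the real ARev LISTS key format may differ, and the real tool would be written in Revelation BASIC against the filing system rather than in Python.

```python
from datetime import date, datetime

def cleanup_lists(keys, cutoff, keep_newest):
    """Return the list keys that should be deleted.

    Assumes each key embeds its date as the last '*'-delimited
    field in YYYY-MM-DD form (hypothetical layout). Rows older
    than `cutoff` are dropped; of the rest, only the newest
    `keep_newest` are retained, mirroring the environment's
    saved-query count.
    """
    dated = []
    for key in keys:
        try:
            stamp = datetime.strptime(key.rsplit("*", 1)[-1], "%Y-%m-%d").date()
        except ValueError:
            continue  # no parsable date: leave the row alone
        dated.append((stamp, key))
    # survivors: rows at or after the user-specified cutoff date
    survivors = sorted((d, k) for d, k in dated if d >= cutoff)
    # keep only the newest N survivors; everything else dated is deletable
    keep = {k for _, k in survivors[-keep_newest:]} if keep_newest > 0 else set()
    return [k for _, k in dated if k not in keep]
```

The two-stage filter (date cutoff first, then a retention count from the environment) matches the behavior described above: the user chooses how far back to trust, and the environment setting bounds how many lists survive.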
I'd rather be able to maintain a reasonable number of saved queries for re-use. It's a very handy feature.
Don M.
At 18 FEB 2003 01:05PM Richard Hunt wrote:
It is very wise to do weekly or monthly maintenance on these two tables. I have "cringed" reading when some have reported their "LISTS" table being something like 100 megs. Although it is a linear hashed table, the average row size can be extremely large.
Basically you have three types of rows, and this is my way of cleaning them up. Remember that the software you are running might use rows in these two tables.
1) Rows starting with "&STACK*", saved command stacks. Basically used for audit trails of activity. These can be deleted.
2) Rows starting with "T*", temporary saved lists. These can be deleted when there are no users on the system.
3) Others…, you must decide on these. Since I do not use these two tables for anything other than saving lists, I can delete all of them when there are no users on the system. Some other software developers use saved lists over a period of days to speed up reporting. If that is the case, you will need to be careful about what you delete.
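The triage above boils down to a prefix check on each row key. A minimal sketch (the prefixes come from the post; everything else is flagged for manual review rather than deleted, since applications may depend on those rows):

```python
def classify(key):
    """Triage a LISTS row key into one of the three categories above."""
    if key.startswith("&STACK*"):
        return "delete"            # saved command stacks (audit trail)
    if key.startswith("T*"):
        return "delete-when-idle"  # temporary saved lists; only when no users are on
    return "review"                # application may rely on these rows
```

Running this over a dump of the keys gives a safe worklist: the "review" bucket is the only one that needs a human decision.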
Just a note here…
I remember way back in the 80's, before "dynamic" files, when we actually had to select the "hashing" method and manually "resize" the tables. Oh, the agony!!! Luckily, today, "dynamic" files automatically "resize", leaving only the task of "verifying" the file and maybe "compressing".
Really though, "compressing" is not a critical item. That could be done like once a month or even once a year. "Verifying"… I think I would be doing that like once a week.
Defragging is kinda like "compressing". If you really feel the urge… then once a month. Otherwise, once a year should do it.
If anyone has some benchmark results on how effective defragging is, I would appreciate you sharing them. I have not seen any noticeable improvement in performance after a defrag.