At 06 SEP 2005 11:34:02AM Jeni Pearson wrote:

A client has a GFE in a very large datatable. They cannot restore as it would mean returning to pre-month end data.

We have run FIXLH on the table to attempt to fix the problem.

FIXLH crashed with a Group calculation error in group 0 (FS129) during the 'Restoring good rows' section.

When I try to copy the rows manually from DUMPFIX_stationname I get the same error.

I have moved the bad rows from SYSTEMP into a newly created table to ensure I don't lose them (there are about 5000).

How do I get my table of good rows back since there still appears to be an error?

I am not very knowledgeable about Arev technically - everything I have tried so far has been picked up from other details posted on this forum.

I read some information about manually correcting the groups using DUMP or CTL+F, but I have never tried this and I'm not sure of the procedure, or whether it would be relevant here.

Thanks in advance for any help anyone can give.

Jeni


At 06 SEP 2005 12:09PM Victor Engel wrote:

What happens when you get the error message? If you get thrown to the debugger, try entering G (Go) to continue. When you tried fixing the file, was it a local file, or was it on the server? Have you verified that caching is not enabled? How big is the file (both in terms of number of records and DOS file size)?

Using DUMP to fix the records can work, but you'll need to be familiar with the file structure.


At 07 SEP 2005 06:30AM Jeni Pearson wrote:

I don't get thrown to the debugger; pressing OK on the Group calculation error simply appears to end the process without finishing restoring the good rows.

I am running on the server. The file size is just under half a GB for the OV, just over 100 MB for the LK.

I have come across mentions of using DUMP for fixing on this site, but I think this might be beyond my skill set at the moment!

Thanks


At 07 SEP 2005 06:57AM [email protected] wrote:

Do you know if you are running a network service of any kind? Can you try this locally, not logged on to the network?

[email protected]

The Sprezzatura Group Web Site

World Leaders in all things RevSoft


At 07 SEP 2005 07:52AM Jeni Pearson wrote:

In the end, the client is too concerned about the accuracy of their data, and they have decided to restore despite losing a lot of work. So I don't need to go any further with this at the moment.

However, in order to prevent potential further problems with their data, they are asking if there are any regular maintenance steps they should be taking to try to ensure that the database stays robust.

I believe it would be a good idea for them to run VERIFYLH on the larger tables every month or so (and then follow its recommendations), but are there any other regular steps or checks which people could recommend?


At 07 SEP 2005 12:12PM Richard Hunt wrote:

I don't really know of any checks or steps that will eliminate GFEs. A monthly check (LHVERIFY) would be a good idea.

The most important thing is to have an up-to-date backup every time you need it. Having to use a backup from a month or so ago is the real disaster.


At 07 SEP 2005 12:17PM Matt Sorrell wrote:

Also, you want to make sure you are using a networking product. In the six years I've been here, the only GFE I've had was related to failing server hardware.

Finally, make sure that write-behind caching is disabled on all of your workstations and servers. This prevents data loss and corruption if a machine goes down before all of the write buffers have been flushed to disk.
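
If it helps, on NT-family servers and workstations the settings people usually chase for this are the SMB opportunistic locking values, plus the write cache on the drives themselves. Something along these lines, but treat the exact keys as an assumption and check them against Microsoft's documentation for your OS version before merging anything:

    Windows Registry Editor Version 5.00

    ; Server side: stop the file server granting oplocks on shared files
    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters]
    "EnableOplocks"=dword:00000000

    ; Workstation side: stop the SMB redirector requesting oplocks
    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\MRxSmb\Parameters]
    "OplocksDisabled"=dword:00000001

That only covers the network side; the per-drive write cache still has to be turned off separately (in Device Manager, or on the disk controller), and the machines need a reboot for the registry changes to take effect.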

[email protected]

Greyhound Lines, Inc.


At 08 SEP 2005 09:43AM Hippo wrote:

BTW: it may be useful to back up the corrupted file … to do some experiments with it.

Do you know the total key length of group 0?

Is it > 64KB?

If so, the data are not corrupted; AREV just cannot access them using its standard I/O operations. See the .LK/.OV file structure documentation. You can access them using OSBREADs and copy the contents to a newly created table (a rough sketch of that is at the end of this post), but I suppose it's several days of programming.

In that case the problem can also be expected to recur more often … what is your average record size? If it is much smaller than the group size, maybe decreasing the group size will help.

It may also be interesting to know whether the table is in "resizing mode" (shrink/spread)?

Maybe answering these questions will help you prepare for the future …
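
For illustration only, the OSBREAD approach boils down to something like the sketch below. The OS path, the SALVAGE table name and the frame size are all assumptions, and the hard part (decoding each group against the published LK/OV layout) is only indicated by a comment:

    * Sketch only - salvage records by reading the damaged LK file at the OS level.
    LkPath = 'C:\REVDATA\REV12345.LK'  ;* hypothetical path to the damaged table's LK part
    FrameSize = 1024                   ;* assumption - use the table's real frame size

    OSOPEN LkPath TO LkFile ELSE
       PRINT 'Cannot open ' : LkPath
       STOP
    END
    OPEN 'SALVAGE' TO SalvageTable ELSE
       PRINT 'Create the empty SALVAGE table first'
       STOP
    END

    Offset = 0
    LOOP
       OSBREAD Frame FROM LkFile AT Offset LENGTH FrameSize
    WHILE LEN(Frame) DO
       * Decode the frame here per the LK/OV structure documentation,
       * recover each key and record, then write them out, e.g.
       *    WRITE Rec ON SalvageTable, Key ELSE PRINT 'Write failed for ' : Key
       Offset = Offset + FrameSize
    REPEAT
    OSCLOSE LkFile

The same loop has to be run over the OV part as well, and following the overflow chains between frames is what turns this into the "several days of programming" rather than an afternoon.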


At 12 SEP 2005 04:32PM [email protected] wrote:

My recommendations are usually:

Run LH Verify on all files. Back up the system files at this point so you know they are clean; system files would be programs and windows.

Tables that are active should be verified monthly (a scripted version of that pass is sketched below). Keep in mind that active files aren't always files that get large, but files that get written to frequently. For example, LISTS files should remain around the same size, but will be written to frequently, probably more so than any other table in the system.

Look into your server. Group calculation errors generally mean that the server has somehow hashed a record into the wrong location. It's theoretically a math problem, but normally it means that there has been some sort of caching on a heavily written-to file.
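
For what it's worth, the monthly pass over the active tables can be scripted in a few lines of R/BASIC. The table names below are placeholders, and the assumption that the verify utility (VERIFYLH) can simply be PERFORMed with a table name on the TCL command line should be confirmed on your version first:

    * Sketch only - run the LH verify over a fixed list of busy tables.
    TableList = 'ORDERS' : @FM : 'INVOICES' : @FM : 'LISTS'  ;* placeholder names - substitute your own active tables
    NumTables = COUNT(TableList, @FM) + 1
    FOR I = 1 TO NumTables
       TableName = TableList<I>
       PERFORM 'VERIFYLH ' : TableName  ;* assumption: the verify utility accepts a table name at TCL
    NEXT I

Run it at a quiet time and keep the output, so the results can be compared from month to month.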

[email protected]

The Sprezzatura Group Web Site

World Leaders in all things RevSoft


At 13 SEP 2005 11:32PM Barry Stevens wrote:

Are there any disk caching settings in XP to turn off?
