
At 02 DEC 2002 08:39:11AM Devon Hubbard wrote:

Could frame size changes result in Group 1 errors? Specifically, when dumping the file you immediately get, "The header for group 1 has been corrupted. No frame size or modulo information can be read". The option to try to fix group 1 is worthless. We stopped trying to fix errors like this years ago because it has never worked on any file on any of our servers.

The specifics… We had a file with a modulo of 160,025 and a frame size of 1024 that was seriously undersized; the average record size in this file is 559 bytes. We resized it over the weekend with a frame size of 1600. Previously the file averaged 23 frames per group, and access times were horrible. The file currently holds 3,172,066 records. On a typical day around 140 users hit this particular file, with an average of 4,500 new records created daily; reads of existing records average around 55,000 daily.
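
To put rough numbers on a resize like this, here is generic linear-hash sizing arithmetic sketched in Python. The per-frame header overhead and fill target below are assumptions, not figures from the Revelation docs:

    # Rough linear-hash sizing estimate (illustrative; not Revelation's
    # documented algorithm).
    RECORDS        = 3_172_066   # records in the file
    AVG_REC_SIZE   = 559         # average record size, bytes
    FRAME_SIZE     = 1600        # frame size chosen for the resize
    FRAME_OVERHEAD = 12          # assumed per-frame header bytes (hypothetical)
    FILL_TARGET    = 0.80        # assumed target fill for primary frames

    usable = (FRAME_SIZE - FRAME_OVERHEAD) * FILL_TARGET
    recs_per_group = max(1, int(usable // AVG_REC_SIZE))      # -> 2
    modulo = -(-RECORDS // recs_per_group)                    # ceiling division
    print(f"suggested modulo: {modulo:,}")                    # -> 1,586,033
    print(f"primary (.LK) size: {modulo * FRAME_SIZE / 2**30:.2f} GB")  # ~2.36 GB
    # Note: a fully sized .LK at this frame size lands past 2 GB, which may
    # matter given the file size limits discussed later in this thread.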

Over the Thanksgiving holiday weekend, we left two PCs running select/readnext/read tests against everything in this newly resized file. Comparing times between the old file and the new one, read access was 328% faster after the resize. As with any Pick system I've worked on for almost 20 years now, properly resizing the file gave the expected performance increase. We left the new file in place for production use come Monday morning.

Unfortunately, within an hour of the file being in full production this morning, users were reporting "Fatal Error Reading xxx in table xxx" errors. Sure enough, the file they were having problems accessing was this newly resized file. Dumping the file produces the "group 1" bad header message, and trying to access records in specific groups produces the "Fatal Error Reading…" alert.

Does anyone know if there is any direct relation between the frame size and Group 1 corruption problems? We are running AREV on Dell servers with NetWare and the LH NLM installed.

Despite several consultants telling us that AREV can handle heavy traffic on files like this, its track record in our direct experience is horrible. Trying to perform maintenance to improve performance on files like this in our system continually results in AREV eventually barfing on the files. I can't believe how unstable and unmaintainable AREV is. I've never worked on a Pick file system this unreliable.

regards,

dEVoN Hubbard

BenComp National Corp.


At 02 DEC 2002 09:12AM Peter Lynch wrote:

In my experience, problems like the ones you are having may look like the result of your database administration, and therefore like a bug in Arev, but they are most likely not.

My reasoning is not very scientific. It has been a very bumpy ride from DOS to NT and W2K. I used to believe there was something innately wrong with Arev's filing system during the Win 95/98 years.

Networks were unstable. Filing systems had strange quirks that only Arev seemed to trigger - usually because it was the mission-critical piece that used the network the most, as I suspect yours does.

But since NT and Win2K the quirky problems have disappeared.

I now feel that Arev is reliable. Arev hasn't changed, I haven't changed - the host OS has changed.

As far as I know, none of the versions of Pick runs on less than NT or Win2K, so they've never had the problem. (R83 doesn't count.)

The reality is that Arev was running in its usual ignorance of the fact that it was no longer on DOS, doing the things that DOS required it to do. No bugs.

DOS networking has not been stable until now - with W2K and NT.

So Arev has appeared unstable but not been unstable.

Blame Billy Gates.

(Of course, if there had been a TCP/IP driver, the network problems might have disappeared.)


At 02 DEC 2002 09:40AM Devon Hubbard wrote:

Thanks Peter. I appreciate the insight/opinion. Coincidentally, we are in the process of migrating users to Win2K and XP systems. But the majority of our installed base is still using Win98. After reading your email I'm now thinking we should accelerate our migration plans.

thanks,

dEVoN


At 02 DEC 2002 10:56AM [url=http://www.sprezzatura.com]The Sprezzatura Group[/url] wrote:

Several things spring to mind. What version of AREV are you using? What version (if any) of the network products are you using? Why did you choose a non base 2 frame size? Do you use any programs that store off file handles to speed up login?

The Sprezzatura Group

World Leaders in all things RevSoft


At 02 DEC 2002 03:59PM Devon Hubbard wrote:

] Several things spring to mind. What version of AREV are you using?

v3.12

] What version (if any) of the network products are you using?

LH.NLM v1.5a; NetWare 5.1 Service Pack 4

] Why did you choose a non base 2 frame size?

Would 1664 have been better? Which AREV docs specify that the frame size needs to be a base-2 number? Everything we've ever seen simply says "any size between 256 and 10000 bytes."
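
One plausible reason base-2 (or at least sector-aligned) frame sizes get recommended is disk alignment. A quick Python sketch, assuming 512-byte sectors (an assumption about the volume, not anything from the AREV docs):

    SECTOR, AVG_REC = 512, 559
    for frame in (1024, 1600, 1664, 2048):
        sectors = -(-frame // SECTOR)          # sectors touched per frame read
        waste   = sectors * SECTOR - frame     # bytes fetched but unused
        print(f"frame {frame}: {sectors} sectors, {waste} wasted bytes, "
              f"~{frame // AVG_REC} whole records, aligned={frame % SECTOR == 0}")
    # 1024 and 2048 land exactly on sector boundaries; 1600 and 1664 both
    # straddle a fourth sector, so 1664 would not obviously have helped.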

] Do you use any programs that store off file handles to speed up login?

We have some files that have their own MFS installed, but the file in question isn't one of them. Why - shouldn't an MFS cache file handles to avoid the overhead of opening them repeatedly?
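
For anyone unfamiliar with the idea behind the question: a handle cache just maps table names to already-opened handles so the expensive open happens once. A minimal concept sketch in Python (hypothetical; a real AREV MFS is written in R/BASIC against a specific calling convention that this ignores):

    _handles = {}                      # table name -> cached handle

    def open_table(name):
        """Open once, then reuse the cached handle on later calls."""
        if name not in _handles:
            _handles[name] = _real_open(name)   # the expensive open, done once
        return _handles[name]

    def _real_open(name):
        return ("handle", name)        # stand-in for the real OS/LH open

    # The catch, and presumably the point of the question: a handle cached
    # before a resize can still describe the old file layout, and writes
    # through a stale handle are a classic way to corrupt headers.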

thanks,

dEVoN


At 02 DEC 2002 08:29PM Devon Hubbard wrote:

Okay, here's another data point…

Using MAKETABLE, we can make a new table (parameters: 1800000 560 240 2048), and when it's done hanging the system for almost 45 minutes creating it, we can dump and list the file just fine. Try to write a new record to it, and wham!… "Fatal Error" alert. (No custom MFS installed.)

The interesting thing about this alert is it says…

Fatal Error Reading TEST in table TEST_CL01
FS104
General write error in the operating system file "CHIPDATA\REV83250.LK"

Aside from getting the error in the first place, why does the alert say "Fatal Error Reading" and then "General write error" right after it - especially when we know we're trying to write to the file, not read from it?

I think we need to move our AREV server to NT and get the heck off NetWare.

dEVoN


At 03 DEC 2002 08:10PM Curt Putnam wrote:

This may well be all wet, but I seem to remember something about all attached files needing to have the same frame size - or was it volume?

Way back in the mid-90s, we ran some tests on a crashable, developers-only network and found that frame size changes had minimal positive impact. It seems the NICs really didn't like anything other than 1024 - or was it NetWare…

Anyway, we found that designing applications around what Arev actually did well improved things hundreds-fold over designing around what the designer hoped it would do. The ERP system wound up with 238 users beating on very few files, and AREV was never the limiting factor. This was old 10-megabit Ethernet, but the server did have 10 or 12 NICs in it. Remember: PICK is PICK and Arev is Arev; expertise in one is not expertise in the other, despite their many similarities.


At 04 DEC 2002 10:09AM Victor Engel wrote:

Curt,

I generally agree, and this post may be what you remember reading.


At 08 DEC 2002 12:00AM Devon Hubbard wrote:

This entire file-sizing issue has been extremely frustrating. Given the nature of our business (medical insurance), we have no more than a Saturday evening and Sunday for extended work on our database. This high-traffic file is seriously undersized, and no matter how we try to resize it, AREV just doesn't like us making a file with 3.1 million 550-byte records. Even after reading the other postings in this thread and creating the file with a frame size of 1024, the file becomes corrupted (group 1 header problems) the minute we try to write new records to it.

Interestingly enough, we have many files on this same server with frame sizes of 2048 and 4096 that work just fine, and creating smaller files at any of these frame sizes also works. AREV will let us create the file; it just won't let us write anything to it.

The extremely frustrating thing is that I've tried creating this file on two different physical servers, from multiple AREV client machines (Win98 and XP), with the same results: group 1 header problems as soon as we start writing to the file.

Separate threads about file size limitations in AREV state differing opinions about 2GB or 4GB limits. Utilities like DUMP definitely have problems with files over 2GB, but after spending two weeks now trying to get AREV to let us write to a 1.4GB file, I would now argue that it can't even handle that.
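
For what it's worth, the 2GB and 4GB figures in those threads line up with plain 32-bit offset arithmetic (general math, not a claim about where AREV itself draws the line):

    print(2**31)   # 2,147,483,648 bytes = 2 GB: largest signed 32-bit offset
    print(2**32)   # 4,294,967,296 bytes = 4 GB: largest unsigned 32-bit offset
    # A tool that stores file offsets as signed 32-bit ints (as many DOS-era
    # utilities did) breaks past 2 GB, which would match DUMP's behavior;
    # a 1.4 GB .LK file is still under both limits.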

Very frustrating. If anyone has any suggestions, we'd certainly appreciate the input.

dEVoN


At 08 DEC 2002 05:14PM Donald Bakke wrote:

Devon,

This area is not our specialty… largely because we have had very few problems with our tables that have grown into the multi-million-row count. However, a couple of thoughts come to mind:

1. Have you attempted to create the table on a local machine first and then move it to the Novell server? We've found that MAKETABLE will often run much faster and that problems which occur on a network generally don't happen on a stand-alone machine. Case in point: I ran MAKETABLE using the parameters you specified and the table was created in about 5 minutes. The resulting .LK file was 1.27GB and the .OV file was (at this point) 0 bytes. I was able to write, modify, and delete records without any errors. However, all of my testing was done on my local laptop running W2K (and no EMS to boot!). These numbers are sanity-checked in the sketch after this list.

2. I didn't get the gist of how this table is used, but if you are pre-sizing it, what are you doing with the sizelock flag? Is there a possibility that the table is already filled and your sizelock is preventing it from growing?
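
For what it's worth, the numbers in point 1 hang together, as a quick Python check shows (MAKETABLE's third parameter, 240, is left out of this; only the row count, average size, and frame size are used):

    ROWS, AVG, FRAME = 1_800_000, 560, 2048
    lk_bytes = 1.27 * 2**30                                 # observed .LK size
    modulo = lk_bytes / FRAME
    print(f"implied modulo:  {modulo:,.0f}")                # ~665,846 groups
    print(f"rows per group:  {ROWS / modulo:.1f}")          # ~2.7
    print(f"bytes per group: {ROWS / modulo * AVG:.0f}")    # ~1514 < 2048
    # Each group's data fits inside one primary frame, which is consistent
    # with the .OV file staying at 0 bytes.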

dbakke@srpcs.com

SRP Computer Solutions, Inc.


At 09 DEC 2002 09:59AM Victor Engel wrote:

Perhaps you are running into some other sort of limitation. There are several limits to the RTP57 file format (search for threads on this subject for more details). When the corruption happens, what is the sizelock on the file?

You could get around the problem by creating an MFS to split the file into several pieces. I think there is code publicly available to do this if you don't want to develop your own. Essentially, the MFS would make the various pieces look like a single file, but records would be distributed at write time by the MFS across several physical files. The MFS would also have to manage reads, selects, etc. To your application, though, it would look like one file.
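
The routing idea, sketched in Python (illustrative only; a real MFS would be R/BASIC and, as noted above, would also have to handle reads, selects, and the rest): records go to one of N physical files based on a hash of the key, so the application still sees one logical table.

    import zlib

    N_PARTS = 4
    parts = [dict() for _ in range(N_PARTS)]        # stand-ins for N physical files

    def part_for(key):
        return zlib.crc32(key.encode()) % N_PARTS   # same key -> same part, always

    def write(key, record):
        parts[part_for(key)][key] = record

    def read(key):
        return parts[part_for(key)].get(key)

    write("CLAIM*1001", "some record data")
    print(read("CLAIM*1001"))                       # -> some record data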
