
At 27 NOV 2002 08:10:51AM C Mansutti wrote:

Hi All,

I'm afraid this one needs a long explanation so I hope not to bore you.

Arev 3.12

I have a site which controls multiple branches.

The data tables for each branch live in a different directory on the drive of a network server.

Each directory is a carbon copy of the others - with different data in each branch, obviously.

They can switch between branches by detaching from Branch A and attaching to Branch B, etc.

All this works fine.

Now the fun bit…

They need to consolidate and pass data between branches.

They only need to access about 5 tables as part of the data transfer so I did the following.

I ran a process that takes the file handle details from the current branch (storing them in a globally available table), detaches from the current branch, attaches to the next branch, and repeats the file handle capture there. (This process is done once - and that should be it.)

The end result is a global table with the file handles of all the relevant tables of all branches stored in each appropriate record.

So if I want to update Table 1 in Branch B while I am in Branch A, I read the file handle from the global table and write to that handle.
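In R/BASIC terms, the scheme looks roughly like this (the table, keys, and variable names are all invented for illustration - the point is that a file handle is just a variable, so it can be written to a record like any other value):

    * Capture the handle once, while attached to Branch B
    OPEN "CUSTOMERS" TO CustFile ELSE STOP
    OPEN "GLOBAL.HANDLES" TO GlobalFile ELSE STOP
    WRITE CustFile ON GlobalFile, "BRANCHB*CUSTOMERS"

    * Later, from Branch A, write straight through the cached handle
    READ CachedHandle FROM GlobalFile, "BRANCHB*CUSTOMERS" THEN
       WRITE CustRec ON CachedHandle, CustId
    END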

All this works fine (and has done for about 3 years)… until

If I update a dictionary that relates to a table on the global table list, I need to go into each branch and update it. Because of those changes I need to run the file handle storage process again, except afterwards I get very strange behaviour.

I initially get "! does not exist in the dictionary" when I try to send data to a file via the file handle from the global table. This is consistent.

A couple more goes of storing the file handles seem to fix the problem.

Then I intermittently get

"AnExisitingFieldName" does not exist in the dictionary.

It then intermittently becomes unable to read a table via the file handle stored in the global table.

And eventually - the pièce de résistance - it becomes unable to read the $Process_Name of a compiled process that has been there since day 1.

I thought it was a hardware issue, but this happens on

Novell 5 with NLM or NPP (2 different servers)

Windows 98 Peer to Peer with NPP (different PCs)

I've even changed hubs. And to top it off, they have even changed locations (they moved about 6 months ago).

The system is only 5 users - the problem can be present on any of the workstations.

It seems like the read process gets very confused. I have also located records that should have been written to one table sitting in a different control table (specified in the global file handle table). I must be triggering something that upsets Arev - but what?

Finally to complicate matters, the intermittent faults start to reduce over time and then don't show up for up to 6 months.

It seems like the dictionary updates trigger the read errors - or is it something else? Any clues from anybody would be greatly appreciated.

Here the story endeth…

Claude


At 27 NOV 2002 08:14AM [url=http://www.sprezzatura.com]The Sprezzatura Group[/url] wrote:

We wouldn't recommend caching file handles like this. You have no control over how the network products choose to reuse them.

The Sprezzatura Group

World Leaders in all things RevSoft


At 27 NOV 2002 09:08AM C Mansutti wrote:

What would you recommend, then?

Regards

Claude


At 27 NOV 2002 09:46AM Matt Sorrell wrote:

Claude,

I had to do something similar, and what I ended up doing was creating a TEMP table that was available globally. I would write records from VolA to the TEMP table, including the destination volume as part of the key. Then, I would read from the TEMP table, determine the destination volume, attach the destination volume, and write the record.

Cumbersome, but it works.
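In rough R/BASIC terms, it works like this (the table names, key layout, and ATTACH syntax below are invented/from memory - check your docs before relying on them):

    * Source side: queue the record with the destination volume
    * and table embedded in the key
    OPEN "TEMP" TO TempFile ELSE STOP
    WRITE Rec ON TempFile, "BRANCHB*CUSTOMERS*" : Key

    * Consolidation side: drain the queue
    SELECT TempFile
    Done = 0
    LOOP
       READNEXT TempKey ELSE Done = 1
    UNTIL Done
       DestVol   = FIELD(TempKey, "*", 1)
       DestTable = FIELD(TempKey, "*", 2)
       DestKey   = FIELD(TempKey, "*", 3)
       READ Rec FROM TempFile, TempKey THEN
          PERFORM "ATTACH F:\" : DestVol : "\"   ;* syntax from memory
          OPEN DestTable TO DestFile THEN
             WRITE Rec ON DestFile, DestKey
             DELETE TempFile, TempKey
          END ELSE NULL
       END
    REPEAT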

msorrel@greyhound.com

Greyhound Lines, Inc.


At 27 NOV 2002 10:35AM C Mansutti wrote:

Thanks for your suggestion Matt, but these guys are pumping out 1500+ transactions per day. I can't be attaching/re-attaching for every transfer.

The thing is, the process worked happily for months. It was the dictionary update that seemed to set off some weird chain of events.

I've updated the dictionary about 4 or 5 times over the last 3 years, and this read problem shows itself every time afterwards. Then it starts to settle - why?

Claude


At 27 NOV 2002 10:40AM C Mansutti wrote:

Hi Sprezz,

Don't forget the pièce de résistance.

Arev is unable to read the object code of a process it has been using hundreds of times during the day (it happened twice today). It must have something to do with the way it reads - buffer memory - something getting corrupted.

To top it off - I've duplicated the "! does not exist" message on a stand-alone PC - so I'm doubting network issues.

Claude


At 27 NOV 2002 11:29AM Victor Engel wrote:

First, each volume should NOT be identical. At the very least, the Arev volume label should be different. This is stored in the REVMEDIA record of the REVMEDIA file. You can change it using the NAMEVOLUME command if there are no indexes on the files on the volume. If there are, you must remove the indexes, do the NAMEVOLUME, and reapply the indexes. This is because the volume label is used in the indexing control records.

The last point is what I think is the root of your problem.

After you set up the volumes as described, you can have each of them identical except for the REVMEDIA.* files. Then you can refer to files on a different volume using a qfile (alias) and simply copy records directly from one to the other.
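Roughly, from TCL (the path, label, and alias name here are invented, and the NAMEVOLUME syntax is from memory):

    NAMEVOLUME F:\BRANCHB BRANCHB

Then point a qfile/alias at the remote table and copy in R/BASIC:

    * CUST_B is an invented alias for Branch B's CUSTOMERS table
    OPEN "CUSTOMERS" TO LocalFile ELSE STOP
    OPEN "CUST_B" TO RemoteFile ELSE STOP
    SELECT LocalFile
    Done = 0
    LOOP
       READNEXT Key ELSE Done = 1
    UNTIL Done
       READ Rec FROM LocalFile, Key THEN
          WRITE Rec ON RemoteFile, Key
       END
    REPEAT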


At 27 NOV 2002 11:42AM Victor Engel wrote:

Removing indexes is not required in version 3.12.


At 27 NOV 2002 11:49AM Warren wrote:

Why not just set up persistent SYSALIAS file pointers (SETALIAS option W)? As long as the Media/Volume names are unique this should be no problem.

Storing file handles is a recipe for disaster with the NLM. GAAP uses a proprietary caching scheme that has to be turned off when using the NLM.
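i.e. from TCL, something along these lines (argument order is from memory - check the SETALIAS docs; CUST_B is an invented alias name):

    SETALIAS CUSTOMERS, BRANCHB, CUST_B (W)

The (W) writes a persistent pointer into SYSALIAS, after which the alias opens like any local table:

    OPEN "CUST_B" TO RemoteFile ELSE STOP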


At 28 NOV 2002 06:46PM C Mansutti wrote:

Thanks Warren,

I've started to re-write using SETALIAS but I've come across a limitation.

I've decided to put a new post on about it.

Claude


At 28 NOV 2002 06:51PM C Mansutti wrote:

Victor

Thanks for your worldly advice to a self-taught programmer.

It didn't fix the problem (made it worse, really), but at least it's consistently worse now. I obviously overlooked SYSALIAS, which would have solved my problem - so I'm re-writing with it.

I am having a problem with it though - see new post

Claude


At 29 NOV 2002 10:08AM Cameron Christie wrote:

It almost sounds like the clash you get when different versions of AREV (and/or OI) fight over the %FIELDS% variable - except that one NEVER resolves itself! ;-)

FWIW, our code here accesses identical table structures online on a very regular basis (not for multi-company, but because of year-on-year archiving), and we always attach, open, read and detach the relevant tables each time. Our kit might be brilliant, of course (our network guys would certainly tell us so!), but the performance hit is minimal.
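i.e. per access, something like this (the paths, names, and exact ATTACH/DETACH syntax are from memory - treat it as a sketch):

    PERFORM "ATTACH F:\ARCHIVE2001\ CUSTOMERS (O)"   ;* (O) = overwrite existing pointer
    OPEN "CUSTOMERS" TO ArchFile ELSE STOP
    READ Rec FROM ArchFile, Key ELSE Rec = ""
    PERFORM "DETACH CUSTOMERS"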


At 29 NOV 2002 09:39PM C Mansutti wrote:

I think that is the only way, Cam.

I've tried SYSALIAS but that seems to have a major bug (see next posting)

All I seem to be doing is workarounds

Claude


At 05 DEC 2002 05:53AM Hippo wrote:

Maybe I am off topic… I had an idea… is the problem solved already?

I only risk attaching/detaching when synchronizing my laptop code with the code maintained on the network.

I remember there were problems, which I solved with some delay loops. I have to check that the volume is really attached.
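Something like this retry guard (everything here - the names, the attach command, the loop bound - is invented for illustration):

    * Keep trying until the table actually opens, with a crude delay
    Attached = 0
    Tries = 0
    LOOP
       PERFORM "ATTACH F:\BRANCHB\ CUSTOMERS"   ;* syntax from memory
       OPEN "CUSTOMERS" TO TestFile THEN Attached = 1 ELSE NULL
       Tries = Tries + 1
    UNTIL Attached OR Tries GE 10
       FOR I = 1 TO 5000 ; NEXT I   ;* crude delay before retrying
    REPEAT
    IF NOT(Attached) THEN STOP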
