
At 22 AUG 2003 05:32:01AM Richard Guise wrote:

Some while ago there was a discussion on problems with %SK% records simply not getting filed but vanishing into the ether. It seemed from the experts' comments in this discussion that it was caused by a fast user beating a slow system (and the client's system was very slow indeed).

In spite of a new, faster network, I replaced the standard %SK% facility. The key prompt default was "New Entry", and a LOSTFOCUS routine on the key prompt then found the next sequence number, changed the key on-screen and in the Window Common variables, and re-established the record locks.

It's worked fine for many months with about four workstations hammering it hard. Suddenly last Friday it seems to have skipped the Lostfocus process and filed a record under "New Entry". This was fixed and then suddenly today it's done the same again.

I have further remedies and checks in mind.

However, it seems from the initial problem and this new one that, under certain circumstances, OI can simply skip window event processes.

Anyone got any experiences of this, comments, checks or cures?

TIA


At 22 AUG 2003 09:49AM Donald Bakke wrote:

Richard,

Well, we're one of the "die-hards" who hasn't abandoned the original %SK% method, even though we were the first to report the problem with event sequencing when the system is hammered fast and hard. Essentially, we put our workaround in the promoted READ event. Your LOSTFOCUS event is not likely being skipped; rather, it is getting pushed so far down the queue that it doesn't do anything by the time it gets called.

Therefore I would see if the READ event is a better place for you to execute your logic. Alternatively, I believe most people who use alternative methods for assigning a sequential key do so in the WRITE event. In other words, the key id is in "to be determined" status until the record is actually written. We do this for a lot of our web-based applications. Seems to work well.
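The WRITE-event approach Don mentions can be sketched in Python (a rough illustrative model, not OpenInsight Basic+; `KeyCounter`, `write_record` and the lock are stand-ins for OI record locking):

```python
import threading

class KeyCounter:
    """Models a seed that is only consumed at write time: the key stays
    'to be determined' until the record is actually written."""
    def __init__(self, start=0):
        self._lock = threading.Lock()   # stands in for an OI record lock
        self._last_used = start

    def next_key(self):
        # Read and increment inside one lock, so two concurrent writers
        # can never receive the same key.
        with self._lock:
            self._last_used += 1
            return self._last_used

table = {}
counter = KeyCounter()

def write_record(record):
    key = counter.next_key()    # key assigned only now, at write time
    table[key] = record
    return key
```

Because no key is handed out before the write, an abandoned entry consumes nothing and the sequence stays dense.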

dbakke@srpcs.com

SRP Computer Solutions, Inc.


At 22 AUG 2003 10:39AM Richard Guise wrote:

Donald

Many thanks for this.

I take your point that the read event is sooner and therefore less vulnerable. The logic should be very similar and it should do no harm and might well do good. Worth doing.

You say that you suspect the lostfocus probably isn't being skipped but doesn't do anything by the time it is called. Surely if the code is processed at all it'll do what it's told - not just as little as it feels like!

I'm also a little curious that, in spite of having problems with %SK% you still use it. Our clients lost several records without trace with %SK% and weren't too chuffed. At least when our lostfocus misbehaves this is immediately apparent.

If the key is assigned at write time, the number used isn't displayed to the user. With lostfocus (or read), the number is shown as soon as the key prompt is left.

Thanks again.


At 22 AUG 2003 11:13AM Donald Bakke wrote:

Richard,

I take your point that the read event is sooner and therefore less vulnerable.

Not just *less* vulnerable but actually *in*vulnerable. Since the problem ultimately affects the reading of the record, then we know a READ event must fire. Thus, you pretty much can't go wrong tapping into this event.

You say that you suspect the lostfocus probably isn't being skipped but doesn't do anything by the time it is called. Surely if the code is processed at all it'll do what it's told - not just as little as it feels like!

I was assuming that the logic would get processed but since the context is no longer valid it wouldn't do anything noticeable. It is possible, however, that the event never gets called.

I'm also a little curious that, in spite of having problems with %SK%, you still use it. Our clients lost several records without trace with %SK% and weren't too chuffed. At least when our lostfocus misbehaves this is immediately apparent. If the key is assigned at write time, the number used isn't displayed to the user. With lostfocus (or read), the number is shown as soon as the key prompt is left.

You pretty much explained why we haven't abandoned %SK%. We want our users to see the key ASAP. Granted there are the alternative ways of doing this but we haven't had the degree of problems others have talked about so we are happily (and ignorantly) keeping %SK% with minor workarounds in place. Obviously we will revisit this if we hear about more serious issues with our applications.

dbakke@srpcs.com

SRP Computer Solutions, Inc.


At 22 AUG 2003 11:25AM Richard Hunt wrote:

I just gotta ask this…

If it was not critical that all sequential keys be used in perfect sequential order… I mean like using 1, 2, 4, 5, skipping 3… would the sequential key be acceptable? Or are there other problems with it?


At 22 AUG 2003 11:54AM Donald Bakke wrote:

Richard,

I'm not sure I fully understand your question, but you've brought up another issue related to the need for custom sequential counters: the need to keep keys properly sequenced and free of gaps.

Key skipping is a common issue with normal %SK% logic. This is because one workstation can have a key locked while it has not yet completed the record. Another workstation creating a record is then forced to use the next available key, so, temporarily at least, there is a gap in the sequence. However, even if the first workstation aborts the creation of the record, that key is still the next default, so eventually it will get used. The worst case scenario is that the date and time sequence (if there is any) will not correspond exactly with the sequence of the keys.
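The mechanics above can be modelled with a toy Python sketch (illustrative names only, not %SK% code): the seed holds the next default key, a workstation starting a new record locks that key, and another workstation is forced past any locked or already-used key, leaving a temporary gap.

```python
seed = {"next": 1}   # next default key
locked = set()       # keys held by workstations mid-entry
on_file = set()      # keys whose records have been written

def take_key():
    k = seed["next"]
    while k in locked or k in on_file:
        k += 1                      # forced past keys other stations hold
    locked.add(k)
    return k

def commit(k):
    locked.discard(k)
    on_file.add(k)

def abort(k):
    locked.discard(k)               # the key remains the next default

a = take_key()      # workstation A takes 1 and sits on it
b = take_key()      # workstation B is pushed to 2: a temporary gap
commit(b)
abort(a)            # A abandons the record...
c = take_key()      # ...and the next assignment reuses 1, closing the gap
```

The gap is only temporary, exactly as described, but the chronological order of records 1 and 2 no longer matches the key order.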

Another issue that comes up is when records are deleted. Some people have additional logic that resets the counter in these situations in order to prevent gaps.

One problem with the use of standard %SK% is when the window is sitting there with the default %SK% in the key id field and the user goes to another window in the application or to another application entirely. This will trigger the LOSTFOCUS of that key id field and consequently read that record. Now that the record has actually been read (even if it is a new record) the end user can accidentally save it if they aren't paying attention. We avoid this by having our promoted READ event check to make sure the original window still has focus AND that the application wasn't minimized (which also causes the LOSTFOCUS event to fire.) If the conditions aren't correct we simply return 0 and null out the key id field. When the user goes back to the window the default logic operates again and the next %SK% gets filled in.

Hope that addresses your question.

dbakke@srpcs.com

SRP Computer Solutions, Inc.


At 23 AUG 2003 12:49PM Peter Lynch wrote:

There is nothing wrong with %SK% in the dictionary of a file - it is a standard and convenient place to keep the last used key.

There is obviously a lot wrong with its usage.

Since there are so many uses for an automatically generated unique key, and some of those uses require that it be sequential, there are also many ways to implement it.

The innate chronological order may be important. If so, the key is best generated in the WRITE event. With the appropriate write-time locking, that simply guarantees the new key is the next one written. It also guarantees no gaps (unless you delete, of course).

In other cases it doesn't matter a damn if a key disappears because a user decided not to continue with a transaction.

If you haven't tried the WRITE-event method of generating a key when you do have the chronological requirement, I recommend you try it. Many of the problems will disappear.

Regards,

Peter


At 24 AUG 2003 09:14AM Richard Guise wrote:

Thanks, folks! Points taken, Don!

I suspect we all store the next available (or last used) key in %SK%, if only for sentimental reasons. If one isn't going to sift through all the numbers from 1, it has to be stored somewhere and it cannot matter too much (within reason) where.

It seems there is a big benefit in allocating the key at read time, as the user can then see what number is being used instead of seeing "New Entry" (or whatever). However, as noted by others in this thread, because a user can take a key number and never file the record, there is a risk of gaps. In order to minimise gaps we use logic which records the last used number, and the SK routine then checks from there - a necessary check in any event, and not much lengthened by working from the last used instead of the (alleged) next available. One day I'll check how gappy the end result is!
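The check-from-last-used scan might look like this (a hypothetical Python model; `next_available`, `table` and `locked` are illustrative names, not the actual routine):

```python
# Scan forward from the last used key: the stored seed may be stale, so
# each candidate is tested against the file (and against any current
# locks) until a free number is found.
def next_available(last_used, table, locked=frozenset()):
    k = last_used + 1
    while k in table or k in locked:
        k += 1
    return k

table = {1: "a", 2: "b", 4: "d"}    # 3 is a gap left by an aborted entry
next_available(2, table)            # finds 3, filling the gap first
next_available(4, table)            # otherwise continues with 5
```

Starting from the last used number (rather than a possibly stale "next available" seed) is what lets old gaps be reclaimed before new numbers are issued.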

If, however, gaps in the sequence are not acceptable then maybe the allocation has to be done secretively at write-time - unless someone can devise a gap-free algorithm for read-time.


At 24 AUG 2003 11:20AM Gerald Lovel wrote:

Problem with assigning keys at write time: relationships between tables cannot be established during entry. For example, a note for the current transaction or charges for the current activity cannot be entered because the current record id is not known.

As for logic for a gap-free read assigned sequence, think of the following:

Instead of keeping only one value in the seed, keep a list of values in ascending order. When assigning a key, scan the list, reading the transaction file for each value. If a key exists on the file, delete it from the list. If a key is locked, skip it. If you use the last id on the list, increment it. When a key is assigned, add it to the list. Then write the list back.

When writing a record, you can delete your key from the list, or simply let the next interactive read do this for you.

As for deleted records, I write transactions in a current file and a history file. When a record is "deleted", my MFS writes the record to the history file with a deleted status, then deletes it from the transaction file.

This results in a sequence which truly has no gaps, in accordance with certain regulatory or accounting practices.
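The delete-to-history behaviour could be modelled roughly like this (two Python dicts stand in for the current and history tables; a real MFS would intercept the delete call itself):

```python
current = {}
history = {}

def delete_record(key):
    # "Delete" moves the record to history with a deleted status instead
    # of destroying it, so its key can never be reissued.
    record = current.pop(key)
    record["status"] = "deleted"
    history[key] = record

def key_in_use(key):
    # Sequential-key checks consult both tables, keeping the sequence
    # genuinely gap-free even after deletions.
    return key in current or key in history

current[7] = {"desc": "order"}
delete_record(7)
```

The essential point is that deletion never frees a key: the history table permanently reserves it.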

Gerald


At 24 AUG 2003 11:30AM Donald Bakke wrote:

Gerald,

When a record is "deleted", my MFS writes the record to the history file with a deleted status, then deletes it from the transaction file. This results in a sequence which truly has no gaps, in accordance with certain regulatory or accounting practices.

Can I assume then that the main window somehow prohibits the end user from manually entering new records using 'deleted' keys?

dbakke@srpcs.com

SRP Computer Solutions, Inc.


At 26 AUG 2003 09:01AM Gerald Lovel wrote:

Don,

Exactly. Since sequential key checks are performed at read time, a check is made against the history table as well as the main transaction table. History actually tracks several transaction tables, guaranteeing that sequences do not overlap in the application. When a read is attempted and the transaction is found out-of-file, an appropriate read error is reported.

Users can also browse the history table by switching the file pointer of the window, although the records are read-only. This file pointer switching in OI is a topic I haven't gotten to yet.

Write again if you want more info, or example code in OI.


At 26 AUG 2003 10:06AM Richard Guise wrote:

You've opened up another topic - archiving. In summary …

We too use an MFS, which has been developed over the years since before OI! When deleting a record it inserts the deletion time and date (it could also record the user, etc.). If a record is (allowed to be) reactivated, this is nulled.

On read, the MFS looks to the archive file if the record is not found on the live file - the deleted datestamp indicates whether it is a live or deleted record.

On write, it datestamps creation and amendment. Also, if it is an archive record, it refuses the write if the archive is not allowed to be altered.

Another MFS on the archive dict file uses the live file's dict if the name is not found in the archive dict. The archive dict must have entries for indexed fields.
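The read/write/delete behaviour described above might look roughly like this in Python (illustrative only; the real logic lives in a Basic+ MFS, and `allow_archive_change` is a hypothetical flag for the reactivation case):

```python
import datetime

live = {}
archive = {}

def mfs_delete(key):
    rec = live.pop(key)
    # Stamp the deletion date and time rather than destroying the record.
    rec["deleted"] = datetime.datetime.now().isoformat()
    archive[key] = rec

def mfs_read(key):
    # Fall through to the archive when the key is absent from the live
    # file; a non-null "deleted" stamp marks it as a deleted record.
    if key in live:
        return live[key]
    return archive.get(key)

def mfs_write(key, rec, allow_archive_change=False):
    # Refuse writes to archived records unless explicitly permitted.
    if key in archive and not allow_archive_change:
        raise PermissionError("archive records may not be altered")
    live[key] = rec

mfs_write(10, {"desc": "invoice"})
mfs_delete(10)
rec = mfs_read(10)      # found via the archive, with its deleted stamp
```

Callers need not know which file a record came from; the deleted datestamp carries that information.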

Hope this helps or interests someone!
