Duplicate keys (OpenInsight 32-Bit)
At 18 JUN 2006 09:44:50PM Kauko Laurinolli wrote:
One of my production databases (v7.2.1) has developed a few duplicate keys.
If I delete one I lose both.
How do I get rid of the duplicate key records?
At 18 JUN 2006 10:46PM Gerald Lovel wrote:
I didn't know you could have duplicate key records.
At 18 JUN 2006 11:32PM [email protected]'s Don Bakke wrote:
Kauko,
I agree with Gerald, duplicate keys are not supposed to be possible. The fact that when you delete one they both disappear would seem to confirm that you are not dealing with duplicates.
In what way are you detecting duplicate keys? I suspect that you have a bad index that is returning the same record more than once, or possibly an exploded sort against an MV field.
At 19 JUN 2006 08:49AM Kauko Laurinolli wrote:
I have tried deleting from the entry screen and from the system editor, both with bad results.
I have rebuilt indexes w/o any help.
Wrote a small program in OI to show the records, but the keys are identical and I can't find what makes them different.
Brought the data into AREV and LIST shows 2 records with the same key.
At 19 JUN 2006 08:56AM Kauko Laurinolli wrote:
Removed all indexes from this file w/o making any difference.
At 19 JUN 2006 09:28AM John Bouley wrote:
There is probably some control character in the key that is not visible in the editor. Duplicate keys are supposed to be technically impossible in linear hash. Have you checked the file for GFEs? How about using AREV's DUMP utility and searching for your key? That may shed some light on whether there are two and what is different about them.
HTH,
John
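A minimal R/BASIC sketch of that byte-by-byte check, assuming a placeholder table name CUSTOMERS and a placeholder visible key value 'AB10045' (neither comes from the thread); run it from AREV, where PRINT output goes to the screen:
   * Sketch only: dump each byte of any key containing the visible value,
   * in decimal and hex, so invisible characters show up.
   open 'CUSTOMERS' to hTable else stop
   select hTable
   done = 0
   loop
      readnext key else done = 1
   until done
      if index(key, 'AB10045', 1) then
         print 'Key length ' : len(key)
         for i = 1 to len(key)
            print i : ' ' : seq(key[i,1]) : ' ' : oconv(seq(key[i,1]), 'MX')
         next i
      end
   repeat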
At 19 JUN 2006 12:44PM Gerald Lovel wrote:
My next suggestion too. I have cases in AREV where records are posted from an MV entry. If the user presses Ctrl-E in the MV field and enters two key values together, then AREV stores one record with the @SVM in the key, but LIST shows it as two records.
Well, you won't like the fix. Do a record count on the table. Then copy the original Table to QTable on DATAVOL. Clear the original table (I said you wouldn't like this). Select QTable, excluding the one bad key. Copy the records from QTable to Table. Do a record count. It should be one record fewer. You can add back the funky record by other means.
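A minimal R/BASIC sketch of the copy-back step Gerald describes, assuming the original table has already been copied to QTABLE and then cleared; TABLE and QTABLE are placeholder names, and the bad key is skipped by testing for bytes outside the printable range rather than by value, since the duplicates cannot be told apart by eye:
   * Sketch of the copy-back, run after TABLE has been cleared.
   * TABLE and QTABLE are placeholders. Adjust the byte range if
   * legitimate keys contain accented or other extended characters.
   open 'TABLE' to hTable else stop
   open 'QTABLE' to hQTable else stop
   select hQTable
   done = 0
   loop
      readnext key else done = 1
   until done
      bad = 0
      for i = 1 to len(key)
         if seq(key[i,1]) lt 32 or seq(key[i,1]) gt 126 then bad = 1
      next i
      if not(bad) then
         read rec from hQTable, key then
            write rec on hTable, key
         end
      end
   repeat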
At 19 JUN 2006 04:49PM Kauko Laurinolli wrote:
No GFEs in any of my files.
At 19 JUN 2006 04:52PM Kauko Laurinolli wrote:
Probably the least time-consuming way to fix these few is to print them on paper, delete them and re-enter them.
At 19 JUN 2006 07:55PM Barry Stevens wrote:
In AREV, can you use the debugger to display the keys in hex?
At 19 JUN 2006 09:35PM Kauko Laurinolli wrote:
Deleting one deleted both.
When I create a new record under the same @ID, I again end up with two duplicate keys and records visible.
At 19 JUN 2006 11:01PM Karen Oland wrote:
If you have keys with control characters or MV marks, you can't just delete them using the editor or an entry screen. You can do so programmatically (but sometimes not with a select list - it depends on what has crept into the key). Without using something like the AREV DUMP utility, you can't figure out what the key is, though.
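A minimal R/BASIC sketch of such a programmatic delete, assuming DUMP (or a hex listing) has shown the stored key to be the visible value with an @SVM and a second copy appended; TABLE and 'AB10045' are placeholders:
   * Sketch only: rebuild the key exactly as it is stored, then delete it.
   * The key composition must match what DUMP shows, byte for byte.
   open 'TABLE' to hTable else stop
   badKey = 'AB10045' : @svm : 'AB10045'
   read rec from hTable, badKey then
      delete hTable, badKey
   end else
      * not found as built - recheck the exact bytes in the stored key
      null
   end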
At 20 JUN 2006 01:38AM Warren Auyong wrote:
You can write a symbolic that uses the SEQ function.
At 20 JUN 2006 06:05AM Kauko Laurinolli wrote:
You mean LHDUMP?
But how do I find the particular record in there?
Paging through and looking for it is quite hopeless.
At 20 JUN 2006 12:13PM Karen Oland wrote:
Not sure how that would help in finding the current record that has a bad ID.
The easiest way I've found (on a runtime - you can write a program on a dev copy) is to do a select of the file (with no "BY" or "WITH" clauses) and save the list. Then visually scan it (for small files).
Or use a run-time symbolic to select records that have non-character/numeric values in the ID and save that list.
Once you have a list of IDs, do a getlist/delete on the file.
With a dev copy, it is a little easier, but even a runtime AREV can be used to find these records with bad IDs. I had a system that I supported develop them some years back, but it's been a long time since I've had to worry about them (not that they might not resurface under OI - where it is no doubt harder to find them, perhaps impossible with a runtime engine).
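A minimal R/BASIC sketch of the getlist/delete step, written as a subroutine that takes the bad IDs as an @FM-delimited list (for example, keys pulled from the saved select list); the subroutine name and TABLE are placeholders:
   subroutine delete_bad_keys(badKeys)
   * Sketch only: delete every key in the @FM-delimited list badKeys.
   * TABLE is a placeholder for the real table name.
   open 'TABLE' to hTable else return
   nKeys = dcount(badKeys, @fm)
   for k = 1 to nKeys
      key = badKeys<k>
      read rec from hTable, key then
         delete hTable, key
      end
   next k
   return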
At 20 JUN 2006 12:46PM Warren Auyong wrote:
Save these in the table's dictionary as symbolics:
HEXID (varchar(50))
temp = ''
lid = len(@id)
for i = 1 to lid
   byte = @id[i,1]
   seqno = seq(byte)
   if seqno ge 33 and seqno le 122 then
      temp := byte
   end else
      temp := '\' : oconv(seqno, 'MX') : '\'
   end
next i
@ans = temp
BADID (boolean)
@ans = ''
lid = len(@id)
for i = 1 to lid until @ans
   byte = @id[i,1]
   seqno = seq(byte)
   if seqno ge 33 and seqno le 122 else
      @ans = 1
   end
next i
Select file with BADID=1
LIST file HEXID (so you can see what's funny in the ids)
Note that IDs with an embedded @VM will not show in LIST, since that is the syntax used for exploded sorts. You can save a list and edit the saved list instead.
At 22 JUN 2006 11:56AM Gerald Lovel wrote:
Kauko,
If this is not yet resolved, I have another suggestion. If the key column's data type is missing, and the key column is indexed, and the data varies widely in length, then long keys would produce just the difficulty you describe. Check that your key columns have data types in the dictionary, such as VARCHAR(255). Add any missing data types, then rebuild all indexes on the table.
Gerald
At 23 JUN 2006 06:40AM Kauko Laurinolli wrote:
No missing data types here.
I still do not know how to locate those particular records using LHDUMP.