
At 24 MAR 2003 11:07:20PM Scott Kapaona wrote:

When running a SELECT command on a zip code file, the SELECT returns 46797 records. If I DUMPLH the zip file, the record count shows 46797 records as well. A TCL COUNTROWS on the zip file also shows 46797 records. Same number every time. However, if I export this file to DBASE III format and read it from Foxpro, it shows 86668 records, which is correct. What is wrong, or what am I not doing, to account for those other records? Any help on this would be greatly appreciated.

Arev 3.12 - Stand Alone PC

Expanded Memory Active

Arev /XM4096

B-Tree index on ZIP column

B-Tree index on ST column

No Quickdex/Rightdex

Thank You,

Scott Kapaona


At 24 MAR 2003 11:28PM Curt Putnam wrote:

Did you just do a major import?


At 25 MAR 2003 06:06AM Hippo wrote:

Do you use only single-valued fields?


At 25 MAR 2003 08:45AM Don Miller - C3 Inc. wrote:

Is this the standard AREV export program or a "roll-your-own"? I almost NEVER use the AREV export programs, preferring to do my own coding, but I've seen records dropped when the data is invalid for some reason (mostly numerics). If you have the source code, you could track the problem down by modifying the logic to notify you of any case where an incoming record is not written. Something like:

INP_COUNT = 0  ;* count of input records read
OUT_COUNT = 0  ;* count of output records written

LOOP0:
READNEXT @ID THEN
   INP_COUNT += 1  ;* bump the counter at the READNEXT point
   WRITTEN = 0     ;* set to 1 when this record reaches the output file
   READ @RECORD FROM FILE_IN, @ID THEN
      OK = 0  ;* conversion flag; CONVERT sets it to 1 on success
      GOSUB CONVERT
      IF OK THEN
         OSBWRITE CONV_REC ... etc. THEN
            OUT_COUNT += 1
            WRITTEN = 1
         END
      END ELSE
         * process the conversion error here
      END
   END ELSE
      * process the read error here
   END
   IF NOT(WRITTEN) THEN
      * input record number INP_COUNT (key @ID) was not written -- log it
   END
   GOTO LOOP0
END ELSE
   * traversal complete -- report the totals
   PRINT 'Input Count: '  : INP_COUNT
   PRINT 'Output Count: ' : OUT_COUNT
END

CONVERT:
* set OK to 1 if the conversion is valid; otherwise post an error
* message about what failed
OK = 1
RETURN

Maybe this will work for you.

Don Miller

C3 Inc.


At 25 MAR 2003 10:24AM Scott Kapaona wrote:

Hey Guys,

 No major import was done, and these are single-valued fields. The Arev export routine was used. This is purely out of curiosity at this point.

Thanks,

Scott Kapaona


At 25 MAR 2003 10:43AM Warren wrote:

What is the select statement you are using? Are you selecting by either of the indexed fields? What is @CRT set to in the dictionary? Are you doing a select before running the dBase export?

What count do you get using an RBASIC select statement vs a TCL select statement?

The count in the header frame is not always accurate. A fully resolved select should update it though.

What happens if you rebuild the indexes or remove and replace them?


At 25 MAR 2003 10:44AM Hippo wrote:

Two suggestions from me:

1)

from TCL:

PDISK C:\TST.tst (S)

LIST zip (P)

PDISK PRN (S)

Import TST.tst into FOXPRO (remove the new-page headings) and compare keys. Keys which are in the DBASE III export but not in the LIST output will help you in further investigation.

2) I am suspicious of the READNEXT loop (my first topic here was never resolved). Maybe there is a problem with the TCL "SELECT ZIP" that damages the FMC pointer during the READNEXT loop. The resulting RECORDCOUNT would then be wrong, and would be wrongly saved into group 0 when the READNEXT loop traverses all records.


At 25 MAR 2003 01:14PM Richard Hunt wrote:

Scott,

The incorrect record count is a result of AREV's linear hash file structure. Within group 0 of each linear hashed file there is a location that stores the record count.

The simple rbasic "SELECT filevar" sentence uses this "group 0" record count rather than the actual record count of the file. So do "DUMPLH" and "COUNTROWS".

Normally this "group 0" record count is updated correctly. Sometimes it becomes invalid, probably due to size locking (maybe). I think this "group 0" record count is checked and updated during file operations where every record in the file is read.

So the record count you are seeing is simply invalid, and an invalid record count, by itself, does not indicate any other damage or problem in the file.

And now you know that even though this is a quick way of getting the record count of a file, it might not always be accurate.
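
For what it's worth, the way to get a count you can trust from rbasic is to physically walk the file with READNEXT. A minimal sketch, on the assumption that a raw file cursor reads every group directly rather than consulting the group 0 figure (if this also comes back 46797, a stale group 0 count alone doesn't explain it):

OPEN 'ZIP' TO ZIP_FILE ELSE
   PRINT 'Cannot open ZIP'
   STOP
END
SELECT ZIP_FILE  ;* raw cursor on the file itself, no index involved
CNT = 0
DONE = 0
LOOP
   READNEXT ID ELSE DONE = 1
UNTIL DONE DO
   CNT += 1  ;* one increment per record physically read
REPEAT
PRINT 'Records actually read: ' : CNT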


At 26 MAR 2003 12:10PM Scott Kapaona wrote:

I don't have an @CRT definition in the dictionary. The SELECT statement is 'SELECT ZIP BY ST'. Both the ZIP and ST columns are BTree indexed. And the DBASE export was only done so that I could see exactly how many records were actually in the file. Maybe a REMAKEFILE would correct the situation and report the correct number of records.

I'll give it a try…

Scott Kapaona


At 26 MAR 2003 12:18PM Victor Engel wrote:

I would check to make sure each ZIP had exactly one ST value, and then I'd rebuild the ST index.
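
Something like this would do the check (a rough sketch; it assumes ST is field 2 of the ZIP record, so adjust the field position to match your dictionary):

OPEN 'ZIP' TO ZIP_FILE ELSE
   PRINT 'Cannot open ZIP'
   STOP
END
SELECT ZIP_FILE
DONE = 0
LOOP
   READNEXT ID ELSE DONE = 1
UNTIL DONE DO
   READ REC FROM ZIP_FILE, ID THEN
      IF INDEX(REC<2>, @VM, 1) THEN  ;* assumes ST is field 2
         PRINT 'Key ' : ID : ' has a multivalued ST'
      END
   END
REPEAT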


At 27 MAR 2003 04:42AM Hippo wrote:

What happens when you perform

"select zip …"

"savelist x"

"getlist x"

What number of rows does the message show?


At 28 MAR 2003 03:03PM Scott Kapaona wrote:

Hippo,

 Thanks for your response(s). I tried what you suggested. It still returns the lesser number of records. I'm beginning to think it's being truncated somewhere because of the 64K limit on the indexes.

Thanks,

Scott


At 28 MAR 2003 04:16PM Victor Engel wrote:

Perhaps you have a delimiter in one of your keys, thus throwing the index off. Have you scanned for this yet?
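
One way to scan for that (a sketch; it just looks for the system delimiters @FM, @VM and @SVM embedded in each key):

OPEN 'ZIP' TO ZIP_FILE ELSE
   PRINT 'Cannot open ZIP'
   STOP
END
SELECT ZIP_FILE
DONE = 0
LOOP
   READNEXT ID ELSE DONE = 1
UNTIL DONE DO
   IF INDEX(ID, @FM, 1) OR INDEX(ID, @VM, 1) OR INDEX(ID, @SVM, 1) THEN
      PRINT 'Key with embedded delimiter, length ' : LEN(ID)
   END
REPEAT

Printing the key's length rather than the key itself keeps the output readable if a delimiter really is embedded.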


At 31 MAR 2003 04:31AM Hippo wrote:

The 64K limit causes a "variable exceeds …" error message.

I don't think this is the problem.

And you didn't answer: what messages does the select … savelist … getlist sequence give? It may be crucial, since it should not depend (I expect at least the third one does not) on information stored in group 0.


At 01 APR 2003 01:06PM Warren wrote:

Try rebuilding the indexes, or a SELECT BY a non-indexed field, or just a COUNT filename.
