Database Size - How big is too big? (AREV V3.12) (AREV Specific)
At 21 DEC 1999 06:15:27PM Kevin Gray wrote:
The documentation for AREV suggests there is no upper limit
on database size or on table size, unless we are reading it
incorrectly.
During recent months we have seen a major client's data table
grow at a rate of over 65,000 rows per month. Since July 1 it
has added over 300,000 rows, so twelve months of transactions
will total approximately 800,000 rows.
Each data row has an average size of 80 bytes according to the
SIZE column, so the rows are quite small in relative terms.
There are six Btree indexes on the table, and both reporting
and query performance are excellent.
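For rough sizing, here is a quick back-of-the-envelope sketch of those figures (the monthly growth rate and the 80-byte SIZE average come from the post above; the rest is just illustrative arithmetic):
[code]
# Back-of-the-envelope sizing from the figures quoted above (Python).
rows_per_month = 65_000          # observed growth rate
avg_row_bytes = 80               # average per the SIZE column

rows_per_year = rows_per_month * 12          # 780,000, quoted as ~800,000
data_bytes = rows_per_year * avg_row_bytes   # raw record data only

print(f"rows/year:  {rows_per_year:,}")
print(f"data bytes: {data_bytes:,} (~{data_bytes / 2**20:.0f} MB)")
# Roughly 60 MB of raw row data per year, before LH frame overhead
# and the six Btree indexes, which live in separate index structures.
[/code]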
Where is that outer boundary?
Given that this is just one of the database tables, and that
some of the others have over 100,000 rows in them, are there
system constraints that we are approaching?
The platform is a high-powered NT Server with NT Service 1.5,
and disk administration is via RAID 5, all of which is fine at this time.
Would appreciate feedback.
Thanks in anticipation.
Regards,
Kevin Gray
Graycorp (kevin@graycorp.com.au)
At 21 DEC 1999 08:13PM Don Bakke wrote:
Kevin,
There are others on this forum who I believe have more experience with some of the larger AREV systems and tables out there. I believe that in theory there is no upper limit. Early this year we inherited a system that totalled almost 7GB in size. About 3-4 AREV tables are between 1 and 2GB in OS size each, and have between 1.5 and 2 million rows each. These all have multiple Btree, XRef, and Relational indexes on them.
The system runs very well, and robustly, as long as the data and index files stay healthy. At first this was not the case, due to improper configuration of the workstations, server, and NLM. We've had to spend countless hours rebuilding indexes, and far too many critical processes were dependent on the Relational indexes. Most of this has been put to rest now, and I'm amazed (as are some Microsoft/SQL groupies who have reviewed the system) at how well AREV can manage at this level.
Outside of very long index rebuilds, we've discovered that fixing GFEs with DUMPLH is sometimes impossible on very large tables. We had to create our own utility (with Aaron Kaplan's guidance) that can clean out a nasty frame when necessary. Fortunately we haven't had to use it in a while.
dbakke@srpcs.com
At 21 DEC 1999 08:50PM Long live the King wrote:
Isn't it amazing
At 22 DEC 1999 04:04AM Curt wrote:
A couple years ago I managed a name/address file of a little less than 6 million names/addresses. No problems.
At 22 DEC 1999 10:26AM akaplan@sprezzatura.com - [url=http://www.sprezzatura.com]Sprezzatura Group[/url] wrote:
In actuality, the LH structure does have limits. These limits are based on two unrelated items. One is the actual limit of the LH implementation based on the information stored in the header. The other is the maximum file size of the operating system. We'll leave the OS limitation for another time and concentrate on the LH structure.
Essentially, all header information is stored in 2 or 4 bytes in a base-256 system. Only the frame size and sizelock are in 2 bytes; the rest are in 4. So, translating FFFFFFFF and FFFF into base 10, we have 4,294,967,295 and 65,535 as the maximum values for the various sections.
So, what does this mean in English?
The maximum frame size is 64K. The maximum record size is 4G. The maximum number of frames is 4G in primary space and another 4G in overflow. Figuring on a maximum frame size of 65,535 bytes, that means you could have 281,474,976,710,656 bytes, which is basically 281 trillion bytes, or 281 terabytes, of data stored in the LK file alone, with another 281 terabytes in the OV file.
The limiting value on the system could be the record count, since that cannot exceed 4G, so you might find your system failing at that point. I've never really tested it, though, so use these numbers at your own risk.
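To make that arithmetic concrete, here is a minimal sketch of the limits described above (using round powers of two; the true maxima of hex FFFFFFFF and FFFF are each one less, which changes the totals only negligibly):
[code]
# LH header fields are base-256 (i.e. plain binary) values held in
# 2 or 4 bytes, so their maxima are 0xFFFF and 0xFFFFFFFF (Python).
max_2_bytes = 0xFFFF        # 65,535: frame size, sizelock
max_4_bytes = 0xFFFFFFFF    # 4,294,967,295: record size, frame counts

# Primary (LK) capacity: number of frames times bytes per frame.
frames = 2**32              # ~4G frames in primary space
frame_size = 2**16          # 64K maximum frame size
lk_bytes = frames * frame_size

print(f"2-byte max:  {max_2_bytes:,}")
print(f"4-byte max:  {max_4_bytes:,}")
print(f"LK capacity: {lk_bytes:,} bytes (~{lk_bytes / 10**12:.0f} TB)")
# 281,474,976,710,656 bytes, i.e. the 281 TB quoted above, with the
# same again in the OV (overflow) file; the 4G record-count ceiling
# is likely to bite first.
[/code]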
akaplan@sprezzatura.com
[i]World leaders in all things RevSoft[/i]
At 22 DEC 1999 10:57AM Mr C wrote:
So, how long would that take to re-index then?
At 22 DEC 1999 11:06AM akaplan@sprezzatura.com - [url=http://www.sprezzatura.com]Sprezzatura Group[/url] wrote:
Grab a Snickers…
akaplan@sprezzatura.com
[i]World leaders in all things RevSoft[/i]
At 22 DEC 1999 03:48PM Eric Emu wrote:
That's a lot of Christmas cards, Curt.
At 22 DEC 1999 07:28PM Kevin Gray wrote:
Once again, thanks to all who responded. It is reassuring to
know the depth of expertise available within the AREV developer
community.
Clearly our "relatively large tables" are mere chicken feed in
the whole context of things, which is what we always suspected.
When large tables encounter data corruption, there is clearly a
need for either off-system rectification or recovery from backups,
and we have to expect index rebuilds to take some time.
Thanks once more for your assistance.
Do have a Merry Christmas and a Happy New Year celebration.
Kind Regards,
Kevin Gray
Graycorp
(email keving@graycorp.com.au)
At 28 FEB 2000 04:37AM hari s wrote:
How long does it take to rebuild the index?
How long does it take to print selected names?