Friday, March 9, 2012

How large can a database be?

I am interested in getting some stats regarding recommended hardware per DB size/complexity. I've been using SQL Server 7 and 2000 for about 4 years with zero issues until now. You see, I've always worked with relatively tiny databases--maybe 100 or 200 MB. Now I'm using a several-GB database that has only 2 tables. One table is a transaction log with about 6 million records inserted per day. There are 29 columns, most of which are int (not a big, complex table with text and varchar cols). We have 3 indexes on the columns we most often query against. I don't know the exact specs on the hardware, but it's no lightweight. I'm told it's a newer 800 MHz front-side-bus system with "lots" of RAM. (Of course, I can find out exactly.) But it's supposedly a very fast server with plenty of disk space and RAM. SQL Server almost comes to a halt every day doing the inserting. Then, after hours, when nobody is doing anything on the server, it appears to be brought to its knees just doing "select count(1) from table". This table has only 19 million records in it. I read on Microsoft's website about companies with multi-terabyte DBs. Of course, they are probably using massive 64-bit hardware. It seems my DB is relatively small, but it's the largest I've worked with, and I'm desperately trying to get it to perform better. There are no locking issues, and DBCC CHECKDB does not report any problems. I've run Profiler traces, but I really don't know what to look for.

You can get away with inefficiencies with smaller databases, but larger databases are much less forgiving. You need to know the details of your existing hardware configuration in order to properly configure the system for maximum efficiency.
Are data and log files on separate devices? Segregating log and data is especially important for high-volume logged operations. Ideally, you should have a RAID 10 array when your application is write-intensive, fault tolerance is required, and performance is important to you.
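A quick way to check the current placement (a minimal sketch; MyBigDB is a placeholder for your own database name):

    -- List the data and log files of the database, including their
    -- physical paths; files on the same drive letter share spindles.
    USE MyBigDB
    EXEC sp_helpfile
    GO

If the log file turns out to share a device with the data file, moving it to its own array is one of the cheaper wins for heavy insert activity.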
How are the inserts performed? The fastest way to insert data into SQL
Server is with a bulk insert method, such as BULK INSERT, DTS, ODBC BCP or
OLEDB IRowsetFastLoad.
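For example, if the rows arrive as a delimited flat file, a minimal BULK INSERT sketch might look like this (the table name dbo.TransactionLog and the file path are assumptions for illustration):

    -- Load a comma-delimited file in batches. TABLOCK takes a table
    -- lock, which is a prerequisite for minimal logging under the
    -- right database settings; BATCHSIZE commits in chunks so the
    -- transaction log stays manageable.
    BULK INSERT dbo.TransactionLog
    FROM 'C:\loads\translog.txt'
    WITH (
        FIELDTERMINATOR = ',',
        ROWTERMINATOR = '\n',
        TABLOCK,
        BATCHSIZE = 100000
    )

Six million singleton INSERT statements a day, by contrast, pay per-statement overhead every single time.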
How many processors? If you experience too much parallelism, consider
reducing the 'max degree of parallelism' option or adding a MAXDOP hint.
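A sketch of both approaches (the value 4 and the table name are examples only; pick a setting appropriate for your processor count):

    -- Server-wide cap on parallel plans; this is an advanced option.
    EXEC sp_configure 'show advanced options', 1
    RECONFIGURE
    EXEC sp_configure 'max degree of parallelism', 4
    RECONFIGURE

    -- Or cap a single statement with a query hint:
    SELECT COUNT(1) FROM dbo.TransactionLog OPTION (MAXDOP 1)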
If you haven't already done so, update statistics or rebuild indexes.
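For instance (again with the hypothetical table name):

    -- Refresh the optimizer's statistics with a full scan of the table.
    UPDATE STATISTICS dbo.TransactionLog WITH FULLSCAN

    -- Or rebuild all indexes on the table, which defragments them and
    -- updates their statistics as a side effect.
    DBCC DBREINDEX ('dbo.TransactionLog')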
Hope this helps.
Dan Guzman
SQL Server MVP
"Troy" <anonymous@.discussions.microsoft.com> wrote in message
news:5ABE586C-540C-43AD-990E-E5D6DE53E7FA@.microsoft.com...