Wednesday, March 21, 2012

How many rows can a given processor handle

Hi all,
I know this is a very general question but if anyone can give me even a
ballpark figure that would be a big help.
I have a database in which a single table has between 75 million and 150
million rows. These are the absolute maximum values.
The datatypes are all very simple and there are about 8 columns. This table
links to various other tables through standard relationships.
There are six other tables.
When a join occurs, it involves at most three tables.
My question: can a dual Xeon 3.06 GHz dedicated SQL Server machine with 3 GB of RAM
and SCSI disks handle this with 100 users at a time?
If not, could anyone suggest a spec that could?
Thanks for any advice you can share
Simon|||Well, it all depends on what kinds of joins and what other operations
are needed.
Here's a rough estimate: 8 columns at 2 bytes each, so about 16 bytes per
row. Say it has 150 million rows; that's
150,000,000 * 16 = 2,400,000,000 bytes ≈ 2.2 GB
Assume on average each user retrieves 1.5 million rows (which is still
a lot); then you should have enough memory to handle 100 users. CPU won't
be a problem unless you do some complicated calculations. I'm more
worried about disk performance: unless you design your indexes right,
there may be table scans, which will really hurt performance.
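As a sanity check, that back-of-envelope arithmetic can be sketched as follows. The 16-bytes-per-row figure is the poster's simplifying assumption; it ignores SQL Server's per-row and per-page overhead, so the real on-disk size would be larger.

```python
# Rough sizing sketch, using the assumptions from the post:
# 8 columns at ~2 bytes each (ignores SQL Server row/page overhead).
BYTES_PER_ROW = 8 * 2          # 16 bytes per row, by assumption
MAX_ROWS = 150_000_000         # upper bound from the question

table_bytes = MAX_ROWS * BYTES_PER_ROW
print(f"table size:  {table_bytes / 2**30:.1f} GiB")   # ≈ 2.2 GiB

# Worst-case working set if 100 concurrent users each touch 1.5M rows:
users, rows_per_user = 100, 1_500_000
working_set = users * rows_per_user * BYTES_PER_ROW
print(f"working set: {working_set / 2**30:.1f} GiB")   # ≈ 2.2 GiB
```

Both figures land just under the proposed 3 GB of RAM, which is why the answer hinges on indexing and disk layout rather than CPU.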
Eric Li
SQL DBA
MCDBA
|||Thanks for the reply, Eric.
If anyone has experience with very large databases, I would really
appreciate any information on the sort of machines being used to run them.
Thanks all
Simon|||Here:
http://www.microsoft.com/sql/evaluation/casestudies/default.asp
Bojidar Alexandrov|||And "the one":
http://www.microsoft.com/resources/casestudies/CaseStudy.asp?CaseStudyID=11205
Bojidar Alexandrov|||For a large DB, say > 100 GB, most of the time disk and memory are
more of a concern than CPU. Unless there are a lot of complicated
floating-point calculations, like financial risk analysis or scientific
modeling, a 4-CPU configuration should be good enough.
Memory is cheap now, so get as much as you can. I worked on a 600 GB
DB before, and it ran happily on an 8 GB machine.
Get a SAN if you have the money, or RAID 10, or multiple SCSI disk controllers;
partition your DB into different filegroups and put different filegroups on
different disks if possible. There are just too many different approaches,
and it all depends on how big your budget is.
Or you can get multiple machines, partition your DB, and chain those
machines together.
There's no one-size-fits-all solution; you need a seasoned DBA to look at
your requirements.
--
Eric Li
SQL DBA
MCDBA
|||Thanks all.
That's all really helpful. Cheers
Simon

Labels: sql
