Monday, March 12, 2012

filegroup question

I apologize if this shows up twice; the first time I sent it, I got an error.
An article snippet:
If your database is very large and very busy, multiple files can be used to
increase performance. Here is one example of how you might use multiple
files. Let's say you have a single table with 10 million rows that is heavily
queried. If the table is in a single file, such as a single database file,
then SQL Server would only use one thread to perform a read of the rows in
the table. But if the table were divided into three physical files (all part
of the same filegroup), then SQL Server would use three threads (one per
physical file) to read the table, which potentially could be faster. In
addition, if each file were on its own separate physical disk or disk array,
the performance gain would be even greater.
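(For reference, the layout that snippet describes is just several data files in one filegroup. A minimal sketch follows; the database name, logical file names, sizes, and drive letters are purely illustrative.)

    -- Add two more data files to the PRIMARY filegroup, each on its own physical drive.
    -- SQL Server proportionally fills the files in a filegroup, so new allocations are
    -- spread across all of them; existing pages are not automatically redistributed.
    ALTER DATABASE Sales ADD FILE
        (NAME = N'Sales_Data2', FILENAME = N'E:\SQLData\Sales_Data2.ndf', SIZE = 1GB);
    ALTER DATABASE Sales ADD FILE
        (NAME = N'Sales_Data3', FILENAME = N'F:\SQLData\Sales_Data3.ndf', SIZE = 1GB);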
Is this true from your experiences? If so, why not just split the whole db
into multiple files on the same filegroup?

There are other factors to consider.
How many physical disk drives and disk controllers do you have, and how many
threads does each controller allow?
If too many reads are occurring, you may end up with some disk thrashing as
the switches occur.
Are you set up with a RAID array, and if so, how is that array handling
things?
From personal experience, I have found that the best of all worlds for small
to medium-size databases (30GB or less) appears to be the following:
SQL Server and core files installed on the root drive.
Transaction logs on RAID 1 drives.
Database files on RAID 5 drives.
Very heavily used tables placed in their own filegroup (a sketch follows
below).
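For example, a minimal sketch of that last point, assuming a database named
Sales and a heavily used Orders table (all names, sizes, and paths here are
illustrative, not prescriptive):

    -- Create a dedicated filegroup with its own file, ideally on its own drive/array
    ALTER DATABASE Sales ADD FILEGROUP HotTables;
    ALTER DATABASE Sales ADD FILE
        (NAME = N'Sales_Hot1', FILENAME = N'G:\SQLData\Sales_Hot1.ndf', SIZE = 500MB)
        TO FILEGROUP HotTables;

    -- Create (or rebuild) the heavily used table on that filegroup
    CREATE TABLE dbo.Orders
    (
        OrderID    int      NOT NULL PRIMARY KEY,
        CustomerID int      NOT NULL,
        OrderDate  datetime NOT NULL
    ) ON HotTables;

An existing table can be moved the same way by rebuilding its clustered index
on the new filegroup.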
I'm sure others have some great input to this question as well.
Rick Sawtell
MCT, MCSD, MCDBA
"ChrisR" <ChrisR@.discussions.microsoft.com> wrote in message
news:169D42A6-339E-47CA-B0A4-23793C2AF884@.microsoft.com...
> I apologize if this shows up twice as the first time I sent it said there
was
> an error. An article snippet:
> If your database is very large and very busy, multiple files can be used
to
> increase performance. Here is one example of how you might use multiple
> files. Let's say you have a single table with 10 million rows that is
heavily
> queried. If the table is in a single file, such as a single database file,
> then SQL Server would only use one thread to perform a read of the rows in
> the table. But if the table were divided into three physical files (all
part
> of the same filegroup), then SQL Server would use three threads (one per
> physical file) to read the table, which potentially could be faster. In
> addition, if each file were on its own separate physical disk or disk
array,
> the performance gain would even be greater.
>
> Is this true form your experiences? If so, why not just split the whole db
> into multiple files on the same filegroup?|||ChrisR wrote: <snip>
> But if the table were divided into three physical files (all part
> of the same filegroup), then SQL Server would use three threads (one per
> physical file) to read the table, which potentially could be faster.
Well, multiple threads won't help much if the bottleneck is the I/O on a
particular disc. Remember that disk I/O is orders of magnitude slower than
RAM access or a context switch.
Since I primarily use OLTP type applications I have never bothered to
examine the potential 'gain'. In my experience the important part is to
activate as many physical discs as possible. This can be done with a
RAID setting, or by placing one or several files on each disc and
assigning them to individual or shared filegroups. In my experience the
simplest advice is to stripe and mirror everything. Unless you want to
do very specific database tuning every once in a while, this is probably
the best generic advice.
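If you want to see how the files and filegroups are currently laid out before
rearranging anything, the standard system procedures will show it (Sales is
just an illustrative database name here):

    USE Sales;
    EXEC sp_helpfile;        -- each data/log file with its filegroup, path, and size
    EXEC sp_helpfilegroup;   -- each filegroup and the number of files it contains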
HTH,
Gert-Jan

"ChrisR" <ChrisR@.discussions.microsoft.com> wrote in message
news:169D42A6-339E-47CA-B0A4-23793C2AF884@.microsoft.com... <snip>
> But if the table were divided into three physical files (all part of the
> same filegroup), then SQL Server would use three threads (one per physical
> file) to read the table, which potentially could be faster.
My understanding is that this was true under SQL 6.5, but it is no longer true
in SQL 2000 (which I believe does support multiple threads per physical
file).
