
Monday, March 26, 2012

Fillfactor

I have a very simple insert statement into a table containing about a
million rows.
It runs in a fraction of a second for about 1000 executions, then takes about
5 seconds, and then again runs in a fraction of a second for about 1000
inserts. There are 3 indexes on the table with fillfactor 70.
Am I right in guessing that the insert takes 5 seconds when a page splits?
Should I decrease the fillfactor to 50? Or could it be something else?
Thanks.
Perhaps there are other possible sources, but it is more likely your log
files or data files growing. The quick and dirty check is to simply monitor
both files before and after to see whether growth occurred. Autogrowth is
evil in my mind.
A single page split takes an 8K allocated page and splits its contents
across two locations. While you don't want this to happen a ton, it is not a
task that would take 5 full seconds, or at least the stall would not come
after record 997 was inserted. I suppose if the records you are inserting
all land on the SAME part (page) of the index, this could be problematic.
If the indexes are related to your performance issues, you could drop them
prior to the insert, insert, and then rebuild them. That is an expensive
operation too, though.
Mark
"Michael Kansky" <mike@.zazasoftware.com> wrote in message
news:%23zHWWj$mIHA.5704@.TK2MSFTNGP05.phx.gbl...
>I have a very simple insert statement into a table containing about a
>million rows.
> It runs a fraction of a second for about 1000 times and then takes about 5
> seconds, and then again fraction of a second for about 1000 inserts. There
> are 3 indexes on the table with fillfactor 70.
> Am i right by guessing that at the time when insert takes 5 seconds the
> page splits? Should i decrese fillfactor to 50? Or could it be something
> else?
> Thanks.
>
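A quick way to run the before-and-after check Mark describes, sketched for SQL Server 2005 or later (YourDb is a placeholder name):

```sql
-- Size and autogrowth settings for every file in the current database.
-- Capture this before the batch of inserts and again after the 5-second
-- stall: a jump in size means an autogrow event fired mid-insert.
USE YourDb;   -- placeholder: the database taking the inserts
SELECT name,
       size * 8 / 1024 AS size_mb,   -- size is counted in 8 KB pages
       growth,                       -- growth increment (pages or percent)
       is_percent_growth
FROM sys.database_files;
```

If autogrowth is the culprit, pre-sizing the files and using a fixed-MB growth increment (rather than a percentage) usually removes the stall.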
|||1) You can monitor page splits using a PerfMon counter. I doubt very much
this is the issue.
2) The issue could be log flushes, checkpoints, blocking, or something
random hitting the server hard. Examine waiting tasks (not sure which
version you are on, so I can't specify exactly how to do that) when the
delay occurs. Also run sp_who2 'active' to check for blocking.
Kevin G. Boles
Indicium Resources, Inc.
SQL Server MVP
kgboles a earthlink dt net
"Michael Kansky" <mike@.zazasoftware.com> wrote in message
news:%23zHWWj$mIHA.5704@.TK2MSFTNGP05.phx.gbl...
>I have a very simple insert statement into a table containing about a
>million rows.
> It runs a fraction of a second for about 1000 times and then takes about 5
> seconds, and then again fraction of a second for about 1000 inserts. There
> are 3 indexes on the table with fillfactor 70.
> Am i right by guessing that at the time when insert takes 5 seconds the
> page splits? Should i decrese fillfactor to 50? Or could it be something
> else?
> Thanks.
>
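Kevin's checks can be sketched like this on SQL Server 2005 or later (earlier versions would need PerfMon and sysprocesses instead):

```sql
-- 1) The page-split counter PerfMon exposes, cumulative since server start.
--    Sample it twice around the stall; a large delta implicates splits.
SELECT cntr_value AS page_splits_since_startup
FROM sys.dm_os_performance_counters
WHERE counter_name = 'Page Splits/sec';

-- 2) What user sessions are waiting on at the moment of the delay.
SELECT session_id, wait_type, wait_duration_ms, blocking_session_id
FROM sys.dm_os_waiting_tasks
WHERE session_id > 50;   -- skip system sessions

-- 3) The blocking check Kevin mentions.
EXEC sp_who2 'active';
```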
|||I agree with the others that it is probably not related to the fill
factor. I would guess blocking first, disk queues next (checkpoints etc.),
and maybe even file autogrowths. Make sure there is plenty of free space in
the data and log files.
Andrew J. Kelly SQL MVP
Solid Quality Mentors
"Michael Kansky" <mike@.zazasoftware.com> wrote in message
news:%23zHWWj$mIHA.5704@.TK2MSFTNGP05.phx.gbl...
>I have a very simple insert statement into a table containing about a
>million rows.
> It runs a fraction of a second for about 1000 times and then takes about 5
> seconds, and then again fraction of a second for about 1000 inserts. There
> are 3 indexes on the table with fillfactor 70.
> Am i right by guessing that at the time when insert takes 5 seconds the
> page splits? Should i decrese fillfactor to 50? Or could it be something
> else?
> Thanks.
>
|||I see that I have a lot of checkpoints when running a trace.
This might be my problem.
How do I minimize checkpoints?
"Andrew J. Kelly" <sqlmvpnooospam@.shadhawk.com> wrote in message
news:u4zPw5AnIHA.4504@.TK2MSFTNGP06.phx.gbl...
>I agree with the others that it is probably not related to the fill factor.
>I would guess blocking first, disk queues next (checkpoints etc.) and maybe
>even file auto growths. Make sure there is plenty of free space in the data
>and log files.
> --
> Andrew J. Kelly SQL MVP
> Solid Quality Mentors
>
> "Michael Kansky" <mike@.zazasoftware.com> wrote in message
> news:%23zHWWj$mIHA.5704@.TK2MSFTNGP05.phx.gbl...
>
|||Checkpoints are there for a reason, which is to limit recovery times in
the event of a crash. There is a setting called the recovery interval that
can affect when checkpoints happen, but that is not the solution. If
checkpoints hinder activity that much, you do not have a proper disk
configuration to handle the load. It sounds like you probably have your tran
log files on the same drive array as the data files. To deal with lots of
writes it is imperative that you separate the logs from the data files onto
different physical (not just logical) arrays. Also consider adding more
write cache, and check the read/write ratio of the disk controller. If it is
not 100% write-back, change it and you will most likely see improvements
with checkpoints.
Andrew J. Kelly SQL MVP
Solid Quality Mentors
"Michael Kansky" <mike@.zazasoftware.com> wrote in message
news:uo901MBnIHA.4712@.TK2MSFTNGP04.phx.gbl...
>I see that i have a lot of checkpoints by running a trace.
> This might be my problem
> How do i minimize checkpoints?
> "Andrew J. Kelly" <sqlmvpnooospam@.shadhawk.com> wrote in message
> news:u4zPw5AnIHA.4504@.TK2MSFTNGP06.phx.gbl...
>
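To see the recovery-interval setting Andrew mentions, and whether the log really shares a drive with the data, a sketch (assumes sysadmin rights):

```sql
-- Current recovery interval; 0 is the default (~1 minute recovery target).
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'recovery interval (min)';

-- Physical placement of data vs. log files for the current database:
-- if ROWS and LOG files share a drive, they share the same spindles.
SELECT name, type_desc, physical_name
FROM sys.master_files
WHERE database_id = DB_ID();
```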


Wednesday, March 21, 2012

Files not filing with data

We have a quad-processor SQL Server that runs OLTP transactions at a rate of
hundreds per second (read & write).
We used to have all the tables on 1 file but started to notice high
contention on this file. We added 3 more files to match the processor
number. The problem is that the 3 additional files are not filling with
data. Does anyone know why this happens, or can anyone recommend a fix?
will
Hi
Since all your tables are in the 1st file, SQL Server will continue to use
the 1st file until it is full. It does not balance over the files; you need
to specifically move a table or index onto a filegroup for it to be used
immediately.
Unless each file is on a separate disk system (or LUN on a SAN), adding
files will not help, as the same IO contention will continue to exist.
Regards
Mike Epprecht, Microsoft SQL Server MVP
Zurich, Switzerland
IM: mike@.epprecht.net
MVP Program: http://www.microsoft.com/mvp
Blog: http://www.msmvps.com/epprecht/
"we7313" <we7313@.discussions.microsoft.com> wrote in message
news:AD68081F-08DD-43E6-A3C7-D5F186C1D561@.microsoft.com...
> We have a quad sql server that runs OLTP transactions at the rate of
> 100's per second (read & Write).
> We used to have all the tables on 1 file but started to notice high
> contention on this file. We added 3 more files to match the processor
> number. The problem is that the 3 additional files are not filling with
> data. Does anyone know why this happens or can reccommend a fix?
> --
> will
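The explicit move Mike describes is usually done by rebuilding the table's clustered index on the target filegroup. A sketch; dbo.MyTable, its Id column, and the index name are placeholders (only the [Avail] filegroup name comes from this thread):

```sql
-- Rebuilding the clustered index ON a filegroup moves the table's data
-- pages there; subsequent inserts are then spread over that filegroup's
-- files by proportional fill.
CREATE CLUSTERED INDEX IX_MyTable_Id
    ON dbo.MyTable (Id)
    WITH DROP_EXISTING
    ON [Avail];
```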
|||Hi Will
Did you add the new files to the same file group or did you create a new
group?
HTH
Kalen Delaney, SQL Server MVP
www.solidqualitylearning.com
"we7313" <we7313@.discussions.microsoft.com> wrote in message
news:AD68081F-08DD-43E6-A3C7-D5F186C1D561@.microsoft.com...
> We have a quad sql server that runs OLTP transactions at the rate of
> 100's per second (read & Write).
> We used to have all the tables on 1 file but started to notice high
> contention on this file. We added 3 more files to match the processor
> number. The problem is that the 3 additional files are not filling with
> data. Does anyone know why this happens or can reccommend a fix?
> --
> will
>
|||Let me correct myself:
We have a filegroup called 'Avail'.
In that filegroup we had all the tables running on 1 file.
We added 3 additional files to that filegroup and noticed they were not
filling with data. Has anyone seen this before?
will
"Kalen Delaney" wrote:

> Hi Will
> Did you add the new files to the same file group or did you create a new
> group?
> --
> HTH
> Kalen Delaney, SQL Server MVP
> www.solidqualitylearning.com
>
> "we7313" <we7313@.discussions.microsoft.com> wrote in message
> news:AD68081F-08DD-43E6-A3C7-D5F186C1D561@.microsoft.com...
>
>
|||If a table is created on a filegroup, and that filegroup has multiple
files, all the files should be used as more data is inserted into the table.
Are you seeing that existing tables do not seem to use the new files? How
are you determining that?
Can you try creating a new table on the filegroup and see if its data is
spread around?
HTH
Kalen Delaney, SQL Server MVP
www.solidqualitylearning.com
"we7313" <we7313@.discussions.microsoft.com> wrote in message
news:286B7811-24A5-4323-8A4F-52384C44C3E3@.microsoft.com...
> Let me correct myself:
> We have a file group called 'Avail'.
> In That file group we had all the tables running on 1 file.
> We added 3 additional files to that filegroup and noticed they were not
> filling with data. Has anyone seen this before?
> --
> will
>
> "Kalen Delaney" wrote:
>
|||Yes, the existing table is not using the new files in the filegroup.
If I go into Enterprise Manager / View / Taskpad I can see how big the data
files are and how much data is actually in them. What I'm seeing is that 99%
of the data continues to go into the original file. I do see about 1% of the
data going to the other 3 files combined.
will
"Kalen Delaney" wrote:

> If a table is created on a filegroup, and that filegroup has multiple files,
> all the files should be used as more data is inserted into the table.
> Are you seeing that existing tables are not seeming to use the new files?
> How are you determining that?
> Can you try creating a new table on the filegroup and see if its data is
> spread around?
> --
> HTH
> Kalen Delaney, SQL Server MVP
> www.solidqualitylearning.com
>
> "we7313" <we7313@.discussions.microsoft.com> wrote in message
> news:286B7811-24A5-4323-8A4F-52384C44C3E3@.microsoft.com...
>
>
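To confirm the per-file distribution will is reading from Taskpad, the SQL 2000-era check is the (undocumented but widely used) DBCC SHOWFILESTATS. A sketch:

```sql
-- One row per data file: Fileid, FileGroup, TotalExtents, UsedExtents, Name.
-- UsedExtents * 64 / 1024 = used MB per file, so you can watch whether new
-- rows are spread by proportional fill or still landing in the first file.
DBCC SHOWFILESTATS;
```

Proportional fill weights new allocations by the free space in each file, so a nearly full first file plus three empty ones should favor the new files; if it doesn't, rebuilding the clustered index forces the redistribution.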

Wednesday, March 7, 2012

File System Task

hi friends

I placed a File System Task, which can copy files, move files, and so on.

The first time, the copy-file operation works fine; the second time, copying the file gives an error.

We need to check the condition: if the folder already contains the dest.txt file, we don't need to copy the file;

otherwise we need to copy it.

So I need a control for checking whether the folder contains the dest.txt file or not.

regards

koti

Hi Koti,

You need to check whether the file is already there or not. You can use File.Exists("dest.txt") = False; if it returns False then you can proceed with your copy process.

Atanu

|||

ok, but I have to check whether the directory on my desktop contains the file or not. The variable contains this path:

check_dir = C:\Documents and Settings\Koteswara.Chava\Desktop\check\one.txt

I wrote the code here; please take a look:

If System.IO.File.Exists(CStr(Dts.Variables("check_dir").Value)) Then

    MsgBox("YES, FILE EXISTS")

Else

    MsgBox("NO, FILE DOES NOT EXIST")

End If

I am getting a "precompiled binary script does not exist" error, brother.

regards

koti