
Monday, March 26, 2012

Fillfactor

I have a very simple insert statement into a table containing about a
million rows.
It runs in a fraction of a second about 1000 times, then takes about 5
seconds, then runs in a fraction of a second again for about 1000 inserts. There
are 3 indexes on the table with fillfactor 70.
Am I right in guessing that a page splits at the time the insert takes 5
seconds? Should I decrease the fillfactor to 50? Or could it be something else?
Thanks.
Perhaps there are other possible sources, but it would more likely be your
log files or data files growing. I suppose the quick and dirty check is to
simply monitor both files before and after to see if growth occurred.
Autogrowth is evil in my mind.
A single page split takes an 8K allocated page and splits it across two
locations. While you don't want this to happen a ton, this is not a task
that would take 5 full seconds, or at least the stall would not be after
record 997 was inserted. I suppose if the records you are inserting are all
being inserted into the SAME part(page) of the index, this could be
problematic.
If the index is related to your performance issues, you could remove the
index prior to the insert, insert, and then rebuild the indexes? This is an
expensive operation too though.
Mark
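A quick way to check Mark's autogrowth theory is to compare file size and space used before and after a slow insert. A hedged sketch (sys.database_files requires SQL Server 2005 or later; on SQL Server 2000, query sysfiles instead):

```sql
-- Compare these numbers before and after the 5-second insert.
-- If size_mb jumps, an autogrow event caused the stall.
SELECT name,
       size / 128.0                            AS size_mb,   -- pages -> MB
       FILEPROPERTY(name, 'SpaceUsed') / 128.0 AS used_mb,
       growth,                                 -- growth increment
       is_percent_growth
FROM sys.database_files;
```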
1) You can monitor page splits using a PerfMon counter. I doubt very much
this is the issue.
2) The issue could be log flushes, checkpoints, blocking, or something random
hitting the server hard. Examine waiting tasks (not sure which version you
are on, so I can't specify exactly how to do that) when the delay occurs. Also
run sp_who2 active to check for blocking.
Kevin G. Boles
Indicium Resources, Inc.
SQL Server MVP
kgboles a earthlink dt net
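Kevin's two checks can also be run from a query window rather than PerfMon; a sketch, assuming SQL Server 2005 or later for the DMV (the counter is cumulative, so sample it twice around the slow insert and take the difference):

```sql
-- Page Splits/sec is a cumulative counter; diff two samples.
SELECT cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name = 'Page Splits/sec';

-- Check for blocking while the delay is occurring.
EXEC sp_who2 'active';
```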
I agree with the others that it is probably not related to the fill factor.
I would guess blocking first, disk queues next (checkpoints etc.) and maybe
even file auto growths. Make sure there is plenty of free space in the data
and log files.
Andrew J. Kelly SQL MVP
Solid Quality Mentors
I see that I have a lot of checkpoints by running a trace.
This might be my problem.
How do I minimize checkpoints?
Checkpoints are there for a reason, which is to limit recovery times in the
event of a crash. There is a setting called the recovery interval that can
affect when checkpoints happen but that is not the solution. If checkpoints
hinder the activity that much you do not have proper disk configurations to
handle the load. It sounds like you probably have your tran log files on the
same drive array as the data files. To deal with lots of writes it is
imperative you separate the logs from the data files onto different physical
(not just logical) arrays. Also consider adding more write cache and check
the read / write ratio of the disk controller. If it is not 100% write back
then change it and you will most likely see improvements with checkpoints.
Andrew J. Kelly SQL MVP
Solid Quality Mentors
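Whether checkpoints and the log share one overloaded array usually shows up in the wait statistics. A sketch, assuming SQL Server 2005 or later for the DMV (WRITELOG waits point at log I/O; PAGEIOLATCH and ASYNC_IO_COMPLETION at data-file I/O):

```sql
-- I/O-related waits dominate when the disk subsystem can't keep up
-- with checkpoint and log-flush traffic.
SELECT wait_type, wait_time_ms, waiting_tasks_count
FROM sys.dm_os_wait_stats
WHERE wait_type IN ('WRITELOG', 'PAGEIOLATCH_EX', 'ASYNC_IO_COMPLETION')
ORDER BY wait_time_ms DESC;
```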


Wednesday, March 21, 2012

Files not filling with data

We have a quad sql server that runs OLTP transactions at the rate of
100's per second (read & Write).
We used to have all the tables on 1 file but started to notice high
contention on this file. We added 3 more files to match the processor
number. The problem is that the 3 additional files are not filling with
data. Does anyone know why this happens or can recommend a fix?
will
Hi
Since all your tables are in the 1st file group, SQL Server will continue to
use the 1st file group until it is full.
It does not balance over the files. You need to specifically move a table or
Index onto a file group for it to be used immediately.
Unless each file group is on a separate disk system (or LUN on a SAN), it
will not help adding more files, as the same IO contention continues to
exist.
Regards
Mike Epprecht, Microsoft SQL Server MVP
Zurich, Switzerland
IM: mike@.epprecht.net
MVP Program: http://www.microsoft.com/mvp
Blog: http://www.msmvps.com/epprecht/
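Mike's suggestion to move an object onto a specific filegroup can be done by rebuilding its clustered index with DROP_EXISTING. A sketch only; the table, index, and filegroup names here are hypothetical:

```sql
-- Rebuilding the clustered index moves the whole table onto the
-- target filegroup (hypothetical names throughout).
CREATE CLUSTERED INDEX ix_orders_id
ON dbo.orders (order_id)
WITH DROP_EXISTING
ON [Avail2];
```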
Hi Will
Did you add the new files to the same file group or did you create a new
group?
HTH
Kalen Delaney, SQL Server MVP
www.solidqualitylearning.com
Let me correct myself:
We have a file group called 'Avail'.
In that file group we had all the tables running on 1 file.
We added 3 additional files to that filegroup and noticed they were not
filling with data. Has anyone seen this before?
will
If a table is created on a filegroup, and that filegroup has multiple files,
all the files should be used as more data is inserted into the table.
Are you seeing that existing tables are not seeming to use the new files?
How are you determining that?
Can you try creating a new table on the filegroup and see if its data is
spread around?
HTH
Kalen Delaney, SQL Server MVP
www.solidqualitylearning.com
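One way to measure per-file fill (and so answer Kalen's question about how it's being determined) is to compare each file's size with its space used; proportional fill favors the file with the most free space, so a freshly added, mostly empty file should attract new pages. A sketch using the SQL Server 2000-era catalog:

```sql
-- Per-file size vs. space actually used, in MB (1 page = 8 KB,
-- so 128 pages = 1 MB).
SELECT name,
       size / 128.0                            AS size_mb,
       FILEPROPERTY(name, 'SpaceUsed') / 128.0 AS used_mb
FROM sysfiles;
```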
Yes, the existing table is not using the new files in the file group.
If I go into Enterprise Manager / View / Taskpad I can see how big the data files
are and how much data is actually in them. What I'm seeing is that 99% of the
data continues to go into the original file. I do see about 1% of data going
to the other 3 files combined.
will
Friday, February 24, 2012

file or filegroup not online? "sysft_ftcat_documentindex"

Just upgraded from SQL 2000 to 2005 and I created a backup maintenance plan
but I am getting the following error everytime the maintenance plan runs:
Error number: -1073548784
Executing the query "BACKUP DATABASE [Clarke_MSCRM] TO DISK = N'C:\\Program
Files\\Microsoft SQL
Server\\MSSQL\\BACKUP\\Clarke_MSCRM\\Clarke_MSCRM_backup_200607270400.bak'
WITH NOFORMAT, NOINIT, NAME = N'Clarke_MSCRM_backup_20060727040005', SKIP,
REWIND, NOUNLOAD, STATS = 10
" failed with the following error: "The backup of the file or filegroup
"sysft_ftcat_documentindex" is not permitted because it is not online. BACKUP
can be performed by using the FILEGROUP or FILE clauses to restrict the
selection to include only online data.
BACKUP DATABASE is terminating abnormally.". Possible failure reasons:
Problems with the query, "ResultSet" property not set correctly, parameters
not set correctly, or connection not established correctly.
I'm not sure where to start looking.Try rebuilding or repopulating your full-text catalog. This is included in 2005 backup and is most
probably your problem.
--
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
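Tibor's suggestion can be tried directly in T-SQL on SQL Server 2005. The catalog name below is inferred from the "sysft_ftcat_documentindex" filegroup in the error message, so verify it in sys.fulltext_catalogs first; the backup path is hypothetical:

```sql
-- Confirm the catalog name, then rebuild it so its filegroup
-- comes back online.
SELECT name FROM sys.fulltext_catalogs;
ALTER FULLTEXT CATALOG documentindex REBUILD;

-- Alternatively, back up only the online filegroup(s) as the
-- error message suggests (path is hypothetical).
BACKUP DATABASE [Clarke_MSCRM]
FILEGROUP = 'PRIMARY'
TO DISK = N'C:\Backup\Clarke_MSCRM_primary.bak';
```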
I tried the Rebuild Index task and that didn't seem to work. How would I go
about repopulating?
>|||I'm not talking about regular indexes, I'm talking about full text indexes. I don't do much
fulltext, so try at microsoft.public.sqlserver.fulltext.
--
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
"Matt M" <MattM@.discussions.microsoft.com> wrote in message
news:82F9483C-93C4-48E9-B65B-D002AFF24700@.microsoft.com...
>I tried the Rebuild Index task and that didnt seem to work. How would I go
> about repopulating?
> "Tibor Karaszi" wrote:
>> Try rebuilding or repopulating your full-text catalog. This is included in 2005 backup and is
>> most
>> probably your problem.
>> --
>> Tibor Karaszi, SQL Server MVP
>> http://www.karaszi.com/sqlserver/default.asp
>> http://www.solidqualitylearning.com/
>>
>> "Matt M" <MattM@.discussions.microsoft.com> wrote in message
>> news:0222672A-806B-4EA0-BC3C-957050488734@.microsoft.com...
>> > Just upgraded from SQL 2000 to 2005 and I created a backup maintenance plan
>> > but I am getting the following error everytime the maintenance plan runs:
>> >
>> > Error number: -1073548784
>> >
>> > Executing the query "BACKUP DATABASE [Clarke_MSCRM] TO DISK = N'C:\\Program
>> > Files\\Microsoft SQL
>> > Server\\MSSQL\\BACKUP\\Clarke_MSCRM\\Clarke_MSCRM_backup_200607270400.bak'
>> > WITH NOFORMAT, NOINIT, NAME = N'Clarke_MSCRM_backup_20060727040005', SKIP,
>> > REWIND, NOUNLOAD, STATS = 10
>> > " failed with the following error: "The backup of the file or filegroup
>> > "sysft_ftcat_documentindex" is not permitted because it is not online. BACKUP
>> > can be performed by using the FILEGROUP or FILE clauses to restrict the
>> > selection to include only online data.
>> > BACKUP DATABASE is terminating abnormally.". Possible failure reasons:
>> > Problems with the query, "ResultSet" property not set correctly, parameters
>> > not set correctly, or connection not established correctly.
>> >
>> >
>> > I'm not sure where to start looking.
>> >
>> >
>>|||I realized I misunderstood you after I posted. I looked into it and found
where to rebuild the full-text catalog, and it is running on a test box right now (it's a
slow machine, so it's taking a long time). I will try it in production after hours.
"Tibor Karaszi" wrote:
> I'm not talking about regular indexes, I'm talking about full text indexes. I don't do much
> fulltext, so try at microsoft.public.sqlserver.fulltext.
> --
> Tibor Karaszi, SQL Server MVP
> http://www.karaszi.com/sqlserver/default.asp
> http://www.solidqualitylearning.com/
>
> "Matt M" <MattM@.discussions.microsoft.com> wrote in message
> news:82F9483C-93C4-48E9-B65B-D002AFF24700@.microsoft.com...
> >I tried the Rebuild Index task and that didnt seem to work. How would I go
> > about repopulating?
> >
> > "Tibor Karaszi" wrote:
> >
> >> Try rebuilding or repopulating your full-text catalog. This is included in 2005 backup and is
> >> most
> >> probably your problem.
> >>
> >> --
> >> Tibor Karaszi, SQL Server MVP
> >> http://www.karaszi.com/sqlserver/default.asp
> >> http://www.solidqualitylearning.com/
> >>
> >>
> >> "Matt M" <MattM@.discussions.microsoft.com> wrote in message
> >> news:0222672A-806B-4EA0-BC3C-957050488734@.microsoft.com...
> >> > Just upgraded from SQL 2000 to 2005 and I created a backup maintenance plan
> >> > but I am getting the following error everytime the maintenance plan runs:
> >> >
> >> > Error number: -1073548784
> >> >
> >> > Executing the query "BACKUP DATABASE [Clarke_MSCRM] TO DISK = N'C:\\Program
> >> > Files\\Microsoft SQL
> >> > Server\\MSSQL\\BACKUP\\Clarke_MSCRM\\Clarke_MSCRM_backup_200607270400.bak'
> >> > WITH NOFORMAT, NOINIT, NAME = N'Clarke_MSCRM_backup_20060727040005', SKIP,
> >> > REWIND, NOUNLOAD, STATS = 10
> >> > " failed with the following error: "The backup of the file or filegroup
> >> > "sysft_ftcat_documentindex" is not permitted because it is not online. BACKUP
> >> > can be performed by using the FILEGROUP or FILE clauses to restrict the
> >> > selection to include only online data.
> >> > BACKUP DATABASE is terminating abnormally.". Possible failure reasons:
> >> > Problems with the query, "ResultSet" property not set correctly, parameters
> >> > not set correctly, or connection not established correctly.
> >> >
> >> >
> >> > I'm not sure where to start looking.
> >> >
> >> >
> >>
> >>
>
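For anyone hitting the same error: in SQL Server 2005 a full-text catalog can be rebuilt from T-SQL, and the backup can also be restricted to online filegroups, as the error message itself suggests. A sketch only - the catalog name comes from the error text above, and the file path and filegroup name are hypothetical:

```sql
-- Rebuild the full-text catalog named in the error message (SQL 2005 syntax)
ALTER FULLTEXT CATALOG [ftcat_documentindex] REBUILD

-- Alternatively, back up only the online filegroup(s), per the error's hint
BACKUP DATABASE [Clarke_MSCRM]
    FILEGROUP = 'PRIMARY'
    TO DISK = N'C:\Backup\Clarke_MSCRM_primary.bak'
    WITH INIT, STATS = 10
```

The filegroup-restricted backup is a workaround; rebuilding the catalog so it comes online is the actual fix.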

file names in backup and restore output

I have an isql script that runs and does a full dump of the database. I
use -i to indicate the input file and -o for the output file, and
everything works fine. But when I look at the output file, I was
expecting to see the name of the dump file that I just created (like in
Sybase), and it is not there. The same is true with the load
process: the output from the restore makes no mention of the file
name that was used as input. Since these run at night, it is
important to be able to look at the output and know that the right file was
used. I'm hoping there is a parameter I need to set in order for the
file info to be included in the output?
thanks,
--
John|||What version of SQL Server? ISQL is a really old tool which was superseded by OSQL.EXE in 7.0 and
now SQLCMD.EXE in 2005. Anyhow, check the -e switch of ISQL.
--
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
Blog: http://solidqualitylearning.com/blogs/tibor/
"John" <John@.discussions.microsoft.com> wrote in message
news:83513629-DBFC-4F69-82FD-5E62F0659BD4@.microsoft.com...
>I have an isql script that runs and does a full dump of the database - I
> use a -i to indicate the input file and a -o for the output file -
> everything works fine - but when I look at the output file - I was
> expecting to see the name of the dump file that I just created ( like in
> Sybase ) but it is not there - and the same is true with the load
> process - in the output from the restore there is no mention of the file
> name that was used as input for the restore - since these run at night it is
> important to be able to look at the output and know if the right file was
> used - I'm hoping there is a parameter that I need to set in order for the
> file info. to be included in the output - ?
> thanks,
> --
> John|||This may help - it's a bit rough but you can tidy it up:
-----
CREATE procedure [sp_dba_backupinfo]
@DBName varchar(100) = '%'
AS
IF @DBName <> '%'
set @DBName = @DBName + '%'
select
substring(database_name,1,24) as DATABASE_NAME,
right(backup_start_date,19) as BACKUP_START,
right(backup_finish_date,19) as BACKUP_FINISH,
--round(backup_size/1024/1024,0) as MB,
str(backup_size/1024/1024,6,0) as MB,
position,
substring(logical_device_name,1,24) as LOGICAL_DEVICE,
substring(physical_device_name,1,100) as PHYSICAL_DEVICE
from msdb..backupmediafamily a, msdb..backupset b
where a.media_set_id = b.media_set_id
--and database_name='lamda1p_interfaces_store'
and database_name like @DBName
--database_name in ('lamdace','metadatace','security')
--and backup_start_date > '2003-09-01 01:00:00.000'
--and logical_device_name = 'lamda1p_interfaces_store'
--and physical_device_name not like '\\.%'
--and physical_device_name like 'Ba%'
--order by database_name,backup_start_date desc
order by database_name asc,backup_start_date desc
-----
CREATE procedure [sp_dba_restoreinfo]
@DBName varchar(100) = '%'
AS
IF @DBName <> '%'
set @DBName = @DBName + '%'
--select @DBName
SET NOCOUNT ON
/*
select --xtype,
'msdb.dbo.'+name from msdb.dbo.sysobjects
where
name like '%backup%' and xtype in ('U','P')
or name like '%restore%' and xtype in ('U','P')
order by 1 desc
*/
SELECT
RH.[restore_type],
CONVERT(char(20),RH.[restore_date],120) [restore_date],
CONVERT(char(20),BS.[database_creation_date],120) [database_creation_date],
CONVERT(char(20),BS.[backup_start_date],120) [backup_start_date],
CONVERT(char(20),BS.[backup_finish_date],120) [backup_finish_date],
right(BS.[machine_name],12) [machine_name],
--right(BS.[server_name],12) [server_name],
right(BS.[database_name],35) [database_name],
right(RH.[destination_database_name],35) [destination_database_name],
str(BS.backup_size/1024/1024,6,0) as MB,
--RF.[restore_history_id],
--RF.[file_number],
--RF.[destination_phys_drive],
--RH.[restore_history_id],
--RH.[user_name],
--RH.[backup_set_id],
--RH.[replace],
--RH.[recovery],
--RH.[restart],
--RH.[stop_at],
--RH.[device_count],
--RFG.[restore_history_id],
--RFG.[filegroup_name],
--BS.[backup_set_id],
--BS.[backup_set_uuid],
--BS.[media_set_id],
--BS.[first_family_number],
--BS.[first_media_number],
--BS.[last_family_number],
--BS.[last_media_number],
--BS.[catalog_family_number],
--BS.[catalog_media_number],
--BS.[position],
--BS.[expiration_date],
--BS.[software_vendor_id],
--BS.[name],
--BS.[description],
--BS.[user_name],
--BS.[software_major_version],
--BS.[software_minor_version],
--BS.[software_build_version],
--BS.[time_zone],
--BS.[mtf_minor_version],
--BS.[first_lsn],
--BS.[last_lsn],
--BS.[checkpoint_lsn],
--BS.[database_backup_lsn],
--BS.[type],
--BS.[sort_order],
--BS.[code_page],
--BS.[compatibility_level],
--BS.[database_version],
right(RF.[destination_phys_name],45) [destination_phys_name],
'End' as [End]
from
msdb.dbo.restorefile RF,
msdb.dbo.restorehistory RH,
msdb.dbo.restorefilegroup RFG,
msdb.dbo.backupset BS
--msdb.dbo.backupmediaset
--msdb.dbo.backupmediafamily
--msdb.dbo.backupfile
where RF.restore_history_id = RH.restore_history_id
and RH.restore_history_id = RFG.restore_history_id
and RH.backup_set_id = BS.backup_set_id
--and RH.restore_date > getdate() -31
and RH.destination_database_name like @DBName
order by restore_date desc
-----
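A quick usage sketch for the two procedures above (the 'Clarke' database name is just an example; note they match on a prefix, since the parameter gets '%' appended):

```sql
-- All backups recorded in msdb
EXEC sp_dba_backupinfo

-- Backups and restores for databases whose names start with 'Clarke'
EXEC sp_dba_backupinfo 'Clarke'
EXEC sp_dba_restoreinfo 'Clarke'
```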
"Tibor Karaszi" wrote:
> What version of SQL Server? ISQL is a really old tool which was superseded by OSQL.EXE in 7.0 and
> now SQLCMD.EXE in 2005. Anyhow, check the -e switch of ISQL.
> --
> Tibor Karaszi, SQL Server MVP
> http://www.karaszi.com/sqlserver/default.asp
> http://www.solidqualitylearning.com/
> Blog: http://solidqualitylearning.com/blogs/tibor/
>
> "John" <John@.discussions.microsoft.com> wrote in message
> news:83513629-DBFC-4F69-82FD-5E62F0659BD4@.microsoft.com...
> >I have an isql script that runs and does a full dump of the database - I
> > use a -i to indicate the input file and a -o for the output file -
> > everything works fine - but when I look at the output file - I was
> > expecting to see the name of the dump file that I just created ( like in
> > Sybase ) but it is not there - and the same is true with the load
> > process - in the output from the restore there is no mention of the file
> > name that was used as input for the restore - since these run at night it is
> > important to be able to look at the output and know if the right file was
> > used - I'm hoping there is a parameter that I need to set in order for the
> > file info. to be included in the output - ?
> >
> > thanks,
> >
> > --
> > John
>
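Another option, since the -o file captures whatever the script prints: have the backup script itself announce the file name with PRINT. A minimal sketch (the database name and path are hypothetical):

```sql
DECLARE @file varchar(260)
SET @file = 'C:\backups\mydb_full.bak'
PRINT 'Backing up to: ' + @file   -- this line lands in the -o output file
BACKUP DATABASE mydb TO DISK = @file WITH INIT, STATS = 10
```

The same PRINT trick works on the restore side to record which file was loaded.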
