Monday, March 19, 2012
Filegroups and multi-processor threading improvements
I have read extensively on filegroups vs. RAID 5, RAID 10, etc. It seems like
parallel I/O is a moot issue if you set up a proper disk subsystem, at least
for small to medium-sized databases.
But it seems like everyone is skimming over the SMP benefits of filegroups,
if they exist. I came across one other posting referring to the number of
threads that SQL 2000 will use per filegroup.
Let me see if I understand this correctly: SQL 2000 will issue one thread
per filegroup for reads and writes. Thus, if you have one large database on
a RAID 10 setup, I/O will be limited to one CPU thread. Granted, disk I/O
should thread across both CPUs at the RAID controller level. If you create
two filegroups for the database, then SQL 2000 will dish out two
threads for disk I/O, thus using two of your CPUs. These will both be
issued to the RAID controller, which should thread out your disk I/O once
again. So, if my understanding is correct, will there be a performance gain
from creating one filegroup per CPU?
In my case, we have two sets of disks, RAID 10 and RAID 0, and we are
already optimizing our disk I/O with tempdb and the logs on the RAID 0. I know,
RAID 0 for logs, but this is a read-only SQL Server, so the logs are irrelevant.
This also keeps us from doing any filegroup optimizations such as placing
the indexes on another disk subsystem. So the only performance benefit we
are looking at is potentially better SMP handling of disk I/O.
Thank You,
Kevin Hammond

Hi Kevin,
I'm afraid your understanding is completely incorrect.
You do not have to have more files to get better I/O performance, except when
using backup/restore. Multiple files are generally used for easier
management.
The access methods code is totally oblivious to the number of files in all
respects. Likewise, lazy-writing and checkpointing are both indifferent to
the number of files. SQL Server can have hundreds of outstanding reads and/or
writes per file, not just one. More files do not equal more I/O.
Please let me know if you have any further questions.
Thanks and regards.
--
Paul Randal
Dev Lead, Microsoft SQL Server Storage Engine
This posting is provided "AS IS" with no warranties, and confers no rights.
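Paul's point that multiple files are a manageability tool, not an I/O-parallelism knob, can be illustrated with a short sketch. The database, filegroup, and file names below are hypothetical:

```sql
-- Hypothetical example: adding a second data file in a user-defined
-- filegroup. This helps with placement and administration; it does not
-- change how many threads SQL Server uses for I/O.
ALTER DATABASE SalesDB ADD FILEGROUP Archive;

ALTER DATABASE SalesDB
ADD FILE (
    NAME = SalesDB_Archive1,
    FILENAME = 'D:\Data\SalesDB_Archive1.ndf',
    SIZE = 512MB,
    FILEGROWTH = 128MB
) TO FILEGROUP Archive;
```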
Monday, March 12, 2012
Filegroup full during DBCC
Running DBCC CHECKDB repair, I wound up filling up the disk. I was able
to free up some space, and the files are set to grow automatically. The
DBCC is still running - will it eventually bag out, or is it smart
enough to continue now that it has more space?

Check to see if it is hung by using sp_who2. Watch to see if it consumes CPU
and disk I/O between consecutive runs of sp_who2. If CPU or I/O is not
increasing, you should kill it.
--
Hilary Cotter
Director of Text Mining and Database Strategy
RelevantNOISE.Com - Dedicated to mining blogs for business intelligence.
This posting is my own and doesn't necessarily represent RelevantNoise's
positions, strategies or opinions.
Looking for a SQL Server replication book?
http://www.nwsu.com/0974973602.html
Looking for a FAQ on Indexing Services/SQL FTS
http://www.indexserverfaq.com
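Hilary's check might look like the following (the session ID 57 is hypothetical; compare the CPUTime and DiskIO columns across the two runs):

```sql
-- Snapshot the session running DBCC CHECKDB (SPID 57 is hypothetical).
EXEC sp_who2 57;
-- Wait 30 seconds, then snapshot again.
WAITFOR DELAY '00:00:30';
EXEC sp_who2 57;
-- If CPUTime and DiskIO have not increased, the session is likely hung:
-- KILL 57;
```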
Was it tempdb that was full? If so, you can run DBCC CHECKDB with the
ESTIMATEONLY option to see how much space you may require first.
--
Andrew J. Kelly SQL MVP
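The option Andrew mentions can be run like this (database name hypothetical); it reports the tempdb space CHECKDB would need without performing the actual checks:

```sql
-- Estimate the tempdb space DBCC CHECKDB would require,
-- without running the full consistency check.
DBCC CHECKDB (N'SalesDB') WITH ESTIMATEONLY;
```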
Thanks, Hilary and Andrew, for the responses - the DBCC did wind up
finishing and performing the repairs needed without any intervention
once I freed up space. Dumb move on my part not running
ESTIMATEONLY first; lesson learned there. And thanks for the tip-off
on sp_who2 - I can see that coming in real handy. I've been administering
SQL Server for about 8 years, but it's almost a side job, and my databases have
never really run into problems before, so now it's time to ratchet up
the diligence a bit. Much obliged.
to free up some space, the files are set to grow automatically. The
DBCC is still running - will it eventually bag out, or is it smart
enough to continue now that it has more space?check to see if it is hung by using sp_who2. Watch to see if it consumes cpu
and disk io between consecutive runs of sp_who2. If cpu or io is not
increasing you should kill it.
Hilary Cotter
Director of Text Mining and Database Strategy
RelevantNOISE.Com - Dedicated to mining blogs for business intelligence.
This posting is my own and doesn't necessarily represent RelevantNoise's
positions, strategies or opinions.
Looking for a SQL Server replication book?
http://www.nwsu.com/0974973602.html
Looking for a FAQ on Indexing Services/SQL FTS
http://www.indexserverfaq.com
<mdevenney1@.gmail.com> wrote in message
news:1163194366.339470.237560@.h54g2000cwb.googlegroups.com...
> running DBCC CHECKDB repair and wound up filling up the disk - was able
> to free up some space, the files are set to grow automatically. The
> DBCC is still running - will it eventually bag out, or is it smart
> enough to continue now that it has more space?
>|||Was it Tempdb that was full? If so you can run DBCC CHECKDB with the
Estimate_only option to see how much space you may require first.
Andrew J. Kelly SQL MVP
<mdevenney1@.gmail.com> wrote in message
news:1163194366.339470.237560@.h54g2000cwb.googlegroups.com...
> running DBCC CHECKDB repair and wound up filling up the disk - was able
> to free up some space, the files are set to grow automatically. The
> DBCC is still running - will it eventually bag out, or is it smart
> enough to continue now that it has more space?
>|||Andrew J. Kelly wrote:[vbcol=seagreen]
> Was it Tempdb that was full? If so you can run DBCC CHECKDB with the
> Estimate_only option to see how much space you may require first.
> --
> Andrew J. Kelly SQL MVP
> <mdevenney1@.gmail.com> wrote in message
> news:1163194366.339470.237560@.h54g2000cwb.googlegroups.com...
thanks Hilary and Andrew for the responses - the DBCC did wind up
finishing and performing the repairs needed without any intervention
once I freed up space. Dumb move on my part not running the
Estimate_only first; lesson learned there. And thanks for the tip-off
on sp_who2 - I can see that coming in real handy. I've been admin'ing
SQL Srvr for about 8 years but it's almost a side job, and my db's have
never really run into problems before so now it's time to ratchet up
the diligence a bit. much obliged.
filegroup full during DBCC
running DBCC CHECKDB repair and wound up filling up the disk - was able
to free up some space, the files are set to grow automatically. The
DBCC is still running - will it eventually bag out, or is it smart
enough to continue now that it has more space?check to see if it is hung by using sp_who2. Watch to see if it consumes cpu
and disk io between consecutive runs of sp_who2. If cpu or io is not
increasing you should kill it.
--
Hilary Cotter
Director of Text Mining and Database Strategy
RelevantNOISE.Com - Dedicated to mining blogs for business intelligence.
This posting is my own and doesn't necessarily represent RelevantNoise's
positions, strategies or opinions.
Looking for a SQL Server replication book?
http://www.nwsu.com/0974973602.html
Looking for a FAQ on Indexing Services/SQL FTS
http://www.indexserverfaq.com
<mdevenney1@.gmail.com> wrote in message
news:1163194366.339470.237560@.h54g2000cwb.googlegroups.com...
> running DBCC CHECKDB repair and wound up filling up the disk - was able
> to free up some space, the files are set to grow automatically. The
> DBCC is still running - will it eventually bag out, or is it smart
> enough to continue now that it has more space?
>|||Was it Tempdb that was full? If so you can run DBCC CHECKDB with the
Estimate_only option to see how much space you may require first.
--
Andrew J. Kelly SQL MVP
<mdevenney1@.gmail.com> wrote in message
news:1163194366.339470.237560@.h54g2000cwb.googlegroups.com...
> running DBCC CHECKDB repair and wound up filling up the disk - was able
> to free up some space, the files are set to grow automatically. The
> DBCC is still running - will it eventually bag out, or is it smart
> enough to continue now that it has more space?
>|||Andrew J. Kelly wrote:
> Was it Tempdb that was full? If so you can run DBCC CHECKDB with the
> Estimate_only option to see how much space you may require first.
> --
> Andrew J. Kelly SQL MVP
> <mdevenney1@.gmail.com> wrote in message
> news:1163194366.339470.237560@.h54g2000cwb.googlegroups.com...
> > running DBCC CHECKDB repair and wound up filling up the disk - was able
> > to free up some space, the files are set to grow automatically. The
> > DBCC is still running - will it eventually bag out, or is it smart
> > enough to continue now that it has more space?
> >
thanks Hilary and Andrew for the responses - the DBCC did wind up
finishing and performing the repairs needed without any intervention
once I freed up space. Dumb move on my part not running the
Estimate_only first; lesson learned there. And thanks for the tip-off
on sp_who2 - I can see that coming in real handy. I've been admin'ing
SQL Srvr for about 8 years but it's almost a side job, and my db's have
never really run into problems before so now it's time to ratchet up
the diligence a bit. much obliged.
to free up some space, the files are set to grow automatically. The
DBCC is still running - will it eventually bag out, or is it smart
enough to continue now that it has more space?check to see if it is hung by using sp_who2. Watch to see if it consumes cpu
and disk io between consecutive runs of sp_who2. If cpu or io is not
increasing you should kill it.
--
Hilary Cotter
Director of Text Mining and Database Strategy
RelevantNOISE.Com - Dedicated to mining blogs for business intelligence.
This posting is my own and doesn't necessarily represent RelevantNoise's
positions, strategies or opinions.
Looking for a SQL Server replication book?
http://www.nwsu.com/0974973602.html
Looking for a FAQ on Indexing Services/SQL FTS
http://www.indexserverfaq.com
<mdevenney1@.gmail.com> wrote in message
news:1163194366.339470.237560@.h54g2000cwb.googlegroups.com...
> running DBCC CHECKDB repair and wound up filling up the disk - was able
> to free up some space, the files are set to grow automatically. The
> DBCC is still running - will it eventually bag out, or is it smart
> enough to continue now that it has more space?
>|||Was it Tempdb that was full? If so you can run DBCC CHECKDB with the
Estimate_only option to see how much space you may require first.
--
Andrew J. Kelly SQL MVP
<mdevenney1@.gmail.com> wrote in message
news:1163194366.339470.237560@.h54g2000cwb.googlegroups.com...
> running DBCC CHECKDB repair and wound up filling up the disk - was able
> to free up some space, the files are set to grow automatically. The
> DBCC is still running - will it eventually bag out, or is it smart
> enough to continue now that it has more space?
>|||Andrew J. Kelly wrote:
> Was it Tempdb that was full? If so you can run DBCC CHECKDB with the
> Estimate_only option to see how much space you may require first.
> --
> Andrew J. Kelly SQL MVP
> <mdevenney1@.gmail.com> wrote in message
> news:1163194366.339470.237560@.h54g2000cwb.googlegroups.com...
> > running DBCC CHECKDB repair and wound up filling up the disk - was able
> > to free up some space, the files are set to grow automatically. The
> > DBCC is still running - will it eventually bag out, or is it smart
> > enough to continue now that it has more space?
> >
thanks Hilary and Andrew for the responses - the DBCC did wind up
finishing and performing the repairs needed without any intervention
once I freed up space. Dumb move on my part not running the
Estimate_only first; lesson learned there. And thanks for the tip-off
on sp_who2 - I can see that coming in real handy. I've been admin'ing
SQL Srvr for about 8 years but it's almost a side job, and my db's have
never really run into problems before so now it's time to ratchet up
the diligence a bit. much obliged.
filegroup full during DBCC
running DBCC CHECKDB repair and wound up filling up the disk - was able
to free up some space, the files are set to grow automatically. The
DBCC is still running - will it eventually bag out, or is it smart
enough to continue now that it has more space?
check to see if it is hung by using sp_who2. Watch to see if it consumes cpu
and disk io between consecutive runs of sp_who2. If cpu or io is not
increasing you should kill it.
Hilary Cotter
Director of Text Mining and Database Strategy
RelevantNOISE.Com - Dedicated to mining blogs for business intelligence.
This posting is my own and doesn't necessarily represent RelevantNoise's
positions, strategies or opinions.
Looking for a SQL Server replication book?
http://www.nwsu.com/0974973602.html
Looking for a FAQ on Indexing Services/SQL FTS
http://www.indexserverfaq.com
<mdevenney1@.gmail.com> wrote in message
news:1163194366.339470.237560@.h54g2000cwb.googlegr oups.com...
> running DBCC CHECKDB repair and wound up filling up the disk - was able
> to free up some space, the files are set to grow automatically. The
> DBCC is still running - will it eventually bag out, or is it smart
> enough to continue now that it has more space?
>
|||Was it Tempdb that was full? If so you can run DBCC CHECKDB with the
Estimate_only option to see how much space you may require first.
Andrew J. Kelly SQL MVP
<mdevenney1@.gmail.com> wrote in message
news:1163194366.339470.237560@.h54g2000cwb.googlegr oups.com...
> running DBCC CHECKDB repair and wound up filling up the disk - was able
> to free up some space, the files are set to grow automatically. The
> DBCC is still running - will it eventually bag out, or is it smart
> enough to continue now that it has more space?
>
|||Andrew J. Kelly wrote:
> Was it tempdb that was full? If so, you can run DBCC CHECKDB with the
> ESTIMATEONLY option to see how much space you may require first.
> --
> Andrew J. Kelly SQL MVP
> <mdevenney1@.gmail.com> wrote in message
> news:1163194366.339470.237560@.h54g2000cwb.googlegroups.com...
Thanks Hilary and Andrew for the responses - the DBCC did wind up
finishing and performing the needed repairs without any intervention
once I freed up space. Dumb move on my part not running
ESTIMATEONLY first; lesson learned there. And thanks for the tip-off
on sp_who2 - I can see that coming in real handy. I've been admin'ing
SQL Server for about 8 years, but it's almost a side job, and my DBs have
never really run into problems before, so now it's time to ratchet up
the diligence a bit. Much obliged.
Sunday, February 26, 2012
File Restore Problems
We have backed up a SQL 7.0 database to disk and are now
trying to restore it, using the WITH MOVE option, to a
different device and with a different name.
The restore starts and fails after a fairly small amount
of time with the following message:
fcb::ZeroFile(): GetOverLappedResult() failed with error 2
It appears to initially create the .mdf file but deletes
it after the above message is received.
We have searched the web and found very little about this
problem. I suspect physical disk problems. I can restore
other smaller databases to the same device using the same
method with no trouble - but anything bigger fails as well.
Has anybody had anything similar, and what did you do to
resolve the problem?
Thanks
David
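For reference, the kind of restore David describes looks roughly like this (all names and paths hypothetical; the logical file names in the backup can be listed first with RESTORE FILELISTONLY):

```sql
-- Restore the backup under a new database name, relocating the files.
RESTORE DATABASE NewName
FROM DISK = 'D:\Backups\OldName.bak'
WITH MOVE 'OldName_Data' TO 'E:\Data\NewName.mdf',
     MOVE 'OldName_Log'  TO 'E:\Logs\NewName.ldf';
```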
I think the first step is to check your hardware (HD and RAID controller, if
present) and then move on to other issues linked to MS SQL errors... but to
me it's 99% a hardware problem.
"David" <dave.whitehead@.cis.co.uk> wrote in message
news:dec601c43c15$89472f20$a401280a@.phx.gbl...
> We have backed up a SQL 7.0 database to disk and am now
> trying to restore it using the with move option to a
> different device and with a different name.
> The restore starts and fails after a fairly small amount
> of time with the following message:
> fcb::ZeroFile(): GetOverLappedResult() failed with error 2
> It appears to initially create the .mdf file but deletes
> it after the above message is received.
> We have searched the web and found very little about this
> problem. I suspect physical disk problems. I can restire
> other smaller databases to the same device using the same
> method with no trouble - but anything bigger fails also.
> Anybody had anything similar and what did you do to
> resolve the problem.
|||Try copying a large file (of a size similar to the database) to the destination drive using Windows Explorer and check what error you get. Also check whether any disk quota has been set.
File placement for optimal performance?
What gives the best performance for this configuration:
Files:
Data
Log
Indexes
tempdb
Disk:
A - RAID 10
B - RAID 10 (or should this be RAID 1?)
What's best?
A - Data and Indexes
B - tempdb and Log
Thanks.
Check these links:
http://www.microsoft.com/technet/prodtechnol/sql/bestpractice/storage-top-10.mspx
http://sql-server-performance.com/Community/forums/t/2337.aspx
Madhu
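As a sketch of the kind of placement being asked about (drive letters and all names hypothetical - one array holds data and indexes, the other holds the log; tempdb would be relocated separately via ALTER DATABASE tempdb ... MODIFY FILE):

```sql
-- Data and index filegroups on the A array, log on the B array.
CREATE DATABASE Sales
ON PRIMARY ( NAME = Sales_Data, FILENAME = 'A:\Data\Sales.mdf' ),
FILEGROUP Indexes ( NAME = Sales_Ix, FILENAME = 'A:\Data\Sales_Ix.ndf' )
LOG ON ( NAME = Sales_Log, FILENAME = 'B:\Log\Sales.ldf' );
```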
Sunday, February 19, 2012
file group question
Hi
I have 4 filegroups in SQL Server 2000: 2 for indexes and 2 for tables.
They are on 4 different disks with different disk controllers.
I do not have a backup of the database. Now I have a situation where I lost
one of the disks containing an index filegroup. I am running in FULL recovery mode.
I do not care if I lose the index filegroup, as that will only slow down the
database.
Is there a way to start the database without the index filegroup?
Can I take it offline or drop it?
Thanks
Mangesh
Mangesh Deshpande wrote:
> Hi
> I have 4 filegroups in sql server 2000. 2 indexes and 2 tables.
> They are 4 different disks and different disk controllers.
> I do not have a backup of the database. Now I have a situation where
> I lost one of the disk containing the index filegroup. I am running
> in FULL MODE.
> I do not care if I loose index file group as it will only slow down
> the database.
> Is there a way to start the database without the index filegroup?
> Can I take it offline or drop it?
> Thanks
> Mangesh
Were there any clustered indexes in the database? If so,
where did you put those tables - on one of the table filegroups or one of
the index filegroups? The table goes with the clustered index, so if
you built a clustered index on a table and put it on the index drive you
lost, you lost the table as well.
Many here are going to wonder:
Where is your redundancy on your drive subsystem?
Why no backups?
It's pretty risky leaving both out of the equation as you now know.
Going forward I might suggest using two mirrored sets. One for data and
one for temp db and log files.
David Gugick
Imceda Software
www.imceda.com
|||Thanks David. This is just a hypothetical case. For production we are using
RAID 5.
I am keeping the clustered indexes on my table filegroup, and it is not broken.
My nonclustered indexes are on a separate filegroup, and it is broken.
So I want a mechanism by which, even if the index filegroup's disk
crashes, my system stays up. In Oracle you can take an index tablespace
offline and the database continues to work fine.
I wanted to check whether we have such a mechanism in SQL Server 2000.
Thanks always
Mangesh
"David Gugick" wrote:
> Mangesh Deshpande wrote:
> Do you think there were any clustered indexes in the database. If so,
> where did you put those tables? On one of the table filegroups or one of
> the index file groups? The table goes with the clustered index, so if
> you built a clustered index on a table and put it on the index drive you
> lost, you lost the table as well.
> Many here are going to wonder:
> Where is your redundancy on your drive subsystem?
> Why no backups?
> It's pretty risky leaving both out of the equation as you now know.
> Going forward I might suggest using two mirrored sets. One for data and
> one for temp db and log files.
> --
> David Gugick
> Imceda Software
> www.imceda.com
>
|||Mangesh Deshpande wrote:
> Thanks David. This is just a hypothetical case. For production we are
> using RAIDa 5.
> I am keeping clustered index on the my table file group and it is not
> broken. My non clustered indexes are on separate file group and it is
> broken.
> So I want to have a mechanism by which even if the index file group
> disk crashes my system should be up. In oracle you can take index
> file system offline and allow the database to work fine.
> I wanted to check if we have a mechanism in SQL Server 2000.
> Thanks always
> Mangesh
But in production you are using RAID 5, which won't apply to your
hypothetical. I personally have no idea whether SQL Server can recover
from a lost drive with a lost filegroup. I assume there is a way,
whether with the help of MS PSS or some technique I don't know
about; I'm not sure. My only question here is: why is this hypothetical
even worth considering if your production environment does not resemble
what you describe?
David Gugick
Imceda Software
www.imceda.com
|||Thanks David for sharing the knowledge. I was thinking of adding the
filegroups on one of our other production NON-CRITICAL databases, which has no
RAID.
I was checking to see if I can gain any benefit, and I was just curious, as
it is standard practice in Oracle to create indexes on separate tablespaces.
Mangesh
"David Gugick" wrote:
> Mangesh Deshpande wrote:
> But in production you are using RAID 5 which won't apply to your
> hypothetical. I personally have no idea whether SQL Server can recover
> from a lost drive with a lost filegroup. I assume there is a way.
> Whether that's with the help of MS PSS or some technique I don;t know
> about I'm not sure. My only question here is why is this hypothetical
> even worth considering if your production environment does not resemble
> what you describe?
> --
> David Gugick
> Imceda Software
> www.imceda.com
>
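For reference, the layout Mangesh describes - the table (with its clustered index) on one filegroup and nonclustered indexes on another - is created like this (all object and filegroup names hypothetical):

```sql
-- The table follows its clustered index onto TableFG;
-- the nonclustered index lives on IndexFG.
CREATE CLUSTERED INDEX cix_Orders
    ON dbo.Orders (OrderID) ON TableFG;
CREATE NONCLUSTERED INDEX ix_Orders_Customer
    ON dbo.Orders (CustomerID) ON IndexFG;
```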
> even worth considering if your production environment does not resemble
> what you describe?
> --
> David Gugick
> Imceda Software
> www.imceda.com
>
file group question
Hi,
I have 4 filegroups in SQL Server 2000: 2 for indexes and 2 for tables.
They are on 4 different disks with different disk controllers.
I do not have a backup of the database. Now I have a situation where I lost
one of the disks containing an index filegroup. I am running in FULL
recovery mode.
I do not care if I lose the index filegroup, as that will only slow down the
database.
Is there a way to start the database without the index filegroup?
Can I take it offline or drop it?
Thanks,
Mangesh

Mangesh Deshpande wrote:
> Hi,
> I have 4 filegroups in SQL Server 2000: 2 for indexes and 2 for tables.
> They are on 4 different disks with different disk controllers.
> I do not have a backup of the database. Now I have a situation where
> I lost one of the disks containing an index filegroup. I am running
> in FULL recovery mode.
> I do not care if I lose the index filegroup, as that will only slow
> down the database.
> Is there a way to start the database without the index filegroup?
> Can I take it offline or drop it?
> Thanks,
> Mangesh
Do you think there were any clustered indexes in the database? If so,
where did you put those tables? On one of the table filegroups or one of
the index filegroups? The table goes with the clustered index, so if
you built a clustered index on a table and put it on the index drive you
lost, you lost the table as well.
Many here are going to wonder:
Where is the redundancy in your drive subsystem?
Why no backups?
It's pretty risky leaving both out of the equation, as you now know.
Going forward, I might suggest using two mirrored sets: one for data and
one for tempdb and log files.
--
David Gugick
Imceda Software
www.imceda.com

Thanks David. This is just a hypothetical case. For production we are using
RAID 5.
My clustered indexes are on the table filegroup, and it is not broken.
My non-clustered indexes are on a separate filegroup, and it is broken.
So I want a mechanism by which my system stays up even if the index
filegroup's disk crashes. In Oracle you can take the index tablespace
offline and allow the database to work fine.
I wanted to check if we have such a mechanism in SQL Server 2000.
Thanks as always,
Mangesh
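For reference, SQL Server 2000 has no Oracle-style "take the index tablespace offline" command; the documented mechanism closest to this is file/filegroup backup and restore. A minimal sketch, with hypothetical database, filegroup, and path names, and assuming a filegroup backup plus subsequent log backups already exist (which is not the case in the scenario above):

```sql
-- Hypothetical names: database [Sales], damaged filegroup [IndexFG].
-- Assumes a filegroup backup was taken BEFORE the disk was lost.

-- 1. Back up the tail of the log so no transactions are lost.
BACKUP LOG Sales TO DISK = 'D:\Backup\Sales_tail.trn' WITH NO_TRUNCATE;

-- 2. Restore only the damaged filegroup from its last filegroup backup.
RESTORE DATABASE Sales FILEGROUP = 'IndexFG'
FROM DISK = 'D:\Backup\Sales_IndexFG.bak'
WITH NORECOVERY;

-- 3. Roll the filegroup forward by restoring every log backup taken since
--    the filegroup backup, finishing with the tail backup WITH RECOVERY.
RESTORE LOG Sales FROM DISK = 'D:\Backup\Sales_tail.trn' WITH RECOVERY;
```

Without any backups at all, as in the scenario above, I'm not aware of a documented way to bring the database up minus the lost filegroup; the options seem limited to PSS-style salvage, as David suggests below.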
"David Gugick" wrote:
> Mangesh Deshpande wrote:
> > Hi,
> >
> > I have 4 filegroups in SQL Server 2000: 2 for indexes and 2 for tables.
> > They are on 4 different disks with different disk controllers.
> >
> > I do not have a backup of the database. Now I have a situation where
> > I lost one of the disks containing an index filegroup. I am running
> > in FULL recovery mode.
> >
> > I do not care if I lose the index filegroup, as that will only slow
> > down the database.
> > Is there a way to start the database without the index filegroup?
> > Can I take it offline or drop it?
> >
> > Thanks,
> > Mangesh
> Do you think there were any clustered indexes in the database? If so,
> where did you put those tables? On one of the table filegroups or one of
> the index filegroups? The table goes with the clustered index, so if
> you built a clustered index on a table and put it on the index drive you
> lost, you lost the table as well.
> Many here are going to wonder:
> Where is the redundancy in your drive subsystem?
> Why no backups?
> It's pretty risky leaving both out of the equation, as you now know.
> Going forward, I might suggest using two mirrored sets: one for data and
> one for tempdb and log files.
> --
> David Gugick
> Imceda Software
> www.imceda.com
>

Mangesh Deshpande wrote:
> Thanks David. This is just a hypothetical case. For production we are
> using RAID 5.
> My clustered indexes are on the table filegroup, and it is not
> broken. My non-clustered indexes are on a separate filegroup, and it is
> broken.
> So I want a mechanism by which my system stays up even if the index
> filegroup's disk crashes. In Oracle you can take the index
> tablespace offline and allow the database to work fine.
> I wanted to check if we have such a mechanism in SQL Server 2000.
> Thanks as always,
> Mangesh
But in production you are using RAID 5, which won't apply to your
hypothetical. I personally have no idea whether SQL Server can recover
from a lost drive with a lost filegroup. I assume there is a way,
whether with the help of MS PSS or some technique I don't know about.
My only question here is: why is this hypothetical even worth
considering if your production environment does not resemble what you
describe?
--
David Gugick
Imceda Software
www.imceda.com

Thanks David for sharing the knowledge. I was thinking of adding the
filegroups on one of our other production, non-critical databases, which
has no RAID.
I was just checking to see if I could achieve any benefit, and I was
curious, as it is standard practice in Oracle to create indexes on
separate tablespaces.
Mangesh
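The Oracle practice Mangesh mentions maps onto filegroups in SQL Server: the ON clause of CREATE INDEX plays the role of Oracle's TABLESPACE clause. A minimal sketch, with hypothetical database, filegroup, table, and path names:

```sql
-- Hypothetical names throughout. Add a dedicated filegroup and file for
-- nonclustered indexes, then place an index on it explicitly.
ALTER DATABASE Sales ADD FILEGROUP IndexFG;
ALTER DATABASE Sales
ADD FILE (NAME = Sales_IX1, FILENAME = 'E:\Data\Sales_IX1.ndf')
TO FILEGROUP IndexFG;

-- The ON clause controls physical placement, much like Oracle's
-- TABLESPACE clause does for an index.
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID
ON dbo.Orders (CustomerID) ON IndexFG;
```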
"David Gugick" wrote:
> Mangesh Deshpande wrote:
> > Thanks David. This is just a hypothetical case. For production we are
> > using RAID 5.
> > My clustered indexes are on the table filegroup, and it is not
> > broken. My non-clustered indexes are on a separate filegroup, and it
> > is broken.
> >
> > So I want a mechanism by which my system stays up even if the index
> > filegroup's disk crashes. In Oracle you can take the index
> > tablespace offline and allow the database to work fine.
> > I wanted to check if we have such a mechanism in SQL Server 2000.
> >
> > Thanks as always,
> > Mangesh
> But in production you are using RAID 5, which won't apply to your
> hypothetical. I personally have no idea whether SQL Server can recover
> from a lost drive with a lost filegroup. I assume there is a way,
> whether with the help of MS PSS or some technique I don't know about.
> My only question here is: why is this hypothetical even worth
> considering if your production environment does not resemble what you
> describe?
> --
> David Gugick
> Imceda Software
> www.imceda.com
>
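David's warning earlier in the thread, that "the table goes with the clustered index", can be illustrated with a short sketch (hypothetical names). Because the leaf level of a clustered index is the table's data itself, building the clustered index ON a different filegroup physically moves the whole table there:

```sql
-- Table created on the table filegroup (hypothetical names).
CREATE TABLE dbo.Orders (
    OrderID    int NOT NULL,
    CustomerID int NOT NULL
) ON TableFG;

-- Building the clustered index ON IndexFG moves all of the table's data
-- to IndexFG: the clustered index leaf level IS the table.
CREATE CLUSTERED INDEX CIX_Orders_OrderID
ON dbo.Orders (OrderID) ON IndexFG;

-- A nonclustered index placed on IndexFG, by contrast, would leave the
-- table's data where it is.
```

This is why losing the "index" drive can mean losing tables: any table whose clustered index was placed on that filegroup lives there entirely.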