Showing posts with label backup. Show all posts

Wednesday, March 21, 2012

filelistonly vs verifyonly

Hello all. Does anyone know if a successful completion of a 'restore
filelistonly' command would indicate that a backup file is valid? I've
noticed some of our backup jobs failing during the verify phase of the
maintenance plan because of network issues, and I'd like a quick way to
check whether a backup is valid, because some of the backup files take hours
to verify. I searched MS Support and they don't seem to have any info on this.

TWTech Witch (tech.witch@.gmail.NOSPAM.com) writes:
> [snip]

I can't say for certain, but my gut feeling is that a FILELISTONLY is
a far cry from verifying the entire backup. An OK FILELISTONLY will tell
you that the backup is not completely broken, but there might still be
occasional errors because of bad disk sectors, network glitches (when
backing up to a file share), or tape-drive glitches (when backing up to
tape).
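
To make the difference concrete, here is a hedged sketch (the path is invented; WITH CHECKSUM on VERIFYONLY requires SQL Server 2005 or later and a backup taken WITH CHECKSUM):

```sql
-- Fast sanity check: reads only the backup header and file list,
-- so success means little more than "the header is readable".
RESTORE FILELISTONLY FROM DISK = N'D:\Backup\MyDb.bak';

-- Thorough check: reads the entire backup. On SQL Server 2005+,
-- with a backup taken WITH CHECKSUM, page checksums are also
-- validated: slower, but a much stronger guarantee.
RESTORE VERIFYONLY FROM DISK = N'D:\Backup\MyDb.bak' WITH CHECKSUM;
```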

--
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techin.../2000/books.asp

Monday, March 19, 2012

filegroup restore problem

Hi,
We are planning to implement a filegroup backup strategy for one of our big
databases. We plan to divide the database by date, so that January data will
be in one filegroup and February data in a separate filegroup; basically we
will have 12 filegroups per year. As each month finishes we will mark its
filegroup read-only and take a filegroup backup. Later on, if we need to
recover that filegroup after a disaster, we should just need to restore the
filegroup backup and not apply all the log files taken after it, since the
filegroup is read-only and SQL Server should assume that it cannot have
changed after the backup. But this is not happening: when I restore the
filegroup backup, SQL Server still forces me to apply all the log files taken
after it. This means we have to keep all the log files needed for recovery,
so in fact we gain no advantage from marking the filegroup read-only. Any
suggestions on how to avoid applying the log files? We are also looking for
feedback on how other people are doing this.
Thanks
--Harvinder
Note: Already reviewed this article
http://support.microsoft.com/default...;EN-US;Q295371
Here are the steps I am using to test this:
1) complete/full database backup
2) create Jan filegroup
3) populate data into Jan as well as primary filegroup
4) transaction log backup
5) put Jan as Read only
6) Jan filegroup backup
7) create Feb filegroup
8) populate data into Feb as well as primary filegroup
9) transaction log backup
10) put Feb as Read only
11) Feb filegroup backup
12) create Mar filegroup
13) populate data into Mar as well as primary filegroup
14) transaction log backup
15) put Mar as Read only
16) Mar filegroup backup
17) Create Apr filegroup
18) populate data into Apr as well as primary filegroup
19) If at this point we lose the data file belonging to the Feb filegroup, I
expect to only apply the backup taken at step 11), but SQL Server forced me
to take a tail-log backup and apply the backups taken at steps 11 and 14 plus
the tail-log backup, i.e. all the transaction log backups after the filegroup
backup.
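
As a rough T-SQL sketch of the monthly cycle in the steps above (database, filegroup, and path names are invented for illustration):

```sql
-- End of January: freeze the month's filegroup, then back it up once.
ALTER DATABASE Sales MODIFY FILEGROUP Jan READONLY;

BACKUP DATABASE Sales
    FILEGROUP = 'Jan'
    TO DISK = N'D:\Backup\Sales_Jan.bak';

-- Routine transaction log backups continue as usual.
BACKUP LOG Sales TO DISK = N'D:\Backup\Sales_log.trn';
```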
Consider differential backups, perhaps on a weekly basis. That way, you
restore the filegroup, then the most recent differential, then the remaining
logs.
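
The restore sequence suggested here might be sketched like this (names and paths are invented; WITH NORECOVERY/RECOVERY control when the database comes online):

```sql
-- 1) Restore the damaged filegroup from its filegroup backup.
RESTORE DATABASE Sales
    FILEGROUP = 'Feb'
    FROM DISK = N'D:\Backup\Sales_Feb.bak'
    WITH NORECOVERY;

-- 2) Apply the most recent differential.
RESTORE DATABASE Sales
    FROM DISK = N'D:\Backup\Sales_diff.bak'
    WITH NORECOVERY;

-- 3) Apply the log backups taken since the differential, then recover.
RESTORE LOG Sales FROM DISK = N'D:\Backup\Sales_log.trn' WITH RECOVERY;
```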
Tom
Thomas A. Moreau, BSc, PhD, MCSE, MCDBA
SQL Server MVP
Columnist, SQL Server Professional
Toronto, ON Canada
www.pinnaclepublishing.com
"Harvinder" <Harvinder@.discussions.microsoft.com> wrote in message
news:60FA9D13-D045-4C91-B224-A09EA45855BC@.microsoft.com...
[snip]
In addition to Tom's post:
What you are asking for is a planned feature for SQL Server 2005.
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
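
For readers who landed here later: SQL Server 2005 did ship partial backups that skip read-only filegroups. A hedged sketch (names and path invented):

```sql
-- SQL Server 2005+: back up only the read/write filegroups;
-- the frozen read-only monthly filegroups keep their own
-- one-time filegroup backups.
BACKUP DATABASE Sales
    READ_WRITE_FILEGROUPS
    TO DISK = N'D:\Backup\Sales_rw.bak';
```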
"Harvinder" <Harvinder@.discussions.microsoft.com> wrote in message
news:60FA9D13-D045-4C91-B224-A09EA45855BC@.microsoft.com...
> [snip]
Tibor,
You mentioned that this will be a new feature in SQL Server 2005. I don't see
any white paper on the Microsoft web site regarding backup in SQL Server 2005.
If you get this message and have any information on this topic, do let me
know.
Thanks
--Harvinder
"Tibor Karaszi" wrote:

> [snip]


Monday, March 12, 2012

Filegroup restore from full backup

Hello!
I'm trying to figure out how some things about filegroup restore work.
I have a primary filegroup that is very small (2 MB)
and another filegroup that is rather large (2 GB).
The transaction log is very small (1 MB).
The full backup is about 2 GB.
I'm doing a filegroup restore of only the primary filegroup from the full
backup.
RESTORE DATABASE Test FILEGROUP = 'PRIMARY' FROM DISK = 'C:\Test.bak'
I would expect this to be very fast, but it's not. Could it be that SQL
Server is reading the complete backup file and not only the filegroup that is
needed?
I'm using SQL Server 2005 SP2.
Best regards
Ola Hallengren
Thanks, Tibor. I understand.
I'm thinking about using it in a human data error (a record deleted)
scenario. If you have a really large database and a good filegroup strategy
this feature would be very useful.
Does it work the same way in SQL Server 2008?
/Ola
"Tibor Karaszi" wrote:

> Hej Ola,
> To the best of my knowledge, SQL Server does not have any type of allocation structure at the
> beginning of the backup from which it knows where pages from a particular file are located. I.e., it
> will have to read the backup file from beginning to end and, for each extent, check which file it
> belongs to in order to determine whether to write the extent to the database file or not.
> --
> Tibor Karaszi, SQL Server MVP
> http://www.karaszi.com/sqlserver/default.asp
> http://sqlblog.com/blogs/tibor_karaszi
>
> "Ola Hallengren" <OlaHallengren@.discussions.microsoft.com> wrote in message
> news:C1A1452A-2361-49D0-8BF9-05D9A5B9392B@.microsoft.com...
>
>
The best way to plan for restoring at the file or filegroup level is to
never place any user objects in the primary filegroup. That is because to
restore any file or filegroup you must always restore the primary filegroup
first, and keeping only the system objects there speeds this up dramatically.
Then place user objects in separate secondary filegroups based on their usage
within the schema. Then you can do individual file or filegroup backups so
that you don't have to read an entire full backup each time. For more
details I suggest you read up on Piecemeal Restores in Books Online.
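
A piecemeal restore along these lines might look roughly like this on SQL Server 2005+ (all names and paths are invented):

```sql
-- Bring the small, system-objects-only primary filegroup online first.
RESTORE DATABASE Test
    FILEGROUP = 'PRIMARY'
    FROM DISK = N'C:\Test_primary.bak'
    WITH PARTIAL, NORECOVERY;

-- Then restore just the secondary filegroup that is needed.
RESTORE DATABASE Test
    FILEGROUP = 'DATA1'
    FROM DISK = N'C:\Test_data1.bak'
    WITH NORECOVERY;

-- Apply log backups and recover.
RESTORE LOG Test FROM DISK = N'C:\Test_log.trn' WITH RECOVERY;
```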
Andrew J. Kelly SQL MVP
Solid Quality Mentors
"Ola Hallengren" <OlaHallengren@.discussions.microsoft.com> wrote in message
news:759D0E49-7C55-4DEC-8EA1-8D39C827DF3A@.microsoft.com...
> [snip]
I still think it would be smart if it were possible to restore filegroups
from a full backup without having to read the entire backup file. (And yes,
it is good practice to keep only system objects in the primary filegroup.)
Thanks.
/Ola
"Andrew J. Kelly" wrote:

> [snip]

Filegroup restore from full backup

Hello!
I'm trying to figure out how some things about filegroup restore work.
I have a primary filegroup that is very small (2 MB)
and another filegroup that is rather large (2 GB).
The transaction log if very small (1 MB).
The full backup is about 2 GB.
I'm doing a filegroup restore of only the primary filegroup from the full
backup.
RESTORE DATABASE Test FILEGROUP = 'PRIMARY' FROM DISK = 'C:\Test.bak'
If would expect this to be very fast, but it's not. Could it be that SQL
Server is reading the complete backup file and not only the filegroup that is
needed?
I'm using SQL Server 2005 SP2.
Best regards
Ola HallengrenHej Ola,
To the best of my knowledge, SQL Server does not have any type of allocation structure at the
beginning of the backup with which it knows where the pages from a particular file are. That is, it
has to read the backup file from beginning to end and, for each extent, check which file it belongs
to in order to determine whether to write the extent to the database file or not.
--
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://sqlblog.com/blogs/tibor_karaszi
"Ola Hallengren" <OlaHallengren@.discussions.microsoft.com> wrote in message
news:C1A1452A-2361-49D0-8BF9-05D9A5B9392B@.microsoft.com...
> Hello!
> I'm trying to figure out how some things about filegroup restore work.
> I have a primary filegroup that is very small (2 MB)
> and another filegroup that is rather large (2 GB).
> The transaction log if very small (1 MB).
> The full backup is about 2 GB.
> I'm doing a filegroup restore of only the primary filegroup from the full
> backup.
> RESTORE DATABASE Test FILEGROUP = 'PRIMARY' FROM DISK = 'C:\Test.bak'
> If would expect this to be very fast, but it's not. Could it be that SQL
> Server is reading the complete backup file and not only the filegroup that is
> needed?
> I'm using SQL Server 2005 SP2.
> Best regards
> Ola Hallengren|||Thanks, Tibor. I understand.
I'm thinking about using it in a human data-error scenario (e.g., a deleted
record). If you have a really large database and a good filegroup strategy,
this feature would be very useful.
Does it work the same way in SQL Server 2008?
/Ola
"Tibor Karaszi" wrote:
> Hej Ola,
> To the best of my knowledge, SQL Server do not have any type of allocation structure in the
> beginning of the backup with which it know where pages from some particular file exist. I.e., it
> will have to read the backup file from beginning to end and for each extent see what page it belongs
> in order to determine whether to write the extent to the database file or not.
> --
> Tibor Karaszi, SQL Server MVP
> http://www.karaszi.com/sqlserver/default.asp
> http://sqlblog.com/blogs/tibor_karaszi
>
> "Ola Hallengren" <OlaHallengren@.discussions.microsoft.com> wrote in message
> news:C1A1452A-2361-49D0-8BF9-05D9A5B9392B@.microsoft.com...
> > Hello!
> >
> > I'm trying to figure out how some things about filegroup restore work.
> >
> > I have a primary filegroup that is very small (2 MB)
> > and another filegroup that is rather large (2 GB).
> > The transaction log if very small (1 MB).
> >
> > The full backup is about 2 GB.
> >
> > I'm doing a filegroup restore of only the primary filegroup from the full
> > backup.
> >
> > RESTORE DATABASE Test FILEGROUP = 'PRIMARY' FROM DISK = 'C:\Test.bak'
> >
> > If would expect this to be very fast, but it's not. Could it be that SQL
> > Server is reading the complete backup file and not only the filegroup that is
> > needed?
> >
> > I'm using SQL Server 2005 SP2.
> >
> > Best regards
> >
> > Ola Hallengren
>
>|||The best way to plan for restoring at the file or filegroup level is to
never place any user objects in the primary filegroup. That is because to
restore any file or filegroup you must always restore the primary filegroup
first and keeping only the system objects will speed this dramatically. Then
place user objects in separate secondary filegroups based on their usage
within the schema. Then you can do individual file or filegroup backups so
that you don't have to read an entire full backup each time. For more
details I suggest you read up on Piecemeal Restores in BooksOnLine.
--
Andrew J. Kelly SQL MVP
Solid Quality Mentors
"Ola Hallengren" <OlaHallengren@.discussions.microsoft.com> wrote in message
news:759D0E49-7C55-4DEC-8EA1-8D39C827DF3A@.microsoft.com...
> Thanks, Tibor. I understand.
> I'm thinking about using it in a human data error (a record deleted)
> scenario. If you have a really large database and a good filegroup
> strategy
> this feature would be very useful.
> Does it work the same way in SQL Server 2008?
> /Ola
>
> "Tibor Karaszi" wrote:
>> Hej Ola,
>> To the best of my knowledge, SQL Server do not have any type of
>> allocation structure in the
>> beginning of the backup with which it know where pages from some
>> particular file exist. I.e., it
>> will have to read the backup file from beginning to end and for each
>> extent see what page it belongs
>> in order to determine whether to write the extent to the database file or
>> not.
>> --
>> Tibor Karaszi, SQL Server MVP
>> http://www.karaszi.com/sqlserver/default.asp
>> http://sqlblog.com/blogs/tibor_karaszi
>>
>> "Ola Hallengren" <OlaHallengren@.discussions.microsoft.com> wrote in
>> message
>> news:C1A1452A-2361-49D0-8BF9-05D9A5B9392B@.microsoft.com...
>> > Hello!
>> >
>> > I'm trying to figure out how some things about filegroup restore work.
>> >
>> > I have a primary filegroup that is very small (2 MB)
>> > and another filegroup that is rather large (2 GB).
>> > The transaction log if very small (1 MB).
>> >
>> > The full backup is about 2 GB.
>> >
>> > I'm doing a filegroup restore of only the primary filegroup from the
>> > full
>> > backup.
>> >
>> > RESTORE DATABASE Test FILEGROUP = 'PRIMARY' FROM DISK = 'C:\Test.bak'
>> >
>> > If would expect this to be very fast, but it's not. Could it be that
>> > SQL
>> > Server is reading the complete backup file and not only the filegroup
>> > that is
>> > needed?
>> >
>> > I'm using SQL Server 2005 SP2.
>> >
>> > Best regards
>> >
>> > Ola Hallengren
>>|||I still think that it would be smart if it were possible to restore filegroups
from a full backup without having to read the entire backup file. (And yes, it
is good practice to have only system objects in the Primary filegroup.)
Thanks.
/Ola
"Andrew J. Kelly" wrote:
> The best way to plan for restoring at the file or filegroup level is to
> never place any user objects in the primary filegroup. That is because to
> restore any file or filegroup you must always restore the primary filegroup
> first and keeping only the system objects will speed this dramatically. Then
> place user objects in separate secondary filegroups based on their usage
> within the schema. Then you can do individual file or filegroup backups so
> that you don't have to read an entire full backup each time. For more
> details I suggest you read up on Piecemeal Restores in BooksOnLine.
> --
> Andrew J. Kelly SQL MVP
> Solid Quality Mentors
>
> "Ola Hallengren" <OlaHallengren@.discussions.microsoft.com> wrote in message
> news:759D0E49-7C55-4DEC-8EA1-8D39C827DF3A@.microsoft.com...
> > Thanks, Tibor. I understand.
> >
> > I'm thinking about using it in a human data error (a record deleted)
> > scenario. If you have a really large database and a good filegroup
> > strategy
> > this feature would be very useful.
> >
> > Does it work the same way in SQL Server 2008?
> >
> > /Ola
> >
> >
> >
> > "Tibor Karaszi" wrote:
> >
> >> Hej Ola,
> >>
> >> To the best of my knowledge, SQL Server do not have any type of
> >> allocation structure in the
> >> beginning of the backup with which it know where pages from some
> >> particular file exist. I.e., it
> >> will have to read the backup file from beginning to end and for each
> >> extent see what page it belongs
> >> in order to determine whether to write the extent to the database file or
> >> not.
> >>
> >> --
> >> Tibor Karaszi, SQL Server MVP
> >> http://www.karaszi.com/sqlserver/default.asp
> >> http://sqlblog.com/blogs/tibor_karaszi
> >>
> >>
> >> "Ola Hallengren" <OlaHallengren@.discussions.microsoft.com> wrote in
> >> message
> >> news:C1A1452A-2361-49D0-8BF9-05D9A5B9392B@.microsoft.com...
> >> > Hello!
> >> >
> >> > I'm trying to figure out how some things about filegroup restore work.
> >> >
> >> > I have a primary filegroup that is very small (2 MB)
> >> > and another filegroup that is rather large (2 GB).
> >> > The transaction log if very small (1 MB).
> >> >
> >> > The full backup is about 2 GB.
> >> >
> >> > I'm doing a filegroup restore of only the primary filegroup from the
> >> > full
> >> > backup.
> >> >
> >> > RESTORE DATABASE Test FILEGROUP = 'PRIMARY' FROM DISK = 'C:\Test.bak'
> >> >
> >> > If would expect this to be very fast, but it's not. Could it be that
> >> > SQL
> >> > Server is reading the complete backup file and not only the filegroup
> >> > that is
> >> > needed?
> >> >
> >> > I'm using SQL Server 2005 SP2.
> >> >
> >> > Best regards
> >> >
> >> > Ola Hallengren
> >>
> >>
> >>
>|||In addition to Andrew's reply:
> Does it work the same way in SQL Server 2008?
AFAIK, yes. I haven't seen or heard about any architectural changes of this type for backup.
--
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://sqlblog.com/blogs/tibor_karaszi
"Ola Hallengren" <OlaHallengren@.discussions.microsoft.com> wrote in message
news:759D0E49-7C55-4DEC-8EA1-8D39C827DF3A@.microsoft.com...
> Thanks, Tibor. I understand.
> I'm thinking about using it in a human data error (a record deleted)
> scenario. If you have a really large database and a good filegroup strategy
> this feature would be very useful.
> Does it work the same way in SQL Server 2008?
> /Ola
>
> "Tibor Karaszi" wrote:
>> Hej Ola,
>> To the best of my knowledge, SQL Server do not have any type of allocation structure in the
>> beginning of the backup with which it know where pages from some particular file exist. I.e., it
>> will have to read the backup file from beginning to end and for each extent see what page it
>> belongs
>> in order to determine whether to write the extent to the database file or not.
>> --
>> Tibor Karaszi, SQL Server MVP
>> http://www.karaszi.com/sqlserver/default.asp
>> http://sqlblog.com/blogs/tibor_karaszi
>>
>> "Ola Hallengren" <OlaHallengren@.discussions.microsoft.com> wrote in message
>> news:C1A1452A-2361-49D0-8BF9-05D9A5B9392B@.microsoft.com...
>> > Hello!
>> >
>> > I'm trying to figure out how some things about filegroup restore work.
>> >
>> > I have a primary filegroup that is very small (2 MB)
>> > and another filegroup that is rather large (2 GB).
>> > The transaction log if very small (1 MB).
>> >
>> > The full backup is about 2 GB.
>> >
>> > I'm doing a filegroup restore of only the primary filegroup from the full
>> > backup.
>> >
>> > RESTORE DATABASE Test FILEGROUP = 'PRIMARY' FROM DISK = 'C:\Test.bak'
>> >
>> > If would expect this to be very fast, but it's not. Could it be that SQL
>> > Server is reading the complete backup file and not only the filegroup that is
>> > needed?
>> >
>> > I'm using SQL Server 2005 SP2.
>> >
>> > Best regards
>> >
>> > Ola Hallengren
>>

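The piecemeal restore Andrew recommends above can be sketched roughly as follows. This is only an illustration of the SQL Server 2005 PARTIAL restore form described in Books Online; the database, filegroup, and backup file names below are placeholders, not anything from the thread.

```sql
-- Sketch of a piecemeal (partial) restore, SQL Server 2005.
-- BigDB, FG_Jan, and the backup paths are invented names.

-- 1. Restore the primary filegroup first. Keeping only system
--    objects in PRIMARY makes this step fast.
RESTORE DATABASE BigDB
    FILEGROUP = 'PRIMARY'
    FROM DISK = 'D:\Backup\BigDB_primary.bak'
    WITH PARTIAL, NORECOVERY;

-- 2. Restore the secondary filegroup(s) you need from their own
--    filegroup backups, so the full backup never has to be read.
RESTORE DATABASE BigDB
    FILEGROUP = 'FG_Jan'
    FROM DISK = 'D:\Backup\BigDB_FG_Jan.bak'
    WITH NORECOVERY;

-- 3. Roll forward with log backups and bring the database online.
RESTORE LOG BigDB
    FROM DISK = 'D:\Backup\BigDB_log.trn'
    WITH RECOVERY;
```

This is why keeping user objects out of PRIMARY matters: step 1 has to complete before anything else, and a small PRIMARY filegroup makes it nearly instant.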
Friday, March 9, 2012

Filegroup Backups

I'm trying to do a Full Backup on a filegroup but
everytime I try to run it I get this error:
[SQLSTATE 01000] (Message 4035) BACKUP
DATABASE...FILE=<name> successfully processed 1625664
pages in 625.900 seconds (21.277 MB/sec). [SQLSTATE 01000]
(Message 3014) The value '0' is not within range for the
FILE parameter. [SQLSTATE 42000] (Error 3250) VERIFY
DATABASE is terminating abnormally. [SQLSTATE 42000]
(Error 3013). The step failed.
Does anyone have any idea as to why this is happening?
I'm running SQL2000 on a Win2K Server. SQL Books Online
is no help.
Thanks
Jeroocko|||As Jasper says, you need to post the command. The part of
your error, 'The value '0' is not within range for the
FILE parameter', usually means an error in the code: a wrong
file name, running against the wrong database, something
like that.
Regards
John

Filegroup Backup and Restore Issue

Can filegroup speed up the process of backup and restore?
Peter,
All databases have a filegroup called the primary filegroup. The
existence of a filegroup therefore does not speed up backup and restore
per se. Filegroup backups can, however, be used to good effect to back up
otherwise unmanageable very large databases (VLDBs), by allowing you to split the
backup over several maintenance windows.
i.e. you could back up group1 on Monday, group2 on Tuesday, group1 on
Wednesday, etc.
For smaller databases (say in the tens of Gigs), you should do full
backups. However, this can vary depending on your requirements.
Mark Allison, SQL Server MVP
http://www.markallison.co.uk
Looking for a SQL Server replication book?
http://www.nwsu.com/0974973602m.html
Peter wrote:
> Can filegroup speed up the process of backup and restore?
>
|||To add to Mark's comments, filegroups allow you to make partial backups, and
restore partial backups. You should place each filegroup on a different
disk; then you can restore and back up the filegroups that map to a
particular disk or RAID array.
Wayne Snyder, MCDBA, SQL Server MVP
Mariner, Charlotte, NC
www.mariner-usa.com
(Please respond only to the newsgroups.)
I support the Professional Association of SQL Server (PASS) and its
community of SQL Server professionals.
www.sqlpass.org
"Peter" <Peter@.discussions.microsoft.com> wrote in message
news:57086BDA-AA15-4F53-B40B-21D468959F0D@.microsoft.com...
> Can filegroup speed up the process of backup and restore?
>

Filegroup backup and full recovery mode

Hi all,
My database in mssql2000 sp3a is about 40 GB in size. I
want to use filegroups and implement filegroup backup - at
the same time - i do not want to compromise on my 'FULL
recovery mode' - which means that i would like to have
point in time recovery as well.
Could any one let me know how to implement the same with
an example...?|||Hi,
I think you will find well-explained examples in BOL about how to
accomplish it.
"bharath" <barathsing@.hotmail.com> wrote in message
news:04d801c3b586$6db9bcc0$a501280a@.phx.gbl...
> Hi all,
> My database in mssql2000 sp3a is about 40 GB in size. I
> want to use filegroups and implement filegroup backup - at
> the same time - i do not want to compromise on my 'FULL
> recovery mode' - which means that i would like to have
> point in time recovery as well.
> Could any one let me know how to implement the same with
> an example...?
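A minimal sketch of what the poster is asking for might look like the following. The database and filegroup names are invented; the key point is simply that filegroup backups plus an unbroken chain of log backups (FULL recovery model) keep point-in-time restore possible. See BOL for the authoritative procedure.

```sql
-- Assumes BigDB is in FULL recovery and split into filegroups FG1, FG2.

-- Back up one filegroup per maintenance window...
BACKUP DATABASE BigDB
    FILEGROUP = 'FG1'
    TO DISK = 'D:\Backup\BigDB_FG1.bak';

-- ...and keep taking frequent log backups in between. The unbroken
-- log chain is what preserves point-in-time recovery across the
-- staggered filegroup backups.
BACKUP LOG BigDB TO DISK = 'D:\Backup\BigDB_log.trn';
```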

FileGroup

Dear Professional
Can anybody give me a URL where there is a good step-by-step explanation
of the filegroup backup and restoration process, with all
the details?
Thanks
Noor
Hi Noor,
Have a look into the below sections in books online:-
"Physical Database Files and Filegroups"
"Using Files and Filegroups"
"Creating Filegroups"
"Using File Backups"
"Files and Filegroups"
"Backing up and Restoring Databases"
"Partial Database Restore Operations"
"Backing up Selected Portions of a Database"
Thanks
Hari
MCDBA
"Noorali Issani" <naissani@.softhome.net> wrote in message
news:OghYmaXOEHA.1276@.TK2MSFTNGP11.phx.gbl...
> Dear Professional
> Can any body tell me give me the URL where there is any good explaination
> step by step about the filegroup backup as well restoration process and
all
> the detail.
>
> Thanks
> Noor
>
|||BOL
"Noorali Issani" <naissani@.softhome.net> wrote in message
news:OghYmaXOEHA.1276@.TK2MSFTNGP11.phx.gbl...
> Dear Professional
> Can any body tell me give me the URL where there is any good explaination
> step by step about the filegroup backup as well restoration process and
all
> the detail.
>
> Thanks
> Noor
>

file/filegroup backup vs copy the mdf, ndf, ldf file directly

Dear all,
I have a question about the difference between file/filegroup backup and
copy the mdf, ndf, ldf file directly
1.) If I copy the mdf, ndf, and ldf files and then replace the files (recreate the db
with the same name) on the SQL Server, will it work OK? What is the
difference compared with a file/filegroup backup?
2.) If I back up with a SQL script backup (select all objects), is the output
the same as a full backup?|||1.) The only way to safely copy the files directly is to detach the
database first. That means you must take it off line. A backup does not
require you to take the db offline.
2.) The generated script is simply the DDL to recreate the database and
does not contain the data. It is not to be used in place of a backup.
Andrew J. Kelly SQL MVP
"Joe" <Joe@.discussions.microsoft.com> wrote in message
news:299DA939-7562-4546-B4A1-15F7CA2D8881@.microsoft.com...
> Dear all,
> I have a question about the difference between file/filegroup backup and
> copy the mdf, ndf, ldf file directly
> 1.) If i copy the mdf,ndf,ldf file and then replace the file(recreate the
> db
> with the same name) to the sql server, will it works and ok, what is the
> difference with backup file/filegroup backup?
> 2.) if i backup with sql script backup (select all object), is it the same
> output with full backup?
>|||Script backup does not have data?
"Andrew J. Kelly" wrote:

> 1.) The only way to safely copy the files directly is to detach the
> database first. That means you must take it off line. A backup does not
> require you to take the db offline.
> 2.) The generate script is simply the DDL to recreate the database and
> does not contain the data. It is not to be used in place of a backup.
> --
> Andrew J. Kelly SQL MVP
>
> "Joe" <Joe@.discussions.microsoft.com> wrote in message
> news:299DA939-7562-4546-B4A1-15F7CA2D8881@.microsoft.com...
>
>|||Now I am not sure what you are referring to. There isn't a way to directly
generate a backup script that I know of. Can you explain exactly how you
get to the script?
Andrew J. Kelly SQL MVP
"Joe" <Joe@.discussions.microsoft.com> wrote in message
news:786D0922-017D-41AE-A915-63F76799F77E@.microsoft.com...
> Script backup does not have data?
> "Andrew J. Kelly" wrote:
>

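Andrew's first point can be sketched like this. The database name and paths are placeholders, and sp_detach_db/sp_attach_db are the SQL 2000/2005-era procedures; note that unlike BACKUP DATABASE, this approach takes the database offline:

```sql
-- Copying the mdf/ndf/ldf files safely requires detaching first,
-- which takes the database offline:
EXEC sp_detach_db 'MyDB';
-- ...copy MyDB.mdf / MyDB_log.ldf at the OS level, then reattach:
EXEC sp_attach_db 'MyDB',
     'C:\Data\MyDB.mdf',
     'C:\Data\MyDB_log.ldf';

-- A backup, by contrast, runs while the database stays online:
BACKUP DATABASE MyDB TO DISK = 'D:\Backup\MyDB.bak';
```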
Wednesday, March 7, 2012

File System Task Error

Hi,
I am using the 'File System Task ' to create a directory structure (e.g ..\DB; ..\DB\LOG; ..\DB\BACKUP; )
I set following properties for the single tasks: UseDirectoryIfExists = True; Operation = Create Directory;

The task worked fine before installing SP1 on the server. Now it raises an ERROR if the directory already exists and is not empty.

SSIS package "testcreatedirectory.dtsx" starting.

Warning: 0xC002915A at Create DB Directory, File System Task: The Directory already exists.

Error: 0xC002F304 at Create DB Directory, File System Task: An error occurred with the following error message: "Das Verzeichnis ist nicht leer.". (The Directory is not empty.)

Task failed: Create DB Directory

Warning: 0x80019002 at Create Directorys: The Execution method succeeded, but the number of errors raised (1) reached the maximum allowed (1); resulting in failure. This occurs when the number of errors reaches the number specified in MaximumErrorCount. Change the MaximumErrorCount or fix the errors.

Warning: 0x80019002 at testcreatedirectory: The Execution method succeeded, but the number of errors raised (1) reached the maximum allowed (1); resulting in failure. This occurs when the number of errors reaches the number specified in MaximumErrorCount. Change the MaximumErrorCount or fix the errors.

SSIS package "testcreatedirectory.dtsx" finished: Failure.

Does anyone know if this is a known bug in SP1 (or maybe it's a feature) and whether a solution already exists (maybe I have to set additional properties I have not discovered yet).

Thanks in advance
Holger

I have just run into the same problem after applying Service Pack 1. Have you resolved it yet?|||I don't know if this will help with your particular situation, but hotfix packages for SQL Server 2005 SP1 were recently released: http://support.microsoft.com/kb/918222/en-us|||I installed the SSIS hotfix but I still get the same results. I guess I need to check the Microsoft bug list site to see if it's been posted.|||I also tried the hotfix (last week) - with the same result. Have you checked the Microsoft bug list?|||

I don't know if there is a proper fix for this, but you might want to use an item loop container to loop over the paths that you want created and then just have two tasks in there. One task would be a script task to check for the directory's existence, and the other would create it if it does not exist.

This would achieve the same result as the former single task.

Fred

|||

Has anyone been able to solve this problem without an unofficial workaround?
All my packages use this task to ease logging, and currently they are all failing.

|||

No, I have not resolved the issue yet. The only solutions I have are workarounds.

|||

Could somebody help me with this same issue? How do we do the workaround method?

The first time I run it, it works fine and creates some files inside, but when I execute the package for the second time, it throws up an error saying that the directory is not empty. If I go and delete the files manually, it writes the files correctly. I have the File System Task with UseDirectoryIfExists = True and Operation = Create Directory.

Any help,

Thanks,

JA

|||I had the same issue, but realized that within the constraints of the task there was no "real" solution. I set the "Force Execution Result" property of the File System (FSO) Task to "Success". This solved the issue. It would still create the directory if it didn't exist and leave it alone if it did. Then I used another FSO task to move the files I wanted to rename to the new directory. So in this case I used the move option to move and rename. I often use the move option to rename things if the rename won't work. To tell you the truth, the move option might do what you want as well. I can't tell because I used both, first the create, then the move.|||

So you had the FSO task with ForceExecutionResult = Success, UseDirectoryIfExists = True, and Operation = Create Directory. It does not work for me, even if I use a next step to move files to a new directory: you have to create the directory in the first place, then move, then rename or delete. This is what I get:

Source: Creating Directoy Folder
Description: The Directory already exists.
End Warning
Error: 2006-09-08 12:21:31.59
Code: 0xC002F304
Source: Creating Directoy Folder
Description: An error occurred with the following error message: "The directory is not empty.

Any idea about this error

Thanks,

JA

|||Like I said, there is no "real" solution with the task. It just doesn't do what it should. What I offered was a temporary fix to have the task move on regardless. The only other fix, as others have mentioned is doing it in a script, utilizing the File System Object. It seems to be a bug.


Friday, February 24, 2012

file or filegroup not online? "sysft_ftcat_documentindex"

Just upgraded from SQL 2000 to 2005 and I created a backup maintenance plan
but I am getting the following error every time the maintenance plan runs:
Error number: -1073548784
Executing the query "BACKUP DATABASE [Clarke_MSCRM] TO DISK = N'C:\\Program
Files\\Microsoft SQL
Server\\MSSQL\\BACKUP\\Clarke_MSCRM\\Clarke_MSCRM_backup_200607270400.bak'
WITH NOFORMAT, NOINIT, NAME = N'Clarke_MSCRM_backup_20060727040005', SKIP,
REWIND, NOUNLOAD, STATS = 10
" failed with the following error: "The backup of the file or filegroup
"sysft_ftcat_documentindex" is not permitted because it is not online. BACKUP
can be performed by using the FILEGROUP or FILE clauses to restrict the
selection to include only online data.
BACKUP DATABASE is terminating abnormally.". Possible failure reasons:
Problems with the query, "ResultSet" property not set correctly, parameters
not set correctly, or connection not established correctly.
I'm not sure where to start looking.|||Try rebuilding or repopulating your full-text catalog. This is included in 2005 backup and is most
probably your problem.
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
"Matt M" <MattM@.discussions.microsoft.com> wrote in message
news:0222672A-806B-4EA0-BC3C-957050488734@.microsoft.com...
> Just upgraded from SQL 2000 to 2005 and I created a backup maintenance plan
> but I am getting the following error every time the maintenance plan runs:
> Error number: -1073548784
> Executing the query "BACKUP DATABASE [Clarke_MSCRM] TO DISK = N'C:\\Program
> Files\\Microsoft SQL
> Server\\MSSQL\\BACKUP\\Clarke_MSCRM\\Clarke_MSCRM_backup_200607270400.bak'
> WITH NOFORMAT, NOINIT, NAME = N'Clarke_MSCRM_backup_20060727040005', SKIP,
> REWIND, NOUNLOAD, STATS = 10
> " failed with the following error: "The backup of the file or filegroup
> "sysft_ftcat_documentindex" is not permitted because it is not online. BACKUP
> can be performed by using the FILEGROUP or FILE clauses to restrict the
> selection to include only online data.
> BACKUP DATABASE is terminating abnormally.". Possible failure reasons:
> Problems with the query, "ResultSet" property not set correctly, parameters
> not set correctly, or connection not established correctly.
>
> I'm not sure where to start looking.
>|||I tried the Rebuild Index task and that didn't seem to work. How would I go
about repopulating?
"Tibor Karaszi" wrote:

> Try rebuilding or repopulating your full-text catalog. This is included in 2005 backup and is most
> probably your problem.
> --
> Tibor Karaszi, SQL Server MVP
> http://www.karaszi.com/sqlserver/default.asp
> http://www.solidqualitylearning.com/
>
> "Matt M" <MattM@.discussions.microsoft.com> wrote in message
> news:0222672A-806B-4EA0-BC3C-957050488734@.microsoft.com...
>|||I'm not talking about regular indexes, I'm talking about full text indexes.
I don't do much
fulltext, so try at microsoft.public.sqlserver.fulltext.
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
"Matt M" <MattM@.discussions.microsoft.com> wrote in message
news:82F9483C-93C4-48E9-B65B-D002AFF24700@.microsoft.com...
>I tried the Rebuild Index task and that didn't seem to work. How would I go
> about repopulating?
> "Tibor Karaszi" wrote:
|||I realized I misunderstood you after I posted. I looked into it and found
where to rebuild full text and it is running on a test box right now (It's a
slow machine so it's taking a long time). I will try in production after hours.
"Tibor Karaszi" wrote:

> I'm not talking about regular indexes, I'm talking about full text indexes. I don't do much
> fulltext, so try at microsoft.public.sqlserver.fulltext.
> --
> Tibor Karaszi, SQL Server MVP
> http://www.karaszi.com/sqlserver/default.asp
> http://www.solidqualitylearning.com/
>
> "Matt M" <MattM@.discussions.microsoft.com> wrote in message
> news:82F9483C-93C4-48E9-B65B-D002AFF24700@.microsoft.com...
>
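For reference, here is a hedged T-SQL sketch of the rebuild Tibor suggests. The catalog name is only inferred from the filegroup name in the error (a `sysft_` filegroup usually corresponds to a catalog of the same name without the prefix, i.e. `ftcat_documentindex` here), so verify it first:

```sql
-- SQL Server 2005: rebuild the full-text catalog so its filegroup
-- comes back online and BACKUP DATABASE can include it.
USE Clarke_MSCRM;
GO
-- Confirm the actual catalog name first (inferred from the error message):
SELECT name FROM sys.fulltext_catalogs;
GO
ALTER FULLTEXT CATALOG ftcat_documentindex REBUILD;
GO
```

The rebuild repopulates the catalog, which can take a while on a large database, so schedule it outside business hours.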

file operations

I'm writing a proc to do tlog backups. I want to validate the path before
attempting to issue the backup command but I don't see any documented file
handling functions in T-SQL. What's the best approach for this?
Thanks,
Bob Castleman
DBA Poseur|||Implement a DTS package that first uses the FSO.FileExists function via an
ActiveX Script task to verify the folder. If the task returns success, then
use an Execute SQL task to call your SP.
Function Main()
    ' Default to failure; only flip to success once every step completes.
    Main = DTSTaskExecResult_Failure
    sCopyFrom = "c:\temp\xxx.tmp"
    sCopyTo = "c:\temp\xxx.dat"
    Set FSO = CreateObject("Scripting.FileSystemObject")
    ' Bail out (task fails) if the source file is missing.
    If Not FSO.FileExists(sCopyFrom) Then
        Exit Function
    End If
    ' Remove any stale copy of the target before copying.
    If FSO.FileExists(sCopyTo) Then
        FSO.DeleteFile sCopyTo
    End If
    FSO.CopyFile sCopyFrom, sCopyTo
    Set FSO = Nothing
    Main = DTSTaskExecResult_Success
End Function
"Bob Castleman" <nomail@.here> wrote in message
news:e180EavYFHA.1028@.TK2MSFTNGP10.phx.gbl...
> I'm writing a proc to do tlog backups. I want to validate the path before
> attempting to issue the backup command but I don't see any documented file
> handling functions in T-SQL. What's the best approach for this?
> Thanks,
> Bob Castleman
> DBA Poseur
>
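If the path check has to stay inside T-SQL rather than DTS, one common workaround is the undocumented (and unsupported) extended procedure `master..xp_fileexist`. A sketch, with a placeholder path:

```sql
-- xp_fileexist is undocumented but widely used; it returns three flags:
-- "File Exists", "File is a Directory", "Parent Directory Exists".
CREATE TABLE #path_check (FileExists int, IsDirectory int, ParentExists int);

INSERT INTO #path_check
    EXEC master..xp_fileexist 'D:\SQLBackups';  -- placeholder path

IF EXISTS (SELECT * FROM #path_check WHERE IsDirectory = 1)
    PRINT 'Path is valid; proceed with BACKUP LOG.';
ELSE
    RAISERROR ('Backup path not found.', 16, 1);

DROP TABLE #path_check;
```

Being undocumented, xp_fileexist's behavior may change between versions, so the DTS/ActiveX approach above is the safer supported route.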

File numbers in backup files

When I backup, every day, a db to the same file (using the noinit option), how can
I know the number of each file in order to be able to restore the one I
want using WITH FILE = 1 or 5 or 6. Is there an sp or a system table?
Thanks
RESTORE HEADERONLY
Also see the backup history tables in the msdb database.
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
"SalamElias" <eliassal@.online.nospam> wrote in message
news:00E117EF-9023-444D-8B3A-2DC8FB4F2F4B@.microsoft.com...
> When I backup, every day, a db to the same file (using the noinit option), how can
> I know the number of each file in order to be able to restore the one I
> want using WITH FILE = 1 or 5 or 6. Is there an sp or a system table?
> Thanks
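To illustrate both suggestions as a sketch (the file path and database name below are placeholders):

```sql
-- Option 1: read the backup sets straight from the device.
-- The Position column is the number to use in RESTORE ... WITH FILE = n.
RESTORE HEADERONLY FROM DISK = N'D:\SQLBackups\MyDb.bak';

-- Option 2: query the backup history tables in msdb.
SELECT bs.position, bs.backup_start_date, bs.type
FROM msdb.dbo.backupset AS bs
JOIN msdb.dbo.backupmediafamily AS mf
  ON bs.media_set_id = mf.media_set_id
WHERE bs.database_name = N'MyDb'
ORDER BY bs.position;
```

Note that msdb history only covers backups taken on that server; RESTORE HEADERONLY reads the file itself, so it also works for backups copied from elsewhere.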
|||Hi,
This is Charles from Microsoft Online Community Support. I'm responsible
for checking the issue status.
Please feel free to let us know if you need further research. It's always
our pleasure to be of assistance.
Charles Wang
Microsoft Online Community Support
