Thursday, March 29, 2012
Filter Error: "...processing of filter expression...cannot be performed"
My results return a tinyint column which is either 0, 1, or 2.
When adding a filter to the matrix it seems OK:
blah.value = 2
but when running the report I get the following error:
--error--
An error has occurred during report processing.
The processing of filter expression for the matrix 'matrix1' cannot be
performed. The comparison failed. Please check the data type returned by the
filter expression.
--enderror--
I have also tried
blah.value = "2"
with no success.
Anyone know why?
|||A-ha! This works:
Expression: =CInt(Fields!salesGroup.Value)
Operator: =
Value: =2
Frankly this is crap:
- I am returning a number and comparing it to a number. Why should I have to convert a number to, er, a number?
- Why do I have to put an equals sign in front of the number?
"adolf garlic" wrote:
> My results return a tinyint column which is either 0,1,2.
> When adding a filter to the matrix it seems ok
> blah.value = 2
> but when running the report i get the following error:
> --error--
> An error has occurred during report processing.
> The processing of filter expression for the matrix 'matrix1' cannot be
> performed. The comparison failed. Please check the data type returned by the
> filter expression.
> --enderror--
>
> i have also tried
> blah.value = "2"
> with no success
> anyone know why?
>
filter by UserID
I have many reports that are filtered by the UserID of the person running
the report. Aside from rolling my own security, is there a way for me to use
RS to pass this ID? Since the user is logging in via Windows Authentication
I have their login name. I was thinking of using this to query the UserID
from a custom table and then use that as a hidden parameter for all the
sp's. The problem I have, though, is how to get that UserID prior to anything
on the report happening.
Thanks,
Shawn
|||The User!UserID global variable can be used as an input to your query. Do the
following: create a query parameter. Click on the "...", go to Parameters, and
then map the query parameter to the global variable (it includes the domain, so
you might want to strip off the domain). To map it, choose expressions; that
brings up the expression builder. Next, go to Layout, Report Parameters, and
remove the parameter that was automatically created for you by RS when you
created the query parameter.
Bruce Loehle-Conger
MVP SQL Server Reporting Services
"Shawn Mason" <shawn@.issda.com> wrote in message
news:%23YFHG1L$EHA.3416@.TK2MSFTNGP09.phx.gbl...
> I have many reports that are filtered by the userID of the person running
> the report. Aside from rolling my own security is there a way for me to use
> RS to pass this ID. Since the user is logging in via Windows Authentication
> I have their login name. I was thinking of using this to query the UserID
> from a custom table and then use that as a hidden parameter for all the
> sp's. The problem I have though is how to get that UserId prior to anything
> on the report happening.
> Thanks,
> Shawn
>|||Here's something I did, to strip off the domain, as Bruce says:
SELECT e.EMPLID, e.NAME
FROM EMPLTABLE e
WHERE e.ID = RIGHT(@UserID, LEN(@UserID) - CHARINDEX('\', @UserID))
It will use whatever you write after the \ in a domain\username scenario.
Kaisa M. Lindahl
"Bruce L-C [MVP]" <bruce_lcNOSPAM@.hotmail.com> wrote in message
news:OiMDkHM$EHA.3908@.TK2MSFTNGP12.phx.gbl...
> The user!userid global variable can be used as a input to your query. Do the
> following, create a query parameter. Click on the ..., go to parameters and
> then map the query parameter to the global variable (it has domain so you
> might want to strip off the domain). To map it choose expressions and that
> brings up the expression builder. Next go to layout, report parameters and
> remove the parameter that was automatically created for you by RS when you
> created the query parameter.
>
> --
> Bruce Loehle-Conger
> MVP SQL Server Reporting Services
> "Shawn Mason" <shawn@.issda.com> wrote in message
> news:%23YFHG1L$EHA.3416@.TK2MSFTNGP09.phx.gbl...
> > I have many reports that are filtered by the userID of the person running
> > the report. Aside from rolling my own security is there a way for me to use
> > RS to pass this ID. Since the user is logging in via Windows Authentication
> > I have their login name. I was thinking of using this to query the UserID
> > from a custom table and then use that as a hidden parameter for all the
> > sp's. The problem I have though is how to get that UserId prior to anything
> > on the report happening.
> >
> > Thanks,
> >
> > Shawn
> >
> >
>
Monday, March 26, 2012
Fill() problem in VS .NET 2003 in Windows application C# with SQL Server
I am starting to use Visual Studio .NET 2003, creating a C# Windows application
with SQL Server, and I get a problem with the method Fill(): running ends with
"System error" even in the most simple cases, which I copied from a tutorial.
Everything else works right, for example Preview Data, except
sqlDataAdapter1.Fill(ds1).
Is it a bug or some other kind of error? What should I do?
Dalius
|||I get:
An unhandled exception of type 'System.Data.SqlClient.SqlException' occurred
in system.data.dll
Additional information: System error.
"Auto" <dalius@.autocentras.com> wrote in message
news:uvWhT6X2DHA.1704@.tk2msftngp13.phx.gbl...
> I starting to use Visual Studio .NET 2003 creating C# Windows application
> with SQL Server and I get problem with method Fill() for which when
> running ends with System Error even with the most simple cases which I
> copied from a tutorial. everything else works right for ex Preview Data
> except for example sqlDataAdapter1.Fill(ds1).
> Is it a bug or some other kind of error. What should I do?
> Dalius
> Fill() problem in VS .NET 2003 in Windows application C# with SQL Server
>
>|||Post your code... The Fill method does work, so I suspect you are doing
something funny... It might be better to post this in the .NET groups, but
either place will do.
--
Wayne Snyder, MCDBA, SQL Server MVP
Computer Education Services Corporation (CESC), Charlotte, NC
www.computeredservices.com
(Please respond only to the newsgroups.)
I support the Professional Association of SQL Server (PASS) and its
community of SQL Server professionals.
www.sqlpass.org
"Auto" <dalius@.autocentras.com> wrote in message
news:uciFeVa2DHA.3224@.tk2msftngp13.phx.gbl...
> I get :
> An unhandled exception of type 'System.Data.SqlClient.SqlException' occurred
> in system.data.dll
> Additional information: System error.
>
> "Auto" <dalius@.autocentras.com> wrote in message
> news:uvWhT6X2DHA.1704@.tk2msftngp13.phx.gbl...
> > I starting to use Visual Studio .NET 2003 creating C# Windows application
> > with SQL Server and I get problem with method Fill() for which when
> > running ends with System Error even with the most simple cases which I
> > copied from a tutorial. everything else works right for ex Preview Data
> > except for example sqlDataAdapter1.Fill(ds1).
> >
> > Is it a bug or some other kind of error. What should I do?
> >
> > Dalius
> >
> > Fill() problem in VS .NET 2003 in Windows application C# with SQL Server
> >
> >
> >
>|||Try executing the statement in a try/catch block so that you can get more
detailed info on the error:
// Requires: using System.Data.SqlClient; using System.Windows.Forms;
try
{
    sqlDataAdapter1.Fill(ds1);
}
catch (SqlException ex)
{
    MessageBox.Show(ex.ToString());
}
--
Hope this helps.
Dan Guzman
SQL Server MVP
"Auto" <dalius@.autocentras.com> wrote in message
news:uciFeVa2DHA.3224@.tk2msftngp13.phx.gbl...
> I get :
> An unhandled exception of type 'System.Data.SqlClient.SqlException' occurred
> in system.data.dll
> Additional information: System error.
>
> "Auto" <dalius@.autocentras.com> wrote in message
> news:uvWhT6X2DHA.1704@.tk2msftngp13.phx.gbl...
> > I starting to use Visual Studio .NET 2003 creating C# Windows application
> > with SQL Server and I get problem with method Fill() for which when
> > running ends with System Error even with the most simple cases which I
> > copied from a tutorial. everything else works right for ex Preview Data
> > except for example sqlDataAdapter1.Fill(ds1).
> >
> > Is it a bug or some other kind of error. What should I do?
> >
> > Dalius
> >
> > Fill() problem in VS .NET 2003 in Windows application C# with SQL Server
> >
> >
> >
>
Monday, March 12, 2012
Filegroup recommendation
I am trying to understand filegroups under our environment and how it might best be leveraged. We are running SQL 2K.
We are redesigning the database and table schemas. For a disk setup we have the hard drive and a Raid 5 array comprising an E drive. We currently have transaction logs on the hard drive, system databases (and subsequently all system tables) on the hard drive, and all our user defined databases (and subsequently all user defined tables) on the E drive.
The server under discussion contains only one processor.
The user defined database itself is rather small, at about 100MB. It contains a clustered index, one date column index, and three columns separately indexed as foreign keys. Additionally, there is an image column. The table has both, almost equally, inserts and updates, few deletes. It is also read heavily, the clustered index most, followed by the three foreign key indexed columns, and then the least using the date column.
We have one filegroup, the default Primary. Currently, our user defined tables and indexes are all on the Primary filegroup. As I understand filegroups (as per Books Online) as well as other internet sources, it may be advantageous to create a separate file group for indexes, and even the image column. Given our limited resources, and our current set up of a hard drive, Raid 5 Array [E Drive], and a single processor, do you have any recommendations?
--
Message posted via http://www.sqlmonster.com
|||Robert,
We went with placing the system data in the PRIMARY filegroup and created a
DATA filegroup, making it the default. This way we separated the user data
from the system data for admin purposes. I know this is not really what
you had asked for, but it was something we did.
Chris Wood
Alberta Department of Energy
CANADA
"Robert Richards via SQLMonster.com" <forum@.SQLMonster.com> wrote in message
news:38366e87041d4bc2a888439965f48bee@.SQLMonster.com...
>I am trying to understand filegroups under our environment and how it might
>best be leveraged. We are running SQL 2K.
> We are redesigning the database and table schemas. For a disk setup we
> have the hard drive and a Raid 5 array comprising an E drive. We currently
> have transaction logs on the hard drive, system databases (and
> subsequently all system tables) on the hard drive, and all our user
> defined databases (and subsequently all user defined tables) on the E
> drive.
> The server under discussion contains only one processor.
> The user defined database itself is rather small, at about 100MB. It
> contains a clustered index, one date column index, and three columns
> separately indexed as foreign keys. Additionally, there is an image
> column. The table has both, almost equally, inserts and updates, few
> deletes. It is also read heavily, the clustered index most, followed by
> the three foreign key indexed columns, and then the least using the date
> column.
> We have one filegroup, the default Primary. Currently, our user defined
> tables and indexes are all on the Primary filegroup. As I understand
> filegroups (as per Books Online) as well as other internet sources, it may
> be advantageous to create a separate file group for indexes, and even the
> image column. Given our limited resources, and our current set up of a
> hard drive, Raid 5 Array [E Drive], and a single processor, do you have
> any recommendations?
> --
> Message posted via http://www.sqlmonster.com
|||Separate filegroups can give you better performance. I would separate out
the data files into a filegroup, non-clustered indexes into their own
filegroup, and the transaction logs into their own filegroup.
The advantage to this comes primarily from a system with multiple processors
and multiple hard drives. In a multiple-processor system, SQL Server can
access tables and associated non-clustered indexes in parallel. Filegroups
on separate hard drives (particularly the transaction log) can help reduce
head thrashing and associated performance degradation.
If you have all your filegroups on the same physical hard drive, the drive
head is going to slow you down a bit since it will have to jump from reading
your index to writing data in your table to updating the transaction log and
so on. This may not be as big an issue with a small database though, since
caching will help.
Depending on the amount of reads to writes you're performing, you might look
at a different RAID configuration. RAID 5 is horrible for transaction logs.
The reason is that RAID 5 roughly doubles the number of disk I/Os for
every write, since each write must also read and rewrite the parity block.
I would recommend, at the very least, that you get your transaction logs on
their own separate physical hard drive (it could be RAID 1 or RAID 1+0, but
RAID 5 is not recommended). If you have the option, I would also get an
additional hard drive and create a separate filegroup on it for your
non-clustered indexes.
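If you do add a drive later, the split described above comes down to a few statements (a sketch; the database, file path, and index names here are all hypothetical):

```sql
-- Hypothetical names throughout. Create a filegroup for non-clustered
-- indexes, back it with a file on the new drive, and build indexes there.
ALTER DATABASE SalesDB ADD FILEGROUP IndexFG;

ALTER DATABASE SalesDB
ADD FILE (NAME = SalesDB_Idx1, FILENAME = 'F:\Data\SalesDB_Idx1.ndf')
TO FILEGROUP IndexFG;

CREATE NONCLUSTERED INDEX IX_Orders_OrderDate
ON dbo.Orders (OrderDate)
ON IndexFG;
```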
Thanks,
Michael C#, MCDBA
"Robert Richards via SQLMonster.com" <forum@.SQLMonster.com> wrote in message
news:38366e87041d4bc2a888439965f48bee@.SQLMonster.com...
> I am trying to understand filegroups under our environment and how it
> might best be leveraged. We are running SQL 2K.
> We are redesigning the database and table schemas. For a disk setup we
> have the hard drive and a Raid 5 array comprising an E drive. We currently
> have transaction logs on the hard drive, system databases (and subsequently
> all system tables) on the hard drive, and all our user defined databases
> (and subsequently all user defined tables) on the E drive.
> The server under discussion contains only one processor.
> The user defined database itself is rather small, at about 100MB. It
> contains a clustered index, one date column index, and three columns
> separately indexed as foreign keys. Additionally, there is an image column.
> The table has both, almost equally, inserts and updates, few deletes. It is
> also read heavily, the clustered index most, followed by the three foreign
> key indexed columns, and then the least using the date column.
> We have one filegroup, the default Primary. Currently, our user defined
> tables and indexes are all on the Primary filegroup. As I understand
> filegroups (as per Books Online) as well as other internet sources, it may
> be advantageous to create a separate file group for indexes, and even the
> image column. Given our limited resources, and our current set up of a hard
> drive, Raid 5 Array [E Drive], and a single processor, do you have any
> recommendations?
> --
> Message posted via http://www.sqlmonster.com
|||Just wanted to clarify some things you mentioned.
> Separate filegroups can give you better performance. I would separate out
> the data files into a filegroup, non-clustered indexes into their own
> filegroup, and the transaction logs into their own filegroup.
> The advantage to this comes primarily from a system with multiple
> processors
> and multiple hard drives. In a multiple-processor system, SQL Server can
> access tables and associated non-clustered indexes in parallel.
Actually with SQL 2000 you do not need separate files in order for SQL
Server to read them with multiple threads. That was true in SQL 7.0 but no
longer so in 2000. Even with 7.0 you don't need multiple filegroups, just
multiple files. With a smaller db and a single drive array there is no
performance gain by creating separate files. It is more of a maintenance
aspect or if you plan on adding another drive array in the future it may
make things easier.
> filegroup, and the transaction logs into their own filegroup.
Log files don't actually have a filegroup, they are simply individual files.
Andrew J. Kelly SQL MVP
"Michael C" <me@.mine.com> wrote in message
news:eWpFzH0$EHA.3256@.TK2MSFTNGP11.phx.gbl...
> Separate filegroups can give you better performance. I would separate out
> the data files into a filegroup, non-clustered indexes into their own
> filegroup, and the transaction logs into their own filegroup.
> The advantage to this comes primarily from a system with multiple
> processors
> and multiple hard drives. In a multiple-processor system, SQL Server can
> access tables and associated non-clustered indexes in parallel.
> Filegroups
> on separate hard drives (particularly the transaction log) can help reduce
> head thrashing and associated performance degradation.
> If you have all your filegroups on the same physical hard drive, the drive
> head is going to slow you down a bit since it will have to jump from
> reading
> your index to writing data in your table to updating the transaction log
> and
> so on. This may not be as big an issue with a small database though,
> since
> caching will help.
> Depending on the amount of reads to writes you're performing, you might
> look
> at a different RAID configuration. RAID 5 is horrible for transaction
> logs.
> The reason is that RAID 5 takes about twice the number of disk I/Os for
> every write.
> I would recommend, at the very least, that you get your transaction logs
> on
> their own separate physical hard drive (it could be RAID 1 or RAID 1+0,
> but
> RAID 5 is not recommended). If you have the option, I would also get an
> additional hard drive and create a separate filegroup on it for your
> non-clustered indexes.
> Thanks,
> Michael C#, MCDBA
> "Robert Richards via SQLMonster.com" <forum@.SQLMonster.com> wrote in
> message
> news:38366e87041d4bc2a888439965f48bee@.SQLMonster.com...
>> I am trying to understand filegroups under our environment and how it
>> might best be leveraged. We are running SQL 2K.
>> We are redesigning the database and table schemas. For a disk setup we
>> have the hard drive and a Raid 5 array comprising an E drive. We currently
>> have transaction logs on the hard drive, system databases (and subsequently
>> all system tables) on the hard drive, and all our user defined databases
>> (and subsequently all user defined tables) on the E drive.
>> The server under discussion contains only one processor.
>> The user defined database itself is rather small, at about 100MB. It
>> contains a clustered index, one date column index, and three columns
>> separately indexed as foreign keys. Additionally, there is an image column.
>> The table has both, almost equally, inserts and updates, few deletes. It is
>> also read heavily, the clustered index most, followed by the three foreign
>> key indexed columns, and then the least using the date column.
>> We have one filegroup, the default Primary. Currently, our user defined
>> tables and indexes are all on the Primary filegroup. As I understand
>> filegroups (as per Books Online) as well as other internet sources, it may
>> be advantageous to create a separate file group for indexes, and even the
>> image column. Given our limited resources, and our current set up of a hard
>> drive, Raid 5 Array [E Drive], and a single processor, do you have any
>> recommendations?
>> --
>> Message posted via http://www.sqlmonster.com
>|||"Andrew J. Kelly" <sqlmvpnooospam@.shadhawk.com> wrote in message
news:%23BNKsB1$EHA.3472@.TK2MSFTNGP14.phx.gbl...
> Just wanted to clarify some things you mentioned.
>> Separate filegroups can give you better performance. I would separate
>> out
>> the data files into a filegroup, non-clustered indexes into their own
>> filegroup, and the transaction logs into their own filegroup.
>> The advantage to this comes primarily from a system with multiple
>> processors
>> and multiple hard drives. In a multiple-processor system, SQL Server can
>> access tables and associated non-clustered indexes in parallel.
> Actually with SQL 2000 you do not need separate files in order for SQL
> Server to read them with multiple threads. That was true in SQL 7.0 but
> no longer so in 2000. Even with 7.0 you don't need multiple filegroups,
> just multiple files. With a smaller db and a single drive array there is
> no performance gain by creating separate files. It is more of a
> maintenance aspect or if you plan on adding another drive array in the
> future it may make things easier.
>
You're absolutely right. I was typing faster than I was thinking. Sorry
about that. You're absolutely right about the separate drives, and ideally
separate controllers for those drives; I tried to bring that point through.
The physical non-linear movement of the hard drive heads is a big
performance killer, especially when you're mixing the non-linear data read
and write operations with constant jumps for the linear log write operations
on one drive.
Having multiple parallel threads on separate processors doesn't help as much
if your single hard drive is thrashing like crazy. Again, for small
databases this might be alleviated somewhat by caching. But I think it
depends primarily on the ratio of read operations vs. write operations in
your particular database (I think I've seen the 'average' quoted at around
7:1 [reads:writes] somewhere).
>> filegroup, and the transaction logs into their own filegroup.
> Log files don't actually have a filegroup, they are simply individual
> files.
>
Again, typing faster than I was thinking. I was actually on my way out to
the local bar with my co-workers when I decided to try to get an answer out
to the OP on this one. :) Beer on the brain. The log files should,
whenever possible, be on their own hard drive was the point I was trying to
get across.
Thanks for the corrections,
Michael C#, MCDBA
>
> --
> Andrew J. Kelly SQL MVP|||Have one for me too<g>.
--
Andrew J. Kelly SQL MVP
"Michael C#" <xyz@.abcdef.com> wrote in message
news:ly_Hd.8296$mF.6862@.fe08.lga...
> "Andrew J. Kelly" <sqlmvpnooospam@.shadhawk.com> wrote in message
> news:%23BNKsB1$EHA.3472@.TK2MSFTNGP14.phx.gbl...
>> Just wanted to clarify some things you mentioned.
>> Separate filegroups can give you better performance. I would separate
>> out
>> the data files into a filegroup, non-clustered indexes into their own
>> filegroup, and the transaction logs into their own filegroup.
>> The advantage to this comes primarily from a system with multiple
>> processors
>> and multiple hard drives. In a multiple-processor system, SQL Server
>> can
>> access tables and associated non-clustered indexes in parallel.
>> Actually with SQL 2000 you do not need separate files in order for SQL
>> Server to read them with multiple threads. That was true in SQL 7.0 but
>> no longer so in 2000. Even with 7.0 you don't need multiple filegroups,
>> just multiple files. With a smaller db and a single drive array there is
>> no performance gain by creating separate files. It is more of a
>> maintenance aspect or if you plan on adding another drive array in the
>> future it may make things easier.
> You're absolutely right. I was typing faster than I was thinking. Sorry
> about that. You're absolutely right about the separate drives; and
> ideally, separate controllers for those drives. I tried to bring that
> point through. The physical non-linear movement of the hard drive heads is
> a big performance killers, especially when you're mixing the non-linear
> data read and write operations with constant jumps for the linear log
> write operations on one drive.
> Having multiple parallel threads on separate processors doesn't help as
> much if your single hard drive is thrashing like crazy. Again, for small
> databases this might be alleviated somewhat by caching. But I think it
> depends primarily on the ratio of read operations vs. write operations in
> your particular database (I think I've seen the 'average' quoted at around
> 7:1 [reads:writes] somewhere).
>> filegroup, and the transaction logs into their own filegroup.
>> Log files don't actually have a filegroup, they are simply individual
>> files.
> Again, typing faster than I was thinking. I was actually on my way out to
> the local bar with my co-workers when I decided to try to get an answer
> out to the OP on this one. :) Beer on the brain. The log files should,
> whenever possible, be on their own hard drive was the point I was trying
> to get across.
> Thanks for the corrections,
> Michael C#, MCDBA
>>
>> --
>> Andrew J. Kelly SQL MVP
>
> key indexed columns, and then the least using the date column.
> tables and indexes are all on the Primary filegroup. As I understand
> filegroups (as per Books Online) as well as other internet sources, it may
> be advantageous to create a separate file group for indexes, and even the
> image column. Given our limited resources, and our current set up of a
> hard
> drive, Raid 5 Array [E Drive], and a single processor, do you have any
> recommendations?
>|||"Andrew J. Kelly" <sqlmvpnooospam@.shadhawk.com> wrote in message
news:%23BNKsB1$EHA.3472@.TK2MSFTNGP14.phx.gbl...
> Just wanted to clarify some things you mentioned.
>
> Actually with SQL 2000 you do not need separate files in order for SQL
> Server to read them with multiple threads. That was true in SQL 7.0 but
> no longer so in 2000. Even with 7.0 you don't need multiple filegroups,
> just multiple files. With a smaller db and a single drive array there is
> no performance gain by creating separate files. It is more of a
> maintenance aspect or if you plan on adding another drive array in the
> future it may make things easier.
>
You're absolutely right. I was typing faster than I was thinking. Sorry
about that. You're absolutely right about the separate drives; and ideally,
separate controllers for those drives. I tried to bring that point through.
The physical non-linear movement of the hard drive heads is a big
performance killers, especially when you're mixing the non-linear data read
and write operations with constant jumps for the linear log write operations
on one drive.
Having multiple parallel threads on separate processors doesn't help as much
if your single hard drive is thrashing like crazy. Again, for small
databases this might be alleviated somewhat by caching. But I think it
depends primarily on the ratio of read operations vs. write operations in
your particular database (I think I've seen the 'average' quoted at around
7:1 [reads:writes] somewhere).
>
> Log files don't actually have a filegroup, they are simply individual
> files.
>
Again, typing faster than I was thinking. I was actually on my way out to
the local bar with my co-workers when I decided to try to get an answer out
to the OP on this one.

whenever possible, be on their own hard drive was the point I was trying to
get across.
Thanks for the corrections,
Michael C#, MCDBA
>
> --
> Andrew J. Kelly SQL MVP|||Have one for me too<g>.
Andrew J. Kelly SQL MVP
"Michael C#" <xyz@.abcdef.com> wrote in message
news:ly_Hd.8296$mF.6862@.fe08.lga...
> "Andrew J. Kelly" <sqlmvpnooospam@.shadhawk.com> wrote in message
> news:%23BNKsB1$EHA.3472@.TK2MSFTNGP14.phx.gbl...
> You're absolutely right. I was typing faster than I was thinking. Sorry
> about that. You're absolutely right about the separate drives; and
> ideally, separate controllers for those drives. I tried to bring that
> point through. The physical non-linear movement of the hard drive heads is
> a big performance killers, especially when you're mixing the non-linear
> data read and write operations with constant jumps for the linear log
> write operations on one drive.
> Having multiple parallel threads on separate processors doesn't help as
> much if your single hard drive is thrashing like crazy. Again, for small
> databases this might be alleviated somewhat by caching. But I think it
> depends primarily on the ratio of read operations vs. write operations in
> your particular database (I think I've seen the 'average' quoted at around
> 7:1 [reads:writes] somewhere).
>
> Again, typing faster than I was thinking. I was actually on my way out to
> the local bar with my co-workers when I decided to try to get an answer
> out to the OP on this one.

> whenever possible, be on their own hard drive was the point I was trying
> to get across.
> Thanks for the corrections,
> Michael C#, MCDBA
>
>
Filegroup recommendation
We are redesigning the database and table schemas. For a disk setup we have the hard drive and a Raid 5 array comprising an E drive. We currently have transaction logs on the hard drive, system databases (and subsequently all system tables) on the hard drive, and all our user defined databases (and subsequently all user defined tables) on the E drive.
The server under discussion contains only one processor.
The user defined database itself is rather small, at about 100MB. It contains a clustered index, one date column index, and three columns separately indexed as foreign keys. Additionally, there is an image column. The table has both, almost equally, inserts and updates, few deletes. It is also read heavily, the clustered index most, followed by the three foreign key indexed columns, and then the least using the date column.
We have one filegroup, the default Primary. Currently, our user defined tables and indexes are all on the Primary filegroup. As I understand filegroups (as per Books Online) as well as other internet sources, it may be advantageous to create a separate file group for indexes, and even the image column. Given our limited resources, and our current set up of a hard drive, Raid 5 Array [E Drive], and a single processor, do you have any recommendations?
Message posted via http://www.sqlmonster.com
Robert,
We went with placing the system data in the PRIMARY filegroup and created a
DATA filegroup making it the default. This way we separated the user data
from the system data for admin purposes. I know that this is not really what
you had asked for, but it was something we did.
Chris Wood
Alberta Department of Energy
CANADA
"Robert Richards via SQLMonster.com" <forum@.SQLMonster.com> wrote in message
news:38366e87041d4bc2a888439965f48bee@.SQLMonster.c om...
>I am trying to understand filegroups under our environment and how it might
>best be leveraged. We are running SQL 2K.
> We are redesigning the database and table schemas. For a disk setup we
> have the hard drive and a Raid 5 array comprising an E drive. We currently
> have transaction logs on the hard drive, system databases (and
> subsequently all system tables) on the hard drive, and all our user
> defined databases (and subsequently all user defined tables) on the E
> drive.
> The server under discussion contains only one processor.
> The user defined database itself is rather small, at about 100MB. It
> contains a clustered index, one date column index, and three columns
> separately indexed as foreign keys. Additionally, there is an image
> column. The table has both, almost equally, inserts and updates, few
> deletes. It is also read heavily, the clustered index most, followed by
> the three foreign key indexed columns, and then the least using the date
> column.
> We have one filegroup, the default Primary. Currently, our user defined
> tables and indexes are all on the Primary filegroup. As I understand
> filegroups (as per Books Online) as well as other internet sources, it may
> be advantageous to create a separate file group for indexes, and even the
> image column. Given our limited resources, and our current set up of a
> hard drive, Raid 5 Array [E Drive], and a single processor, do you have
> any recommendations?
> --
> Message posted via http://www.sqlmonster.com
|||Separate filegroups can give you better performance. I would separate out
the data files into a filegroup, non-clustered indexes into their own
filegroup, and the transaction logs into their own filegroup.
The advantage to this comes primarily from a system with multiple processors
and multiple hard drives. In a multiple-processor system, SQL Server can
access tables and associated non-clustered indexes in parallel. Filegroups
on separate hard drives (particularly the transaction log) can help reduce
head thrashing and associated performance degradation.
If you have all your filegroups on the same physical hard drive, the drive
head is going to slow you down a bit since it will have to jump from reading
your index to writing data in your table to updating the transaction log and
so on. This may not be as big an issue with a small database though, since
caching will help.
Depending on the amount of reads to writes you're performing, you might look
at a different RAID configuration. RAID 5 is horrible for transaction logs.
The reason is that RAID 5 takes about twice the number of disk I/Os for
every write.
I would recommend, at the very least, that you get your transaction logs on
their own separate physical hard drive (it could be RAID 1 or RAID 1+0, but
RAID 5 is not recommended). If you have the option, I would also get an
additional hard drive and create a separate filegroup on it for your
non-clustered indexes.
Thanks,
Michael C#, MCDBA
"Robert Richards via SQLMonster.com" <forum@.SQLMonster.com> wrote in message
news:38366e87041d4bc2a888439965f48bee@.SQLMonster.c om...
> I am trying to understand filegroups under our environment and how it
might best be leveraged. We are running SQL 2K.
> We are redesigning the database and table schemas. For a disk setup we
have the hard drive and a Raid 5 array comprising an E drive. We currently
have transaction logs on the hard drive, system databases (and subsequently
all system tables) on the hard drive, and all our user defined databases
(and subsequently all user defined tables) on the E drive.
> The server under discussion contains only one processor.
> The user defined database itself is rather small, at about 100MB. It
contains a clustered index, one date column index, and three columns
separately indexed as foreign keys. Additionally, there is an image column.
The table has both, almost equally, inserts and updates, few deletes. It is
also read heavily, the clustered index most, followed by the three foreign
key indexed columns, and then the least using the date column.
> We have one filegroup, the default Primary. Currently, our user defined
tables and indexes are all on the Primary filegroup. As I understand
filegroups (as per Books Online) as well as other internet sources, it may
be advantageous to create a separate file group for indexes, and even the
image column. Given our limited resources, and our current set up of a hard
drive, Raid 5 Array [E Drive], and a single processor, do you have any
recommendations?
> --
> Message posted via http://www.sqlmonster.com
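To make the index-filegroup suggestion concrete, here is a rough T-SQL sketch of adding a filegroup and rebuilding a non-clustered index onto it. The database, file path, table, and index names are all made up for illustration; substitute your own.

```sql
-- Hypothetical names and paths -- adjust for your own database and drives.
ALTER DATABASE MyDb ADD FILEGROUP INDEXES;

ALTER DATABASE MyDb
ADD FILE (NAME = MyDb_Indexes1,
          FILENAME = 'F:\Data\MyDb_Indexes1.ndf',
          SIZE = 50MB, FILEGROWTH = 10MB)
TO FILEGROUP INDEXES;

-- Rebuild an existing non-clustered index onto the new filegroup in one step
-- (DROP_EXISTING avoids a separate drop and re-create).
CREATE NONCLUSTERED INDEX IX_Orders_OrderDate
ON dbo.Orders (OrderDate)
WITH DROP_EXISTING
ON INDEXES;
```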
|||Just wanted to clarify some things you mentioned.
> Separate filegroups can give you better performance. I would separate out
> the data files into a filegroup, non-clustered indexes into their own
> filegroup, and the transaction logs into their own filegroup.
> The advantage to this comes primarily from a system with multiple
> processors
> and multiple hard drives. In a multiple-processor system, SQL Server can
> access tables and associated non-clustered indexes in parallel.
Actually with SQL 2000 you do not need separate files in order for SQL
Server to read them with multiple threads. That was true in SQL 7.0 but no
longer so in 2000. Even with 7.0 you don't need multiple filegroups, just
multiple files. With a smaller db and a single drive array there is no
performance gain by creating separate files. It is more of a maintenance
aspect or if you plan on adding another drive array in the future it may
make things easier.
> filegroup, and the transaction logs into their own filegroup.
Log files don't actually have a filegroup, they are simply individual files.
Andrew J. Kelly SQL MVP
"Michael C" <me@.mine.com> wrote in message
news:eWpFzH0$EHA.3256@.TK2MSFTNGP11.phx.gbl...
> Separate filegroups can give you better performance. I would separate out
> the data files into a filegroup, non-clustered indexes into their own
> filegroup, and the transaction logs into their own filegroup.
> The advantage to this comes primarily from a system with multiple
> processors
> and multiple hard drives. In a multiple-processor system, SQL Server can
> access tables and associated non-clustered indexes in parallel.
> Filegroups
> on separate hard drives (particularly the transaction log) can help reduce
> head thrashing and associated performance degradation.
> If you have all your filegroups on the same physical hard drive, the drive
> head is going to slow you down a bit since it will have to jump from
> reading
> your index to writing data in your table to updating the transaction log
> and
> so on. This may not be as big an issue with a small database though,
> since
> caching will help.
> Depending on the amount of reads to writes you're performing, you might
> look
> at a different RAID configuration. RAID 5 is horrible for transaction
> logs.
> The reason is that RAID 5 takes about twice the number of disk I/Os for
> every write.
> I would recommend, at the very least, that you get your transaction logs
> on
> their own separate physical hard drive (it could be RAID 1 or RAID 1+0,
> but
> RAID 5 is not recommended). If you have the option, I would also get an
> additional hard drive and create a separate filegroup on it for your
> non-clustered indexes.
> Thanks,
> Michael C#, MCDBA
> "Robert Richards via SQLMonster.com" <forum@.SQLMonster.com> wrote in
> message
> news:38366e87041d4bc2a888439965f48bee@.SQLMonster.c om...
> might best be leveraged. We are running SQL 2K.
> have the hard drive and a Raid 5 array comprising an E drive. We currently
> have transaction logs on the hard drive, system databases (and
> subsequently
> all system tables) on the hard drive, and all our user defined databases
> (and subsequently all user defined tables) on the E drive.
> contains a clustered index, one date column index, and three columns
> separately indexed as foreign keys. Additionally, there is an image
> column.
> The table has both, almost equally, inserts and updates, few deletes. It
> is
> also read heavily, the clustered index most, followed by the three foreign
> key indexed columns, and then the least using the date column.
> tables and indexes are all on the Primary filegroup. As I understand
> filegroups (as per Books Online) as well as other internet sources, it may
> be advantageous to create a separate file group for indexes, and even the
> image column. Given our limited resources, and our current set up of a
> hard
> drive, Raid 5 Array [E Drive], and a single processor, do you have any
> recommendations?
>
|||"Andrew J. Kelly" <sqlmvpnooospam@.shadhawk.com> wrote in message
news:%23BNKsB1$EHA.3472@.TK2MSFTNGP14.phx.gbl...
> Just wanted to clarify some things you mentioned.
>
> Actually with SQL 2000 you do not need separate files in order for SQL
> Server to read them with multiple threads. That was true in SQL 7.0 but
> no longer so in 2000. Even with 7.0 you don't need multiple filegroups,
> just multiple files. With a smaller db and a single drive array there is
> no performance gain by creating separate files. It is more of a
> maintenance aspect or if you plan on adding another drive array in the
> future it may make things easier.
>
You're absolutely right. I was typing faster than I was thinking. Sorry
about that. You're absolutely right about the separate drives; and ideally,
separate controllers for those drives. I tried to bring that point through.
The physical non-linear movement of the hard drive heads is a big
performance killer, especially when you're mixing the non-linear data read
and write operations with constant jumps for the linear log write operations
on one drive.
Having multiple parallel threads on separate processors doesn't help as much
if your single hard drive is thrashing like crazy. Again, for small
databases this might be alleviated somewhat by caching. But I think it
depends primarily on the ratio of read operations vs. write operations in
your particular database (I think I've seen the 'average' quoted at around
7:1 [reads:writes] somewhere).
>
> Log files don't actually have a filegroup, they are simply individual
> files.
>
Again, typing faster than I was thinking. I was actually on my way out to
the local bar with my co-workers when I decided to try to get an answer out
to the OP on this one.

That the transaction logs should, whenever possible, be on their own hard
drive was the point I was trying to get across.
Thanks for the corrections,
Michael C#, MCDBA
>
> --
> Andrew J. Kelly SQL MVP
|||Have one for me too<g>.
Andrew J. Kelly SQL MVP
"Michael C#" <xyz@.abcdef.com> wrote in message
news:ly_Hd.8296$mF.6862@.fe08.lga...
> "Andrew J. Kelly" <sqlmvpnooospam@.shadhawk.com> wrote in message
> news:%23BNKsB1$EHA.3472@.TK2MSFTNGP14.phx.gbl...
> You're absolutely right. I was typing faster than I was thinking. Sorry
> about that. You're absolutely right about the separate drives; and
> ideally, separate controllers for those drives. I tried to bring that
> point through. The physical non-linear movement of the hard drive heads is
> a big performance killer, especially when you're mixing the non-linear
> data read and write operations with constant jumps for the linear log
> write operations on one drive.
> Having multiple parallel threads on separate processors doesn't help as
> much if your single hard drive is thrashing like crazy. Again, for small
> databases this might be alleviated somewhat by caching. But I think it
> depends primarily on the ratio of read operations vs. write operations in
> your particular database (I think I've seen the 'average' quoted at around
> 7:1 [reads:writes] somewhere).
>
> Again, typing faster than I was thinking. I was actually on my way out to
> the local bar with my co-workers when I decided to try to get an answer
> out to the OP on this one.

> That the transaction logs should, whenever possible, be on their own hard
> drive was the point I was trying to get across.
> Thanks for the corrections,
> Michael C#, MCDBA
>
>
filegroup full during DBCC
running DBCC CHECKDB repair and wound up filling up the disk - was able
to free up some space, the files are set to grow automatically. The
DBCC is still running - will it eventually bag out, or is it smart
enough to continue now that it has more space?
check to see if it is hung by using sp_who2. Watch to see if it consumes cpu
and disk io between consecutive runs of sp_who2. If cpu or io is not
increasing you should kill it.
Hilary Cotter
Director of Text Mining and Database Strategy
RelevantNOISE.Com - Dedicated to mining blogs for business intelligence.
This posting is my own and doesn't necessarily represent RelevantNoise's
positions, strategies or opinions.
Looking for a SQL Server replication book?
http://www.nwsu.com/0974973602.html
Looking for a FAQ on Indexing Services/SQL FTS
http://www.indexserverfaq.com
<mdevenney1@.gmail.com> wrote in message
news:1163194366.339470.237560@.h54g2000cwb.googlegroups.com...
> running DBCC CHECKDB repair and wound up filling up the disk - was able
> to free up some space, the files are set to grow automatically. The
> DBCC is still running - will it eventually bag out, or is it smart
> enough to continue now that it has more space?
>|||Was it Tempdb that was full? If so you can run DBCC CHECKDB with the
Estimate_only option to see how much space you may require first.
Andrew J. Kelly SQL MVP
<mdevenney1@.gmail.com> wrote in message
news:1163194366.339470.237560@.h54g2000cwb.googlegroups.com...
> running DBCC CHECKDB repair and wound up filling up the disk - was able
> to free up some space, the files are set to grow automatically. The
> DBCC is still running - will it eventually bag out, or is it smart
> enough to continue now that it has more space?
>|||Andrew J. Kelly wrote:[vbcol=seagreen]
> Was it Tempdb that was full? If so you can run DBCC CHECKDB with the
> Estimate_only option to see how much space you may require first.
> --
> Andrew J. Kelly SQL MVP
> <mdevenney1@.gmail.com> wrote in message
> news:1163194366.339470.237560@.h54g2000cwb.googlegroups.com...
thanks Hilary and Andrew for the responses - the DBCC did wind up
finishing and performing the repairs needed without any intervention
once I freed up space. Dumb move on my part not running the
Estimate_only first; lesson learned there. And thanks for the tip-off
on sp_who2 - I can see that coming in real handy. I've been admin'ing
SQL Srvr for about 8 years but it's almost a side job, and my db's have
never really run into problems before so now it's time to ratchet up
the diligence a bit. much obliged.
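The two tips from this thread, as runnable statements (the database name is hypothetical):

```sql
-- Estimate the tempdb space CHECKDB would need, without running the check.
DBCC CHECKDB ('MyDb') WITH ESTIMATEONLY;

-- While a long DBCC runs, watch CPUTime and DiskIO across repeated calls;
-- if neither increases between runs, the spid is likely hung.
EXEC sp_who2;
```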
Friday, March 9, 2012
filegroup auto-growth - know when it happens?
I'm running SQL Server 2000. I am in the process of looking over our current filegroups and providing reports based on file size increases.
One thing I'd like to know is when (date and time) a filegroup last "auto-grew". Is there a way to know when this occurs?
thanks,
Jason
SQL Profiler, Events, Database...
Data File Auto Grow
Data File Auto Shrink
Log File Auto Grow
Log File Auto Shrink
|||Derek,
Thanks for the info. I should have re-worded my question I think:
Is there a way to know the date/time a filegroup last auto-resized? Let's say it auto-resized last Friday, is this timestamp stored somewhere? Profiler would tell me only if I was running it at the time of the resize.
I'm trying to correlate the time of an error with the last time a large filegroup resized.
thanks
Jason
|||No. In general if you do not trace database activity it is lost. SQL Server has a built-in error log but it collects exceptions.|||Thanks for the info. Appreciate the help.
Jason
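As noted above, on SQL 2000 this information is lost unless a trace was running. For what it's worth, SQL Server 2005 and later keep a default trace that records autogrow events, so on those versions a query along these lines can recover the timestamps after the fact:

```sql
-- SQL Server 2005+ only: read autogrow events back out of the default trace.
SELECT te.name AS EventName, t.DatabaseName, t.FileName, t.StartTime
FROM sys.traces st
CROSS APPLY fn_trace_gettable(st.path, DEFAULT) t
JOIN sys.trace_events te ON t.EventClass = te.trace_event_id
WHERE st.is_default = 1
  AND te.name IN ('Data File Auto Grow', 'Log File Auto Grow')
ORDER BY t.StartTime DESC;
```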
file will not attach to SQL reason file is not primary file
I have a customer running RAID 5 on a Windows 2000 server; one of the drives went bad. The customer replaced the drive and the RAID rebuilt it. Everything seemed to be fine, but there is one database file that cannot be attached to SQL. The file is 15G so I know there is information in it. The error states that the file is not a Primary file. Any clue on how to fix this?
mdf file size 5,738,944 KB
ldf file size 10,176 KB
You need the primary file to attach this. I guess your last resort will be your backups|||There's another file out there somewhere - probably another mdf. Make sure you've got them all and the attach should work.|||Thanks, the file looks to be corrupted. The customer did not run any backups, so I guess the information is lost. Thanks again for your help.
Wednesday, March 7, 2012
File System Error: The record ID is incorrect
Hi,
We have deployed SQL Server 2005 with Analysis Services and Reporting Services.
When running a report, we get the following error:
An error has occurred during report processing.
Cannot read the next data row for the data set DM_GASTOS.
File system error:
The record ID is incorrect. Physical file: . Logical file: .
The error occurs both with a local user account and a domain user account. The user account is a member of a role which has a Cell Data Security restriction in Analysis Services. When other members run the report, the error goes away.
Anybody has seen this before? Any solutions?
Cheers
This looks like a data corruption issue. Try fully re-processing your entire database.
Edward Melomed.
--
This posting is provided "AS IS" with no warranties, and confers no rights.
Hi. I'm encountering the same problem. I have re-processed my database but I still get the error message when I generate my report. Any other suggestions?
Thanks
|||I'm having the same problem.
I have the same OLAP data base in 3 servers, and the error is the same.
Any other suggestions?
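For anyone trying Edward's suggestion, a full reprocess can be issued as an XMLA command against the Analysis Services instance (the database ID below is hypothetical):

```xml
<Process xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Object>
    <DatabaseID>MyOlapDb</DatabaseID>
  </Object>
  <Type>ProcessFull</Type>
</Process>
```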
File system error: The background thread running lazy writer encountered an I/O error. Physical
Hi All,
Anyone encountered this kind of error:
File system error: The background thread running lazy writer encountered an I/O error. Physical file: \\?\D:\OLAP_DATA\Project88.6.db\Dim Service Dim Std Sub Key.0.dim\677.Hierarchy 5.Subkey Only.lstore. Logical file: .
I believe the OLAP database was corrupted during dimension update processing, but I don't know what might have caused the problem.
The resolution was to restart Analysis Services and then run a Process Update on the dimension.
Any insight on this will be appreciated.
thanks
Please contact Customer Support with your situation.
Are you running SP1? See if installing it resolves the problem.
Edward.
--
This posting is provided "AS IS" with no warranties, and confers no rights.
Yes, SQL Server 2005 SP1 and cumulative hotfixes were already installed.
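The restart-then-Process-Update workaround described above can be scripted as XMLA rather than done by hand. A sketch only: the DatabaseID and DimensionID below are assumptions inferred from the file path in the error message, not confirmed values.

```xml
<!-- Sketch: Process Update on the affected dimension via XMLA.
     DatabaseID/DimensionID are guesses based on the logged path
     (...\Project88.6.db\Dim Service Dim Std Sub Key.0.dim\...). -->
<Process xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Type>ProcessUpdate</Type>
  <Object>
    <DatabaseID>Project88</DatabaseID>
    <DimensionID>Dim Service Dim Std Sub Key</DimensionID>
  </Object>
</Process>
```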
Sunday, February 26, 2012
File Size - Probably missing something basic...
We have one main data file, and I would like to track how fast it grows.
In the Task Pad of SQL Server it gives the file size as X
By running "select * from sysfiles" I get Y,
But by looking at the actual file size I get Z.
Can anyone tell me whats happening ?
J
I'm not sure what you mean by the taskpad of SQL Server, but here goes.
The sysfiles table shows the size of your database in 8 KB data pages, so you
need to multiply that number by 8.
For example: select * from sysfiles (while in my master database) reveals
the following two files.
size: 1464
size: 96
If I multiply those out as: SELECT ((1464 * 8) + (96 * 8)) you get about 12
MBs
Now if I go out to my: \Program Files\Microsoft SQL Server\MSSQL\Data
My master.mdf file is 11.4 MB
and the ldf file is .7 MB
That makes just over 12 MB... So these are the same.
If I right-click the database in Enterprise Manager and choose properties, I
get just over 12 MB's...
So, these are all the same size as far as I can see...
HTH
Rick Sawtell
MCT, MCSD, MCDBA
"Jimbo" <anonymous@.discussions.microsoft.com> wrote in message
news:4cb201c490f4$f09a23b0$a501280a@.phx.gbl...
> We have one main data file, and I would like to track how
> fast it grows.
> In the Task Pad of SQL Server it gives the file size as X
> By running "select * from sysfiles" I get Y,
> But by looking at the actual file size I get Z.
> Can anyone tell me whats happening ?
> J
|||Thanks for that.
What I mean by taskpad is if you go to SQL Server, select
the database, then View -> Taskpad it shows the size of
the file.
J
>--Original Message--
>I'm not sure what you mean by the taskpad of SQL Server.. but here goes.
>The sysfiles table shows the size of your database is 8k data pages. So you
>need to multiply that number by 8.
>For example: select * from sysfiles (while in my master database) reveals
>the following two files.
>size: 1464
>size: 96
>If I multiply those out as: SELECT ((1464 * 8) + (96 * 8)) you get about 12
>MBs
>
>Now if I go out to my: \Program Files\Microsoft SQL Server\MSSQL\Data
>My master.mdb file is 11.4 MB
>and the ldf file is .7 MB
>That makes just over 12 MB... So these are the same.
>If I right-click the database in Enterprise Manager and choose properties, I
>get just over 12 MB's...
>
>So, these are all the same size as far as I can see...
>
>HTH
>Rick Sawtell
>MCT, MCSD, MCDBA
>
>"Jimbo" <anonymous@.discussions.microsoft.com> wrote in message
>news:4cb201c490f4$f09a23b0$a501280a@.phx.gbl...
>> We have one main data file, and I would like to track how
>> fast it grows.
>> In the Task Pad of SQL Server it gives the file size as X
>> By running "select * from sysfiles" I get Y,
>> But by looking at the actual file size I get Z.
>> Can anyone tell me whats happening ?
>> J
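Rick's page arithmetic can also be done in the query itself instead of by hand. A sketch, assuming a SQL Server 2000-era instance where sysfiles is available (it reports size in 8 KB pages, so dividing by 128 gives MB):

```sql
-- Size of each file in the current database, in MB.
-- sysfiles stores size in 8 KB pages: pages * 8 / 1024 = pages / 128.
SELECT name,
       filename,
       size / 128.0 AS size_mb
FROM sysfiles
```

On SQL Server 2005 and later, the equivalent catalog view is sys.database_files, with the same 8 KB-page convention for the size column.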