Showing posts with label error. Show all posts

Thursday, March 29, 2012

Filter Error: "...processing of filter expression...cannot be performed"

My results return a tinyint column whose value is 0, 1, or 2.
When adding a filter to the matrix it seems OK:
blah.value = 2
but when running the report I get the following error:
--error--
An error has occurred during report processing.
The processing of filter expression for the matrix 'matrix1' cannot be
performed. The comparison failed. Please check the data type returned by the
filter expression.
--enderror--
I have also tried
blah.value = "2"
with no success.
Anyone know why?

A-ha! This works:
Expression
=CInt(Fields!salesGroup.Value)
Operator
=
Value
=2
Frankly this is crap: I am returning a number and comparing it to a number.
- Why should I have to convert a number to, er, a number?
- Why do I have to put an equals sign in front of the value?
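An alternative sketch, assuming the dataset query is T-SQL and the column really is named salesGroup (a guess based on the expression above): cast the tinyint up to int in the dataset itself, so the report-side filter compares int to int and no CInt() expression is needed.

```sql
-- Hypothetical dataset query: dbo.Sales and SalesAmount are placeholders.
-- Casting the tinyint column to int in the query means the report filter
-- (blah.value = 2) compares matching numeric types.
SELECT CAST(salesGroup AS int) AS salesGroup,
       SalesAmount
FROM dbo.Sales;
```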

Filter Error

I have no filters in place but I keep getting this error on a couple of reports.

  • An error has occurred during report processing.
  • The processing of SortExpression for the table ‘table2’ cannot be performed. The comparison failed. Please check the data type returned by the SortExpression.

    What is table2?

    Have you configured interactive sorting on a text field?
  • Friday, March 23, 2012

    FileShareProvider delivery extension credential encoding

    I receive the following error when I try to provide a username and password
    to the FileShareProvider delivery extension through the data-driven
    subscription query:
    ReportingServicesService!dbpolling!5d8!07/14/2004-17:20:23::
    NotificationPolling processing item 951ca046-a4a7-4657-be85-04512f3e8787
    ReportingServicesService!crypto!5d8!07/14/2004-17:20:23:: e ERROR:
    DBUnProtectData failed: System.FormatException: Invalid character in a
    Base-64 string.
    at System.Convert.FromBase64String(String s)
    at
    Microsoft.ReportingServices.Diagnostics.DataProtection.DBUnprotectData(Strin
    g data)
    ReportingServicesService!crypto!5d8!07/14/2004-17:20:23:: i INFO: Current
    user: secret
    ReportingServicesService!notification!5d8!07/14/2004-17:20:23:: e ERROR:
    Error occured processing notification. Invalid character in a Base-64
    string.
    ReportingServicesService!library!5d8!07/14/2004-17:20:23:: Data Driven
    Notification for activation id c0eac265-fe46-4de5-9b13-4745be82bd25 was
    saved.
    ReportingServicesService!library!5d8!07/14/2004-17:20:23:: Status: Invalid
    character in a Base-64 string.
    ReportingServicesService!notification!5d8!07/14/2004-17:20:23:: Notification
    951ca046-a4a7-4657-be85-04512f3e8787 completed. Success: True, Status:
    Invalid character in a Base-64 string., DeliveryExtension: Report Server
    FileShare, Report: Invoice, Attempt 0
    ReportingServicesService!dbpolling!5d8!07/14/2004-17:20:24::
    NotificationPolling finished processing item
    951ca046-a4a7-4657-be85-04512f3e8787
    It seems that the extension is trying to decode the username and password
    from the query, but can't since they aren't encoded. If I hard-code the
    username and password values in the subscription itself, everything works
    fine.
    I tried to use
    Microsoft.ReportingServices.Diagnostics.DataProtection.DBProtectData to
    encode the username and password before storing them in the database, but am
    getting the following error:
    System.Security.SecurityException: Request for the permission of type
    System.Security.Permissions.StrongNameIdentityPermission, mscorlib,
    Version=1.0.5000.0, Culture=neutral, PublicKeyToken=b77a5c561934e089
    failed..
    What do I need to do to make the extension happy?
    Jay

    It's a known issue: the user name and password are not supported by the
    file share delivery extension. Can you make them not data-driven, i.e.
    constant?
    --
    Tudor Trufinescu
    Dev Lead
    Sql Server Reporting Services
    This posting is provided "AS IS" with no warranties, and confers no rights.

    Fileshare subscription error

    My report subscription got the following error. Before the subscription was
    run, I changed the Reporting Service's Windows Service logon account from
    local system account to a local user with admin rights.
    I'm using developer editions for sql server and reporting service, running
    everything from one single development machine of XP Pro.
    I'd appreciate any help.
    jf
    ************ log info *********************************
    ReportingServicesService!crypto!1600!01/11/2005-17:27:03:: e ERROR:
    DBUnProtectData failed: System.Runtime.InteropServices.COMException
    (0x80090005): Bad Data.
    at System.Runtime.InteropServices.Marshal.ThrowExceptionForHR(Int32
    errorCode, IntPtr errorInfo)
    at RSManagedCrypto.RSCrypto.DecryptData(Byte[] pCipherText)
    at
    Microsoft.ReportingServices.Diagnostics.DataProtection.DBUnprotectData(String
    data)
    ReportingServicesService!crypto!1600!01/11/2005-17:27:03:: i INFO: Current
    user: jf\john
    ReportingServicesService!notification!1600!01/11/2005-17:27:03:: e ERROR:
    Error occured processing notification. Bad Data.
    ReportingServicesService!notification!1600!01/11/2005-17:27:03::
    Notification 882b52f3-40de-486e-a7ef-4e72e6efd6ea completed. Success: True,
    Status: Bad Data., DeliveryExtension: Report Server FileShare, Report:
    NWCustomers, Attempt 0
    ReportingServicesService!dbpolling!1600!01/11/2005-17:27:03::
    NotificationPolling finished processing item
    882b52f3-40de-486e-a7ef-4e72e6efd6ea
    ***************** end of log info ***************************

    Hi, you can see the Reporting Services documentation (BOL) about changing
    the service account using rsconfig.exe.
    The problem is that the credentials must be changed through rsconfig.exe,
    which updates the encrypted configuration; do not edit the configuration
    file directly.
    This is the command:
    RSConfig -c -s <server name> -d <Reporting Services database name>
        -a Sql|Windows -u <user name> -p <user password>
    Best Regards
    John Bocachica
    Colombia

    Wednesday, March 21, 2012

    FileIOPermission

    I got my custom assembly working fine until I added writing to a log
    file. Then I got a FileIOPermission error.
    I have the following code in my policy file:
    <PermissionSet
        class="NamedPermissionSet"
        version="1"
        Name="ReportHelperFilePermissionSet"
        Description="A special permission set that grants read access to my currency rates file.">
      <IPermission
          class="FileIOPermission"
          version="1"
          All="C:\ReportHelper.log"/>
      <IPermission
          class="SecurityPermission"
          version="1"
          Flags="Execution, Assertion"/>
    </PermissionSet>
    <CodeGroup
        class="UnionCodeGroup"
        version="1"
        PermissionSetName="ReportHelperFilePermissionSet"
        Name="MyNewCodeGroup"
        Description="A special code group for my custom assembly.">
      <IMembershipCondition
          class="UrlMembershipCondition"
          version="1"
          Url="D:\Program Files\Microsoft SQL Server\MSSQL\Reporting Services\ReportServer\bin\ReportHelper.dll"/>
    </CodeGroup>
    <CodeGroup
        class="UnionCodeGroup"
        version="1"
        PermissionSetName="FullTrust"
        Name="AWCLibrary">
      <IMembershipCondition
          class="UrlMembershipCondition"
          version="1"
          Url="D:\Program Files\Microsoft SQL Server\MSSQL\Reporting Services\ReportServer\bin\ReportHelper.dll"/>
    </CodeGroup>
    I write to my log file as follows:
    public static void WriteLogFile(String msg)
    {
        // Assert the write permission granted by the policy file.
        FileIOPermission perm1 = new FileIOPermission(
            FileIOPermissionAccess.Write, @"C:\ReportHelper.log");
        perm1.Assert();
        FileStream fs = new FileStream(@"C:\ReportHelper.log",
            FileMode.OpenOrCreate, FileAccess.ReadWrite);
        StreamWriter w = new StreamWriter(fs);
        w.BaseStream.Seek(0, SeekOrigin.End);
        w.Write("{0} {1} ", DateTime.Now.ToLongTimeString(),
            DateTime.Now.ToLongDateString());
        w.Write(msg + "\r\n");
        w.Flush();
        w.Close();
    }
    What am I missing here? Any help would be appreciated.
    Thanks!

    Check this:
    http://www.c-sharpcorner.com/Code/2005/June/CustomAssemblyinRS.asp
    HTH
    If still you face the issue, let me know (bkkrishnan [at] hotmail [dot] com)
    Thanks
    Balaji
    Message posted via SQLMonster.com
    http://www.sqlmonster.com/Uwe/Forums.aspx/sql-server-reporting/200507/1

    Hi Balaji,
    I followed the article and it still doesn't work. I added [assembly:
    AllowPartiallyTrustedCallers] to my assembly and removed the extra
    PermissionSet and CodeGroup sections from my policy file (I left the
    "FullTrust" one), and I still get the FileIOPermission error.
    Thanks,
    Kris

    Monday, March 19, 2012

    Filegroup with no file, not possible to remove it!

    Hi All,
    I have a problem with one of my filegroups. Every file has been removed,
    but when I try to remove the filegroup, I get the following error:
    Msg 5042, Level 16, State 8, Line 1
    The filegroup 'TEXT' cannot be removed because it is not empty.
    When I run the following query, to see which filegroups have no
    allocation units:
    SELECT * FROM sys.filegroups f
    WHERE NOT EXISTS (SELECT * FROM sys.allocation_units i
                      WHERE i.data_space_id = f.data_space_id)
    The filegroup named 'TEXT' is returned.
    Does anyone have an idea to know why I cannot remove this filegroup?
    Regards,
    Jeremy

    As a sanity check, verify that there are no remaining files in the
    filegroup:
    EXEC sp_helpfilegroup 'TEXT'
    or
    SELECT *
    FROM sys.data_spaces AS ds
    JOIN sys.database_files AS df ON
    df.data_space_id = ds.data_space_id
    WHERE ds.NAME = 'TEXT'
    --
    Hope this helps.
    Dan Guzman
    SQL Server MVP
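    A complementary check, sketched from the catalog views, that names any objects still holding allocation units (in-row, LOB, or row-overflow) on the filegroup:

```sql
-- Map each allocation unit back to its partition: types 1 and 3
-- (IN_ROW_DATA, ROW_OVERFLOW_DATA) join on hobt_id; type 2 (LOB_DATA)
-- joins on partition_id.
SELECT OBJECT_NAME(p.object_id) AS object_name,
       au.type_desc
FROM sys.allocation_units AS au
JOIN sys.partitions AS p
  ON au.container_id = CASE WHEN au.type IN (1, 3)
                            THEN p.hobt_id
                            ELSE p.partition_id END
WHERE au.data_space_id = FILEGROUP_ID('TEXT');
```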
    Hi Dan,
    Thanks for your message, but I have already checked that no files are
    assigned to this filegroup. And of course, there are no files.
    So this is why it seems really strange to me.
    I don't know what it could be.
    Regards,
    Jeremy

    Jeremy,
    The DBCC SHRINKFILE command has an EMPTYFILE parameter. Specify a file
    that is on the filegroup and run it.
    Hi Uri,
    As I said in my first post, there are no files in this filegroup.
    I have removed all files using DBCC SHRINKFILE.
    When I execute "EXEC sp_helpfilegroup 'TEXT'" I get no result.
    This is why I posted in this newsgroup: a filegroup without any files
    attached still cannot be removed.
    Regards,
    Jeremy

    Sorry, I did not read it properly. How did you try to remove it? By EM or
    by ALTER DATABASE... in QA?
    You're welcome, Uri.
    I'm doing ALTER DATABASE XXX REMOVE FILEGROUP [TEXT], which has worked for
    many filegroups before, and I get the following error:
    Msg 5042, Level 16, State 8, Line 1
    The filegroup 'TEXT' cannot be removed because it is not empty.
    Regards,
    Jeremy

    I haven't seen this problem before. Can you post the exact steps you
    performed to remove the files?
    --
    Hope this helps.
    Dan Guzman
    SQL Server MVP
    Hi Dan,
    I have done the following:
    - First, extract the names of the files attached to the [TEXT] filegroup:
    SELECT name FROM sys.database_files
    WHERE data_space_id = FILEGROUP_ID('TEXT')
    - For each file, I have done:
    DBCC SHRINKFILE(XXXXXXX_01, EMPTYFILE)
    ALTER DATABASE XXX REMOVE FILE XXXXXXX_01
    - When all files have been removed, I do the following on the filegroup:
    ALTER DATABASE XXX REMOVE FILEGROUP [Text]
    I have done this many times, and it used to work on all filegroups, but
    not on this one.
    Jeremy

    How did you move existing objects off the filegroup? I just want to be
    sure I'm not missing something.
    --
    Hope this helps.
    Dan Guzman
    SQL Server MVP
    I have dropped a lot of objects, but for the one I needed to move, I used
    the syntax below:
    CREATE CLUSTERED INDEX [CIdxTemp] ON [dbo].[YYYY]
    (
    [PK] ASC
    )WITH (DROP_EXISTING = ON) ON [OtherFilegroup]
    Regards,
    Jeremy

    With no files in the filegroup, the error message is obviously erroneous.
    Unless someone else jumps in, I suggest you open a support case.
    --
    Hope this helps.
    Dan Guzman
    SQL Server MVP
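    One further check worth sketching here (sys.tables.lob_data_space_id is available from SQL Server 2005 on): a table created with TEXTIMAGE_ON 'TEXT' can keep its LOB data space pointed at that filegroup even after the data is gone, which can leave the filegroup "not empty".

```sql
-- Tables whose text/image (LOB) data space is still bound to the
-- filegroup; such a binding can block ALTER DATABASE ... REMOVE FILEGROUP.
-- If this returns rows, rebuild or recreate those tables with their LOB
-- data on another filegroup before dropping 'TEXT'.
SELECT t.name AS table_name
FROM sys.tables AS t
WHERE t.lob_data_space_id = FILEGROUP_ID('TEXT');
```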

    Monday, March 12, 2012

    filegroup question

    I apologize if this shows up twice as the first time I sent it said there was
    an error. An article snippet:
    If your database is very large and very busy, multiple files can be used to
    increase performance. Here is one example of how you might use multiple
    files. Let's say you have a single table with 10 million rows that is heavily
    queried. If the table is in a single file, such as a single database file,
    then SQL Server would only use one thread to perform a read of the rows in
    the table. But if the table were divided into three physical files (all part
    of the same filegroup), then SQL Server would use three threads (one per
    physical file) to read the table, which potentially could be faster. In
    addition, if each file were on its own separate physical disk or disk array,
    the performance gain would even be greater.
    Is this true from your experience? If so, why not just split the whole db
    into multiple files on the same filegroup?

    There are other factors to consider.
    How many physical disk drives and disk controllers do you have, and how
    many threads does each controller allow?
    If you have too many reads occurring, you may end up with some disk
    thrashing as the switches occur.
    Are you set up with a RAID array, and if so, how is that RAID array
    handling things?
    From personal experience, I have found that the best of all worlds for small
    to medium size databases (30GB or less) appears to be the following:
    SQL Server and core files installed on the Root Drive.
    Transaction logs on RAID 1 drives
    Database on RAID 5 drives
    Place very heavily used tables in their own filegroup.
    I'm sure others have some great input to this question as well.
    Rick Sawtell
    MCT, MCSD, MCDBA
    ChrisR wrote: <snip>
    > But if the table were divided into three physical files (all part
    > of the same filegroup), then SQL Server would use three threads (one per
    > physical file) to read the table, which potentially could be faster.
    Well, multiple threads won't help much if the bottleneck is the I/O on a
    particular disc. Remember that I/O is orders of magnitude slower than RAM
    or context switching.
    Since I primarily use OLTP type applications I have never bothered to
    examine the potential 'gain'. In my experience the important part is to
    activate as many physical discs as possible. This can be done with a
    RAID setting, or by placing one or several files on each disc and
    assigning them to individual or shared filegroups. In my experience the
    simplest advice is to stripe and mirror everything. Unless you want to
    do very specific database tuning every once in a while, this is probably
    the best generic advice.
    HTH,
    Gert-Jan
    My understanding is that this was true under SQL 6.5, but it is no longer
    true in SQL 2000 (which I believe does support multiple threads per
    physical file).

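    The layout discussed above can be sketched in T-SQL (database name, file paths, and table name are all hypothetical): one filegroup spread across several files on separate physical disks, with a heavily used table placed on it.

```sql
-- Hypothetical names and paths: three files on different spindles in one
-- filegroup, so scans of tables placed there can be striped across disks.
ALTER DATABASE Sales ADD FILEGROUP BigTables;

ALTER DATABASE Sales ADD FILE
    (NAME = BigTables1, FILENAME = 'D:\Data\BigTables1.ndf'),
    (NAME = BigTables2, FILENAME = 'E:\Data\BigTables2.ndf'),
    (NAME = BigTables3, FILENAME = 'F:\Data\BigTables3.ndf')
TO FILEGROUP BigTables;

-- Move a heavily used table onto it by rebuilding its clustered index:
CREATE CLUSTERED INDEX CIX_BigTable ON dbo.BigTable (Id)
WITH (DROP_EXISTING = ON) ON BigTables;
```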

    filegroup question

    I apologize if this shows up twice as the first time I sent it said there wa
    s
    an error. An article snippet:
    If your database is very large and very busy, multiple files can be used to
    increase performance. Here is one example of how you might use multiple
    files. Let's say you have a single table with 10 million rows that is heavil
    y
    queried. If the table is in a single file, such as a single database file,
    then SQL Server would only use one thread to perform a read of the rows in
    the table. But if the table were divided into three physical files (all part
    of the same filegroup), then SQL Server would use three threads (one per
    physical file) to read the table, which potentially could be faster. In
    addition, if each file were on its own separate physical disk or disk array,
    the performance gain would even be greater.
    Is this true form your experiences? If so, why not just split the whole db
    into multiple files on the same filegroup?There are other factors to consider.
    How many physical disk drives do you have and disk controllers and how many
    threads does each controller allow?
    If you have too many reads occurring, then you may end up with some disk
    thrashing as the switches occur.
    Are you set up with a RAID array and if so, how is that RAID array handling
    things.
    From personal experience, I have found that the best of all worlds for small
    to medium size databases (30GB or less) appears to be the following:
    SQL Server and core files installed on the Root Drive.
    Transaction logs on RAID 1 drives
    Database on RAID 5 drives
    Place very heavily used tables in their own filegroup.
    I'm sure others have some great input to this question as well.
    Rick Sawtell
    MCT, MCSD, MCDBA
    "ChrisR" <ChrisR@.discussions.microsoft.com> wrote in message
    news:169D42A6-339E-47CA-B0A4-23793C2AF884@.microsoft.com...
    > I apologize if this shows up twice as the first time I sent it said there
    was
    > an error. An article snippet:
    > If your database is very large and very busy, multiple files can be used
    to
    > increase performance. Here is one example of how you might use multiple
    > files. Let's say you have a single table with 10 million rows that is
    heavily
    > queried. If the table is in a single file, such as a single database file,
    > then SQL Server would only use one thread to perform a read of the rows in
    > the table. But if the table were divided into three physical files (all
    part
    > of the same filegroup), then SQL Server would use three threads (one per
    > physical file) to read the table, which potentially could be faster. In
    > addition, if each file were on its own separate physical disk or disk
    array,
    > the performance gain would even be greater.
    >
    > Is this true from your experiences? If so, why not just split the whole db
    > into multiple files on the same filegroup?|||ChrisR wrote: <snip>
    > But if the table were divided into three physical files (all part
    > of the same filegroup), then SQL Server would use three threads (one per
    > physical file) to read the table, which potentially could be faster.
    Well, multiple threads won't help much if the bottleneck is the I/O on a
    particular disc. Remember that I/O is magnitudes slower than RAM or
    context switching.
    Since I primarily use OLTP type applications I have never bothered to
    examine the potential 'gain'. In my experience the important part is to
    activate as many physical discs as possible. This can be done with a
    RAID setting, or by placing one or several files on each disc and
    assigning them to individual or shared filegroups. In my experience the
    simplest advice is to stripe and mirror everything. Unless you want to
    do very specific database tuning every once in a while, this is probably
    the best generic advice.
    HTH,
    Gert-Jan|||"ChrisR" <ChrisR@.discussions.microsoft.com> wrote in message
    news:169D42A6-339E-47CA-B0A4-23793C2AF884@.microsoft.com...
    > I apologize if this shows up twice as the first time I sent it said there
    was
    > an error. An article snippet:
    > If your database is very large and very busy, multiple files can be used
    to
    > increase performance. Here is one example of how you might use multiple
    > files. Let's say you have a single table with 10 million rows that is
    heavily
    > queried. If the table is in a single file, such as a single database file,
    > then SQL Server would only use one thread to perform a read of the rows in
    > the table. But if the table were divided into three physical files (all
    part
    > of the same filegroup), then SQL Server would use three threads (one per
    > physical file) to read the table, which potentially could be faster. In
    > addition, if each file were on its own separate physical disk or disk
    array,
    > the performance gain would even be greater.
    My understanding is that this was true under SQL 6.5, but is no longer true
    in SQL 2000 (which I believe does support multiple treads per physical
    file.)

    >
    > Is this true from your experiences? If so, why not just split the whole db
    > into multiple files on the same filegroup?
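    For reference, the multi-file layout the article snippet describes can be set up with ALTER DATABASE. This is only a sketch: the database name, logical file names, paths, and sizes below are made-up placeholders, not anything from the thread.

    ```sql
    -- Sketch: spread one filegroup across three physical files on
    -- separate drives. All names, paths, and sizes are placeholders.
    ALTER DATABASE SalesDB ADD FILEGROUP DataFG;

    ALTER DATABASE SalesDB ADD FILE
        (NAME = SalesData1, FILENAME = 'D:\Data\SalesData1.ndf', SIZE = 1GB),
        (NAME = SalesData2, FILENAME = 'E:\Data\SalesData2.ndf', SIZE = 1GB),
        (NAME = SalesData3, FILENAME = 'F:\Data\SalesData3.ndf', SIZE = 1GB)
    TO FILEGROUP DataFG;

    -- A table created on the filegroup is striped across all three files:
    CREATE TABLE dbo.BigSalesTable (ID int NOT NULL) ON DataFG;
    ```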

    Filegroup is full error

    Hello,
    This is a server post even though I mention Analysis Services!
    I am getting an error as follows when I try to process a cube:
    Data source provider error: Could not allocate space for object '(SYSTEM
    table id: -76163232)' in database 'TEMPDB' because the 'DEFAULT' filegroup is
    full.;42000; Time:16/11/2004 7:05:34 PM
    It looks like a SQL Server problem rather than AS.
    The TEMPDB data file and transaction log are set on automatically grow and
    unrestricted growth, and there is plenty of overall disk space (it's on a
    RAID system).
    Any ideas as to what the problem might be? Does the number 42000 mean
    anything? A bad stripe has been suggested (i.e. hardware). I am going to
    try adding another file to the PRIMARY filegroup but beyond that I am a bit
    lost.
    Hope someone can help.
    Les
    Sometimes it cannot grow fast enough. Don't rely on autogrow if you know
    you need more space. Manually grow the file(s) in that filegroup and try
    again.
    --
    Andrew J. Kelly SQL MVP
    "Les Russell" <LesRussell@.discussions.microsoft.com> wrote in message
    news:8157C0BC-32E4-4C9C-85AF-50D8683A829B@.microsoft.com...
    > Hello,
    > This is a server post even though I mention Analysis Services!
    > I am getting an error as follows when I try to process a cube:
    > Data source provider error: Could not allocate space for object '(SYSTEM
    > table id: -76163232)' in database 'TEMPDB' because the 'DEFAULT' filegroup
    > is
    > full.;42000; Time:16/11/2004 7:05:34 PM
    > It looks like a SQL Server problem rather than AS.
    > The TEMPDB data file and transaction log are set on automatically grow and
    > unrestricted growth, and there is plenty of overall disk space (it's on a
    > RAID system).
    > Any ideas as to what the problem might be? Does the number 42000 mean
    > anything? A bad stripe has been suggested (i.e. hardware). I am going to
    > try adding another file to the PRIMARY filegroup but beyond that I am a
    > bit
    > lost.
    > Hope someone can help.
    > Les|||I had the same thing a while back, if I'm not mistaken. What it came out to be
    was that tempdb couldn't grow fast enough to keep up with the demands. I
    increased the size to 100 megs and haven't had the problem since.
    "Les Russell" <LesRussell@.discussions.microsoft.com> wrote in message
    news:8157C0BC-32E4-4C9C-85AF-50D8683A829B@.microsoft.com...
    > Hello,
    > This is a server post even though I mention Analysis Services!
    > I am getting an error as follows when I try to process a cube:
    > Data source provider error: Could not allocate space for object '(SYSTEM
    > table id: -76163232)' in database 'TEMPDB' because the 'DEFAULT' filegroup
    is
    > full.;42000; Time:16/11/2004 7:05:34 PM
    > It looks like a SQL Server problem rather than AS.
    > The TEMPDB data file and transaction log are set on automatically grow and
    > unrestricted growth, and there is plenty of overall disk space (it's on a
    > RAID system).
    > Any ideas as to what the problem might be? Does the number 42000 mean
    > anything? A bad stripe has been suggested (i.e. hardware). I am going to
    > try adding another file to the PRIMARY filegroup but beyond that I am a
    bit
    > lost.
    > Hope someone can help.
    > Les|||Andrew,
    Thanks to you and ChrisR. I think you are right.
    I have redesigned the cube so that it is smaller, and it is OK now, but I
    will check up how to grow the file manually for future reference.
    Les
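    The manual-grow advice above can be done with ALTER DATABASE ... MODIFY FILE. The 1024 MB target below is an arbitrary example for illustration, not a recommendation.

    ```sql
    -- Pre-size the tempdb data file so processing never waits on autogrow.
    -- 'tempdev' is the default logical name of the tempdb data file;
    -- the target size is an arbitrary example.
    ALTER DATABASE tempdb
    MODIFY FILE (NAME = tempdev, SIZE = 1024MB);

    -- Check the result (sizes are reported by sp_helpfile):
    EXEC tempdb..sp_helpfile;
    ```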


    Filegroup is Full - During Import Data

    Hi,

    I am getting below error while importing data in SQL 2005 Express:

    "error 0xc0202009: Data Flow Task: SSIS Error Code DTS_E_OLEDBERROR. An OLE DB error has occurred. Error code: 0x80004005.
    An OLE DB record is available. Source: "Microsoft SQL Native Client" Hresult: 0x80004005 Description: "Could not allocate space for object 'dbo.HistoryLog'.'PK_HistoryLog' in database 'HistoryData' because the 'PRIMARY' filegroup is full. Create disk space by deleting unneeded files, dropping objects in the filegroup, adding additional files to the filegroup, or setting autogrowth on for existing files in the filegroup.".
    "

    I have select :

    Enable Autogrowth = Yes

    Filegrowth = 1 MB

    Maximum File Size = Unrestricted File growth

    I don't know what else I am missing.

    Please help

    thanks

    AA

    How much empty space was on the disk when you received the error message?

    Check the Log files.

    |||

    there is only one Log file by the name of "HistoryData_log"

    Initial size is: 1,747

    Enable Autogrowth = Yes

    Filegrowth = 10%

    Maximum File Size = Unrestricted File growth

    Empty Space on C: is 149 GB

    Total database Size : 5843.00 MB

    Space Available : 128.21 MB

    Please note:

    I just noticed when I was increasing initial database size:

    "Create Database or Alter Database failed because the resulting cumulative database size would exceed your licence limit of 4096 MB Per Database"

    Please advise, is this a licensing issue?

    Thanks

    |||

    I meant to look in the computer application logs. However, you 'stumbled' upon an explanation.

    Yep, You have SQL Server Express, and SQL Server Express is limited to a maximum 4 GB database size.

    |||

    Is there any way to increase it?

    Please advise which version is good for unlimited database space?

    Thanks for your help

    Amir

    |||

    Any Edition you purchase, from Workgroup to Enterprise, has 'unlimited' database size.

    Check here for the best Edition for your needs.

    SQL Server 2005 Features, Version Comparison
    http://www.microsoft.com/sql/prodinfo/features/compare-features.mspx

    |||Thanks for the help, Arnie.
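    For anyone hitting the same wall, the data-file total that counts toward the Express limit can be checked before an import. This is a sketch using the HistoryData database named in the thread.

    ```sql
    -- List the data-file sizes; log files do not count toward the
    -- SQL Server 2005 Express 4 GB per-database limit.
    USE HistoryData;

    SELECT name,
           size / 128 AS size_mb   -- size is stored in 8 KB pages
    FROM   sys.database_files
    WHERE  type_desc = 'ROWS';
    ```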

    Filegroup is full

    Hi.
    I've a msde database and I'm getting the following error every minute
    Error 1105, Severity: 17, ErrState: 2
    Could not allocate space for object 'ProductSettings' in database
    'epo_Srv-ms-avd' because the 'Primary' filegroup is full
    I've cleaned up the database but the error keeps on.
    I need urgent help!!!
    Thank you very much
    Hi Eduardo,
    How big is the database? Any chance it's up to 2G (ie the limit)?
    HTH,
    Greg Low [MVP]
    MSDE Manager SQL Tools
    www.whitebearconsulting.com
    "Eduardo Crespo" <crespo_santamaria@.NOSPAMhotmail.com> wrote in message
    news:OoFIRcmuEHA.1992@.TK2MSFTNGP10.phx.gbl...
    > Hi.
    > I've a msde database and I'm getting the following error every minute
    > Error 1105, Severity: 17, ErrState: 2
    > Could not allocate space for object 'ProductSettings' in database
    > 'epo_Srv-ms-avd' because the 'Primary' filegroup is full
    > I've cleanup the database but the error keeps on.
    > I need urgent help!!!
    > Thank you very much
    >
    >
    |||hi Eduardo,
    "Eduardo Crespo" <crespo_santamaria@.NOSPAMhotmail.com> wrote in
    message news:OoFIRcmuEHA.1992@.TK2MSFTNGP10.phx.gbl
    > Hi.
    > I've a msde database and I'm getting the following error every minute
    > Error 1105, Severity: 17, ErrState: 2
    > Could not allocate space for object 'ProductSettings' in database
    > 'epo_Srv-ms-avd' because the 'Primary' filegroup is full
    > I've cleanup the database but the error keeps on.
    > I need urgent help!!!
    > Thank you very much
    in addition to Greg's answer, try executing
    EXEC sp_helpdb 'your_db_name'
    to see, in the 2nd resultset, if the datafile has a limited file maxsize
    option set or if it can grow till the 2 GB limit...
    Andrea Montanari (Microsoft MVP - SQL Server)
    http://www.asql.biz/DbaMgr.shtm http://italy.mvps.org
    DbaMgr2k ver 0.9.1 - DbaMgr ver 0.55.1
    (my vb6+sql-dmo little try to provide MS MSDE 1.0 and MSDE 2000 a visual
    interface)
    -- remove DMO to reply
    |||Thank you very much. Database size is 2 GB
    "Eduardo Crespo" <crespo_santamaria@.NOSPAMhotmail.com> escribi en el
    mensaje news:OoFIRcmuEHA.1992@.TK2MSFTNGP10.phx.gbl...
    > Hi.
    > I've a msde database and I'm getting the following error every minute
    > Error 1105, Severity: 17, ErrState: 2
    > Could not allocate space for object 'ProductSettings' in database
    > 'epo_Srv-ms-avd' because the 'Primary' filegroup is full
    > I've cleanup the database but the error keeps on.
    > I need urgent help!!!
    > Thank you very much
    >
    >
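    Following Andrea's suggestion: if sp_helpdb shows a restricted maxsize (rather than the MSDE 2 GB engine limit itself being reached), the cap can be lifted. The logical file name 'epo_data' below is a placeholder for whatever name sp_helpdb actually reports.

    ```sql
    -- Inspect file sizes and growth limits for the database:
    EXEC sp_helpdb 'epo_Srv-ms-avd';

    -- If a data file has a capped maxsize, remove the cap
    -- ('epo_data' is a placeholder logical file name):
    ALTER DATABASE [epo_Srv-ms-avd]
    MODIFY FILE (NAME = epo_data, MAXSIZE = UNLIMITED);
    ```

    Note this cannot take the database past the MSDE 2 GB engine limit; once that is reached, the only options are to archive data or move to a full SQL Server edition.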

    Friday, March 9, 2012

    Filegroup Backups

    I'm trying to do a Full Backup on a filegroup but
    everytime I try to run it I get this error:
    [SQLSTATE 01000] (Message 4035) BACKUP
    DATABASE...FILE=<name> successfully processed 1625664
    pages in 625.900 seconds (21.277 MB/sec). [SQLSTATE 01000]
    (Message 3014) The value '0' is not within range for the
    FILE parameter. [SQLSTATE 42000] (Error 3250) VERIFY
    DATABASE is terminating abnormally. [SQLSTATE 42000]
    (Error 3013). The step failed.
    Does anyone have any idea as to why this is happening?
    I'm running SQL2000 on a Win2K Server. SQL Books Online
    is no help.
    Thanks
    Jeroocko
    As Jasper says, you need to post the command. The part of
    your error, 'The value '0' is not within range for the
    FILE parameter', usually means an error in the code: wrong
    file name, running against the wrong database, something
    like that.
    Regards
    John
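    For reference, a filegroup backup with a separate verify step might look like the sketch below; the database, filegroup, and path names are placeholders. The "value '0' is not within range" message suggests the job's verify step was built with FILE = 0, while backup set positions start at 1.

    ```sql
    -- Back up one filegroup (all names and paths are placeholders):
    BACKUP DATABASE MyDB
        FILEGROUP = 'SECONDARY'
    TO DISK = 'D:\Backup\MyDB_secondary.bak';

    -- Verify the backup set just written. FILE here is the backup set
    -- position within the media set, and positions start at 1, not 0:
    RESTORE VERIFYONLY
    FROM DISK = 'D:\Backup\MyDB_secondary.bak'
    WITH FILE = 1;
    ```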

    Wednesday, March 7, 2012

    File System Task Error - Process in use

    Hi,

    In the Integration Services package I'm working on, I connect to an Excel database and have two data flows which deal with two worksheets in the Excel spreadsheet. The data flows do some transformation and then write the data out to a SQL database (two different tables). This is all working great; however, what I want to do once both data flows are complete is to move the directory which contains the current file that is being looped through. At present there is a foreach loop going through a number of directories. I have tried to implement the File System Task object to move the directory but get the following error:

    [File System Task] Error: An error occurred with the following error message: "The process cannot access the file because it is being used by another process.".

    I've read a similar post whereby the problem was due to not closing the Excel connection before doing the File System Task. I cannot seem to find where I would be able to do this.

    If anyone has any ideas, I'm looking forward to hearing them.

    Many thanks in advance,

    Grant

    i'm not sure what's causing your problem. however, my hunch is that a data flow task is still executing when control is passed to the file system task. you need make sure that both data flow tasks have completed before control is passed to the next task. one way to do this is to place both data flow tasks within a single sequence container, and then link this container to the next task. that way, control won't proceed to the next task until both data flow tasks have completed.

    i hope this helps.

    |||

    Hi Duane,

    Thanks for the response. I had the same thought that the task hadn't released the file when it came to the moving of the directory. I have tried the suggestion of using the sequence container but I still get the same result. The move directory task still thinks it is being used by another process.

    Any further suggestions would be welcome. I'm thinking there must be a way to get round this; I'm sure it must be a common problem.

    Many Thanks,

    Grant

    |||Hi Duane,

    Further to my last post: after some debugging I have found that it is the directory itself that seems to be in use by another process. I can remove the contents of the folder no problem at all. Does this shed any more light on the matter?

    Cheers,

    Grant|||Hi,

    I have found what I believe to be my problem and can't believe I didn't see it earlier. I'm doing all of my file and directory processing within a foreach loop. Hence, when I come to moving the current directory, I'm sure the foreach loop is still using the folder as its enumerated value. I need to be able to somehow call the move directory task outwith the loop so that it doesn't have control of the process. Could anyone suggest a way of doing this?

    Thanks in advance.

    Grant|||

    if i understand you correctly, you want both data flows to finish before control proceeds to moving the files. also, you want this process to repeat once for each file that you have. i'm also assuming that you have two separate directories of files that you want to process.

    if my understanding above is correct, then you need to do the following: place two sequence containers inside a foreach loop container. connect an on success constraint from one sequence container to the other. there should be two separate data flow tasks inside the first sequence container. only when this sequence container succeeds will control flow to the second sequence container. the second sequence container should contain two separate file system tasks. only when this sequence container succeeds will control flow to the next iteration of the foreach loop container.

    i hope this helps.

    |||Hi,

    I'll clarify what i have set up at present.

    There is a root directory in which a number of directories are stored. Within each of these directories are two files (one CSV and one XLS). We can ignore the CSV file just now. I have two data flow tasks: one which extracts, transforms and loads data from one worksheet into a SQL table, and another which does a similar process but from a different worksheet and to a different SQL table. Upon completion of both these tasks (which are now in a Sequence container) I want to move the directory containing both files to an archive location. Regardless of whether the file system tasks are in a sequence container, there is still an error relating to the directory move. It still believes that the process is in use.
    I still think that because the foreach loop is looping through the directories looking for each XLS file, it is taking a hold on the directory itself and will not release the current directory until the next iteration of the loop.

    With this thought in mind, I thought I could do something at the beginning of the loop to remove the directory that was referenced in the previous iteration (possibly set a variable at the end of each iteration). The only problem I encounter when I try this is that because the variable has to be used as a source/destination value, I cannot have it as a blank value when starting; I get errors when this is attempted. I really need to skip the move-directory task on the first iteration and add an additional one at the end of the foreach loop.

    My other thought is to just copy the directories during the loop and delete them after the loop has completed. I am aware, however, that using this method I run the risk that if something causes the package to crash midway through, it might duplicate directories between the processing and archive directories.

    Does this make any more sense?

    Cheers,

    Grant|||I have the problem resolved.

    I have added the move directory task to the start of the foreach loop. A script bypasses the task on the first loop as there isn't a previous folder to archive. The archive folder path is assigned to the variable later in the loop. On the next iteration the loop then has a folder to move in the variable, which works with no problem. After the loop is finished, one final move task is added to deal with the final processed folder.

    The reason it wouldn't allow an empty string as a connection source to begin with was that the DelayValidation property was set to False. If it is set to True, the problem goes away.
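The pattern Grant describes — archive the previous iteration's directory at the start of each loop pass, skip the move on the first pass, and do one final move after the loop — can be sketched as follows. This is an illustrative Python sketch of the control flow only (the real package uses a Foreach Loop, a Script Task, and File System Tasks in VB.NET/SSIS); the function and path names are hypothetical.

```python
import os
import shutil

def process_directories(root, archive):
    """Process each subdirectory of `root`, archiving the previous one
    at the start of each pass, as in the Foreach Loop workaround."""
    previous = None  # stands in for the package variable holding the last folder
    for entry in sorted(os.listdir(root)):
        current = os.path.join(root, entry)
        if not os.path.isdir(current):
            continue
        # Move the PREVIOUS directory first; on the first iteration there
        # is nothing to archive yet, so the move is skipped.
        if previous is not None:
            shutil.move(previous, archive)
        # ... the data flow work against the XLS file in `current` goes here ...
        previous = current
    # One final move after the loop handles the last processed directory.
    if previous is not None:
        shutil.move(previous, archive)
```

Because the directory being moved is never the one the loop currently holds open, the "process is in use" lock problem from the original design does not arise.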

    Thanks for all your help on this matter.

    Grant

    File System Task Error

    Hi,
    I am using the File System Task to create a directory structure (e.g. ..\DB; ..\DB\LOG; ..\DB\BACKUP).
    I set the following properties on each task: UseDirectoryIfExists = True; Operation = Create Directory.

    The task worked fine before installing SP1 on the server. Now it raises an error if the directory already exists and is not empty.

    SSIS package "testcreatedirectory.dtsx" starting.

    Warning: 0xC002915A at Create DB Directory, File System Task: The Directory already exists.

    Error: 0xC002F304 at Create DB Directory, File System Task: An error occurred with the following error message: "Das Verzeichnis ist nicht leer.". (The Directory is not empty.)

    Task failed: Create DB Directory

    Warning: 0x80019002 at Create Directorys: The Execution method succeeded, but the number of errors raised (1) reached the maximum allowed (1); resulting in failure. This occurs when the number of errors reaches the number specified in MaximumErrorCount. Change the MaximumErrorCount or fix the errors.

    Warning: 0x80019002 at testcreatedirectory: The Execution method succeeded, but the number of errors raised (1) reached the maximum allowed (1); resulting in failure. This occurs when the number of errors reaches the number specified in MaximumErrorCount. Change the MaximumErrorCount or fix the errors.

    SSIS package "testcreatedirectory.dtsx" finished: Failure.

    Does anyone know if this is a known bug in SP1, or perhaps a feature? Does a solution already exist (maybe there are additional properties I have not discovered yet)?

    Thanks in advance
    Holger

    I have just run into the same problem after applying Service Pack 1. Have you resolved it yet?|||I don't know if this will help with your particular situation, but hotfix packages for SQL Server 2005 SP1 were recently released: http://support.microsoft.com/kb/918222/en-us|||I installed the SSIS hotfix but I still get the same results. I guess I need to check the Microsoft bug list site to see if it's been posted.|||I also tried the hotfix (last week), with the same result. Have you checked the Microsoft bug list?|||

    I don't know if there is a proper fix for this, but you might want to use a Foreach Item loop container to loop over the paths you want created and put two tasks inside it: a Script Task to check whether the path exists, and a File System Task to create it if it does not.

    This would achieve the same result as the former single task.

    Fred

    |||

    Has anyone been able to solve this problem without an unofficial workaround?
    All my packages use this task to ease logging, and currently they are all failing.

    |||

    No, I have not resolved the issue yet. The only solutions I have are workarounds.

    |||

    Could somebody help me with this same issue? How do we apply the workaround?

    The first time I run the package it works fine and creates some files inside, but when I execute the package a second time it throws an error saying that the directory is not empty. If I go and delete the files manually, it writes the files correctly. I have the File System Task with UseDirectoryIfExists = True and Operation = Create Directory.

    Any help,

    Thanks,

    JA

    |||I had the same issue, but realized that within the constraints of the task there was no "real" solution. I set the ForceExecutionResult property of the File System Task to Success. This solved the issue: it would still create the directory if it didn't exist and leave it alone if it did. Then I used another File System Task to move the files I wanted to rename into the new directory, so in this case I used the move operation to move and rename in one step. I often use the move operation to rename things when the rename operation won't work. To tell you the truth, the move operation alone might do what you want as well; I can't tell, because I used both, first the create, then the move.|||

    So you had the File System Task with ForceExecutionResult = Success, UseDirectoryIfExists = True, and Operation = Create Directory? That does not work for me, even if I use a next step to move files to a new directory. You have to create the directory in the first place, then move, then rename or delete. This is what I get:

    Source: Creating Directoy Folder
    Description: The Directory already exists.
    End Warning
    Error: 2006-09-08 12:21:31.59
    Code: 0xC002F304
    Source: Creating Directoy Folder
    Description: An error occurred with the following error message: "The directory is not empty.

    Any idea about this error?

    Thanks,

    JA

    |||Like I said, there is no "real" solution within the task; it just doesn't do what it should. What I offered was a temporary fix to make the task move on regardless. The only other fix, as others have mentioned, is doing it in a script, using the File System Object. It seems to be a bug.


    File System Error: The record ID is incorrect

    Hi,

    We have deployed SQL Server 2005 with Analysis Services and Reporting Services.

    When running a report, we get the following error:

    An error has occurred during report processing.

    Cannot read the next data row for the data set DM_GASTOS.

    File system error:

    The record ID is incorrect. Physical file: . Logical file: .

    The error occurs with both a local user account and a domain user account. The user account is a member of a role which has a Cell Data security restriction in Analysis Services. When other members run the report, the error goes away.

    Anybody has seen this before? Any solutions?

    Cheers

    This looks like a data corruption issue. Try fully re-processing your entire database.

    Edward Melomed.
    --
    This posting is provided "AS IS" with no warranties, and confers no rights.

    |||

    Hi. I'm encountering the same problem. I have re-processed my database but I still get the error message when I generate my report. Any other suggestions?

    Thanks

    |||

    I'm having the same problem.

    I have the same OLAP database on 3 servers, and the error is the same.

    Any other suggestions?