Monday, March 26, 2012
Fillfactor at 10 took all my free space
I do a lot of bcp-ing and was trying to speed it up by changing the fillfactor on a clustered index. Now the database has doubled in size from 200 to 400 GB and there is no free space left, but the process keeps running. Short of a restore, what can I do to get the process to end?

Have you tried killing the process from QA? Find out the spid of the running process and then open up another window in QA and issue a KILL <spid> command to end the process.
hth.
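A minimal sketch of those two steps, with an assumed database name and an illustrative spid. Note that killing a long-running index rebuild triggers a rollback, which can itself take considerable time:

```sql
-- Find sessions active in the affected database (SQL Server 2000)
SELECT spid, status, cmd, loginame
FROM master..sysprocesses
WHERE dbid = DB_ID('MyDatabase');   -- 'MyDatabase' is a placeholder

-- In another Query Analyzer window, kill the runaway session by its spid
KILL 52;   -- 52 is illustrative; use the spid returned above
```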
Friday, March 23, 2012
FileSystemTask->Size limitation?
I am using the FileSystemTask to copy an ASCII file of around 4 GB. Is there any limitation on the file size?
SQL Server 2005 SP2, Windows 2003 server. Copying from one Windows 2K server to another Windows 2K server.
Thank you,
As far as I know, there is no limitation. Are you getting any error?
|||We are copying files much larger than this. What error are you getting? Could it be that you have run out of space on the destination?
|||Currently I am not getting any error messages.
The same file copy operation is failing right now in a DTS package, so we decided to move this part of the flow to SSIS. I just wanted to confirm whether there is any known size limitation in SSIS.
Thank you all for your answers.
Wednesday, March 21, 2012
Files sizes in a OLE bmp field
I have a database which stores Word documents in tables as BLOB fields.
I need to work out what size each Word document is. Do you know how I can work out the individual size of each Word document?
__________
'sp_spaceused worddoc_attached' only gives me the overall size of the data
in the table.
__________
Thank you for any help in advance
JAD
You can use the datalength function to do this. You can find more
information in books online under datalength.
-Sue
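A sketch of the DATALENGTH approach; the key and BLOB column names here are assumed, not taken from the original table definition:

```sql
-- Size in bytes of each stored Word document
-- (doc_id and doc_blob are hypothetical column names)
SELECT doc_id, DATALENGTH(doc_blob) AS size_bytes
FROM worddoc_attached
ORDER BY size_bytes DESC;
```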
Filegrowth by percent or by size. (And how much).
Yesterday we encountered a problem.
We set a small test machine (slow disks) to a filegrowth of 10% and a database size of 1.2 GB.
Problem: a simple insert took over 1 minute, causing a timeout. It was the growing of the database that took the time. Because of the timeout, the extent was not added after the action, so the next action hit the same problem.
Our production machines are quite a bit faster, so the problem has not been noticed (yet) on a production machine. Our development machines are faster as well, but a one-off glitch probably wouldn't be noticed as significant.
So what timing is to be expected when extending the database? (Is 64 MB a good size for extending?)
Any thoughts about this?
ben brugman

Hi Ben,
In our production environment, we give a good initial size to the data and log files and set the increment to a fixed 200 MB rather than a percentage. We have found that this is the best way to handle databases that grow by 100 MB in a week and are more than 2 GB in size... In this way, we reduce frequent growth of dbs.
regards,
bharath
mcdba
|||Can you give an indication of the 'size' of your machine and the time it takes to extend by 200 MB?
ben
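Bharath's fixed-increment setting can be applied like this; the database and logical file names are assumed:

```sql
-- Replace 10% growth with a fixed 200 MB increment for data and log
ALTER DATABASE MyDb MODIFY FILE (NAME = MyDb_Data, FILEGROWTH = 200MB);
ALTER DATABASE MyDb MODIFY FILE (NAME = MyDb_Log,  FILEGROWTH = 200MB);
```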
Monday, March 19, 2012
filegroups
Due to the size of our tables we're considering the use of filegroups that span multiple disk-arrays.
What's the best approach:
- create a filegroup with files on multiple disks and let SQL Server figure out/spread the table data across the disks automatically, or
- analyse and assign the tables to groups/files/disks ourselves?
tanx,
Derk Jan

Derk,
Create your user-defined filegroup(s). Change the default filegroup status from PRIMARY to one of your user-defined filegroups (ALTER DATABASE). Use the ON FILEGROUP clause of the CREATE INDEX and CREATE TABLE statements to control the placement of your tables and indexes.
HTH
Jerry
|||Why does the size of the table warrant spanning multiple arrays? Are you
scanning the entire table each time you access it? How large are you
talking about? The size or number of rows in a table or database means much
less than how you use and access the data. Before you go moving stuff
around you should determine where the bottleneck really is and why. That
will play a major role in deciding how to split up data if required.
Andrew J. Kelly SQL MVP
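Jerry's steps can be sketched as follows; all database, filegroup, file, and table names here are assumed:

```sql
-- 1. Add a user-defined filegroup with a file on another disk array
ALTER DATABASE MyDb ADD FILEGROUP FG_Data;
ALTER DATABASE MyDb ADD FILE
    (NAME = MyDb_FG1, FILENAME = 'E:\Data\MyDb_FG1.ndf', SIZE = 1GB)
TO FILEGROUP FG_Data;

-- 2. Optionally make it the default filegroup for new objects
ALTER DATABASE MyDb MODIFY FILEGROUP FG_Data DEFAULT;

-- 3. Or place a table (and its clustered index) there explicitly
CREATE TABLE dbo.BigTable (id INT PRIMARY KEY, payload VARCHAR(100))
ON FG_Data;
```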
Friday, March 9, 2012
Filegroup backup and full recovery mode
My database in mssql2000 sp3a is about 40 GB in size. I want to use filegroups and implement filegroup backup - at the same time - I do not want to compromise on my 'FULL recovery mode', which means that I would like to have point-in-time recovery as well.
Could anyone let me know how to implement this, with an example?

Hi,
I think you will find well-explained examples in BOL of how to accomplish it.
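A sketch of the combination (backup paths and names assumed): filegroup backups spread the data backups out, while regular log backups are what preserve point-in-time recovery under the FULL recovery model:

```sql
-- Back up one filegroup at a time
BACKUP DATABASE MyDb
    FILEGROUP = 'PRIMARY'
    TO DISK = 'D:\Backup\MyDb_FG_Primary.bak';

-- Keep taking log backups; these make point-in-time restore possible
BACKUP LOG MyDb TO DISK = 'D:\Backup\MyDb_log.trn';
```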
filegroup auto-growth - know when it happens?
I'm running SQLServer 2000. I am in the process of looking over our current filegroups and providing reports based on file size increases.
One thing I'd like to know is when (date and time) a filegroup last "auto-grew". Is there a way to know when this occurs?
thanks,
Jason
SQL Profiler, Events, Database...
Data File Auto Grow
Data File Auto Shrink
Log File Auto Grow
Log File Auto Shrink
|||Derek,
Thanks for the info. I should have re-worded my question I think:
Is there a way to know when the date/time a filegroup last auto-resized? Let's say it auto-resized last friday, is this timestamp stored somewhere? Profiler would tell me only if I was running it at the time of the resize.
I'm trying to correlate the time of an error with the last time a large filegroup resized.
thanks
Jason
|||No. In general, if you do not trace database activity, it is lost. SQL Server has a built-in error log, but it collects only exceptions.
|||Thanks for the info. Appreciate the help.
Jason
FileGroup
There is a table which is about 50GB in size.
I am thinking of placing the data of this table into a separate .ndf on a separate drive on the server. Is it possible to place this particular table on a separate drive to increase performance even more? Basically my question is: is it possible to place tables into separate .ndf files?
Thanks

Yes, Microsoft did a great whitepaper about exactly this thought a few years ago. I'm not sure if they made it public or if it was "restricted publication" due to containing customer-specific details.
The general gist of the idea is to create specific filegroups for specific devices, then put the SQL objects on those filegroups.
One area that often helps more than moving specific tables to a given drive is to segregate the data pages (either the heap or the clustered index) onto one set of spindles (aka a drive) and the indices onto a different filegroup. This allows parallel IO to occur for even single-table changes (assuming you have enough CPUs and IO channels)!
-PatP
|||I think I am now confused. Could you please elaborate further, and do you know of a link regarding this issue?
Thanks
|||We can create two or more filegroups while creating the database. The following SQL statements may help you:
CREATE DATABASE database_name
[ ON
[ PRIMARY ] [ <filespec> [ ,...n ]
[ , <filegroup> [ ,...n ] ]
[ LOG ON { <filespec> [ ,...n ] } ]
]
[ COLLATE collation_name ]
[ WITH <external_access_option> ]
]
In the <filespec> block you give the file path, so you can place the files on different disks.
Then referencing the following SQL statements:
CREATE TABLE
[ database_name . [ schema_name ] . | schema_name . ] table_name
( { <column_definition> | <computed_column_definition> }
[ <table_constraint> ] [ ,...n ] )
[ ON { partition_scheme_name ( partition_column_name ) | filegroup
| "default" } ]
[ { TEXTIMAGE_ON { filegroup | "default" } ]
[ ; ]
That way you can store data on different disks.
Good luck!
|||Yes, this is what I was after. This seems to be very powerful stuff.
Thank you
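Pat's data/index split can be sketched like this; the filegroup and object names are assumed, and the filegroups must already exist with files on separate drives:

```sql
-- Data (heap or clustered index) on one filegroup,
-- a nonclustered index on another, enabling parallel IO
CREATE TABLE dbo.BigTable (id INT NOT NULL, payload VARCHAR(200))
ON FG_Data;
CREATE CLUSTERED INDEX IX_BigTable_id
    ON dbo.BigTable (id) ON FG_Data;
CREATE NONCLUSTERED INDEX IX_BigTable_payload
    ON dbo.BigTable (payload) ON FG_Index;
```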
Sunday, February 26, 2012
file size of restore
Hi,
I set an initial file size of a db to 100GB.
The backups reflect the true size of the DB (about 3GB). However, if you try and restore the DB it wants 500GB of disk space. Is there any way to restore so it only restores the size of the data?

I don't know why it wants 500GB, but you do need the entire size of the data and log files, not just the data itself. It is a good idea to have extra space, but you may be pushing it a little too far if you only have 3GB.
--
Andrew J. Kelly SQL MVP
|||>I don't know why it wants 500GB
Perhaps the initial size was 100GB and it has now grown to 500GB? Doesn't change what you are
saying, though... :-)
--
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://sqlblog.com/blogs/tibor_karaszi
|||The only thing that makes sense to me is that either the data file has grown to 500GB or, more likely, the LOG file has grown unchecked to 400GB because you had Full Recovery Mode set but never backed up the log file.
--
TheSQLGuru
President
Indicium Resources, Inc.
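Before restoring, you can check exactly how much disk each file in the backup will need; the backup path here is assumed:

```sql
-- Lists every file in the backup with the on-disk Size (bytes)
-- each one will require at restore time
RESTORE FILELISTONLY FROM DISK = 'D:\Backup\MyDb.bak';
```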
File Size Limitation
I'm trying to upload a Word document to the report server. I got an error message that the maximum size was exceeded. Can anyone tell me what the limit is on the size of the file to be uploaded?
--
Thanks,
ID

You may want to check this related thread:
http://msdn.microsoft.com/newsgroups/default.aspx?dg=microsoft.public.sqlserver.reportingsvcs&mid=74513284-1467-4a32-b711-bdd7598292e0&sloc=en-us
-- Robert
This posting is provided "AS IS" with no warranties, and confers no rights.
|||Thanks
File size limit / offline cache
Hi,
I have 2 questions:
- Is there any way of getting around the 128MB file size limit when creating and adding SSEv databases to VS2005? Currently I get the following error when trying to connect to a database: "The database file is larger than the configured maximum database size. This setting takes effect on the first concurrent database connection only...". This is after I altered the app.config file to "...Max Database Size=600;..."
- Has anyone tried to use SSEv to cache data with the Smart Application Offline Building Block? Is there a provider I can use for doing this?
Thanks in advance!
A SQLEv database can be up to 4GB in size. You control the maximum size of the database through the connection string. See the System.Data.SqlServerCe.SqlCeConnection MSDN documentation for details on this and other connection string parameters that you'll want to know about when working with large databases (e.g. Autoshrink threshold comes to mind)
http://msdn2.microsoft.com/en-us/library/system.data.sqlserverce.sqlceconnection.connectionstring.aspx
Have a look at the new Mobile Client Software Factory published on MSDN - it includes an off-line app block for use with SQL Mobile and should be directly applicable to SQLEv.
http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dnpag2/html/mcsflp.asp
Darren
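For reference, the cap is set in the connection string passed to SqlCeConnection, and it only takes effect on the first connection to the file. A sketch with an assumed file path (the size value is in MB):

```
Data Source=C:\Data\OfflineCache.sdf;Max Database Size=600;
```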
File size is growing fast
My SQL database is about 90MB and started growing very fast to 1000MB over a w[...]
[...] the size of the log file is reduced. Is there a way to shrink the database file through Query Analyzer?
I also want to run the shrink statement at the time of starting the service. I know that there is an SP in the Master DB that does this.
Any advice on the above topic would be very grateful.
SF

DBCC SHRINKDATABASE
( database_name [ , target_percent ]
[ , { NOTRUNCATE | TRUNCATEONLY } ]
)
DBCC SHRINKFILE
( { file_name | file_id }
{ [ , target_size ]
| [ , { EMPTYFILE | NOTRUNCATE | TRUNCATEONLY } ]
}
)
You need to run DBCC SHRINKDATABASE first and then execute DBCC SHRINKFILE for both files, data and log.
Another note: usually there is a reason for the files to grow to the size they are before you shrink them. You may want to identify and monitor it. You don't want to shrink a file which will subsequently be expanded again by normal processing. Expanding a file can affect performance, especially if it happens during peak hours.
|||> I also want to run the shrink statement at the time of starting the service.
I suggest you read this:
http://www.karaszi.com/SQLServer/info_dont_shrink.asp
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
Blog: http://solidqualitylearning.com/blogs/tibor/
|||Is the size of the data file (mdf) growing quickly, or that of the log file (ldf)?
You can check the size of the database files; if it is the log file that grows quickly, you should implement an appropriate log backup strategy.
Martin C K Poon
Senior Analyst Programmer
====================================
|||It is the size of the data file (mdf) that grows quickly.
SF
|||SF (socfund@.online.com.kh) writes:
> It is the size of the data file (mdf) that grows quickly.
Most likely then it grows because you are adding a lot of data to it.
If you don't expect it to grow that fast, you should examine what is
going on.
First run this:
exec sp_spaceused null, true
to get accurate information on space within the file.
Then run
SELECT object_name(id), reserved, used
FROM sysindexes
WHERE indid IN (0, 1)
ORDER BY reserved DESC
to see which objects that are taking space in your database.
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se
Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/pr...oads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodin...ions/books.mspx
|||You might want to check the default fillfactor (server level) (Database Settings tab, or run sp_configure)... I have seen this happen when someone changed this setting without knowing what they were doing.
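The server-wide default fill factor mentioned above can be inspected like this (it is an advanced option, so it has to be made visible first):

```sql
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
-- 0 or 100 means pages are packed full; anything lower leaves free space
EXEC sp_configure 'fill factor (%)';
```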
FILE SIZE IN MODEL
Hi,
Where can I see the default size of a new database in the MODEL database? In which table?
Saimon (Florence)

Unless you specify otherwise, new databases will be created the same size as model. You can query the model database properties with:
EXEC sp_helpdb 'model'
--
Hope this helps.
Dan Guzman
SQL Server MVP
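On SQL Server 2000 the underlying numbers live in model's sysfiles table, where size is counted in 8 KB pages; a sketch:

```sql
EXEC sp_helpdb 'model';

-- Or query the file sizes directly, converted from 8 KB pages to MB
SELECT name, size * 8 / 1024 AS size_mb
FROM model..sysfiles;
```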
File Size for SQL Server 2005 Error Log
My SQL Server 2005 error log is getting very, very large... like 4 GB in 4 days. How can I recycle my error log so that it doesn't overrun my C: drive? Also, if there's a way to move the error log from the C: drive to another drive, that would also help.
Thanks.
Adam F. Harris
You will want to execute sp_cycle_errorlog periodically. I would
suggest scheduling a job that runs it as well.
Regards,
Enrique Martinez
Sr. SQL Server Developer
|||Hello Enrique,
OK, that sounds great, but I have no idea how to do that... Do you have a website or documentation for it?
Thanks.
Adam F. Harris
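For reference, the recycle itself is a single call, which a scheduled SQL Server Agent job can run (for example nightly):

```sql
-- Closes the current error log and starts a new, empty one;
-- older logs are rotated as errorlog.1, errorlog.2, ...
EXEC master..sp_cycle_errorlog;
```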
File size big when download report
I download my report using the SQL Server Reporting Services export function (Excel format). The file size is very big compared to my previous ASP download function.
Note: the number of records is the same.
Ex: Using ASP, the file size is about 1,000 KB. Using RS, the file size is about 23,000 KB.
Please help me: what should I do to reduce the file size to normal?
Thanks
Regards
Kumar.

Are you using the SP1 version of RS? SP1 introduced Excel export using its native store, while RTM used MHTML, which was much larger.
--
-Daniel
This posting is provided "AS IS" with no warranties, and confers no rights.