Wednesday, March 7, 2012

Database Maintenance - DBCC REINDEX

Running SQL Server 2000.
I'm working on developing a maintenance plan that will keep our
databases healthy, and running at their best possible potential.
There are a few issues that come to mind after reading the various
ways of cleaning up things...and after trying a few of them I've got a
few more questions.
For instance, I just created a job to do the following:
1) Shrink the log file by backing up with TRUNCATE_ONLY and then doing
a SHRINKFILE.
2) I then did a DBCC DBREINDEX on each table in the db.
3) I then did a DBCC UPDATEUSAGE.
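(A minimal sketch of that job in SQL Server 2000 T-SQL; the database, file, and table names below are placeholders:)

-- Placeholder names: MyDb, MyDb_Log, dbo.SomeTable
USE MyDb
GO
-- 1) Empty the log (this breaks the log backup chain!) and shrink the file
BACKUP LOG MyDb WITH TRUNCATE_ONLY
DBCC SHRINKFILE (MyDb_Log, 100)          -- target size in MB
-- 2) Rebuild all indexes on a table (repeat per table)
DBCC DBREINDEX ('dbo.SomeTable', '', 0)  -- '' = all indexes, 0 = keep current fillfactor
-- 3) Correct the page/row counts stored in sysindexes
DBCC UPDATEUSAGE (0)                     -- 0 = current database
GO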
What I found is that I could have done better by doing the SHRINKFILE
after the DBREINDEX, since the rebuild grew the log file so much, but I'm wary
of what this could hurt. I know I could switch from Full Recovery to
Bulk-Logged, but I'm hesitant to do that because of the possible data
loss.
I would love to come up with a plan that could run nightly/weekly that
would fix my indexes and maintain my log files.
What would be the best way to implement what I've done above?
Thanks
Step 1: read and ponder http://www.microsoft.com/technet/prodtechnol/sql/2000/maintain/ss2kidbp.mspx
Step 2: use the code found in BOL under DBCC SHOWCONTIG, which only rebuilds the indexes that are
fragmented in the first place.
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://sqlblog.com/blogs/tibor_karaszi
"jason7655" <jcrider@.gmail.com> wrote in message
news:1184688227.637963.246580@.m37g2000prh.googlegr oups.com...
> Running SQL Server 2000.
> I'm working on developing a maintenance plan that will keep our
> databases healthy, and running at their best possible potential.
> There are a few issues that come to mind after reading the various
> ways of cleaning up things...and after trying a few of them I've got a
> few more questions.
> For instance, I just created a job to do the following:
> 1) shrink the log file by backing up with truncate only and then doing
> a shrinkfile.
> 2) i then did a DBREINDEX on each table in the db
> 3) I then did an updateusage
> What i found is that I could have done better by doing the shrinkfile
> after the DBREINDEX since it grew the log file so much, but I'm wary
> of what this could hurt. I know I could switch from Full Recover to
> Bulk-Logged but I'm hesitant to do that because of the possible data
> loss.
> I would love to come up with a plan that could run nightly/weekly that
> would fix my indexes and maintain my log files.
> What would be the best way to implement what I've done above?
> Thanks
>
|||I've read through that article several times, and I'm aware of rebuilding only
the tables that need it most.
The main part of my question is about the timing of the SHRINKFILE, and how
that can help with the log file size during the initial run. Is there a
benefit to running the SHRINKFILE before the DBCC DBREINDEX, before and after,
or just after? That's the biggest question I have.
Our log files get pretty big even under normal operations, and I believe that
a reindex (even if just on the tables that need it) and a SHRINKFILE nightly may
help.
"Tibor Karaszi" wrote:

> Step 1: read and ponder http://www.microsoft.com/technet/prodtechnol/sql/2000/maintain/ss2kidbp.mspx
> Step 2, use the code found in BOL under DBCC SHOWCONTIG which only rebuild the indexes which are
> fragmented in the first place.
> --
> Tibor Karaszi, SQL Server MVP
> http://www.karaszi.com/sqlserver/default.asp
> http://sqlblog.com/blogs/tibor_karaszi
>
> "jason7655" <jcrider@.gmail.com> wrote in message
> news:1184688227.637963.246580@.m37g2000prh.googlegr oups.com...
>
|||Why bother with SHRINKFILE in the first place? The log is emptied when you do a backup of the log.
If you want to break your log backup chain, then I suggest you do BACKUP LOG dbname WITH
TRUNCATE_ONLY after the shrink operation. I wouldn't shrink the file, since it will only bump up to
the same size the next night, so all you get is worse performance (*).
Actually, BACKUP LOG dbname WITH TRUNCATE_ONLY will be removed in a future version, where setting
the db to simple recovery and then back to full achieves the same thing.
Also, you can even do concurrent or interleaved BACKUP LOG dbname WITH TRUNCATE_ONLY along with your
DBREINDEX operations. This way you empty the log after each index rebuild. Or, the recommended way
(see above paragraph) would be to have the db in simple recovery during the reindex job.
You need, of course, to think about the consequences of breaking the log backup chain. This is not
something to be taken lightly...
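(A sketch of the interleaved variant; the database and table names are placeholders:)

-- Empty the log between rebuilds so it never has to hold more than one rebuild.
-- NOTE: each TRUNCATE_ONLY breaks the log backup chain.
DBCC DBREINDEX ('dbo.Table1', '', 0)
BACKUP LOG MyDb WITH TRUNCATE_ONLY
DBCC DBREINDEX ('dbo.Table2', '', 0)
BACKUP LOG MyDb WITH TRUNCATE_ONLY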
(*)
http://www.karaszi.com/SQLServer/info_dont_shrink.asp
http://sqlblog.com/blogs/tibor_karaszi/archive/2007/02/25/leaking-roof-and-file-shrinking.aspx
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://sqlblog.com/blogs/tibor_karaszi
"jason7655" <jason7655@.discussions.microsoft.com> wrote in message
news:8AB8248A-5813-4308-A019-F4C3F2AE994A@.microsoft.com...[vbcol=seagreen]
> I've read through that article several times and I'm aware of the Rebuild on
> the tables that need it most.
> The main part of my question is about the timing of the shrinkfile, and how
> that can help with the log file size during the initial run. Is there a
> benefit to running the shrinkfile before the DBCC REINDEX, Before and after,
> or just after? That's the biggest question I have.
> Our log files get pretty big even under normal operations and I believe that
> an index (even if just on the tables that need it) and shrinkfile nightly may
> help.
> "Tibor Karaszi" wrote:
|||Ok, I'm currently doing a nightly Transaction Log Backup, so I'm fine in that
regard and don't need to do that. If I understand you correctly, that does
the same thing as DBCC SHRINKFILE on the log.
You mention switching the recovery model.
Doesn't switching it from Full -> Simple...doing your stuff...and then
switching Simple -> Full break your log backup chain as well?
When you go from Simple to full you have to do a backup anyway.
From this article (http://msdn2.microsoft.com/en-us/library/ms178052.aspx)
- Immediately after you complete the switch to the full recovery model or
bulk-logged recovery model, take a full or differential database backup to
start the log chain.
The switch to the full or bulk-logged recovery model takes effect only after
the first data backup.
- If you switch from the full or bulk-logged recovery model to the simple
recovery model, you break the backup log chain. Therefore, we strongly
recommend that you back up the log immediately before switching, which allows
you to recover the database up to that point. After switching, you need to
take periodic data backups to protect your data and to truncate the inactive
portion of the transaction log.
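(In T-SQL, that documented sequence looks roughly like this; all names and paths are placeholders:)

BACKUP LOG MyDb TO DISK = 'D:\Backup\MyDb_last.trn'      -- last point-in-time anchor before the switch
ALTER DATABASE MyDb SET RECOVERY SIMPLE
-- ... maintenance work (e.g. index rebuilds) runs here; the log is truncated at checkpoints ...
ALTER DATABASE MyDb SET RECOVERY FULL
BACKUP DATABASE MyDb TO DISK = 'D:\Backup\MyDb_full.bak' -- restarts the log backup chain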
Currently we have the following procedure:
1) Our production server has the transaction logs backed up.
2) The Backup kicks off.
From what I gather, it would be best to run a job before those two that
executes a stored procedure reindexing the tables that need it. Then run a
DBCC UPDATEUSAGE.
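(For reference, DBCC UPDATEUSAGE corrects the page and row counts stored in sysindexes, which can drift on SQL Server 2000; a minimal call:)

DBCC UPDATEUSAGE (0)   -- 0 = the current database; a database name such as 'MyDb' works too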
Does that sound about right? I'm not sure how switching the recovery models
would play into our plans.
"Tibor Karaszi" wrote:

> Why bother with shrinkfile in the first place? The log is emptied when you do a backup of the log.
> If you want to break your log backup chain, then I suggest you do BACKUP LOG dbname WITH
> TRUNCATE_ONLY after the shrink operation. I wouldn't shrink the file since it will only bump up to
> the same size next night, so all you do if getting worse performance (*).
> Actually, BACKUP LOG dbname WITH TRUNCATE ONLY will be removed in a future version, where setting
> the db to simple recovery and then back to full achieves the same thing.
> Also, you can even do concurrent or interleaved BACKUP LOG dbname WITH TRUNCATE ONLY along with your
> DBREINDEX operations. This way you empty the log after each index rebuild. Or, the recommended way
> (see above paragraph) would be to have the db in simple recovery during the reindex job.
> You need, of course, think about the consequences of breaking the log backup chain. This is not
> something to be taken lightly...
> (*)
> http://www.karaszi.com/SQLServer/info_dont_shrink.asp
> http://sqlblog.com/blogs/tibor_karaszi/archive/2007/02/25/leaking-roof-and-file-shrinking.aspx
> --
> Tibor Karaszi, SQL Server MVP
> http://www.karaszi.com/sqlserver/default.asp
> http://sqlblog.com/blogs/tibor_karaszi
>
> "jason7655" <jason7655@.discussions.microsoft.com> wrote in message
> news:8AB8248A-5813-4308-A019-F4C3F2AE994A@.microsoft.com...
>
|||> Ok, I'm currently doing a nightly Transaction Log Backup, so I'm fine in that
> regard and don't need to do that. If I understand you correctly, that does
> the same thing as DBCC SHRINKFILE on the log.
No, not at all. Backup log will empty the log file. That is *not* the same thing as making the file
smaller. Read my blog post about the leaking roof for an analogy.
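(To make the distinction concrete: the first command below only empties the inactive portion of the log internally; only the second changes the physical file size. Names and paths are placeholders:)

BACKUP LOG MyDb TO DISK = 'D:\Backup\MyDb.trn'  -- truncates the inactive log; the file stays the same size
DBCC SHRINKFILE (MyDb_Log, 100)                 -- physically shrinks the file toward 100 MB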

> Doesn't switching it from Full -> Simple...doing your stuff...and then
> switching Simple -> Full break your log backup chain as well?
Yes, it does. My point was that BACKUP LOG WITH TRUNCATE_ONLY breaks the log backup chain, and since
this command is deprecated, the "new" way of achieving this is to switch to simple and back to full.
Switching to simple and back to full does the same thing as BACKUP LOG WITH TRUNCATE_ONLY. And yes,
both break the log backup chain.

> When you go from Simple to full you have to do a backup anyway.
Yes, if you want to keep doing log backups.

> From what I gather, it would be best to run a job before those two that
> executes a stored procedure reindexing the tables that need it. Then run a
> DBCC UPDATEUSAGE.
I don't understand what DBCC UPDATEUSAGE has to do with things.
Perhaps we can go back to the requirements, because, honestly, I'm a bit lost here.
What are your requirements?
Do you need to do log backups at all?
Do you need an unbroken chain of log backups over a longer time?
Also, what exactly is it that you now want to achieve?
Keep the log file small but also do DBREINDEX?
If you can provide information about above, we can probably suggest some reasonable scheme for you.
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://sqlblog.com/blogs/tibor_karaszi
"jason7655" <jason7655@.discussions.microsoft.com> wrote in message
news:6BFDD6D2-14D3-40D5-A4D1-4F3C0DDF7A84@.microsoft.com...[vbcol=seagreen]
> Ok, I'm currently doing a nightly Transaction Log Backup so I'm fine in that
> regard and don't need to do that. If I understand you correctly, that does
> the same thing as DBCC Shrinkfile (log).
> You mention switching the recovery model.
> Doesn't switching it from Full -> Simple...doing your stuff...and then
> switching Simple -> Full break your log backup chain as well?
> When you go from Simple to full you have to do a backup anyway.
> From this article (http://msdn2.microsoft.com/en-us/library/ms178052.aspx)
> - Immediately after you complete the switch to the full recovery model or
> bulk-logged recovery model, take a full or differential database backup to
> start the log chain.
> The switch to the full or bulk-logged recovery model takes effect only after
> the first data backup.
> -If you switch from the full or bulk-logged recovery model to the simple
> recovery model, you break the backup log chain. Therefore, we strongly
> recommend that you back up the log immediately before switching, which allows
> you to recover the database up to that point. After switching, you need to
> take periodic data backups to protect your data and to truncate the inactive
> portion of the transaction log.
> Currently we have our the following procedure:
> 1) Our production server has the transaction logs backed up.
> 2) The Backup kicks off.
> From what I gather, it would be best to run a job before those two that
> executes a stored procedure reindexing the tables that need it. Then run a
> DBCC UPDATEUSAGE.
> Does that sound about right? I'm not sure how switching the recovery models
> would play into our plans.
>
> "Tibor Karaszi" wrote:
|||The requirements:
The databases have not been properly maintained for some time. The only
maintenance on them has been the backup and transaction log backup.
Fragmentation has occurred at the disk level as well as the database level.
I would like to implement a plan that essentially returns a database to best
health on a consistent basis (whether that be nightly, weekly, or monthly).
The databases have become large, and that's understandable...but it's also
made me wonder what benefit a SHRINKDATABASE on them would bring, but that
is a different story.
We defragmented the physical disks, and it's suggested that after a physical
defrag you rebuild the indexes. That's what got us to this point, and it's
got me wondering about the benefits of scheduled reindexing.
I don't guess that the DBCC UPDATEUSAGE is absolutely necessary, but it was
in some old scripts that I found.
Yes, we need to do log backups so that the transaction logs will fit on the
backup tape.
Unbroken chain over a longer period of time? Not so sure, because we keep 2
weeks backup of the transaction logs.
What I want to achieve is the best possible performance of my system, and
scheduled jobs would be the way I'd like to do it.
"Tibor Karaszi" wrote:

> No, not at all. Backup log will empty the log file. that is *not* the same thing as making the file
> smaller. Read my blog about leaking roof for analogy.
>
> Yes, it does. My point was that BACKUP LOG WITH TRUNCATE ONLY breaks the recovery model, and this
> this command is deprecated, the "new" way of achieving this is to switch to simple and back to full.
> Switching to simple and back to full does the same thing as BACKUP LOG WITH TRUNCATE ONLY. And yes,
> both breaks the log backup chain.
>
> Yes, if you want to keep doing log backups.
>
> I don't understand what DBCC UPDATEUSAGE has to do with things.
> Perhaps we can go back to the requirements, because, honestly, I'm a bit lost here.
> What are your requirements?
> Do you need to do log backups at all?
> Do you need an unbroken chain of log backups over a longer time?
> Also, what exactly is it that you now want to achieve?
> Keep the log file small but also do DBREINDEX?
> If you can provide information about above, we can probably suggest some reasonable scheme for you.
> --
> Tibor Karaszi, SQL Server MVP
> http://www.karaszi.com/sqlserver/default.asp
> http://sqlblog.com/blogs/tibor_karaszi
>
> "jason7655" <jason7655@.discussions.microsoft.com> wrote in message
> news:6BFDD6D2-14D3-40D5-A4D1-4F3C0DDF7A84@.microsoft.com...
>
|||So basically what you want to do is to have regular database backups and more frequent log backups.
And also defragment inside the database regularly. OK, this is perfectly normal.
You keep two weeks of log backups, and I suggest you try to have those as an unbroken chain of log
backups. Say you do db backup every night. And say you find out that a corruption occurred (broken
page, for instance) three days ago. You can now restore the most recent *healthy* db backup (4 days
ago) and all subsequent log backups. This is what I mean by having an unbroken chain of log backups.
If you think that this is a good idea, then don't set the db to simple recovery model and don't do
BACKUP LOG WITH TRUNCATE_ONLY.
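(A sketch of that restore sequence, with hypothetical backup file names:)

-- Restore the most recent healthy full backup, then roll forward through the unbroken log chain
RESTORE DATABASE MyDb FROM DISK = 'D:\Backup\MyDb_4daysago.bak' WITH NORECOVERY
RESTORE LOG MyDb FROM DISK = 'D:\Backup\MyDb_day1.trn' WITH NORECOVERY
RESTORE LOG MyDb FROM DISK = 'D:\Backup\MyDb_day2.trn' WITH NORECOVERY
RESTORE LOG MyDb FROM DISK = 'D:\Backup\MyDb_day3.trn' WITH RECOVERY  -- bring the db online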
Since we need to start with the requirements and work from there, it is important that you and your
organization agree on whether the above is the requirement for your database. Let's assume it is:
What backups to have is a no-brainer. Of course, you need to consider how frequently to do database
and log backups.
You also say you want to defragment your indexes. OK. The MS article I posted earlier has good
background information about this. Based on that article, you can decide whether to use DBCC
INDEXDEFRAG or DBCC DBREINDEX or a combination of the two (like reorganize if fragmentation is
between 5 and 30%, rebuild if higher). I suggest that you only reorganize or rebuild indexes that
are fragmented in the first place.
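(A hedged sketch of that per-index decision, reusing the loop variables (@frag, @tablename, @indexname, @objectid, @indexid) from the revised script later in this thread; the 5/30 thresholds are the article's example values:)

-- Inside the index cursor loop: @frag holds LogicalFrag for the current index
IF @frag >= 30.0
    SELECT @execstr = 'DBCC DBREINDEX (''' + @tablename + ''', ''' + @indexname + ''') WITH NO_INFOMSGS'
ELSE IF @frag >= 5.0
    SELECT @execstr = 'DBCC INDEXDEFRAG (0, ' + CAST(@objectid AS varchar(10)) + ', ' + CAST(@indexid AS varchar(10)) + ')'
ELSE
    SELECT @execstr = NULL
IF @execstr IS NOT NULL EXEC (@execstr)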
Now, the defragmentation of your indexes will generate log records. No way around this if you want
an unbroken chain of log backups. So you need to have resources to handle this (disk space for the
log and subsequent log backups).
As you probably know, I do not recommend shrink, and I posted a couple of articles earlier
describing why.
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://sqlblog.com/blogs/tibor_karaszi
"jason7655" <jason7655@.discussions.microsoft.com> wrote in message
news:9CF903A1-7D43-4774-A644-9EACDC7478C8@.microsoft.com...[vbcol=seagreen]
> The requirements:
> The databases have not been properly maintained for some time. The only
> maintenance on them have been the backup and transaction log
> backup.Fragmentation has occured at the pc level as well as database level.
> I would like to implement a plan that essentially returns a database to best
> health on a consistent basis (whether that be nightly, weekly, or monthly).
> The databases have become large, and that's understandable...but it's also
> made me wonder what the benefits of doing a shrinkdatabase on them would do
> but that is a different story.
> We defragmented the physical disks, and it's suggested that after a physical
> defrag you rebuild the indexes. That's what got us to this point and it's
> got me wondering the benefits of scheduled reindexing.
> I don't guess that the DBCC UPDATEUSAGE is absolutely necessary, but it was
> in some old scripts that i found.
> Yes, we need to do log backups so that the transaction logs will fit on the
> backup tape.
> Unbroken chain over a longer period of time? Not so sure, because we keep 2
> weeks backup of the transaction logs.
> What I want to acheive is the best possible performance of my system, and
> sheduled jobs would be the way I'd like to do it.
> "Tibor Karaszi" wrote:
|||Ok, so any clue why the following will work with 30 or 20, but not 10 for the
maximum fragmentation?
USE Database
-- Declare variables
SET NOCOUNT ON
DECLARE @tablename VARCHAR (128)
DECLARE @execstr VARCHAR (255)
DECLARE @objectid INT
DECLARE @indexid INT
DECLARE @frag DECIMAL
DECLARE @maxfrag DECIMAL
-- Decide on the maximum fragmentation to allow
SELECT @maxfrag = 10.0;
-- Declare cursor
DECLARE tables CURSOR FOR
SELECT TABLE_NAME
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_TYPE = 'BASE TABLE'
-- Create the table
CREATE TABLE #fraglist (
ObjectName CHAR (255),
ObjectId INT,
IndexName CHAR (255),
IndexId INT,
Lvl INT,
CountPages INT,
CountRows INT,
MinRecSize INT,
MaxRecSize INT,
AvgRecSize INT,
ForRecCount INT,
Extents INT,
ExtentSwitches INT,
AvgFreeBytes INT,
AvgPageDensity INT,
ScanDensity DECIMAL,
BestCount INT,
ActualCount INT,
LogicalFrag DECIMAL,
ExtentFrag DECIMAL)
-- Open the cursor
OPEN tables
-- Loop through all the tables in the database
FETCH NEXT
FROM tables
INTO @tablename
WHILE @@FETCH_STATUS = 0
BEGIN
-- Do the showcontig of all indexes of the table
INSERT INTO #fraglist
EXEC ('DBCC SHOWCONTIG (''' + @tablename + ''') WITH FAST, TABLERESULTS, ALL_INDEXES, NO_INFOMSGS')
FETCH NEXT
FROM tables
INTO @tablename
END
-- Close and deallocate the cursor
CLOSE tables
DEALLOCATE tables
-- Declare cursor for list of indexes to be defragged
DECLARE indexes CURSOR FOR
SELECT ObjectName, ObjectId, IndexId, LogicalFrag
FROM #fraglist
WHERE LogicalFrag >= @maxfrag
AND INDEXPROPERTY (ObjectId, IndexName, 'IndexDepth') > 0
-- Open the cursor
OPEN indexes
-- Loop through the indexes
FETCH NEXT
FROM indexes
INTO @tablename, @objectid, @indexid, @frag
WHILE @@FETCH_STATUS = 0
BEGIN
SELECT @execstr = 'DBCC DBREINDEX (' + RTRIM(@objectid) + ', ' + RTRIM(@indexid) + ') WITH NO_INFOMSGS'
EXEC (@execstr)
FETCH NEXT
FROM indexes
INTO @tablename, @objectid, @indexid, @frag
END
-- Close and deallocate the cursor
CLOSE indexes
DEALLOCATE indexes
-- Delete the temporary table
DROP TABLE #fraglist
GO
"Tibor Karaszi" wrote:

> So basically what you want to do is to have regular database backups and more frequent log backups.
> And also defragment inside the database regularly. OK, this is perfectly normal.
> You keep two weeks of log backups, and I suggest you try to have those as an unbroken chain of log
> backups. Say you do db backup every night. And say you find out that a corruption occurred (broken
> page, for instance) three days ago. You can now restore the most recent *healthy* db backup (4 days
> ago) and all subsequent log backups. This is what I mean by having an unbroken chain of log backups.
> If you think that this is a good idea, then don't set the db to simple recovery model and don't do
> BACKUP LOG WITH TRUNCATE ONLY.
> Since we need to start with the requirements and work from there, it is important that you and your
> organization agree on whether above is the requirement for your database. Lets assume it is:
> What backups to have is a no-brainer. Of course, you need to consider how frequent to do database
> and log backups.
> You also say you want to defragment your indexes. OK. The MS article I posted earlier has good
> background information about this. Based on that article, you can decide whether to use DBCC
> INDEXDEFRAG or DBCC DBREINDEX or a combination of the two (like reorganize if fragmentation is
> between 5 and 30%, rebuild if higher). I suggest that you only reorganize or rebuild indexes that
> aren't fragmented.
> Now, the defragmentation of your indexes will generate log records. No way around this if you want
> an unbroken chain of log backups. So you need to have resources to handle this (disk space for the
> log and subsequent log backups).
> As you probably know, I do not recommend shrink, and I posted a couple of articles earlier
> describing why.
> --
> Tibor Karaszi, SQL Server MVP
> http://www.karaszi.com/sqlserver/default.asp
> http://sqlblog.com/blogs/tibor_karaszi
>
> "jason7655" <jason7655@.discussions.microsoft.com> wrote in message
> news:9CF903A1-7D43-4774-A644-9EACDC7478C8@.microsoft.com...
>
|||I'm working my way through the script, and below I comment on things as I find them:
You declare @objectid as an int and then in the EXEC you RTRIM it. An int doesn't have trailing
spaces, since it isn't a string. I recommend you use CAST, since this is what you need to achieve
and it is more readable.
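(For example, the exec string could be built like this; only the conversion changes here, and the id-vs-name issue is addressed in the next point:)

SELECT @execstr = 'DBCC DBREINDEX (' + CAST(@objectid AS varchar(10)) + ', ' + CAST(@indexid AS varchar(10)) + ') WITH NO_INFOMSGS'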
You pass the id of the table and index into DBCC DBREINDEX. Looking in Books Online, it needs the
name, not the id.
You declare @tablename as varchar(128), where it should be nvarchar(128), or preferably sysname.
For safety, I always declare my "execstrings" as nvarchar(4000).
The #fraglist table has the table and index names as fixed-length datatypes. Define them as
variable-length and you won't need to trim them later on.
Here's a revised script which seems to work:
-- Declare variables
SET NOCOUNT ON
DECLARE @tablename sysname
DECLARE @execstr nvarchar(4000)
DECLARE @objectid INT
DECLARE @indexid INT
DECLARE @indexname sysname
DECLARE @frag DECIMAL
DECLARE @maxfrag DECIMAL
DECLARE @sql nvarchar(4000)
-- Decide on the maximum fragmentation to allow
SELECT @maxfrag = 10.0;
-- Declare cursor
DECLARE tables CURSOR FOR
SELECT TABLE_NAME
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_TYPE = 'BASE TABLE'
-- Create the table
CREATE TABLE #fraglist (
ObjectName VARCHAR (255),
ObjectId INT,
IndexName VARCHAR (255),
IndexId INT,
Lvl INT,
CountPages INT,
CountRows INT,
MinRecSize INT,
MaxRecSize INT,
AvgRecSize INT,
ForRecCount INT,
Extents INT,
ExtentSwitches INT,
AvgFreeBytes INT,
AvgPageDensity INT,
ScanDensity DECIMAL,
BestCount INT,
ActualCount INT,
LogicalFrag DECIMAL,
ExtentFrag DECIMAL)
-- Open the cursor
OPEN tables
-- Loop through all the tables in the database
FETCH NEXT
FROM tables
INTO @tablename
WHILE @@FETCH_STATUS = 0
BEGIN
-- Do the showcontig of all indexes of the table
PRINT ('DBCC SHOWCONTIG (''' + @tablename + ''') WITH FAST, TABLERESULTS, ALL_INDEXES, NO_INFOMSGS')
INSERT INTO #fraglist
EXEC ('DBCC SHOWCONTIG (''' + @tablename + ''') WITH FAST, TABLERESULTS, ALL_INDEXES, NO_INFOMSGS')
FETCH NEXT
FROM tables
INTO @tablename
END
-- Close and deallocate the cursor
CLOSE tables
DEALLOCATE tables
-- Declare cursor for list of indexes to be defragged
DECLARE indexes CURSOR FOR
SELECT ObjectName, ObjectId, IndexId, LogicalFrag, IndexName
FROM #fraglist
WHERE LogicalFrag >= @maxfrag
AND INDEXPROPERTY (ObjectId, IndexName, 'IndexDepth') > 0
-- Open the cursor
OPEN indexes
-- Loop through the indexes
FETCH NEXT
FROM indexes
INTO @tablename, @objectid, @indexid, @frag, @indexname
WHILE @@FETCH_STATUS = 0
BEGIN
SELECT @execstr = 'DBCC DBREINDEX (' + @tablename + ', ' + @indexname + ') WITH NO_INFOMSGS'
PRINT @execstr
EXEC (@execstr)
FETCH NEXT
FROM indexes
INTO @tablename, @objectid, @indexid, @frag, @indexname
END
-- Close and deallocate the cursor
CLOSE indexes
DEALLOCATE indexes
-- Delete the temporary table
DROP TABLE #fraglist
GO
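(One possible hardening, not in the original script: if a table or index name contains spaces or other special characters, the concatenated DBCC DBREINDEX command above breaks. Wrapping the names as quoted strings with QUOTENAME would guard against that:)

SELECT @execstr = 'DBCC DBREINDEX (' + QUOTENAME(@tablename, '''') + ', ' + QUOTENAME(@indexname, '''') + ') WITH NO_INFOMSGS'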
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://sqlblog.com/blogs/tibor_karaszi
"jason7655" <jason7655@.discussions.microsoft.com> wrote in message
news:F71215C6-A458-416D-8E51-8457583814CE@.microsoft.com...[vbcol=seagreen]
> Ok, so any clue why the following will work with 30 or 20, but not 10 for the
> maximum fragmentation?
> USE Database
> -- Declare variables
> SET NOCOUNT ON
> DECLARE @.tablename VARCHAR (128)
> DECLARE @.execstr VARCHAR (255)
> DECLARE @.objectid INT
> DECLARE @.indexid INT
> DECLARE @.frag DECIMAL
> DECLARE @.maxfrag DECIMAL
> -- Decide on the maximum fragmentation to allow
> SELECT @.maxfrag = 10.0;
> -- Declare cursor
> DECLARE tables CURSOR FOR
> SELECT TABLE_NAME
> FROM INFORMATION_SCHEMA.TABLES
> WHERE TABLE_TYPE = 'BASE TABLE'
> -- Create the table
> CREATE TABLE #fraglist (
> ObjectName CHAR (255),
> ObjectId INT,
> IndexName CHAR (255),
> IndexId INT,
> Lvl INT,
> CountPages INT,
> CountRows INT,
> MinRecSize INT,
> MaxRecSize INT,
> AvgRecSize INT,
> ForRecCount INT,
> Extents INT,
> ExtentSwitches INT,
> AvgFreeBytes INT,
> AvgPageDensity INT,
> ScanDensity DECIMAL,
> BestCount INT,
> ActualCount INT,
> LogicalFrag DECIMAL,
> ExtentFrag DECIMAL)
> -- Open the cursor
> OPEN tables
> -- Loop through all the tables in the database
> FETCH NEXT
> FROM tables
> INTO @.tablename
> WHILE @.@.FETCH_STATUS = 0
> BEGIN
> -- Do the showcontig of all indexes of the table
> INSERT INTO #fraglist
> EXEC ('DBCC SHOWCONTIG (''' + @.tablename + ''')
> WITH FAST, TABLERESULTS, ALL_INDEXES, NO_INFOMSGS')
> FETCH NEXT
> FROM tables
> INTO @.tablename
> END
> -- Close and deallocate the cursor
> CLOSE tables
> DEALLOCATE tables
> -- Declare cursor for list of indexes to be defragged
> DECLARE indexes CURSOR FOR
> SELECT ObjectName, ObjectId, IndexId, LogicalFrag
> FROM #fraglist
> WHERE LogicalFrag >= @.maxfrag
> AND INDEXPROPERTY (ObjectId, IndexName, 'IndexDepth') > 0
> -- Open the cursor
> OPEN indexes
> -- loop through the indexes
> FETCH NEXT
> FROM indexes
> INTO @.tablename, @.objectid, @.indexid, @.frag
> WHILE @.@.FETCH_STATUS = 0
> BEGIN
> SELECT @.execstr = 'DBCC DBREINDEX (' + RTRIM(@.objectid) + ',
> ' + RTRIM(@.indexid) + ') WITH NO_INFOMSGS'
> EXEC (@.execstr)
> FETCH NEXT
> FROM indexes
> INTO @.tablename, @.objectid, @.indexid, @.frag
> END
> -- Close and deallocate the cursor
> CLOSE indexes
> DEALLOCATE indexes
> -- Delete the temporary table
> DROP TABLE #fraglist
> GO
> "Tibor Karaszi" wrote:
