The environment:
- SharePoint 2010 Standard - 1 APP, 1 WFE
- Nintex Workflow running on both servers
- Nintex Analytics running on the APP server
...All sharing the same SQL box... which had been running fine for three years.
One wonderful day we started getting complaints from users that the intranet was running slow, and the server graphs showed the SQL box had been sitting at 100% CPU for the last few minutes. That trend continued, and the only process eating the CPU was the SQL Server process itself.
I ran a report of the most CPU-expensive queries on the server and found the following:
SELECT @WebCount = COUNT_BIG(DISTINCT w.ObjectId)
FROM dbo.DimSPObjectsSites o with (readuncommitted)
INNER JOIN (SELECT DISTINCT ObjectId, ObjectTypeId, EventTypeId FROM dbo.FactAuditData with (readuncommitted) WHERE IntervalId >= @IntervalStart AND IntervalId < @IntervalEnd) f
ON f.ObjectId = o.ObjectId
AND f.ObjectTypeId = o.ObjectTypeId
AND f.EventTypeId = 3
INNER JOIN dbo.DimSPWebs w with (readuncommitted)
ON w.ObjectId = o.SPWebId
AND w.WebTemplate = @WebTemplate
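For anyone who wants to reproduce that step without the built-in "expensive queries" report, the same information can be pulled straight from the plan cache. This is only a minimal sketch, not the exact query the report runs:
-- Sketch: top 10 statements by total CPU time from the plan cache
-- (total_worker_time is reported in microseconds)
SELECT TOP (10)
       qs.total_worker_time / 1000 AS total_cpu_ms,
       qs.execution_count,
       SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
           ((CASE qs.statement_end_offset
                  WHEN -1 THEN DATALENGTH(st.text)
                  ELSE qs.statement_end_offset
             END - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
ORDER BY qs.total_worker_time DESC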
That top CPU consumer makes use of the following three database indexes:
dbo.FactAuditData.IX_FactAuditData
dbo.DimSPObjectsSites.IX_DimSPObjectsSites
dbo.DimSPWebs.IX_DimSPWebs_WebTemplate
...which are all in the Nintex Analytics content databases... so I started stopping the Nintex Analytics services (which are basically Windows services running on the SharePoint server) one by one. After stopping the Nintex Analytics Data Management Service, the CPU usage dropped immediately back to its usual levels.
I tried a few other bits and pieces, like reconfiguring the reports in terms of how much data to retain, purging intervals and so on, but every time I started the Data Management Service, within a minute the SQL Server was getting hammered again (100% CPU).
I raised this with Nintex Support and they sent me the following SQL to create a stored procedure:
CREATE PROCEDURE [dbo].[CfgIndexInformation]
AS
BEGIN
    SET NOCOUNT ON

    DECLARE @indexCounter int,
            @maxIndexes int,
            @partitioncount bigint,
            @schemaname sysname,
            @objectname sysname,
            @indexname sysname,
            @objectid int,
            @indexid int,
            @partitionnum bigint,
            @frag float,
            @partitions bigint

    -- Work list of fragmented indexes found in the current database
    DECLARE @work TABLE
    (
        indexNumber int identity(1,1),
        objectId int,
        indexId int,
        partitionNum bigint,
        fragmentation float
    )

    -- Result set: friendly schema.table and index names with their fragmentation
    DECLARE @tables TABLE
    (
        tableName sysname,
        indexName sysname,
        fragmentation float
    )

    -- Collect every index in the current database that is more than 10% fragmented
    INSERT @work (objectId, indexId, partitionNum, fragmentation)
    SELECT s.object_id,
           s.index_id,
           s.partition_number,
           s.avg_fragmentation_in_percent
    FROM sys.dm_db_index_physical_stats (DB_ID(), NULL, NULL, NULL, 'LIMITED') s
    WHERE s.avg_fragmentation_in_percent > 10.0 AND s.index_id > 0

    SET @maxIndexes = @@ROWCOUNT
    SET @indexCounter = 1

    -- Resolve schema, table and index names for each row in the work list
    WHILE @indexCounter <= @maxIndexes
    BEGIN
        SELECT @objectid = objectId,
               @indexid = indexId,
               @partitionnum = partitionNum,
               @frag = fragmentation
        FROM @work
        WHERE indexNumber = @indexCounter

        SELECT @objectname = o.name,
               @schemaname = s.name
        FROM sys.objects o
        INNER JOIN sys.schemas s
            ON s.schema_id = o.schema_id
        WHERE o.object_id = @objectid

        SELECT @indexname = name
        FROM sys.indexes
        WHERE object_id = @objectid
          AND index_id = @indexid

        SELECT @partitioncount = count(*)
        FROM sys.partitions
        WHERE object_id = @objectid
          AND index_id = @indexid

        INSERT @tables
        SELECT @schemaname + '.' + @objectname, @indexname, @frag

        SET @indexCounter = @indexCounter + 1
    END

    -- Most fragmented indexes first
    SELECT * FROM @tables ORDER BY fragmentation DESC, tableName, indexName
END
Then I ran the newly created stored procedure (exec CfgIndexInformation) to get information on index fragmentation. The indexes used by the most expensive query I found earlier were fragmented at 95%, 91% and 66% respectively. I stopped and disabled all the Nintex Analytics services and ran exec dbo.CfgIndexRefresh against all the Nintex Analytics content databases, as per the support team's advice.
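If you just want to spot-check one of those indexes without creating the procedure (for example, to verify the effect of the refresh), sys.dm_db_index_physical_stats can be queried directly. A quick sketch, run inside the relevant Nintex Analytics content database, using the FactAuditData index from the list above:
-- Sketch: fragmentation of a single index by name
SELECT i.name AS indexName,
       ps.avg_fragmentation_in_percent,
       ps.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.FactAuditData'), NULL, NULL, 'LIMITED') ps
INNER JOIN sys.indexes i
    ON i.object_id = ps.object_id
   AND i.index_id = ps.index_id
WHERE i.name = 'IX_FactAuditData'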
The refresh decreased the fragmentation a lot and Nintex Analytics has been running fine since. The long-term solution is to schedule the index refresh on a monthly basis to avoid a recurrence.
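For reference, the rebuild itself is plain ALTER INDEX work, so it can easily be wrapped in a SQL Server Agent job for that monthly schedule. I don't have the contents of dbo.CfgIndexRefresh, so the following is only a generic sketch against the three indexes identified earlier, not the Nintex procedure:
-- Sketch: rebuild the three indexes identified earlier
-- (offline rebuilds; ONLINE = ON would require Enterprise Edition of SQL Server)
ALTER INDEX IX_FactAuditData ON dbo.FactAuditData REBUILD
ALTER INDEX IX_DimSPObjectsSites ON dbo.DimSPObjectsSites REBUILD
ALTER INDEX IX_DimSPWebs_WebTemplate ON dbo.DimSPWebs REBUILD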