
I am trying to reduce parallelism in my query without explicitly setting MAXDOP. I have been told that capping MAXDOP can hinder throughput as the platform grows and query executions increase (happy to be corrected on this).

The following query has been observed hitting MAXDOP 8, causing worker thread spikes in our Azure SQL database. The result is the error: "The request limit for the database is 800 and has been reached."
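For context, worker usage can be watched while the query runs with something along these lines (a sketch of my own, using the standard `sys.dm_exec_requests` and `sys.dm_os_tasks` DMVs; the grouping is mine):

```sql
-- Sketch: count active tasks (worker threads) per request.
-- dop reports the degree of parallelism chosen at runtime.
select r.session_id,
  r.dop,
  count(t.task_address) as active_tasks
from sys.dm_exec_requests r
join sys.dm_os_tasks t
  on t.session_id = r.session_id
group by r.session_id, r.dop
order by active_tasks desc;
```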

So far I have applied the following to try to make this query less parallel:

  1. Split the queries into #ErIds and #BaseTable
  2. Changed a big CASE expression in the ORDER BY into individual IF branches for the sort

If anyone has any ideas on how to reduce parallelism that would be really appreciated.

I can provide more info if needed.

Thank you.

declare @uniqueIds dbo.GuidList
  , @ids dbo.IntegerList
  , @searchTerm nvarchar(max) = null
  , @pageNum int
  , @pageSize int
  , @sortBy nvarchar(50) = 'Name'
  , @sortDescending bit = 0

set nocount on;

  -- creating these local variables to avoid parameter sniffing
  declare @pageNumLocal int = case 
      when @pageNum is null
        or @pageNum <= 0
        then 0
      else @pageNum
      end
    , @pageSizeLocal int = isnull(@pageSize, 999999) -- If pageSize is null, return all records
    , @sortByLocal nvarchar(50) = isnull(@sortBy, 'Name')
    , @sortOrderLocal nvarchar(20) = 'asc'
    , @searchTermLocal nvarchar(max) = '%' + lower(trim(isnull(@searchTerm, ''))) + '%'; -- max, so the wrapped term is not truncated

  if @sortDescending = 1
    set @sortOrderLocal = 'desc'
  else
    set @sortOrderLocal = 'asc';

  create table #ErIds(id integer primary key);
  insert into #ErIds
  select distinct e.Id
  from [TableA] e
  where e.IsSoftDeleted = 0
    and (
      e.Id in (
        select intVal
        from @ids
        )
      or e.UniqueId in (
        select guidVal
        from @uniqueIds
        )
    )

  create table #BaseTable (
    Id int,
    UniqueId uniqueidentifier,
    [Name] nvarchar(200),
    EmployeeCount int,
    ProcessorName nvarchar(255)
  );

  insert into #BaseTable
  select e.Id,
    e.UniqueId,
    e.[Name],
    e.EmployeeCount,
    ProcessorName = isnull(u.FirstName + ' ' + u.LastName, u.EmailAddress)
  from #ErIds erids
  inner join [TableA] e
    on erids.id = e.id
  left join [TableB] bs
    on bs.EmployerId = e.Id
  left join [TableC] u
    on u.Id = bs.ProcessorId
  where 
    (
      @searchTermLocal = '%%' -- no search term supplied; the wrapped variable is never null or empty here
      or lower(trim(e.[Name])) like @searchTermLocal
      or lower(trim(isnull(u.FirstName + ' ' + u.LastName, u.EmailAddress))) like @searchTermLocal
      or lower(trim(convert(nvarchar(36), e.UniqueId))) like @searchTermLocal
    )

  if @sortByLocal = 'Name'
  begin
    if @sortOrderLocal = 'asc'
      select * from #BaseTable order by [Name] asc offset @pageNumLocal * @pageSizeLocal rows fetch next @pageSizeLocal rows only;
    else
      select * from #BaseTable order by [Name] desc offset @pageNumLocal * @pageSizeLocal rows fetch next @pageSizeLocal rows only;
  end
  else if @sortByLocal = 'ProcessorName'
  begin
    if @sortOrderLocal = 'asc'
      select * from #BaseTable order by ProcessorName asc offset @pageNumLocal * @pageSizeLocal rows fetch next @pageSizeLocal rows only;
    else
      select * from #BaseTable order by ProcessorName desc offset @pageNumLocal * @pageSizeLocal rows fetch next @pageSizeLocal rows only;
  end
  else if @sortByLocal = 'EmployeeCount'
  begin
    if @sortOrderLocal = 'asc'
      select * from #BaseTable order by EmployeeCount asc offset @pageNumLocal * @pageSizeLocal rows fetch next @pageSizeLocal rows only;
    else
      select * from #BaseTable order by EmployeeCount desc offset @pageNumLocal * @pageSizeLocal rows fetch next @pageSizeLocal rows only;
  end
  
  drop table #ErIds;
  drop table #BaseTable;
  • Please show the table DDL definitions (incl. indexes) for TableA, TableB and TableC, as well as your original query. Please also share the execution plan via pastetheplan.com. Commented Jul 7 at 12:24
  • Please see Asking query performance questions to improve the details in your question. Also, you mention this is not the original query; did you get a significant improvement to this query when compared to the original? If not, maybe it would be a good idea to ask for help using the original (and possibly simpler) query. Commented Jul 7 at 12:30

1 Answer


The problem you have is too many concurrent worker threads. Here are several ways to address it:

  1. Increase the max number of workers allowed by changing to a larger service tier.
  2. Set MAXDOP at the database level to a value lower than 8.
  3. Set MAXDOP at the query level to a value lower than 8.
  4. Reduce the number of concurrent queries allowed by the application.
  5. Make the queries complete faster so they hold parallel workers for less time. This could be done by resolving blocking issues or improving the performance of the queries.
  6. Split the query into parts with each step using fewer parallel zones. I recommend reading anything by Paul White on the subject including Parallel Execution Plans – Branches and Threads.
  7. Artificially encourage SQL Server to use batch mode processing for your queries. Queries that use batch mode processing are less likely to reserve many parallel worker threads. Note that batch mode processing may not be possible for some queries and some queries may not benefit from it.
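For options 2, 3, and 7, the mechanics look roughly like this (a sketch; `#BatchModeHelper` is a name I made up, and the empty-columnstore join is a well-known community workaround for enabling batch mode under pre-2019 compatibility levels, not a documented feature):

```sql
-- Option 2: cap parallelism for the whole database.
-- Azure SQL supports database-scoped configurations.
alter database scoped configuration set MAXDOP = 4;

-- Option 3: cap parallelism for a single statement instead,
-- e.g. on one of the paged SELECTs in the question.
select * from #BaseTable
order by [Name]
offset @pageNumLocal * @pageSizeLocal rows fetch next @pageSizeLocal rows only
option (maxdop 2);

-- Option 7: a common trick to make a row-mode query eligible for
-- batch mode: left join to an empty table with a clustered
-- columnstore index. The join never matches any rows.
create table #BatchModeHelper (i int, index ccs clustered columnstore);

select bt.*
from #BaseTable bt
left join #BatchModeHelper h on 1 = 0;
```

On SQL Server 2019+ compatibility levels, batch mode on rowstore may kick in without the helper table, so check the actual plan before relying on the trick.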
