This cursor is a select on a large table, and each update occurs once for
each row. Does SQL Server hold all the changes in memory on dirty pages until
the cursor is done (or a checkpoint is reached), and is this cursor pinning a
large table in memory?
WHILE @@FETCH_STATUS = 0
BEGIN
    WHILE @ldt_min <= @ldt_max
    BEGIN
        UPDATE Deal_Details_ZN
        SET {some values}
        FROM Deal_Details_ZN a, Temp_Zainet_Detail b
        WHERE a.Xkey = b.Xkey

        SELECT @ldt_min = @ldt_min + 1
    END

    FETCH NEXT FROM Deal_Cur INTO @ls_xkey, @ls_zkey, @ls_dealxref, @ls_flowdate
END

CLOSE Deal_Cur
DEALLOCATE Deal_Cur

"Shaun Farrugia" <far!!!!ugia!!!s@.dte!!!!ener!gy.com> wrote in message
news:eEgF9zrhDHA.2452@.TK2MSFTNGP10.phx.gbl...
> This cursor is a select on a large table, and each update occurs once for
> each row. Does SQL Server hold all the changes in memory on dirty pages
> until the cursor is done (or a checkpoint is reached), and is this cursor
> pinning a large table in memory?
Cursors are inherently less efficient than set-based DML, but the way people
use cursors contributes to their slowness. Not wrapping your cursor-driven
DML in a transaction is a common performance problem that can degrade the
performance of cursor-driven solutions from "ok" to "terrible".
Changing data in an RDBMS involves three basic steps: editing the in-memory
data pages, writing log entries, and flushing the log entries to disk. Of
these, flushing the log entries to disk is the most expensive, since writes
to the disk are slow and serialized.
A single DML statement is always atomic, so SQL Server will make all the
data page changes and write all the log entries first, and then flush all of
the log entries to disk at once. This is much more efficient than flushing
the log entries after each row. But when people write cursor-driven DML,
each UPDATE runs as its own autocommit transaction, so the log is flushed
for every row, and it appears that's what your loop is doing.
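To see the difference, here is a rough sketch of a timing test. The FlushDemo table and the 2,000-row count are made up for illustration; run it in a scratch database.

-- Hypothetical scratch table, just for the comparison
CREATE TABLE FlushDemo (id int PRIMARY KEY, val int)

DECLARE @i int, @start datetime

-- Load some rows to work against
SELECT @i = 1
WHILE @i <= 2000
BEGIN
    INSERT INTO FlushDemo (id, val) VALUES (@i, 0)
    SELECT @i = @i + 1
END

-- 1) One autocommit transaction per UPDATE: the log is flushed for every row
SELECT @i = 1, @start = GETDATE()
WHILE @i <= 2000
BEGIN
    UPDATE FlushDemo SET val = val + 1 WHERE id = @i
    SELECT @i = @i + 1
END
SELECT DATEDIFF(ms, @start, GETDATE()) AS ms_flush_per_row

-- 2) The same updates inside one explicit transaction: the log only has to
--    be flushed at the COMMIT (or when the log buffer fills), not per row
SELECT @i = 1, @start = GETDATE()
BEGIN TRANSACTION
WHILE @i <= 2000
BEGIN
    UPDATE FlushDemo SET val = val + 1 WHERE id = @i
    SELECT @i = @i + 1
END
COMMIT TRANSACTION
SELECT DATEDIFF(ms, @start, GETDATE()) AS ms_flush_at_commit

DROP TABLE FlushDemo

The second loop is usually noticeably faster, and the difference is mostly time spent waiting on log flushes.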
If you wrap the whole loop in a transaction it will be more efficient: the
changes are still made to the in-memory structures and the log entries are
still written, but the log doesn't have to be flushed to disk after each
update, only once when you commit.
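A sketch of that change, reusing the loop from your post (the cursor declaration, the initial FETCH, and any error handling are left out, and {some values} is still just your placeholder):

BEGIN TRANSACTION

WHILE @@FETCH_STATUS = 0
BEGIN
    WHILE @ldt_min <= @ldt_max
    BEGIN
        UPDATE Deal_Details_ZN
        SET {some values}
        FROM Deal_Details_ZN a, Temp_Zainet_Detail b
        WHERE a.Xkey = b.Xkey

        SELECT @ldt_min = @ldt_min + 1
    END

    FETCH NEXT FROM Deal_Cur INTO @ls_xkey, @ls_zkey, @ls_dealxref, @ls_flowdate
END

COMMIT TRANSACTION  -- one log flush here instead of one per UPDATE

CLOSE Deal_Cur
DEALLOCATE Deal_Cur

If holding one transaction over the whole cursor turns out to be a problem (locks, log space), committing every few thousand rows is a common middle ground.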
David

David, thanks for your reply.
Is this cursor fully loaded into memory as a data structure the size of
the SELECT that the cursor is based on?

We are having problems where the server runs out of free pages, which locks
the server up. I am trying to determine the cause of the failure, and things
like this are suspicious.

So in this situation, every update gets logged and flushed to disk at each
UPDATE statement? Would this cause an issue with memory? Or would holding a
transaction open until the cursor finishes take up more memory?
Did you try re-writing the logic without a cursor?
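If {some values} only needs columns from the two tables, that is, if the per-row cursor variables aren't actually required in the SET list (an assumption on my part), the whole loop might collapse into a single joined UPDATE, roughly:

UPDATE a
SET    {some values}
FROM   Deal_Details_ZN a
       JOIN Temp_Zainet_Detail b
         ON a.Xkey = b.Xkey

A single UPDATE like that is one atomic statement, so you get the flush-once-at-the-end behavior David described without having to manage a transaction around a loop.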
--
Tibor Karaszi, SQL Server MVP
Archive at: http://groups.google.com/groups?oi=djq&as ugroup=microsoft.public.sqlserver
No, I just wanted to get an idea of what happens with memory consumption
with that particular cursor.

Tibor Karaszi wrote:
> Did you try re-writing the logic without a cursor?