Many times I have been asked by the development team, "Why has this query become slow when it used to run quickly?" I think Jonathan Lewis answered this question well with the following list of possible causes (a few diagnostic sketches that can help narrow these causes down follow the list):
- A change in execution plan due to unlucky bind variable peeking with histograms
- A change in execution plan because the statistics have been slowly going out of sync with the data/query
- A change in execution plan because new (and unlucky) statistics have been generated since the last time the query ran.
- A change in execution plan because a few new, empty, partitions have been added to a critical table
- An unlucky change in execution plan because a small change in actual data volume (with correct stats in place) can have a dramatic impact on the shape of the plan
- A change in execution plan because you had an incomplete set of hints in your SQL and your luck just ran out
- An unlucky change in data order that has a big impact on the success of subquery caching
- A small slippage in timing that results in the SQL suffering a direct data collision (locking / read-consistency) with some other process.
- A small slippage in timing that leaves the SQL running concurrently with something that is processing unrelated data but competing for the raw (disc / CPU / PX Slave) resources.
- A small slippage in timing that means some other process cleared everything you wanted from the buffer cache before your SQL started running.
- A small slippage in timing that means a process that normally follows your SQL preceded it, and you’re spending all your time on delayed block cleanout.
- Someone dropped some indexes (or made them invalid)
- Someone rebuilt one or more indexes since the last time the SQL executed
- Someone has done a “shrink space” on a critical table since the last time you ran the SQL
Reference:
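
Most of the causes in the first half of the list show up as a change in execution plan. Assuming AWR is available (it needs the Diagnostics Pack licence), a sketch like the one below can confirm whether the plan hash value actually changed for the statement and when each plan was in use. `:sql_id` is a placeholder for the SQL_ID under investigation.

```sql
-- Sketch: plan history of one statement from AWR (Diagnostics Pack required).
-- :sql_id is a placeholder for the statement being investigated.
SELECT s.plan_hash_value,
       MIN(sn.begin_interval_time)                      AS first_seen,
       MAX(sn.end_interval_time)                        AS last_seen,
       SUM(s.executions_delta)                          AS execs,
       ROUND(SUM(s.elapsed_time_delta) /
             NULLIF(SUM(s.executions_delta), 0) / 1000) AS avg_elapsed_ms
FROM   dba_hist_sqlstat  s
JOIN   dba_hist_snapshot sn
       ON  sn.snap_id         = s.snap_id
       AND sn.dbid            = s.dbid
       AND sn.instance_number = s.instance_number
WHERE  s.sql_id = :sql_id
GROUP  BY s.plan_hash_value
ORDER  BY first_seen;
```

If the slow runs line up with a new PLAN_HASH_VALUE, the next question is why the optimizer changed its mind (statistics, bind peeking, new partitions, and so on); DBMS_XPLAN.DISPLAY_AWR can show the old and new plans side by side.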
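For the bind-peeking and statistics causes specifically, two quick checks are whether the cursor has been marked bind-sensitive/bind-aware, and whether the predicate columns carry histograms and fresh statistics. This is only a sketch; APP.ORDERS is a placeholder for the real owner and table in the query.

```sql
-- Sketch: is the cursor affected by bind peeking / adaptive cursor sharing?
SELECT sql_id, child_number, plan_hash_value,
       is_bind_sensitive, is_bind_aware, executions
FROM   v$sql
WHERE  sql_id = :sql_id;

-- Sketch: histograms and statistics freshness on the table involved.
-- APP.ORDERS is a placeholder for the real owner/table.
SELECT column_name, num_distinct, histogram, last_analyzed
FROM   dba_tab_col_statistics
WHERE  owner = 'APP' AND table_name = 'ORDERS';

SELECT num_rows, last_analyzed, stale_stats
FROM   dba_tab_statistics
WHERE  owner = 'APP' AND table_name = 'ORDERS';
```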
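The "small slippage in timing" causes (locking, read-consistency work, competition for disc/CPU/PX resources, a cold buffer cache, delayed block cleanout) usually show up as a change in wait profile rather than a change in plan. A sketch against Active Session History (again Diagnostics Pack), using the last hour as an example window:

```sql
-- Sketch: what was the statement waiting on during the slow window?
-- Diagnostics Pack required; :sql_id is the statement of interest.
SELECT NVL(event, 'ON CPU') AS event_or_cpu,
       COUNT(*)             AS ash_samples
FROM   v$active_session_history
WHERE  sql_id = :sql_id
AND    sample_time > SYSTIMESTAMP - INTERVAL '1' HOUR
GROUP  BY NVL(event, 'ON CPU')
ORDER  BY ash_samples DESC;
```

Enqueue waits point at locking, a burst of "db file" reads suggests the buffer cache no longer held the data, and a SELECT that suddenly generates redo is a classic sign of delayed block cleanout.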
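For the last three causes (dropped or invalid indexes, rebuilds, shrink space), the data dictionary records what changed. A sketch against DBA_INDEXES and DBA_OBJECTS; APP.ORDERS is again a placeholder:

```sql
-- Sketch: index status and when each index was last touched by DDL.
-- APP.ORDERS is a placeholder for the real owner/table.
SELECT i.index_name,
       i.status,          -- UNUSABLE indexes are skipped by the optimizer (default settings)
       i.last_analyzed,
       o.last_ddl_time    -- recent DDL hints at a drop/rebuild/shrink
FROM   dba_indexes i
JOIN   dba_objects o
       ON  o.owner       = i.owner
       AND o.object_name = i.index_name
       AND o.object_type = 'INDEX'
WHERE  i.table_owner = 'APP'
AND    i.table_name  = 'ORDERS';
```

For partitioned indexes, DBA_INDEXES shows STATUS = 'N/A'; the per-partition status is in DBA_IND_PARTITIONS.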
It's interesting to know the actual cause of such an impact. It also needs awareness from the developers' side while coding.