
The Oracle (tm) Users' Co-Operative FAQ

How can I tune a tablescan?


Author's name: Jonathan Lewis

Author's Email: Jonathan@jlcomp.demon.co.uk

Date written: 4th Jan 2007

Oracle version(s): General

The developers have a case where a Full Table Scan is less costly than an index scan because of the low selectivity of the index. If we supply a parallel hint the cost is further reduced. Is there any other way to tune a tablescan?


This particular example is typical of the generic case. We have a query like the following on a 40GB table, but the index is ‘not selective’:

select
     {list of columns}
from
     very_large_table t
where
     colX = 'ABC'
order by
     colY
;

The first problem with this question is one of ambiguity – when the poster says “cost”, do they really mean the figure reported in the COST column of the execution plan table, or do they mean the resource consumption (e.g. the number of physical disk reads) when the query actually runs?

In the former case, it is not necessarily safe to assume that there will be a direct correlation between the COST, the resources used, and the run-time. In the latter, there may not be a direct correlation between resource usage (e.g. the number of physical disk I/Os) and the run-time when the query is transferred from development to production, because there may be more competition for physical resources on production, so response times may get worse.
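If you want to see the two measures side by side, here is a quick sketch (using the query from the example above, with colY standing in for the select list; dbms_xplan and autotrace are standard Oracle / SQL*Plus features):

explain plan for
select colY from very_large_table t where colX = 'ABC' order by colY;

select * from table(dbms_xplan.display);   -- reports the optimizer's COST figure

set autotrace traceonly statistics         -- SQL*Plus: report the actual work done
select colY from very_large_table t where colX = 'ABC' order by colY;
-- compare "physical reads" and "consistent gets" with the COST estimate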

So how can you tune a full table scan for very large tables? In a very real sense, you can’t. Once you are committed to a full tablescan, that’s it – you have to scan every block (and every row) in the table, typically using multiblock reads. The block scans are likely to generate a lot of I/O; the row examinations will consume a lot of CPU.

Of course, there are a few steps you can take to minimize the resource requirement – but their side effects need to be considered carefully.

Can you make the scan faster?

In principle, yes, a little. Increasing the size of the parameter db_file_multiblock_read_count so that the read requests are fewer and larger may help. In practice it is not always that simple. Apart from the side-effects on the optimizer’s calculation, you may find that odd interference effects from the various hardware levels actually make it hard to find an optimum setting for the parameter. Things like disk track size, disk buffer size, network buffer size, O/S stripe size and so on can result in some very odd “impedance” effects. You may have to find a best fit for your system by a tedious process of trial and error.
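If you do experiment, a session-level setting at least keeps the test local to your own session – a minimal sketch (the value 128 is purely illustrative, not a recommendation):

alter session set db_file_multiblock_read_count = 128;

select count(*) from very_large_table;     -- time this, then repeat with other values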

In practice, you may find that the hardware “sees” your big tablescan starting and adopts a read-ahead strategy that makes your tuning attempt redundant. (This is actually a problem with mixed-load SANs: they tend to get over-enthusiastic about tablescans at the cost of random I/Os – so on average their rate of throughput can look good while the end-users (and v$session_event) are complaining about slow disk response times.)
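If you want to check what a session is actually waiting for, this is a sketch of the sort of query you might run (v$session_event is a standard dynamic performance view; :sid is whichever session you are examining):

select
     event, total_waits, time_waited_micro/1000000 as seconds_waited
from
     v$session_event
where
     sid = :sid
and  event like 'db file%'                 -- scattered/sequential read waits
order by
     time_waited_micro desc
;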

Would parallelism help?

In principle, yes; but only for the query itself. For a tablescan at parallel degree N, the table would effectively be broken up into N parts, allowing N separate processes to scan and extract the required data before forwarding the minimum possible data volume on to another set of processes to handle the sorting. So long as the number of slaves was not sufficient to overload the I/O subsystem, and provided the slaves didn’t end up colliding on the same disks all the time, the speed of the scan would improve by a factor of roughly N.
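As a sketch, the hinted version of the original query might look like the following (the degree of 8 is purely illustrative):

select /*+ full(t) parallel(t, 8) */
     {list of columns}
from
     very_large_table t
where
     colX = 'ABC'
order by
     colY
;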

On the other hand, by making this query run as rapidly as possible – which means the highest degree of parallelism you can get away with – you are likely to put a lot of stress on the I/O subsystem. Do you really want to do that, given that it will probably slow everything else down?

Can you do something to cache the table?

At 40 GB, probably not (but who knows – maybe you have 128GB of memory to play with, and that might be enough to let you put this table into a keep pool); and whatever you do about caching, you’re still going to be using a lot of CPU looking at all those rows.
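For the record, the mechanics of the keep pool are simple enough – a sketch only, with the 48G figure invented for the example (scope=both assumes you are running with an spfile, and the table still has to be scanned once to populate the pool):

alter system set db_keep_cache_size = 48G scope = both;

alter table very_large_table storage (buffer_pool keep);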

Is it really true that the index has very poor selectivity?

If so, why are you running a query that (apparently) is going to fetch a huge number of rows? Do you really need to fetch all those rows, or are you really after just the first few?

If the volume is actually relatively low, then perhaps a “bad” index is still better than doing a tablescan – it may protect your I/O subsystem for other users.

If the volume of data is high but you only want the first few rows, how about (in the example above) a function-based index of the form decode(colX,'ABC',colY,null) – which holds entries for just the rows you are interested in, with the index entries in the right order for your order by clause? If you do this, you could re-write your query to force a range scan through this index, stopping after a few rows, rather than acquiring all the rows and sorting them before discarding most of them.
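As a sketch (the index name is invented; because rows where the single-column key evaluates to null do not appear in a B-tree index at all, the index holds only the 'ABC' rows, already sorted by colY):

create index vlt_abc_idx on very_large_table(
     decode(colX, 'ABC', colY, null)
);

select /*+ index(t vlt_abc_idx) */
     {list of columns}
from
     very_large_table t
where
     decode(colX, 'ABC', colY, null) is not null
order by
     decode(colX, 'ABC', colY, null)
;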

Would partitioning (on the colX column in this case) work for you?

A list partition strategy, or possibly even a hash partition strategy, could break the table into a reasonable number of much smaller chunks. You would still have to do a ‘full table scan’, but it would be a scan of just one partition, which could be much smaller, so it would use fewer resources, run more quickly, and cause much less damage. But can you partition on this column – or would it interfere with all the other functions of the system? And if it does, could it still be worth the penalty?
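As a sketch of the list-partition option (the column datatypes and partition names are invented for the example):

create table very_large_table (
     colX varchar2(10),
     colY varchar2(10)
     -- ... other columns
)
partition by list (colX) (
     partition p_abc   values ('ABC'),
     partition p_other values (default)
);

With this in place, a predicate like colX = 'ABC' allows partition pruning, so the ‘full tablescan’ touches only partition p_abc.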

So, although you can’t really “tune” a large tablescan, there are some strategies you can investigate to see if you can find ways of reducing or restructuring the amount of work that you need to do. Whether any of them is appropriate for your system only you can choose.


Further reading: N/A


