So…let’s say you’re moving your database from the antique hardware it’s running on (the old hardware is always relatively an antique, right?) to the latest, greatest hardware. You’re also upgrading your storage to a new SAN or NAS. The storage architect says, “Yep…according to this shiny marketing page on the storage vendor’s website, it’s the fastest storage in its class, so it should be good enough…and we can thin provision and deduplicate and do backups with snaps and pool all the storage into one monster ring of storage to rule them all! And since disk drives are cheaper when you buy them in 3TB versions, we’ll save lots of money by getting rid of our smaller spindles and using these big SATA spindles!”
Ok…maybe I’m pessimistic, but whenever I hear a superlative like “fastest”, a red flag goes up in my head…and although the great features of modern storage might be worth the cost, there’s usually some overhead involved. Also, the term “larger spindles” usually translates in my head to “fewer IOPS”, unless you short stroke them (check out my “How much faster is Exadata High Performance than High Capacity” post for details on that). Then again, if you’re short stroking, they really don’t have that much more usable capacity, and you have to use the capacity you do have more intelligently.
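To see why “larger spindles” can mean “fewer IOPS”, a little back-of-the-envelope math helps. The per-spindle IOPS figures below are rough rules of thumb I’m assuming for illustration (roughly 180 IOPS for a 15K SAS drive, 80 for a 7.2K SATA drive), not vendor specs, and they ignore RAID write penalties and array cache:

```python
import math

def spindles_needed(target_iops, iops_per_spindle):
    """Rough count of drives needed to serve target_iops.

    Ignores RAID write penalty and controller cache, so treat the
    result as a floor, not a sizing recommendation.
    """
    return math.ceil(target_iops / iops_per_spindle)

# Assumed (illustrative) per-spindle figures: 15K SAS ~180 IOPS, 7.2K SATA ~80 IOPS.
sas_15k = spindles_needed(20000, 180)
big_sata = spindles_needed(20000, 80)

print(sas_15k, big_sata)  # the big SATA drives need far more spindles
```

So even though each 3TB SATA drive holds more data, you may need more than twice as many of them to serve the same random IO load.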
…and in general, marketing pages on websites are usually only correct under extreme examples chosen to make the vendor’s hardware look good. Testing…and quantifying…performance is always necessary.
So you’ve run your ORION tests and you now know that your new storage is capable of 500,000 IOPS. Is that enough for your databases? Here’s a query you can use (for non-RAC) to find out. It goes through the AWR data in your database to find the busiest hours and the number of IOPS you used during them. It’s possible that this is lower than the peak number of IOPS you actually used, since each figure is an average over an hour, so treat these results as minimums.
I looked around on the internet and couldn’t find anything that met my requirements…the solution that came closest ignored the fact that a physical IO can be a read or a write…which is a pretty big miss. In my situation, I actually had multiple databases using the same SAN, so I took these results and loaded them into a table, where I could sum them up by hour and see when the overlapping busiest IOPS requirements were, and how high they were.
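That “sum it up by hour across databases” step is just a group-and-add. Here’s a minimal sketch of the idea, with made-up database names and IOPS numbers, assuming you’ve already collected each database’s per-hour results from the query below:

```python
from collections import defaultdict

# Made-up per-database results: hour -> IOPS, as produced by the AWR query.
per_db_iops = {
    'mydb01': {'01/05/11 08:00': 42000, '01/05/11 09:00': 15000},
    'mydb02': {'01/05/11 08:00': 31000, '01/05/11 10:00': 9000},
}

# Total the load on the shared SAN hour by hour.
combined = defaultdict(int)
for db, hours in per_db_iops.items():
    for hour, iops in hours.items():
        combined[hour] += iops

# The hour where the overlapping demand peaks.
peak_hour, peak_iops = max(combined.items(), key=lambda kv: kv[1])
print(peak_hour, peak_iops)
```

The combined peak is what the new storage actually has to absorb, which can be much higher than any single database’s busiest hour.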
Looking back, I should have written this using SQL’s lag function, but…this works. Keep in mind, this counts what Oracle believes to be physical IO…if you’re not on ASM (you should be) you may be using the OS filesystem cache…and even if you are on ASM, you may be caching on your NAS or SAN…both situations are hidden from Oracle. When I put this together, there were a couple of issues I had to overcome.
1. The stat_id for physical reads/writes was different from db to db…even between databases on the same version. I’m not sure why this is…but I suspect they started life on different versions and were upgraded to the same one.
2. This data accumulates from snap to snap since the last bounce, so I had to compare each snap to the previous one to find the delta, then find the number of seconds between those snaps and divide to get IO/sec. After a database bounce, the accumulator resets to zero, which would report a negative value, so I filtered out the negative results.
3. Snaps aren’t exactly as long as you think they are…and they’re configurable…so you might have them set for 15 minutes or an hour…and they may actually run for 15 minutes and 1 second. The query takes the actual length of each snap into account.
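Points 2 and 3 boil down to some simple arithmetic: diff each snap’s cumulative counters against the previous snap, divide by that interval’s actual elapsed seconds, and throw away negative deltas. A minimal sketch with made-up snap data:

```python
# Made-up snaps: (snap_id, cumulative reads, cumulative writes, elapsed seconds).
# The counters accumulate since instance startup, so a drop means a bounce.
snaps = [
    (100, 1_000_000, 400_000, 3601),
    (101, 1_900_000, 700_000, 3599),  # note: not exactly 3600 seconds
    (102,    50_000,  20_000, 3600),  # counters reset: instance bounced
    (103,   500_000, 200_000, 3600),
]

iops_by_interval = []
for prev, cur in zip(snaps, snaps[1:]):
    d_reads = cur[1] - prev[1]
    d_writes = cur[2] - prev[2]
    if d_reads < 0 or d_writes < 0:
        continue  # bounce between these snaps: discard the negative delta
    # Divide by the interval's real length, not a nominal 3600 seconds.
    iops_by_interval.append((cur[0], (d_reads + d_writes) // cur[3]))

print(iops_by_interval)
```

Snap 102 is skipped because its counters went backwards, which is exactly the filtering the SQL below does with its negative-delta check.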
I’m sure this isn’t perfect, but its worked well for every non-RAC databases I’ve run it on. It might work for RAC too…I just haven’t had the need to gather IOPS requirements from a RAC database…usually the migration target is in RAC, not the source. I hope this helps you in your struggle against the vendor-marketing-website-page-believing storage administrator in your life.
-- snaps: cumulative physical read/write IO counts per AWR snapshot
with snaps as (
select hiof1.snap_id, sum(hiof1.value) reads, sum(hiof2.value) writes
from sys.wrh$_sysstat hiof1, sys.wrh$_sysstat hiof2
where hiof1.stat_id in (select stat_id from v$statname where name like '%physical read total IO%')
and hiof2.stat_id in (select stat_id from v$statname where name like '%physical write total IO%')
and hiof2.snap_id = hiof1.snap_id
group by hiof1.snap_id
-- my_snaps: the actual elapsed seconds of each snapshot interval
), my_snaps as (
select snap_id, instance_number, begin_interval_time, end_interval_time,
extract(second from (end_interval_time-begin_interval_time))+
(extract(minute from (end_interval_time-begin_interval_time))*60)+
(extract(hour from (end_interval_time-begin_interval_time))*60*60) seconds
from dba_hist_snapshot
)
select 'mydb01' db_name, sample_hour, (rps+wps) iops
from (
select s1.snap_id snap_1, s2.snap_id snap_2,
to_date(to_char(begin_interval_time,'MM/DD/YYYY HH24'),'MM/DD/YYYY HH24') sample_hour,
sum(s2.reads-s1.reads) reads, sum(s2.writes-s1.writes) writes,
trunc(sum(s2.reads-s1.reads)/sum(seconds)) rps,
trunc(sum(s2.writes-s1.writes)/sum(seconds)) wps
from snaps s1, snaps s2, my_snaps ms
where s2.snap_id = s1.snap_id + 1
and ms.snap_id = s2.snap_id
group by s1.snap_id, s2.snap_id, to_date(to_char(begin_interval_time,'MM/DD/YYYY HH24'),'MM/DD/YYYY HH24')
-- negative deltas mean the instance bounced between snaps, so drop them
having sum(s2.reads-s1.reads) >= 0 and sum(s2.writes-s1.writes) >= 0
) order by 3 desc;
Keep in mind, Oracle considers querying the AWR tables to require the Diagnostics Pack license…so if you run this query on a database that isn’t licensed for it…shame on you. Jonathan Lewis has a nice post showing how to clean the audit trail.
For example output, here are the results from a busy database I’ve recently worked on. I changed the db name and times, but not the IOPS (yes, that’s 210 million physical IOPS…from Oracle’s perspective):
SAMPLE_HOUR
-----------------
01/05/11 08:00 AM
01/25/11 11:00 PM
01/26/11 01:00 AM
09/04/12 03:00 PM
07/26/11 02:00 AM
06/04/12 02:00 PM
05/19/12 02:00 PM
04/20/12 01:00 PM
03/06/11 11:00 AM
09/17/12 07:00 PM