Yesterday I migrated my dual-Xeon 4GB RAID1 rrd setup to a quad-Opteron 8GB
RAID10 server, in the hope of reducing the load.
I am managing 18k RRDs, each updated randomly every ~10 minutes over ssh,
which works out to roughly 30 file updates/s (18,000 files / 600 s).
I had many performance problems with the new server, which I was able to
keep under control (i.e. the server stays alive) by using XFS instead of
ext3 and RAID10 instead of RAID5. The underlying disk technology is the
same (SCSI 10k drives attached to an LSI 1030 controller), but the box
brand is different.
> Now I don't understand why on earth it used to work so well on the old
> server with rrdtool 1.2.10 ...
I wonder if it is the defaults for readahead. See "man blockdev" and search
for readahead. You might check what your defaults were on the old kernel,
temporarily apply them to your drives under the new kernel, and see
what happens with 1.2.10.
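
If it helps, here is a minimal sketch (Python, just to spell the steps out;
the device path and the readahead value are placeholders, substitute your
own RAID device and whatever the old kernel reported) of reading the current
readahead and temporarily applying the old value with blockdev:

    #!/usr/bin/env python
    # Sketch: read a device's current readahead and temporarily apply the
    # value the old kernel used, via blockdev. Values are in 512-byte
    # sectors. /dev/sda and 120 are placeholders. Needs root.
    import subprocess

    DEV = "/dev/sda"        # placeholder: your RAID device
    OLD_KERNEL_RA = 120     # placeholder: readahead seen on the old kernel

    current = subprocess.check_output(["blockdev", "--getra", DEV])
    print("current readahead: %s sectors" % current.strip().decode())

    # temporarily apply the old kernel's value, then re-test rrdtool 1.2.10
    subprocess.check_call(["blockdev", "--setra", str(OLD_KERNEL_RA), DEV])

Note that blockdev reports and sets readahead in 512-byte sectors, so make
sure you compare like with like between the two kernels.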
On Fri, Oct 26, 2007 at 05:53:33AM -0700, Jason Fesler wrote:
> I wonder if it is the defaults for readahead. See "man blockdev" and search
> for readahead. You might check what your defaults were on the old kernel,
> temporarily apply them to your drives under the new kernel, and see
> what happens with 1.2.10.
Yeah, I was planning on doing that; however, my production system is now
up and running again, so... ;)
So the readahead went from 120 to 512. Does that mean about 4.27 times more
data was being read off the drive? That alone doesn't account for the
difference, though: I went from an average read of 730KB/s to 36.2MB/s
(sorry, my previous values were wrong, I mistook Avg for Max), which is a
factor of more than 50(!). So unless the response is non-linear, I guess I
have to look elsewhere...
I still suspect a 64-bit issue; what do you think?
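
(For the record, the back-of-the-envelope check I'm doing, as a throwaway
Python snippet using just the figures quoted above:)

    # readahead increase vs. observed read-throughput increase
    ra_ratio = 512 / 120.0            # readahead: ~4.27x
    io_ratio = 36.2 * 1024 / 730.0    # 730 KB/s -> 36.2 MB/s: ~50x
    print("readahead x%.2f, read throughput x%.1f" % (ra_ratio, io_ratio))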
Now that I use rrdtool 1.2.99999[...], the average read dropped even further,
to 84KB/s, which is consistent with the author's announcement (10x more
updates per second).