I'm slowly tracking down the issues with sync. There are three or four.
The main problem appears to be simply the overhead of the locking,
plus the overhead of a number of recent additions. For example,
VOP_GETVOBJECT(). The core vnode scanning loop is extremely
cycle-sensitive when you have hundreds of thousands of vnodes. Even
just adding a subroutine call can triple the overhead.
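To see why one call matters, here is a simplified userland sketch (not the real kernel code; "v_object" and the accessor are hypothetical stand-ins for the kind of check VOP_GETVOBJECT() performs). The scan body runs once per vnode, so one extra call per iteration becomes hundreds of thousands of extra calls per sync:

```c
#include <stddef.h>

struct vnode {
    void *v_object;             /* stand-in for the vnode's VM object pointer */
};

/* Inline variant: one load and one branch per vnode. */
static size_t
count_inline(const struct vnode *vp, size_t n)
{
    size_t count = 0;
    for (size_t i = 0; i < n; i++)
        if (vp[i].v_object != NULL)
            count++;
    return count;
}

/* Stand-in for a VOP-style accessor: same answer, but now every
 * iteration pays for a subroutine call (and, in the kernel, whatever
 * locking the VOP does). */
static void *
get_vobject(const struct vnode *vp)
{
    return vp->v_object;
}

static size_t
count_via_call(const struct vnode *vp, size_t n)
{
    size_t count = 0;
    for (size_t i = 0; i < n; i++)
        if (get_vobject(&vp[i]) != NULL)
            count++;
    return count;
}
```

Both functions compute the same thing; only the per-iteration cost differs, which is the whole point when the loop runs a few hundred thousand times.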
I'm integrating a number of performance fixes for sync into my
kern.maxvnodes patchset. The two are tied together, unfortunately,
due to having to change the vnode list from a LIST to a TAILQ.
I have made two performance fixes for sync so far; they are available
in the kern.maxvnodes patch #3 (for -stable only at the moment), at:
These performance fixes are to vfs_msync() and ffs_sync(). I have
not made any performance fixes to qsync() yet (which only applies if
quotas are turned on). With hundreds of thousands of vnodes present,
'sync' eats about 1/5 the CPU it ate before. The glitch is still there,
but not as pronounced. The only way to really get rid of the glitch will
be to separate the vnode list in the mount structure into two: a 'clean'
list and a 'dirty' list. This patch set is pretty messy already, so I'm
going to wait on that.
To Unsubscribe: send mail to majordomo@FreeBSD.org
with "unsubscribe freebsd-fs" in the body of the message