Aix Jfs2 Cache
Monitor with vmstat -v to see when/if it applies. You might need to do something memory-intensive
to trigger the page replacement daemon into action and take care of that 1%.
cat "somefile_sized_1%_of_memory" > /dev/null
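If you don't have a file of a suitable size lying around, you can build a throwaway one with dd. This is just a sketch: the 16 GB total-memory figure is an assumption taken from this thread, and /tmp/onepercent is an example path.

```shell
# Build a file sized at ~1% of total memory and pull it through the
# filesystem cache. Adjust mem_mb to your machine's memory size.
mem_mb=16384                                  # assumed 16 GB, per this thread
one_pct_mb=$((mem_mb / 100))                  # ~163 MB for 16 GB
dd if=/dev/zero of=/tmp/onepercent bs=1048576 count="$one_pct_mb" 2>/dev/null
cat /tmp/onepercent > /dev/null               # read it, populating file cache
rm -f /tmp/onepercent
```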
A perfectly acceptable workaround! I just set maxclient% to a value equal to minperm%
and strict_maxclient=1, and in less than 2 minutes 16GB of memory was freed. Now I will restore
the original values.
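For reference, the tuning described above would be done with vmo. This is only a sketch of an AIX-only tuning session; the percentage values are placeholders taken from this thread, not recommendations.

```shell
# Check the current values first (AIX only):
vmo -o minperm% -o maxclient% -o strict_maxclient

# Clamp maxclient% down to the minperm% value (assumed to be 3 here) and
# enforce it strictly so client (JFS2) cache gets reclaimed:
vmo -o maxclient%=3 -o strict_maxclient=1

# ...then restore the original values afterwards, e.g.:
vmo -o maxclient%=90 -o strict_maxclient=0
```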
Short version: look at the in-use clnt+pers pages in the svmon -G output (the unit is 4k pages) if
you want to know all file cache, or look at "file pages" in vmstat -v for file cache excluding
executables (same unit).
Permanent memory is file cache. If that needs to be paged out, it goes back out to the filesystem
where it came from (for dirty pages, clean pages just get recycled). This is subdivided into non-
client (or persistent) pages for JFS, and client pages for JFS2, NFS, and possibly others.
svmon -G (btw, svmon -G -O unit=MB is a bit friendlier) gives you the work versus permanent
pages. The work column is, well, work memory. You get the permanent memory by adding up the
pers (JFS) and clnt (JFS2) columns.
In your case, you've got about 730MB of permanent pages, that are backed by your filesystems
(186151*4k pages).
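As a sanity check, the 4k-page-to-MB conversion can be done with a one-liner; 186151 is the pers+clnt page count from the svmon output above.

```shell
# Convert a count of 4 KB pages (as svmon reports them) into megabytes.
pages=186151
awk -v p="$pages" 'BEGIN { printf "%.0f MB\n", p * 4096 / 1048576 }'
# prints: 727 MB
```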
Now the topas top-right "widget" FileSystemCache (numperm) shows something slightly
different, and you'd get that same data with vmstat -v: that's only non-computational permanent
pages. i.e. same thing as above, but excluding pages for executables.
In your case, that's about 350MB (2.2% of 16G). Either way, that's really not much cache.
Your vmstat output tells me more than just your current situation: since it is the single-line
output, the values are historical averages since boot, and I suspect you have been having memory
issues, as it shows a history of pi/po (paging-space page-ins/page-outs).
pi: paging-space page-ins (i.e., application memory read back in from paging space)
po: paging-space page-outs (memory is stolen and application, aka working, memory is written to
paging space). Only working memory goes to/from paging space.
In your output you show pi=22 and po=7. This means that, on average, the system was reading
information back from paging space (after it had been written out) about 3x more often than it was
writing data. This is an indication of a starved system: data is read in (pi) and then stolen again
(sr/fr) before it is ever touched (referenced, aka used), or read in and removed again before the
application 'waiting' for it ever has a chance to access it.
In short, the data presented is not 'in sync' with the 'pain' moments - although it might explain why
only 2.2% of your memory is now used for caching (it may even be 'computational aka the loaded
programs').
As far as vmstat goes, I also suggest the flags -I (capital i, which adds 'fi' and 'fo', file page-in
and file page-out activity) and -w (wide, so the numbers are better positioned under the column
headers).
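A rough sketch of pulling the paging columns out of a vmstat -I data line with awk. The field positions are an assumption based on the usual 'vmstat -I' header order (r b p avm fre fi fo pi po fr sr in sy cs us sy id wa); check your own header before relying on them, and note the sample line below is made up for illustration.

```shell
# Extract fi/fo/pi/po/fr/sr from a 'vmstat -I' data line (assumed columns).
sample='1 1 0 524288 12345 120 80 22 7 300 900 450 2000 600 10 5 80 5'
echo "$sample" | awk '{ printf "fi=%s fo=%s pi=%s po=%s fr=%s sr=%s\n", $6, $7, $8, $9, $10, $11 }'
# prints: fi=120 fo=80 pi=22 po=7 fr=300 sr=900
```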
$ svmon -P -O filtertype=working,segment=off,filtercat=exclusive,unit=MB
Unit: MB
-------------------------------------------------------------------------------
Pid Command Inuse Pin Pgsp Virtual
5505172 svmon 10.7 0.19 0.44 11.4
6553826 ksh 0.57 0.02 0 0.57
9175288 ksh 0.55 0.02 0 0.55
12910710 sshd 0.55 0.02 0 0.55
15204356 sshd 0.52 0.02 0 0.52
12779760 head 0.18 0.02 0 0.18
You may want to look at a specific command - so switching back to root to look at httpd
Summary:
svmon -C httpd -O filtertype=working,segment=off,filtercat=exclusive,unit=MB
Unit: MB
======================================================================
Command Inuse Pin Pgsp Virtual
httpd 227.44 0.69 0 227.44
Details: excerpt
# svmon -C httpd -O filtertype=working,segment=category,filtercat=exclusive,unit=MB >
Unit: MB
======================================================================
Command Inuse Pin Pgsp Virtual
httpd 230.62 0.81 0 230.62
...............................................................................
EXCLUSIVE segments Inuse Pin Pgsp Virtual
230.62 0.81 0 230.62
...............................................................................
SYSTEM segments Inuse Pin Pgsp Virtual
34.1 33.1 2.38 35.8
...............................................................................
EXCLUSIVE segments Inuse Pin Pgsp Virtual
0.18 0.02 0 0.18
...............................................................................
SHARED segments Inuse Pin Pgsp Virtual
48.2 19.5 2.75 54.6
Vsid Esid Type Description PSize Inuse Pin Pgsp Virtual
9000 d work shared library text m 48.2 19.5 2.75 54.6
It is normal for AIX to use up most of its memory, and it doesn't release memory as quickly as
other OSes do. All of this is taken care of by AIX's Virtual Memory Manager (VMM) and the lrud
kernel process. The VMM's behavior can be tuned using the vmo command.
In AIX there are two types of pages kept in memory: computational pages (i.e. executables and
their working storage) and non-computational pages (i.e. the filesystem cache).
When AIX needs more memory, the lrud process runs to steal pages. Which type of pages lrud
removes from memory is determined by these VMM parameters: minperm%, maxperm%, and
lru_file_repage. The vmo command can be used to change these parameters.
If numperm% (the non-computational file cache) is higher than maxperm%, lrud will remove only
non-computational file pages. If numperm% is lower than minperm%, lrud will remove either
computational or non-computational pages, whichever are least recently used. In between the two
thresholds, the choice depends on lru_file_repage (with lru_file_repage=0, lrud still prefers file
pages).
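The rules above can be sketched as a small shell function. The numbers are illustrative percentages, and this only mirrors the rules as stated in this thread, not the real lrud implementation.

```shell
# Decide, per the minperm%/maxperm% rules described above, which pages
# lrud would steal. Purely illustrative; not the real algorithm.
lrud_target() {
  numperm=$1; minperm=$2; maxperm=$3
  if [ "$numperm" -gt "$maxperm" ]; then
    echo "file pages only"
  elif [ "$numperm" -lt "$minperm" ]; then
    echo "least recently used, file or computational"
  else
    echo "depends on lru_file_repage"
  fi
}

lrud_target 95 3 90   # prints: file pages only
lrud_target 2  3 90   # prints: least recently used, file or computational
```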
To determine whether AIX is having memory issues, I would look at the ratio of pages scanned (sr)
to pages freed (fr); I cannot remember where this is in the nmon output. If this ratio is high, it
shows that lrud is scanning a lot of pages to find pages it can remove from memory.
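A quick way to eyeball that ratio from vmstat's fr and sr columns; the values here are made up for illustration.

```shell
# Scan-to-free ratio: how many pages lrud examined per page it freed.
fr=300   # pages freed   (vmstat 'fr' column; illustrative value)
sr=900   # pages scanned (vmstat 'sr' column; illustrative value)
awk -v fr="$fr" -v sr="$sr" 'BEGIN { printf "sr/fr = %.1f\n", sr / fr }'
# prints: sr/fr = 3.0
```

A ratio well above 1 means lrud is scanning many pages for every page it manages to steal, i.e. it is struggling to find memory to reclaim.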
Disclaimer: My answer is based on AIX versions 5.3 - 6.0, which I worked on at my previous
company 3 - 4 years ago, but I doubt there has been any significant change in the behavior of lrud
and the VMM parameters in newer versions of AIX.