On 05.04.2012 23:41, Konstantin Belousov wrote:
> On Thu, Apr 05, 2012 at 11:33:46PM +0400, Andrey Zonov wrote:
>> On 05.04.2012 19:54, Alan Cox wrote:
>>> On 04/04/2012 02:17, Konstantin Belousov wrote:
>>>> On Tue, Apr 03, 2012 at 11:02:53PM +0400, Andrey Zonov wrote:
>>>>> This is what I expect. But why doesn't this work without reading the file
>>>> The issue seems to be some change in the behaviour of the reservation
>>>> ("reserv") or physical ("phys") allocator. I Cc:ed Alan.
>>> I'm pretty sure that the behavior here hasn't significantly changed in
>>> about twelve years. Otherwise, I agree with your analysis.
>>> On more than one occasion, I've been tempted to change:
>>> if (mt->dirty != 0)
>> Thanks Alan! Now it works as I expect!
>> But I have more questions to you and kib@. They are in my test below.
>> So, prepare file as earlier, and take information about memory usage
>> from top(1). After preparation, but before test:
>> Mem: 80M Active, 55M Inact, 721M Wired, 215M Buf, 46G Free
>> First run:
>> $ ./mmap /mnt/random
>> mmap: 1 pass took: 7.462865 (none: 0; res: 262144; super: 0; other: 0)
>> No superpages after the first run; why?
>> Mem: 79M Active, 1079M Inact, 722M Wired, 216M Buf, 45G Free
>> Now the file is in inactive memory, that's good.
>> Second run:
>> $ ./mmap /mnt/random
>> mmap: 1 pass took: 0.004191 (none: 0; res: 262144; super: 511; other: 0)
>> All super pages are here, nice.
>> Mem: 1103M Active, 55M Inact, 722M Wired, 216M Buf, 45G Free
>> Wow, all inactive pages moved to active and stay there even after the
>> process terminated. That doesn't seem right to me; what do you think?
> Why do you think this is 'not good'? You have plenty of free memory,
> there is no memory pressure, and all pages were referenced recently.
> There is no reason for them to be deactivated.
I always thought that active memory is the sum of the resident memory of
all processes, that inactive memory holds the disk cache, and that wired
memory is used by the kernel itself.
>> Read the file:
>> $ cat /mnt/random > /dev/null
>> Mem: 79M Active, 55M Inact, 1746M Wired, 1240M Buf, 45G Free
>> Now the file is in wired memory. I do not understand why that is.
> You do use UFS, right? There are enough buffer headers and enough buffer
> KVA to have buffers allocated for the whole file content. Since buffers
> wire the corresponding pages, the pages get migrated to wired.
> When buffer pressure appears (i.e., any other I/O is started), the
> buffers will be repurposed and the pages moved to inactive.
OK, how can I get the amount of disk cache?
>> Could you please give me explanation about active/inactive/wired memory?
>>> because I suspect that the current code does more harm than good. In
>>> theory, it saves activations of the page daemon. However, more often
>>> than not, I suspect that we are spending more on page reactivations than
>>> we are saving on page daemon activations. The sequential access
>>> detection heuristic is just too easily triggered. For example, I've seen
>>> it triggered by demand paging of the gcc text segment. Also, I think
>>> that pmap_remove_all() and especially vm_page_cache() are too severe for
>>> a detection heuristic that is so easily triggered.
>> Andrey Zonov