LinuxSys Probe Utility
I’ve published a small new GNU/Linux system probe utility that can:
- read cgroups information (cpusets and memory)
- read /proc information
- probe CPU affinity and memory allocation
A detailed description and usage instructions can be found at the link above. Here, let me describe a couple of use cases as an introduction.
Probe CPU affinity and memory allocation
Setting CPU affinity and allocating memory can silently fail (e.g. when cgroups constrain them). The utility uses these failures to find the boundaries:
docker run -it --rm -v `pwd`:/opt \
--cpuset-cpus="2-4" --memory="100m" alpine \
/opt/linuxsys-probe -d 10MiB -r probe
probe.cpu::affinity [0] = [*2, 3, 4]
probe.cpu::affinity [1] = [*2, 3, 4]
probe.cpu::affinity [2] = [*2]
probe.cpu::affinity [3] = [*3]
probe.cpu::affinity [4] = [*4]
probe.cpu::affinity [5] = [*2, 3, 4]
probe.cpu::affinity [6] = [2, *3, 4]
probe.cpu::affinity [7] = [2, 3, *4]
probe.cpu::affinity [8] = [2, 3, *4]
probe.cpu::affinity [9] = [2, *3, 4]
probe.cpu::affinity [10] = [*2, 3, 4]
probe.cpu::affinity [11] = [*2, 3, 4]
probe.mem::alloc 4.26 GiB = false
probe.mem::alloc 2.13 GiB = false
probe.mem::alloc 1.07 GiB = false
probe.mem::alloc 545.69 MiB = false
probe.mem::alloc 272.85 MiB = false
probe.mem::alloc 136.42 MiB = false
probe.mem::alloc 68.21 MiB = true
probe.mem::alloc 102.32 MiB = false
probe.mem::alloc 85.26 MiB = true
probe.mem::alloc 93.79 MiB = false
The * marks the active CPU core during the probe.
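The allocation sizes above follow a bisection between failing and succeeding attempts, converging on the limit set by --memory. One way to see the same CPU boundary from inside the container, similar in spirit, is to try pinning a trivial command to each CPU in turn. This is only a sketch, not the utility's actual implementation, and it assumes taskset (util-linux) and nproc are available:

total=$(nproc --all)                       # CPUs installed on the host
for cpu in $(seq 0 $((total - 1))); do
    # pinning to a CPU outside the cgroup cpuset makes taskset fail
    if taskset -c "$cpu" true 2>/dev/null; then
        echo "cpu $cpu: allowed"
    else
        echo "cpu $cpu: not allowed"
    fi
done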
Plot histogram from procfs values
This example uses the bashplotlib Python package:
pip install bashplotlib
First, run the utility in refresh mode (press Enter to stop it). Assuming 60817 is the PID of the process we examine:
./linuxsys-probe -i 1 -r proc.stat -H -p 60817 | tee results.txt
proc.stat::pid 60817
proc.stat::comm actix-web
proc.stat::state S
proc.stat::ppid 56306
proc.stat::pgrp 60817
proc.stat::session 56291
proc.stat::tty_nr 34816
proc.stat::tpgid 60817
proc.stat::flags 4194304
proc.stat::minflt 573
proc.stat::cminflt 0
proc.stat::majflt 0
proc.stat::cmajflt 0
proc.stat::utime 0
proc.stat::stime 0
proc.stat::cutime 0
proc.stat::cstime 0
proc.stat::priority 20
proc.stat::nice 0
proc.stat::num_threads 14
proc.stat::starttime 172820
proc.stat::vsize 907780096
proc.stat::rss 4407296
...
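For reference, the names above correspond to the fields of /proc/<pid>/stat (see proc(5)); note that stat reports rss in pages, while the values above appear to be converted to bytes. A quick, stand-alone cross-check of the rss field:

pid=60817                                  # the PID examined above
# strip the leading "pid (comm) " part, since comm may contain spaces;
# rss is then the 22nd remaining field and is reported in pages
rss_pages=$(sed 's/^.*) //' /proc/$pid/stat | awk '{print $22}')
echo "rss bytes: $(( rss_pages * $(getconf PAGESIZE) ))"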
Next, use hist to draw the histogram (grepping for RSS):
grep "rss " results.txt | cut -d' ' -f2 | hist
41| o
39| o
37| o
35| o
33| o
31| o
28| o
26| o o
24| o o
22| o o
20| o o
18| o o
16| o o
13| o o
11| o o
9| o o
7| o o
5| o o o
3| o o o
1| o o oo
-----------
-----------------------------------
| Summary |
-----------------------------------
| observations: 75 |
| min value: 3801088.000000 |
| mean : 10590999.893333 |
| max value: 23162880.000000 |
-----------------------------------
Conclusion
The available probes include:
- cgroup.v1 - control groups v1
- cgroup.v2 - control groups v2
- proc - various procfs information
- etc.
The probes can be run on both host and guest systems (note that the cgroup probes require the sys and cgroup filesystems to be mounted in the guest).
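If a minimal guest image lacks those mounts, something along the following lines should expose them (a sketch only; paths and the cgroup version in use may differ):

# sketch: expose sysfs and the cgroup v2 hierarchy inside the guest
mount -t sysfs sysfs /sys
mount -t cgroup2 none /sys/fs/cgroup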
The utility can be useful for research and development on virtualized platforms (e.g. verifying that cgroups are properly enforced and read) and for runtime monitoring. I hope it helps you in your work.