Odi's astoundingly incomplete notes
Gentoo updating gcc, mpfr, mpc
When Gentoo updates gcc together with mpfr and mpc, the normal emerge procedure will build gcc twice, because mpfr and mpc trigger an automatic rebuild of the (existing) gcc. But if you are going to update gcc anyway, this is an utterly pointless waste of energy.
Instead you can
emerge --ignore-built-slot-operator-deps=y -1uav mpfr mpc
first, without doing the rebuild. This may leave you with a broken gcc (though probably not, because portage preserves the old library versions), but you will run
emerge -1uav gcc
next anyway. After the gcc update don't forget to switch to the new compiler with
gcc-config
and rebuild libtool.
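For reference, the whole sequence might look like this sketch (the gcc-config profile number is only an example; list the profiles first and pick the new one):
emerge --ignore-built-slot-operator-deps=y -1uav mpfr mpc   # update mpfr/mpc, skip the automatic gcc rebuild
emerge -1uav gcc                                            # then update gcc itself
gcc-config -l                                               # list the compiler profiles
gcc-config 2                                                # switch to the new profile (number is an example)
env-update && source /etc/profile
emerge -1av libtool                                         # rebuild libtool against the new compiler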
Gentoo replacing ntp, vixie-cron, man
Gentoo is cleaning out its closet. It has removed packages that are unmaintained upstream but were still popular: ntp, vixie-cron and man. Of course it's a logical step and using the modern replacements is rational.
For net-misc/ntp, use net-misc/ntpsec: it has a way more robust configuration while eliminating ancient obscure features like traps.
For vixie-cron, use sys-process/cronie. It also integrates anacron, so you get two in one.
For man, use man-db which is faster as it uses a BDB backend instead of text files.
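A minimal sketch of the swap with emerge (assuming the old packages are still in your @world set; adjust the atoms to your system):
emerge --deselect net-misc/ntp sys-process/vixie-cron sys-apps/man
emerge -C net-misc/ntp sys-process/vixie-cron sys-apps/man
emerge -av net-misc/ntpsec sys-process/cronie sys-apps/man-db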
Java and its use of filesystem syscalls
File f = new File("build.xml");
// openat(AT_FDCWD, "build.xml", O_RDONLY) = 96
// fstat(96
InputStream io = new FileInputStream(f);
// read(96
io.read();
// read(96
io.read();
// close(96)
io.close();
Path p = f.toPath();
// openat(AT_FDCWD, "build.xml", O_RDONLY) = 96
io = Files.newInputStream(p);
// read(96
io.read();
// read(96
io.read();
// close(96)
io.close();
The NIO way of creating an input stream from a file actually saves an
fstat
syscall.
ipset's hashsize and maxelem parameters
When defining a Linux hash ipset the parameters hashsize and maxelem must be chosen.
maxelem is easy: this limits how many entries the ipset can have.
hashsize however is a tuning parameter. It defines how many hash buckets are allocated for the hashtable. This is the amount of memory that you are willing to sacrifice. It has a very coarse granularity and accepts only values that are equal to 2^n where n is 1..32.
Hashtables are most efficient (buckets mostly contain only a single key, eliminating the search within a bucket) when only 3/4 of their buckets are actually used (1/4 is free). But for large ipsets this is not practical as it would waste a lot of memory. For example for an ipset with 100'000 entries the hashsize should be at least 133'333. The next larger legal value of hashsize is 262'144 which is very wasteful (but fast).
So for such large hashtables we can't really afford to avoid the bucket search. Instead we try to find a balance between the size of a bucket and the number of buckets. If we put 8 entries into a bucket on average, we get 12'500 buckets. The next legal value for hashsize is 16'384, which in reality gets us about 6 entries per bucket on average. This should yield acceptable performance while keeping the space small enough.
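For the 100'000-entry example above the set would be created roughly like this (set name and type hash:ip are just for illustration):
ipset create blacklist hash:ip hashsize 16384 maxelem 100000
ipset list -t blacklist   # the header shows the effective hashsize and the memory usage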
Java and its use of mmap
These are the syscalls caused by Java's mapped byte buffers:
FileChannel fc = FileChannel.open(f.toPath(), StandardOpenOption.READ, StandardOpenOption.WRITE);
// mmap(NULL, 2147483647, PROT_READ|PROT_WRITE, MAP_SHARED, 4, 0)
MappedByteBuffer buf = fc.map(MapMode.READ_WRITE, 0, Integer.MAX_VALUE);
// madvise(0x7f4294000000, 2147483647, MADV_WILLNEED) = 0
buf.load();
When the buffer is garbage collected the
munmap
call happens.
How to migrate SonarQube to Postgresql
Perform the migration on an existing SonarQube installation. You cannot do it at the same time as upgrading to a newer SonarQube version!
1. Create an empty Postgresql DB (no password is used here, depending on settings in
pg_hba.conf
):
psql -U postgres
create user sonar;
create database sonarqube owner sonar;
2. Change the DB connection of the existing SonarQube installation in
sonar.properties
:
sonar.jdbc.username=sonar
#sonar.jdbc.password=
sonar.jdbc.url=jdbc:postgresql://localhost/sonarqube?currentSchema=public
3. Start up the SonarQube instance so that it creates the DB schema in Postgresql.
4. Shut down the SonarQube instance again for migration.
5. Delete the
sonar/data/es6/nodes
folder.
6. Run the mysql-migrator utility.
7. Start up the SonarQube instance again.
8. If you want to update to a newer SonarQube version then do that now.
Java and its use of epoll
In case you wonder how Java NIO uses epoll under Linux:
- The Selector allocates an epoll file descriptor and two FDs (a pipe) for timeout/wakeup. Failing to close the Selector will leak those.
- It uses epoll as a level-triggered interface (no EPOLLET)
- It is important to remove a selected key from the Selector's selectedKeys Set. Only then will the next select() call reset its readyOps.
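You can observe this yourself by stracing a small NIO program (SelectorDemo stands for any class of yours that opens a Selector and calls select()):
strace -f -e trace=epoll_create,epoll_create1,epoll_ctl,epoll_wait,pipe,pipe2 java SelectorDemo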
JDK-1.8 simplifies atomic maximizer
With Java 8 the atomic primitives have gained a very useful function: getAndUpdate(). It takes a lambda function to atomically update the value. This simplifies the previously complicated code that used compareAndSet in a loop into a one-liner.
As an example look at a piece of code that is used to keep track of a maximum value.
private AtomicInteger max = new AtomicInteger();

public void oldSample(int v) {
    int old;
    do {
        old = max.get();
    } while (!max.compareAndSet(old, Math.max(old, v)));
}

public void newSample(int v) {
    max.getAndUpdate(old -> Math.max(old, v));
}
Set your HTTP cache headers correctly
I often see sites disable caching of resources completely with really bad headers like:
Cache-Control: no-store, no-cache, must-revalidate
Expires: Wed, 4 Jun 1980 06:02:09 GMT
Pragma: nocache
It makes a lot more sense to let the client cache and tell it to check if the resource has been modified in the mean time. The easiest way to do that is to pass the Last-Modified header together with:
Cache-Control: max-age=0, must-revalidate
This will enable caching in the browser and the browser will request the resource with the
If-Modified-Since
header. The server will respond with 304 Not Modified if the resource's last-modified date is still the same, saving the transfer. If you need more control over the content of the resource and a last-modified date is not enough or cannot easily be given, you can set the ETag header. ETag is a hash or version number of the content and changes as the resource's content changes. But careful: ETag may change with the Content-Encoding (compression). Carefully test if it behaves correctly with your gateway (reverse proxy).
Maybe this practice comes from bad defaults in Apache. I have not seen any default Apache config that sets a sensible Cache-Control. Therefore no header is sent and browsers cache such responses forever; not even clicking the Reload button will fetch them again. This of course makes developers take the simple but radical option of disabling caching.
A much better default for Apache is:
Header set Cache-Control "max-age=0, must-revalidate"
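To verify the revalidation behaviour you can use curl (URL and date are placeholders):
curl -sI https://example.com/style.css | grep -iE 'cache-control|last-modified|etag'
curl -sI -H 'If-Modified-Since: Tue, 01 Jan 2019 00:00:00 GMT' https://example.com/style.css | head -1
A correctly configured server answers the second request with 304 Not Modified as long as the resource is unchanged.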
On Gentoo sshd is killed after udev is triggered
After running some updates I noticed that sshd (including active sessions) was sometimes killed. After much debugging I found the reason: udev and cgroups. It looks like udev can send kill signals to all members of its cgroup if it thinks that it's a systemd system. But on OpenRC systems that just does a lot of harm.
That udev triggering happens for example during:
- grub-install
- startup of qemu with kvm
The problem appears when udev was previously restarted with
/etc/init.d/udev -D restart
. The culprit is the -D
flag. The flag causes cgroups to not be set. So udev ends up in the main cgroup!
Note the absent udev directory under
/sys/fs/cgroup/openrc
. This also explains why the problem is fixed by a reboot.
I have filed a bug against OpenRC.
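To check whether udev really ended up in the main cgroup, something along these lines should do (the daemon may be called udevd or systemd-udevd depending on the udev provider):
ls /sys/fs/cgroup/openrc/            # a udev directory should exist here
cat /proc/$(pidof udevd)/cgroup      # shows which cgroup the daemon actually runs in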