
Odi's astoundingly incomplete notes

New Entries

How to fix broken image preview in Gentoo KDE

Due to recent updates, image previews in KDE Dolphin folders are probably broken. The culprit is the exiv2 library (which is itself a major problem).

To fix that, rebuild exiv2 first: emerge -1av exiv2

Then check what uses it: revdep-rebuild -pL libexiv2

And recompile that: emerge -1av kde-apps/libkexiv2 kde-apps/kio-extras kfilemetadata gwenview
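
Put together, the whole sequence (the exact set of consumers revdep-rebuild reports may differ on your system):
emerge -1av exiv2
revdep-rebuild -pL libexiv2
emerge -1av kde-apps/libkexiv2 kde-apps/kio-extras kfilemetadata gwenview
(-p only pretends; drop it to let revdep-rebuild do the rebuilding itself instead of the manual emerge)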

posted on 2017-06-30 09:01 CEST in Code | 0 comments | permalink

fix Stack Clash on Gentoo

The Stack Clash class of bugs can be easily prevented on Gentoo.

1. Add -fstack-check to your CFLAGS. It instructs the compiler to touch every page when extending the stack by more than one page, so the kernel will trap in the guard page. This even makes the larger stack gap in recent kernels unnecessary (as long as you don't run binaries built without this flag).

/etc/portage/make.conf:
CFLAGS="-march=native -O2 -pipe -fstack-check"

2. Recompile important libraries (like openssl) and programs (the setuid root binaries in shadow and util-linux), or simply everything: emerge -ae world
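
A minimal sketch of both steps in one place (the CXXFLAGS line and the exact package atoms are assumptions; adjust to your system):
/etc/portage/make.conf:
CFLAGS="-march=native -O2 -pipe -fstack-check"
CXXFLAGS="${CFLAGS}"

# emerge -1av dev-libs/openssl sys-apps/shadow sys-apps/util-linux
# emerge -ae world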

As always, keep your system up to date regularly: emerge -uavD world

posted on 2017-06-27 15:15 CEST in Code | 0 comments | permalink

Gentoo updates perl from 5.22 to 5.24

On desktop systems emerge usually complains that there are packages requiring 5.22 and refuses to update:
!!! Multiple package instances within a single package slot have been pulled
!!! into the dependency graph, resulting in a slot conflict:

dev-lang/perl:0

  (dev-lang/perl-5.24.1-r1:0/5.24::gentoo, ebuild scheduled for merge) pulled in by
    =dev-lang/perl-5.24* required by (virtual/perl-MIME-Base64-3.150.0-r2:0/0::gentoo, installed)
    ^              ^^^^^
    (and 8 more with the same problem)

  (dev-lang/perl-5.22.3_rc4:0/5.22::gentoo, installed) pulled in by
    dev-lang/perl:0/5.22=[-build(-)] required by (dev-perl/Digest-HMAC-1.30.0-r1:0/0::gentoo, installed)
                 ^^^^^^^^
    (and 13 more with the same problem)
To resolve that:

Forcibly update perl (-O ignores dependencies), then clean up:
# emerge -1uav perl-cleaner
# emerge -1uavO perl
# perl-cleaner --all
(repeat perl-cleaner if emerge fails)
There may still be perl virtuals that need reinstalling:
# emerge -1av $(qlist -IC 'virtual/perl-*')
This should leave you with a consistent perl build and emerge should no longer suggest a downgrade.
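
A quick sanity check afterwards (just a pretend run, not part of the procedure above):
# perl -v | head -n 2
# emerge -puvD world
perl should now report 5.24 and the pretend run should come out clean.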

posted on 2017-04-18 09:43 CEST in Code | 4 comments | permalink
Great notes & Thank you!

Lucky for me I've just returned to using Gentoo after an absence of well over 5 years. This error was popping up for me on a fresh install (arch=amd64, plasma profile). Highly annoying considering it was a fresh build.
Thank you for this howto.
You have saved me many hours of frustration with this issue, I'm glad to find this entry.
It has resolved the issue perfectly, many thanks again!
A much simpler and more reliable procedure than I found myself.
Thank you!
You only get 13 conflicts? I get 90+. Made me sad. Your post made me happy though. Thanks.

Oracle and HugePages

I have an Oracle 12 instance with a 32 GB SGA on a modern Gentoo Linux system with 48 GB of RAM. The kernel has transparent hugepages set to always but no HugePages configured.

From /proc/meminfo:
PageTables:      2577752 kB
AnonHugePages:     75776 kB
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
You can clearly see that page tables take up 2.5 GB of RAM, which of course makes memory management operations like TLB flushes really expensive. You can also see that Oracle doesn't make much use of transparent huge pages: only 75 MB are used.

After reserving 32 GB of real 2 MB HugePages (vm.nr_hugepages = 16384) the situation has become:
PageTables:       118456 kB
AnonHugePages:         0 kB
HugePages_Total:   16384
HugePages_Free:       63
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
So Oracle has put the SGA completely into HugePages, which greatly reduces the space required for page tables and makes memory management a lot more efficient.
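
For reference, this is roughly how the pages get reserved (the sysctl.d file name is arbitrary; 16384 pages x 2 MB = 32 GB, and on a busy system a reboot may be needed before that many contiguous huge pages can be allocated):
/etc/sysctl.d/hugepages.conf:
vm.nr_hugepages = 16384

# sysctl -p /etc/sysctl.d/hugepages.conf
# grep -i huge /proc/meminfo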


posted on 2017-03-29 16:49 CEST in Code | 0 comments | permalink

How to get rid of ruby-2.0 on Gentoo

As per this Gentoo News Item, Ruby 2.0 has been removed. Unfortunately it may still linger on your system until you manually intervene and remove all dependencies on it.
# equery l ruby
 * Searching for ruby ...
[IP-] [ M] dev-lang/ruby-2.0.0:2.0
[IP-] [  ] dev-lang/ruby-2.1.9:2.1

# eselect ruby list
Available Ruby profiles:
  [1]   ruby20
  [2]   ruby21 (with Rubygems) *

# emerge -1Oav virtual/rubygems dev-ruby/rubygems rdoc racc rake dev-ruby/json

# emerge --depclean
If that still doesn't automatically remove ruby:2.0 then you can forcibly uninstall it:
# emerge -C ruby:2.0
Then depclean will tell you which packages you should rebuild to clear the remaining dependencies on that version.
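
Afterwards a quick check that only the 2.1 slot is left and selected (setting the profile explicitly is only needed if it isn't active already):
# equery l ruby
# eselect ruby list
# eselect ruby set ruby21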
posted on 2017-01-03 11:24 CET in Code | 0 comments | permalink

iptables conntrack helper assignment

Long story short for a client:
[0:0] -A OUTPUT -p tcp --dport 21 -j CT --helper ftp
[0:0] -A OUTPUT -p udp --dport 137:138 -j CT --helper netbios-ns
This assigns the conntrack helpers explicitly, so FTP and NetBIOS keep working. Then you can safely set this sysctl to zero: net.netfilter.nf_conntrack_helper
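
For completeness, a sketch of how this looks in full iptables-save format (the CT target lives in the raw table) together with the sysctl, assuming the usual file locations:
*raw
:PREROUTING ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
[0:0] -A OUTPUT -p tcp --dport 21 -j CT --helper ftp
[0:0] -A OUTPUT -p udp --dport 137:138 -j CT --helper netbios-ns
COMMIT

/etc/sysctl.conf:
net.netfilter.nf_conntrack_helper = 0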

posted on 2016-12-21 15:42 CET in Code | 0 comments | permalink

Convert a Linux installation to Gentoo

A little fun script I wrote on a boring weekend.

posted on 2016-12-15 16:48 CET in Code | 0 comments | permalink

From WS-* to REST/JSON

To set the frame: during the last 10 years I have done a great many web services. In Java. Client and server side. Synchronous and asynchronous. Directly connected and decoupled via ESBs like TIBCO and queuing infrastructure like IBM MQ. And when I say a lot, I mean that a quick find in the git tree reveals about 800 WSDL files. I have worked on most of them. Most of them use CXF. Some ancient ones use Axis.

Lately WS seems to be a dying technology. New interfaces between systems are now usually requested to use REST and JSON. This comes with much joy but unfortunately also much frustration.

Joys first


Now for the frustrations


There is no standard for specifying an interface.
Well, there are several competing technologies: Swagger, RAML, WADL to name a few. And then you can often choose to write them in YAML or JSON. If we wanted to support all those technologies, we would have to use and maintain a zoo of tools. It would have really helped REST if it had standardized a single very good interface description language. For WS-* there is WSDL. And everybody was using it. And even though there was an option to use RELAX NG as the modelling language, everybody just used XML Schema. Consequently the number of available tools is endless. You always find good tools that are a joy to use. Not so in the REST world. It's a chaos and a desert at the same time.

Some developers even think it is sufficient to provide some examples of requests and responses in a PDF. Of course simple typos lead to much cursing later and make testing a lengthy and painful experience.

Multiple competing conventions
Apparently people have noticed that REST is not enough of a spec to be actually useful. So multiple conventions have popped up that tell you how to build REST services: HAL, OData to name a few. Naturally each one claims to be the best one. Again if you need to integrate many different REST services you will have to support a zoo of 'standards'.

REST/JSON is unfriendly to strongly typed languages (like Java).
WS-* with XML Schema was equally horrible for dynamically typed languages (like JS). But there was no need to repeat that same mistake. Parsing is already harder than necessary. A JSON object has no name! It starts with a brace followed by a list of name/value pairs. So in a stream I have no idea what type of object I am going to look at and which properties I should expect. I have to know from some other source what I am going to parse.
Nothing prevents a system from throwing an array containing various types of objects at you. Without knowing exactly what type of object to expect at which array index there is simply no way of mapping that into an object model. You have to resort to a generic representation with maps which will cause you more pain later.

The number of defined datatypes in JSON is low, which I consider a good thing. But it lacks two things. There is no fixed-point decimal type: JSON's numbers are by default interpreted as double-precision floating point numbers, which is inadequate for things like quantities or prices. Also it doesn't define any type for date and time. Most of the time people use a string and interpret it using the XML Schema date format (ISO 8601). Given the difficulties involved with dates and timezones there is a need for a good data type for date and time.
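
To illustrate the usual workaround (the field names are made up): the price is carried as a string to keep it exact, and the timestamp as an ISO 8601 string:
{
  "price": "19.90",
  "quantity": 3,
  "orderedAt": "2016-12-14T14:01:00+01:00"
}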

REST/JSON is not good for RPC
A remote procedure call is a call that has an ordered, fixed number of arguments whose types are constant and a single return value of constant type, with the assumption that arguments and return value are serializable. This definition matches function call definitions in all but exotic languages. That made it a hugely successful concept. WS-* extended it a little with Faults (exceptions) to ease integration with modern languages like C# or Java. Somehow the REST world decided that it should go a different route. Arguments in REST can be scattered throughout various places: HTTP methods (GET, POST, ...), HTTP headers (including Cookies), URI components, URL parameters and the HTTP entity (body message). Return values / objects usually depend on the HTTP status code. This could in theory be mapped to the simple RPC model, but I have yet to see any good infrastructure that does that well. JAX-RS is not nice here: all return values are simply a generic object, from which you extract the HTTP status code and then ask the object to interpret the return value as a specific type. Non-200 responses should have been mapped to exceptions in my opinion and the 200 response should have been chosen as the return type. But now we have this messy "generic" RPC style which is totally not type safe, which means that avoidable mistakes only show up at runtime instead of at compile time.

Swagger doesn't even force the developer to define (named) types at all. You could simply list arguments and return types inline even if they are large complex objects. Code generators for typed languages then have no other option than to generate silly class names (or let the poor developer specify them via configuration), producing a horrible maze of classes that are hard to use. It's not what you want when the logic is already complex. Good names help.

REST/JSON is not good for messaging
By messaging I mean an ESB. So put the arguments to the RPC call into a file, possibly modify it, and send that file asynchronously over some transport queueing mechanism to the destination endpoint. For that to work you need the file to contain everything you need to know to execute the RPC call (except anything which is configuration, like the actual endpoint URL of the destination). With WS-* that was part of the idea behind it. It made very sure that the message contains the operation name (style: document literal wrapped) for example. Metadata (headers) is also part of the message.
All sorts of ESB middleware cropped up. But also locally, for an application it is essential to be able to queue messages to a remote system and send them one-by-one in a defined order, or highly parallelize them, when the remote system is available and ready.

REST makes this harder, because the REST/JSON message may not contain all the information: again, JSON objects have no name, more than one HTTP method could apply to the same arguments, and some of the arguments could be part of the HTTP headers or URI, or need to be passed as URL parameters, and so may not be available from the message itself. If that is the case you need to wrap the message in an additional object that contains the missing information. As a design rule for ESB-capable JSON objects, all information should be contained in the JSON object even if it is later partly duplicated elsewhere in the HTTP request.
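
A sketch of such a self-contained message (operation and field names are made up): the operation and all arguments live in the body, even if the customer id also appears in the URI and the operation is otherwise implied by the HTTP method and path:
{
  "operation": "createOrder",
  "customerId": "42",
  "order": { "items": [ { "sku": "A-100", "quantity": 3 } ] }
}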
posted on 2016-12-14 14:01 CET in Code | 0 comments | permalink

What the TLS private key is for really

People think that the private key of the server certificate protects the content of TLS messages. And so if someone obtains the private key they can decrypt a TLS connection. Well, not quite.

These days the private key is primarily used for authentication. So the server can prove that it is what its certificate claims it is. If a server presents a certificate for odi.ch then it needs the matching private key to prove that claim to clients. So losing the server key always enables identity theft and thus man-in-the-middle attacks.

The content of a TLS connection is encrypted using a session key (using a symmetrical algorithm like AES).

If that session key is exchanged using a key exchange protocol without forward secrecy then it is true: we can recover it. The original RSA key exchange did exactly that: the client creates a pre-master secret and encrypts it with the server's public key, so it can be obtained with the server's private key. That pre-master secret is the basis for the session key.

If the session key is exchanged using a key agreement protocol with forward secrecy (ephemeral Diffie-Hellman) then we cannot recover it. Such a key agreement protocol is able to produce a shared secret in plain sight, without any encryption of the agreement protocol itself. It does not in any way depend on the private key of either party.
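
A quick way to check which kind of key exchange a server actually negotiates (using odi.ch from above as the example host; output details vary with the openssl version):
# openssl s_client -connect odi.ch:443 </dev/null 2>/dev/null | grep -iE 'protocol|cipher'
A cipher name starting with ECDHE or DHE means a Diffie-Hellman key agreement was used; a cipher without it means the plain RSA key exchange described above.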

posted on 2016-10-07 11:45 CEST in Code | 0 comments | permalink

Making ntp-client (ntpdate) work in Gentoo

tl;dr:
/etc/dhcpcd.conf:
waitip 4
dhcpcd backgrounds itself as soon as it has configured an IPv6 address on the interface, which may be seconds before you get an IPv4 lease and DNS information from the DHCP server. The ntp-client init script may therefore run before we have a valid /etc/resolv.conf and will fail because name resolution doesn't work yet.
Tell dhcpcd to background only after we have an IPv4 address, which is after all the whole point of DHCP these days. I don't consider the network "up" with IPv6 only.
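
With OpenRC the change then amounts to this (service names are the usual ones; dhcpcd may also be managed through the net.* init scripts on your system):
# rc-service dhcpcd restart
# rc-service ntp-client restart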

It's particularly a problem when you get IPv6 router advertisements, like in qemu VMs with user networking.

Look at the following log, where you see it backgrounding 3 seconds before the lease:
Oct  4 14:35:25 localhost dhcpcd[3356]: eth0: adding address fe80::59c1:f175:aeb3:433
Oct  4 14:35:25 localhost dhcpcd[3356]: DUID 00:01:00:01:1a:7a:85:31:52:54:00:12:34:56
Oct  4 14:35:25 localhost dhcpcd[3356]: eth0: IAID 00:12:34:56
Oct  4 14:35:25 localhost dhcpcd[3356]: eth0: rebinding lease of 10.0.2.15
Oct  4 14:35:25 localhost dhcpcd[3356]: eth0: probing address 10.0.2.15/24
Oct  4 14:35:26 localhost dhcpcd[3356]: eth0: soliciting an IPv6 router
Oct  4 14:35:26 localhost dhcpcd[3356]: eth0: Router Advertisement from fe80::2
Oct  4 14:35:26 localhost dhcpcd[3356]: eth0: adding address fec0::4f23:8633:1bba:42f5/64
Oct  4 14:35:26 localhost dhcpcd[3356]: eth0: adding route to fec0::/64
Oct  4 14:35:26 localhost dhcpcd[3356]: eth0: adding default route via fe80::2
Oct  4 14:35:28 localhost dhcpcd[3356]: forked to background, child pid 3384
Oct  4 14:35:31 localhost dhcpcd[3384]: eth0: leased 10.0.2.15 for 86400 seconds
Oct  4 14:35:31 localhost dhcpcd[3384]: eth0: adding route to 10.0.2.0/24
Oct  4 14:35:31 localhost dhcpcd[3384]: eth0: adding default route via 10.0.2.2


posted on 2016-10-04 13:15 CEST in Code | 0 comments | permalink