Linux Wrangling, 4K and OBS

Two minor tasks that should be quick and easy.  (a) Set up an Nvidia quad P1000 graphics card for use with Linux and a couple of 4K screens.  (b) Install OBS to be able to make some videos about software.  Yeah, right.

First attempts – stock Ubuntu on top of a fairly standard Dell box hosting the Nvidia cards.  Lots of problems with drivers, compatibility and the relationships between X (the graphical layer that supports windowing), the desktops that sit on top of X, and the driver system.  There are various intermediary layers named “Wayland” and “Nouveau”, plus the choice between the Nvidia binaries packaged by the distribution for a given kernel or the packages provided by Nvidia itself.  Just as with my XPS 4K laptop, the end result is that one way or another, something doesn’t work.  And that’s before you also try to sort out the tty text terminals.

Some more in-depth experience is related here, but that remains to be mined….

As for OBS – well, I started from some positive-looking instructions but immediately hit package management issues on Scientific Linux 7.4 (tracking the RHEL/CentOS stable line).  The first clue to problems was a suggestion to install “dnf”, which turns out to be “dandified yum”, aka the “next” version of yum.  In general, the proliferating number of package managers is indicative of the major problems that packages present to Linux users.  But hey, why not.  The second recommendation is that, in addition to enabling the epel-release repositories, one should look at RPM Fusion or, alternatively, Nux (touted as equivalent to, but better than, RPM Fusion).  So – RPM Fusion is a repo centre for packages that the Fedora project or Red Hat don’t want to ship (another bad sign), and Nux is a competitor/alternative.
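
For reference, enabling these repositories on SL7/CentOS 7 typically looks something like the sketch below; the exact release RPM versions and URLs drift over time, so treat them as illustrative rather than exact.

sudo yum install -y epel-release
# RPM Fusion free repo for EL7
sudo yum install -y https://download1.rpmfusion.org/free/el/rpmfusion-free-release-7.noarch.rpm
# or, alternatively, the Nux Dextop repo
sudo rpm -Uvh http://li.nux.ro/download/nux/dextop/el7/x86_64/nux-dextop-release-0-5.el7.nux.noarch.rpm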

Started to install some of the standard packages and found that getting hold of Objective-C via gcc-objc or gobjc failed.  Eventually found some GNUstep package names that were touted to work.
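
If you’re hunting for the same thing, yum’s search is the quickest way to see what Objective-C / GNUstep related packages a given repo set actually offers (a generic sketch, not the exact package names I ended up with):

# list anything matching objc or gnustep across the enabled repos
yum search objc
yum search gnustep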

Then – on to the extensions.  Started with Nux, but the dependencies to build OBS from scratch failed at the x264-devel level, where I could have either x264-devel or ffmpeg, but not both.  So – then looked to scrub Nux by moving the repo description out of /etc/yum.repos.d and performing a yum update.  1174 packages later and I was ready to retry the yum installs, then throw in “ffmpeg-devel” (tip: a package that “may” be required generally means a package that is certainly required).  All OK barring warnings that runtime library libEGL.so.1 in /usr/lib64 may be hidden by files in /sbin/../lib64.
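
Scrubbing a repo this way is just a matter of parking its .repo file somewhere out of the way and letting yum re-resolve everything; a sketch of the idea (the actual file name for the Nux repo may differ on your system):

# move the repo definition aside, then let yum sort itself out
sudo mkdir -p /root/disabled-repos
sudo mv /etc/yum.repos.d/nux-dextop.repo /root/disabled-repos/
sudo yum clean all
sudo yum update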

Finally – git clone the OBS sources, run cmake to generate the makefiles and then run the build.  “make install” defaults to /usr/local as a prefix, so also run ldconfig (after creating the file /etc/ld.so.conf.d/local.conf containing a line listing /usr/local/lib).  We end up with /usr/local/bin/obs, which works.
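
Roughly, the build sequence looks like the sketch below.  I didn’t record the exact cmake options I used, so take the repository URL and flags as plausible defaults rather than gospel.

git clone --recursive https://github.com/obsproject/obs-studio.git
cd obs-studio && mkdir build && cd build
cmake -DCMAKE_INSTALL_PREFIX=/usr/local ..
make -j"$(nproc)"
sudo make install
# make the freshly installed libraries visible to the dynamic linker
echo "/usr/local/lib" | sudo tee /etc/ld.so.conf.d/local.conf
sudo ldconfig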

Initial setup and optimisation: trying to run on screen one at the full 3840×2160 resolution results in a core dump.  Not very impressive.

Moral of the story : don’t buy new hardware and expect to use Linux for less than lots of pain.

Docker and Fusion Apps (1)

This is the first in a short series of blog articles on how to deploy standard software frameworks used in Big Physics (Nuclear Fusion in particular) on docker.

Firstly, we need to install the latest Docker CE on the various distributions in common use within the Fusion community: CentOS 7, RHEL 7, Ubuntu 16.04, Ubuntu 18.04 and Scientific Linux 7.4.  Today we’ll look at RHEL 7, since I have this installed on a fairly nice server and would like to use my hardware to experiment with some docker images.

RHEL7 tips come courtesy of Nick Janetakis, who proposes a recipe that ensures yum-utils is in place, adds a Docker repo, updates the container-selinux package and then installs the Docker CE package.  Note that this breaks the spirit of using RHEL, in which only approved software should be installed, so this is only for development machines and absolutely not for production.  Since his article was written the packages on the mirror.centos.org server have changed, so the specific version of the container-selinux package to install has become 2.68-1 in place of 2.33-1.  In addition, attempting to install this on my version of RHEL7 then hit dependency errors.

First – container-selinux-2.68-1 requires selinux-policy-base >= 3.13.1-192 (yum helpfully says that the installed version is 3.13.1-166, and offers me about 100 available packages with near-identical version numbers and an incomprehensible spread of names).  SELinux really does improve security by making sure nobody will ever be able to put a system into production with it.  Of course, yum helpfully points out also that “you could try using --skip-broken to work around the problem” or “you could try running rpm -Va --nofiles --nodigest” (but no hints as to what this might do).  I just hope 747 cockpit interfaces don’t have similar “you could try fully opening the throttles” hints.

Trying to pick this apart : 

  1. container-selinux-2.68-1 has three selinux-policy dependencies
    1. selinux-policy >= 3.13.1-192
      1. Installed: 3.13.1-166
      2. Available: selinux-policy-{minimum, mls, targeted}, all at 3.13.1-153
    2. selinux-policy-base >= 3.13.1-192
      1. Installed: 3.13.1-166
      2. Available: as above, more or less.
    3. selinux-policy-targeted >= 3.13.1-192
      1. Installed: 3.13.1-166
      2. Available: more or less as above.

So – the bottom line is that yum cannot find a sufficiently up-to-date, matching set of -policy dependencies.  On the CentOS mirror none of these packages are listed.  To force yum to check whether any updates are available at all: “yum check-update” (answer: none).  So – OK, let’s just go with --skip-broken.  And that goes doubly for also skipping broken dependencies on the yum -y install docker-ce.  The predictable outcome is not a happy docker.

# make sure yum-config-manager (from yum-utils) is available
sudo yum install -y yum-utils
# add the upstream Docker CE repository
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
sudo yum makecache fast
# pull the newer container-selinux (2.68-1 at the time of writing) from the CentOS extras mirror
sudo yum install -y http://mirror.centos.org/centos/7/extras/x86_64/Packages/container-selinux-2.68-1.el7.noarch.rpm --skip-broken
# and then Docker CE itself, also skipping the broken selinux-policy dependencies
sudo yum install -y docker-ce --skip-broken

Note that, in the discussion on Nick’s blog, it is pointed out that one can simply install the RHEL-shipped enterprise edition of Docker.  But this is Docker 1.9, which was released in 2015.  Which suggests that hoping to find selinux policies updated to level 192 instead of 166 is unlikely to fly very well.

Conclusion: RHEL is great, as long as you love old software.  Next step: wipe it from my hard drive so that I can have an up-to-date development environment.

Linux Package Hell

So, in one of my recent posts, I looked into why some distros package git with GnuTLS, which is incompatible with some git hosts.  At one stage I found some instructions for checking which version a given system has installed.  These relied on having distinct git binaries for some commands, such as “git-http-fetch” for “git http-fetch”.  And as far as I can tell, these are packaged in the “git-all” package.  Which is broken on Ubuntu 16.04 (and earlier).  The whole sorry mess is documented here:

https://bugs.launchpad.net/ubuntu/+source/runit/+bug/1448164

So – what do we know?  Package dependencies are complicated and we end up with a lot of broken software.   More importantly, large numbers of people are spending time trying to figure out why, and even with the very good resources of these bug trackers and stackoverflow,  the underlying root causes are not fixed properly.  

On the plus side, the pace of software change is ultra-rapid, which leads to lots of choice and explores many directions in parallel, which is good.  But a little more velocity (speed in a useful direction) might be helpful.  I saw a plea for better software quality recently (on Twitter, perhaps).  So – my goal for the weekends is to try and contribute more to help with this.

On a more positive note, there are lots of good people and companies engaged in building meaningful software.  I found one interesting example via a sponsored recruitment advert on stackoverflow, shortly after answering my own question there about all of the GnuTLS stuff.

I like one of their tag lines : “together, we can build something meaningful”.

https://stackoverflow.com/jobs/companies/pivotal

git tls update

I’m not the only one to have experienced git/https pain.  A handy article on the subject explains how to check whether your installed git was built against GnuTLS or OpenSSL.  However, I’m still going to whip up a docker container with a known-good OpenSSL git in case I hit this problem again somewhere.  The general advice is that Ubuntu builds tend to use GnuTLS, whereas other distributions build against OpenSSL.

Unfortunately, the article’s advice presupposes that the installed version of git includes a binary named git-http-fetch.  As I have git 2.7.4 on Ubuntu Xenial (16.04), I can run “git http-fetch” but don’t have a binary against which I can perform the ldd shared-object lookup.
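
For what it’s worth, the check the article describes amounts to something like the sketch below – assuming the helper binary is present at all, which is exactly the problem here (on many systems the git helpers live under git’s exec-path rather than on $PATH):

# see which TLS/curl flavour the git http helper is linked against
ldd "$(git --exec-path)/git-http-fetch" | grep -i -E 'curl|gnutls|ssl'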

A few googles later and I identify another article with some wisdom about git, getting started and Ubuntu, which suggests performing

sudo apt-get install git-all

Disk space is cheap, so why not?  Well – “Errors were encountered while processing: runit; git-daemon-run E: Sub-process /usr/bin/dpkg returned an error code (1)” would be one reason why not.   Linux package hell strikes again.   Regardless of the error, no git-http-fetch binary has been added.

The git documentation for 2.5.1 has a special page for git-http-fetch but doesn’t explain how to get it installed.  At this point, I figure I’ve done enough legwork to reach out to stackoverflow.  Will update this post later when I get some responses.

Fighting git TLS

Trying to git clone a repo.   Should be a standard daily event, and normally is.   However, this repo is stubborn and git clone https://servername.domain/user/Project.git just yields a GnuTLS recv error (-110): The TLS connection was non-properly terminated.

Google.  Hint.  GIT_CURL_VERBOSE=1 as a prefix to the above command gives a bunch more data, including some of the HTTP header transactions.  There’s an nginx instance with which git is negotiating, and it initially seems to be playing ball.  The final transaction gives an HTTP/1.1 200 OK response, but then, in reaction to the data, the git client bombs out with the GnuTLS recv error.
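
Concretely, that just means re-running the failing clone with the environment variable set:

GIT_CURL_VERBOSE=1 git clone https://servername.domain/user/Project.git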

I vaguely remember rumours suggesting that building a git client from scratch and using an alternate TLS plugin (OpenSSL perhaps?) could be a route out of these twisty passages.  Googling for this turns up the article that I remember ( https://askubuntu.com/questions/186847/error-gnutls-handshake-failed-when-connecting-to-https-servers )

So – we want to get the sources for git (and its dependencies).  On a fresh-ish Raspbian – firstly sudo vi /etc/apt/sources.list and uncomment the deb-src line pointing at the source code, then sudo apt-get update.
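
If you’d rather not edit the file by hand, something like this does the same job (assuming the deb-src line is simply commented out, which is the Raspbian default):

# uncomment any commented-out deb-src lines, then refresh the package lists
sudo sed -i 's/^#[[:space:]]*deb-src/deb-src/' /etc/apt/sources.list
sudo apt-get update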

Now, follow the recipe from the link but updated for git-2.11.0 as follows

sudo apt-get update
# toolchain needed to rebuild a Debian package from source
sudo apt-get install build-essential fakeroot dpkg-dev
# pull in git's own build dependencies
sudo apt-get build-dep git
mkdir ~/git-openssl
cd ~/git-openssl
apt-get source git
dpkg-source -x git_2.11.0-3+deb9u4.dsc
cd git-2.11.0
# swap the GnuTLS flavour of libcurl for the OpenSSL one in the build dependencies
sed -e 's/libcurl4-gnutls-dev/libcurl4-openssl-dev/g' debian/control > debian/control.ssl
cd debian; mv control{,.gnutls}; cp control.ssl control; cd ..
sudo apt-get install libcurl4-openssl-dev
# rebuild the binary packages against OpenSSL
sudo dpkg-buildpackage -rfakeroot -b

This recipe extracts the git source code, adapts the build control rules to hook in the OpenSSL implementation of the TLS layer in place of the GnuTLS equivalent, and then runs the build to recompile with the new config.

Wait a couple of minutes for the installations to run through (making sure to have started with enough space on the root filesystem).  Actually, even on a Pi 3+, make that: go and do something fairly significant and useful while it builds.  The build process is very rigorous and includes building and running a whole bunch of test suites.

At the end of the compile:

cd ..; sudo dpkg -i git_2.11.0-3+deb9u4_armhf.deb

And finally – I can clone the repository I first thought of.  Next on my list is to spin up a docker container service to handle these awkward git endpoints.

Music to compile git to.

Grokking Docker

So – I’m working on a number of docker images to make it easier to do neat stuff better, faster and with less work.  And, given the benefits of OSS, I want to learn from others what best practice looks like.

An immediate question that comes to mind after starting to create Docker images, is to figure out: how was that image made?  (So if the process is amazingly amazing, I can steal it).

A good answer as to why this is not quite the right question is available on the docker forums.  The logic is that some information may be expressed in the Dockerfile, but additional steps could be manual.  The docker commands to introspect as far as possible on a pulled image are: “docker inspect <image:tag>”, which will show the last command and then the parent layer as a checksum, and “docker inspect <checksum>”, which will then iterate back a step.  The author of the post (shout out to Andy Neff) gives a nice bash function which recurses over the image to build up some of how it was done.
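
The flavour of it is something like the sketch below.  This is not Andy Neff’s actual function, just a minimal reconstruction of the idea, and the exact field names (Parent, ContainerConfig) vary between Docker versions and between locally built and pulled images:

# walk back through an image's parent layers, printing the command recorded at each step
walk_layers() {
  local id="$1"
  while [ -n "$id" ]; do
    docker inspect --format '{{.Id}}: {{json .ContainerConfig.Cmd}}' "$id"
    id="$(docker inspect --format '{{.Parent}}' "$id")"
  done
}
walk_layers myimage:latest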

Of course, since the post was written, a new “docker history” command has been added which wraps up the same functionality very nicely.
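
For example (with a hypothetical image name), the full, untruncated layer commands are just:

# show the command behind each layer of an image, without truncation
docker history --no-trunc myimage:latest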