Porting MARTe2 to Debian Bullseye

I’m a long-term user of, and contributor to, the MARTe2 real-time controls toolkit that comes out of nuclear fusion research.

This week’s goal is to get it running on an up-to-date Debian Bullseye. I’ll make notes along the way that may be of use for some of my more junior colleagues. Code references are to the develop branch at commit 9de51bfb.

Issue 1 is a call to ftime(timeb*) in EventSem.cpp:237, within the Core/Scheduler/L1Portability layer (Linux environment), reported as use of a deprecated function. ftime is indeed deprecated in the GNU C library, and clock_gettime(2) is the recommended replacement. It takes a struct timeb pointer, as defined in sys/timeb.h, and returns the current time in seconds and milliseconds since the Epoch. The struct has fields for those two time values, plus a timezone and a DST flag (though POSIX recommends not relying on the latter two fields).

In contrast, clock_gettime takes two arguments: a clockid_t and a pointer to a timespec struct, which returns the time in seconds and nanoseconds in differently typed integer fields and omits the timezone and DST flag. The first argument is a constant identifying which clock to use, with options including CLOCK_REALTIME, CLOCK_REALTIME_ALARM and around nine other clocks.

So – quite a difference. Both functions are thread safe. ftime always returns 0, whereas clock_gettime can fail, distinguishing six error codes with four possible reasons for EINVAL.
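For concreteness, here is a tiny side-by-side smoke test of the two interfaces. This is my own illustrative snippet rather than anything from the project; the calls and struct fields are as described in the glibc man pages, and on a recent glibc the ftime call may itself trigger a deprecation warning when compiled with -Wall.

#include <stdio.h>
#include <sys/timeb.h>   /* the deprecated interface */
#include <time.h>

int main(void) {
    struct timeb tb;
    ftime(&tb);   /* always succeeds (returns 0); millisecond resolution */
    printf("ftime:         %ld s + %u ms\n", (long)tb.time, (unsigned)tb.millitm);

    struct timespec ts;
    if (clock_gettime(CLOCK_REALTIME, &ts) != 0) {   /* can fail, e.g. EINVAL */
        perror("clock_gettime");
        return 1;
    }
    printf("clock_gettime: %ld s + %ld ns\n", (long)ts.tv_sec, ts.tv_nsec);
    return 0;
}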

Before starting on the changes, I look at how ftime is used within EventSem.cpp and whether it is called anywhere else in the project. The Wait() call in the class is passed a timeout value in milliseconds. From this, the future time in seconds is computed, which in turn is converted into the absolute seconds-and-nanoseconds deadline passed to a pthread_cond_timedwait call. The timezone and DST values are unused, so the conversion of the routine is simple apart from selecting which clock to use, and CLOCK_REALTIME seems the obvious choice. Looking for other usage, there is a near-identical algorithm in the MutexSem::Lock method. Error handling is provided, even though ftime can never fail; the only action is to cascade the OSError, but that’s the best we can do to start with.
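As a sketch of what the converted code path looks like (my own illustration with made-up names, not the actual MARTe2 patch), the millisecond timeout becomes an absolute CLOCK_REALTIME deadline for pthread_cond_timedwait:

#include <errno.h>
#include <pthread.h>
#include <time.h>

/* mux must already be locked by the caller, as in the original Wait() logic */
int TimedWait(pthread_cond_t *cond, pthread_mutex_t *mux, unsigned int timeoutMs) {
    struct timespec deadline;
    if (clock_gettime(CLOCK_REALTIME, &deadline) != 0) {
        return errno;                        /* cascade the OS error, as before */
    }
    deadline.tv_sec  += timeoutMs / 1000u;
    deadline.tv_nsec += (long)(timeoutMs % 1000u) * 1000000L;
    if (deadline.tv_nsec >= 1000000000L) {   /* carry nanoseconds into seconds */
        deadline.tv_sec  += 1;
        deadline.tv_nsec -= 1000000000L;
    }
    /* returns 0, ETIMEDOUT, or another error number */
    return pthread_cond_timedwait(cond, mux, &deadline);
}

By default pthread_cond_timedwait measures the deadline against CLOCK_REALTIME, which is why CLOCK_REALTIME is the obvious choice here; a condition variable initialised with pthread_condattr_setclock could use CLOCK_MONOTONIC instead.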

If that’s all we have to do, then the final question is how this code will fit with older systems. That depends on how long ftime has been deprecated and how widely clock_gettime is supported, but it seems likely that this will be fine.

Finally – it is necessary to look at the regression test support to make sure that the replacement code not only compiles and runs but also works correctly for both the non-timeout and timeout cases. This may be the hardest part of the work, but it is necessary to maintain QA standards.

Portable Productivity

I work on many projects and on a very large number of different machines. An essential task in maintaining my productivity whatever environment I find myself in is to make sure I have vim, ideally my standard vim customisations, git, tmux and possibly a python or two. Among my work in progress git projects are some shell scripts to handle the setup of some of this, but it’s all a bit rough.

Inspired by Nicola Paolucci, aka durdn, who has shared his own take on this, I want to do better. He has arranged his dotfiles for git, bash, vim, atom, tmux and hammerspoon. In addition to the install script linked above, he suggests that he has “config tracking”. Reading the install script, this is some logic that uses symbolic links to redirect to particular versions of “assets”, along with an md5-checksum-based decision to back up and then replace older versions.

An alternative way to achieve a standardised vim comes from a very productive-looking vimmer from Chengdu with their vimplus project. This has a three-line from-scratch install:

git clone https://github.com/chxuan/vimplus.git ~/.vimplus
cd ~/.vimplus
./install.sh    # do not prefix with sudo

Alternatively there is a docker invocation

docker run -it chxuan/ubuntu-vimplus

This looks excellent, and since some of the documentation comes in Mandarin, it’s good motivation for me to get back to studying that most excellent and intriguing language as well. Running this customised vim from within a docker instance is attractive in that it ought to work equally well across platforms. In order to be able to write files on the docker host it is necessary to amend the command to something like

docker run -v /home/user:/mnt/user -it chxuan/ubuntu-vimplus

The startup time of this solution might be a concern for some, since the advantage of using vim is the blazing fast startup time (generally instant) compared to other editor/IDE approaches. Actually, this image is really quick. I may make one of my own with a more stripped down base OS, but it’s pretty effective even on a very old 2010 i7 dual core Dell Latitude E5410.

And finally, in locating the resources above, I also came across a very excellent terminal productivity booster course. This is about 300 hours of expert guidance through a whole range of techniques and tools, much aligned with my new-starter mentoring over the years. Kudos to Rob Muhlestein, whose repos I look forward to studying and learning from. Rob uses the zettelkasten method, which in itself looks useful. He has also legally trademarked his “Beginner Boost” course, and “Boost” similarly. Fair enough, although I am now intrigued to learn more of the legal IP details and how his course and the Boost C++ libraries keep sufficiently far apart. Starting to follow his course, I like the MUDL concept. I wonder if there is a way to render his material within a vim-style navigation flow?

Note to self: get https://dev.yorhel.nl/ncdu added to my standard list of tools, because I’m forever filling partitions and it’s better than endless looping over directories with du.

Bare Metal Journey – Part One.

To Qemu and Beyond.

I’m getting into bare metal development, and to kick off, I found some very nice links to suggested exercises with helpful github repo support to try and learn from. As always with tutorial material, the expectation is that some of the recipes will rust over time and a little rejuvenation will be required to update the incantations as a function of upstream evolution and the inevitable dependency hell.

Firstly – I’m starting with bztsrc/ and their raspi3-tutorial. A discussion in some online forum pointed me to this and a set of instructions for how to build a very specific version of qemu and then a particular commit of the tutorial code. If you’re less interested in the journey than the destination, then skip over the next section of moans and groans.

Twisty Maze

I started on a relatively up-to-date Ubuntu 20.04, and on trying to build the v2.12.0 tag of qemu, a configure error bombed out with:

ERROR: glib-2.22 gthread-2.0 is required to compile QEMU

For those who like to run off down the rabbit warren and understand everything, Google will quickly bring you to a nice detailed explanation, but a quick proxy to check the glibc version is ldd --version. I have 2.31. The obvious next gambit is to ignore, at my peril, the instructions to use v2.12.0 of qemu and just try to build the main branch. The output from configure says

ERROR: Cannot find Ninja

This is a bit irritating, and a problem I am thinking of trying to work on in general is how to specify build environment dependencies up front. Of course, I can iterate through “configure; error; fix error” ad infinitum, but I’d rather have a python pip type experience where the first thing configure does is offer me the option to accept “apt install package1 package2 package3…”. Naturally, as soon as this functionality exists, I’ll be writing blog posts moaning about how installing Qemu from scratch trashed my nice working Ubuntu setup and why oh why can’t I have “virtualenv”-style separation of dependencies. Which of course I can have, for the price of knowing enough docker/kvm/VM tech or putting everything into snapspace. Anyway – how to install Ninja (which I vaguely recognise as probably YABS: Yet Another Build System)? Apt doesn’t have a clue about either “ninja” or even “Ninja”, and now I recollect it might come from the nodejs stable. Nope, though there is a web Ninja framework. After some Google dead ends (Topeak Ninja-C bike tool) I finally locate ninja-build.org and identify ninja-build as the Ubuntu package to grab. Should I raise an issue on the Qemu project to report “Cannot find ninja-build”? They probably have better things to do with their time, but maybe 10 people a day hit this. Hmmm. Question for another day. Does this get us any further with the main branch?

ERROR: glib-2.56 gthread-2.0 is required to compile QEMU

OK. So the Qemu configure is very sensitive about the versions of glib (that is GLib, not glibc – the message is a little ambiguous) and gthread. So Google this to get a better understanding of why this should be. Xilinx to the rescue, with a recommendation from a similar casualty to

sudo apt-get install libglib2.0-dev

sudo apt-get install libpixman-1-dev # To counter later error that Dependency “pixman-1” not found

So, as usual, the issue comes down to the fundamental theorem of computer science: the only difficult thing in programming is to name things well. Unlike my Xilinx thread colleague, I did not require “sudo” assistance on the configure step once the dependencies were installed, and the build trickled along through some 2614 steps (ninja gives a nice view of progress, which I do like).

Get Shovel

Having escaped the twisty maze of Qemu build glitches, we are tooled up to compile the examples from Zoltan (anyone who has a repo on solving the Goldberg conjecture is someone I feel comfortable being on first-name terms with). After installing gcc-aarch64-linux-gnu, the next instruction is to run a rather neat find/xargs combo which automatically edits all the Makefiles in place to substitute this compiler for the bare-metal aarch64-elf-gcc compiler. Understanding in more detail what this difference means is definitely added to the TODO list. The invocation is

find . -name Makefile | xargs sed -i 's/aarch64-elf-/aarch64-linux-gnu-/'

With the Makefiles tweaked, running a build in the 05_serial folder compiles four objects, links them into an ELF file using a specially provided linker script (.ld file), and then uses objcopy to extract the ELF contents into the preexisting data file already named kernel8.img. This gives much to study already. The resulting image file can be spun up in qemu with the following recipe:

qemu-system-aarch64 -M raspi3b -kernel kernel8.img -serial stdio

Qemu fires up without problem and even offers to take VNC connections on 127.0.0.1:5900, although pointing a VNC client at this address shows very little. Checking the README.md for the demo, this is to be expected. What the demo shows is writing out a serial number on the UART0 device, which qemu has been instructed to hook up to the terminal stdout. If applied to a real board, this will be much more interesting.

Dig

Time to get into the code, and then also find out how such an image can be used to try booting a real Pi3B+. There are some very high quality README files explaining much of what is going on here. As usual with low level programming, the basics are conceptually simple, but the details are quite technical. It’s a great introduction to how the phases of compiling and running code work, but without all the usual top level gloss that hides what is really happening down at the CPU level.

To summarise very briefly: at power-up, the BCM SoC at the heart of the Raspberry Pi starts by initialising the GPU, which knows how to prime the CPU and how to look for an ARM-compatible image file with ARM instructions to feed to the CPU. Most interaction between the CPU and the peripherals (including the GPU) is via memory-mapped IO (though the GPU has a special mailbox protocol). “All you have to do” to make stuff work is figure out what values to write to what addresses, and whether there is any requirement for special timing, ordering, or checking for possible exceptions. It’s a bit like saying that all you have to do to play a Rachmaninov piano concerto is lift the lid, map from each note on the score to the location of a string and hit that string with a hammer in just the right way, then keep doing that. With the proviso that you need ten concurrent hammers and that the synchronisation must also be kept in time with the rest of the orchestra. However, this is all true, and for my next posts, I will try to expand Zoltan’s excellent but concise descriptions of how it actually works into a more readable format.
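To make “write values to addresses” concrete, here is roughly what poking a character out of the UART looks like in C. The base address and register offsets below are the conventional BCM2837/PL011 ones for the Pi 3 (peripheral base 0x3F000000, UART0 at +0x201000); treat them as my assumption and check them against the headers in the tutorial repo, which also performs the necessary UART initialisation before anything like this will work.

#include <stdint.h>

#define MMIO_BASE 0x3F000000u                                      /* Pi 3 peripheral base */
#define UART0_DR  ((volatile uint32_t *)(MMIO_BASE + 0x00201000))  /* data register */
#define UART0_FR  ((volatile uint32_t *)(MMIO_BASE + 0x00201018))  /* flag register */

void uart_putc(char c) {
    while (*UART0_FR & (1u << 5)) { }   /* spin while the transmit FIFO is full (TXFF bit) */
    *UART0_DR = (uint32_t)c;            /* writing this address hands the byte to the peripheral */
}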

Kudos

Many, many, many thanks (and several beers if I ever meet him) to Zoltan who can be found at https://gitlab.com/bztsrc and https://github.com/bztsrc. He is clearly a cool frood who knows where his towel is. This can be seen from throwaway lines such as “qemu doesn’t support raspberry pi yet, but I’ve implemented it so it’s coming soon and you can compile my changes from source” and the fact that to implement a screen drawing font example he started a project that handles scalable fonts (creditably Knuthian I thought).

Minimalist CLI Copy and Paste

I’m limbering up for a few sprints in embedded development environments and needed to go through my standard “moving-in” routine on a new arm64 board, i.e. take a vanilla Linux environment and set up my usual jigs and tooling. The first step is almost always to establish my credentials with the git repositories in which I store this material. I use two-factor authentication on my github account, but for regular development, use of ssh keys is also helpful. To bootstrap this from a purely text environment, I need a way to obtain write access to my github public keys repository.

Step one: generate a personal access token via one of my standard development boxes, save that in a file, and transfer it to the embedded box with scp. I then need a way to bring the contents of that token into the clipboard so that I can paste it back to the authentication challenge when I clone.

A little stackoverflow searching at first proposes xclip as a means of priming the clipboard from text in a file. However, this is only practical where there is an X display environment. The more basic approach is to use the screen utility, within which I can cat the token to the terminal output, enter copy mode with “C-a [”, move to the line with the token, yank it into the paste buffer with “Y”, and then, when the password is requested, paste with “C-a ]”.

Better Mindmaps with Freeplane and Batch Processing.

I’m a fan of mindmaps as a great technique for keeping an overview of tasks, projects or study areas. For a long time, I’ve been using the excellent freemind tool. Recently, as the number of mindmaps I’m trying to balance concurrently has increased, I wanted a way to convert the mind map files to a directly viewable format to create a visual index. I was unable to find a way to do the conversion in batch mode with freemind.

So – I’m migrating to freeplane, where this feature is documented in the project wiki. Using it in practice took a few iterations. The key is working out how to install and use the developer tools add-on. This was not compatible with the stock version of freeplane installed by the apt tools on Ubuntu 18.04, so I downloaded 1.8.11 from sourceforge. With the development tools installed, it’s possible to navigate to any action and find the string which will enable that action to be selected for execution from the -Xaction command line option.

Scripts for freeplane are written in Groovy. A new language for me, but the syntax is straightforward, and the only trick is working out where to find the class documentation for the entities in order to develop a script. Scripts get stored in the scripts/ folder within the user directory and are parsed and ingested on startup. I hit a minor snag with a null object until I realised that freeplane will load the previous version of a new mindmap from a hidden cache, and there is no file associated with the map until the first explicit Save action.

The wiki suggests “freeplane.sh -S -N -Xaction mapfile” with -N indicating no user interaction. I have not got this to work but assume it’s a way to suppress the UI. For my case I can tolerate the UI popping up, even though it is a bit annoying in conjunction with my i3wm/regolith window manager.

It looks like freeplane has a lot more graphical power under the hood than freemind. This may be useful as I start to develop a few more talks and business pitches later in the year.

Docker container for git+openssl

Last November I had some trouble with a git server that wouldn’t work with the out-of-the-box version of git that comes with Ubuntu 18.04. The issue was with the TLS implementation: the stock git is built against GnuTLS rather than OpenSSL.

Instructions on stack overflow show how to rebuild git from scratch and use the openssl implementation.

To avoid having to repeat this on other boxes in future, I have created a Dockerfile for this and put up a copy of the docker image on hub.docker.com where it can be pulled via avstephen/ub18-git-openssl:latest

Usage is as follows, for example

docker run -it -v `pwd`:/local -w `pwd` avstephen/u18-git-openssl:latest git clone https://vcis-gitlab.f4e.europa.eu/aneto/MARTe2.git /local/m2

It’s a little clunky since the resulting files will be owned by root, but it does the job and avoids messing around with finding equivalent build solutions for other Linux distros.

Hello World Petalinux

I am in the middle of trying to follow a very nice example from Greg Anders on building some qemu-based Xilinx testing projects, and have diverted to figure out a few glitches and issues with the petalinux stuff.

First off, petalinux-config is refusing to play ball and generate anything when fed a perfectly respectable ZCU102 base platform downloaded pre-built from Xilinx. As Google helps me find out, I am not the only one to have experienced this (albeit I am using 2019.2). My compatriot got the usual set of advice that is relevant to Xilinx tools (are you using an up-to-date version, on a supported platform?), but ultimately this boiled down to using the tools from a network file system install directory. This smacks of fragility, and does not directly explain my case. However, I am running inside a VirtualBox VM, which is one step away from a “standard” environment and could have aspects of filesystem performance difference.

Adding a -v (verbose) option gives me more of a steer.

petalinux-config -v --get-hw-description=/home/astephen/Xilinx/Vitis/2019.2/platforms/zcu102_base/hw -p petalinux
INFO: Getting hardware description...
INFO: Rename zcu102_base.xsa to system.xsa
[INFO] generating Kconfig for project
Initialization Failed:
Can't find a usable init.tcl in the following directories: 
    /home/astephen/Xilinx/petalinux-v2019.2/tools/xsct/tps/tcl/tcl8.5 /tmp/pabuild/tcl8.5.14/lib/tcl8.5 /home/astephen/Xilinx/petalinux-v2019.2/tools/xsct/bin/unwrapped/lib/tcl8.5 /home/astephen/Xilinx/petalinux-v2019.2/tools/xsct/bin/lib/tcl8.5 /home/astephen/Xilinx/petalinux-v2019.2/tools/xsct/bin/unwrapped/library /home/astephen/Xilinx/petalinux-v2019.2/tools/xsct/bin/library /home/astephen/Xilinx/petalinux-v2019.2/tools/xsct/bin/tcl8.5.14/library /home/astephen/Xilinx/petalinux-v2019.2/tools/xsct/tcl8.5.14/library

This probably means that Tcl wasn't installed properly.

    while executing
"error $msg"
    (procedure "tclInit" line 61)
    invoked from within
"tclInit"
ERROR: Failed to generate /home/astephen/git-wd/zcu102_example/petalinux/build/misc/config/Kconfig.syshw
ERROR: Failed to Kconfig project
ERROR: Failed to generate System hardware Kconfig file.

Well, find $PETALINUX -name "init.tcl" locates an existing (though not yet demonstrably “usable”) init.tcl in webtalk/tps/tcl/tcl8.5, and yet that directory is notably absent from what I infer is the TCLLIBPATH. Now, file $(which petalinux-config) tells us that the tool is an ELF binary, so we cannot just go looking for where that is set up. None of the Xilinx settings files (I am also set up for Vitis and XRT here) have populated my environment with a TCLLIBPATH, so some early-parsed Tcl script is to blame. At this point I would normally reach for strace and look at all the Tcl file openings via the system call trace. Regrettably, when I did this, the process core dumped, which was unexpected and uncalled for. For some reason "strace petalinux-config" generates an infinite stream of openat(AT_FDCWD, "/proc/self/status"...) calls.

More Googling as we go down the rabbit hole. AR51582 looks promising. It hints at dark consequences when LD_LIBRARY_PATH accrues entries which get in the way, and the sage advice is to avoid sourcing the settings files from a login script (in this case, a .cshrc file), but unless I do source the settings I will not have petalinux-config in my PATH. Some more of the advice says this can be indicative of something having executed a non-Xilinx Tcl shell. Well, in my PATH I have no other tclsh.

vivado -mode tcl works perfectly well and drops me into a tclsh.

stephen@h0208-ub1804:~/git-wd/zcu102_example$ vivado -mode tcl

****** Vivado v2019.2 (64-bit)
  **** SW Build 2708876 on Wed Nov  6 21:39:14 MST 2019
  **** IP Build 2700528 on Thu Nov  7 00:09:20 MST 2019
    ** Copyright 1986-2019 Xilinx, Inc. All Rights Reserved.

Vivado% set a 5
5
Vivado% puts $a
5

Try the same tools on another machine? Why not. Only a 7.92GB download for the installer after I remember my Xilinx US export credentials login. Don’t get me wrong. I love what Xilinx tools can do, when you finally manage to balance for a moment on the finely poised pyramid of complexity.

However, I now have a good answer when my hardware colleagues make fun of the real-time C++ middleware stack I work on. Quite rightly, they point out that to do true real-time and concurrent stuff, parallelism in hardware cannot be beaten. However, in the real world, when I can git clone my middleware and compile an entire nuclear fusion protection application in 20 minutes (my day job has some interest), but it takes me a week to get hello world working on a Xilinx platform, I have a justification for using a general purpose CPU with carefully crafted approaches to avoid the OS getting in the way of deterministic behaviour.

If I manage to carve out the time, I will post a follow up that explains if I ever found a resolution to this glitch before it became more pragmatic to move on to yet a different generation of Xilinx tools.

Update 23 Jun. On the host of the VM, which is running Ubuntu 18.04.4 LTS directly on a real filesystem, I repeated the installation of Petalinux 2019.2 and the ZCU102 base platform, and made another attempt to generate the config for the same project. In this case, there were no Tcl-related errors, but simply these errors instead:

astephen@H0208:~/petalinux_project_zcu102/vitis_example$ petalinux-config -v --get-hw-description=../zcu102_base/hw/ -p petalinux
INFO: Getting hardware description...
INFO: Rename zcu102_base.xsa to system.xsa
[INFO] generating Kconfig for project
INFO: [Hsi 55-2053] elapsed time for repository (/home/astephen/Xilinx/tools/xsct/data/embeddedsw) loading 0 seconds
hsi::open_hw_design: Time (s): cpu = 00:00:06 ; elapsed = 00:00:06 . Memory (MB): peak = 811.488 ; gain = 148.066 ; free physical = 10719 ; free virtual = 14436
[INFO] menuconfig project
ERROR: Failed to menu config project component 
ERROR: Failed to config project.
ERROR: Get hw description Failed!.

Working with Xilinx tools is like working for Edison. I have not failed. I’ve just found 10,000 ways that won’t work.

Update 2020-06-23. From another fine host, this time with Petalinux 2020.1 installed, another repeat of the same action. Finally, this time, we get to a menuconfig option for the system configuration (when run interactively). Taking the defaults from this, the tool proceeds to function as expected, up to the “generating workspace directory” step, which failed.

$>petalinux-config --get-hw-description=./platforms/zcu102_base/hw/ -p petalinux
WARNING: Your PetaLinux project was last modified by PetaLinux SDK version "2019.2",
WARNING: however, you are using PetaLinux SDK version "2020.1".
Please input "y" to continue. Otherwise it will exit![n]y
INFO: sourcing build tools
INFO: Getting hardware description...
INFO: Rename zcu102_base.xsa to system.xsa
[INFO] generating Kconfig for project
[INFO] menuconfig project
configuration written to /media/2TB/workspace/astephen/git-wd/petalinux_zcu102_example/petalinux/project-spec/configs/config

*** End of the configuration.
*** Execute 'make' to start the build or try 'make help'.

[INFO] extracting yocto SDK to components/yocto
[INFO] sourcing build environment
[INFO] generating u-boot configuration files, This will be deprecated in upcoming releases
[INFO] generating kernel configuration files, This will be deprecated in upcoming releases
[INFO] generating kconfig for Rootfs
[INFO] silentconfig rootfs
yes: standard output: Broken pipe
[INFO] generating plnxtool conf
[INFO] generating user layers
[INFO] generating workspace directory
ERROR: Failed to create workspace directory
ERROR: Failed to config project.
ERROR: Get hw description Failed!.

Repeating, but with the verbose flag and the --silentconfig option…

auto-login (auto-login) [N/y/?] 
*
* user packages 
*
opencl-clhpp-dev (opencl-clhpp-dev) [Y/n/?] 
opencl-headers-dev (opencl-headers-dev) [Y/n/?] 
xrt (xrt) [Y/n/?] 
xrt-dev (xrt-dev) [Y/n/?] 
zocl (zocl) [Y/n/?] 
*
* PetaLinux RootFS Settings
*
Root password (ROOTFS_ROOT_PASSWD) [********] 
Add Extra Users (ADD_EXTRA_USERS) [] 
#
# configuration written to /media/2TB/workspace/astephen/git-wd/petalinux_zcu102_example/petalinux/project-spec/configs/rootfs_config
#
[INFO] generating plnxtool conf
[INFO] generating user layers
[INFO] generating workspace directory
ERROR: Failed to create workspace directory
ERROR: Failed to config project.
ERROR: Get hw description Failed!.
astephen@ukaea-fpga petalinux_zcu102_example master

Ubuntu 18.04 or Not Ubuntu 18.04

So during my mammoth Xilinx install exercise yesterday, I learned something about Ubuntu distributions that I had not known.

The Xilinx Vitis 2019.2 release states that it is compatible with Ubuntu 18.04.2 LTS. I never really knew what the final minor version suffix (the .2) indicated, so I was sloppy in preparing a VirtualBox VM onto which to install the OS and downloaded the generic .iso from Ubuntu for “18.04”, which gave me 18.04.4.

Several hours later, this came back to bite me when the XRT (Xilinx Run Time) install failed because one of the kernel modules would not build, and the pyopencl installation also fell over. Looking at the first problem revealed that 18.04.4 provides a 5.x kernel and the kernel modules required a 4.x kernel. The pyopencl problem was not so clear cut but revolved around the usual Python 2/Python 3 issue (gee, thanks python guys, for giving me some nostalgia about the bad old Perl4/Perl5 days).

Since the Vitis install involves a 25GB download, I was loath to throw away the work so far, but nursing an unsupported OS to carry as complex a software stack as Xilinx’s could just be banging my head against a wall. On the other hand, it was a good opportunity to learn a few things, so what might we try?

Plan A was to consider downgrading the installed system. Numerous articles explain that this is possible by editing /etc/apt/sources.list to point at the desired version, then “pinning” the packages by tweaking the pin priority in /etc/apt/preferences and running an “apt update; apt upgrade; apt dist-upgrade” sequence.

However, most articles also caution YMMV, and that if you have post-installed other packages (and XRT pulls in no fewer than 136 packages) then the resulting root file system could be pretty flaky.

Plan B: no need to downgrade everything, since the problem is clearly with the kernel. Just back out, or install, an older kernel and retry. The first challenge is to work out exactly which Ubuntu kernel was used in 18.04.2. In principle this is documented, but in practice I decided just to download the 18.04.2 install iso and make a second VirtualBox VM. The bad mistake in doing this was to check the “download other stuff while installing” tickbox, because apparently by having done so I got a free 18.04.2 kernel upgrade to 5.3.0-59-generic and not the 4.x kernel I expected. Oddly, I still got a post-install “would you like to download updates now” option. Just to prove I’m not dreaming, I checked a 2019 article and sure enough (unless that article history gets rewritten, which is not out of the question) it explained that 18.04.2 shipped with a 4.18 kernel.

Some more googling located what sounds like a handy Ubuntu Kernel Update Utility (UKUU) for my problem.

$ sudo add-apt-repository ppa:teejee2008/ppa
$ sudo apt-get install ukuu

But this failed. WTF? Checking Tony George’s ppa repo finds links to his ukuu utility for 18.04.1. This minor versioning on the Ubuntu releases is really starting to grate. However, while browsing the launchpad pages, I start to get a feel for the support that is present for managing “recipes” for packaging code for Ubuntu, and this itself could lead to more resources for a forthcoming blog article on the packaging wars.

OK. I don’t really want to become a debian packaging ninja this morning, so surely we can install an older kernel without some flash GUI support? Perhaps https://kernel.ubuntu.com/~kernel-ppa/mainline/v4.18.20/ Following the stackoverflow instructions reveals a couple of missing tips. Firstly, the modules package is required in addition to the kernel image, and it must be installed first. Secondly, after doing the installation, you need to be able to select the new kernel from the grub bootloader; a key tip is to edit /etc/default/grub and comment out the line that hides the timeout so you can select the correct kernel. OK. I now have 4.18.20 on my test 18.04.2 system and so I can replay this solution to backport my kernel in 18.04.4.

A couple of other possibilities were at the back of my mind had this approach failed.

  • Create a new VM with 18.04.2, then mount the VDI virtual disk from the 18.04.4 VM and copy over the directory where the Xilinx installations had been done, or mount and use the VDI disk directly (a good future exercise will be to check how to do this anyway).
  • Try an 18.04.2 install over the existing VDI disk, hoping that the installer would offer an option to install alongside and handle the housekeeping of partition wrangling to just make this work.

VOTTO II: HDF5 install

Part II in my series on variations on the theme of HDF5: getting some demonstration C examples compiling and running on Ubuntu 18.04, but in contrast to Part I, building the HDF5 libraries from scratch.

https://support.hdfgroup.org/ftp/HDF5/releases/hdf5-1.12/hdf5-1.12.0/src/hdf5-1.12.0.tar.gz

From release_docs/INSTALL we discover that the dependencies are:

  1. Zlib: if zlib-1.1.2 or later is found, HDF5 will use it. Otherwise the compression stub will default to no compression.
  2. Szip (optional): provides support for the Szip compression filter.
  3. MPI and MPI-IO: required if the parallel version of the library is to be built. Otherwise, only the serial version of HDF5 can be built.

The installation steps are:

  1. Download the source, either via git or wget for a released snapshot.
  2. If downloaded, extract and decompress the tar.gz archive.
  3. Out-of-source builds are always to be recommended and are supported with Make. Thus mkdir build; cd build; ../hdf5-X.Y.Z/configure options.

To summarise, the following script will work to build from scratch.

wget  https://support.hdfgroup.org/ftp/HDF5/releases/hdf5-1.12/hdf5-1.12.0/src/hdf5-1.12.0.tar.gz
tar zxvf hdf5-1.12.0.tar.gz
mkdir build-hdf5-1.12.0-make
cd build-hdf5-1.12.0-make
../hdf5-1.12.0/configure --prefix=/opt/hdf5-1.12.0-make
make
sudo make install

Now to use this installation, it is necessary to add the target install directory to PATH, i.e. PATH=/opt/hdf5-1.12.0-make/bin:$PATH.
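As a quick smoke test that the freshly built library and the h5cc wrapper are on the PATH, something like the following minimal program (my own snippet, not one of the HDF Group examples) creates a file and writes a small integer dataset. Compile with h5cc smoke.c -o smoke and inspect the result with h5dump smoke.h5.

#include "hdf5.h"

int main(void) {
    hsize_t dims[2] = {4, 6};
    int data[4][6];
    for (int i = 0; i < 4; i++)
        for (int j = 0; j < 6; j++)
            data[i][j] = i * 6 + j;

    /* create the file, a 4x6 dataspace and an integer dataset, then write the buffer */
    hid_t file  = H5Fcreate("smoke.h5", H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT);
    hid_t space = H5Screate_simple(2, dims, NULL);
    hid_t dset  = H5Dcreate2(file, "/dset", H5T_NATIVE_INT, space,
                             H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);
    H5Dwrite(dset, H5T_NATIVE_INT, H5S_ALL, H5S_ALL, H5P_DEFAULT, data);

    H5Dclose(dset);
    H5Sclose(space);
    H5Fclose(file);
    return 0;
}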

To see a full working example, I have created github and docker repos with some test HDF5 code, a docker image where all of this has been done, and a github repo of the Dockerfile from which the image was made. See the provenance list below.

Provenance

Provenance of the versions used in this article:

  • hdf5-quickstart: commit aca4cd0
  • u18-hdf5-scratch: image digest sha256:e02bc5d93332df070c2e3dec49ba5b6a0930afee867bcced212d7de04bab7c6d
  • my-dockers: commit 658f051

Related Articles

  • Series Overview
  • Part I
  • Part III: CPP bindings.
  • Part IV: Cross compiling for Arm
  • Part V: Arm on a ZynqMP under Qemu
  • Part VI: Yocto-ized Build Recipe
  • Part VII: Petalinux 2019.2 Integration

VOTTO HDF5 I: Ubuntu

Part I in my series on variations on the theme of HDF5: getting some demonstration C examples compiling and running on Ubuntu 18.04.

One route to installing the HDF5 tools is via the Anaconda python package manager. By default, installing the HDF5 module will give the python bindings. This includes the h5cc compiler script front end, but not, by default, the actual gcc toolchain configured for HDF5 compatibility. To be able to compile, say, the HDF Group examples, the following install is required.

conda install gxx_linux-64

Since this kind of task is always a bit fiddly, and since I work on a very large range of topics, I like to create github repos with sample code and dockerhub docker containers to latch my results. Where I can, I make these public. So – the repos below contain my HDF5 example code, a docker hub prebuilt image which can run it, and finally a repo where I keep my Dockerfiles since these are sometimes more useful than runnable docker container images from Docker hub.

A few notes as to which Ubuntu 18.04 packages[1] were found to be useful for HDF5 work:

  1. hdf5-tools: provides about 15 utilities for working with HDF5 files.
  2. libhdf5-dev: development headers and libraries. This adds the h5cc front end for compiling C HDF5 programs. Interestingly, the dependencies in apt are not sufficient to cause this to pull in gcc.
  3. build-essential: always disappointing to me that stock Ubuntu requires custom installation of gcc. A linux box without gcc is totally pointless. It ought to be illegal.
  4. vim: again, almost more essential than even a compiler. It should be even more illegal to have Linux without vim installed.
  5. cmake: not strictly necessary, but important to placate my good friend Rashed Sarwar.

This is sufficient to build and run the C-based examples. To see this for yourself, try this:

docker run -it avstephen/u18-hdf5:1.0
cd hdf5-quickstart/examples
make all
ls

Provenance

The tagged docker image carries the provenance of the github examples, but for reference, this was made from commit aca4cd0 of hdf5-quickstart and commit 4bdd100 of my-dockers, using the u18-hdf5/Dockerfile.

The digest of avstephen/u18-hdf5:1.0 is sha256:b7fc21af122b875e69cb71b03bdb9dc0017590d8f4c03fd743643915099766c9

Related Articles

  • Series Overview
  • Part II : HDF5 from Scratch (Make version)
  • Part II(b) HDF5 from Scratch (cmake version)
  • Part III: CPP bindings.
  • Part IV: Cross compiling for Arm
  • Part V: Arm on a ZynqMP under Qemu
  • Part VI: Yocto-ized Build Recipe
  • Part VII: Petalinux 2019.2 Integration

Footnotes

[1] I know that 20.04 LTS is available, but I suspect it will be another year or so before the typical Ubuntu user upgrades, so I prefer to base tutorials on a slightly older version of Ubuntu. I am running with Ubuntu 14.04, 16.04, 18.04, 20.04, Centos 7, RHEL6, RHEL7, Scientific Linux 7.4 and Solaris. And for more interesting projects, forget distros, I build custom kernels and root filesystems for fun.