Neither one nor Many

September 6 2016

The following steps are to quickly test how this stuff works.

Using my docker images (master, slave) and helper scripts on github, it's easy to get Cloudera Manager running inside a few docker containers. The steps: get the most recent docker, install (GNU) screen, check out the repo, cd cloudera, and run the bash script there. That should do it. Note that the image(s) require being able to invoke --privileged, and the scripts currently invoke sudo. After running the script you get something like this (full example output here).

CONTAINER ID        IMAGE                               COMMAND             CREATED             STATUS              PORTS                    NAMES
31e5ee6b7e65        rayburgemeestre/cloudera-slave:3    "/usr/sbin/init"    20 seconds ago      Up 17 seconds                                node003
f052c52b02bf        rayburgemeestre/cloudera-slave:3    "/usr/sbin/init"    25 seconds ago      Up 23 seconds                                node002
1a50df894f28        rayburgemeestre/cloudera-slave:3    "/usr/sbin/init"    30 seconds ago      Up 29 seconds>8888/tcp   node001
54fd3c1cf93b        rayburgemeestre/cloudera-master:3   "/usr/sbin/init"    50 seconds ago      Up 48 seconds>7180/tcp   cloudera

Running systemd inside a container is perhaps not the way docker was designed to be used, but for simple experimentation this is fine. These images were not designed to run in production, though with some more orchestration it might be possible.

Step 1: install Cloudera Manager

One caveat: because of the way docker controls /etc/resolv.conf, /etc/hostname, and /etc/hosts, these files show up in the output of the mount command. The Cloudera Manager wizard does some parsing of this (I guess) and pre-fills some directory settings with values like:

/etc/hostname/<path dn>
/etc/resolv.conf/<path dn>
/etc/hosts/<path dn>

Just remove the two additional paths and change the remaining one to <path dn> only. There are a few of these configuration parameters that get screwed up. (Checked up to and including CDH 5.8.)

Step 2: install kerberos packages on the headnode

docker exec -i -t cloudera /bin/bash # go into the docker container for the headnode

yum install krb5-server krb5-workstation krb5-libs

# ntp is already working

systemctl enable krb5kdc
systemctl enable kadmin

The services won't start yet; the configuration files need to be fixed first.

Step 3: modify /etc/krb5.conf

Into something like:

[logging]
 default = FILE:/var/log/krb5libs.log
 kdc = FILE:/var/log/krb5kdc.log
 admin_server = FILE:/var/log/kadmind.log

[libdefaults]
 dns_lookup_realm = false
 ticket_lifetime = 24h
 renew_lifetime = 7d
 forwardable = true
 rdns = false
 default_realm = MYNET
 default_ccache_name = KEYRING:persistent:%{uid}

[realms]
 MYNET = {
  kdc = cloudera.mynet
  admin_server = cloudera.mynet
 }

[domain_realm]
 .mynet = MYNET
 mynet = MYNET

In this example cloudera.mynet is just the hostname --fqdn of the headnode that will be running kerberos. (Note that mynet / MYNET could also be something like foo.bar / FOO.BAR.)
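For example, with a more realistic (hypothetical) domain name, the same entries would read:

```ini
[libdefaults]
 default_realm = FOO.BAR

[realms]
 FOO.BAR = {
  kdc = cloudera.foo.bar
  admin_server = cloudera.foo.bar
 }

[domain_realm]
 .foo.bar = FOO.BAR
 foo.bar = FOO.BAR
```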

Step 4: modify /var/kerberos/krb5kdc/kdc.conf

[kdcdefaults]
 kdc_ports = 88
 kdc_tcp_ports = 88

[realms]
 MYNET = {
  #master_key_type = aes256-cts
  master_key_type = aes256-cts-hmac-sha1-96
  max_life = 24h 10m 0s
  max_renewable_life = 30d 0h 0m 0s

  acl_file = /var/kerberos/krb5kdc/kadm5.acl
  dict_file = /usr/share/dict/words
  admin_keytab = /var/kerberos/krb5kdc/kadm5.keytab
  supported_enctypes = aes256-cts:normal aes128-cts:normal des3-hmac-sha1:normal arcfour-hmac:normal camellia256-cts:normal camellia128-cts:normal des-hmac-sha1:normal des-cbc-md5:normal des-cbc-crc:normal aes256-cts-hmac-sha1-96
 }

I specifically added aes256-cts-hmac-sha1-96 as the master key type and to the supported encryption types, plus the max_life and max_renewable_life properties.

But there is a chance Cloudera Manager might add this stuff as well.

Step 5: modify /var/kerberos/krb5kdc/kadm5.acl

*/admin@MYNET      *
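The single line above grants all permissions to every principal with an /admin instance. The ACL can also be more restrictive per principal; a hypothetical example (the permission letters mean a=add, d=delete, c=changepw, i=inquire, l=list):

```ini
root/admin@MYNET    *
joe/admin@MYNET     adcil
```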

Step 6: initialize the database

# kdb5_util create -r MYNET -s
Loading random data
Initializing database '/var/kerberos/krb5kdc/principal' for realm 'MYNET',
master key name 'K/M@MYNET'
You will be prompted for the database Master Password.
It is important that you NOT FORGET this password.
Enter KDC database master key: ******
Re-enter KDC database master key to verify: ******

Step 7: add master root/admin user

[root@rb-clouderahadoop2 krb5kdc]# kadmin.local
Authenticating as principal root/admin@MYNET with password.
kadmin.local:  addprinc root/admin
WARNING: no policy specified for root/admin@MYNET; defaulting to no policy
Enter password for principal "root/admin@MYNET": ******
Re-enter password for principal "root/admin@MYNET": ******
Principal "root/admin@MYNET" created.
kadmin.local:  ktadd -k /var/kerberos/krb5kdc/kadm5.keytab kadmin/admin
Entry for principal kadmin/admin with kvno 2, encryption type aes256-cts-hmac-sha1-96 added to keytab WRFILE:/var/kerberos/krb5kdc/kadm5.keytab.
Entry for principal kadmin/admin with kvno 2, encryption type aes128-cts-hmac-sha1-96 added to keytab WRFILE:/var/kerberos/krb5kdc/kadm5.keytab.
Entry for principal kadmin/admin with kvno 2, encryption type des3-cbc-sha1 added to keytab WRFILE:/var/kerberos/krb5kdc/kadm5.keytab.
Entry for principal kadmin/admin with kvno 2, encryption type arcfour-hmac added to keytab WRFILE:/var/kerberos/krb5kdc/kadm5.keytab.
Entry for principal kadmin/admin with kvno 2, encryption type camellia256-cts-cmac added to keytab WRFILE:/var/kerberos/krb5kdc/kadm5.keytab.
Entry for principal kadmin/admin with kvno 2, encryption type camellia128-cts-cmac added to keytab WRFILE:/var/kerberos/krb5kdc/kadm5.keytab.
Entry for principal kadmin/admin with kvno 2, encryption type des-hmac-sha1 added to keytab WRFILE:/var/kerberos/krb5kdc/kadm5.keytab.
Entry for principal kadmin/admin with kvno 2, encryption type des-cbc-md5 added to keytab WRFILE:/var/kerberos/krb5kdc/kadm5.keytab.
kadmin.local:  ktadd -kt /var/kerberos/krb5kdc/kadm5.keytab kadmin/changepw
Entry for principal kadmin/changepw with kvno 2, encryption type aes256-cts-hmac-sha1-96 added to keytab WRFILE:/var/kerberos/krb5kdc/kadm5.keytab.
Entry for principal kadmin/changepw with kvno 2, encryption type aes128-cts-hmac-sha1-96 added to keytab WRFILE:/var/kerberos/krb5kdc/kadm5.keytab.
Entry for principal kadmin/changepw with kvno 2, encryption type des3-cbc-sha1 added to keytab WRFILE:/var/kerberos/krb5kdc/kadm5.keytab.
Entry for principal kadmin/changepw with kvno 2, encryption type arcfour-hmac added to keytab WRFILE:/var/kerberos/krb5kdc/kadm5.keytab.
Entry for principal kadmin/changepw with kvno 2, encryption type camellia256-cts-cmac added to keytab WRFILE:/var/kerberos/krb5kdc/kadm5.keytab.
Entry for principal kadmin/changepw with kvno 2, encryption type camellia128-cts-cmac added to keytab WRFILE:/var/kerberos/krb5kdc/kadm5.keytab.
Entry for principal kadmin/changepw with kvno 2, encryption type des-hmac-sha1 added to keytab WRFILE:/var/kerberos/krb5kdc/kadm5.keytab.
Entry for principal kadmin/changepw with kvno 2, encryption type des-cbc-md5 added to keytab WRFILE:/var/kerberos/krb5kdc/kadm5.keytab.
kadmin.local:  exit

This is the user we will give to Cloudera Manager so it can take over managing kerberos.

Step 8: start services

systemctl start krb5kdc
systemctl start kadmin

Step 9: do the Enable security wizard in Cloudera Manager

This should be self-explanatory, but in summary:

  • Enable the four checkboxes on the first page of the wizard.
  • Next page, kdc = hostname --fqdn headnode, realm = MYNET (in our example). Leave other defaults.
  • Next page, select Manage krb5.conf through Cloudera Manager. Leave all defaults.
  • Next page, Username root/admin and password you typed in step 7.

The wizard will do its magic and hopefully succeed without problems.

May 7 2016

In case you are looking for a free alternative to Camtasia Studio or the many similar tools: one of my favorite tools of all time, ffmpeg, can do it for free!

The simplest thing that will work is ffmpeg -f gdigrab -framerate 10 -i desktop output.mkv (source). This already gives pretty good results (if you use an MKV container; FLV, for example, will give worse results).

HiDPI: Fix mouse pointer

gdigrab adds a mouse pointer to the video, but does not scale it according to HiDPI settings, so it will be extremely small. You can configure the mouse pointer to extra large to fix that. That pointer won't scale either, but at least you end up with a regular-size pointer in the video.

Optional: Use H264 codec

You can find more options here; I've settled on single-pass encoding using -c:v libx264 -preset ultrafast -crf 22.

ffmpeg -f gdigrab -framerate 30 -i desktop ^
       -c:v libx264 -preset ultrafast -crf 22 output.mkv

Optional: Include sound in the video

First execute ffmpeg -list_devices true -f dshow -i dummy; this will list your DirectShow devices (source). On my laptop this command outputs:

[dshow @ 00000000023224a0] DirectShow video devices (some may be both video and audio devices)
[dshow @ 00000000023224a0]  "USB2.0 HD UVC WebCam"
[dshow @ 00000000023224a0]     Alternative name "@device_pnp_\\?\usb#vid_04f2&pid_b3fd&mi_00#6&11eacec2&0&0000#{65e8773d-8f56-11d0-a3b9-00a0c9223196}\global"
[dshow @ 00000000023224a0]  "UScreenCapture"
[dshow @ 00000000023224a0]     Alternative name "@device_sw_{860BB310-5D01-11D0-BD3B-00A0C911CE86}\UScreenCapture"
[dshow @ 00000000023224a0] DirectShow audio devices
[dshow @ 00000000023224a0]  "Microphone (Realtek High Definition Audio)"
[dshow @ 00000000023224a0]     Alternative name "@device_cm_{33D9A762-90C8-11D0-BD43-00A0C911CE86}\wave_{1DDF1986-9476-451F-A6A4-7EBB5FB1D2AB}"

Now I know the device name I can use for audio is "Microphone (Realtek High Definition Audio)". Use it in the following parameters: ffmpeg -f dshow -i audio="Microphone (Realtek High Definition Audio)".

The end result

I ended up with capture-video.bat like this:

ffmpeg -f dshow -i audio="Microphone (Realtek High Definition Audio)" ^
       -f gdigrab -framerate 30 -i desktop ^
       -c:v libx264 -preset ultrafast -crf 22 output.mkv

This is a resulting video where I used this command; the resolution of the video is 3840x2160 and the HiDPI scale is set to 2.5.


Update 1> Add more keyframes for better editing

For this I use the following command to insert a keyframe every 25 frames (the closer to one, the larger the output file will be):

ffmpeg.exe -i %1 -qscale 0 -g 25 %2

The option -qscale 0 is for preserving the quality of the video.
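To relate the -g value to time: the interval between keyframes in seconds is the GOP size divided by the framerate, so -g 25 at 30 fps means a keyframe roughly every 0.83 seconds. A quick sanity check (just arithmetic, not invoking ffmpeg):

```shell
# keyframe interval in seconds = GOP size (-g) / framerate
g=25
fps=30
interval=$(awk -v g="$g" -v fps="$fps" 'BEGIN { printf "%.2f", g / fps }')
echo "a keyframe every $interval seconds"
```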

(Changing the container to .mov was probably not necessary, I tried this hoping that Adobe Premiere would support it, but it didn't!)

Update 2> Editing 4K on Windows 10...

I found the following tool for editing: Filmora, and (on my laptop) it was able to edit the footage smoothly. It supports GPU acceleration, but the additional keyframes really help with a smooth experience.

Once you get the hang of it (shortcut keys are your friend) it's pretty easy to cut & paste your videos.

Update 3> Support Adobe Premiere

As I discovered earlier, Adobe Premiere doesn't like MKV, but it also doesn't like 4:4:4 (yuv444p), the pixel format used by default (it seems). You can view such information using ffprobe <VIDEO FILE>. Anyway, it does seem to like yuv420p, so add -pix_fmt yuv420p to make it work in Premiere:

ffmpeg.exe -i input.mkv -qscale 0 -g 25 -pix_fmt yuv420p 
April 2 2016

A crazy idea: building a profiler/visualizer based on strace output. Just for fun. But who knows, there may even be something useful we can do with this..

The following image shows exactly such a visualization for a specific HTTP GET request (f.i., to (URL not accessible online)). The analysis in the image is based on the strace log output from the Apache HTTP server thread handling the request. Parameters for the strace call include -f and -F, so it covers basically everything the Apache worker thread does for itself. (If it were to start a child process, that would be included too.)

This request took 1700 milliseconds, which seems exceptionally slow, even for a very cheap micro compute instance. It is; I had to cheat a little by restarting Apache and MySQL in advance, to introduce some delays that make the graph more interesting. It's also normal for strace to slow down program execution.

I grouped all strace lines by process ID and their activity on a specific FD (file descriptor). Pairs like open()/close() or socket()/close() introduce a specific FD, and in between are likely functions operating on that FD (like read()/write()). I group these related strace lines together and call them "streams" in the above image.
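The grouping idea can be sketched in a few lines of awk. This is a simplified illustration, not the real tool: the sample lines and the "stream" output format are made up for the demo, and only the open()/close() pairing is shown.

```shell
# sketch: pair open()/close() per PID+FD and print the "stream" duration
sample=$(mktemp)
cat > "$sample" <<'EOF'
15757 1429444463.750111 open("/etc/hosts", O_RDONLY) = 3
15757 1429444463.750500 read(3, "...", 4096) = 170
15757 1429444463.751200 close(3) = 0
EOF
streams=$(awk '{
  pid = $1; ts = $2
  if ($3 ~ /^open\(/)  { start[pid "-" $NF] = ts }       # open() returns the new FD
  if ($3 ~ /^close\(/) {                                  # close(FD) ends the stream
    split($3, a, /[()]/); key = pid "-" a[2]
    if (key in start)
      printf "pid %s fd %s stream %.0f ms\n", pid, a[2], (ts - start[key]) * 1000
  }
}' "$sample")
echo "$streams"
rm -f "$sample"
```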

In the image you can see that the longest and slowest "stream" is 1241 milliseconds; this one is used for querying MySQL and is probably intentionally closed last, to allow re-use of the DB connection while processing the request. The three streams lower in the visualization follow each other sequentially and appear to be performing a lookup in /etc/hosts, followed by two DNS lookups.

Why are we doing this? (Other than because it's Awesome!)

This works for any strace output, but the idea originated while doing web development. I was working on a relatively complicated web application, divided into many sub-systems that communicate with each other mostly via REST calls, with lots of external calls to other systems. I wanted a view where I could see, regardless of which sub-system or which PHP code was executing, how the performance was for, specifically: I/O with files (i.e., for i18n/locale), scripts, SQL queries to MySQL and Oracle, REST API calls to systems X, Y & Z, Redis, Memcached, Solr, even shared memory and disk caching.

If only there was a tool really good at capturing that kind of I/O... ah yes, there is: strace! I switched jobs 7 months ago, before applying my strace tool to this code base, but I've applied it to similar complex applications with success.

We already had tools for (more traditional) profiling of PHP requests. Quite often the interpretation was difficult, probably because of a lot of nasty runtime reflection being used. Also, when you needed to follow a slow function (doing a REST call), it was a lot of effort to move the profiling to the other system (because of OAuth 1.0b (omg..), expired tokens, ..). Nothing unsolvable of course, but with strace you can just trace everything at once on a development environment (especially in Vagrant, which we used), spanning multiple vhosts. If it's just you on the VM, you can perhaps strace the main Apache PID recursively; I didn't try that, but I think it would work.

Products like NewRelic provide dashboards for requests where you can gain such deep insights "off the shelf", basically, but the downside is that it's not cheap. NewRelic f.i. hooks into Apache & PHP and has access to actual PHP function calls, SQL queries, etc. strace can't do that, because it only sits between the process(es) and the Linux kernel.

First, let's take one step back & properly parse the strace output..

It quickly became apparent that I couldn't get away with some trivial regex for parsing it, so I turned to bnfc and created the following BNF grammar to generate the parser. I was quite surprised that it took me less than a working day to find a tool for the job, learn it, and get the grammar right for some strace output.

The tool provides an autogenerated base class "Skeleton" which you can extend to create your own Visitor implementation. With this pattern it becomes quite easy to extract the meta-data you are interested in. I will show a simple example.

The grammar

I came up with the following grammar that bnfc uses to generate the parser. Reading it from top to bottom is more or less the way you can incrementally construct this kind of thing. You start really small: first chunking multiple strace lines into single strace lines, then chunking strace lines into Pid, Timestamp and (remaining) Line. Then you further specify a Pid, the Timestamp, the Line, etc., slowly making the grammar more fine-grained.

EStraceLines.          StraceLines         ::= [StraceLine];
EStraceLine.           StraceLine          ::= [Pid] [Timestamp] Line;

EPidStdOut.            Pid                 ::= "[pid " PidNumber "] ";
EPidOutput.            Pid                 ::= PidNumber [Whitespace] ;
EPidNumber.            PidNumber           ::= Integer;

ETimestamp.            Timestamp           ::= EpochElapsedTime;

ELine.                 Line                ::= Function "(" Params ")" [Whitespace] "=" [Whitespace] ReturnValue [TrailingData];
ELineUnfinished.       Line                ::= Function "(" Params "<unfinished ...>";
ELineContinued.        Line                ::= "<... " Function " resumed> )" [Whitespace] "=" [Whitespace] ReturnValue [TrailingData];
ELineExited.           Line                ::= "+++ exited with" [Whitespace] Integer [Whitespace] "+++" ;

EFunction.             Function            ::= Ident ;
EFunctionPrivate.      Function            ::= "_" Ident ;

EParams.               Params              ::= [Param];

EParamArray.           Param               ::= "[" [Param] "]" ;
EParamObject.          Param               ::= "{" [Param] "}" ;
EParamComment.         Param               ::= "/* " [CommentString] " */";
EParamInteger.         Param               ::= Number ;
EParamFlags.           Param               ::= [Flag] ;
EParamIdent.           Param               ::= Ident ;
EParamString.          Param               ::= String ;
EParamWhitespace.      Param               ::= Whitespace ;
EParamAddress.         Param               ::= Address ;
EParamDateTime.        Param               ::= DateYear "/" DateMonth "/" DateDay "-" TimeHour ":" TimeMinute ":" TimeSecond ;
EParamKeyValue.        Param               ::= Param "=" Param ;
EParamKeyValueCont.    Param               ::= "...";
EParamExpression.      Param               ::= Integer Operator Integer;
EParamFunction.        Param               ::= Function "(" [Param] ")" ;

EDateYear.             DateYear            ::= Integer ;
EDateMonth.            DateMonth           ::= Integer ;
EDateDay.              DateDay             ::= Integer ;
ETimeHour.             TimeHour            ::= Integer ;
ETimeMinute.           TimeMinute          ::= Integer ;
ETimeSecond.           TimeSecond          ::= Integer ;

EOperatorMul.          Operator            ::= "*";
EOperatorAdd.          Operator            ::= "+";

EEpochElapsedTime.     EpochElapsedTime    ::= Seconds "." Microseconds ;
ESeconds.              Seconds             ::= Integer ;
EMicroseconds.         Microseconds        ::= Integer ;

ECSString.             CommentString       ::= String ;
ECSIdent.              CommentString       ::= Ident ;
ECSInteger.            CommentString       ::= Integer ;

ENegativeNumber.       Number              ::= "-" Integer;
EPositiveNumber.       Number              ::= Integer;

EFlag.                 Flag                ::= Ident;
EFlagUmask.            Flag                ::= Integer;

ERetvalAddress.        ReturnValue         ::= Address ;
ERetvalNumber.         ReturnValue         ::= Number ;
ERetvalUnknown.        ReturnValue         ::= "?";

EAddress.              Address             ::= HexChar;

ETrailingDataConst.    TrailingData        ::= " " [Param] " (" [CommentString] ")";
ETrailingDataParams.   TrailingData        ::= " (" [Param] ")" ;

ESpace.                Whitespace          ::= " ";
ESpace4x.              Whitespace          ::= "    ";
ETab.                  Whitespace          ::= "    ";

terminator             CommentString       "" ;
terminator             Param               "" ;
terminator             Pid                 " " ;
terminator             Timestamp           " " ;
terminator             TrailingData        "" ;
terminator             Whitespace          "" ;

separator              CommentString       " " ;
separator              Flag                "|" ;
separator              Param               ", " ;
separator              Pid                 " " ;
separator              StraceLine          "";

token HexChar ('0' 'x' (digit | letter)*);

Given the above grammar, bnfc can parse this strace line 15757 1429444463.750111 poll([{fd=3, events=POLLIN|POLLPRI|POLLRDNORM|POLLRDBAND}], 1, 0) = 1 ([{fd=3, revents=POLLIN|POLLRDNORM}]) into an Abstract Syntax Tree.

[Abstract Syntax]

(EStraceLines [
        [(EPidOutput [(EPidNumber 15757)])] 
        [(ETimestamp [(EEpochElapsedTime
                         [(ESeconds 1429444463)]
                         [(EMicroseconds 750111)])])] 
            [(EFunction "poll")] 
            [(EParams [
                (EParamArray [
                    (EParamObject [
                        (EParamKeyValue (EParamIdent "fd") 
                                        (EParamInteger [(EPositiveNumber 3)])),
                        (EParamKeyValue (EParamIdent "events")
                                        (EParamFlags [
                                            (EFlag "POLLIN"),
                                            (EFlag "POLLPRI"),
                                            (EFlag "POLLRDNORM"),
                                            (EFlag "POLLRDBAND")]))])]), 
                (EParamInteger [(EPositiveNumber 1)]),
                (EParamInteger [(EPositiveNumber 0)])])]

            ESpace ESpace

            [(ERetvalNumber [(EPositiveNumber 1)])]

                    [(EParamObject [
                        (EParamKeyValue (EParamIdent "fd")
                                        (EParamInteger [(EPositiveNumber 3)])),
                        (EParamKeyValue (EParamIdent "revents")
                                        (EParamFlags [
                                            (EFlag "POLLIN"),
                                            (EFlag "POLLRDNORM")]))])])])

No matter how nested these lines get, it will parse them, as long as I didn't forget anything in the grammar. (So far it seems to be complete enough to parse everything.)

Visitor example

Using the BNF grammar, the above structure, and occasional peeking at the generated Skeleton base class, you can simply override methods in your own visitor to do something "useful". The following visitor is a less "useful" but simple example that outputs all the strings captured for strace lines containing the open() function, just to illustrate how you use this Visitor.

class OutputOpenVisitor : public Skeleton
{
    string timestamp;
    string function;
    string strings;
public:
    void visitEStraceLine(EStraceLine* p)
    {
        timestamp = "";
        function  = "";
        strings   = "";
        Skeleton::visitEStraceLine(p);  // visit children, filling the members above
        if (function == "open") {
            cout << timestamp << " " << function << " " << strings << endl;
        }
    }
    void visitEFunction(EFunction* p)
    {
        function = p->ident_;
        Skeleton::visitEFunction(p);
    }
    void visitEEpochElapsedTime(EEpochElapsedTime* p)
    {
        auto secs      = static_cast<ESeconds *>(p->seconds_);
        auto microsecs = static_cast<EMicroseconds *>(p->microseconds_);
        timestamp = to_elasticsearch_timestamp(secs, microsecs);
        Skeleton::visitEEpochElapsedTime(p);
    }
    void visitString(String x)
    {
        strings += x + " ";  // collect every string captured on the line
    }
};

You can find this example in the examples folder in the git repository here.
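The to_elasticsearch_timestamp() helper is not shown above; conceptually it just turns strace's -ttt epoch.microseconds timestamp into an ISO-8601 string that ElasticSearch understands. A rough shell equivalent (assuming GNU date; milliseconds truncated, not rounded):

```shell
# turn a strace -ttt timestamp (epoch.microseconds) into an ISO-8601 string
ts="1458147182.196711"
secs=${ts%.*}
micros=${ts#*.}
iso=$(printf '%s.%03dZ' "$(date -u -d "@$secs" +%Y-%m-%dT%H:%M:%S)" $((10#$micros / 1000)))
echo "$iso"
```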

After compiling this example into strace-output-visualizer:

# capture a strace log
trigen@firefly:/projects/strace-output-parser[master]> strace -f -F -ttt -s 512 -o test.log uptime
17:53:02 up 32 days, 22:44, 23 users,  load average: 2.39, 2.20, 2.12

# strace log contains stuff like
trigen@firefly:/projects/strace-output-parser[master]> head -n 10 test.log 
19151 1458147182.196711 execve("/usr/bin/uptime", ["uptime"], [/* 47 vars */]) = 0
19151 1458147182.197415 brk(0)          = 0x7c1000
19151 1458147182.197484 access("/etc/", F_OK) = -1 ENOENT (No such file or directory)
19151 1458147182.197555 mmap(NULL, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f45cd85e000
19151 1458147182.197618 access("/etc/", R_OK) = -1 ENOENT (No such file or directory)
19151 1458147182.197679 open("/etc/", O_RDONLY|O_CLOEXEC) = 3
19151 1458147182.197740 fstat(3, {st_mode=S_IFREG|0644, st_size=156161, ...}) = 0
19151 1458147182.197813 mmap(NULL, 156161, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7f45cd830000
19151 1458147182.197888 close(3)        = 0
19151 1458147182.197969 access("/etc/", F_OK) = -1 ENOENT (No such file or directory)

# pipe the log through the example program
trigen@firefly:/projects/strace-output-parser[master]> cat test.log | ./strace-output-parser 
2016-03-16T16:53:02.198Z open /etc/
2016-03-16T16:53:02.198Z open /lib/x86_64-linux-gnu/
2016-03-16T16:53:02.199Z open /lib/x86_64-linux-gnu/
2016-03-16T16:53:02.200Z open /sys/devices/system/cpu/online
2016-03-16T16:53:02.200Z open /usr/lib/locale/locale-archive
2016-03-16T16:53:02.200Z open /etc/localtime
2016-03-16T16:53:02.201Z open /proc/uptime
2016-03-16T16:53:02.202Z open /var/run/utmp
2016-03-16T16:53:02.273Z open /proc/loadavg

As opposed to a simple Visitor like this example, I parse all the lines, prepare a JSON representation for each line, and store that in ElasticSearch. This way selecting and filtering can be done afterwards. ElasticSearch is also a really fast solution in case you want to do more complex queries on your log.
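For illustration, the JSON document for one parsed line could look something like this; the field names are my own invention for the sketch, not something mandated by ElasticSearch or taken from the real visitor:

```shell
# hypothetical shape of one document produced per strace line
doc=$(printf '{"pid": %d, "@timestamp": "%s", "function": "%s", "strings": ["%s"]}' \
  19151 "2016-03-16T16:53:02.198Z" "open" "/etc/hosts")
echo "$doc"
```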

A Proof of concept for Web

This time, at the beginning of each request, I have PHP instruct a script to run strace on the current PHP script's pid (or rather the Apache worker's) and all its (virtual) threads and sub-processes. (If you were to track the request across the stack with "Cross application tracing", you could even combine all the relevant straces for a given request. I didn't implement this, again because I switched jobs. (Info on cross application tracing in NewRelic.) It is relatively easy to implement if you have a codebase where you can just make the change, like injecting a unique id for the current request into curl calls, for example.)

The following image and code shows how I capture straces from specific PHP requests, like the wordpress example I started this blog with. You can skip this part. Eventually these straces are linked to a specific request, ran through a slightly more elaborate Visitor class and fed into ElasticSearch for later processing.

(This also omits some other details with respect to generating a UUID for each request, and keeping track of which strace outputs are related to which request.)

Inject in your application 'header', i.e., top index.php:

register_shutdown_function(function () { touch("/tmp/strace-visualizer-test/done/" . getmypid()); });
$file = "/tmp/strace-visualizer-test/todo/" . getmypid();
while (file_exists($file)) { sleep(1); } // continue with the request when removed from todo folder

A separate long running process runs the following:

trigen@CppSe:~/strace-visualizer-test> cat run.ksh 
mkdir -p /tmp/strace-visualizer-test/todo
mkdir -p /tmp/strace-visualizer-test/done
while true; do
    find /tmp/strace-visualizer-test/todo/ -type f | \
        xargs -I{} -n 1 sh -c "strace -f -F -ttt -s 4096 -o \$(basename {}).strace -p \$(basename {}) & rm -rf {};"
    find /tmp/strace-visualizer-test/done/ -type f | \
        xargs -I{} -n 1 sh -c "(ps axufw | grep [s]trace.*\$(basename {}) | grep -v grep | awk -F ' ' '{print \$2}' | xargs -n 1 kill -1 ) & (sleep 1; rm -rf {};)"
    printf ".";
done

This way you end up with .strace files per process ID (they should probably include a timestamp too). The long-running process removes the file the client is checking from the todo folder as soon as strace has started. That way the client no longer blocks, and the interesting stuff gets captured. The shutdown handler instructs the long-running process to stop the capture (the Apache thread won't exit, it will wait for the next request).
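The handshake between the client and the long-running process can be simulated end to end without a real strace; this sketch uses a temp directory and a stub in place of the actual strace invocation:

```shell
# simulate the todo/done handshake with a stub instead of a real strace
base=$(mktemp -d)
mkdir -p "$base/todo" "$base/done"
pid=12345
touch "$base/todo/$pid"                       # client announces its PID
for f in "$base/todo"/*; do                   # "long running process" side
  echo "would run: strace -p $(basename "$f")" > "$base/$(basename "$f").strace"
  rm -f "$f"                                  # removing the file unblocks the client
done
[ ! -e "$base/todo/$pid" ] && echo "client unblocked"
cat "$base/$pid.strace"
```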

Final step, To ElasticSearch!

I use a Visitor and my strace parser to create JSON representations for the strace log lines. Containing the meta-data I need: file descriptors, an array with all strings, a timestamp that ElasticSearch can understand out of the box, etc.

To get to my previous example, I can use cat test.log | ./strace-output-parser elasticsearch localhost 9200 strace_index to import the parsed lines to ElasticSearch.

In the above example I filter with a plugin called "head" to make basically the same selection as with the simple visitor example. I also highlighted one specific line to show the JSON representation.

I used PHP to process the wordpress strace output from ElasticSearch and generated the visualization shown in the very first image of this blog post. You can view the HTML output here.

Hopefully this blog post was interesting to read, and maybe you can find some use for the strace parser yourself. If you do, please let me know, that would be fun to know.

December 17 2015

Most people are probably familiar with gdb, and Ribamar pointed out to me that there is also an ncurses frontend inside gdb. But in case anyone is interested, I learned that NetBeans also supports remote debugging. Even though it's not the most modern IDE in the world, and its vi emulation is cumbersome, it seems to have pretty good support for remote debugging. It will just log in to some machine via ssh (i.e., dev11 or a real cluster), issue gdb <something>, and wrap around it. If you make sure it knows where the source files are on your development machine, you can use all the step-debugging features.

The only downside is that loading up cmd in gdb takes a while, probably ~30 seconds. Still, it's a lot faster than debugging with print statements and recompiling. For cmsh it's already a lot faster, and on top of that you can issue a command multiple times via the REPL, so you can step-debug it multiple times within the same gdb session. (Beware though that you may need to connect again, as your connection may be lost.)

Example workflow

First, to show how it works with CMDaemon: my workflow is to create a unit test that fails, set a breakpoint in the unit test, and start the debugger.

Breakpoint set, followed by the debugger stopping execution at that point.

step-into example, select the function to step into ➀ and click the button highlighted with ➁.

There is also the F7 key to "step into", but be prepared to step into assembly a lot of the time (use CTRL+F7 to step out, and try again). You will jump into the -> operator, shared pointer dereferences, and std::string constructors before getting into the function you want. (Also note that the first time you step into assembly it will be very slow, but it will get faster the next few times.)

Wizard example to debug cmd unit test

Download from
chmod +x

Note that you want to set some bogus command like whoami.
Netbeans will try to be smart and clean your project directory for you
(and rebuild without using multiple cores, ..)

Note the working directory should be including src.
This is to help gdb later with finding source code.




There is one fix needed that the Wizard didn't apply properly for us.
Go to project properties, Build / Make, and set Build Result to the executable.
The remote debugger passes this value to gdb, and it's somehow empty by default.

Use ALT+SHIFT+o to Jump to the file containing the test.
Set a breakpoint there using CTRL+F8

The final thing we want to pass to gdb is the parameters for running our specific unittest.
In my example "${OUTPUT_PATH}" --unittests --gtest_filter=LineParserTest.empty.

You can use these settings to double check if everything is correct

December 1 2015

This is an addition to my previous blog post, How to debug XUL applications.

Last Friday I learned that you can use the DOM inspector on XUL applications as well. This is quite useful if you want to see which events are hidden behind a button, try out layout changes, etc. It is also quite fast; I don't notice any performance difference.

These instructions are taken from a very useful stackoverflow answer. Summarizing:

  • Add [XRE] EnableExtensionManager=1 to your application.ini if it isn't already.
  • If you are using the xulrunner app you already have the Error Console available (for info see my previous blog post for this). Type in it the following: window.openDialog("chrome://mozapps/content/extensions/extensions.xul", "", "chrome,dialog=no,resizable=yes");.
  • You will be presented with the Add-ons Manager; in there choose "Install Add-on From File..." and install the "DOM Inspector". (I have a local copy here: addon-6622-latest.xpi (downloaded from: here)).
  • You need to restart and start xulrunner with an additional -inspector flag.

One tip with the DOM inspector, if you use "File >> Inspect Chrome Document" and the list is huge, highlight an item with your mouse and press the End key on your keyboard. You likely need one at the bottom of the list because those are the XUL files loaded most recently.

Blog Comments (0)
November 25 2015

You can use Mozilla Firefox's (JavaScript) debugging on your XUL application using the Remote Debugging facility. This blog post could be useful as a HOWTO, because I was lucky enough to attempt this on the 3rd of July 2015. You see, had I tried this today, I would have failed, because things seem broken in newer versions of xulrunner (and Firefox), at least for the project I work on. The very fact that I struggled with setting this up today motivated me to dig into why it wasn't working, and made me think this might be useful to others.

I know everything in this blog post works on both CentOS 6.6 and Ubuntu 15.04. These steps (except for the xulrunner download) should be platform independent.

First get a slightly older xulrunner

You need a reasonably new xulrunner in order for Remote Debugging to work. I downloaded xulrunner version 38 at the time from The Mozilla Project Page (xulrunner-38.0.5.en-US.linux-x86_64.tar should be on their FTP somewhere, but you can also use this local copy hosted with this blog). I think we should cherish that version, because that one works.

The newest version is 41, which is also the last, because since then it has been integrated into Mozilla Firefox. I tried version 41, and also grabbed a recent Thunderbird/Firefox: all steps work, except that when you arrive at the "Connect Dialog", the clickable Main Process hyperlink (as shown in the image) is simply not there for you to click on.

Enable a debug listener in the code

In your application you need to start the debug listener, probably near the top of your main.js, by including the following lines.

Components.utils.import("resource://gre/modules/devtools/dbg-server.jsm");
if (!DebuggerServer.initialized) {
  DebuggerServer.init();
  // Don't specify a window type parameter below if "navigator:browser"
  // is suitable for your app.
  DebuggerServer.addBrowserActors();
}
var listener = DebuggerServer.createListener();
listener.portOrPath = '6000';
listener.open();

Also enable remote debugging in the preferences (probably defaults/preferences/prefs.js).

pref("devtools.debugger.remote-enabled", true);

If you forget to change this last preference you will get the following error.

JavaScript error: resource://gre/modules/commonjs/toolkit/loader.js -> resource://gre/modules/devtools/server/main.js, line 584: Error: Can't create listener, remote debugging disabled

Start the application with this xulrunner

Extract the xulrunner runtime to somewhere, e.g. /projects/xulrunner, and launch from your program's directory like this:

shell$> /projects/xulrunner/xulrunner application.ini

Attach debugger from Mozilla Firefox

Open a fairly recent Firefox browser and open the remote debugger, which is available via "Tools >> Web Developer >> Connect...".

If the above "Connect..." option is not available, you have to enable the same preference inside Firefox on the "about:config" page. Search for remote-enabled.

Then connect to localhost port 6000.

Your program will present you a dialog to accept the incoming connection from the debugger.

After accepting you can click to attach to "Main Process" (your program).

You should be presented with a debugger that will automatically break when it encounters the debugger keyword. You can also set breakpoints inside.

This can look similar to the following image where a call stack is shown, and you have your usual ways to inspect variables and perform step debugging with F10, F11, Shift+F11

I am convinced it should also be possible to make the JavaScript inspectable from the debugger's console, in order to get a REPL working there (for inspecting variables), but I didn't find out how this can be achieved. Using the Watch (and Auto) expressions you can already inspect everything.

Just beware that once you attach to the process your program can freeze up for a while as the debugger is loading all the javascript files.

Blog Comments (0)
September 13 2015

Today I published my first Android (Wear) app! The idea behind this clock is that it uses concentric circles to show time, without analog clock hands or a numeric time notation. This is something I have had on a bigger LCD screen at home for a while now, and since Android Wear has been around for a while too, I wanted to implement it for Android.

Some example visualizations

There is more theory behind the visualization, more on that on the website:

Android Watch Face

WebGL from the Website

You need to have WebGL support in your browser in order to see the following live-clock.

Some comments on Android Wear development

Android Wear is relatively new, and I have never read any book on the Android framework; luckily I had some Java experience. Overall I am impressed by the design of the framework, although it also confused the hell out of me on various occasions.

Some stuff I needed to realize or discover during development:

  • (Very basic:) an Activity only runs when it's the current activity.
  • If you need stuff running for longer than an Activity, you need Services.
  • In Java you don't have RAII like in C++/PHP. If you have handlers for threads etc. you should stop them in some onDestroy() method.
  • Packaging, creating the APK for use in e.g. the Play Store, was counterintuitive, at least for me. Follow the example project provided by Google closely in your project with respect to the Gradle files. I had a perfectly good working APK that came out of Android Studio, and it worked whenever I sent it to others, but it was not accepted by the Play Store.
  • There is already OpenGL support for Watch Faces. You need to extend Gles2WatchFaceService.
Blog Comments (0)
September 2 2015

I use CLion in this blog post, but it should be the same for any of the other editors. (PyCharm, PhpStorm, Intellij, etc.).

It took me a while to get a setup that works reasonably well for me at work, for what I expect is not a very uncommon setup. That's why I'm sharing it in a blog post.

The project I'm working on is quite big: ten years under development, a large codebase, and a complex build process. The debug build results in a 1.2 GiB executable, and the intermediate files generated by the compiler/linker are many and big. During a build a lot of files are removed/(re)created/generated, so in general a lot of I/O happens.

Our build machines are extremely powerful, so because of the build times it doesn't make sense to work on a local machine; that's why compiling happens on remote machines. I have worked remotely at a lot of companies, and usually I would simply use vim + a lot of plugins. However, nowadays I'm accustomed to the power IDEs can provide, primarily navigation-wise (jumping to classes, files, finding usages, etc.), and I simply don't want to work without a proper IDE.

This is my setup

I use an NFS mount (sshfs would suffice as well) where I mount from the remote to local, not the other way around, or compiling would be extremely slow. In my opinion, using file synchronization in these kinds of setups is too error-prone and difficult to get right.

As a side note: I've seen synchronization work moderately okay within a PHP project, but so far not in a C++ project, where intermediate/build files and libraries are large and scattered throughout the project folder.

In my previous blog post we fixed fsnotifier, as shown in the previous image, but this also causes a new problem.

Lots of I/O is slow over a network mount

During compiling I noticed my IDE would hang; the only cause could be that it's somehow flooded by the enormous number of lines of input it now receives from fsnotifier. Perhaps when the project files are on a local disk the IDE wouldn't hang, because simple I/O (even just checking file stats) doesn't have network overhead.

Solution: ignore as much (irrelevant) I/O as possible

Here I made the fsnotifier script, which at first was just a simple proxy (calling the real fsnotifier via ssh), more intelligent. It now filters out intermediate files generated by the compiler (.o, .d, and some other patterns).

function custom_filter
{
    typeset -n return_val=$1
    typeset cmd=$2  # i.e., DELETE/CREATE/CHANGE/...
    typeset file=$3 # i.e., /full/path/to/file

    # Ignore some files that are not interesting to my IDE
    if [[ $file =~ (cmd|mm)\.log$ ]]   || \
       [[ $file =~ deps.*\.d$ ]]       || \
       [[ $file =~ \.o$ ]]             || \
       [[ $file =~ \.o\. ]]            || \
       [[ $file =~ \.html$ ]]          || \
       [[ $file =~ core\.*([0-9])$ ]];
    then
        return_val=false  # filter this event out
    else
        return_val=true   # pass it through to the IDE
    fi
}
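As a self-contained, runnable illustration of the same filtering idea (the function name and exact patterns here are my own simplification, not the repository script):

```shell
#!/bin/bash
# Drop compiler output (.o, .d), generated HTML and core dumps from a
# stream of file paths on stdin; print everything else.
filter_paths() {
    local path
    while IFS= read -r path; do
        if [[ $path =~ \.o$ ]]        ||
           [[ $path =~ \.d$ ]]        ||
           [[ $path =~ \.html$ ]]     ||
           [[ $path =~ core\.[0-9]+$ ]]; then
            continue   # not interesting to the IDE
        fi
        printf '%s\n' "$path"
    done
}

# Example: only main.cpp and util.cc survive the filter.
printf 'main.cpp\nmain.o\ndeps/main.d\nindex.html\nutil.cc\n' | filter_paths
```

The same function can sit between the ssh-invoked fsnotifier and the IDE, which is essentially what the real script does.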


Download all source code from GitHub:

Alternative solutions

The fsnotifier script outputs its process id to /tmp/ and hooks two signals, so you can enable/disable it with a signal. Disabling will simply pause the forwarding of all updates from the real fsnotifier (which is invoked via ssh).

kill -SIGINT $(cat /tmp/ - pause all activity
kill -SIGHUP $(cat /tmp/ - continue all activity

Another extension you may find useful would be to make the build script touch a file, e.g. /path/to/project/DISABLE_FSNOTIFIER, and make the fsnotifier script pause itself (or behave differently) during the build, until it sees for example the ENABLE_FSNOTIFIER file.
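A minimal sketch of that sentinel-file approach (the function name is mine, and the wiring in the comment at the bottom is hypothetical):

```shell
#!/bin/bash
# Forward fsnotifier events from stdin to stdout, but swallow
# everything while the sentinel file (touched by the build) exists.
filter_while_building() {
    local sentinel=$1 line
    while IFS= read -r line; do
        [[ -e $sentinel ]] && continue   # build running: drop the event
        printf '%s\n' "$line"
    done
}

# Hypothetical wiring, reusing the ssh-invoked fsnotifier idea:
#   ssh -l ray DevelopmentMachine /path/to/fsnotifier64 \
#       | filter_while_building /path/to/project/DISABLE_FSNOTIFIER
```

Because the check happens per event, removing the sentinel file makes events flow again immediately, without restarting anything.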

Simply disabling fsnotifier altogether doesn't fix the problem: CLion would keep nagging occasionally about conflicts with files that have changed both on disk and in memory. And when auto-generated files are re-generated by the build, I want my IDE to reflect them immediately.

Fine-tuning your filter

The filter is just a bash/ksh function, so you can easily extend it with patterns appropriate to your project. The fun thing is that you can "killall -9 fsnotifier" and JetBrains will simply restart it, so there is no need to restart the IDE (and with that have it re-index your project). Debug the filters by tailing /tmp/fsnotifier-included.log and /tmp/fsnotifier-filtered.log.

Linux/Unix Comments (0)
August 14 2015

This should work for all their editors, PyCharm, Intellij, CLion, PhpStorm, Webstorm, etc.

The editor(s) use this tool to "subscribe" to changes on the filesystem. So if you change a file that's also open in a buffer in, for example, CLion, it will know it needs to reload that file from disk in order to show the latest changes.

Without this tool it falls back to periodically checking for changes, or checking when a specific file is activated; I don't know exactly, but it's slower anyway.

You probably started searching for a solution because you saw this error in the console or in a popup in the IDE:

trigen@baymax:/home/trigen/Downloads/clion-1.0.4> FSNOTIFIER_LOG_LEVEL=info ./bin/  
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=350m; support was removed in 8.0
[   3243]   WARN - om.intellij.util.ProfilingUtil - Profiling agent is not enabled. Add -agentlib:yjpagent to idea.vmoptions if necessary to profile IDEA. 
[  14166]   WARN - api.vfs.impl.local.FileWatcher - Project files cannot be watched (are they under network mount?)  <<<<<<<<<<<

Let's fix it by having the IDE run fsnotifier over SSH on the actual server.

I will use as an example a project named MyFirstProject mounted via NFS from a server named DevelopmentMachine:

sudo mount -t nfs DevelopmentMachine:/home/ray/projects/MyFirstProject /projects/MyFirstProject -o rw,user,hard,intr,tcp,vers=3,timeo=600,_netdev,nolock,exec

First you need fsnotifier on DevelopmentMachine, because that machine should be able to subscribe to the filesystem events. I downloaded and built the one from ThiefMaster/fsnotifier-remote.

Test it by starting it and adding the project like this (>>> is your input, <<< the output you get):

[ray@DevelopmentMachine linux]$ ./fsnotifier
>>> /home/ray/projects/MyFirstProject
>>> #
<<< #

Now it's watching; trigger some changes on something in that root (e.g., open vim hi.txt):

<<< /home/ray/projects/MyFirstProject/.hi.txt.swp
<<< /home/ray/projects/MyFirstProject/.hi.txt.swp
<<< /home/ray/projects/MyFirstProject/.hi.txt.swp

In this example I work locally on /projects/MyFirstProject, which is /home/ray/projects/MyFirstProject on the server. The super easy solution is to make sure your local path is exactly the same as the remote one. In my case I made a symlink, so I have /home/ray/projects/MyFirstProject on both my local and remote machine.
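The symlink trick can be sketched as follows. I demonstrate it in a scratch directory so it is runnable anywhere; on the real machines you would link the actual /home/ray/projects path to the mount point (likely with sudo):

```shell
#!/bin/bash
# Demonstrate "make the local path equal to the remote path" in a
# temp dir; substitute the real /projects and /home/ray paths.
root=$(mktemp -d)
mkdir -p "$root/projects/MyFirstProject"   # stand-in for the NFS mount
mkdir -p "$root/home/ray/projects"
ln -s "$root/projects/MyFirstProject" "$root/home/ray/projects/MyFirstProject"

# Both paths now refer to the same directory:
touch "$root/projects/MyFirstProject/hi.txt"
ls "$root/home/ray/projects/MyFirstProject"   # prints: hi.txt
rm -rf "$root"
```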

On the local machine I can run the above ./fsnotifier example through ssh; let's test that (make sure you have ssh keys configured correctly, otherwise you will get an authentication prompt):

trigen@baymax:/projects/fsnotifier-remote[master]> ssh -l ray DevelopmentMachine /home/ray/projects/fsnotifier-remote/linux/fsnotifier64
>>> /home/ray/projects/MyFirstProject

The fun thing is that the displayed files are actually already correct, so you don't need to do any mapping. Just make sure you launch your IDE on the /home/ray/projects/MyFirstProject folder. (The aforementioned fsnotifier-remote script should be able to handle this mapping, but I encountered multiple issues executing it under Linux and didn't feel like diving into its Python code.)

I created a local fsnotifier script with the following contents:

ssh -l ray DevelopmentMachine /home/ray/projects/fsnotifier-remote/linux/fsnotifier64

Then I told my IDE to use this wrapper (make sure it's executable with chmod +x)

trigen@baymax:/home/trigen/Downloads/clion-1.0.4/bin> vim
idea.filewatcher.executable.path=/projects/fsnotifier-remote/fsnotifier <<< add this line!

You can log the communication between the IDE and fsnotifier over ssh by inserting this in the fsnotifier wrapper script: strace -f -F -ttt -s 512 -o /tmp/fsnotifier-debug.log (put it before the ssh command). Then you can find stuff like this in the /tmp/fsnotifier-debug.log:

3722  1439468464.159229 read(4, "ROOTS\n", 16384) = 6
3722  1439468464.159644 read(4, "/home/ray/projects/MyFirstProject\n", 16384) = 28
3722  1439468464.160011 read(4, "/home/trigen/.clion10/system/extResources\n", 16384) = 42
3722  1439468464.160489 read(4, "#\n", 16384) = 2

Hope this approach will help others!

Linux/Unix Comments (2)

Ray Burgemeestre
February 23rd, 1984

C++, Linux, Webdev

Other interests:
Music, Art, Zen