Neither one nor Many

 
November 21 2016

Finally, I was able to attend this conference after missing out two years in a row, and it was great. So far it was the largest yet, with 600 attendees, and AFAIK Bjarne Stroustrup was present for the first time this year.

I went to Berlin with my girlfriend two days before the event so we had a chance to see the city. Even though the weather was very much what you would expect around this time of year (cloudy, rainy), we had a great time, especially renting bikes and sightseeing.


[Image caption] Brief moment of no rain.

Talks I attended... DAY 1

Opening Keynote - Bjarne Stroustrup

What is C++ and what will it become? It was a nice presentation showing the strengths of C++ and providing a little history here and there (like the code below). A funny quote from the presentation: "Only a computer scientist makes a copy, then destroys the original". The committee has a difficult task: making the C++ language less complex while the only thing it can actually do is add more to it; yet they still succeed (i.e., with auto, constexpr, ..).

int i; // 70's?
for (i=0; i<10; i++) a[i] = 0;
----------
for (int i=0; i<10; i++) a[i] = 0; // 80's? no declaration outside the for needed
----------
for (auto &el : a) el = 0; // mistakes like reading out of bounds are no longer possible
                           // ... nor typos like: for (int i=0; i<10; j++) {}

Boris Schäling asked "Scott Meyers retired from C++ a year ago; do we need to be worried about you?"; luckily we don't have to worry ;-). Bjarne answered that he has tried to quit C++ a few times in the past, but apparently he is not very good at it.

Learning and teaching Modern C++ - Arne Mertz

The speaker made an interesting point regarding some pitfalls: many C++ developers learned C first, then pointers, pointer arithmetic, C++03, C++11, etc., basically a "layered evolution". Modern C++, however, isn't a layered evolution; rather, it is a "moving target". Nowadays we prefer make_unique and unique_ptr, so why not postpone teaching new, delete, new[], delete[], pointer arithmetic, and so on when teaching Modern C++? The same goes for C-style arrays, which are more complex to teach than std::array. A sketch of this teaching order follows below.
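For instance (a minimal sketch of the idea, mine rather than code from the talk), a beginner can use dynamic allocation safely without ever having seen new or delete:

#include <memory>

// A beginner-friendly first example: ownership without raw new/delete.
struct Widget { int value = 0; };

int main()
{
    auto w = std::make_unique<Widget>(); // no new, no delete, no leak
    w->value = 42;
}   // the Widget is destroyed automatically here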

Actually kind of sad news: there are still schools in some countries where C++ is taught with the Turbo C++ compiler (which is extremely outdated; see this SO question from a few days ago). Other notes I scribbled down were to check out "clang-tidy" and to add "isocpp.org" to my RSS feeds.

Wouter van Ooijen--a professor teaching C++ in the context of embedded devices--made a good point: the order in which material is presented to students is the most difficult thing to get right. In most books on C++ the order doesn't make sense for embedded, which is why he creates his own material.

Implementation of a multithreaded compile-time ECS in C++14 - Vittorio Romeo

This one was quite interesting. Maybe it was just me, but at the beginning of the presentation it wasn't clear to me what an Entity Component System actually is; it became clear during the talk though. He walked us through the implementation: advanced templating, lambdas, bit fiddling, all quite interesting. Maybe a bit too much content for one presentation, but very impressive stuff. The room temperature during the presentation was extremely hot, which sometimes made it difficult to concentrate, and the talk went a bit over the scheduled time.

Some stuff I found interesting: the usage of sparse sets, and the use of proxy objects to make sure that certain methods of the library cannot be called at the wrong time.

ctx->step([&](auto& proxy)
    {
        // do something with proxy
    });

He went through a large list of features and how they are implemented.

Ranges v3 and microcontrollers, a revolution -- Odin Holmes

Quite an awesome talk, this one; the speaker is extremely knowledgeable on metaprogramming and embedded programming. His company works with devices with very little memory (just a few kilobytes), and this talk was very forward-looking. It included a crash course on the limitations of such devices: stack space is limited, and how do exceptions and interrupts play along with that?

He then started with a real hello-world demo for such a device and demonstrated how even that small program contained bugs and a lot of boilerplate. In the rest of the talk he showed how to improve it. For instance, parsing (dangerously) with scanf (you can overflow the buffer, so you need a "large enough" buffer up front... "And we all know that coming up with a size for a large enough buffer is easy, right?") can be replaced with a state machine known at compile time. Ranges can be applied to lazily evaluate the input, so that it consumes only minimal memory.

C++ Today - The Beast is back - Jon Kalb

Why was C/C++ successful? It was based on a proven track record, not a "purely theoretical language": high-level abstractions at low cost, following the zero-overhead principle. In other words: no slower than what you could achieve by coding the same feature by hand (i.e., vtables).

If you like a good story, and are curious about why there was a big red button on the IBM 360, or about the reasons behind the C++ "Dark Ages" (2000-2010) when very little seemed to happen, then this is the presentation to watch. Spoiler alert: cough Java cough; OOP was the buzzword at the time, it was "almost as fast", computers got faster and faster, and we "solved the performance issue"!

Interesting statements I jotted down: "Managed code optimizes the wrong thing (ease of programming)", and regarding Java's finally (try {} catch {} finally {}): "finally violates DRY". He then asked the audience a few times what DRY stands for, which was quite funny, as some people realized they were indeed repeating themselves; not everyone though, as someone else yelled "the opposite of WET". He also "pulled the age card" when discussing Alexander Stepanov (the author of the STL): "You kids think std::vector grew on trees!".

DAY 2

Functional reactive programming in C++ - Ivan Cukic

A talk of two parts; the first on functional programming: higher-order functions, purity, immutable state. Functional thinking = data transformation. He discussed referential transparency, i.e., replacing any function call with its value should produce the same outcome. Whether that holds can depend on your definition:

#include <iostream>

int foobar()
{
    std::cout << "Returning 42..." << '\n';
    return 42;
}

The function above, when used as int n = foobar();, can be replaced by 42, and that line of code would result in exactly the same thing (n containing 42); however, the console output won't be printed. Whether you consider the std::cout side effect to break referential transparency is up to you.

He continued with object thinking = no getters; ask the object to do it for you. "Objects tend to become immutable." I will have to review the presentation to get exactly what was meant by this.
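My reading of the "no getters" point, as a minimal sketch of my own (not code from the talk):

class Account
{
    int balance_ = 0;
public:
    // Tell, don't ask: the operation lives inside the object..
    void deposit(int amount) { balance_ += amount; }

    // ..instead of exposing state for callers to inspect and manipulate:
    // int balance() const { return balance_; } // avoided
};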

Next: reactive programming. If I remember correctly, this was his definition:

  • responds quickly
  • resilient to failure
  • responsive under workload
  • based on message-passing

Note: reacting, not replying; i.e., as with piping Linux shell commands, there is only one-way data flow. To conclude, some random notes I made during his talk are below.

  • He's writing a book on Functional programming in C++
  • flatmap from functional programming does [x, a], [y, b, c] -> x, a, y, b, c (see the sketch after this list).
  • His talk reminded me to look up the meaning of placing && behind a member function declaration.
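A minimal flatmap sketch of my own (not code from the talk), eagerly concatenating the mapped sequences:

#include <vector>

// flat_map: map each element to a sequence, then concatenate all sequences.
template <typename T, typename F>
auto flat_map(const std::vector<T>& input, F f)
{
    decltype(f(input.front())) output;
    for (const auto& el : input) {
        auto mapped = f(el);                                       // element -> sequence
        output.insert(output.end(), mapped.begin(), mapped.end()); // concatenate
    }
    return output;
}

int main()
{
    std::vector<std::vector<int>> v = {{1, 2}, {3, 4, 5}};
    auto flat = flat_map(v, [](const std::vector<int>& x) { return x; });
    // flat == {1, 2, 3, 4, 5}
}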

For the latter, see the example below, based on cppreference.com.

#include <iostream>

struct S {
    void f() & { std::cout << "lvalue\n"; }
    void f() && { std::cout << "rvalue\n"; }
};

int main(){
    S s;
    s.f();            // prints "lvalue"
    std::move(s).f(); // prints "rvalue"
    S().f();          // prints "rvalue"
}

The Speed Game: Automated Trading Systems in C++ - Carl Cook

This talk was probably one of the best-attended talks at the conference; the room was packed. Coming in slightly late, I had to sit on my knees for the entire talk, which was worth it. I think I liked this talk the most of all I attended: just the right mix of super interesting material and practical advice.

Coming from Amsterdam, where automated trading companies seem to kind of dominate C++, it has always been very mysterious to me what exactly it is they do. This felt like the first time the veil was lifted a little bit. It's just amazing to hear how far they go to get the lowest latency possible. Within the time it takes light to travel from the ground to the top of the Eiffel Tower, they can take an order, assess whether it's interesting or not, and place the order... times ten!

// Some practical advice: instead of the following..
if (checkForErrorA)
    handleErrorA();
else if (checkForErrorB)
    handleErrorB();
else if (checkForErrorC)
    handleErrorC();
else
    executeHotPath();

// ..aim for this: a single branch before the hot path
uint32_t errorFlags; // all error checks folded into one flags word up-front
if (errorFlags)
    handleError(errorFlags);
else
{
    ... hotpath
}

A really interesting talk to watch whenever it comes online. It shows the importance of optimizing hardware, bypassing the kernel completely in the hot path (staying 100% in user space, which includes network I/O, f.i. with OpenOnload), cache warming, being wary of signed/unsigned conversions, checking the assembly, inplace_function (the speaker's proposal, stdext::inplace_function<void(), 32>), benchmarking without the 'observer effect' by capturing network packets, and more.

One note regarding network I/O, for example: if you read a lot but very little of it is interesting to the hot path, you may negatively affect your cache. A solution is to offload all the reads to a different CPU, cherry-pick only the interesting reads, and hand those to the "hot" CPU.

Lock-free concurrent toolkit for hazard pointers and RCU - Michael Wong

Well, I was a bit tired at this point, so I cannot do the talk justice with a very thorough summary. Even if I could, it's better to watch it from Michael Wong himself, because the slides help a lot in understanding the story.

I did learn a few things; maybe the first lesson for me is to try to stay away from all of this... Still, aside from being super complicated, it's also an interesting topic, and good to know more about. The ABA problem: he had good slides that visualized, step by step, the challenge of updating data in a multithreaded situation while readers are still using it, all wrapped in a fun story about Schrödinger's cat (and zoo). The solutions discussed were hazard pointers and RCU (Read-Copy-Update).
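To make the ABA problem concrete, here is a minimal sketch of my own (not from the slides) of the classic place where it bites, a naive lock-free stack:

#include <atomic>

struct Node { int value; Node* next; };

std::atomic<Node*> head{nullptr};

Node* pop()
{
    Node* old_head = head.load();
    while (old_head && !head.compare_exchange_weak(old_head, old_head->next)) {
        // CAS failed; old_head was refreshed, retry
    }
    // Problem: between the load() and a successful CAS, another thread may
    // pop A, pop B, free A, and push a new node that happens to reuse A's
    // address. The CAS still succeeds (the pointers compare equal), but
    // old_head->next is stale, corrupting the stack. Hazard pointers and
    // RCU both prevent A's memory from being reused too early.
    return old_head;
}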

The gains you can get by starting late, using a grace period so you can batch multiple updates, are interesting to learn about: situations where "being lazy" actually pays off!

Lightning talks!

Surprise! They had secret lightning talks planned. To be honest, at first I thought one hour and forty minutes was a bit long for a Meeting C++ update/review, so this was a nice surprise. My favorite lightning talk was Michael Caisse reading from the standard as if it were a very exciting story; hilarious. Second was James McNellis' "function pointers all the way down" (like "turtles all the way down"; Bjarne actually also referenced this in his keynote). The remaining lightning talks, by Michael Wong, Jens Weller, Chandler Carruth, and Bjarne, were also very good. Bjarne's, on Concepts, was quite interesting: what makes a good concept? It has to have semantics specifying it, which in practice turns out to be an efficient design technique. Quite funny was his "onion principle" for abstractions (IIRC?): "you peel away layer by layer, and you cry more and more as you go along". Jens' talk was also really fun; it started with end-of-the-world scenarios and worked towards the future C++ standards.

C++ metaprogramming: evolution and future directions - Louis Dionne

The closing keynote was a really clear and relaxed presentation of how metaprogramming evolved, and in particular how boost::hana did. Another nice history lesson, in which Alexandrescu's Modern C++ Design, boost::mpl, boost::fusion, and the like all passed in review. He showed what you can do with boost::hana at compile time and at runtime. His talk really opened my eyes to constexpr, integral_constant, the differences between metaprogramming with types and with objects, and a lot more. It's amazing what his library can do. He argued that the world needs more metaprogramming, but less template metaprogramming, and concluded by sharing his view of the future.
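As a small taste of what "metaprogramming with objects" can mean, a minimal integral_constant example of my own (not from the keynote):

#include <type_traits>

// std::integral_constant wraps a compile-time value in a type, so "values"
// can be passed around as ordinary function arguments while the computation
// still happens entirely at compile time.
template <int N>
using int_c = std::integral_constant<int, N>;

template <int A, int B>
constexpr auto plus(int_c<A>, int_c<B>) { return int_c<A + B>{}; }

static_assert(plus(int_c<2>{}, int_c<3>{}).value == 5, "evaluated at compile time");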

The conference

There was a fun quiz, with really difficult puzzles (C++ programs) that had to be solved in under 3 minutes each. This was basically similar to peeling Bjarne's onion... but in a good way.

Between talks, lunch-break meetups were planned (20 minutes each, each with a specific topic). I attended two, and my view is that it's a great idea, but because people have to come from talks and leave on time to catch the next one, the time sometimes turned out way too short (or you missed a talk because its room had already filled up).

The organization was superb, as were the drinks and food, especially on the second day. The Andel's Hotel is a really good location, and so is the hotel itself (if you are lucky enough to get a room there). For me it was all really worth the money.

Personally I like to write down a summary for myself, but I hope this blog post was also fun to read for someone else!

September 6 2016

The following steps are to quickly test how this stuff (Kerberos on a Cloudera cluster) works.

Using my docker images (master, slave) and helper scripts on GitHub, it's easy to get Cloudera Manager running inside a few docker containers. Steps: get the most recent docker, install (GNU) screen, check out the repo, and in there do cd cloudera; bash start_all.sh. That should do it. Note that the image(s) require being able to invoke --privileged, and the scripts currently invoke sudo. After running the script you get something like this (full example output here):

CONTAINER ID        IMAGE                               COMMAND             CREATED             STATUS              PORTS                    NAMES
31e5ee6b7e65        rayburgemeestre/cloudera-slave:3    "/usr/sbin/init"    20 seconds ago      Up 17 seconds                                node003
f052c52b02bf        rayburgemeestre/cloudera-slave:3    "/usr/sbin/init"    25 seconds ago      Up 23 seconds                                node002
1a50df894f28        rayburgemeestre/cloudera-slave:3    "/usr/sbin/init"    30 seconds ago      Up 29 seconds       0.0.0.0:8888->8888/tcp   node001
54fd3c1cf93b        rayburgemeestre/cloudera-master:3   "/usr/sbin/init"    50 seconds ago      Up 48 seconds       0.0.0.0:7180->7180/tcp   cloudera

Running systemd inside is perhaps not really the way docker was designed to be used, but for simple experimentation this is fine. These images have not been designed to run in production, but perhaps with some more orchestration that would be possible.

Step 1: install Cloudera Manager

One caveat: because of the way docker controls /etc/resolv.conf, /etc/hostname, and /etc/hosts, these show up in the output of the mount command. The Cloudera Manager wizard does some parsing of this (I guess) and pre-fills some directory settings with values like:

/etc/hostname/<path dn>
/etc/resolv.conf/<path dn>
/etc/hosts/<path dn>

Just remove the two additional paths, and change the remaining one to <path dn> only. There are a few of these configuration parameters that get screwed up this way. (Checked up to and including CDH 5.8.)

Step 2: install kerberos packages on the headnode

docker exec -i -t cloudera /bin/bash # enter the docker container for the headnode

yum install krb5-server krb5-workstation krb5-libs

# ntp is already working

systemctl enable krb5kdc
systemctl enable kadmin

The configuration files need to be fixed first, so starting the services will not work yet.

Step 3: modify /etc/krb5.conf

Into something like:

[logging]
 default = FILE:/var/log/krb5libs.log
 kdc = FILE:/var/log/krb5kdc.log
 admin_server = FILE:/var/log/kadmind.log

[libdefaults]
 dns_lookup_realm = false
 ticket_lifetime = 24h
 renew_lifetime = 7d
 forwardable = true
 rdns = false
 default_realm = MYNET
 default_ccache_name = KEYRING:persistent:%{uid}

[realms]
 MYNET = {
  kdc = cloudera.mynet
  admin_server = cloudera.mynet
 }

[domain_realm]
 .mynet = MYNET
 mynet = MYNET

In this example cloudera.mynet is just the hostname --fqdn of the headnode, which will be running Kerberos. (Note that mynet / MYNET could also be something like foo.bar / FOO.BAR.)

Step 4: modify /var/kerberos/krb5kdc/kdc.conf

[kdcdefaults]
 kdc_ports = 88
 kdc_tcp_ports = 88

[realms]
 MYNET = {
  #master_key_type = aes256-cts

  master_key_type = aes256-cts-hmac-sha1-96
  max_life = 24h 10m 0s
  max_renewable_life = 30d 0h 0m 0s

  acl_file = /var/kerberos/krb5kdc/kadm5.acl
  dict_file = /usr/share/dict/words
  admin_keytab = /var/kerberos/krb5kdc/kadm5.keytab
  supported_enctypes = aes256-cts:normal aes128-cts:normal des3-hmac-sha1:normal arcfour-hmac:normal camellia256-cts:normal camellia128-cts:normal des-hmac-sha1:normal des-cbc-md5:normal des-cbc-crc:normal aes256-cts-hmac-sha1-96
 }

I specifically added aes256-cts-hmac-sha1-96 as the master key type and to the supported encryption types, plus the max_life and max_renewable_life properties.

But there is a chance Cloudera Manager might add this stuff as well.

Step 5: modify /var/kerberos/krb5kdc/kadm5.acl

*/admin@MYNET      *

Step 6: initialize the database

# kdb5_util create -r MYNET -s
Loading random data
Initializing database '/var/kerberos/krb5kdc/principal' for realm 'MYNET',
master key name 'K/M@MYNET'
You will be prompted for the database Master Password.
It is important that you NOT FORGET this password.
Enter KDC database master key: ******
Re-enter KDC database master key to verify: ******

Step 7: add master root/admin user

[root@rb-clouderahadoop2 krb5kdc]# kadmin.local
Authenticating as principal root/admin@MYNET with password.
kadmin.local:  addprinc root/admin
WARNING: no policy specified for root/admin@MYNET; defaulting to no policy
Enter password for principal "root/admin@MYNET": ******
Re-enter password for principal "root/admin@MYNET": ******
Principal "root/admin@MYNET" created.
kadmin.local:  ktadd -k /var/kerberos/krb5kdc/kadm5.keytab kadmin/admin
Entry for principal kadmin/admin with kvno 2, encryption type aes256-cts-hmac-sha1-96 added to keytab WRFILE:/var/kerberos/krb5kdc/kadm5.keytab.
Entry for principal kadmin/admin with kvno 2, encryption type aes128-cts-hmac-sha1-96 added to keytab WRFILE:/var/kerberos/krb5kdc/kadm5.keytab.
Entry for principal kadmin/admin with kvno 2, encryption type des3-cbc-sha1 added to keytab WRFILE:/var/kerberos/krb5kdc/kadm5.keytab.
Entry for principal kadmin/admin with kvno 2, encryption type arcfour-hmac added to keytab WRFILE:/var/kerberos/krb5kdc/kadm5.keytab.
Entry for principal kadmin/admin with kvno 2, encryption type camellia256-cts-cmac added to keytab WRFILE:/var/kerberos/krb5kdc/kadm5.keytab.
Entry for principal kadmin/admin with kvno 2, encryption type camellia128-cts-cmac added to keytab WRFILE:/var/kerberos/krb5kdc/kadm5.keytab.
Entry for principal kadmin/admin with kvno 2, encryption type des-hmac-sha1 added to keytab WRFILE:/var/kerberos/krb5kdc/kadm5.keytab.
Entry for principal kadmin/admin with kvno 2, encryption type des-cbc-md5 added to keytab WRFILE:/var/kerberos/krb5kdc/kadm5.keytab.
kadmin.local:  ktadd -kt /var/kerberos/krb5kdc/kadm5.keytab kadmin/changepw
Entry for principal kadmin/changepw with kvno 2, encryption type aes256-cts-hmac-sha1-96 added to keytab WRFILE:/var/kerberos/krb5kdc/kadm5.keytab.
Entry for principal kadmin/changepw with kvno 2, encryption type aes128-cts-hmac-sha1-96 added to keytab WRFILE:/var/kerberos/krb5kdc/kadm5.keytab.
Entry for principal kadmin/changepw with kvno 2, encryption type des3-cbc-sha1 added to keytab WRFILE:/var/kerberos/krb5kdc/kadm5.keytab.
Entry for principal kadmin/changepw with kvno 2, encryption type arcfour-hmac added to keytab WRFILE:/var/kerberos/krb5kdc/kadm5.keytab.
Entry for principal kadmin/changepw with kvno 2, encryption type camellia256-cts-cmac added to keytab WRFILE:/var/kerberos/krb5kdc/kadm5.keytab.
Entry for principal kadmin/changepw with kvno 2, encryption type camellia128-cts-cmac added to keytab WRFILE:/var/kerberos/krb5kdc/kadm5.keytab.
Entry for principal kadmin/changepw with kvno 2, encryption type des-hmac-sha1 added to keytab WRFILE:/var/kerberos/krb5kdc/kadm5.keytab.
Entry for principal kadmin/changepw with kvno 2, encryption type des-cbc-md5 added to keytab WRFILE:/var/kerberos/krb5kdc/kadm5.keytab.
kadmin.local:  exit

This is the user we will hand over to Cloudera so it can take over managing Kerberos.

Step 8: start services

systemctl start krb5kdc
systemctl start kadmin

Step 9: do the Enable security wizard in Cloudera Manager

This should be self-explanatory, but in summary:

  • Enable the four checkboxes on the first page of the wizard.
  • Next page: KDC = the hostname --fqdn of the headnode, realm = MYNET (in our example). Leave the other defaults.
  • Next page: select Manage krb5.conf through Cloudera Manager. Leave all defaults.
  • Next page: the username root/admin and the password you typed in step 7.

The wizard will do its magic and hopefully succeed without problems.

May 7 2016

In case you are looking for a free alternative to Camtasia Studio or the many other screen recorders: one of my favorite tools of all time, ffmpeg, can do it for free!

The simplest thing that will work is ffmpeg -f gdigrab -framerate 10 -i desktop output.mkv (source). This already gives pretty good results (if you use an MKV container; FLV, for example, will give worse results).

HiDPI: Fix mouse pointer

gdigrab adds a mouse pointer to the video, but does not scale it according to HiDPI settings, so it will be extremely small. You can configure the mouse pointer to be extra large to fix that. That pointer won't scale either, but at least you end up with a regular-size pointer in the video.

Optional: Use H264 codec

You can find more options here; I settled on single-pass encoding using -c:v libx264 -preset ultrafast -crf 22.

ffmpeg -f gdigrab -framerate 30 -i desktop ^
       -c:v libx264 -preset ultrafast -crf 22 output.mkv

Optional: Include sound in the video

First execute ffmpeg -list_devices true -f dshow -i dummy; this will list the DirectShow devices (source). On my laptop this command outputs:

[dshow @ 00000000023224a0] DirectShow video devices (some may be both video and audio devices)
[dshow @ 00000000023224a0]  "USB2.0 HD UVC WebCam"
[dshow @ 00000000023224a0]     Alternative name "@device_pnp_\\?\usb#vid_04f2&pid_b3fd&mi_00#6&11eacec2&0&0000#{65e8773d-8f56-11d0-a3b9-00a0c9223196}\global"
[dshow @ 00000000023224a0]  "UScreenCapture"
[dshow @ 00000000023224a0]     Alternative name "@device_sw_{860BB310-5D01-11D0-BD3B-00A0C911CE86}\UScreenCapture"
[dshow @ 00000000023224a0] DirectShow audio devices
[dshow @ 00000000023224a0]  "Microphone (Realtek High Definition Audio)"
[dshow @ 00000000023224a0]     Alternative name "@device_cm_{33D9A762-90C8-11D0-BD43-00A0C911CE86}\wave_{1DDF1986-9476-451F-A6A4-7EBB5FB1D2AB}"

Now I know the device name I can use for audio: "Microphone (Realtek High Definition Audio)". Use it in the following ffmpeg parameters: -f dshow -i audio="Microphone (Realtek High Definition Audio)".

The end result

I ended up with capture-video.bat like this:

ffmpeg -f dshow -i audio="Microphone (Realtek High Definition Audio)" ^
       -f gdigrab -framerate 30 -i desktop ^
       -c:v libx264 -preset ultrafast -crf 22 output.mkv

This is a resulting video where I used this command; the resolution of the video is 3840x2160 and the HiDPI scale is set to 2.5.

 

Update 1> Add more keyframes for better editing

For this I use the following command, to insert a keyframe every 25 frames (the closer this value gets to one, the larger the output file will be):

ffmpeg.exe -i %1 -qscale 0 -g 25 %2

The option -qscale 0 is for preserving the quality of the video.

(Changing the container to .mov was probably not necessary, I tried this hoping that Adobe Premiere would support it, but it didn't!)

Update 2> Editing 4K on Windows 10...

I found the following tool for editing: Filmora, and (on my laptop) it was able to smoothly edit the footage. It supports GPU acceleration, but the additional keyframes really help with a smooth experience.

Once you get the hang of it (shortcut keys are your friend) it's pretty easy to cut & paste your videos.

Update 3> Support Adobe Premiere

As I discovered earlier, Adobe Premiere doesn't like MKV, but it also doesn't like 4:4:4 (yuv444p), the pixel format used by default (it seems). You can view such information using ffprobe <VIDEO FILE>. Anyway, Premiere seems to like yuv420p, so add -pix_fmt yuv420p to make it work:

ffmpeg.exe -i input.mkv -qscale 0 -g 25 -pix_fmt yuv420p output.mov 
April 2 2016

A crazy idea: building a profiler/visualizer based on strace output. Just for fun. But who knows, there may even be something useful we can do with this...

The following image shows exactly such a visualization for a specific HTTP GET request (f.i., to http://default-wordpress.cppse.nl/wp-admin/index.php (URL not accessible online)). The analysis in the image is based on the strace log output from the Apache HTTP server thread handling the request. Parameters for the strace call include -f and -F, so it includes basically everything the Apache worker thread does for itself. (If it were to start a child process, that would be included too.)

This request took 1700 milliseconds, which seems exceptionally slow, even for a very cheap micro compute instance. It is: I cheated a little by restarting Apache and MySQL in advance, to introduce some delays that make the graph more interesting. It's still normal, though, for strace to slow down program execution.

I grouped all strace lines by process ID and by their activity on a specific FD (file descriptor). Pairs like open()/close() or socket()/close() delimit a specific FD, and in between are likely functions operating on that FD (like read()/write()). I grouped these related strace lines together and called them "streams" in the above image.

In the image you can see that the longest and slowest "stream" takes 1241 milliseconds; this one is used for querying MySQL, and it is probably intentionally closed last to allow re-use of the DB connection while processing the request. The three streams lower in the visualization follow each other sequentially and appear to perform a lookup in /etc/hosts, followed by two DNS lookups directed at 8.8.4.4.
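The grouping idea itself boils down to something like the following sketch (the names are mine, and the real implementation differs): collect events per (pid, fd) pair, where open()/socket() starts a stream and close() ends it.

#include <map>
#include <string>
#include <utility>
#include <vector>

// One "stream": all strace events on the same (pid, fd) pair between
// open()/socket() and the matching close(). Its duration is then
// last.timestamp - first.timestamp.
struct Event { double timestamp; std::string function; };

std::map<std::pair<int, int>, std::vector<Event>> streams; // key: (pid, fd)

void on_event(int pid, int fd, const Event& e)
{
    streams[{pid, fd}].push_back(e);
}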

Why are we doing this? (Other than because it's Awesome!)

This works for any strace output, but the idea originated while I was doing web development, on a relatively complicated web application that was divided into many sub-systems communicating with each other mostly via REST calls. All these systems made lots of external calls to other systems, and I wanted a view where, regardless of which sub-system or which PHP code was being executed, I could see the performance of, specifically: I/O with files (i.e., for i18n/locale) and scripts, SQL queries to MySQL and Oracle, REST API calls to systems X, Y & Z, Redis, Memcached, Solr, even shared memory, and disk caching.

If only there were a tool really good at capturing that kind of I/O... ah yeah, there is: strace! I switched jobs 7 months ago, before applying my strace tool to that code-base, but I've applied it to similarly complex applications with success.

We already had tools for (more traditional) profiling of PHP requests. Quite often the interpretation was difficult, probably because of a lot of nasty runtime reflection being used. Also, when you needed to follow a slow function (doing a REST call), it was a lot of effort to move the profiling efforts to the other system (because of OAuth 1.0b (omg..), expired tokens, ..). Nothing unsolvable, of course, but with strace you can just trace everything at once on a development environment (especially in Vagrant, which we used), spanning multiple vhosts. If it's just you on the VM, perhaps you can strace the main Apache PID recursively; I didn't try that, but I think it would work.

Products like NewRelic provide dashboards for requests where you can gain such deep insights basically "off the shelf", but the downside is that it's not cheap. NewRelic, f.i., hooks into Apache & PHP and has access to actual PHP function calls, SQL queries, etc. strace can't do that, because it only sits between the process(es) and the Linux kernel.

First, let's take one step back & properly parse the strace output...

It quickly became apparent that I couldn't get away with some trivial regex for parsing it, so I turned to bnfc and created the following BNF grammar to generate the parser. I was quite surprised how easy this was: it took me less than a working day to find a tool for the job, learn it, and get the grammar right for some strace output.

With this tool you are provided with an autogenerated base class "Skeleton", which you can extend to create your own Visitor implementation. With this pattern it becomes quite easy to extract the meta-data you are interested in. I will show a simple example.

The grammar

I came up with the following grammar, which bnfc uses to generate the parser. Reading it from top to bottom more or less follows the way you can incrementally construct this kind of thing. You start really small: first chunking the input into separate strace-lines, then chunking each strace-line into Pid, Timestamp, and the (remaining) Line. Then you further specify what a Pid looks like, what a Timestamp consists of, and so on, slowly making the grammar more fine-grained.

EStraceLines.          StraceLines         ::= [StraceLine];
EStraceLine.           StraceLine          ::= [Pid] [Timestamp] Line;

EPidStdOut.            Pid                 ::= "[pid " PidNumber "] ";
EPidOutput.            Pid                 ::= PidNumber [Whitespace] ;
EPidNumber.            PidNumber           ::= Integer;

ETimestamp.            Timestamp           ::= EpochElapsedTime;

ELine.                 Line                ::= Function "(" Params ")" [Whitespace] "=" [Whitespace] ReturnValue [TrailingData];
ELineUnfinished.       Line                ::= Function "(" Params "<unfinished ...>";
ELineContinued.        Line                ::= "<... " Function " resumed> )" [Whitespace] "=" [Whitespace] ReturnValue [TrailingData];
ELineExited.           Line                ::= "+++ exited with" [Whitespace] Integer [Whitespace] "+++" ;

EFunction.             Function            ::= Ident ;
EFunctionPrivate.      Function            ::= "_" Ident ;

EParams.               Params              ::= [Param];

EParamArray.           Param               ::= "[" [Param] "]" ;
EParamObject.          Param               ::= "{" [Param] "}" ;
EParamComment.         Param               ::= "/* " [CommentString] " */";
EParamInteger.         Param               ::= Number ;
EParamFlags.           Param               ::= [Flag] ;
EParamIdent.           Param               ::= Ident ;
EParamString.          Param               ::= String ;
EParamWhitespace.      Param               ::= Whitespace ;
EParamAddress.         Param               ::= Address ;
EParamDateTime.        Param               ::= DateYear "/" DateMonth "/" DateDay "-" TimeHour ":" TimeMinute ":" TimeSecond ;
EParamKeyValue.        Param               ::= Param "=" Param ;
EParamKeyValueCont.    Param               ::= "...";
EParamExpression.      Param               ::= Integer Operator Integer;
EParamFunction.        Param               ::= Function "(" [Param] ")" ;

EDateYear.             DateYear            ::= Integer ;
EDateMonth.            DateMonth           ::= Integer ;
EDateDay.              DateDay             ::= Integer ;
ETimeHour.             TimeHour            ::= Integer ;
ETimeMinute.           TimeMinute          ::= Integer ;
ETimeSecond.           TimeSecond          ::= Integer ;

EOperatorMul.          Operator            ::= "*";
EOperatorAdd.          Operator            ::= "+";

EEpochElapsedTime.     EpochElapsedTime    ::= Seconds "." Microseconds ;
ESeconds.              Seconds             ::= Integer ;
EMicroseconds.         Microseconds        ::= Integer ;

ECSString.             CommentString       ::= String ;
ECSIdent.              CommentString       ::= Ident ;
ECSInteger.            CommentString       ::= Integer ;

ENegativeNumber.       Number              ::= "-" Integer;
EPositiveNumber.       Number              ::= Integer;

EFlag.                 Flag                ::= Ident;
EFlagUmask.            Flag                ::= Integer;

ERetvalAddress.        ReturnValue         ::= Address ;
ERetvalNumber.         ReturnValue         ::= Number ;
ERetvalUnknown.        ReturnValue         ::= "?";

EAddress.              Address             ::= HexChar;

ETrailingDataConst.    TrailingData        ::= " " [Param] " (" [CommentString] ")";
ETrailingDataParams.   TrailingData        ::= " (" [Param] ")" ;

ESpace.                Whitespace          ::= " ";
ESpace4x.              Whitespace          ::= "    ";
ETab.                  Whitespace          ::= "    ";

terminator             CommentString       "" ;
terminator             Param               "" ;
terminator             Pid                 " " ;
terminator             Timestamp           " " ;
terminator             TrailingData        "" ;
terminator             Whitespace          "" ;

separator              CommentString       " " ;
separator              Flag                "|" ;
separator              Param               ", " ;
separator              Pid                 " " ;
separator              StraceLine          "";

token HexChar ('0' 'x' (digit | letter)*);

Given the above grammar, bnfc can parse the strace line 15757 1429444463.750111 poll([{fd=3, events=POLLIN|POLLPRI|POLLRDNORM|POLLRDBAND}], 1, 0) = 1 ([{fd=3, revents=POLLIN|POLLRDNORM}]) into an Abstract Syntax Tree.

[Abstract Syntax]

(EStraceLines [
    (EStraceLine 
        [(EPidOutput [(EPidNumber 15757)])] 
        [(ETimestamp [(EEpochElapsedTime
                         [(ESeconds 1429444463)]
                         [(EMicroseconds 750111)])])] 
        [(ELine 
            [(EFunction "poll")] 
            [(EParams [
                (EParamArray [
                    (EParamObject [
                        (EParamKeyValue (EParamIdent "fd") 
                                        (EParamInteger [(EPositiveNumber 3)])),
                        (EParamKeyValue (EParamIdent "events")
                                        (EParamFlags [
                                            (EFlag "POLLIN"),
                                            (EFlag "POLLPRI"),
                                            (EFlag "POLLRDNORM"),
                                            (EFlag "POLLRDBAND")]))])]), 
                (EParamInteger [(EPositiveNumber 1)]),
                (EParamInteger [(EPositiveNumber 0)])])]

            ESpace ESpace

            [(ERetvalNumber [(EPositiveNumber 1)])]

            [(ETrailingDataParams 
                [(EParamArray 
                    [(EParamObject [
                        (EParamKeyValue (EParamIdent "fd")
                                        (EParamInteger [(EPositiveNumber 3)])),
                        (EParamKeyValue (EParamIdent "revents")
                                        (EParamFlags [
                                            (EFlag "POLLIN"),
                                            (EFlag "POLLRDNORM")]))])])])
            ]
          )
        ]
     )
])

No matter how nested these lines get, it will parse them, as long as I didn't forget anything in the grammar. (So far it seems complete enough to parse everything.)

Visitor example

Using the BNF grammar, the structure above, and occasional peeking at the generated Skeleton base class, you can simply override methods in your own visitor to do something "useful". The following visitor is a less "useful" but simple example that outputs all the strings captured for strace lines containing the open() function, just to illustrate how you use this Visitor.

#include <iostream>
#include <string>

using namespace std;

// Skeleton is the visitor base class generated by bnfc.
class OutputOpenVisitor : public Skeleton
{
    string timestamp;
    string function;
    string strings;
public:
    void visitEStraceLine(EStraceLine* p)
    {
        timestamp = "";
        function  = "";
        strings   = "";
        Skeleton::visitEStraceLine(p);
        if (function == "open") {
            cout << timestamp << " " << function << " " << strings << endl;
        }
    }
    void visitEFunction(EFunction* p)
    {
        function = p->ident_;
        Skeleton::visitEFunction(p);
    }
    void visitEEpochElapsedTime(EEpochElapsedTime* p)
    {
        auto secs      = static_cast<ESeconds*>(p->seconds_);
        auto microsecs = static_cast<EMicroseconds*>(p->microseconds_);
        // to_elasticsearch_timestamp is a small helper (not shown) that
        // formats seconds/microseconds into an ElasticSearch-friendly string.
        timestamp = to_elasticsearch_timestamp(secs, microsecs);
        Skeleton::visitEEpochElapsedTime(p);
    }
    void visitString(String x)
    {
        strings.append(x);
        Skeleton::visitString(x);
    }
};

You can find this example in the examples folder in the git repository here.

After compiling this example into strace-output-parser:

# capture a strace log
trigen@firefly:/projects/strace-output-parser[master]> strace -f -F -ttt -s 512 -o test.log uptime
17:53:02 up 32 days, 22:44, 23 users,  load average: 2.39, 2.20, 2.12

# strace log contains stuff like
trigen@firefly:/projects/strace-output-parser[master]> head -n 10 test.log 
19151 1458147182.196711 execve("/usr/bin/uptime", ["uptime"], [/* 47 vars */]) = 0
19151 1458147182.197415 brk(0)          = 0x7c1000
19151 1458147182.197484 access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file or directory)
19151 1458147182.197555 mmap(NULL, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f45cd85e000
19151 1458147182.197618 access("/etc/ld.so.preload", R_OK) = -1 ENOENT (No such file or directory)
19151 1458147182.197679 open("/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 3
19151 1458147182.197740 fstat(3, {st_mode=S_IFREG|0644, st_size=156161, ...}) = 0
19151 1458147182.197813 mmap(NULL, 156161, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7f45cd830000
19151 1458147182.197888 close(3)        = 0
19151 1458147182.197969 access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file or directory)

# pipe the log through the example program
trigen@firefly:/projects/strace-output-parser[master]> cat test.log | ./strace-output-parser 
2016-03-16T16:53:02.198Z open /etc/ld.so.cache
2016-03-16T16:53:02.198Z open /lib/x86_64-linux-gnu/libprocps.so.3
2016-03-16T16:53:02.199Z open /lib/x86_64-linux-gnu/libc.so.6
2016-03-16T16:53:02.200Z open /sys/devices/system/cpu/online
2016-03-16T16:53:02.200Z open /usr/lib/locale/locale-archive
2016-03-16T16:53:02.200Z open /etc/localtime
2016-03-16T16:53:02.201Z open /proc/uptime
2016-03-16T16:53:02.202Z open /var/run/utmp
2016-03-16T16:53:02.273Z open /proc/loadavg

As opposed to a simple visitor like this example, I parse all the lines, prepare a JSON representation for each line, and store that in ElasticSearch. This way selecting and filtering can be done afterwards. ElasticSearch is also a really fast solution in case you want to do more complex queries on your log.

A Proof of concept for Web

This time, at the beginning of each request, I have PHP instruct a script to run strace on the current PHP script's pid (or rather the Apache worker's) and all its (virtual) threads and sub-processes. (If I were to track the request across the stack with "cross application tracing", you could even combine all the relevant straces for a given request. I didn't implement this, again because I switched jobs. (Info on cross application tracing in NewRelic.) It is relatively easy to implement if you have a codebase where you can just make the change, like injecting a unique id for the current request into each curl call, for example.)

The following image and code show how I capture straces from specific PHP requests, like the wordpress example I started this blog post with. You can skip this part. Eventually these straces are linked to a specific request, run through a slightly more elaborate visitor class, and fed into ElasticSearch for later processing.

(This also omits some other details with respect to generating a UUID for each request and keeping track of which strace outputs are related to which request.)

Inject this in your application's 'header', i.e., at the top of index.php:

register_shutdown_function(function () { touch("/tmp/strace-visualizer-test/done/" . getmypid()); });
$file = "/tmp/strace-visualizer-test/todo/" . getmypid();
touch($file);
while (file_exists($file)) { sleep(1); } // continue with the request when removed from todo folder

A separate long running process runs the following:

trigen@CppSe:~/strace-visualizer-test> cat run.ksh 
#!/bin/ksh93
mkdir -p /tmp/strace-visualizer-test/todo
mkdir -p /tmp/strace-visualizer-test/done
while true; do
    find /tmp/strace-visualizer-test/todo/ -type f | \
        xargs -I{} -n 1 sh -c "strace -f -F -ttt -s 4096 -o \$(basename {}).strace -p \$(basename {}) & rm -rf {};"
    find /tmp/strace-visualizer-test/done/ -type f | \
        xargs -I{} -n 1 sh -c "(ps axufw | grep [s]trace.*\$(basename {}) | grep -v grep | awk -F ' ' '{print \$2}' | xargs -n 1 kill -1 ) & (sleep 1; rm -rf {};)"
    printf ".";
done

This way you end up with .strace files per process ID (the filename should probably include a timestamp too). The long-running process removes the file the client polls from the todo folder as soon as it has started strace; that way the client no longer blocks, and the interesting stuff is captured. A shutdown handler is used to instruct the long-running process to stop the capture (the Apache thread won't exit; it will wait for the next request).

Final step: to ElasticSearch!

I use a visitor and my strace parser to create JSON representations of the strace log lines, containing the meta-data I need: file descriptors, an array with all strings, a timestamp that ElasticSearch understands out of the box, etc.

To get back to my previous example, I can use cat test.log | ./strace-output-parser elasticsearch localhost 9200 strace_index to import the parsed lines into ElasticSearch.

In the above example I filter with a plugin called "head", to make basically the same selection as I did with the simple visitor example. I also highlighted one specific line to show its JSON representation.

I used PHP to process the wordpress strace output from ElasticSearch and generated the visualization shown in the very first image of this blog post. You can view the HTML output here.

Hopefully this blog post was interesting to read, and maybe you will find some use for the strace parser yourself. If you do, please let me know; that would be fun to hear.

December 17 2015

Most people are probably familiar with gdb, and Ribamar pointed out to me that there is also an ncurses frontend inside gdb. But in case anyone is interested, I learned that NetBeans also supports remote debugging. Even though it's not the most modern IDE in the world, and its vi emulation is cumbersome, it seems to have pretty good support for remote debugging. It will simply log in to some machine via ssh (i.e., dev11 or a real cluster), issue gdb <something>, and wrap around it. If you make sure it knows where the source files are on your development machine, you can use all the step-debugging features.

The only downside is that loading up cmd in gdb takes a while, probably ~30 seconds. Still, it's a lot faster than debugging with print statements and recompiling. For cmsh it loads a lot faster, and on top of that you can issue a command multiple times via the REPL, so you can step-debug it multiple times within the same gdb session. (Beware though that you probably need to connect again, as your connection may get lost.)

Example workflow

First, to show off how it works with CMDaemon: my workflow is to create a unit test that fails, set a breakpoint in the unit test, and start the debugger.


Breakpoint set, followed by the debugger stopping execution at that point.


Step-into example: select the function to step into ➀ and click the button highlighted with ➁.

There is also the F7 key to "step into", but be prepared to step into assembly a lot of times (use CTRL+F7 to step out, and try again). You will jump into the -> operator, shared pointer dereferences, std::string constructors, before getting into the function you want. (Also note that the first time you step into assembly it will be very slow, but it will get faster the next few times).

Wizard example to debug cmd unit test


Download from https://netbeans.org/downloads/
chmod +x netbeans-8.1-cpp-linux-x64.sh
./netbeans-8.1-cpp-linux-x64.sh


      
Note that you want to set some bogus command like whoami.
NetBeans will try to be smart and clean your project directory for you
(and rebuild without using multiple cores, ..).


 
Note that the working directory should include src;
this helps gdb find the source code later.


 


   

 

 
There is one fix needed that the wizard didn't set properly for us:
go to project properties, Build / Make, and set Build Result to the executable.
The remote debugger will pass this value to gdb, and it's somehow empty by default.



 
Use ALT+SHIFT+O to jump to the file containing the test,
and set a breakpoint there using CTRL+F8.



The final thing we want to pass to gdb is the parameters for running our specific unit test.
In my example: "${OUTPUT_PATH}" --unittests --gtest_filter=LineParserTest.empty.





You can use these settings to double-check that everything is correct.

December 1 2015

In addition to my previous blog post How to debug XUL applications.

Last Friday I learned that you can use the DOM Inspector on XUL applications as well. This is quite useful if you want to see which events are hidden behind a button, try out layout changes, etc. It is also quite fast; I don't notice any performance difference.

These instructions are taken from a very useful stackoverflow answer. Summarizing:

  • Add [XRE] EnableExtensionManager=1 to your application.ini if it isn't there already.
  • If you are using the xulrunner app you already have the Error Console available (for info see my previous blog post). Type the following in it: window.openDialog("chrome://mozapps/content/extensions/extensions.xul", "", "chrome,dialog=no,resizable=yes");.
  • You will be presented with the Add-ons Manager; in there choose "Install Add-on From File..." and install the downloaded "DOM Inspector". (I have a local copy here: addon-6622-latest.xpi (downloaded from: here).)
  • You need to restart, and start xulrunner with an additional -inspector flag.

One tip for the DOM Inspector: if you use "File >> Inspect Chrome Document" and the list is huge, highlight an item with your mouse and press the End key on your keyboard. You likely need one at the bottom of the list, because those are the XUL files loaded most recently.

November 25 2015

You can use Mozilla Firefox's (JavaScript) debugger on your XUL application using the Remote Debugging facility. This blog post could be useful as a HOWTO, because I was lucky enough to first attempt this on the 3rd of July 2015. You see, had I tried this for the first time today, I would have failed, because things seem broken in newer versions of xulrunner (and Firefox); this is true for the project I work on, at least. The very fact that I struggled with setting this up again today was my motivation to dig into why it wasn't working, and it made me think this might be useful to others.

I know everything in this blog post works on both CentOS 6.6 and Ubuntu 15.04, and these steps (except for the xulrunner download) should be platform-independent.

First get a slightly older xulrunner

You need a reasonably new xulrunner in order for Remote Debugging to work. I downloaded xulrunner version 38 at the time from The Mozilla Project Page (xulrunner-38.0.5.en-US.linux-x86_64.tar should be on their FTP somewhere, but you can also use this local copy hosted with this blog). I think we should cherish that version, because that one works.

The newest version is 41, which is also the last, because since then they have been integrating it into Mozilla Firefox. I tried version 41, and also grabbed a recent Thunderbird and Firefox, and all steps work, except that when you arrive at the "Connect Dialog", the clickable Main Process hyperlink (as shown in the image) is simply not there for you to click on.

Enable a debug listener in the code

In your application you need to start the debug listener, probably at the top of your main.js; include the following lines:

Components.utils.import('resource://gre/modules/devtools/dbg-server.jsm');
if (!DebuggerServer.initialized) {
  DebuggerServer.init();
  // Don't specify a window type parameter below if "navigator:browser"
  // is suitable for your app.
  DebuggerServer.addBrowserActors("myXULRunnerAppWindowType");
}
var listener = DebuggerServer.createListener();
listener.portOrPath = '6000';
listener.open();

Also enable remote debugging in the preferences (probably defaults/preferences/prefs.js):

pref("devtools.debugger.remote-enabled", true);

If you forget to change this last preference you will get the following error.

JavaScript error: resource://gre/modules/commonjs/toolkit/loader.js -> resource://gre/modules/devtools/server/main.js, line 584: Error: Can't create listener, remote debugging disabled

Start the application with this xulrunner

Extract the xulrunner runtime somewhere, e.g. /projects/xulrunner, and launch your program from its directory like this:

shell$> /projects/xulrunner/xulrunner application.ini

Attach debugger from Mozilla Firefox

Open a fairly recent Firefox browser and open the remote debugger which is available via "Tools ⏩ Web Developer ⏩ Connect...".

If the above "Connect.." option is not available, you have to enable the same preference inside Firefox in the "about:config" page. Search for remote-enabled.

Then connect to localhost port 6000.

Your program will present you with a dialog to accept the incoming connection from the debugger.

After accepting you can click to attach to "Main Process" (your program).

You should be presented with a debugger that will automatically break when it encounters the debugger keyword. You can also set breakpoints inside.

This can look similar to the following image, where a call stack is shown, and you have your usual ways to inspect variables and perform step-debugging with F10, F11, Shift+F11.

I am convinced it should also be possible to have the debugger's console handle JavaScript, so that you get a working REPL there (for inspecting variables), but I didn't find out how this can be achieved. Using the Watch (and Auto) expressions you can already inspect everything, though.

Just beware that once you attach to the process, your program can freeze up for a while as the debugger loads all the JavaScript files.

September 13 2015

Today I published my first Android (Wear) app! The idea behind this clock is that it uses concentric circles to show the time, and doesn't use analog clock hands or numeric time notation. This is something I have had on a bigger LCD screen at home for a while now, and now that Android Wear has been around for a while, I wanted to implement this for Android.

Some example visualizations

There is more theory behind the visualization, more on that on the website: http://circlix.click.

Android Watch Face


WebGL from the Website

You need WebGL support in your browser in order to see the following live clock.

Some comments on Android Wear development

Android Wear is relatively new, and I never read any book on the Android framework. Luckily I had some Java experience. Overall I am impressed by the design of the framework, although it also confused the hell out of me on various occasions.

Some stuff I needed to realize or discover during development:

  • (Very basic:) an Activity only runs when it's the current activity.
  • If you need stuff running for longer than an Activity, you need Services.
  • In Java you don't have RAII like in C++/PHP. If you have handlers for threads etc. you should stop them in some onDestroy() method.
  • Packaging, i.e. creating the APK for use in f.i. the Play Store, was counter-intuitive, at least for me. Follow the example project provided by Google closely with respect to the Gradle files. I had a perfectly good working APK that came out of Android Studio; it worked whenever I sent it to others, but it was not accepted by the Play Store.
  • There is already OpenGL support for Watch Faces. You need to extend Gles2WatchFaceService.
September 2 2015

I use CLion in this blog post, but it should be the same for any of the other JetBrains IDEs (PyCharm, PhpStorm, IntelliJ, etc.).

It took me a while to get a setup that works reasonably well for me at work, for what I expect is not a very uncommon situation. That's why I'm sharing it in a blog post.

The project I'm working on is quite big: ten years under development, a large codebase, and a complex build process. The debug build results in a 1.2 GiB executable, and the intermediate files generated by the compiler/linker are many and big. During a build a lot of files are removed/(re)created/generated, so in general a lot of I/O happens.

Our build machines are extremely powerful, so because of the build times it doesn't make sense to work on a local machine. That's why compiling happens on remote machines. I have worked remotely at a lot of companies, and usually I would simply use vim plus a lot of plugins. However, nowadays I'm accustomed to the power IDEs can provide, primarily navigation-wise (jumping to classes and files, finding usages, etc.), and I simply don't want to work without a proper IDE.

This is my setup

I use an NFS mount (sshfs would suffice as well), mounting from the remote machine to the local one, not the other way around, or compiling will be extremely slow. In my opinion, using file synchronization in this kind of setup is too error-prone and difficult to get right.

As a side note: I've seen synchronization work moderately well within a PHP project, but so far not in a C++ project, where the intermediate files, build files, and libraries are large and scattered throughout the project folder.

In my previous blog post we fixed fsnotifier, as in the previous image, but this also causes a new problem.

Lots of I/O is slow over a network mount

During compilation I noticed my IDE would hang; the only cause could be that it was somehow flooded by the enormous number of input lines it now received from fsnotifier. Perhaps the IDE wouldn't hang when working with the project files on a local disk, because simple I/O (even just checking file stats) has no network overhead there.

Solution: ignore as much (irrelevant) I/O as possible

Here I made the fsnotifier script--which at first was just a simple proxy (calling the real fsnotifier via ssh)--more intelligent. It now filters out intermediate files generated by the compiler (.o, .d, and some other patterns).

function custom_filter
{
    typeset -n return_val=$1
    typeset cmd=$2  # i.e., DELETE/CREATE/CHANGE/...
    typeset file=$3 # i.e., /full/path/to/file

    # Ignore some files that are not interesting to my IDE
    if [[ $file =~ (cmd|mm)\.log$ ]]    || \
       [[ $file =~ deps.*\.d$ ]]        || \
       [[ $file =~ \.o$ ]]              || \
       [[ $file =~ \.o\. ]]             || \
       [[ $file =~ \.html$ ]]           || \
       [[ $file =~ core\.*([0-9])$ ]];
    then
        return_val=false
        return
    fi

    return_val=true
    return
}

Download all source code from GitHub: https://github.com/rayburgemeestre/fsnotifier-remote/.

Alternative solutions

The fsnotifier script outputs its process id to /tmp/fsnotifier.pid and hooks two signals, so you can enable/disable it with a signal. Disabling simply pauses the output of all updates from the real fsnotifier (which is invoked via ssh).

kill -SIGINT $(cat /tmp/fsnotifier.pid) - pause all activity
kill -SIGHUP $(cat /tmp/fsnotifier.pid) - continue all activity

Another extension you may find useful: make the build script touch a file like /path/to/project/DISABLE_FSNOTIFIER and have the fsnotifier script pause itself (or behave differently) during the build, until it sees, for example, an ENABLE_FSNOTIFIER file.

Simply disabling fsnotifier altogether doesn't fix the problem: CLion would keep nagging occasionally about conflicts with files that changed both on disk and in memory. And when auto-generated files are re-generated by the build, I want my IDE to reflect them immediately.

Fine-tuning your filter

The filter is just a bash/ksh function, so you can easily extend it with patterns appropriate to your project. The fun thing is that you can "killall -9 fsnotifier" and JetBrains will simply restart it, so there is no need to restart the IDE (and, with that, have it re-index your project). Debug the filters by tailing /tmp/fsnotifier-included.log and /tmp/fsnotifier-filtered.log.

Update: 13th October 2016

Nowadays I no longer need to filter out *.o files etc. to keep the IDE responsive. The network improved (and perhaps something improved in newer CLion versions as well). Another change I did make to the script: based on the ROOTS that get registered (for monitoring the project path), it decides whether or not to use fsnotifier over ssh (otherwise, for local projects it would try to log in via ssh, find nothing, and the IDE would hang at that point).

https://github.com/rayburgemeestre/fsnotifier-remote/commit/414e2e1f937a59a9ab11eede6b999c8170e30af0
