Neither one nor Many

 
January 14 2012

DISCLAIMER: Okay, it probably still won't get through absolutely every firewall. There are a few posts on the internet about how SSH tunnels bypass "almost any firewall"; I believe this proxy will bypass a whole lot more firewalls than those, so I had to come up with something better than "almost any".

When is this useful?

ProxyTunnel is awesome as it allows you to tunnel SSH through, for example, port 443. And because SSH supports port forwarding, you can go from there to wherever you want. If I am correct, it requires that the proxy in question supports the CONNECT method.

Sometimes, however, proxies are more restricted than that: CONNECT may not be supported; connections are not allowed to stream (i.e., file downloads are first downloaded in full by the proxy server and scanned for viruses, and executables and other file types may be blocked); base64 content may actually be decoded to check whether it contains anything that isn't allowed; the proxy may even inspect the contents of zip files and enforce a maximum download size. In that case ProxyTunnel won't suffice.

If you're unfortunate enough to be behind such a firewall, no worries, because now there is a way to tunnel through it! The only requirement is that you can receive plain text from a webpage that you own or have access to, and POST data back to it. If you can't do that, I suggest you look for another job, because this is REALLY important!!!!1 (Not really, but then this proxy solution won't work.) Do not expect it to be very performant for bandwidth-heavy stuff, by the way.

How it works in short

It works with three PHP scripts. Just like with ProxyTunnel, you run one of them on your local computer: localclient.php. This script binds to a local port, and you point your program at that port. Each local client is configured to establish a connection with some destination host and port. The cool part is that it does so by simply reading plain old HTML from a URL and POSTing some form data back to it. Well, it only appears to be plain old HTML: it is really the data, prefixed with a fake HTML tag, followed by the connection identifier and the DES-encrypted payload (converted to base64).
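Just to give an idea of the local end, here is a minimal sketch (the port number is an arbitrary example, and the real localclient.php does considerably more bookkeeping):

// Sketch only, not the actual localclient.php: bind a local port and accept
// the program that should be tunneled.
$listen = stream_socket_server('tcp://127.0.0.1:6667', $errno, $errstr);
if ($listen === false) {
    die("could not bind: $errstr ($errno)\n");
}
$local   = stream_socket_accept($listen, -1);  // e.g. your IRC client connects here
$session = md5(uniqid('', true));              // the connection identifier
// from here on: read from $local, POST it to proxy.php, GET the page,
// decrypt whatever came back for $session and write it to $local (see below)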

The curl proxy (as I call it, because I use the cURL extension in PHP) retrieves HTML pages like this:

Example: a packet with the data "PONG :leguin.freenode.net" is sent as the following HTML:

<PACKET>a5bc97ba2f6574612MNIoHM6FyG0VuU6BTF/Pv/UcVkSXM5AbiUrF4BDBB4Q=
|______||_______________||__________________________________________|
       |                |                                           `=BASE64 OF ENCRYPTED DATA
       |                `=Session id / socket id
       `=Fake HTML tag

POSTing data back uses a string with the same syntax, only prefixed with "POST_DATA=".
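In PHP, the client side of this boils down to something like the following sketch (encrypt_fn/decrypt_fn are the functions shown further down; the URL, $session, $data and $local are placeholders, and how proxy.php selects the right session's data is simplified away here):

$url = 'http://your-server/proxy.php';

// outgoing: wrap the data in the fake-HTML packet and POST it
$packet = '<PACKET>' . $session . encrypt_fn($data);
$ch = curl_init($url);
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, 'POST_DATA=' . urlencode($packet));
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_exec($ch);
curl_close($ch);

// incoming: fetch the page and unwrap any packet addressed to this session
$html = file_get_contents($url);
if (preg_match('/<PACKET>' . $session . '(\S+)/', $html, $m)) {
    fwrite($local, decrypt_fn($m[1]));
}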

In order for this to work, a second script has to be reachable on the web; you must be able to access it, and the machine it runs on has to be able to make the connections you want. For example: http://your-server/proxy.php (you could rename it to something less suspicious; there are some smart things you can do here, but I'll leave that to your imagination). All proxy.php does is write and read files in a directory, nothing more.
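Something along these lines (a sketch only; the directory and file names are made up for illustration, and the real script also handles the locking mentioned below):

// Sketch of proxy.php: it only ever touches files in one directory.
$dir = '/tmp/curlproxy';                               // made-up location

if (isset($_POST['POST_DATA'])) {
    // client -> destination: store the packet for server.php to pick up
    file_put_contents($dir . '/to_server.txt', $_POST['POST_DATA'] . "\n", FILE_APPEND);
}

// destination -> client: emit whatever server.php left behind as "HTML"
$pending = $dir . '/to_client.txt';
if (is_file($pending)) {
    readfile($pending);     // lines already look like <PACKET>sessionid...
    unlink($pending);
}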

Then a shell script has to be started in the background, with access to the same directory. This script scans the directory for instructions, specifically starting a server.php process for each new connection. The actual connection is made in the server.php script. All that script does is read packets from the same directory and send them to its socket; any data read from the destination is written back to the directory, which proxy.php will eventually send back to the client.
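A sketch of what server.php's main loop amounts to (again with made-up file names; $desthost, $destport and $session would come from the instruction the watcher script picked up):

// Sketch of server.php: owns the real connection to the destination host.
$fp = fsockopen($desthost, $destport, $errno, $errstr, 30);
stream_set_blocking($fp, false);
$dir = '/tmp/curlproxy';

while (!feof($fp)) {
    // packets dropped off by proxy.php -> decrypt and write to the socket
    $in = $dir . '/to_server.txt';
    if (is_file($in)) {
        foreach (file($in, FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES) as $line) {
            // strip the fake "<PACKET>" tag (8 chars) and the session id
            fwrite($fp, decrypt_fn(substr($line, 8 + strlen($session))));
        }
        unlink($in);
    }
    // data from the destination -> encrypt and leave behind for proxy.php
    $out = fread($fp, 8192);
    if ($out !== '' && $out !== false) {
        file_put_contents($dir . '/to_client.txt',
            '<PACKET>' . $session . encrypt_fn($out) . "\n", FILE_APPEND);
    }
    usleep(100000);    // poll interval
}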

Graphical explanation

You should follow the arrows in the same order as presented in the legend.

Design decisions

When I had the idea to make it, I didn't feel like spending a lot of time on it, so I hacked it together in a few hours. Then I tested it, it worked, and it got me excited enough to refactor it and make a blog post out of it.

  • After encrypting the packets I base64-encode them, which increases the size of the messages but makes them look more HTML-like. If I wanted to send the encrypted data raw, I'd have to do some more exotic stuff, maybe disguise it as a file upload, because AFAIK a plain old POST does not support binary data.
  • I use base64 and not urlencode on the encrypted data, because when I tested it, urlencode produced even more overhead. Of course the base64 string is still urlencoded before the POST, but only a few characters are affected.
  • I don't use a socket for communicating between proxy.php and server.php, but files and some lock files, simply because I preferred them. A database would be nicer, but implies more configuration hassle.

Encryption used

// Derive an 8-byte binary DES key from the ASCII passphrase in $crypt_key:
// md5() -> first 16 hex characters -> packed into 8 raw bytes.
define('CRYPT_KEY', pack('H*', substr(md5($crypt_key), 0, 16)));

function encrypt_fn($str)
{
    // pad up to the DES block size; each pad byte encodes the pad length
    $block = mcrypt_get_block_size('des', 'ecb');
    $pad = $block - (strlen($str) % $block);
    $str .= str_repeat(chr($pad), $pad);

    return base64_encode(mcrypt_encrypt(MCRYPT_DES, CRYPT_KEY, $str, MCRYPT_MODE_ECB));
}

function decrypt_fn($str)
{
    $str = mcrypt_decrypt(MCRYPT_DES, CRYPT_KEY, base64_decode($str), MCRYPT_MODE_ECB);

    // the last byte tells how many padding bytes to strip off again
    $len = strlen($str);
    $pad = ord($str[$len - 1]);

    return substr($str, 0, $len - $pad);
}

If you prefer something else, simply re-implement these functions; you'll have to copy them to all three scripts (sorry, I wanted each script to be fully self-contained).

By the way, I thought my "ASCII key → md5 → 16 hexadecimal display chars → actual binary" trick was a pretty cool find. Did you notice it?
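Spelled out step by step, with a made-up passphrase:

$crypt_key = 'correct horse';                  // any ASCII passphrase (example only)
$hex       = substr(md5($crypt_key), 0, 16);   // first 16 hex characters of the md5
define('CRYPT_KEY', pack('H*', $hex));         // packed into 8 raw bytes: a DES-sized key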

Demonstration

Note that I first demo it with the server running on an Amazon AMI image. Appended to the video is a short demo where I run the server on my local Windows PC (just to show how it'd work on Windows). This second part starts when I open my browser on the Google page.

Remote Desktop actually works pretty well through the curl proxy, by the way. Establishing the connection is a little slow, as with WinSCP, but once connected it performs quite well. I couldn't demo it because I don't have a machine to connect to from home.

Sourcecode & downloads

I put it here on bitbucket: https://bitbucket.org/rayburgemeestre/curlproxy. I placed it under the MPL 2.0 license, which seemed appropriate. Basically this means that when you distribute it with your own software in some way, you have to make your changes/improvements/bugfixes to the curlproxy files available, so the original repository will also benefit, while you're otherwise pretty much unrestricted.

Blog Comments (0)
 
November 25 2011

Work in progress...

It will be a lot easier to compile. No longer dependent on the json lib. A single .cpp file (as the code is quite small).

No makefile, just a g++ goto.cpp -o goto -lncurses

Get the source code here

P.S. I added colours.

Updates

24-feb-2013: Now listens for ncurses KEY_RESIZE event so changing window size will redraw.

C++ Comments (0)
 
November 20 2011

Bash wrapper script

With Apache (2.2) you can get a generic "Internal Server Error" message when the CGI sends the wrong headers. There is probably a setting for this in Apache as well, but I always create a bash wrapper script. For example someapp.cgi:

#!/bin/bash
printf "Content-type: text/html\n\n"
/path/to/actual_appl

This immediately makes the output visible, and you can comment out the printf statement once the headers are fixed. This trick only makes sense if you don't have quick access to a debugger or a core dump.

Running application in chroot

AFAIK there are plugins for Apache for running CGI applications in a chroot. I didn't experiment with these, as I simply use my (probably lame) bash wrapper here as well:

#!/bin/bash
sudo -E /usr/bin/chroot /usr/local/src/some_jail /usr/bin/some_appl 2>&1

The -E flag means "preserve environment". To allow this you have to configure sudoers properly (visudo). Something like this:

wwwrun ALL=(ALL) SETENV: ALL, NOPASSWD : /usr/bin/chroot
C++ Comments (0)
 
November 11 2011

This is no rocket science, but I thought this was a really cool solution to the problem.

First I created a helper function Xprintf to interface with an existing C API that works with (non-const) char arrays. Hence its char * return value.

char *Xprintf(const char *format, ...);

// This function works in the following situations

foo1(Xprintf("Hello world: %d", 1001)); // void foo1(char *);
foo2(Xprintf("Hello world: %d", 1001)); // void foo2(const char *);
foo3(Xprintf("Hello world: %d", 1001)); // void foo3(const string);
foo4(Xprintf("Hello world: %d", 1001)); // void foo4(const string &);
foo5(Xprintf("Hello world: %d", 1001),
     Xprintf("...", ...));              // void foo5(char *, char *);

Xprintf cannot use just one buffer because the case of 'foo5' would fail (it would get the same pointer twice).

I needed a different return value, like std::string, so that copies could be returned which clean themselves up as soon as they go out of scope. But std::string does not provide an implicit cast to const char *, only explicit casting through .c_str(). The call to foo1 would become foo1(const_cast<char *>(Xprintf("").c_str())), which is kind of ugly!

The following fixes it, creating a tmp_str class that extends std::string and simply provides the implicit cast:

class tmp_str : public std::string
{
public:
    tmp_str(const char *str)
        : std::string(str) {}

    // g++ is fine with adding this one, xlC isn't
    //operator const char *() const { return c_str(); }

    operator char *() const { return const_cast<char *>(c_str()); }
};

tmp_str cHelperCharArray::Xprintf(const char *format, ...)
{
    char buffer[512] = {0x00};

    va_list args;
    va_start(args, format);
    // vsnprintf instead of vsprintf, so an overly long message can't overflow the buffer
    vsnprintf(buffer, sizeof(buffer), format, args);
    va_end(args);

    return tmp_str(buffer);
}

A note on why tmp_str is-a std::string rather than implemented-in-terms-of: the call to foo4 would otherwise fail, as it would not accept tmp_str as a reference to string (A parameter of type "const std::basic_string<char, std::char_traits<char>, std::allocator<char> > &" cannot be initialized with an rvalue of type "tmp_str".).

g++ accepts all these foo* functions, but IIRC xlC doesn't like foo2. In that case I had to cast to const. Adding the const char * operator overload would make some casts for that compiler ambiguous.

C++ Comments (0)
 
October 27 2011

mplayer can easily be instructed to render on a custom window with the -wid (window handle) parameter.

// On windows
long targetWindowId = reinterpret_cast<long>(canvas->GetHWND());

// On Linux
long targetWindowId = GDK_WINDOW_XWINDOW(canvas->GetHandle()->window);

Now that I got it to render on my canvas, I cannot render on top of it without flickering, because I cannot do double buffering (I cannot control when mplayer renders frames on the window). That's why I add a second window that reads the first window into a bitmap; I can then do whatever I want with that bitmap and display it. This meant that I could no longer use my preferred video renderer on Windows, -vo direct3d, because somehow that setting doesn't actually draw on the window, just in the same region: when reading the first window I'd get an empty bitmap and not the video. I ended up using -vo directx:noaccel in order to properly read it.

Fix overlap problem

This posed another problem: when hovering the second window on top of the first, it interferes with the video, as it renders itself in window1 first. I only encountered this on my Windows PC.

I decided to ignore this problem and instead find a way to hide the first window so that it wouldn't interfere. I tried minimizing it, Hide(), moving it outside the screen, etc., but mplayer would not render the video in those cases. I then tried making the window 100% transparent, and that worked. It also fixed my overlap problem, as I could now overlap the windows without issues. Somehow making the window transparent forces the no-hardware-acceleration DirectX renderer to behave differently. Making the window 1% transparent also fixes the overlap problem.

Fix Linux support

On Linux I use the -vo x11 video output, and overlap wasn't a problem. The only annoying thing is that, in order to get the GTK window handle, you have to include a GTK header in C++, which requires adding a lot of include directories to your include path, because you need to cast the window handle to a GtkWidget instance and ask it for the xid.

Result

The code is available on bitbucket and works on Windows (tested on Windows Vista with the Aero theme) and Linux (openSUSE 11.4). Makefile and Visual Studio project files are included.

What I used this for..

All texts and images are rendered on top of the background video.

C++ Comments (1)
 
May 22 2011

Milestone 1

Milestone 2

StarcryPublic Comments (0)
 
April 30 2011

Putting source files in a separate directory


View youtube video
(don't forget to enable 720p and fullscreen)

  • How to put source and image files in subdirectories in DialogBlocks.
  • NOTE for git users: if you're going to do this with an existing git repository, my advice is to do the move through git as well ("git mv source dest"), so that git knows it was a file move, not a delete/create.

Separate implementation files for a few panels


View youtube video
(don't forget to enable 720p and fullscreen)

  • We put two panels (with their contents) into separate implementation files.
  • Shows some examples of what you can do with it (or can now do more easily than before).
  • Events and other functions can be nicely grouped into that implementation.
DialogBlocks Comments (0)
 
April 30 2011

Compiling with MinGW


View youtube video
(don't forget to enable 720p and fullscreen)

  • How to add configuration for MinGW

Compiling with Visual C++ compiler with project files


View youtube video
(don't forget to enable 720p and fullscreen)

  • Shows how to set the platform SDK.
  • Some specific errors you can get w/ DialogBlocks.
  • How to use the generated project file(s) for Visual Studio.
    • How they handle file changes
    • Why to use: autocomplete, debugger, etc.
DialogBlocks Comments (0)
 
April 17 2011

Reverse-engineering your working hours

If you have to fill in your time registration with start and end times, that can be tricky when you do it afterwards. Sometimes you still remember what you did; sometimes you look it up in (sent) e-mails, notes, meeting agendas, and so on.

If you have to reverse-engineer the times anyway, you can make it a bit easier on yourself by typing your activities into a simple list: specify how much time it should add up to in total, fill in and lock ('freeze') the times you do know, turn a few knobs to indicate which tasks were more and which were less work, and press F5 every now and then to see how the pie is divided.

Once you're satisfied, you can copy the result over from there. It has already helped me several times.

Download

The executable can be downloaded here.

Blog Comments (0)

Author:
Ray Burgemeestre
February 23rd, 1984

Topics:
C++, Linux, Webdev

Other interests:
Music, Art, Zen