Saturday, April 07, 2018

Google Code Jam 2018 Qualification Round

That was certainly an interesting codejam. One that was marked by the question "Do I even want to do this?".

The new system

Turns out Codejam is using a different system and rules than before. I don't really like it. Click that link if you want to see my live reaction after finding out at 7:00 PM last night. My plan for the weekend was to possibly spend the whole Friday night and Saturday working on the code jam and prepare a cool blog post about it, with explanations and all, for old time's sake. That didn't work out.

I already said too much in those tweets, but what I want to be clear about is: the Codejam Qualification was not just a contest. It was an event. It was always really great to see what happened. It always had the most massive number of participants ever. A big part of this was the idea that some coders might try bizarre things. Solve the contest in ... Assembler? Or that year I used a programming language I designed myself. Looking at the scoreboard and filtering it so it only shows you people you know (this option is gone). And sometimes there would be an epic problem that keeps you busy the whole day.

But the worst of all is the fact that (one of) the biggest programming competitions just got way less accessible than it was. Telling people that they could use any programming language was great, but we lost that.

Do I even want to do this?

So I am not even sure I want to participate in the contest. I know how to solve problem A, but I can't even find a vital bit of info about the new rules: what is the CPU of the servers where our code runs? This sounds suspiciously like one of those badly made ACM contests :/ Do I even want to participate? I no longer have as much free time as before. I barely have time on weekends to rest. So a Saturday is not something I can just spend on some contest if it doesn't sound at least a bit fun. And the prizes are as low as ever. I decide that at least for Friday I am not going to bother.

The last few hours - Problem B

So okay, I am a bit curious about how the codejam is going, and I accidentally read problem B and figure out that it is an extremely easy problem.

There is a badly made sorting algorithm. Taking an array `V`, it finds two indexes `i` and `(i+2)` such that `V[i] > V[i+2]`. In that case, it takes that part of the array, `(V[i],V[i+1],V[i+2])`, and reverses it into `(V[i+2],V[i+1],V[i])`. It repeats this until it can't find any more such `i` and `(i+2)` pairs. This algorithm can't always sort the array correctly, and given an input `V` we are supposed to predict whether the algorithm is going to fail. If it is going to fail, we have to find the first index that is not sorted correctly. For B-large the number of elements can be up to 100000.
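For B-small you could just simulate the algorithm as described and check the result. A minimal sketch of that simulation (the function name is mine):

#include <vector>
#include <algorithm>
using namespace std;

// The broken sort: repeatedly find an i with V[i] > V[i+2] and reverse
// the triple (V[i],V[i+1],V[i+2]) -- which is the same as swapping the
// two ends. Repeats passes until nothing changes: O(n^2), so this is
// only viable for the small version.
vector<int> troubleSort(vector<int> V) {
    bool changed = true;
    while (changed) {
        changed = false;
        for (int i = 0; i + 2 < (int)V.size(); i++) {
            if (V[i] > V[i+2]) {
                swap(V[i], V[i+2]);
                changed = true;
            }
        }
    }
    return V;
}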

The algorithm is `O(n^2)`, so for the large version of the problem we can't just simulate it. The key to the problem is to notice that after reversing `(V[i],V[i+1],V[i+2])` into `(V[i+2],V[i+1],V[i])`, the position of `V[i+1]` does not change. So the operation is the same as swapping `V[i]` and `V[i+2]`, which are the elements we compared. This means that there are actually two independent partitions of the array: the elements with even indexes will never interact with the elements with odd indexes. So imagine the array `[6,5,4,3,2,1]`: it has two independent sub-arrays, `[6,..,4,..,2,..]` and `[5,..,3,..,1]`, and in these sub-problems all you can do is swap consecutive elements. This means that the two sub-arrays are being sorted with normal bubble sort. Eventually the bubble sorts will finish running and we will get `[2,..,4,..,6,..]` and `[1,..,3,..,5]`, and when we put the two sub-arrays back into the big array we get `[2,1,4,3,6,5]`, which is clearly not sorted. So to solve this problem, we just split the array into two sub-arrays, one with the even indexes and one with the odd indexes, sort the two arrays, and put them back together. If the result is sorted, then all is fine; else we return the first index where it breaks.

#include <vector>
#include <string>
#include <algorithm>
using namespace std;

// get the sub-array with the given parity (0 or 1)
vector<int> getv(const vector<int> &V, int parity) {
    vector<int> r;
    for (int i = parity; i < (int)V.size(); i += 2) {
        r.push_back( V[i] );
    }
    return r;
}

// mix two sub-arrays back into one large one:
vector<int> mix(const vector<int> &v1, const vector<int> &v2) {
    vector<int> V;
    for (int i = 0; i < (int)(v1.size() + v2.size()); i++) {
        V.push_back( (i%2 == 0)? v1[i/2] : v2[i/2] );
    }
    return V;
}

string solve(const vector<int> &V) {
    // split
    vector<int> v1 = getv(V,0);
    vector<int> v2 = getv(V,1);
    // sort
    sort(v1.begin(), v1.end());
    sort(v2.begin(), v2.end());
    // merge again
    vector<int> v3 = mix(v1,v2);
    // check
    for (int i = 0; i+1 < (int)v3.size(); i++) {
        if (v3[i] > v3[i+1]) {
            return to_string(i);
        }
    }
    return "OK";
}
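For completeness, a small main that wires this into the usual codejam format (I'm writing the input layout from memory: `T` cases, each with `N` followed by the `N` values, output as `Case #t: y`):

#include <cstdio>
#include <vector>
#include <string>
using namespace std;

// solve() as defined above
string solve(const vector<int> &V);

int main() {
    int T;
    scanf("%d", &T);
    for (int t = 1; t <= T; t++) {
        int n;
        scanf("%d", &n);
        vector<int> V(n);
        for (int i = 0; i < n; i++) scanf("%d", &V[i]);
        printf("Case #%d: %s\n", t, solve(V).c_str());
    }
    return 0;
}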

Problem C is interactive

Okay ... so apparently I am participating in this. I am too bored to bother with A, so I open C. This is an interactive problem. You are given a 1000 x 1000 grid and want to paint at least `A` cells (in the easy version, `A` is 20 and in the hard version, 200). The objective is that the bounding box of all the painted cells should not contain any unpainted cell. The catch is that when you pick a single cell to paint, the system is actually going to pick one of the 9 cells in the 3 x 3 neighborhood of the cell you picked, each with probability 1/9, and paint that one. Even if a cell is already painted, if the system picks it, it will paint it again and waste a turn. You have only 1000 turns.

So okay, this is an interesting thing that we wouldn't have been able to have in the old system (or maybe we could, by making the system expose a public API that interacts with your program?). At least it is a bit interesting. The solution is to come up with a strategy and show that the probability that it needs more than 1000 turns is very low.

My solution works by making it into a linear problem. Instead of worrying about 2D rectangles, you go in a straight line from bottom to top. Imagine we keep giving the system the order to paint at point `(x,y)`. Eventually, all 9 points around it will be painted. Once this happens, you can start returning `(x,y+3)` and paint another `3 x 3` square. And keep repeating, until you've painted enough squares. The result will always be a perfect rectangle.

The only catch is to calculate that this will tend to require fewer than 1000 steps. Due to the random nature of the problem, there is always a probability that for some reason it could pick the same point 1000 times, but that is very improbable.

So let's calculate the expected number of turns we need before filling a complete 3 x 3 square. This is the coupon collector problem with 9 equally likely cells: `9 * (1 + 1/2 + ... + 1/9)`, which is between 25 and 26. To paint at least 20 cells, you need 3 of these `3 x 3` squares in total, so the expected number of turns is around 77; even for `A = 200` you need 23 squares, around 586 expected turns, which is pretty reasonable. Of course, the expected value is not the same as the probability it will work in less than 1000 steps. But a) it is easier to calculate and b) the expected value gives us a good estimate: it's the average number of turns we will need, and because everything is so random, it would be a bit crazy to expect the number of turns to get much larger than that average. Definitely nowhere near 1000...

But that's not the solution I tried. Mine is a bit smarter: there's no need to wait for the whole 3 x 3 square, you only need to wait for its bottom row of 3 cells. Once that row is full, we can start trying a square higher. So basically, we start with `(x,y)`; once the row of 3 cells below `(x,y)` is full, we start trying `(x,y+1)`, and so on. We only need to figure out a good ending condition: if filling the current `3 x 3` square surrounding the current `(x,y)` is enough to complete the requirement for `A`, then we can stop.
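In code, the interaction loop would look something like this. Take it as a sketch: I'm writing the judge protocol from memory (you print a cell, the judge replies with the cell it actually painted, `0 0` once you have won, `-1 -1` on error), and the starting coordinates are arbitrary:

#include <cstdio>
#include <set>
#include <utility>
using namespace std;

int main() {
    int T;
    scanf("%d", &T);
    while (T--) {
        int A;
        scanf("%d", &A);
        set<pair<int,int>> painted;
        int x = 500, y = 500;   // anywhere far from the borders works
        int startY = y;         // the rectangle grows upward from row startY-1
        while (true) {
            printf("%d %d\n", x, y);
            fflush(stdout);     // interactive problems need explicit flushing
            int px, py;
            scanf("%d %d", &px, &py);
            if (px == 0 && py == 0) break;        // judge says we are done
            if (px == -1 && py == -1) return 1;   // protocol error
            painted.insert( make_pair(px, py) );
            // once the row below (x,y) is full, aim one row higher, unless
            // finishing the current 3x3 square already gives us >= A cells
            bool rowFull = painted.count(make_pair(x-1, y-1))
                        && painted.count(make_pair(x,   y-1))
                        && painted.count(make_pair(x+1, y-1));
            if (rowFull && 3 * (y - startY + 3) < A) y++;
        }
    }
    return 0;
}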

My calculations told me that the expected number of turns would be around 106 for `A = 20` and more than 1200 for `A = 200`. So it would pass the easy but not the hard version. But in fact my calculations were overly pessimistic and it passes the hard version of the problem as well.
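If you don't trust the hand calculation, a quick Monte Carlo estimate of the strategy's turn count is easy to write. This is an after-the-fact check, not my contest code, and it simplifies the goal to fully painting a 3-wide rectangle with at least `A` cells:

#include <cstdio>
#include <cstdlib>
#include <set>
#include <utility>
using namespace std;

// One run of the strategy: aim at the center of a 3x3 window, move the
// window up once the row below its center is fully painted, until a
// 3 x rows rectangle (rows 0..rows-1, columns 0..2) is complete.
int simulateOnce(int rows) {
    set<pair<int,int>> painted;
    int y = 1, turns = 0;                          // window covers rows y-1..y+1
    while ((int)painted.size() < 3 * rows) {
        turns++;
        int dx = rand() % 3 - 1, dy = rand() % 3 - 1;  // the random 1/9 pick
        painted.insert(make_pair(1 + dx, y + dy));
        while (y < rows - 2 && painted.count(make_pair(0, y-1))
                            && painted.count(make_pair(1, y-1))
                            && painted.count(make_pair(2, y-1))) {
            y++;    // the row below is full, move the window up
        }
    }
    return turns;
}

int main() {
    srand(12345);
    int as[2] = {20, 200};
    for (int k = 0; k < 2; k++) {
        int rows = (as[k] + 2) / 3;   // smallest 3-wide rectangle with >= A cells
        long long total = 0;
        const int RUNS = 10000;
        for (int r = 0; r < RUNS; r++) total += simulateOnce(rows);
        printf("A=%d: ~%.1f turns on average\n", as[k], (double)total / RUNS);
    }
    return 0;
}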

You may be wondering how the hell I calculated all of this. Well, here is an example: imagine that we keep picking `(x,y)` until the cells `(x-1,y-1)`, `(x,y-1)` and `(x+1,y-1)` (the bottom row) are all painted. What is the expected number of steps we need to perform?

There are 9 cells in total that can be painted by our `(x,y)` move, but our objective is to paint 3 specific ones. The function `f(t)` will tell us the expected number of turns before we paint `t` specific cells (the other ones don't matter).

  • For `f(0)` : there are 0 cells that matter to us, so we don't need to paint anything; all is ready, and `f(0) = 0`.
  • For `f(1)` : we need at least one turn. There are two possibilities:
    • With probability `1/9`, we painted the correct cell and there are now zero more cells to paint. The expected number of steps is `1 + f(0)`.
    • With probability `8/9`, we painted a cell that doesn't matter, so there is still one special cell. The expected number of steps is `1 + f(1)`.
    • The total is: `f(1) = (1/9)(1 + f(0)) + (8/9)(1 + f(1))`. Note the recursion, but the recursion is not a problem; just consider `f(1)` a variable, like `z`:
    • Solve the equation `z = (1/9)(1 + f(0)) + (8/9)(1 + z)`; the result, `z = 9`, gives us `f(1)`.
  • Then we can use that result to calculate `f(2)` (it's the same logic), and so on. By filling in the values of `f(t)` (see the short snippet below), I could find that it takes around 16 turns before we fill the bottom row and 25-something steps before filling the whole square. The reason this estimate is too pessimistic is that it doesn't consider that after filling the bottom row, there will already be painted cells in the new `(x,y)` neighborhood, so future iterations of `y` will tend to have less work to do.
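Computing the whole `f(t)` table takes a couple of lines once you notice that each of these equations solves to `f(t) = 9/t + f(t-1)`:

#include <cstdio>

int main() {
    // f(t): expected turns until t specific cells of the 9 are painted.
    // From f(t) = 1 + (t/9) f(t-1) + ((9-t)/9) f(t)  =>  f(t) = 9/t + f(t-1).
    double f = 0.0;                         // f(0) = 0
    for (int t = 1; t <= 9; t++) {
        f += 9.0 / t;
        printf("f(%d) = %.4f\n", t, f);     // f(3) = 16.5, f(9) = 25.4607...
    }
    return 0;
}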

Turns out the problem came with a tester tool to interact with your program locally. But I didn't notice until after solving the whole problem and testing manually multiple times :/

Problems D and A

A is very easy but greedy, and I am not feeling like explaining it right now. D was all about geometry, and I was honestly feeling a bit lazy. Do I really want to spend all that much time solving, and especially debugging, a geometry problem? Not really. So I skipped it.

Turns out Stack Overflow has a nice explanation: https://twitter.com/fakevexorian/status/982800480886775808 , and the question was posted during the contest.... So why even bother? If I went through the trouble of solving this, no one could tell whether anyone solved the problem on their own or because they found the Stack Overflow thread.

That's it

Thursday, April 05, 2018

I just wanted to ask, what's up with the 2010-quality prizes in algorithm competitions?

So hey, yes, I am going to be participating in "Google Code Jam 2018". I've been thinking for a long time about making a huge blog post explaining my situation and why I stopped participating in algorithm contests and (more painfully) stopped making editorials at TopCoder. (Just to be clear, it's not because I don't want to participate.) I am unsure if I am going to make that blog post, but one of the many tangents I wanted to include in it is a rant about the prize pools we've had in these contests for years now.
So as a bit of context, I used to be very active in these competitions. I started in 2008, I think. Back then the Google Code Jam was basically a Google-sponsored contest at Topcoder. And the prize pool in those times was just ridiculous in comparison to what we have now. First of all, there were both regional and global contests. That's how I started: I was a finalist in the Latin American regional. And I really think that if not for the encouragement I received from qualifying to that regional, I probably wouldn't have had such a long and extended "career" in these contests. Why? Because it would not have been worth it.
Another example: the TCO was also a thing back then. We had qualification rounds, and there were t-shirts for the 3000 coders who qualified. The later rounds had other prizes. Probably the best thing you could get besides the finals' money prizes was a Topcoder-branded USB drive, but it was still pretty good.
And the money prizes were also quite something. Every month there were 2 SRMs where you could earn money by taking the top positions in your room, and this included division 2 coders. And I am not talking about those 5 dollars you can get for registering in those Harvard experiment matches. A talented coder could get 40 USD per match on average.
But let's forget about the money. Even those other prizes, or the much higher chance of getting a t-shirt, were very important as encouragement.

Back to reality

The year is 2018, and although the Google Code Jam admins want us to believe it is a big deal that this is its 15th year, it doesn't feel like such a special year. For some reason we are still following the prize pool from 2010. There was an economic crisis in 2010 (ironically, caused in part by the irrational worship of algorithms as infallible), and this crisis brought extreme deflation in the budgets available for algorithm competitions. Topcoder eventually dropped their money SRMs (save for very rare cases). The number of finalists dropped to the incredibly small value it is now. And there are at most 1000 t-shirts in the code jam (for TCO I think it is 450?). Although all of this was understandable during the recession, it stayed that way, and it has been 8 years since. I guess that the powers that be are content with seeing the same 25 people in all the finals and not having to care about the remaining tens of thousands who participate in these contests. But maybe they should? You know, if there was more encouragement for those 10K coders, maybe more of them would be willing to commit the ridiculous amount of time it takes to become good at this stuff. Which would mean more people reaching finalist level, more competition there, and we would all improve for it (maybe).
And although the number of t-shirts hasn't increased, the competition has only gotten tougher. Problems are harder and there are far more coders participating than before. I really think 1000 t-shirts is too few, because t-shirts should exist as a way to encourage even beginners to participate and keep participating. I don't think there's much benefit in having so few t-shirt winners. If you must, make the top-1000 t-shirts a different version, so that people who really want to feel special can keep feeling special.
In the last two years I've been able to look at these contests from more of an outsider's perspective, and it is really amazing how much of a niche we are. The fact that there are so few incentives for complete newcomers to start working hard at these contests does not help.