Sunday, February 24, 2013

Topcoder open 2013, 1A editorial

It is ready to be seen in public:

This was a fun problem set. 250 was very easy but it was still interesting. The 500 points problem was there to differentiate (wo)men from children and the 1000 was a good counterbalance so that people that really know algorithms could have a backup to advance.

I cannot believe that I have spent so many years of my life without knowing that GIMP can create animated GIFs. This time I really felt that the best proof for the 500 was animated, so I had to learn how to do GIFs without proprietary software, and it turns out my old pal GIMP can!

Saturday, February 23, 2013

Topcoder Open 2013: Round 1A

As I begin to write this during the intermission phase, it appears that I did well and solved the three problems. But I am really not sure about the medium I coded. And dumb mistakes are always possible.

I forgot to mention that, since Tournament matches (unlike SRMs) are not practice, I am not going to use my suicidal strategy during these matches. But I think that if I advance to round 3, my only chance to advance further would be to solve the hard problem, so if I get to round 3, I will open ONLY the hard problem. Maybe I get lucky and advance (yeah, right).

250: The one with heights.

You are given a matrix of up to 50x50 heights, each from 0 to 9. You can increase or decrease each height any number of times. The objective is for the difference between the maximum and minimum heights to be at most 1 after the changes. Return the minimum number of steps (increases/decreases) needed.

Simple! Some height of the final matrix has to be the minimum height, and this height is from 0 to 9, so just try each of these 10 values for the minimum. Then each height has to end up between minimum and minimum+1, so it is easy to calculate the minimum cost to make each height fall in that interval. Add those costs and keep the best total.
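That idea can be sketched in a few lines of C++ (this is my own reconstruction with a made-up function name, not the actual submission):

```cpp
#include <vector>
#include <algorithm>
#include <climits>
using namespace std;

// Try every value m in [0, 9] as the final minimum height. Every cell
// must then end at height m or m+1; the cost for a cell of height h is
// its distance to the interval [m, m+1]. Return the best total.
int minSteps(const vector<vector<int>> &height) {
    int best = INT_MAX;
    for (int m = 0; m <= 9; m++) {
        int cost = 0;
        for (const vector<int> &row : height) {
            for (int h : row) {
                if (h < m) cost += m - h;                // raise h up to m
                else if (h > m + 1) cost += h - (m + 1); // lower h down to m+1
            }
        }
        best = min(best, cost);
    }
    return best;
}
```

Since the matrix is at most 50x50 and there are only 10 candidate minimums, this is at most 25000 cheap operations.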

500: The one with jumps.

You want to jump using leaps of width X, starting at point 0 and continuing up until D or any point greater than it. The catch is that there are many pits; for each pit i, the space between L[i] and R[i] is impassable (not inclusive, so L[i] and R[i] are fine to stand on). Return the minimum X.

The first thing to notice is that we can assume D = R[n-1] and everything will still work: past the last pit there are no more obstacles.

At first it seemed that as long as the last leap fell exactly on R[n-1] it would be fine, so I solved with that assumption and even passed the examples. I passed the examples long after spending a good amount of time debugging the solution, so I forgot that I was supposed to actually verify that the R[n-1] assumption was true. I submitted it and then my brain reacted... wait. It turns out the case D=6, L={0,4}, R={3,5} breaks it: it is not a good idea to land on position 5; the best result is 3.

But that challenge case made me notice the real condition: at some point a leap should land exactly on one of the values R[i]. It makes sense, but it is hard to prove. So I just tweaked the solution I already had to try all values of R[i] instead of only the last one. A timeout seemed like a risk, but it turned out fine.

Anyway, let us say you want a leap that lands on R[i]. You first have to verify that leaps of width R[i] do not run into any pit, then you can try R[i]/2, R[i]/3, and so on until R[i]/R[i], checking whether each of those widths also works. Remember the minimum over everything that works.

For some odd reason, at some point during debugging I decided to use fractions instead of doubles. I think it was overkill; the bug I was having was not related to precision.

And to check whether a leap of width X is possible, just try each pit: if there is an integer r such that r*X lies strictly between L[i] and R[i], then X is wrong.
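A sketch of that feasibility check (my reconstruction, not the actual submission; it keeps the width as an exact fraction num/den, in the spirit of the fractions mentioned above, so the whole test is integer-only):

```cpp
#include <vector>
using namespace std;
typedef long long ll;

// True if leaps of exact width num/den (both >= 1) never land strictly
// inside a pit. Landing points are r*num/den for integer r >= 1, and
// r*num/den lies in (L[i], R[i]) exactly when L[i]*den < r*num < R[i]*den.
bool leapOk(ll num, ll den, const vector<int> &L, const vector<int> &R) {
    for (size_t i = 0; i < L.size(); i++) {
        ll r = (ll)L[i] * den / num + 1; // smallest r with r*num > L[i]*den
        if (r * num < (ll)R[i] * den) return false; // lands inside pit i
    }
    return true;
}
```

Only the first landing point past L[i] needs to be inspected for each pit, so the check is linear in the number of pits.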

1000: The one with arrows

You got a matrix of up to 15x15 directions (up, down, right, left). The objective is to change the contents of the matrix so that, no matter what cell you start at, you will always eventually return to the initial position, by always moving to the cell pointed by the direction of the current cell you stand at. The rows and columns are cyclic.

I am not sure why, but I noticed "this is max flow" right away. I just didn't know how to set it up until some thought went into it.

The trick is that the objective is fulfilled if and only if, for each cell x, exactly one cell points to cell x. Try it. Then we can solve this using min-cost perfect bipartite matching: assign a target cell to each cell, so that each cell has exactly one incoming and one outgoing assignment. If the initial direction of a cell is LEFT, then it is free to assign the cell to its left neighbor; otherwise that assignment costs 1. And so on for each direction.
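To illustrate the setup (a sketch of my own, not the actual submission: the 'U','D','L','R' encoding, the function name, and the INF sentinel are all assumptions), the assignment costs can be built like this, leaving the matching itself to any min-cost bipartite matching or min-cost max-flow routine:

```cpp
#include <vector>
#include <string>
#include <algorithm>
using namespace std;

const int INF = 1000000; // sentinel: b is not a cyclic neighbor of a

// grid[r][c] is one of 'U','D','L','R' (assumed encoding). Returns the
// cost matrix of the assignment problem: cost[a][b] is the number of
// changes (0 or 1) needed so that cell a points at cell b.
vector<vector<int>> buildCosts(const vector<string> &grid) {
    int h = grid.size(), w = grid[0].size(), n = h * w;
    vector<vector<int>> cost(n, vector<int>(n, INF));
    int dr[4] = {-1, 1, 0, 0};
    int dc[4] = {0, 0, -1, 1};
    const char dir[] = "UDLR";
    for (int r = 0; r < h; r++) {
        for (int c = 0; c < w; c++) {
            for (int k = 0; k < 4; k++) {
                int nr = (r + dr[k] + h) % h; // rows and columns are cyclic
                int nc = (c + dc[k] + w) % w;
                int &e = cost[r * w + c][nr * w + nc];
                // free if the arrow already points this way, 1 change otherwise;
                // min() handles narrow grids where two neighbors coincide
                e = min(e, (grid[r][c] == dir[k]) ? 0 : 1);
            }
        }
    }
    return cost;
}
```

With a 15x15 grid the matrix is 225x225, small enough for any standard assignment-problem solver.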


I plan to make the editorial as quickly as possible to have a free schedule.

Thursday, February 21, 2013

SRM 571 editorial

It is "ready" -

I am trying a whole new layout for editorials. I wonder how it will be received.

The problems were mostly ok. Division 2's easy and medium were WAY easy. The Molecule problems were interesting. And I really love the div1 hard. It was a good choice not to implement it. My initial idea was the one with the donuts, but I didn't think of the cool optimization.

Feel free to comment (really).

Tuesday, February 19, 2013

TopCoder SRM 571: Educative

As I write this before the challenge phase, it appears this SRM was easier than usual. Tons of solutions, even for the hard problem. Even I could solve medium and easy. But I am not 100% sure about div1 medium.

Div1 hard

This was about using disks to move a candy around, rotating it around a disk's center, until it reaches a specific point.

At first the candy has a whole radius it can reach; this radius intersects with the circles and creates a donut pattern. I figured that even if I came up with a solution I would not have much time left, so I skipped it.

Div1 medium: The one with Cliques

You are given a molecule (graph) of up to n = 50 atoms (vertices), each with some power (weight). Find a sub-graph that has m vertices, such that 3*m is at least as large as 2*n, and in which every pair of atoms is connected. Return the maximum possible sum of weights (power).

So, the result set is a clique. Also, it should be a maximal clique: since there are no negative powers, if it is possible to grow the clique without removing atoms, it makes no sense not to. I was trying to come up with an algorithm that did this and took advantage of the clique having to be large (at least 2/3 of the graph). But then I thought: wouldn't this be a classic problem? So I decided to do a google search: "Find maximal complete sub-graphs". A Stack Overflow question and a Wikipedia link later, I ended up on the page for the Bron-Kerbosch algorithm. According to that page, as long as you use the pivot optimization the algorithm runs in O(3^(n/3)), which is great for n=50.

I implemented that algorithm (had a couple of implementation bugs, so it took some time). I then tested with some large graphs: the full n=50 graph, and a graph made of 15 complete subgraphs of 3 vertices plus one of 5. The algorithm seems very fast, but I am not so sure there isn't a better challenge case...

As I write this, I noticed that my program has already survived 2 challenge cases.

typedef long long int64;
#define long int64

struct MagicMolecule
{
    int n;
    long N[50];
    int res;
    vector<int> magicPower;

    // Taken from wikipedia, translated to use bitmasks.
    void BronKerbosch2(long R, long P, long X)
    {
        if (P == 0LL && X == 0LL) {
            // P and X empty, R is a maximal clique
            int t = 0;
            int m = 0;
            for (int i = 0; i < n; i++) {
                if ( (1ll << i) & R ) {
                    t += magicPower[i];
                    m++;
                }
            }
            if ( 3*m >= 2*n ) {
                res = std::max(res, t);
            }
            return;
        }
        // choose a pivot vertex u in P U X
        int u = 0;
        while ( ! ( (1ll << u) & (P|X) ) ) {
            u++;
        }
        // for each vertex v in P \ N(u):
        for (int v = 0; v < n; v++) {
            if ( ( (1ll << v) & P) && !( (1ll << v) & N[u]) ) {
                BronKerbosch2( R | (1ll << v), P & N[v], X & N[v]);
                P -= (1ll << v);
                X |= (1ll << v);
            }
        }
    }

    int maxMagicPower(vector <int> magicPower, vector <string> magicBond)
    {
        res = -1;
        n = magicPower.size();

        // N[i] is the set of neighbors of i
        for (int i = 0; i < n; i++) {
            N[i] = 0;
            for (int j = 0; j < n; j++) {
                if (magicBond[i][j] == 'Y') {
                    N[i] |= ( 1ll << j );
                }
            }
        }
        this->magicPower = magicPower;
        BronKerbosch2(0, (1ll << n) - 1, 0);
        return res;
    }
};
#undef long

Div1 easy

There are n files named 1.mp3, 2.mp3, ..., 10.mp3, 11.mp3, ..., n.mp3. They are sorted by name. And anyone who has dealt with file managers knows that sorting by name is not the same as sorting by song number, unless the song numbers have leading 0s.

Return the first min(n, 50) file names after sorting the files.

This is as interesting as an easy problem can be.

So (assuming n is large enough) the results look like: 1.mp3, 10.mp3, 100.mp3, ... (until the next 10...0 would be greater than n), then names like 100001.mp3, 100002.mp3, ..., 100009.mp3, 100010.mp3, and so on.

I preferred to use a backtracking approach. Starting at x=0, I recurse through x*10 + 0, x*10 + 1, ..., x*10 + 9, in that order (skipping a leading 0). Whenever I find an integer that is at most n, I add it to the result. When the result has min(n, 50) elements, the whole recursion ends. So the recursion needs only about min(n, 50) productive steps overall.

struct FoxAndMp3
{
    int n;
    vector<string> res;

    void rec(int x)
    {
        if (x > n) {
            return;
        }
        if ( (int)res.size() == n || res.size() == 50 ) {
            return;
        }
        if (x != 0) {
            ostringstream st;
            st << x << ".mp3";
            res.push_back(st.str());
        }
        for (int i = (x == 0); i <= 9; i++) {
            if ( (int)res.size() == n || res.size() == 50 ) {
                return;
            }
            // recurse only if x*10 + i cannot exceed n (avoids overflow):
            if ( x <= (n - i) / 10 ) {
                rec(x * 10 + i);
            }
        }
    }

    vector <string> playList(int n)
    {
        this->n = n;
        rec(0);
        return res;
    }
};


At this moment, my div1 500 has already survived 3 challenges.

Let us see how it goes

Will my submissions survive the system tests? Will I recover yellow? No idea. It was an interesting match, but easier than usual. I think the admins want a warm-up for Saturday's TCO match (which will have easy problems) and also want to make up for the recent trend of hard matches.


Friday, February 15, 2013

TopCoder SRM 570 Editorial is ready

And there it is. You can access at :

Feel free to comment, even though I'll be disconnected for the weekend.

Div1 easy / div2 medium was a bit too fun to explain. Like I mentioned, I was in a rush when making this editorial, but those problems are just too good to leave the write-up incomplete without mentioning at least four approaches. A good div1 easy / div2 medium has tons of approaches, so that fools (e.g.: me) have more chances to fail challenges or to succeed.

Div2 easy seemed a bit heavier than usual. My initial solution seemed a bit too complicated to explain, so I went shopping through the division 2 summary looking for the kinds of solutions people in that division were using. It seems that sorting as a shortcut was too esoteric for that level.

TopCoder SRM 570: CentaurCompany and CentaurCompanyDiv2

Another 570 editorial update: This time for the division 2 hard and division 1 medium.

My verdict is that the division 1 medium's editorial really blows. It is a bit of a fruitless problem to explain. For starters, dynamic programming problems tend to have this issue that it is difficult to avoid reaching that state in which you explain what the solution does instead of how to find it. Then we have that the problem is 20% an obscure formula and 80% obscure dp optimizations. I really didn't have a lot of room to maneuver.

Also, since tomorrow and Sunday I will be traveling against my will, I really want to finish this editorial quick. Nobody wants to wait 48 extra hours, and I am tired of editorials taking so long. Like I mentioned in the previous post, I had the complete intention to release this editorial 24 hours ago. It just could not happen because of the Centaur Company :(.

Topcoder SRM 570 -"CurvyonRails" Editorial

A preview for the editorial is up:

This problem was interesting, to say the least. Interesting in that it is in theory a simple min-cost max flow problem, yet nobody was able to solve it. I know that on paper, if I were developing this match, I would expect this problem to be easier than the usual division 1 hard. And it seems the admins and the writer thought the same: they gave it a 900-point score tag.

So why did nobody submit it? The reduction to min-cost flow is probably not trivial. I also think that most coders just didn't have time. The division 1 medium turned out to be quite a time sink. Even if you manage to think of a solution, implementing a many-dimensional dynamic programming algorithm with tricky cases is always a time sink...

That division 1 550... It really frustrated my plans. I wanted to get the editorial finished 24 hours ago, but 24 hours ago I was still trying to optimize the complexity after taking ages to actually understand what we need to do. Let me return to writing that part of the editorial...

Wednesday, February 13, 2013

SRM 570: Blue

This is it, I am going blue today.

As I write this (Before challenge phase) I know that if it was not for a last second mistake, I would have actually had a good score in division 1 250. But this last mistake made me unable to submit.

Div1 250: The one with robot

A robot in an infinite grid has a program. For each integer a[i], it moves forward a[i] times, then rotates left a[i] times. This program is repeated T times, return abs(x) + abs(y) where (x,y) is the final position.

Let us represent directions by 4 integers: 0, 1, 2, 3. Rotating left x times really just does direction = (direction + x) % 4. The direction is an index into offset arrays, so moving in direction i means moving dx[i] units along the x axis and dy[i] units along the y axis. dx[] and dy[] should be in clockwise order.

The trick is to know that after running the program one time, the direction will change by either 0, 1, 2 or 3. x and y should also change.

If the direction changes by 0, repeating the program again will never change the direction, so final x = T*x and final y = T*y (where (x,y) is the position after running once).

If the direction changes by 1 or 3, then repeating the program FOUR times gives a combined program that changes the direction by 0. So we can simulate four runs to find how much this combined program moves, multiply x and y by T/4, then repeat the original program T%4 more times.

If the direction changes by 2, then you just need to repeat 2 times.

// simulates program a one time, updating initial x,y and d.
void doit(vector<int> a, long &x, long &y, int &d)
{
    int dx[4] = {0, 1, 0, -1};
    int dy[4] = {1, 0, -1, 0};
    for (int i = 0; i < (int)a.size(); i++) {
        x += dx[d] * a[i];
        y += dy[d] * a[i];
        d = ( d + a[i]) % 4;
    }
}

long getdist(int T, vector <int> a)
{
    long x = 0, y = 0;
    int d = 0;
    // run once to know how d will change:
    doit(a, x, y, d);
    // Select the number of times to repeat the program so that d=0
    int r = 1;
    if (d == 0) {
        r = 1;
    } else if ( (d == 1) || (d == 3) ) {
        r = 4;
    } else {
        r = 2;
    }
    // Simulate repeating the program r times (already done once):
    for (int i = 0; i < r - 1; i++) {
        doit(a, x, y, d);
    }
    x = x * (T / r);
    y = y * (T / r);
    // Repeat T%r more times:
    for (int i = 0; i < T % r; i++) {
        doit(a, x, y, d);
    }
    return abs(x) + abs(y);
}

As usual, I only had 10 minutes to solve this problem. So I rushed a bit, and actually managed to do it all -- with the exception that, in my rush, I thought that when the direction changes by 3 you should repeat three times instead of four. This was dumb, because when I initially thought of the solution I knew it was 4 repetitions (the results modulo 4 are known after you do these problems for a while). But for some reason, when there was about one minute left, I did the sums modulo 4 manually: 0+3 = 3 mod 4, 3+3 = 1 mod 4, then 1+3 = 0 mod 4. The mistake was that I accidentally had 3+3 = 1 mod 4, when it should be 2.

So I failed example cases. I had 40 seconds to debug. I didn't have time to find out what the issue was. Around 10 seconds after the end of the coding phase I figured what the mistake was.

But I think the real blunder was not engineering the code better. I kept pasting stuff and redoing the bit of code for each d; I think this copy-pasting delayed me too much. This problem was workable in 10 minutes.

Later in the challenge phase, I found a solution that returned 0 in a case, so I challenged it (I knew that there was no difference between 0 and -25 points, so basically no risk). It turns out it was correct, and clever actually: if d=2 after one run and T is even, the result is 0. There are other similar tricks. After that moment I just kept challenging to gain more minuses.

Last words

Anyway, I am going to be blue today, but I will maintain my strategy. I think that with more practice I will be able to start solving div1 250s in less than 10 minutes without all these mess ups. And now that I am blue I have nothing to lose anymore. This was a consequence I foresaw after taking the decision to use this strategy. Short term rating is not the objective.

Of course, I spent the first 65 minutes of the match looking at div1 medium and hard. I think div1 medium should be meet in the middle (That n<=36 constraint?). Div1 hard might be my favorite kind of problem, super optimized dp. Or it might be some sort of brute force.

Monday, February 11, 2013

SRM 569 Editorial

It is not published yet, but I finished it last night and it is in a state that is sort of ok:

Remember when I said that I liked the mathy div1 1000-points problem of this match? I predicted that, although difficult, it would have a solution that would not be impossible to explain, so that was fine by me.

It turns out this problem is the coolest thing I have solved in months. I just really like this twist on the usual matrix exponentiation idea. This problem does not use matrix exponentiation, but it adapts that idea into something (that at least to me is) fresh.

Saturday, February 09, 2013

SRM 569 editorial preview

Division 2 and division 1 easy explanations are complete. Go here:

I think I have div1 medium figured out; I just need to make sure I understand a couple of things. Div1 hard is hard as usual, but after rng_58's explanation I think/hope this won't take me as long as last match's.

There are forces at work towards compensating for the current trend of unsolvable div1 1000s by extending the duration of the matches. I really dislike the idea. A similar change happened to Google's Code Jam, and I think the problem set quality has been on a downward spiral since then. The longer matches have also been less comfortable for my taste: instead of increasing the chances of solving problems, the problems just became longer to solve.

What I do know is that we used to have people solving div1 hard problems in the past. So it is certainly possible to make problem sets that are solvable in the current 75 minutes.

Wednesday, February 06, 2013

SRM 569 : Blunder

Having a bad day here because even after a whole week I was still not able to finish the editorial for SRM 568. The first big delay was because of the harder than usual div1 500, but the worst happened later when I got sick... I recovered on Monday and tried to use that time to understand the div1 hard. It is a work in progress.

Then it was SRM time, and I scored another flat zero.

Div1 1000: Mega factorial

This problem was like an evil, over-the-top version of SuperSum, but this time with factorials. And you have to count the number of trailing zeros in a very recursive variation of the factorial. Seems mathy. That's probably good, because I am tired of those silly div1 1000s in which the solution comes out of nowhere. At least this one will be easy to explain once I find the solution, I think.

Div1 500: The one with Jedis

You are given an array of numbers called students that contains the initial number of students on each floor. When assigning Jedis to floors you pay the sum over all floors i of ceiling(students[i] / K). However, you can move students from a floor to either the next or the previous one. Let us minimize that sum.

The constraint on the number of floors is low (20). I knew the solution was going to be exponential, making a binary decision for each floor. But no idea how to make that decision.

Some cases I thought of seemed very confusing. Like {7,2,7} with K=8: it is best to move 1 student from the middle floor to the first floor and another to the third floor. This specific case can also be solved by moving the 14 outer students to the middle floor, but there are other variations. It was a bit confusing to my brain.

Div1 250: The one with binary operations.

As my new strategy dictates, 10 minutes before the end of the match I have to open the easy problem.

You have a device that takes 2 sequences of bits and, for each bit position, performs a binary AND, OR or XOR between the two bits in the corresponding positions of the input sequences; you need to be able to tell the three operations apart in every position. You already have a number of bit sequences you can use. What is the minimum number of additional sequences you will need?

Each bit position is really a separate problem. If, for some bit position, all the sequences have bit 0, you will probably need a sequence with bit 1 in that position. Let us say you know the number of required extra sequences for each position. The maximum of them all is the final result: just make that many new sequences with the required bits; for the positions that do not need as many new sequences, feel free to repeat some bits.

What is left is this question about a single bit position: you have z sequences that contain 0 in this position and o sequences that contain 1; how many additional sequences does this position need?

Here is where I made the blunder. I knew that I had to make truth tables for XOR, OR and AND to know what sort of bit pairs are needed to differentiate between the three. But for some stupid reason I decided to rush and make a stupid assumption instead: that you need 2 zeros and 2 ones.

That is wrong and fails example cases. Then I made another stupid assumption: that you need two different bits and either one extra 1 or one extra 0. This is also wrong.

Match ended.

This is when I made a truth table on paper:

(a,b) | AND OR XOR
(0,0) |  0   0   0
(0,1) |  0   1   1
(1,1) |  1   1   0

And it was so fast and made things so clear that I really regret not doing it during the coding phase. Note how having two zeros is completely useless: the pair (0,0) cannot help you differentiate between AND, OR and XOR at all. Two different bits (0,1) let you differentiate AND from OR and AND from XOR, but cannot help when differentiating between OR and XOR. The pair (1,1) differentiates XOR from both AND and OR. In conclusion, each position needs at least two sequences with 1 and one sequence with 0.

int minimumAdditional(vector <string> plates)
{
    int need = 0;
    for (int i = 0; i < (int)plates[0].size(); i++) {
        int z = 0;
        int o = 0;
        // count the number of sequences with 0 and 1 in this bit position
        for (int j = 0; j < (int)plates.size(); j++) {
            z += ( plates[j][i] == '0');
            o += ( plates[j][i] == '1');
        }
        // we need 2 ones and 1 zero in this position:
        int x = 2 - std::min(2, o) + 1 - std::min(1, z);
        need = std::max(need, x);
    }
    return need;
}