Tuesday, October 15, 2013

SRM 594: More slowness

One of the reasons I was trying a risky strategy (putting more focus on the div1 medium/hard at the risk of not solving the div1 easy) is matches like this one: you solve only the div1 easy, and you take too long, so you end up with a low score that is not too different from a zero, just much more boring. It gets frustrating.

Div1 250: The one with planets.

You get two arrays A, B, lists of planets in increasing order of distance from the sun. Each list might be missing some of the planets in the solar system. The lists contain planet sizes, but each list uses a different measurement system, so planets with sizes {6, 18} in one list might appear as {10, 30} in the other. Return the minimum number of planets the solar system must have for both lists to be correct. In other words, find the maximum number of planets the lists can have in common (with consistent size proportions) and subtract it from the total number of planets in the two lists.

I took too long to think of this solution, but the key is to figure out that, unless the answer is just the two lists concatenated, at least one pair of planets, one from each list, must be equal. (Worst case scenario, you make the last planet in one list and the first planet in the other equal.) We can iterate through all possible pairs `(A[i], B[j])` and ask "what is the minimum number of planets if we assume these two are equal?". If we decide that two integers `A_i, B_j` describe the same planet in different measurement systems, then the conversion rate between the two systems follows: one unit in system A is equivalent to `B_j/A_i` units in system B. So we can just multiply all planets by that proportion. If we want to avoid doubles, we can instead find the greatest common divisor `g = gcd(A_i, B_j)` and multiply the arrays by `B_j/g` and `A_i/g`, respectively; that puts all numbers in the same measurement system.

Once all numbers are in the same system, we just need to find the maximum number of planets the two lists share. Since both lists are in the same order, this is actually the longest common subsequence, a classical dynamic programming problem.

long gcd(long a, long b)
{
    while (b != 0) {
        tie(a, b) = make_tuple(b, a % b);
    }
    return a;
}

vector<long> fix(vector<int> A)
{
    // normalize: divide the whole list by its gcd and widen to 64 bits
    int g = A[0];
    for (int x: A) {
        g = gcd(g, x);
    }
    vector<long> LA;
    for (int x: A) {
        LA.push_back(x / g);
    }
    return LA;
}

int sub(vector<long> A, vector<long> B, int i, int j)
{
    // if we assume A[i] and B[j] are the same planet, what is the
    // minimum number of planets?
    long a = A[i], b = B[j];
    long g = gcd(a, b);
    // Scale the vectors such that the new vectors A', B'
    // have:  A'[i] = B'[j] = lcm(A[i], B[j])
    for (long &x: A) {
        x *= (b / g);
    }
    for (long &x: B) {
        x *= (a / g);
    }
    // longest common subsequence:
    int nA = A.size(), nB = B.size();
    int dp[nA + 1][nB + 1];
    for (int s = 0; s <= nA; s++) {
        for (int t = 0; t <= nB; t++) {
            if (s == 0 || t == 0) {
                dp[s][t] = 0;
            } else {
                bool eq = (A[s-1] == B[t-1]);
                dp[s][t] = max( {dp[s-1][t], dp[s][t-1], eq + dp[s-1][t-1]} );
            }
        }
    }
    return nA + nB - dp[nA][nB];
}

int minimalPlanets(vector<int> _A, vector<int> _B)
{
    // if the result is not |A| + |B|, at least one pair must be equal
    vector<long> A = fix(_A);
    vector<long> B = fix(_B);
    int nA = A.size(), nB = B.size();
    int res = nA + nB;
    for (int i = 0; i < nA; i++) {
        for (int j = 0; j < nB; j++) {
            res = std::min(res, sub(A, B, i, j));
        }
    }
    return res;
}

I had a fun delay: when multiplying A by `B_j/g` and B by `A_i/g`, I didn't notice that `A[i]` changes after the first step. Good thing example 2 catches this mistake.

Div1 550

I tried plenty of things with no success. At first I thought it was clearly a maximum flow problem. It is at least very easy to find, through min-cut, the minimum number of black checkers needed to remove all white checkers. Is there a way to do the same but remove only a limited number of white checkers? I have no idea. But I noticed Petr used max flow, so maybe there is a way, albeit a less straightforward one, to solve the whole problem with max flow.

Later I thought maybe it was greedy, but nah: I was able to come up with counterexamples to every greedy idea I had, so probably not.

Challenge phase

I thought that overflow might be a mistake people would make, but I didn't have much luck.

There was a greedy solution to the 550 in my room, but it took me a while to understand its strategy and come up with a counterexample; someone else challenged it while I was typing in my challenge case.


I need to improve my speed, but I have no idea how it can be done. I thought limiting my time on the 250 to 10 minutes would work, but it didn't seem to. Any comments about the match?


Vijay said...

I too thought (though more deliberately) of max-flow, but it didn't come naturally. What is the right direction to think in, to see it reduce to max-flow easily?

2rf said...

Is your tie/make_tuple version of gcd faster than "return b == 0 ? a : gcd(b, a % b)" ?

qwerty said...

Why don't you practice more 250 until your speed becomes good enough? ;)

vexorian said...

No idea, but it is probably slower. tie and make_tuple() add one level of indirection, which probably cancels out any advantage over the recursive gcd(). But maybe the optimizer turns it into `int x = a, y = b; a = y; b = x % y;`. If that happens then it would be faster. Benchmarker needed.

JOmegaCV said...

You don't need gcd or fix correct? You can just scale A by B[j] and B by A[i] without dividing by g?

vexorian said...

I've been there.

vexorian said...

Sure, but I didn't have time to make the changes before submitting my quick blog post.

Something like fix is needed so that the vector holds 64-bit integers, at least as long as you want the LCS code to be independent from the original problem. You could also do the LCS and the multiplication at the same time.

JOmegaCV said...

I understand it needs to be 64 bits, but the gcd normalization in fix is unnecessary. `vector<long> A(_A.begin(), _A.end());` would have worked instead of fix. Not trying to nitpick; if it works, it works. I just enjoy trimming algorithms!