Tuesday, July 22, 2014

SRM 628: sleeping during the srm

Okay, so here is the thing. This is a 7:00 AM match. That is not usually a problem, except that I went to sleep at 4:00-something AM; at least I was able to register for the match early. So I set my alarm clock to 7:15 AM, figuring that losing those 15 minutes wouldn't really change my score too much unless the div1 medium was easy. I *actually* woke up at 7:30. With 45 minutes left, the objective was to do at least the div1 easy...

Div1 Easy: The one with sub-routines

Given `2 <= n <= 10^18`, find the maximum `x` such that `x ^ d(x) = n`, where `d(x)` is the number of divisors of `x`. If no such `x` exists, return -1. For example, for `n = 729` the answer is 9, because 9 has 3 divisors and `9^3 = 729`.

The key is to notice that `d(x)` cannot really get too large: since `x >= 2`, any valid `d` satisfies `2^d <= n`, so `d` is `O(log(n))`. So we iterate through the values of `d`, checking for each one whether `n` is a perfect `d`-th power, and stop once `d` gets too large.

For a given `d`, let's find `x : x ^ d = n` , this is the `d`-th root of `n`. We could use the pow function here, but it uses floating point numbers and you would need to be careful with precision and stuff... So we can just use a binary search for the largest `x` such that `x ^ d <= n`. Incidentally, if this `x` is 1, then we know that `d` has become too large and we can stop the search.

Once we have `d` and its candidate value of `x`, all we need to do is verify that the number of divisors of `x` is equal to `d`. Since `d` is at least 2, `x` is `O(sqrt(n))`, and we can count the divisors of `x` in `O(sqrt(x))` time. This yields an `O(n ^ (1/4))` algorithm. Nice?

The main method of my solution looks like this:

long findArgument(long n)
{
    long d = 2;
    long res = -1;
    while (true) {
        // find x : x ^ d = n
        
        long x = binarySearch(1, n+1, [&](long x) {
                return ipow(x,d) <= n;
        });
        if (ipow(x,d) == n) {
            // possible
            if (countDivisors(x) == d ) {
                res = std::max(res, x);
            }
        } else if (x == 1) {
            // even the largest valid base is 1, i.e. 2^d > n: larger d won't help
            break;
        }
        
        d++;
    }
    
    return res;
}

Everything else is just standard sub-routines ...

long binarySearch(long lo, long hi, std::function<bool(long)> P)
{
    // precondition: P(lo) is true and P(hi) is false;
    // returns the largest value in [lo, hi) for which P holds
    while (lo + 1 < hi) {
        long ha = lo + (hi - lo) / 2;
        if (P(ha)) {
            lo = ha;
        } else {
            hi = ha;
        }
    }
    return lo;
}

const long INF = 1000000000000000002LL;
long ipow(long x, long d)
{
    // returns x ^ d if x ^ d < INF, else INF 
    long r = 1;
    while (d > 0) {
        if (d % 2 == 1) {
            if (r > INF / x) {
                r = INF;
            } else {
                r = r * x;
            }
        }
        d /= 2;
        if (x > INF/x) {
            x = INF;
        } else {
            x = x * x;
        }
    }
    return r;
}

long countDivisors(long x)
{
    long p = 1;
    long c = 0;
    while ( p <= x / p) {
        if (x % p == 0) {
            c++;
            if (x / p != p) {
                c++;
            }
        }
        p += 1;
    }
    return c;
}
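
Just as a quick sanity check, here is a tiny `main` (my own addition, not part of the actual submission) that exercises the snippets above; it assumes the usual includes such as `<cstdio>`, `<algorithm>` and `<functional>`:

int main()
{
    // 729 = 9^3 and 9 has exactly 3 divisors, so the answer is 9
    printf("%ld\n", findArgument(729));
    // no x satisfies x ^ d(x) = 10, so we expect -1
    printf("%ld\n", findArgument(10));
    return 0;
}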

Div1 medium: The one with expressions

You are given a tree of binary expressions. Each operation can either be the ADD function or the MAX function. You also have a list of (positive) values you can assign to the leaves of the tree. There are at most 2000 leaves. Return the maximum value the whole expression can take.

Good luck with THAT. The solution is likely some sort of greedy, but I am too sleepy to work it out.


Afterwards

During the challenge phase I read code for the div1 500. The idea appears to be that you can find the maximum number of values that end up added together in the final expression. If you have this number `x`, then you just have to pick the largest `x` integers from the values and add them together. Of course they had to pick THIS match to have an easy div1 medium...
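
Here is a minimal sketch of that idea, as I understood it from the codes I read. Note that the tree encoding (`kind`, `lft`, `rgt` arrays with the root at node 0) is entirely my own made-up representation, not the actual input format of the problem:

#include <bits/stdc++.h>
using namespace std;

// kind[i]: 'A' = ADD node, 'M' = MAX node, 'L' = leaf
// lft[i], rgt[i]: children of internal node i
// returns the largest number of leaves that can end up added together
// in the value of the sub-expression rooted at node i
int countAddends(int i, const vector<char> &kind,
                 const vector<int> &lft, const vector<int> &rgt)
{
    if (kind[i] == 'L') {
        return 1;
    }
    int a = countAddends(lft[i], kind, lft, rgt);
    int b = countAddends(rgt[i], kind, lft, rgt);
    return (kind[i] == 'A') ? (a + b) : max(a, b);
}

long long maxExpressionValue(const vector<char> &kind, const vector<int> &lft,
                             const vector<int> &rgt, vector<long long> values)
{
    // x addends will be summed, so give them the x largest available values
    int x = countAddends(0, kind, lft, rgt);
    sort(values.rbegin(), values.rend());
    return accumulate(values.begin(), values.begin() + x, 0LL);
}

Since the values are positive, a MAX node can always commit to the branch that allows more addends, which is (I think) why this greedy works.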

Thursday, July 10, 2014

SRM 627: Did I really code THAT?

Is this just fantasy? I am not sure if this actually happened, backtracking with binary indexed trees? What?

Div1 250: The one with letters

The game starts with a multiset of alphabetic letters. In every step, two distinct letters (two different characters) are taken and removed. Eventually, the multiset will be empty or will contain only copies of a single letter; that letter is called the winning letter. Return all the letters that can possibly be winning letters.

My idea, which I am not totally sure is correct, is to check for each letter `c` whether it can possibly win. All the moves then have to be made with the intention that `c` is the letter that remains at the end. This means we should first try to clear as many letters different from `c` as possible. Eventually, only copies of a single letter distinct from `c` will remain, and we need to get rid of them by pairing them with the `c` letters. If there are fewer of these leftover letters than copies of `c`, then `c` can win.

The trick then is to always pick pairs of letters distinct from `c`, choosing the two letters that are currently most frequent.

class HappyLetterDiv1:
    def getHappyLetters(self, letters):
        res = ''
        # for each letter ch, can we do it?
        for ch in sorted(set(letters)):
            # get a dict of distinct letters and counts
            other = { x : letters.count(x) for x in letters if x != ch }
            while len(other) > 1:  # at least two different ones
                # sort by frequency
                least = sorted([ (other[x],x) for x in other.keys() ])
                # remove a letter of each of the two most frequent:
                a = least[-1][1]
                b = least[-2][1]
                def dec(y):
                    other[y] -= 1
                    if other[y] == 0:
                        other.pop(y)
                dec(a)
                dec(b)
            if (len(other) == 0) or (other[other.keys()[0]] < letters.count(ch)):
                res += ch
        return res

Div1 500: The one with a graph and inversions

You have a graph of at most `1000` vertices. The graph is special in that it is a tree with an extra edge added to it. (The graph is connected, undirected, has no loops and has exactly `|V|` edges.)

The value of a vertex `x` is `V[x]`.

For a simple path of vertices in the graph, we make a sequence `S` of the values of the vertices in visited order. Now count the number of "inversions" of this sequence: the number of pairs `(i,j)` such that `(i < j)` and `(S[i] > S[j])`. Return the minimum number of inversions over all such sequences with at least `K` elements.

.... So the trick is that the graph is a tree with exactly one extra edge. So there is only one simple cycle in it. So there are at most 2 simple paths connecting each pair of vertices. So there are at most 2000000 paths. 2000000 sequences. With backtracking, we can visit them all. The problem is that you need to calculate the sequence's number of inversions as you build the sequence...

Let's say you have already chosen the previous elements of a sequence and want to append `V[x]`; the number of new inversions is the number of elements already in the sequence that are larger than `V[x]`. So we "just" need a data structure that can add elements, remove elements and count the elements larger than an integer `x`, each in `O(log(N))` time. And we need to update that structure (add, then remove) as part of the backtracking. Easy... So I implemented a little tree over the value range (I called it fenwick for some reason, even though it is really a segment tree rather than a binary indexed tree) and it seems to work.

// for each pair of vertices (x,y), there are at most 2 simple paths between them,
// thus there will be at most about 2000000 paths.

// segment tree over the value range [0, 1024): each node stores how many of the
// currently inserted values fall inside its interval
int fenwick[2048]; //probably wrong name, it is really a segment tree

void fenwick_add(int x, int a = 0, int b = 1024, int node = 0)
{
    if ( a <= x && x < b ) {
        fenwick[node]++;
        if (a < b - 1) {
            int c = (a + b) / 2;
            fenwick_add(x, a,c, node*2 + 1);
            fenwick_add(x, c,b, node*2 + 2);
        }
    }
}
void fenwick_remove(int x, int a = 0, int b = 1024, int node = 0)
{
    if ( a <= x && x < b ) {
        fenwick[node]--;
        if (a < b - 1) {
            int c = (a + b) / 2;
            fenwick_remove(x, a,c, node*2 + 1);
            fenwick_remove(x, c,b, node*2 + 2);
        }
    }
}
int fenwick_counthigher(int x, int a = 0, int b = 1024, int node = 0)
{
    // counts the inserted values that are strictly greater than x
    if (a > x) {
        // the whole range [a,b) lies above x
        return fenwick[node];
    }
    if (b <= x + 1) {
        // the whole range [a,b) lies at or below x
        return 0;
    }
    int c = (a + b) / 2;
    int r = fenwick_counthigher(x, a,c, node*2 + 1)
          + fenwick_counthigher(x, c,b, node*2 + 2);
    return r;
}


int getMinimumInversions(vector<int> A, vector<int> B, vector<int> V, int K)
{
    int N = V.size();
    vector<list<int>> g(N);
    for (int i = 0; i < A.size(); i++) {
        g[A[i]].push_back(B[i]);
        g[B[i]].push_back(A[i]);
    }

    const int INF = 20000000;
    int res = INF;
    memset(fenwick, 0, sizeof(fenwick));
    for (int s = 0; s < N; s++) {
        vector<bool> visited(N, false);
        int t = 0;
        int inv = 0;
        
        // extends the current path with vertex x, updating the inversion count incrementally
        std::function< void(int) > backtrack = [&](int x) {
            visited[x] = true;
            t++;
            int oinv = inv;
            inv += fenwick_counthigher( V[x] );                
            fenwick_add( V[x] );
            if (t >= K) {
                res = std::min(res, inv);
            }
            for (int y : g[x]) {
                if (! visited[y] ) {
                    backtrack(y);
                }
            }
            fenwick_remove( V[x] );
            inv = oinv;
            t--;
            visited[x] = false;
        };
        backtrack(s);
    }

    return (res >= INF) ? -1 : res;
}

Challenge phase

Got lucky: the first place in the room clearly did the 500 without the data structure part (they were using a std::multiset, but a multiset doesn't help with the counting-higher-elements part). So I just challenged with a large case.