Saturday, March 31, 2012

Topcoder Open 2012 round 1A

This is what I wrote for the official TCO blog:

More things to say:
  • Cute codes for 250 and 500:
    #define for_each(q, s) for(typeof(s.begin()) q=s.begin(); q!=s.end(); q++)
    struct EllysJuice
    {
        vector<string> getWinners(vector<string> players)
        {
            if (players.size() == 1) {
                // special case with one turn, the only player wins.
                return vector<string>(1, players[0]);
            }
            // Count how many times a player exists:
            map<string,int> cnt;
            for_each(p, players) {
                cnt[*p]++;
            }
            // Add those guys with 2 or more instances to the result:
            vector<string> res;
            for_each(x, cnt) {
                if (x->second >= 2) {
                    res.push_back(x->first);
                }
            }
            sort(res.begin(), res.end());
            return res;
        }
    };

    // long long once killed a kitten.
    typedef long long int64;
    #define long int64
    #define for_each(q, s) for(typeof(s.begin()) q=s.begin(); q!=s.end(); q++)
    struct EllysFractions
    {
        long getCount(int N)
        {
            long pn = 0;
            long res = 0;
            set<int> primes;
            for (int x=2; x<=N; x++) {
                // is x prime?
                bool isprime = true;
                for_each(p, primes) {
                    isprime &= ( (x % (*p)) != 0 );
                }
                // Yes. Yes, it is...
                if (isprime) {
                    primes.insert(x);
                    pn++;
                }
                // (2 raised to the pn-th) / 2 is the number
                // of fractions for x!
                res += (1LL << pn) / 2;
            }
            return res;
        }
    };
    #undef long

  • The problem in which you have to split the numbers from 1 to N into two parts, one the numerator and one the denominator, is an interesting problem by itself. That's the problem I actually solved during the match. It took me 10 minutes to solve it and 100 minutes to figure out it was the wrong problem, so I spent most of the match trying to find out why my solution was returning such a wrong value for N=100. (By the way, I see no reason at all not to include a smaller number, like 6, but larger than 5, in the example cases, other than to make people go wrong with this).

  • Once I got desperate and it seemed like I was not going to solve the 500, I switched to the 1000. And it seemed like I had solved the problem before, in TopCoder. But I no longer had any idea how to solve it, and I could not find the old problem. So far the main suspect is round 306's div1 medium, but it is actually a bit different. Maybe it was just a buggy déjà vu.

    In my attempts to find the solution to it through google I found this: Interesting.

  • Blogger has updated its interface and: BOY IT SUCKS ARGGGGGGGGGHHH IT SUCKS IT SUCKS IT SUCKS. Thanks, I needed to get that out of my system.

    Seriously, what is Google's problem? This is starting to look like I wasn't exaggerating when I said Google was starting a war on color. Worse, Google is going backwards now, and Blogger's interface is actually worse on small screens (buttons don't show up when you edit an entry) than before.

Friday, March 30, 2012

Codeforces round #114. 167C: Wizards and numbers

This problem beat me. I had been trying to solve it since the day of the match. I came to understand a lot of it, except the subproblem. Then I read the official explanation and thought I understood how to solve the problem, but I didn't. As soon as I found free time I gave it a serious try, and it turns out that the subproblem in question is one I solved back in 2010, during a TopCoder match. I guess I am getting worse...

(b%a, a) is inevitable
Let us say our numbers are 0 < a <= b. We represent such a state as (a,b). (If b < a, then just swap them). The first thing to notice is that it is inevitable that we will eventually reach the state (b%a, a). One of the available moves actually does it. But it is also nice to notice that the other kind of allowed move, to subtract from b a power a^k of a with k != 0, will never change the result b%a. This is because all a^k with (k != 0) are multiples of a. So, if we have (b >= a^k), then (b % a) will always be equal to ( (b - a^k) % a ).

This is useful, because given a state (a,b), the number of different states reached by doing only (b % a, a) operations is quite small. (This is actually like the Euclidean algorithm, which we know visits few states even for pairs of large numbers).

Some recursion won't hurt.
We got a state (a,b), and we know it will eventually become (b%a, a). Let us assume that we are using our knowledge of the theory of impartial games: We want to know if the player who receives the state (a,b) will win or not, and we already know whether the smaller state (b%a, a) is a victorious game or not (we are using recursion).

If (b%a, a) is a losing state, then the player that got state (a,b) can already win: Simply do the b %= a operation.

In the other case, we got quite a sub-problem. The player is not able to use the b %= a operation without losing. Now, imagine the current player did some subtraction operation: since subtraction does not change b%a, the next player will not be able to use the mod move (which still leads to (b%a, a)) without losing either. In this battle to avoid letting the other player reach state (b%a, a), both players will use only subtraction operations.

Let us consider the total number of times we can subtract a from b before it becomes equal to (b%a). We can assume b is not a multiple of a (because otherwise (b%a = 0, a) would be a losing state, and we would be in the previous case). Thus we have c = (b / a) - the number of times we can subtract a.

A subproblem
We can translate the previous subproblem to this version: We have a stack of c "stones" and two players. Each player can take 1, a, a^2, a^3, ... stones from the stack. The last player to remove stones from the stack (the one who empties it) loses - the player that receives an empty stack wins.

(Removing one stone from the stack is the equivalent of subtracting a from b. Removing a stones is the equivalent of subtracting a^2. Etc. And when the stack is empty, it is equivalent to reaching state (b%a, a), which is a victory for the player who receives it).

At first I thought that we could somehow reduce this to Nim. But it turns out that's not the best approach (although I think it is possible, and the end result is exactly the same as the one we will find).

Imagine for a second that c is not very large - in fact, no larger than a. This means that the only move we are allowed to make is to take a single stone. (We cannot take a*a stones or more, and if we take a stones, we empty the stack and lose). In this case, it is easy to see that the player wins if and only if c is even.

That is simply because 1 is an odd number. In fact, we can extend the idea: Imagine a was an odd number. Then every power of a would also be an odd number. This means that each player can only take an odd number of stones in each turn. Once again, the current player wins if and only if c is even.

What to do when a is even and c is greater than a? That is a nice question. Please note that we already know the answers for smaller values of c. Let us model a function f(c) that says whether c is a winning state or not:

f(0) = 1 (The player who receives the state with 0 stones wins).
f(1) = 0
f(2) = 1
f(3) = 0 // Since the only legal move is to subtract 1, the results alternate until:
f(a-1) = 0
f(a) = 1 (We are assuming a is even)
f(a+1) = 1

f(a+2) = 0
f(a+3) = 1
f(a+a-1) = 1
f(a+a) = 0
f(a+a+1) = 1
f(a+a+2) = 1

Yes, (a+1) is a winning state. We can simply remove a stones and force the other player to receive state 1, which is a losing state.

(a+2) is not a winning state: If we remove a single stone, the other player receives (a+1), which is a winning state. If we take a stones, the other player receives state (2), another winning state. (a+3), ... keep alternating until we reach f(a+a-1), which is a winning state (subtract a to reach (a-1), a losing state). f(a+a) is a losing state: subtracting 1 reaches (a+a-1) and subtracting a reaches (a), both winning states. It becomes interesting in (a+a+2)'s case, in which once again it is necessary to subtract a and reach (a+2). Now note that for each (a+2 <= c <= a+a+2), f(c) is equal to f(c - (a+1)). With this it is enough to conclude that, if the allowed moves were only 1 and a, the results would be cyclic with period (a+1). In fact, the result is given by (c % (a+1)): if (c % (a+1)) is even, you win, else you lose.

Finally, consider a value of c that is a losing state in our supposed variation in which only taking 1 or a stones is allowed. This means that both f(c-1) and f(c-a) are winning states. Is it possible for f(c-a^k) (k > 1) to be a losing state? By induction, assume that the formula is valid for smaller states. Since f(c - a) is a winning state and, for contradiction purposes, we assume that f(c - a^k) is a losing state, the parities of the following two expressions must be different:

1. (c - a) % (a + 1)
2. (c - a^k) % (a + 1)

BUT, a and a^k always have the same parity (for k >= 2). So this is a contradiction and we have demonstrated that the formula is correct.

The similar problem from an old SRM: PotatoGame. I thank the blog at for pointing it out.

The code

// program:
bool solve(long long a, long long b)
{
    // {a <= b}
    if (a > b) {
        return solve(b, a);
    }
    if (a == 0) {
        // first player always loses
        return false;
    }
    bool next = solve(b % a, a);
    if (! next) {
        // next state is defeat, always move to it
        return true;
    }
    // The next state is a win, so our current player must think of a way to
    // reach the next state in an even number of moves.
    // We know that a is not 1 and b is not a multiple of a.

    long long c = (b / a);
    // If the current player wants to win, c elements must be taken from a
    // 'stack' in an even number of moves, when the only allowed moves are to
    // take 1, a, a^2, a^3, ...

    // If a is odd or (a^2 is too large), then all allowed moves change the
    // parity of the number of stones. The parity of 0 is even, so we must
    // reach "even" parity after an even number of moves.
    if ( (a % 2 == 1) || (c <= a) ) {
        return (c % 2 == 0);
    } else {
        // moves no longer forcibly change parity. Meh.
        return ( ( c % (a + 1) ) % 2 == 0 );
    }
    // Note that the last five lines can be replaced with a single:
    // return ( ( c % (a + 1) ) % 2 == 0 ); because it also works
    // when (a % 2 == 1) || (c <= a).
}

inline void init(){}
// I/O:
int main()
{
    init();
    bool prev = false;
    int T;
    while ( cin >> T ) {
        if (prev) {
            cout << endl;
        }
        prev = true;
        long long a, b;
        for (int i=0; i<T; i++) {
            cin >> a >> b;
            cout << (solve(a,b) ? "First" : "Second") << endl;
        }
    }
    return 0;
}

Once again, my explanation is humongous.

Thursday, March 29, 2012

Official TCO 2012 blogger

2012 may as well be the end of the world. It seems I am finishing college this year, and I will also, for once, attend an on-site final for the TopCoder Open.

TCO'12 blog: The other algorithm and marathon blogger

It shall be fun. It requires me to post a certain number of posts per month to maintain a quota, though. And of course, it also means I will probably use the official blog to talk about TopCoder Open related topics.

Of course, it is not the same to attend as a blogger as it is to attend as a finalist. My objective is to keep trying and trying until all the current targets and reds tire out and so I will have the way open to be a finalist in 2030. It WILL happen. Eventually. I am relentless.

Tuesday, March 27, 2012

Codeforces round #114 (div1)

So, there we go. What a month. The number of contests I participated in seems so large that I can't remember a day of the month in which I wasn't participating in a contest, fixing a mistake made in the previous day's contest, or preparing the problem set for the next day. Saturday the 31st is approaching, and with it the first round of the TopCoder Open and the last cool programming contest of this month.

Problem A link
Seems we are back to high school physics once again.

The most important thing here is that bus #i cannot arrive earlier than bus #(i-1). Suppose bus #i's max speed allowed it to reach the place faster; this bus would have no choice other than to catch up with bus #(i-1) and then travel at the same speed. So, the time of bus #i will be equal to the time of bus #(i-1).

In fact, this happens with any bus that departs earlier than bus #i. If any bus before #i reaches the end after bus #i's optimal time, bus #i has no choice but to slow down.

It all boils down to calculating the optimal time t needed for each bus i. Then the result is max(t, maxTime), where maxTime is the maximum time among all the earlier buses.

And to calculate the time, use formulas based on constant acceleration. To be honest, I just opened this page:

int n, a, d;
int t0[100000];
int v[100000];

void solve()
{
    cout.setf(ios::fixed, ios::floatfield);
    double minTime = 0;
    for (int i=0; i<n; i++) {
        double t = t0[i]; // time bus i needs to reach the station ignoring i-1
        // t1: min time to reach speed v[i]
        double t1 = v[i] / (double)a;
        // displacement after t1 seconds
        double s = a * 0.5 * t1 * t1;
        if (s > d) {
            // won't reach max vel
            t += sqrt( 2*d / (double)a );
        } else {
            // will reach max vel.
            double x = d - s;
            // t2: time it takes the bus to move x distance units at v[i] speed
            double t2 = x / (double)v[i];
            t += t1 + t2;
        }
        t = max(minTime, t);
        minTime = t;
        cout << t << endl;
    }
}


Problem B link
I really should have solved this problem faster, but I spent some time debugging things and getting confused by my correct result for example 1.

Let's try a dp solution with states [bags][won][prizes][num]. bags is the number of available bag spaces you have, won is the number of won tours, prizes the number of tours that gave you a prize, and num the number of tours you have completed. After each tour, there is a probability of winning (which updates bags or prizes, and updates won) or not (nothing gets updated).

The base case is when num = all tours: there are no tours left, and now you have to verify that there is enough empty space for all the prizes you got and that you won at least l times.

The overall result will be [K][0][0][0].
This is great, except that it is a little slow. The key optimization is to notice that you do not really need to remember bags AND prizes; you can just remember (bags - prizes). If this difference is non-negative at the end of all tours, then you had enough space.

int n, l, k;
double p[200];
int a[200];

double dp[2][401][201];

double solve()
{
    // Base case: after the last tour, we win if the space difference is
    // non-negative and we won at least l tours.
    for (int spaces = -200; spaces <= 200; spaces++) {
        for (int won = 0; won <= 200; won++) {
            dp[n&1][spaces+200][won] = ( ((spaces >= 0) && (won >= l)) ? 1.0 : 0.0 );
        }
    }
    for (int t=n-1; t>=0; t--) {
        int tt = (t&1);
        int nt = (t+1)&1;
        for (int spaces = -200; spaces <= 200; spaces++) {
            for (int won = 0; won <= 200; won++) {
                double & res = dp[tt][spaces+200][won];
                res = 0;
                double prob = p[t];
                // win
                if ( (spaces + a[t] >= -200) && (won+1 <= 200) ) {
                    res += prob * dp[nt][ min(200, spaces + a[t]) + 200 ][won+1];
                }
                // lose
                res += (1-prob) * dp[nt][spaces + 200][won];
            }
        }
    }
    return dp[0][k+200][0];
}

As you can see from my code, I did some rather unnecessary optimizations, like doing the dp iteratively to save memory. I don't think this was needed.

Problem C link

Let us think of a and b. Whenever you subtract a power of a from b, the new b' will still be equal to b modulo a. So, in fact, that relationship won't change.

Here's a sort of idea. For a <= b, take b%a and find the result for (b%a, a). Now, if (b%a, a) is a losing state, then you should always do b = b%a. But if (b%a, a) is a winning state, then you have to do something to make sure that you are the one who reaches it.

In fact, it is like setting (c = b / a), and now you have a stack of c elements and the allowed amounts are 1, a, a^2, ... . A player can remove any allowed number of elements. The player who receives 0 elements in the stack wins.

I am sure that this can be reduced to nim. I have no idea how. I tried something funny during the match, mixing up some xors and hoping I would get the correct reduction. But it got hacked. I expected that to happen, but it was still fun.

Update: the editorial is up (quite quickly this time). I feel lame about not solving problem C. It was easy to notice the property: an odd number raised to any power stays odd. I was so focused on trying to adapt nim to it that I didn't try any simpler stuff.

Sunday, March 25, 2012

5 reasons editorial votes in Topcoder are useless

Know what? I am quite tired of this nonsense. I hate that after working hard for 2 days to write an editorial, I am somehow supposed to care about anonymous votes that are completely uninformative.

5. Not sure people are actually voting about the editorial itself.
There have been plenty of times when it seemed like the votes indicated not people's opinion of the editorial, but their opinions about other things.

Let me refer to the first editorial I wrote in the "new" system: editorial thread. Look at those votes: +53 net votes! That is the best outcome I ever got on an editorial feedback post. But really, was that editorial so great? I doubt it. I can easily tell that I have improved over time, and that editorial has some problems I have since learned to avoid. I am quite sure that a humongous number of those pluses were really about the idea of finally dropping the wiki 100% contribution-based system, which resulted in hundreds of editorials that lack explanations for most of their problems (SRM 453 happens to be the very second SRM to use the new method).

Now, take a look at TCHS 2010 round 3: editorial thread. The first and only time I got a negative net result. I am pretty sure that people were voting against the problem set rather than the editorial. It was a very messed up problem set that was also evil and kind of boring, and the Folding Maze problem makes your brain melt. Yet it was the match to decide who advanced to the championship round. So, it is easy to see why many people disliked the problem set. Then my editorial, which I am sure was slightly better than SRM 453's, received much worse feedback. Coincidence?

Yet I use words like worse or better to compare voting results, when it turns out I have no idea what a better outcome even is:

4. What is better anyway?
Compare an editorial that got +20/-7 vs. an editorial that had +12/0. In one case, the votes are unanimous, yet in the other there are actually more people that like the editorial. How are we supposed to interpret this?

But this is part of a greater issue:

3. As feedback, it is useless
I really have no idea what I do right when I get seemingly positive outcomes in this vote. I really have no idea what I do wrong when I get more negative outcomes.

A single post saying why the editorial is good and what can be improved - or at least what it is that you hate about the editorial - is the sort of thing that really allows you to improve in the next editorials. It is also very important to know who is making such suggestions.

That's the irony: the completely anonymous and quick votes actually reduce people's reasons to make such posts. Not content with being useless feedback, they reduce the chances you will get useful feedback.

So, maybe it is true that you can't find out how to improve an editorial by looking at the votes. They would at least be able to quantify the number of people that liked the editorial. If it weren't for the fact that:

2. Nobody actually votes.
Quick exercise: There are 2500 registered coders for an SRM. Of them, 2300-ish actually participate. Some smaller percentage actually read the forums. Some other small percentage reads the editorials. And what's the maximum number of votes we ever had for an editorial? I would say it is unlikely to be over 100. Most frequently, only around 30 people seem to vote. 30 out of 1500? Doesn't that sound like quite a minority?

I am not making the discovery of the year. misof pointed this out 2.25 years ago.

* I don't think that the number of +s in mystic_tc's "Editorial" threads is a good measure of how good the editorial was. For rounds where the problems are such that more discussions occur, more people will read the post and consider voting. Additionally, only people who read forums will vote, those who go straight to the editorial page won't, and this makes the vote biased.

But maybe we shouldn't worry that much, because:

1. They have no effect whatsoever
I don't think any editorial writer has ever lost access to writing editorials. In fact, most of the time the admins are not even in a position to pick an editorial writer, and if there is more than one candidate, the rules used to pick give zero consideration to these votes: the problem setter has priority over the tester, and the tester over unrelated editorial writers. Otherwise, the priority goes to the person who has not written an editorial for the longest time.

Codeforces VK Cup round 2: (ouch)

This was a very discouraging round for me. I saw a lot of problems out of my league, but for each of them, there are tons of people that solved it. That said, I have suspicions about a couple of them.

Problem A (link)

A quick dp problem to boot, I guess. There are a couple of complications. The length 5000 makes it tight to stay under the 256 MB limit. Although it may be straightforward to see that this problem just needs dp, you need to know how to handle a couple of details: one is the memory limit (at most you can afford a [2][5000][5000] int array); the other is getting the counting details right...

Eventually, after trying some stuff that made me redo the problem, I decided it was best to move the dp inside another loop. Let us iterate over the position of the last character of the substring. Then solve the subproblem: How many equal (substring of s, subsequence of t) pairs are there such that the substring's last character is at the given position?

Then we should avoid counting empty strings. Simply add a variable (nonempty) to the state, indicating whether we want to count empty strings or not. (We would like to count empty strings in a subproblem if we have already found a pair of characters that match).

Thus we have the recurrence f(nonempty, a, b), which should return the number of pairs such that:
- The last character of the substring is at position (a-1).
- The last character of the subsequence is at a position less than b.
- If nonempty is 1, also count the empty substring/subsequence pair.

The logic works as follows. Every character of the substring must be matched, so s[a-1] must be matched to a character from t. From t, we can remove any number of characters. So, if (s[a-1]==t[b-1]), we have the option to match them. If we do, then we have to add f(1, a-1, b-1) to the result, because that is the number of (substring, subsequence) pairs that we can prepend to s[a-1] and t[b-1] to get a match.

We can also move to f(nonempty, a, b-1); in other words, just ignore the character at position t[b-1] until we find a match for s[a-1].

string s, t;

const int MOD = 1000000007;
int dp[2][5001][5001];

#define SUBMOD(x) if (x >= MOD) { x -= MOD; }

int rec(int nonempty, int a, int b)
{
    int & res = dp[nonempty][a][b];
    if (res == -1) {
        if ( (a == 0) || (b == 0) ) {
            res = nonempty;
        } else {
            res = 0;
            if (s[a-1] == t[b-1]) {
                res += rec(1, a-1, b-1);
                SUBMOD(res);
            }
            res += rec(nonempty, a, b-1);
            SUBMOD(res);
        }
        //cout << nonempty<<", "<<a<<", "<<b<<" = "<<res<<endl;
    }
    return res;
}

int solve()
{
    memset(dp, -1, sizeof(dp));
    int res = 0;
    for (int i=1; i<=(int)s.length(); i++) {
        res += rec(0, i, t.length() );
        SUBMOD(res);
    }
    return res;
}

Problem B (link)
I think the high level idea of my approach is correct, but the details needed to be polished. I submitted almost knowing that it was unlikely I would pass. When you want to minimize the maximum of some quantity, it is almost always an indication to binary search for that value. There is an issue in this case: the times are real values, so the binary search needs many iterations to be precise.

I cut down some iterations by noticing that the height h is irrelevant, as we do not really need to output the time. Its overall effect on the time is always a factor of h; since that factor is constant, we can just switch to minimizing the worst (i / v_j). This is helpful because it reduces the maximum time from 10^9 to 10^5.

For a given time, is it possible to solve the problem within it? Once the time is fixed, you can find, for each lemming, the maximum position it can be assigned to. Then, for each position from highest to lowest, you can just assign the heaviest available lemming to that position (making sure not to pick a lemming heavier than the one in the position above). This greedy strategy will always work correctly. The problem is implementing it so that it is quick; whether it is the constant factor or the logarithmic factor from operations like finding the maximum weight, it seems mine is too slow.

Problem C (link)
I didn't do much, except note that d = (l/(v1+v2))*v1 is the length of the interval of positions in which a chocolate can be picked. After that, you need to do some data structure magic to get all the probabilities in O(n*log(n)) or something like that.

Problem D (link)
I tried many things without much success. At first I thought of simple ideas, like always distributing each prime factor with an exponent >= 3 evenly between the side lengths and then deciding based on exponent%3. But examples like V=8*7*7 break such ideas.

I failed B, and my A submission was very slow. I hope I at least earn some rating.

Friday, March 23, 2012

Codeforces round #113 div2 (unofficial)

I am tired of these unofficial events in Codeforces. All normal rounds this month seem to be div2-only. Then we have all the VK Cup stuff, which downright banned us old people. I think div2-only matches could be rated for div1 coders; the rating system should permit it.

Today was strange in that Codeforces was testing "dynamic problem scores". This just means that at the start of the contest, you no longer have any idea which problem is the easiest. The problem scores get updated according to the number of people that solve each problem, so at the end you get more points for solving problems fewer people solved. I am not sure yet if this is an amazing idea or another hassle that turns the game into "find the hidden easy problem", like ACM.

Problem A link
I had a delay at the start of the contest. Lack of rating makes me take these things too lightly. Anyway, not much to do here. The constraints are lower than you'll ever need. You can just sort the given array of problems/penalties according to the statement and calculate the answer manually.

Problem B link
This problem made me realize that problems are not sorted by difficulty in this contest. Oh well. It is your generic geometry problem. I decided to skip it.

A solution idea that is correct but has many implementation hassles: Pick an arbitrary point from B. If it is outside A's polygon, then return NO. Else, just walk along the segments of B starting at that point. If any segment of B intersects a segment of A, then there is no way. You might be able to do a line sweep algorithm to do this quickly enough. A variation, since they are polygons, is to, after picking the point in B, find the angles between the points of A and the point you picked. Then do the same for each segment of B. When picking a segment of B and looking for intersections, you only need the segments of A that match its angle range...

Problem E link
The first thing I did after noticing that B was hard was open E. And surprise! It is a rather standard problem; in fact, perhaps way too standard. The shape is a mere dense graph of 4 nodes. Counting the number of paths of a given length between all pairs (i,j) is a standard problem that can be solved in O(n^3 * log(length)) time - simply raise the adjacency matrix (the result for length=1) to the length-th power. Then the result is A[0][0] (the graph is dense, so A[1][1], A[2][2] and A[3][3] hold the same result).

Problem C link
It seemed many were solving this problem, so I picked it.

Note that the number of elements to add is O(n): if the median is currently at position K of the sorted array, you can add another K elements and be done. So we can just iterate over the number of added elements.

Note that the wanted median may not necessarily be in the array. You can handle this special case by just adding it and increasing the result by 1.

Then, for each new array size nn, the median will be at position (nn+1)/2 of the sorted array. Let's say the original array has "less" elements smaller than the wanted median and "eq" elements equal to it (because we handled the special case, eq is at least 1). The wanted median will be at least at position (less+1). The upper bound is more interesting: if we don't add any elements, it will be at most at position (less+eq). But since we will add (available = nn - n) elements, we can choose some of them to be smaller than the wanted median. Thus the maximum position of the wanted median is (less+eq+available). Finally, if (nn+1)/2 is between (less+1) and (less+eq+available), then it is possible to have that median when the length of the new array is nn.

int n;
int a[100000];
int wantedMedian;
int solve()
{
    int less = 0; // elements < wantedMedian in the original array
    int eq = 0;   // elements == wantedMedian in the original array
    for (int i=0; i<n; i++) {
        less += (a[i] < wantedMedian);
        eq   += (a[i] == wantedMedian);
    }
    int add = 0;
    // If the array does not contain the wanted median, we shall always add it:
    if (eq == 0) {
        add = 1;
        eq = 1;
        n++;
    }
    int nn = n - 1;
    bool worked;
    do {
        nn++;
        // can the final array have nn elements?
        // the median will be located at position:
        int p = (nn + 1) / 2;
        int available = (nn - n);
        // the wanted median is at least at position less+1.
        // the wanted median is at most at position less+eq+available.
        worked = ( less+1 <= p && p <= less+eq+available );
    } while (! worked);
    return (nn - n) + add;
}

I then went to have lunch while trying to solve problem D. I came back and tried some code with no success. I never felt like trying to code B. I forgot to mention there's likely an easier solution for it, but of course, as always with geometric functions, I was sort of too bored to play with it.