The decision

Before starting this SRM, I had made a decision. It was based on many reasons, some of which I can list:

* I have stayed in the 1800->2050 Topcoder rating spectrum for years. It has become repetitive, and one cannot help but feel stuck in this status quo. Matches in which you win 100 points, followed by matches in which you lose points; eventually it all evens out and you notice you are still in the same area as three years ago...

* mishastassen moved to opening the hard problem first. At first it destroyed his rating and dropped him into gray territory. But when he came back, he was able to solve some of the hard problems.

* TopCoder problem sets have recently become very annoying: "Easy" problems that eat up the whole match, leaving me little to no time to solve the Medium problem and zero time to even open the Hard problem. I am tired of every match being about "Can you solve the 250 in time?" and nothing else. Especially because the recent Easy problems are mostly math-oriented and turn out to be boring once you understand them, all while you are missing the cool dp/graph theory fun of the Medium pointer...

So, I have made a decision: starting with SRM 493, I will open the Hard problem first, the Medium problem next and, finally, the Easy problem.

I know that this will kill my rating, but that is not the point; the point is to actually improve and get better. I think that if I deal with harder problems more often, I will start becoming able to solve them during the match. Also, if I stop relying on having the whole 75 minutes available for the Easy problem, I will stop being so slow at solving it.

Anyway, I don't want the situation to reverse and end up only dealing with the Hard problem. I want to open all three problems during a match and give each of them appropriate time. Since Topcoder problem scores are supposedly based on the time it takes to solve them, I can assign time to the problems using an equation: x + 2x + 4x = 75, where x is the amount of time to spend on the 250, 2x the time to spend on the 500 and 4x the time to spend on the 1000. It is not an exact division, but I decided to assign 11 minutes to the "Easy" problem, 22 minutes to the "Medium" problem and 42 minutes to the Hard. So, the "strategy" is:

* Open the Hard problem when the match begins.

* Open the Medium problem when there are 33 minutes of the coding phase left (or once the Hard problem has been solved, whichever happens first).

* Open the Easy problem when there are 11 minutes of the coding phase left (or once the Medium problem has been solved, whichever happens first).
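As a sanity check on the arithmetic, here is a minimal sketch of the 1:2:4 split (the function names are my own; solving x + 2x + 4x = T gives x = T/7):

```cpp
#include <cassert>

// Splitting a coding phase of T minutes in a 1:2:4 ratio:
// x + 2x + 4x = T, so x = T / 7.
// For T = 75 this gives about 10.7 / 21.4 / 42.9 minutes,
// rounded above to 11 / 22 / 42.
double easySlice(double totalMinutes)   { return totalMinutes / 7.0; }
double mediumSlice(double totalMinutes) { return 2.0 * totalMinutes / 7.0; }
double hardSlice(double totalMinutes)   { return 4.0 * totalMinutes / 7.0; }
```

The three slices always add back up to the full coding phase, whatever its length.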

The SRM

Before the coding phase, I could take a look at the scores: 300, 450, 1000. My conclusion was that the 1000 was going to be unsolvable, or solvable only by Petr, but the 450 was probably going to be simpler than a usual 500. Nevertheless, I would only have 22 minutes to solve the 450, which seemed unlikely, because I am not that fast. And a 300 would make the dream of solving it in less than 11 minutes impossible. While trying to solve the 1000 pointer, I noticed that only one coder took less than 11 minutes to solve the 300... Everything was a bad omen. This SRM was not the right one for this strategy.

1000 pointer - AmoebaDivOne

Hey, maybe I would get lucky "and the 1000 is actually solvable", I thought to myself. Well, it was an interesting problem. I eventually thought of an O(N⁴) dynamic programming solution. The idea is that, if the previous row has the cells between a and b taken by an amoeba, then the next row can have cells between two other variables, na and nb. na and nb can change but, in order to keep the amoeba convex, the lower limit cannot decrease after it has increased at least once, and the upper limit cannot increase after it has decreased. So, you could use a recursion of sorts.

Well, while I was coding that solution to find out if it actually works, I noticed that the problem was basically forcing a special kind of input in order to have a constraint of 100x100 instead of the usual 50x50. That was suspicious. Once I finished coding my initial idea, it didn't work correctly on the examples that had "matter" cells, and when I gave it a 100x100 case, it timed out. So, the idea was lacking something. But by then, time was up, so I moved to the medium problem.

450 pointer - AmoebaCode

A simple problem statement: out of a number with more than K digits, you need to replace each 0 digit with a digit between 1 and K, such that the minimum distance between two equal digits is as large as possible. Very classic.

I noticed that K is at most 7, so the low value of K was probably significant. For some reason, the only dp variation I could think of was to keep a list of O(K) integers that store the position of the last digit equal to i placed (for each i between 1 and K). That would definitely time out.

The strangest idea I had was to try a binary search, if we had a way to check whether the result can be at least X. The only idea for this check I had was something related to graph coloring: if we want the minimum distance to be X, then each cell has X-1 cells to the right and to the left that must hold a different number (color). We can call the cells vertices, and the links that mean two cells must have different colors, edges. But I don't think there was an easy way to verify the coloring. Time eventually ran out.
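To make the objective concrete, here is a tiny brute force for the statement as described above (this is not the intended dp; `bruteForce` and the convention that a string with no equal pair scores its own length are my own assumptions). It tries every replacement of the 0 digits and reports the best achievable minimum distance, so it only works on very small inputs, but it is handy for stress-testing a smarter solution:

```cpp
#include <algorithm>
#include <string>
#include <vector>
using namespace std;

// Try every way to replace the '0' digits of s with a digit in 1..K.
// For each assignment, take the minimum distance between two equal
// digits (or the string length if no two digits are equal), and
// return the maximum of that over all assignments.
int bruteForce(string s, int K) {
    vector<int> zeros;                       // positions of the '0' digits
    for (int i = 0; i < (int)s.size(); i++)
        if (s[i] == '0') zeros.push_back(i);
    long long total = 1;                     // K^(number of zeros) assignments
    for (size_t i = 0; i < zeros.size(); i++) total *= K;
    int best = 0;
    for (long long mask = 0; mask < total; mask++) {
        string t = s;
        long long m = mask;                  // decode mask in base K
        for (int p : zeros) { t[p] = (char)('1' + m % K); m /= K; }
        int mindist = (int)t.size();         // "no equal pair" = length
        for (int i = 0; i < (int)t.size(); i++)
            for (int j = i + 1; j < (int)t.size(); j++)
                if (t[i] == t[j]) mindist = min(mindist, j - i);
        best = max(best, mindist);
    }
    return best;
}
```

For instance, for "1010" with K = 2 the best assignment is 1212, where both pairs of equal digits are 2 apart, so `bruteForce("1010", 2)` returns 2.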

Word on the streets is that the intended solution for this problem was a form of dynamic programming. That's embarrassing; I usually see the dp solutions rather quickly, especially for a 450. hmnn...

Div1 300 - StonesGame

Ok... 1000000 stones, that seemed complicated. After a while, I started to experiment with the idea that if Player 1 cannot win in a single move, he won't win at all, and that the same goes for Player 2. If neither can win in his first move, it is a tie.

After the match, I thought of ways to prove the assertion. Imagine that Player 1 can only win after his first move; then Player 2 can just revert that first move and make him unable to win. Something similar can be argued about Player 2.

Anyway, by the time I decided to try implementing that idea, I had around 2 minutes left, and I still hadn't figured out how moves work in this problem. It was only around intermission time that I understood that each move consists of moving the white stone to position K-1, K-3, K-5, ..., K-1-2*n.

The outcome

A loss of 97 rating points! It was not too bad; a lot of people had zero scores today. Well, the objective of this experiment is not short-term rating but long-term improvement. It is too early to tell whether it works.

## 2 comments:

Why don't you try a "medium-easy" strategy instead? You could use the first 50 minutes to solve the medium and the next 25 minutes for the easy. And in case you solve both the easy and the medium quickly, you can proceed to the hard.

Advantages:

1) Solving the hard is usually tough even for targets; there's little chance you'll solve it at this point.

2) Solving the medium in 50 minutes is pretty approachable, while solving it in 22 minutes is a very challenging task.

3) You'll still have some pressure for the easy (25 minutes is not that much).

4) Consistently solving easy+medium is more than enough for red rating.

Disadvantages:

Well, I don't see any.

If at some point you see that you have started solving medium+easy very quickly, you can then try your current strategy.

Sorry Ivan, I didn't know you made this comment until today.

I had been doing medium-easy for about two years before switching to 250-500-1000 some time ago. It worked well.

But I am going to try 1000-500-250 for a while; it will force me to at least try the 1000. And I think that failing system tests on a 1000 (like what happened recently) is an improvement over just assuming that you are not supposed to solve the 1000.
