Tuesday, October 14, 2008

IOI 2002: Frog

This time I'll try to recall my encounter with problem Frog from IOI 2002 (http://olympiads.win.tue.nl/ioi/ioi2002/contest/day1/frog/frog.pdf). It looks interesting to me because I remember a long discussion with teammates after the contest about an efficient algorithm; we spent maybe several hours and came up with something OK but difficult to implement. Of course, none of us thought he had solved it during the contest (though I'm not sure some of us didn't actually get a full score, since we may have overestimated our algorithms' complexities). I was wondering whether this problem would look difficult to me now, 6 years and hundreds of competitions later.

And the thing is: when I look at it now, it is painfully obvious. The authors seem to expect an O(N^2) solution, with N equal to 5000, and give us 64MB of memory (restrictions come from http://olympiads.win.tue.nl/ioi/ioi2002/contest/day1/overview1.pdf). But hey, there's the obvious O(N^3) solution: for every pair of marks (a,b), we check that the path that starts at a and has b as its first hit continues correctly until it leaves the field (and that the landing before a is already outside the field). To avoid counting the same path twice, we order all marks by x-coordinate and then by y-coordinate, and only look at pairs with a less than b.

It's not clear whether this algorithm, implemented with hashtables, would actually fit in the time limit and get a full score; but there's no need to check, as it is very easily transformed into an O(N^2) DP: we compute the same boolean D[a,b] as above (does the path continue correctly beyond b until it leaves the field?) for each pair (a,b) with a less than b, but without requiring anything about what precedes a on that path (we only take that into account when assembling the answer). To compute D[a,b], we find the next mark c on the path a-b-...; D[a,b] is true if c is outside the field, or if the mark c is present and D[b,c] is true. The only issue is finding c in constant time. This can be done with hashtables, but again, there's no need: if we traverse the values of b (with a fixed) in our order (first by x-coordinate, then by y-coordinate), then the corresponding values of c appear in the same order, so a single pointer suffices, requiring only O(N) operations for each a and O(N^2) in total.

In terms of memory, this uses only 25M bools, which could be compressed to about 3M, but there's no need, as we have 64M. And the code should be so clean and simple that I won't even write it here (does anyone want me to?). In other words, I would spend at most 20-30 minutes on such a problem today.

But one might say, wasn't this expected? Don't contestants' knowledge and problem difficulty constantly rise, chasing one another? True. But it amazed me that there's no (I mean, absolutely no) clever algorithm involved, just a simple DP with a simple optimization that reduces O(N) to amortized O(1). I could swear we used to do much more complicated DPs in high school without any trouble. Is the amortization idea so new that it was very difficult 6 years ago? I doubt that's the only reason. Looking back, I can think of at least 2 more:
  1) We may simply have been too nervous to find it at the actual IOI and in the discussions afterwards. But the point is, we didn't feel that way! At least I remember being fairly confident and determined then.
  2) We were reluctant to inject ideas we knew into algorithms we knew, changing them significantly. This is more of a mathematical shortcoming, so both university education and solving problems under a tighter timeframe at ICPC trainings should have eliminated that difficulty.

  Any more ideas?


  1. I love your blog, hope you will continue writing in it

  2. I also still remember this problem. I remember finding it quite hard for a simple DP, although looking back I get the same feeling as you do. I was quite well trained in DP at the time, but I recall struggling to fit everything in memory. I screwed up somewhere, which cost me valuable points, as the code segfaulted on many cases.

    It was also a really tough exam: this was supposed to be the easiest problem, and then you had Utopia, which was mostly mathematics (it didn't require any algorithmic skill; once you got the idea for the greedy, it was pretty straightforward), and Xor, which was very creative but also quite difficult.

  3. My solution to it was DP + hashing. Interesting that the official solution did not mention the amortized O(1) one :) Nice idea :)

  4. Have you ever thought about finding a solution using the Hough transform?

  5. Can you post your code for this problem, please?
    I hope that a formal description of the amortization idea in the form of source code will be clearer for dummies like me. :)

  6. 2den-ned:

    I don't have enough courage to do that now (it looks rather boring to me); probably some of the readers can help you.

  7. This comment has been removed by the author.

  8. Good news! Nobody has to help me with this now; I just got the idea myself. Here is the code for dummies :-)

    #include <vector>
    #include <iostream>
    #include <algorithm>
    using namespace std;

    pair<int, int> points[5000];
    short dp[5000][5000];

    int main() {
     int R, C, N;
     cin >> R >> C >> N;
     for (int i=0; i<N; ++i) {
      int r, c;
      cin >> r >> c;
      points[i] = make_pair(r, c);
     }
     sort(points, points+N);
     int ans = 0;
     for (int i=0; i<N; ++i) {
      for (int j=i+1, k=i-1; j<N; ++j) {
       int dr = points[j].first - points[i].first;
       int dc = points[j].second - points[i].second;
       // The landing just before points[i] on the line ...-i-j.
       int r = points[i].first - dr;
       int c = points[i].second - dc;
       if (r<=0 || c<=0 || c>C) {
        // That landing is outside the field: a path may start at i.
        dp[i][j] = 2;
       } else {
        // Look for a mark at (r,c); as j grows, (r,c) only decreases,
        // so k moves backwards monotonically -- amortized O(1) per j.
        pair<int, int> p(r, c);
        while (k>=0 && points[k]>p) --k;
        if (k>=0 && points[k]==p && dp[k][i]>0)
         dp[i][j] = dp[k][i] + 1;
       }
       if (dp[i][j]>0) {
        // If the landing just after points[j] is outside, the path is valid.
        r = points[j].first + dr;
        c = points[j].second + dc;
        if (r>R || c<=0 || c>C)
         ans = max(ans, (int)dp[i][j]);
       }
      }
     }
     cout << ans << endl;
     return 0;
    }

    Is there any normal way to post a code in comments?

  9. Haritha Gorijavolu (10 May 2013, 11:34)

    Hi, can you throw some light on "Constant optimization problems"? SPOJ seems to have a lot of them. I googled but can't find any material about constant optimization.