Kotlin Heroes Episode 4 was the main event of the May 25 - May 31 week (problems, results, top 5 on the left, analysis). Only tourist, Golobanov399 and eatmore have managed to solve the hardest problem I, but tourist was done 45 minutes before his competitors and earned a very convincing victory. Well done!
In my previous summary, I have mentioned an AtCoder problem: there are n<=200000 cells arranged in a circle. The i-th cell has two numbers associated with it, ai and bi (0<=ai<=10^12, 0<=bi<=100). We put a token into one of the cells uniformly at random, and our initial score is zero. Then, we repeatedly face the following choice: we either add ai to our score and end the process (where i is the number of the cell the token is currently in), or we subtract bi from our score, move the token uniformly at random into one of the two adjacent cells, and continue the process. What is the maximum expected score we can get if we play optimally?
Since the only state of the process is the cell the token is currently in, and the only decision we make at every moment is whether to stop or to continue, a strategy is completely described by a set S of cells in which we stop (by taking ai). How do we compute the expected score if we choose a particular set S?
The parts of the circle between consecutive elements of S can be treated completely independently, and we have the following subproblem there: given a segment of length k, we start in a random position and keep moving randomly either to the left or to the right until we leave the segment (because of symmetry we will exit to the left or to the right with probability 50%). What is the expected sum of bi's of the cells we go through, where one bi is added multiple times if its cell is visited multiple times?
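As a quick sanity check of that 50% claim, here is a short sketch of my own. It relies on the standard gambler's-ruin fact (not derived in the post) that a walk starting at cell j of a length-k segment exits to the left with probability (k-j)/(k+1); the code verifies that this closed form satisfies the averaging recurrence and that the uniform-start exit probability is exactly 1/2:

```python
from fractions import Fraction

# p[j] = probability of exiting to the left when starting at cell j of a
# segment of length k (gambler's ruin): p[j] = (p[j-1] + p[j+1]) / 2 with
# boundary values p[-1] = 1 (already exited left) and p[k] = 0.
# The closed form is p[j] = (k - j) / (k + 1).
for k in range(1, 10):
    p = [Fraction(k - j, k + 1) for j in range(k)]
    full = [Fraction(1)] + p + [Fraction(0)]          # attach the boundaries
    # the closed form satisfies the recurrence at every interior cell...
    assert all(full[j] == (full[j - 1] + full[j + 1]) / 2
               for j in range(1, k + 1))
    # ...and averaged over a uniform start, exit-left has probability 1/2
    assert sum(p) / k == Fraction(1, 2)
print("uniform-start exit-left probability is 1/2 for all k checked")
```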
Because of linearity of expectation, this sum is equal to the sum of bi times the expected number of times the i-th cell is visited. We can denote that expected number of times as f(k, i). Since this function has just two integer parameters, we can implement a simulation to check if there is a pattern. I did just that in the beginning of the contest, and the pattern was very clear: f(k, i)=(i+1)*(k-i). There is probably an easy way to deduce that or prove it by induction, but for me the experimental proof was good enough :)
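The pattern can also be confirmed exactly rather than by random simulation. The sketch below is my own interpretation: it assumes f(k, i) sums the expected visit counts over the k equally likely starting cells of the segment (with the global 1/n starting probability factored out elsewhere), and solves the visit-count equations over exact rationals:

```python
from fractions import Fraction

def expected_visits(k, i):
    """Expected number of visits to cell i of a length-k segment from each
    starting cell j, before the walk exits: solves the linear system
    v[j] = [j == i] + (v[j-1] + v[j+1]) / 2 (v outside the segment is 0)
    by Gauss-Jordan elimination over exact rationals."""
    A = [[Fraction(0)] * k for _ in range(k)]
    rhs = [Fraction(1) if j == i else Fraction(0) for j in range(k)]
    for j in range(k):
        A[j][j] = Fraction(1)
        if j > 0:
            A[j][j - 1] = Fraction(-1, 2)
        if j < k - 1:
            A[j][j + 1] = Fraction(-1, 2)
    for col in range(k):                 # no pivoting needed: the pivots of
        inv = 1 / A[col][col]            # this tridiagonal system stay positive
        A[col] = [x * inv for x in A[col]]
        rhs[col] *= inv
        for row in range(k):
            if row != col and A[row][col] != 0:
                factor = A[row][col]
                A[row] = [x - factor * y for x, y in zip(A[row], A[col])]
                rhs[row] -= factor * rhs[col]
    return rhs                           # rhs[j] = expected visits from start j

def f(k, i):
    # Sum over the k equally likely starting cells of the segment.
    return sum(expected_visits(k, i))

for k in range(1, 7):
    for i in range(k):
        assert f(k, i) == (i + 1) * (k - i)
print("f(k, i) = (i+1)*(k-i) holds for all k <= 6")
```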
Now that we've learned how to compute the expected score given the set S, we can turn our attention to finding the optimal set S. My intuition told me during the round that greedy should work, and indeed it works because of the following lemma: suppose we have a non-empty set S which is locally optimal: neither adding one cell to it, nor removing one cell from it, leads to a set S' with a better expected score. Then this set S is actually globally optimal, and is the answer to the problem.
Here is the proof: let's denote by g(S, i) the expected score we get if we start from cell i and use the set S as the stopping cells. Let's prove that for any other set S', g(S', i)<=g(S, i), by induction on the number of moves that happen before we stop. If we decide to stop in some cell i, then g(S', i)=ai, and since S is locally optimal g(S, i)>=ai, so g(S', i)<=g(S, i) in this case. If we decide to continue in some cell i, then we get g(S', i)=-bi+(g(S', i-1)+g(S', i+1))/2, which by the induction hypothesis (sorry for being a bit informal here; to make it formal we'd have to introduce a third parameter to g) is <=-bi+(g(S, i-1)+g(S, i+1))/2, which is in turn <=g(S, i) because S is locally optimal.
Therefore all that remains is to find a way to apply local optimizations to the set S in such a way that we converge quickly. We can actually use a very similar argument to guide our way: suppose for some set S' we determine that it's better to remove the cell i from it, in other words it's better to continue if we are in cell i. Then the cell i does not belong to the optimal set S, in other words it is better to continue from the cell i in the optimal solution as well! To see why it's true, let's write down the condition that it's better to remove the cell i from S': ai < -bi + (g(S'', i-1)+g(S'', i+1))/2 (where S''=S'-{i}). Since g(S'', i-1)<=g(S, i-1) and g(S'', i+1)<=g(S, i+1), we get ai < -bi + (g(S, i-1)+g(S, i+1))/2, which means that i will not belong to S since S is locally optimal.
So we can just start with S containing all cells, and keep removing cells while it improves the expected score. In order to avoid traversing the list of cells n times and therefore getting O(n^2) running time, we can use the standard idea of keeping a set of "pending" cells, which would improve the expected score when removed, and updating the status of the two adjacent non-removed cells when we process the removal of a cell. Another way to do it is to start with a cell that is definitely in S (the one with the highest ai), and keep going to the right and adding cells to S (which will be represented by a stack), repeatedly checking if removing the next-to-last cell in S leads to a better expected score. This second approach is very similar to the convex hull algorithms.
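Here is a small self-contained sketch of the removal-based greedy, checked against brute force over all stopping sets on small random circles. All names are mine: score() implements the segment decomposition above, using f(k, i)=(i+1)*(k-i) for the b-costs and the 50/50 exit symmetry for the a-payoffs, and exact rationals avoid floating-point comparison issues:

```python
from fractions import Fraction
from itertools import combinations
import random

def score(a, b, S):
    """Exact expected score on a circle of n = len(a) cells when we stop
    exactly in the cells of S (a nonempty set of indices)."""
    n = len(a)
    stops = sorted(S)
    total = Fraction(0)
    for idx, p in enumerate(stops):
        total += a[p]                          # a start inside S stops at once
        q = stops[(idx + 1) % len(stops)]      # next stopping cell clockwise
        k = (q - p - 1) % n if len(stops) > 1 else n - 1
        if len(stops) == 1:
            q = p                              # single stop: both exits hit p
        # uniform start in the segment exits left/right with probability 1/2
        total += Fraction(k * (a[p] + a[q]), 2)
        for t in range(k):                     # cells strictly between p and q
            cell = (p + 1 + t) % n
            total -= (t + 1) * (k - t) * b[cell]   # f(k, t) = (t+1)*(k-t)
    return total / n

def greedy(a, b):
    """Start with all cells in S and remove any cell whose removal strictly
    improves the score, until no single removal helps."""
    S = set(range(len(a)))
    improved = True
    while improved:
        improved = False
        for c in sorted(S):
            if len(S) > 1 and score(a, b, S - {c}) > score(a, b, S):
                S.remove(c)
                improved = True
    return S

def brute_force(a, b):
    n = len(a)
    return max(score(a, b, set(c))
               for r in range(1, n + 1)
               for c in combinations(range(n), r))

random.seed(1)
for _ in range(50):
    n = random.randint(2, 7)
    a = [random.randint(0, 20) for _ in range(n)]
    b = [random.randint(0, 5) for _ in range(n)]
    assert score(a, b, greedy(a, b)) == brute_force(a, b)
print("greedy matches brute force on all random circles tested")
```

This sketch recomputes score() from scratch on each removal, so it is quadratic at best; the pending-set and stack variants described above are what make it fast enough for n up to 200000.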
Thanks for reading, and check back for more!