Last week featured an Open Cup round for the fourth week in a row: Open Cup 2018-19 Grand Prix of America allowed teams from all over the world to participate in NAIPC 2019 (problems, results, top 5 on the left, NAIPC results, analysis). Just 3.5 hours were enough for team Past Glory to wrap up the problemset, a good 40 minutes before any other team could do the same. Congratulations on the win!
In my previous summary, I have mentioned two (in some sense three :)) problems. The first group was from the AtCoder World Tour Finals: consider an infinite 2D grid, where each cell can be either black or white. Initially, exactly one cell was black, and then we repeatedly applied the following operation: take some integers x and y, and invert the color of three cells: (x, y), (x+1, y) and (x, y+1). You are given the set of black cells in the final state of the grid. There are at most 10^5 black cells in C1 and at most 10^4 in C2, and each black cell has coordinates not exceeding 10^17 by absolute value. Your goal is to find the coordinates of the only cell that was black originally. In problem C1 you know that its y-coordinate was 0, and in C2 there are no further constraints.
A typical idea in this type of problem is to come up with an invariant that is not changed by the operation and that can be efficiently computed for the source and target positions. A typical invariant that plays well with inversions is boolean or bitwise xor. Let's say that a white cell is a 0 and a black cell is a 1, and let's also pick some set of cells S; then our invariant will be their xor, in other words the parity of the number of black cells in S.
Not every set S works, though: we must make sure that for each invertible triple (x, y), (x+1, y) and (x, y+1) either 0 or 2 of its cells belong to S, so that the xor does not change when we invert the triple. Suppose that for some row y=y0 the set S contains exactly one cell (x0,y0). The triple (x0,y0), (x0+1,y0), (x0,y0+1) must have 0 or 2 cells in S, and since (x0,y0) is in S but (x0+1,y0) is not, (x0,y0+1) must be in S. Applying a similar argument, we find that in the row y=y0+1 exactly two cells must be in S: (x0,y0+1) and (x0-1,y0+1). Then we can compute which cells must be in S in the row y=y0+2, and so on.
We can realize by looking at the resulting pattern, or by understanding what the process really is, that in general a cell (x0-k,y0+n) is in S if and only if C(n,k) is odd. And that, in turn, is true if and only if n&k=k, where & denotes bitwise and. There are still multiple ways to extend the set S to rows with y<y0, so to avoid thinking about that we will always pick a very small y0 that is below all interesting points.
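This is easy to sanity-check with a few lines of code. Below is a minimal sketch in Python (the helper name in_S and the brute-force ranges are my own) verifying that every operation triple at or above the row y=y0 contains an even number of cells of S, which is exactly the property we need for the invariant; triples touching rows below y0 are where the extension mentioned above would be required.

    # Membership in S anchored at (x0, y0): cell (x0 - k, y0 + n) is in S iff
    # C(n, k) is odd, which is equivalent to n & k == k (for 0 <= k <= n).
    def in_S(x, y, x0, y0):
        n, k = y - y0, x0 - x
        return n >= 0 and k >= 0 and (n & k) == k

    # Check that every triple (x, y), (x+1, y), (x, y+1) with y >= y0 contains
    # an even number of cells of S, so inverting it preserves the parity.
    x0, y0 = 0, 0
    for y in range(0, 40):
        for x in range(-40, 5):
            cnt = in_S(x, y, x0, y0) + in_S(x + 1, y, x0, y0) + in_S(x, y + 1, x0, y0)
            assert cnt % 2 == 0, (x, y)

This is just Pascal's rule mod 2: C(n,k)+C(n,k-1)+C(n+1,k) is always even.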
Now it is very easy to compute our invariant for the input set of cells: we need to compute the parity of the number of cells (x,y) in the set such that (y-y0)&(x0-x)=x0-x. But we know that the operations do not change the invariant, and that initially we had only one cell (xA,yA). This means that we have an oracle that can tell us whether (yA-y0)&(x0-xA)=x0-xA for any two numbers x0 and y0 such that y0<=yA.
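Concretely, the oracle might be computed along these lines (a sketch; simple_oracle, the black_cells list of (x, y) pairs, and the explicit k>=0 check are my own naming and bookkeeping):

    def simple_oracle(black_cells, x0, y0):
        """Parity of the number of black cells inside S anchored at (x0, y0).
        Assumes y0 is below the y-coordinates of all black cells and of the
        original cell, so that the extension of S below row y0 does not matter."""
        parity = 0
        for x, y in black_cells:
            n, k = y - y0, x0 - x
            if n >= 0 and k >= 0 and (n & k) == k:
                parity ^= 1
        return parity  # equals 1 iff (yA - y0) & (x0 - xA) == x0 - xA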
In problem C1, we know that yA=0, so we can just pick y0 in such a way that -y0 has all bits set except one, and then the oracle will effectively tell us the corresponding bit of x0-xA, so we can determine xA in a logarithmic number of queries.
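Here is how that reconstruction could look for C1, as a hedged sketch building on simple_oracle above (the bit widths are my own constants, chosen for coordinates up to 10^17 < 2^57):

    def solve_c1(black_cells):
        """Recover xA for problem C1 (where yA = 0)."""
        BITS = 59                      # x0 - xA fits below 2^59 with the x0 chosen below
        ALL = (1 << (BITS + 2)) - 1    # wide enough mask of all ones
        x0 = 1 << 58                   # larger than any |xA|, so x0 - xA > 0
        diff = 0                       # will become x0 - xA
        for b in range(BITS):
            y0 = -(ALL ^ (1 << b))     # yA - y0 = -y0 has all bits set except bit b
            # the oracle returns 1 exactly when bit b of x0 - xA is 0
            if simple_oracle(black_cells, x0, y0) == 0:
                diff |= 1 << b
        return x0 - diff               # xA

Note that this huge negative y0 is automatically below all black cells, so the assumption behind the oracle holds.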
In problem C2, things are not so simple. However, suppose we have found some x0 and y0 such that (yA-y0)&(x0-xA)=x0-xA, in other words we made our oracle return 1. Now we can go from the highest bit to the lowest bit, and by calling our oracle 3 additional times to check what happens if we add 2^k to x0 and/or subtract 2^k from y0, we can determine the k-th bits of yA-y0 and x0-xA; we then clear those bits (by adjusting x0 and y0 accordingly) and proceed to the lower bits, eventually recovering yA and xA.
The tricky part lies in the initial step of making the oracle return 1 for something: the probability of n&k=k for random n and k is roughly 0.75^number_of_bits, which is way too low to just stumble upon it. This is how far I got during the contest, so the remaining magic is closely based on the official editorial and my conversations with Makoto.
Instead of running the oracle for the set S with just one cell (x0,y0) in the row y=y0, we will run it with the set S which has all cells (x0,y0) in the row y=y0 with x0 mod 3=u and l<=x0<=r, where y0, u, l and r are the parameters of the oracle. This requires counting, for each black cell in the input, the number of x0 satisfying the above criteria and also the constraint (y-y0)&(x0-x)=x0-x, which can be done by a relatively standard dynamic programming over the bitwise representation of x0.
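One possible shape of that dynamic programming, as a sketch (the reduction to counting submasks k=x0-x per black cell, and the names count_for_cell and complex_oracle, are my own):

    from functools import lru_cache

    def count_for_cell(n, lo, hi, x, u, bits=61):
        """Number of x0 = x + k with k a submask of n, lo <= k <= hi and x0 mod 3 == u.
        Here n = y - y0 for one black cell (x, y); only the parity of this count is used.
        bits should exceed the bit length of hi."""
        if hi < lo:
            return 0

        @lru_cache(maxsize=None)
        def go(b, tight_lo, tight_hi, mod):
            if b < 0:
                return 1 if mod == u else 0
            total = 0
            for d in (0, 1):
                if d and not (n >> b) & 1:
                    continue                      # k must stay a submask of n
                if tight_lo and d < (lo >> b) & 1:
                    continue                      # would drop below lo
                if tight_hi and d > (hi >> b) & 1:
                    continue                      # would exceed hi
                total += go(b - 1,
                            tight_lo and d == (lo >> b) & 1,
                            tight_hi and d == (hi >> b) & 1,
                            (mod + d * pow(2, b, 3)) % 3)
            return total

        return go(bits, True, True, x % 3)

    def complex_oracle(black_cells, y0, u, l, r):
        """Parity of black cells covered by the union of S anchored at all (x0, y0)
        with x0 mod 3 == u and l <= x0 <= r (y0 assumed below all cells)."""
        parity = 0
        for x, y in black_cells:
            parity ^= count_for_cell(y - y0, max(l - x, 0), r - x, x, u) & 1
        return parity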
As we know, this will give us the parity of the number of such x0 that x0 mod 3=u, l<=x0<=r, and (yA-y0)&(x0-xA)=x0-xA. A crucial observation is: no matter what y0 we pick, if l is very small and r is very large, then for at least one u from the set {0, 1, 2} this oracle will return 1! This can be proven by induction on yA: when yA=y0, (yA-y0)&(x0-xA)=x0-xA only when x0=xA, so the oracle will return 1 when u=xA mod 3 and 0 in the other two cases. When yA increases by one, given our C(n,k)-like propagation, every x0 for which the oracle returns 1 contributes one to itself and to x0+1, which means that we go from parities (a, b, c) for the three remainders modulo 3 to parities (a+b, b+c, a+c): from (1, 0, 0) to (1, 0, 1), then to (1, 1, 0), then to (0, 1, 1), then to (1, 0, 1) and so on, and we never get (0, 0, 0).
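The parity bookkeeping is easy to sanity-check by simply iterating the map:

    # Sanity check of the argument above: starting from a single 1 in any of the
    # three residue classes, repeatedly applying (a, b, c) -> (a+b, b+c, a+c) mod 2
    # never produces (0, 0, 0).
    for start in [(1, 0, 0), (0, 1, 0), (0, 0, 1)]:
        state, seen = start, set()
        while state not in seen:
            seen.add(state)
            a, b, c = state
            state = ((a + b) % 2, (b + c) % 2, (a + c) % 2)
        assert (0, 0, 0) not in seen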
This means that in at most three attempts we can make this complex oracle return 1. Now (more magic incoming!) we can do a binary search: if for (y0, u, l, r) the oracle returns 1, then it must return 1 for exactly one of (y0, u, l, m) and (y0, u, m+1, r). This way we can find a single cell (x0,y0) for which the oracle returns 1 in a logarithmic number of requests to the complex oracle, and then switch to using the simple oracle and proceed with reconstructing the answer as described above, completing the solution of C2.
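With complex_oracle from the sketch above, the binary search itself is just a few lines (hedged as before; it assumes the oracle already returned 1 for this particular y0, u and the full range [l, r]):

    def isolate_single_cell(black_cells, y0, u, l, r):
        """Shrink [l, r] to a single x0 with x0 mod 3 == u for which the
        single-cell oracle returns 1; requires complex_oracle(..., l, r) == 1."""
        while l < r:
            m = (l + r) // 2
            # the parity over [l, r] is the xor of the parities over the halves,
            # so exactly one half still has parity 1
            if complex_oracle(black_cells, y0, u, l, m) == 1:
                r = m
            else:
                l = m + 1
        return l  # now simple_oracle(black_cells, l, y0) == 1 and l % 3 == u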
I have also mentioned an Open Cup problem: you have some amount x of money between 0 and 1. You're playing a betting game where in one turn you bet some amount y, and with probability p (p<0.5) your amount of money becomes x+y, while with probability 1-p it becomes x-y. Your bet must not exceed your current amount of money. Your goal is to reach amount 1. So far this setup is somewhat standard, but here comes the twist: your bets must be non-decreasing, in other words at each turn you must bet at least the amount you bet in the previous turn. In case you don't have enough money for that, you lose. What is the probability of winning if you play optimally? More precisely, what is the supremum of the probabilities of winning over all possible strategies? Both x and p are given as fractions with numerator and denominator not exceeding 10^6, and you need to return the answer modulo 998244353 (using modular division for the fraction).
You can find my approach to solving it in this Codeforces comment.
Thanks for reading, and check back for this week's summary!