The Dec 22 - Dec 29 week wrapped up the competitive 2025 on the major platforms. The 4th Universal Cup. Stage 10: Grand Prix of Wrocław took place on Saturday (problems, results, top 5 on the left, onsite results, analysis). Team USA1 were not the first to finish, but they were still the fastest overall, extending their streak to 8 (!) won rounds in a row. Well done!
Codeforces Good Bye 2025 followed the same day (problems, results, top 5 on the left, analysis). While the round lasted 3 hours, the top three needed just two of those to solve all problems. Similar to the round above, being the first to finish did not mean first place, though: Benq was a bit slower on the medium problems, which allowed jiangly to overtake him. Congratulations!
I enjoyed solving the problems and was making good progress, but nowhere near as fast: problem F by itself took me over an hour of iterating, stress testing and so on.
AtCoder Grand Contest 076 took place on Sunday (problems, results, top 5 on the left, analysis). It turns out that it was indeed possible to qualify for AWTF with a win after a relatively poor season, only it was noimi and not me who achieved that feat. Well done!
Quite unusually for an AGC, I was able to solve A very quickly, but did not score any points after that. In problem B, I empirically discovered that the answer is always 1 or 2, but could not figure out a condition separating those two cases, not to mention the actual sequence of steps achieving that number. In problem C, my incorrect solutions were quite close in idea to the one described in the editorial, but not close enough in the details. At the end of the contest, I tried to solve E as a heuristic problem, trying various greedy approaches with local optimizations on top, also without much success. Well, better luck next year!
Here is that heuristic-inviting problem E: you are given at most 250000 vectors, and need to choose a subset that maximizes the value of (squared length of the sum of the chosen vectors) minus three times (sum of squared lengths of the chosen vectors). Can you find an exact solution?
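To make the objective concrete, here is a tiny brute force over all subsets, handy for stress testing heuristic attempts on small inputs. It is only a sketch: the sample vectors are made up, I assume two-dimensional integer vectors just for illustration (the code itself works for any dimension), and I assume the empty subset (value 0) is allowed.

```python
from itertools import combinations

# Hypothetical tiny input; the real problem allows up to 250000 vectors.
vectors = [(3, 1), (-2, 4), (1, -5), (2, 2), (-1, -1)]

def objective(chosen):
    # (squared length of the sum) minus 3 * (sum of squared lengths)
    total = [sum(coords) for coords in zip(*chosen)]
    return sum(t * t for t in total) - 3 * sum(x * x for v in chosen for x in v)

best, best_subset = 0, ()  # assuming the empty subset (value 0) counts
for size in range(1, len(vectors) + 1):
    for subset in combinations(vectors, size):
        value = objective(subset)
        if value > best:
            best, best_subset = value, subset

print(best, best_subset)
```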
This round wrapped up the race for the 2026 World Tour Finals (results, top 12 on the left). Everyone with >=100 points qualified, but people had wildly different ways to reach that total, with some qualifying thanks to a round victory, and some never placing higher than fourth in any particular round. I'm looking forward to yet another exciting final in Japan!
It is also notable that some usual suspects did not qualify this time, including zhoukangyang, who was first and first in qualification in the previous two seasons, and first and second in the last two World Tour Finals. Better luck next year as well!
In my previous summary, I mentioned another AtCoder problem: you are given two sequences of integers ai and bi of the same length n up to 2*10^5, 1 <= ai <= 10^9, 0 <= bi <= 30. We will permute the sequence ai arbitrarily, and then compute sum(ai >> bi), where >> denotes the bitwise right shift. How many of the n! permutations yield the smallest possible value of that sum? You need to print the answer modulo 998244353.
First, let us sort both arrays, so we can assume they are sorted below. Intuitively, it feels like all permutations of ai that deliver the minimum must be sorted or almost sorted. To make further progress, we can either implement a brute force approach to find out experimentally what exactly "almost" means, or make the intuitive argument more formal, with the same goal.
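Such a brute force is short enough to be worth writing even if we then go the formal route, if only to stress test the final solution. Here is a quick sketch (the small bounds are made up for experimentation and are not the actual constraints); note that it counts permutations of indices, so equal values of ai are treated as distinct, matching the n! in the statement:

```python
import random
from itertools import permutations

# For small n, count the permutations p of indices that minimize
# sum(a[p[i]] >> b[i]), and collect the optimal orderings to inspect them.
def brute(a, b):
    best, count, optimal = None, 0, []
    for p in permutations(range(len(a))):
        value = sum(a[p[i]] >> b[i] for i in range(len(a)))
        if best is None or value < best:
            best, count, optimal = value, 1, [p]
        elif value == best:
            count += 1
            optimal.append(p)
    return best, count, optimal

random.seed(1)
n = 6
a = sorted(random.randint(1, 16) for _ in range(n))
b = sorted(random.randint(0, 3) for _ in range(n))
best, count, optimal = brute(a, b)
print("a =", a, "b =", b, "min =", best, "count =", count)
for p in optimal:
    print([a[i] for i in p])  # how close to sorted are the optimal orders?
```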
Suppose we have only two elements. If a0 = a1 or b0 = b1, then they can be swapped freely without changing our target value. Now consider the case when they are distinct: a0 < a1 and b0 < b1. When is it OK to swap them?
Since we're going to shift right by at least b0, it makes sense to introduce p = a0 >> b0, q = a1 >> b0, and r = b1 - b0. Then we need the following to be true: p + (q >> r) = q + (p >> r), which is equivalent to q - p = (q >> r) - (p >> r).
What happens to the difference of two numbers if we cut their last bit (shift both right by one)? It is not hard to check that it stays the same in only one case: if the bigger number was even, and exactly 1 higher than the smaller number. In all other cases it becomes smaller. So for the above condition to be true, we need q = p + 1 and q divisible by 2^r. These are the only cases where two distinct numbers can be swapped without increasing our target value.
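Before building on this characterization, it is cheap to double-check it exhaustively for small values; here is a throwaway script doing just that (not part of the actual solution):

```python
# Check: for p < q and r >= 1, p + (q >> r) == q + (p >> r)
# holds exactly when q == p + 1 and q is divisible by 2^r.
LIMIT = 200
for r in range(1, 8):
    for p in range(LIMIT):
        for q in range(p + 1, LIMIT):
            swappable = (p + (q >> r) == q + (p >> r))
            predicted = (q == p + 1 and q % (1 << r) == 0)
            assert swappable == predicted, (p, q, r)
print("the characterization holds for all p < q <", LIMIT, "and r < 8")
```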
Now we will consider our arrays in blocks of equal bi. Consider the first such block: b0 = b1 = ... = bk-1 = 0, and bk > 0.
Denote x = ak if ak is even, and ak + 1 otherwise, so that x is always even. From the above argument, the only swaps we can do between this block and the rest of the array are swaps of the value x - 1 from the block with the value x outside the block.
Let us denote:
- c as the number of i such that ai = x - 1 and i < k,
- d as the number of i such that ai = x - 1 and i >= k,
- e as the number of i such that ai = x and i < k,
- f as the number of i such that ai = x and i >= k.

Note that we always have either d = 0, or e = 0, or both.
Suppose we decide to do g swaps of the above form (x - 1 from the block with x from the rest). Then we have (c + d choose c - g) ways to choose which values x - 1 go inside the block, (e + f choose e + g) ways to choose which values x go inside the block, and k! ways to permute the values inside the block. We need to multiply those three numbers by the answer for the remaining subproblem, which considers the arrays starting from position k, such that g values of x have been replaced with x - 1, and we must make sure that those replaced values all end up in the next r blocks, where r is the number of 0s at the end of the binary representation of x.
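In code, this per-block multiplier for a fixed g is just a product of binomial coefficients and a factorial modulo 998244353. A minimal sketch with precomputed factorials (c, d, e, f, g, k as defined above; the function name is mine, not from the linked solution):

```python
MOD = 998244353
MAXN = 2 * 10**5 + 10

# Precompute factorials and inverse factorials modulo the prime MOD.
fact = [1] * MAXN
for i in range(1, MAXN):
    fact[i] = fact[i - 1] * i % MOD
inv_fact = [1] * MAXN
inv_fact[MAXN - 1] = pow(fact[MAXN - 1], MOD - 2, MOD)
for i in range(MAXN - 1, 0, -1):
    inv_fact[i - 1] = inv_fact[i] * i % MOD

def comb(n, r):
    if r < 0 or r > n:
        return 0
    return fact[n] * inv_fact[r] % MOD * inv_fact[n - r] % MOD

def block_multiplier(c, d, e, f, g, k):
    # (c + d choose c - g) ways to place the values x - 1 inside the block,
    # (e + f choose e + g) ways to place the values x inside the block,
    # k! ways to permute the values inside the block.
    return comb(c + d, c - g) * comb(e + f, e + g) % MOD * fact[k] % MOD
```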
Now, what happens when we consider the next block, the one where bi = 1? We can shift all values ai right by 1 and denote y = x >> 1 (remember that x is even), after which we can handle this block in the same way we handled the block with bi = 0. The only difference is the additional complexity that g values of y have been replaced with y - 1. Let us define z in the same way as x was defined above, as the "swap value" between this new block and the rest, and suppose we decide to do h swaps of z - 1 from the block with z from the rest.
We now have the following four cases:
- y < z - 1. This means that all values of y - 1 must go in this block, so they are not special anymore, and we can just sum up the number of ways over all values of g to get a single multiplier, and then solve this block exactly like the first block.
- y = z - 1. Here all values of y - 1 also must go in this block, but additionally g of the values of y (= z - 1) are no longer available for swapping forwards since they were already swapped backwards, so we need to subtract g from the number of occurrences of z - 1 in the block before computing the above combination numbers.
- y = z. Here g and h refer to the same swaps (z with z - 1), but done over different block boundaries. The number of occurrences of z and z - 1 within the block (which are again used to compute the above combination numbers) will therefore depend on the difference between g and h (and we need to be careful to make sure h can be smaller than g, too).
- y > z. Initially I thought this was impossible, but then an assertion failed in my solution :) And indeed, if the first value of the rest in the first block was 5, we'd get x = 6 and therefore y = 3; if the rest after the next block still starts with 5, then after the right shift that value becomes 2, and we get z = 2. After some thinking I realized that this case means that we have no place to put the values of y - 1 without violating the restrictions, therefore we must only consider g = 0 for the previous block.
This finally gives rise to a polynomial dynamic programming solution. The state of this dynamic programming will be the bit number that is being considered (the value that we have already shifted all ai right by, or alternatively the next value of bi to consider), and the number g of swaps across the previous block boundary. Note that the other important values, such as the value of y that participated in those swaps, or the position k of the block, are uniquely determined by the bit number, and therefore do not need to be part of the state. And if we loop over the bit number on the outside, our dynamic programming array can be just one-dimensional. This solution has O(n) states in total over all bits, but each transition can also be O(n), therefore it is quadratic, and indeed it gets TLE.
However, the dynamic programming transition here is a summation after multiplying by a combination number, which when expressed via factorials ends up being a product/fraction of factorials of g, of h, and of either g + h (in the y = z - 1 case) or g - h (in the y = z case). For that we have a trick that was described in this blog 9+ years ago: we can use fast polynomial multiplication via FFT, using (const - g) and (g + h) as the monomial degrees in the y = z - 1 case to get (const + h) in the product, and using g and (const + h - g) as the monomial degrees in the y = z case to get (const + h) in the product. This gives us an O(n * log(n) * num_bits) solution, which is fast enough. You can check the linked solution for the implementation details.
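To illustrate the degree bookkeeping on the y = z - 1 shape, here is a small self-contained sketch. It is not the linked solution: the transition is simplified to new[h] = sum over g of dp[g] * C(g + h, h) (the real weights also carry factors that depend only on g or only on h, which can be folded into dp or pulled out of the sum), and the naive convolve stands in for an actual FFT/NTT modulo 998244353.

```python
import random

MOD = 998244353
N = 60  # demo size only

# Factorials and inverse factorials up to 2 * N, enough for (g + h)!.
fact = [1] * (2 * N)
for i in range(1, 2 * N):
    fact[i] = fact[i - 1] * i % MOD
inv_fact = [pow(f, MOD - 2, MOD) for f in fact]

def convolve(a, b):
    # Naive O(len(a) * len(b)) multiplication; the real solution would use
    # FFT/NTT modulo 998244353 here to get the n * log(n) bound.
    res = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            res[i + j] = (res[i + j] + x * y) % MOD
    return res

def transition_naive(dp):
    # new[h] = sum_g dp[g] * C(g + h, h) = sum_g dp[g] * (g + h)! / (g! * h!)
    n = len(dp)
    return [sum(dp[g] * fact[g + h] % MOD * inv_fact[g] % MOD * inv_fact[h]
                for g in range(n)) % MOD for h in range(n)]

def transition_via_convolution(dp):
    n = len(dp)
    const = n - 1
    # put dp[g] / g! at monomial degree (const - g) ...
    a = [0] * n
    for g in range(n):
        a[const - g] = dp[g] * inv_fact[g] % MOD
    # ... and m! at degree m, so that (const - g) + (g + h) = const + h
    b = [fact[m] for m in range(2 * n - 1)]
    prod = convolve(a, b)
    # the coefficient at degree (const + h) is sum_g dp[g] / g! * (g + h)!
    return [prod[const + h] * inv_fact[h] % MOD for h in range(n)]

random.seed(0)
dp = [random.randrange(MOD) for _ in range(N)]
assert transition_naive(dp) == transition_via_convolution(dp)
print("the (const - g) + (g + h) = (const + h) bookkeeping checks out")
```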
Thanks for reading, and check back soon for more catch-up summaries and EUC 2026 reports :)