- We will run a 5-round Swiss tournament on lichess with 3+0 time control (3 minutes for each side per game, no increment). This yields a ~30-minute duration, but if games finish earlier it will hopefully be a bit faster and everybody will be in time for the push-ups :)
- To join the tournament, you need a lichess account and a phone or a laptop to play. If you do not have a lichess account and want to participate, please create one in advance!
- To join the tournament, you also need to join this team on lichess first. I will need to approve everybody who wants to join the team so that it remains an onsite event, so you will need to find me in person for that.
- The easiest way to find me in person will be to come to the Chill Zone in the Marriott Blvd today at 17:30 (or a bit earlier).
- The tournament will start at 17:35, so don't be late! If you are late, you should still be able to join during the first two rounds, but then you will miss some games :)
Wednesday, September 3, 2025
A chess day
Monday, September 1, 2025
An alumni day
Thanks for reading, and check back tomorrow for more #ICPCBaku updates!
Sunday, August 31, 2025
A universal week
Saturday, August 23, 2025
A byte week
Nevertheless, this raises a logical question: what would be a good strategy to avoid missing such nice competitions? Reading all Codeforces posts does not seem to work, and neither does checking clist regularly enough. Is there maybe a way to get notified when new contest websites are added to clist (for example, by excluding less relevant resources via "hide" rules in the filters, as opposed to choosing the relevant resources as "show", which is what I do now)? Or should I create an AI bot that checks unfiltered clist or unfiltered Codeforces and decides whether I should be notified? What is your strategy? And of course, what other cool contests am I already missing? :)
Thanks for reading, and check back for this week's summary!
Saturday, August 9, 2025
A thesis week
I got my share of successful randomized solving in problem H, where after being stuck for a while I decided to submit a solution that takes the edges that can definitely be taken, and when there is no such edge, takes a more or less random edge and continues, repeating this process until the time runs out. This should be roughly the same as backtracking but even simpler to implement, and I expected that with n=30 it has a high chance of passing, and indeed it did on the first attempt in which I had fixed all the bugs.
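Since problem H's statement is not quoted here, the sketch below is only my reading of the idea, based on the matching interpretation in point 2 below: pick as many edges as possible so that every vertex is used at most twice. In particular, the "safe edge" rule is my guess at what "can definitely be taken" might mean, and all names and limits are made up.

```python
import random
import time

def greedy_once(n, edges, cap=2):
    """One greedy pass: take a "safe" edge when one exists, otherwise a random edge."""
    capacity = [cap] * n
    remaining = set(range(len(edges)))
    deg = [0] * n                       # remaining edges incident to each vertex
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    taken = []
    while remaining:
        # my guess at "can definitely be taken": both endpoints can still
        # accommodate all of their remaining edges
        safe = [i for i in remaining
                if deg[edges[i][0]] <= capacity[edges[i][0]]
                and deg[edges[i][1]] <= capacity[edges[i][1]]]
        i = safe[0] if safe else random.choice(tuple(remaining))
        u, v = edges[i]
        taken.append((u, v))
        remaining.remove(i)
        for w in (u, v):
            deg[w] -= 1
            capacity[w] -= 1
        # drop edges that can no longer be taken, keeping deg in sync
        for j in [j for j in remaining
                  if capacity[edges[j][0]] == 0 or capacity[edges[j][1]] == 0]:
            remaining.remove(j)
            deg[edges[j][0]] -= 1
            deg[edges[j][1]] -= 1
    return taken

def solve(n, edges, time_limit=2.0):
    """Keep restarting the randomized greedy until the time runs out, remember the best."""
    deadline = time.monotonic() + time_limit
    best = []
    while time.monotonic() < deadline:
        cur = greedy_once(n, edges)
        if len(cur) > len(best):
            best = cur
    return best

# toy usage on a 5-cycle (vertices 0..4): every edge can be taken, so the answer is 5
print(len(solve(5, [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)], time_limit=0.1)))
```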
It is curious that neither the problemsetters nor I were able to see:
1) the relatively standard dynamic programming solution for the problem that takes advantage of the fact that from every subtree of the DFS tree we have at most 5 backward edges, and therefore can just add the state of all those edges to the dynamic programming state.
2) the non-bipartite matching solution that does not even rely on the additional "at most 5" property from the problem statement: the problem statement essentially asks for maximum matching, but where each vertex can be used twice instead of once, so we can just duplicate the vertices and be done.
Point 2 is even more bizarre in my case, since my master's thesis 18 years ago was on a more general version of exactly this problem, 2-path-packings. The thesis was in Russian and I don't think it is published anywhere, but it is roughly equivalent to chapter 3 "O(mn)-time algorithm" in this paper. So the matching interpretation should really have been obvious to me, even after 18 years :)
Thanks for reading, and check back next week!
Sunday, August 3, 2025
An open week
I continue to be amazed by many things IOI:
- the repeat participants who keep pushing me down the hall of fame even further, this year Ryan Bai and István Ádám Molnár.
- the repeat problemsetters that keep coming up with amazing problems, setting three out of six problems just like two years ago: square1001 and E869120.
- the amazing openness of the organizers who publish both the GA minutes (2024, all, this year's minutes will probably appear a bit later?) and the IC minutes (March 2025, all). I am learning a lot about organizing those great events, and about democratic processes in general, from reading those.
Keep up the good work!
Codeforces hosted its Round 1040 between the IOI competition days (problems, results, top 5 on the left, analysis). Kevin114514, who as I understand was too young to qualify for China's IOI team (please correct me if I'm wrong!), instead had to settle for AKing this round and overtaking jiangly for the second place in the overall rating. Well done!

Thanks for reading, and check back next week.
Saturday, August 2, 2025
A focus week
The first problem was from the AtCoder World Tour Finals 2025: you are given n<=100 days and m movies, the i-th movie running from day l_i to day r_i (all pairs (l_i, r_i) are different). Find the sum of the following values: for each of the 2^m sets of movies, what is the maximum number of different movies from that set one can watch, if one only watches at most one movie per day?
During the broadcast I got some ideas from listening to Riku and reading the contestants' solutions. However, it turns out that even with those ideas I could not solve this problem after at least 5 hours of thinking (probably more since I did not count so precisely), so I actually checked the editorial to complete the below solution.
The first idea that I had (and which Riku confirmed was on the right track) was to choose a particular algorithm for finding the maximum amount for a particular set (out of the 2^m possible) of movies. This is a classical problem of course, and the typical greedy approach is to sort the movies by end day, process them in this order, and for each movie that still has a free day on which it can be watched, choose the earliest such free day. This works because, when we process the movies in increasing order of end day, the early free days are less flexible than the late free days (every movie that can be watched on the early one can also be watched on the late one), so it makes sense to pick the least flexible day for a given movie.
Now we will count not just the sum of those maximums over all 2^m sets, but the sum of those maximums as obtained by the above greedy. This is of course the same number, but we can now regroup those sums, for example by looking at how often we watch a movie on a particular day.
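As a sanity check for whatever faster solution one comes up with, here is a minimal sketch of this greedy together with a brute force that sums its results over all 2^m subsets (only usable for tiny m); the function names and the toy example are mine, not from the original problem.

```python
def greedy_max(days, movies):
    """Maximum number of movies from `movies` one can watch, at most one per day:
    process movies in increasing order of end day and give each one the earliest
    still-free day within its interval, if any."""
    free = [True] * (days + 2)          # 1-based days
    watched = 0
    for l, r in sorted(movies, key=lambda lr: lr[1]):
        for day in range(l, r + 1):
            if free[day]:
                free[day] = False
                watched += 1
                break
    return watched

def sum_over_subsets(days, movies):
    """Brute force over all 2^m subsets of movies; handy for checking a faster
    counting solution against."""
    m = len(movies)
    total = 0
    for mask in range(1 << m):
        subset = [movies[i] for i in range(m) if mask >> i & 1]
        total += greedy_max(days, subset)
    return total

# toy example (not from the actual problem data): 3 days, 3 movies
print(sum_over_subsets(3, [(1, 1), (1, 2), (2, 3)]))
```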
More precisely, let us look at how often we do not watch a movie on a particular day a. It turns out that the problem neatly separates into two independent subproblems: since we know that day a always stays free in the above greedy, it means that movies that start on or before day a are never watched after day a. Therefore the movies that start on or before day a only affect days before a, and the movies that start after day a only affect days after a. The actual greedy processes movies that start before and after a in some interleaved order, but it does not matter since they do not interact with each other as long as a is free.
A problem splitting into two independent parts naturally suggests dynamic programming. The subproblem happening after day a in this case is completely identical to the original problem, which is very nice for the dynamic programming.
However, what happens before day a has an additional constraint: we must never watch a movie on day a. And actually, to avoid double-counting we should likely choose a as the first day where we do not watch a movie, so the subproblem now is: for a given segment of days [l, r], and for movies that start in this segment, how many subsets are there that lead to us watching a movie on all days in this segment except the last day r?
This subproblem is not the same as the original problem, so we need to come up with a different way to count this. First, I considered possibilities to get this number by subtraction. If we know the answer for segments [l, r'] for r'<r, it is relatively easy to use subtraction to find the number of subsets where we watch a movie for all days between l and r-1, but I could not figure out a way to split this number into those where we also watch a movie on day r and those where we don't.
Then I tried to come up with a similar dynamic programming argument. A natural question to ask is: what is the last day on the segment [l, r] that was filled by the above greedy? Suppose this day is b. Now it would seem that for the days on the segment [b+1, r] we can solve the same subproblem again, and multiply the overall answer by the answer to that subproblem. However, I think such multiplication leads to double-counting, since it matters which of the segments (filling the last spot in the left part, or filling the last spot in the right part) we encounter before the other. This is where I got stuck, and could not make further progress.
After checking the editorial, it turns out that the idea that I was missing was more or less the following: since we need to rely on the order in which the greedy processes the movies to avoid double-counting, we need to add the position in the greedy order (by end day) to the dynamic programming state, instead of just trying to have states based on days only!
So for each k, l, r we count the number of subsets of the movies that start in the segment [l, r] and are also among the first k movies in the greedy order, that fill the entire segment [l, r] except its right boundary. When going from k to k+1 we now have a concrete movie that we are processing, and the above approach of iterating over the day b that would be the last one to be filled on the segment now works, since we can check whether this concrete movie covers that day, and since before the (k+1)-th movie day b was free, the subproblems before and after day b can be treated independently and their answers multiplied.
This means that I was on the right track (which makes sense, given Riku's hints), and still could not see the last step of the solution for many hours. Any advice on how to avoid getting stuck here? Maybe there is a sub-step that can be made first?
The second problem was from the EGOI 2025: there are n<=50 players playing bingo. Each player will have a card with exactly 12 numbers, each between 1 and 50. Then, all numbers from 1 to 50 will be read out in some order, and the first player to have all 12 of their numbers read out will be declared the winner. Can you design the contents of the players' cards in such a way that there will never be a tie?

It seems surprising at first that this is possible at all beyond the trivial cases where the player cards do not intersect. On the other hand, we have only 50 possible inputs. So my first instinct was to implement some kind of randomized hill climbing.
In order to do that, we need to learn to check if a tie is possible for a particular set of cards. To do this, let us think how a tie can happen. Given a pair of cards x, y that have a common number u, reading this number would result in a tie between x and y if all other numbers from x and y have already been read, and no other card has won. So if we never have a tie, it means that for all such x, y, u there must be another card z that is a subset of the union of x and y, and does not have u.
This leads to the following hill climbing approach: while we have a possible tie, we need to either modify x or y to not have u, or modify another card to become such a z. We can do one of those things at random, and keep repeating until we have no possible ties.
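Here is a rough sketch of both the tie check described above and this hill climbing loop, under my reading of the criterion (a tie is possible exactly when some pair x, y with a common number u has no third card z that is a subset of x ∪ y and avoids u); all names, probabilities and step limits are my own choices.

```python
import random

def find_possible_tie(cards):
    """Return (i, j, u) such that cards i and j share number u but no third card z
    satisfies z ⊆ cards[i] | cards[j] and u ∉ z; return None if no tie is possible."""
    for i, x in enumerate(cards):
        for j in range(i + 1, len(cards)):
            y = cards[j]
            union = x | y
            for u in x & y:
                if not any(z <= union and u not in z for z in cards):
                    return i, j, u
    return None

def hill_climb(n, card_size=12, max_number=50, steps=20000):
    """Randomized repair loop: while a tie is possible, either kick u off one of the
    two offending cards or turn some other card into a suitable z."""
    cards = [set(random.sample(range(1, max_number + 1), card_size)) for _ in range(n)]
    for _ in range(steps):
        bad = find_possible_tie(cards)
        if bad is None:
            return cards                # success
        i, j, u = bad
        pool = list((cards[i] | cards[j]) - {u})
        if n > 2 and len(pool) >= card_size and random.random() < 1 / 3:
            # turn a random third card into a potential tie-breaker z
            k = random.choice([t for t in range(n) if t not in (i, j)])
            cards[k] = set(random.sample(pool, card_size))
        else:
            # replace u on one of the two tying cards with a fresh number
            k = random.choice([i, j])
            cards[k].remove(u)
            fresh = [v for v in range(1, max_number + 1) if v not in cards[k] and v != u]
            cards[k].add(random.choice(fresh))
    return None                         # gave up

print(hill_climb(5))                    # for n >= 5 this never seems to converge
```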
Unfortunately, this does not seem to ever converge for any n>=5, so we need to try something else. My next idea was to try coming up (on paper) with any non-trivial set of cards without ties. And I quickly found a somewhat natural set: consider 13 cards, each with all numbers between 1 and 13 except one. Then as soon as 12 out of the first 13 numbers are read, exactly one of those cards will win, so we never have a tie.
This only solves the problem for n=13, but we can make several copies of this to solve n=26 and n=39, and also augment with the cards that have 12 numbers that do not appear on any other card to also solve this for n=14, n=15, n=16, n=27, n=28.
The next idea is to try to generalize this a bit more. For example, we can build a similar set of cards, each with all numbers between 1 and 7 except one. Those cards satisfy the no-tie criterion, but they have 6 numbers each instead of 12. However, now we can replace each number x with two numbers x and x+7, and obtain a valid set. So we have 7 cards with numbers between 1 and 14 and no ties, solving the problem for n=7, n=7+7, n=13+7, etc. We can also use other divisors of 12 in the same fashion to also solve n=5 with numbers up to 15, n=4 with numbers up to 16, and n=3 with numbers up to 18. Combining those actually solves 33 out of 50 test cases, yielding 66 points.
It turns out that we can generalize this pattern even more: instead of having all numbers except one on a card, we can have all numbers except two, or all numbers except three, and so on. This gets us n equal to various binomial coefficients. And it turns out that combining copies of those constructions allows us to solve all 50 testcases. The most difficult is n=47, which actually needs all numbers from 1 to 50 (so the two 50s in the problem statement, the maximum n and the maximum number on the card, are in fact the same only by coincidence).
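A small sketch of this generalized construction, together with a restatement of the tie check to verify it; the parameterization (a base set of size s, cards are complements of t-element subsets, each base number expanded into 12/(s-t) actual numbers) is my formalization of the description above, and the names are mine.

```python
from itertools import combinations

def tie_possible(cards):
    """True if some pair of cards sharing a number u has no third card z with
    z ⊆ (union of the pair) and u ∉ z (the criterion derived earlier)."""
    return any(not any(z <= (x | y) and u not in z for z in cards)
               for i, x in enumerate(cards) for y in cards[i + 1:] for u in x & y)

def build_block(s, t, first_number=1):
    """Cards are the complements of the t-element subsets of a base set of size s,
    and every base number is then expanded into 12 // (s - t) consecutive actual
    numbers, so each card ends up with exactly 12 numbers.
    Returns the C(s, t) cards and how many actual numbers the block occupies."""
    per_base = 12 // (s - t)
    assert per_base * (s - t) == 12, "s - t must divide 12"
    def expand(b):
        start = first_number + b * per_base
        return set(range(start, start + per_base))
    cards = [set().union(*(expand(b) for b in range(s) if b not in excluded))
             for excluded in combinations(range(s), t)]
    return cards, s * per_base

# s=13, t=1 is the construction from the previous paragraphs: 13 cards, 13 numbers;
# s=8,  t=2 gives 28 cards (6 base numbers per card, doubled) using numbers 1..16
for s, t in ((13, 1), (8, 2)):
    cards, used = build_block(s, t)
    print(len(cards), used, tie_possible(cards))   # expect "13 13 False" and "28 16 False"
```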
Before solving this problem, I was hoping that there would be some computer experimentation involved, and indeed: even though the constructions for individual binomial coefficients can and likely will be invented on paper, checking whether combining their copies always fits within 50 is much easier to do with a quick knapsack implementation. So insisting on solving the problem fully on paper might backfire, as one might search for an even more general construction without realizing the current one is already enough.
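And a sketch of that knapsack check, assuming the building blocks are exactly the ones produced by the construction above (for every divisor d of 12 and every t, a block of C(d+t, t) cards using (d+t)*(12/d) numbers); the block enumeration and names are mine.

```python
from math import comb

def blocks(limit=50):
    """Available building blocks under the construction above: for each divisor d of 12
    (base numbers per card) and each t >= 0, a block of C(d + t, t) non-tying cards
    using (d + t) * (12 // d) actual numbers."""
    result = set()
    for d in (1, 2, 3, 4, 6, 12):
        for t in range(13):
            cards, numbers = comb(d + t, t), (d + t) * (12 // d)
            if 1 <= cards <= limit and numbers <= limit:
                result.add((cards, numbers))
    return sorted(result)

def fewest_numbers(max_n=50, limit=50):
    """Unbounded knapsack: fewest actual numbers needed to build exactly n cards
    out of disjoint copies of the blocks."""
    INF = float("inf")
    best = [0] + [INF] * max_n
    options = blocks(limit)
    for n in range(1, max_n + 1):
        for cards, numbers in options:
            if cards <= n:
                best[n] = min(best[n], best[n - cards] + numbers)
    return best

best = fewest_numbers()
print(all(best[n] <= 50 for n in range(1, 51)))   # every n up to 50 fits within 50 numbers
print(best[47])                                   # n=47 indeed needs all numbers up to 50
```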
I have checked the editorial after getting 100 points for the above solution, and I was very surprised to see that the solution there is exactly the same. It seems that the problem gives us so much freedom that there must be other approaches that work, potentially involving even more implementation/experimentation. Do you know one?
Thanks for reading, and check back tomorrow for this week's summary!
Sunday, July 20, 2025
A 21st century week
Wednesday, July 16, 2025
AWTF25 heuristic day
Tuesday, July 15, 2025
AWTF25 arrival day
Sunday, March 2, 2025
EUC 2025
The EUC is a new contest that was launched last year, so this is the second edition. It brings together the top teams from 4 European regionals, providing them both an additional qualification path to the ICPC World Finals (detailed rules), and a significant onsite competition of its own, giving many teams who will not make it to the World Finals an opportunity to meet other students and enjoy the atmosphere of a big international competition, while keeping the travel costs in check. I am very happy to support this competition by volunteering as a judge for the second time!
We are also running a mirror contest on Codeforces, so come try it in a few hours! I think the problems are quite nice.