The May 23 - May 29 week was full of various tournament rounds. TopCoder Open 2016 Round 2B took place on Thursday (problems, results, top 5 on the left). The top 40 qualified for the Round 3s, which was of course the main prize, but W4yneb0t wasn't content with just that: he found 100 points in the challenge phase to snatch first place from Egor – congratulations!
Google Code Jam 2016 Online Round 2 was the main event on Saturday (problems, results, top 5 on the left, analysis). Just four contestants managed to solve all four problems in full, and EgorKulikov claimed a clear first place by solving everything with 40 minutes to spare – great job! Of course, well done to all 500 contestants who qualified for Round 3.
I was the author of the hardest problem D in this round, but it's actually problem C that I found the most beautiful. Consider an RxC grid such that every cell has exactly one of its two diagonals drawn. Those diagonals separate the grid into pathways which connect the 2*(R+C) unit segments on the sides of the grid to each other. Given a matching that tells which other unit segment each unit segment should be connected to, construct any grid that results in such connectivity. The product R*C is at most 100.
I like this problem so much in part because the solution doesn't rely on any advanced algorithms or data structures – it is a pure thinking challenge. Consider giving it a go!
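I won't spoil the solution here, but to make the pathway mechanics concrete: each diagonal acts like a mirror, pairing up two of the four sides of its cell. Here is a small simulation sketch of that forward direction; the encoding of diagonals as '/' and '\' and the trace function are my own illustration, not part of the official statement or analysis.

# Each cell contains '/' or '\'. Entering a cell through one of its sides,
# the path exits through the other side of the same triangle: for '\' the
# top side pairs with the right side and the bottom with the left; for '/'
# the top pairs with the left and the bottom with the right.

def trace(grid, r, c, side):
    # Follow a pathway that enters cell (r, c) through the given side
    # ('U', 'D', 'L' or 'R'); return the boundary unit segment where it
    # exits, identified by the out-of-grid cell and its entry side.
    R, C = len(grid), len(grid[0])
    while 0 <= r < R and 0 <= c < C:
        if grid[r][c] == '\\':
            out = {'U': 'R', 'R': 'U', 'D': 'L', 'L': 'D'}[side]
        else:  # '/'
            out = {'U': 'L', 'L': 'U', 'D': 'R', 'R': 'D'}[side]
        # step into the neighboring cell across the exit side
        dr, dc = {'U': (-1, 0), 'D': (1, 0), 'L': (0, -1), 'R': (0, 1)}[out]
        r, c = r + dr, c + dc
        side = {'U': 'D', 'D': 'U', 'L': 'R', 'R': 'L'}[out]
    return r, c, side

# In a single-cell grid with '\', the path entering through the top
# exits through the right side:
print(trace(['\\'], 0, 0, 'U'))  # (0, 1, 'L')

Running trace from each of the 2*(R+C) boundary segments recovers the matching a given grid induces; the problem itself asks for the inverse construction.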
Russian Code Cup 2016 Qualification Round 2 selected another 200 participants for the upcoming Elimination Round (problems, results, top 5 on the left, analysis). With four easy problems and only one hard one, the competition for the top spots was all about accuracy. Tourist demonstrated just that, overcoming the early gap to enot.1.10 and claiming the top spot by just 7 penalty minutes. Way to go!
Finally, Google Distributed Code Jam 2016 Online Round 1 on Sunday was the first round in 2016 to challenge contestants with problems that require distributed solutions (problems, results, top 5 on the left, analysis). Unlike last year, the contestants already had quite a lot of experience with the format, and thus breezed through the easier problems. Last year's finalist simonlindholm solved all problems correctly in just over an hour (out of the three available) – awesome job!
In my previous summary, I've mentioned a very nice ACM ICPC World Finals problem: you have n disk drives (n up to 1 million), i-th having capacity ai. You want to reformat all of them to a new filesystem, and i-th drive will have capacity bi after reformatting. However, the drives are filled with data initially, and in order to reformat a drive you have to move its data elsewhere, be it other drives that gained more capacity after reformatting, or a new drive that you buy – that one already uses the new filesystem and doesn’t need reformatting. What is the minimum capacity the new drive needs to have in order to support reformatting all existing drives? Note that we are free to pick the order of reformatting the drives, and are free to move the data around arbitrarily between reformattings.
Let's split the drives into two classes: increasing ones have bi>=ai, and all others are decreasing. First, we can note that all increasing drives should be reformatted before all decreasing ones: suppose we reformat an increasing drive right after a decreasing one; if we do those two reformattings in reverse order, the end result is the same, but the capacity we have available before each of them is at least as large, since the increasing drive can only add capacity.
The remaining part is to find the order in which to handle the increasing drives, and the order for the decreasing drives. To see this, consider the drive we reformat first, with parameters ai and bi. In order not to lose data, we need at least ai of capacity on the new drive. But if we have that much capacity, we can also reformat any other increasing drive with the same or smaller ai, and since increasing drives only make our situation better, we can always reformat them as soon as we have enough spare capacity. Because of this, we can always start by reformatting the increasing drive with the smallest ai, and that, in turn, shows that we can handle all increasing drives in increasing order of ai. By a symmetric argument, all decreasing drives should be handled in decreasing order of bi.
Now that we've figured out the exact order of reformatting the drives, we can find the required spare capacity by simply tracking the total available capacity after every operation.
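Here is a short sketch of that simulation under my reading of the statement (the drives start full, so the total amount of data equals the sum of the ai, and while a drive is being reformatted its capacity is unusable); the function name and the (a, b) pair encoding are mine.

def min_new_drive(drives):
    # drives: list of (a, b) pairs, where a is the old capacity and b the
    # capacity after reformatting. Returns the minimal capacity of the new
    # drive, processing increasing drives in increasing order of a, then
    # decreasing drives in decreasing order of b, as derived above.
    inc = sorted((d for d in drives if d[1] >= d[0]), key=lambda d: d[0])
    dec = sorted((d for d in drives if d[1] < d[0]), key=lambda d: -d[1])
    data = sum(a for a, b in drives)  # all drives start full
    cap = data                        # total capacity of the old drives
    need = 0                          # smallest new-drive capacity so far
    for a, b in inc + dec:
        # while this drive is reformatted its capacity is unusable, so the
        # new drive plus the other drives must hold all the data:
        # need + (cap - a) >= data
        need = max(need, data - (cap - a))
        cap += b - a                  # this drive's capacity is now b
    return need

# A tiny example: a 10->5 drive and a 5->10 drive. Handling the increasing
# drive first, a new drive of capacity 5 suffices.
print(min_new_drive([(10, 5), (5, 10)]))  # 5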
Thanks for reading, and check back soon for some June news!