Let’s talk about the hard problem from TopCoder SRM 619.
Consider two sequences of N (up to 100) integers, each integer between 1 and M. We call them similar if it's possible to remove at most two elements from each sequence so that they become equal. How many pairs of similar sequences are there for the given N and M (modulo 10^9+7)?
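To pin down the definition: since both sequences have length N, removing at most two elements from each so that the results coincide is the same as the two sequences having a common subsequence of length at least N-2. Here is a minimal brute-force check of that reformulation (my own sketch, not the reference solution; the names are illustrative):

def lcs_length(a, b):
    # classic quadratic longest-common-subsequence DP
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]

def similar(a, b):
    # similar <=> we can delete at most two elements from each to make them equal
    return lcs_length(a, b) >= len(a) - 2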
It’s quite natural to expect that this problem can be solved by dynamic programming: add characters one by one to both strings, and remember something about the characters already added. But how do we figure out what to remember – in other words, what the dynamic programming state is?
It occurred to me that we can approach this question itself algorithmically. Let’s start with the trivial dynamic programming – when we remember the entire strings. In other words, the solution simply is: answer(N, M, [], []), where answer(N, M, prefix1, prefix2) = {1 if len(prefix1) == N and the longest common subsequence of prefix1 and prefix2 has length at least N-2; 0 if len(prefix1) == N and the longest common subsequence is shorter than N-2; sum(answer(N, M, prefix1 + [c1], prefix2 + [c2]) for all numbers c1 from 1 to M and for all numbers c2 from 1 to M) otherwise}.
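As a sanity check, this trivial recursion can be transcribed almost literally (a sketch that only runs for tiny N and M; lcs_length is the helper from the previous snippet):

def answer(N, M, prefix1, prefix2):
    if len(prefix1) == N:
        return 1 if lcs_length(prefix1, prefix2) >= N - 2 else 0
    # otherwise, try every pair of next characters
    return sum(answer(N, M, prefix1 + [c1], prefix2 + [c2])
               for c1 in range(1, M + 1)
               for c2 in range(1, M + 1))

print(answer(3, 2, [], []) % (10**9 + 7))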
This algorithm has O(M^(2N)) states, so it’s way too slow for any reasonable constraints. But we can still implement it and run it for very small values of M and N. And then we can notice that some states of our DP are actually equivalent. First, if len(prefix1) == N, then there are just two equivalence classes of states: those where the answer is 1 and those where the answer is 0. Then, for each state with len(prefix1) == N-1 we can check how many pairs (c1, c2) lead to a state with answer 1 (the rest lead to a state with answer 0), and this is the only thing that matters – so there will be at most M^2+1 different states. In the general situation, the states with length A are classified based on the multiset of states with length A+1 that are reachable from them.
This allows us to write a program that will find the equivalence classes of states, and thus might give us a clue about what we really need to remember in the DP state! The program is available here; here’s its output for N=6, M=2:
For len 6 there are 2 representatives.
For len 5 there are 4 representatives.
For len 4 there are 10 representatives.
For len 3 there are 9 representatives.
For len 2 there are 4 representatives.
For len 1 there are 2 representatives.
For len 0 there are 1 representatives.
So we have just 32 different states out of the total 5461 states!
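For concreteness, here is a sketch of how such a classification can be computed (my own reconstruction of the referenced program, reusing lcs_length from above, not the original code):

from itertools import product

def equivalence_classes(N, M):
    alphabet = range(1, M + 1)
    classes = {}  # classes[length] maps (prefix1, prefix2) to its class id
    # length N: a state's class is simply its answer, 0 or 1
    classes[N] = {(p1, p2): 1 if lcs_length(p1, p2) >= N - 2 else 0
                  for p1 in product(alphabet, repeat=N)
                  for p2 in product(alphabet, repeat=N)}
    print("For len", N, "there are", len(set(classes[N].values())), "representatives.")
    # shorter lengths: a state's class is the multiset of its successors' classes
    for length in range(N - 1, -1, -1):
        seen = {}
        classes[length] = {}
        for p1 in product(alphabet, repeat=length):
            for p2 in product(alphabet, repeat=length):
                sig = tuple(sorted(classes[length + 1][(p1 + (c1,), p2 + (c2,))]
                                   for c1 in alphabet for c2 in alphabet))
                classes[length][(p1, p2)] = seen.setdefault(sig, len(seen))
        print("For len", length, "there are", len(seen), "representatives.")
    return classes

equivalence_classes(6, 2)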
But how do we translate this knowledge into a better solution? Well, the program referenced above also prints out the equivalence classes of states with length 3:
aaa$baa bba$baa aba$aaa bbb$bab bbb$abb aaa$aba abb$bab abb$bbb baa$aba baa$aaa aab$abb bab$bbb baa$bba aba$baa bab$abb abb$aab
aaa$bbb bbb$aaa
aaa$bba bba$aaa aab$bbb bbb$aab
aaa$bab bbb$aba aba$bbb bab$aaa
bba$aab aab$bba
bba$aba aab$bab aba$bba bab$aab
bba$abb bba$bab bba$bbb aba$aab bbb$bba aba$abb aab$baa aaa$aab baa$aab abb$bba aab$aaa aab$aba bab$bba bab$baa baa$bab aba$bab bab$aba abb$aba
bba$bba bbb$bbb aba$aba aaa$aaa aab$aab bab$bab baa$baa abb$abb
bbb$baa aaa$abb baa$abb abb$baa baa$bbb abb$aaa
By looking at those equivalence classes, we can notice certain symmetries. First, replacing a with b and vice versa clearly doesn’t change the state. Also, swapping the first and the second string doesn’t change the state.
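A minimal sketch of a canonical form that captures exactly these two symmetries (my own construction, not code from the program above): relabel the letters by order of first occurrence, and take the lexicographically smaller of the two orderings of the strings.

def canonical(p1, p2):
    def relabel(a, b):
        # rename letters by order of first occurrence across both strings
        mapping = {}
        for ch in a + b:
            mapping.setdefault(ch, len(mapping))
        return (tuple(mapping[ch] for ch in a),
                tuple(mapping[ch] for ch in b))
    # minimum over the swap symmetry; relabel handles the alphabet symmetry
    return min(relabel(p1, p2), relabel(p2, p1))

Two states would then be merged exactly when their canonical forms coincide.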
Now we can use this knowledge to obtain a faster solution. Instead of considering all M^(2N) possible states, we can already merge the states that differ only by swapping the strings and permuting the alphabet. We modify our program (new version available here) and run it for larger values of N and M, hoping to gain more insight. The result of a run for N=7, M=3, and printing equivalence classes of length 4 is available here.
We can notice two things: first, even for N=7 and M=3 there are only a handful of truly different states; and second, there are many more situations where states are equivalent that we’re still missing.
And now comes the part that I still haven’t figured out: can we make the jump from the above output to the final idea required to solve this problem? The final idea is: we should only remember the last 3 characters of each string (again up to a permutation of the alphabet), and the answers to the following 9 questions: what is the longest common subsequence of prefix1[1..l1] and prefix2[1..l2], for l1 from len(prefix1)-2 to len(prefix1) and for l2 from len(prefix2)-2 to len(prefix2)? Since the longest common subsequence of prefix1 and prefix2 must have length at least len(prefix1)-2 (otherwise we’d be unable to complete both sequences in a way that satisfies the problem constraints), there are only a few such states possible. This idea is explained in more detail in the editorial.
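One way to connect the two would be to extract this proposed signature from a pair of prefixes and check, against the brute-force classes, that equal signatures always land in the same equivalence class. A sketch of such an extraction (again my own reconstruction of the editorial's state, assuming both prefixes already have at least three characters):

def state_signature(p1, p2):
    # last three characters of each prefix, relabeled by first occurrence
    mapping = {}
    for ch in p1[-3:] + p2[-3:]:
        mapping.setdefault(ch, len(mapping))
    tails = (tuple(mapping[ch] for ch in p1[-3:]),
             tuple(mapping[ch] for ch in p2[-3:]))
    # the answers to the 9 questions: LCS lengths of the truncated prefixes
    table = tuple(lcs_length(p1[:l1], p2[:l2])
                  for l1 in range(len(p1) - 2, len(p1) + 1)
                  for l2 in range(len(p2) - 2, len(p2) + 1))
    return tails, table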
Can you see how the output of our program can lead to such insight? Do you have ideas on how to complete this approach? Are there any interesting articles that use this approach that I should read? Having a disciplined way to come up with dynamic programming solutions sounds quite nice :)