Editorial for CPC '19 Contest 1 P6 - Maintaining Some Coins
Submitting an official solution before solving the problem yourself is a bannable offence.
Authors:
Subtask 1
For the first subtask, we can simply use a std::vector (C++) to insert and delete at an index. Then we can iterate through the entire list to count the number of elements that meet the query criteria.
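A minimal sketch of this approach (the operation format below is our own invention for illustration; the actual problem's input format may differ):

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    vector<long long> coins;  // the dynamic array of coin values
    int q;
    cin >> q;
    while (q--) {
        char op;
        cin >> op;
        if (op == 'I') {      // insert value v at 1-indexed position i
            int i; long long v;
            cin >> i >> v;
            coins.insert(coins.begin() + (i - 1), v);  // O(N) shift
        } else if (op == 'D') {  // delete the coin at position i
            int i;
            cin >> i;
            coins.erase(coins.begin() + (i - 1));      // O(N) shift
        } else {              // count coins with value v in [l, r]
            int l, r; long long v;
            cin >> l >> r >> v;
            cout << count(coins.begin() + (l - 1), coins.begin() + r, v) << '\n';
        }
    }
    return 0;
}
```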
Time Complexity: $\mathcal{O}(NQ)$
Subtask 2
For the second subtask, the problem can be reduced to determining if an element exists in a range of a dynamic array. To support this, we can use an implicit balanced binary search tree to maintain the order of elements, insert at an index, delete at an index, and query a range in $\mathcal{O}(\log N)$ time. For the queries, we will store a pointer to the node for each value. To get the current index of a node, we start with the size of its left subtree, then follow parent pointers until we reach the root, adding the size of the parent's left subtree plus one whenever the current node is the right child of its parent.
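A sketch of that index computation (the node layout here is assumed; any BBST that stores subtree sizes and parent pointers works the same way):

```cpp
#include <iostream>

struct Node {
    Node *left = nullptr, *right = nullptr, *parent = nullptr;
    int size = 1;  // number of nodes in the subtree rooted here
};

int sz(const Node *n) { return n ? n->size : 0; }

// Current 0-indexed position of x in the sequence the tree represents.
int indexOf(const Node *x) {
    int idx = sz(x->left);                     // elements before x in x's own subtree
    for (const Node *cur = x; cur->parent; cur = cur->parent)
        if (cur == cur->parent->right)         // everything in the parent's left
            idx += sz(cur->parent->left) + 1;  // subtree, plus the parent, precedes x
    return idx;
}

int main() {
    // Tiny hand-built tree representing the sequence [a, b, c]:
    Node a, b, c;
    b.left = &a; b.right = &c; b.size = 3;
    a.parent = &b; c.parent = &b;
    std::cout << indexOf(&a) << ' ' << indexOf(&b) << ' ' << indexOf(&c) << '\n';  // 0 1 2
}
```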
Time Complexity: $\mathcal{O}((N + Q) \log N)$
Subtask 3
Official Solution by
Similarly to Subtask 2, we can use an implicit balanced binary search tree (BBST) with parent pointers to store the dynamic array. This tree will be known from now on as the main BBST.
Before, we could just store the pointer for each value, as each value was distinct. However, now that there can be multiple coins with the same value, we must use a std::unordered_map (C++) that maps each unique value in the array to a BBST which stores the indices of those values. The indices themselves are not stored directly, but rather as pointers to their respective nodes in the main BBST; the index of a pointer is determined dynamically by walking up from its node to the root of the main BBST. Note that these BBSTs will be known from now on as pointer BBSTs.
We can insert and delete from the main BBST in $\mathcal{O}(\log N)$ time, and insert and delete from the pointer BBSTs in $\mathcal{O}(\log^2 N)$ time, since the index of a pointer is calculated in $\mathcal{O}(\log N)$.
For queries, we can search for the $l$ and $r$ values of the query in the pointer BBST corresponding to its value. The final answer is calculated by subtracting the ranks of those two values within that BBST and then adding $1$.
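In other words (our notation): within the pointer BBST for the queried value, let $p_l$ be the rank of the first occurrence at index at least $l$, and $p_r$ the rank of the last occurrence at index at most $r$. Then

$$\text{answer} = p_r - p_l + 1,$$

or $0$ if no occurrence falls inside the range.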
Note that while the insert, delete, and query operations for the pointer BBSTs are standard BBST operations, the key of each node (its index in the main BBST) is calculated dynamically in $\mathcal{O}(\log N)$, so the overall complexity of each query is $\mathcal{O}(\log^2 N)$.
Time Complexity: $\mathcal{O}((N + Q) \log^2 N)$
Alternate solution by wleung_bvg
First, we will decompose the initial array into blocks of size $B$. There will be approximately $\frac{N}{B}$ blocks in total. For each block, we will maintain a count of each element using a std::unordered_map (C++), the starting index of the block, and the actual elements (ordered) in the block. For insertions, we will locate the target block by binary searching over the starting indices of the blocks (a very minor optimization over simply iterating through every block). We can then insert an element into the block in a manner similar to subtask 1. Remember to update the count of the element in its block as well as the starting indices of all subsequent blocks. Assume there will be at most $K$ elements in a block. Then inserting will take $\mathcal{O}\!\left(K + \frac{N}{B}\right)$ time. Deletions can be handled in a similar manner. The problem is that after many insertions, $K$ can become very large. To fix this, when a block becomes larger than $2B$, we can split it into two blocks in $\mathcal{O}(B)$ time. Thus, $K \le 2B$.
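A minimal sketch of that insertion path, with our own structure layout and split threshold (for brevity, it locates the block with a linear scan rather than the binary search over starting indices, and omits the starting-index bookkeeping):

```cpp
#include <bits/stdc++.h>
using namespace std;

const size_t LIM = 640;                          // split threshold, roughly 2B
vector<vector<long long>> blocks{{}};            // block contents, in order
vector<unordered_map<long long, int>> cnt{{}};   // per-block value counts

// Insert value v so that it becomes the element at 0-indexed position pos.
void insertAt(size_t pos, long long v) {
    size_t b = 0;
    while (b + 1 < blocks.size() && pos > blocks[b].size())
        pos -= blocks[b++].size();               // locate the block holding pos
    blocks[b].insert(blocks[b].begin() + pos, v);  // O(B) shift, as in subtask 1
    cnt[b][v]++;
    if (blocks[b].size() > LIM) {                // oversized: split in O(B)
        size_t half = blocks[b].size() / 2;
        vector<long long> right(blocks[b].begin() + half, blocks[b].end());
        blocks[b].resize(half);
        unordered_map<long long, int> rc;
        for (long long x : right) rc[x]++;
        for (auto [x, c] : rc)                   // move the counts of the moved
            if ((cnt[b][x] -= c) == 0) cnt[b].erase(x);  // elements to the new block
        blocks.insert(blocks.begin() + b + 1, move(right));
        cnt.insert(cnt.begin() + b + 1, move(rc));
    }
}
```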
For queries, we can make the observation that each query can be divided into three (possibly empty) sections: a section that is a suffix of a block, a section spanning multiple blocks entirely, and a section that is a prefix of a block (the exception is when a query is contained fully inside a single block). For the sections that are prefixes and suffixes of a block, we can go through every element in that block and count the number of elements meeting the query criteria. This takes $\mathcal{O}(B)$ time. For the section spanning multiple blocks, we can iterate through each of those blocks and count the number of elements equal to the query element by using the std::unordered_map for that block. This takes $\mathcal{O}\!\left(\frac{N}{B}\right)$ time.
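Continuing the same sketch (this reuses the blocks and cnt structures from the insertion snippet and handles the three sections directly):

```cpp
// Count occurrences of v in the 0-indexed range [l, r].
long long rangeCount(size_t l, size_t r, long long v) {
    long long ans = 0;
    size_t b = 0;
    // Skip blocks entirely before the range, then handle the first touched block.
    for (; b < blocks.size(); b++) {
        size_t len = blocks[b].size();
        if (l >= len) { l -= len; r -= len; continue; }
        if (r < len) {                       // query contained in a single block
            for (size_t i = l; i <= r; i++) ans += blocks[b][i] == v;
            return ans;
        }
        for (size_t i = l; i < len; i++) ans += blocks[b][i] == v;  // suffix section
        r -= len; b++;
        break;
    }
    // Blocks covered entirely: use the per-block hash map, O(1) each.
    for (; b < blocks.size(); b++) {
        size_t len = blocks[b].size();
        if (r < len) break;
        auto it = cnt[b].find(v);
        if (it != cnt[b].end()) ans += it->second;
        r -= len;
    }
    // Prefix section of the last touched block.
    if (b < blocks.size())
        for (size_t i = 0; i <= r; i++) ans += blocks[b][i] == v;
    return ans;
}

int main() {
    for (int i = 0; i < 10; i++) insertAt(i, i % 3);  // array: 0 1 2 0 1 2 0 1 2 0
    cout << rangeCount(0, 9, 0) << '\n';              // prints 4
}
```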
Here, it is optimal to choose $B = \sqrt{N}$, since that balances the $\mathcal{O}(B)$ and $\mathcal{O}\!\left(\frac{N}{B}\right)$ costs. Note that since std::unordered_map has a bad constant, we should make $B$ larger than $\sqrt{N}$ in practice.
There are other variants of this solution, such as splitting a block at the query borders and rebuilding the structure after every $\mathcal{O}(\sqrt{N})$ queries. Another variant is to give each block a fixed size, and use a deque to shift elements to adjacent blocks to maintain the size of each block.
Time Complexity: $\mathcal{O}\!\left((N + Q)\sqrt{N}\right)$
Bonus Tasks:
As a warm-up, try solving Binary Search Tree Test with the same technique.
If you're up for a challenge, you can try solving Maintaining a Sequence with this technique.
Comments
This is a really comprehensive and intuitive editorial and I applaud the effort that went into writing it!
Could somebody please explain the rationale behind making $B$ larger than $\sqrt{N}$ in wleung_bvg's solution, given the constant factor of unordered_map? As you can probably tell, sqrt decomposition isn't exactly my strongest algorithm.
As an extra question, how can a hash table (unordered_map) have a constant factor of anything other than 1?
Edit: it all makes sense now - thanks so much!
While hash tables are $\mathcal{O}(1)$ on average for inserting, deleting, and accessing elements, there are generally a lot of machine operations involved compared to a simple array access. In addition, hash collisions can result in having to access many more memory locations for any of these operations. As such, most operations involving hash tables, while still $\mathcal{O}(1)$, have a large constant factor involved.
In the solution above, all insertion and deletion operations will require some use of a hash table. While splitting a block will require $\mathcal{O}(B)$ hash table read/write operations, this should only happen once every $\mathcal{O}(B)$ insertions on average.
On the other hand, queries require $\mathcal{O}\!\left(\frac{N}{B}\right)$ hash table accesses each. Since we want to minimize the number of hash table operations, we should make $B$ larger, which makes $\frac{N}{B}$ smaller.