Editorial for BPC 1 S5 - Temple


Remember to use this editorial only when stuck, and not to copy-paste code from it. Please be respectful to the problem author and editorialist.
Submitting an official solution before solving the problem yourself is a bannable offence.

Author: BalintR

Subtask 1

Picking up a treasure essentially consists of walking to the node along the shortest path, increasing the weight of each edge by a_i, then returning along the same shortest path.

Let dist_i denote the length of the shortest path from node 1 to node i. Suppose we visit nodes i and j consecutively and want to decide which to visit first. The total amount by which they increase the edge weights is the same for both orderings, so their effect on later nodes can be ignored. The effect of previously visited nodes can also be ignored, since it increases the total time by the same amount in both orderings. The only thing that differs is the extra time the traps activated at the first node cause for the trip to the second node, so node i should come before node j when a_i \cdot dist_j < a_j \cdot dist_i. Dividing both sides by dist_i \cdot dist_j shows this is equivalent to comparing \frac{a_i}{dist_i} with \frac{a_j}{dist_j} as real numbers, so the comparison imposes a total ordering on the nodes and sorting by it is optimal.

Therefore, after each query we can run a Breadth First Search to compute the shortest path to each node, then sort the nodes by the above comparison. Make sure to handle the cases where a_i = 0 or dist_i = 0.
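
As a concrete illustration, here is a minimal sketch of the ordering step only (the names a and dist follow the editorial; the helper name treasureOrder is made up for illustration, and using __int128 for the cross products is an assumption about the value ranges — the BFS and the cost accumulation are omitted):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Sketch: order the treasures for subtask 1.
// Node i should be visited before node j when the delay it causes for j,
// a[i] * dist[j], is smaller than the delay j would cause for i.
vector<int> treasureOrder(const vector<long long>& a, const vector<long long>& dist) {
    int n = a.size();
    vector<int> order(n);
    iota(order.begin(), order.end(), 0);
    sort(order.begin(), order.end(), [&](int i, int j) {
        // Cross-multiplication avoids floating-point error and naturally
        // handles a[i] = 0 or dist[i] = 0 (such nodes simply sort first).
        return (__int128)a[i] * dist[j] < (__int128)a[j] * dist[i];
    });
    return order;
}
```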

Time Complexity: \mathcal{O}(QN^2)

Subtask 2

For this subtask, we need to be able to BFS efficiently on the complement of a graph. We can modify the normal BFS to do this: keep a list of all nodes that have not been visited yet, and whenever we pop a node from the BFS queue, iterate over all unvisited nodes and check whether they have an edge to the current node. If they do not, remove them from the list of unvisited nodes and add them to the BFS queue as usual. The time complexity is good because a node is iterated over without being visited at most M times in total, once for each edge of the given graph.
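
A sketch of this complement BFS, assuming the given edges are stored in hash sets so that adjacency can be tested in expected O(1) (the name complementBfs is just illustrative):

```cpp
#include <bits/stdc++.h>
using namespace std;

// BFS on the complement of a graph whose edges are stored in `adj[v]`.
// Returns the complement-graph distance from `src` to every node
// (-1 if unreachable).
vector<int> complementBfs(int n, const vector<unordered_set<int>>& adj, int src) {
    vector<int> dist(n, -1);
    list<int> unvisited;                 // nodes not yet reached by the BFS
    for (int v = 0; v < n; v++)
        if (v != src) unvisited.push_back(v);

    queue<int> q;
    dist[src] = 0;
    q.push(src);
    while (!q.empty()) {
        int u = q.front(); q.pop();
        // Every node we inspect here is either visited now (at most N times
        // in total) or blocked by an edge of the given graph (at most M
        // times in total), giving O(N + M) overall.
        for (auto it = unvisited.begin(); it != unvisited.end(); ) {
            int v = *it;
            if (adj[u].count(v)) {
                ++it;                    // edge in the given graph, so no
            } else {                     // complement edge: skip for now
                dist[v] = dist[u] + 1;
                q.push(v);
                it = unvisited.erase(it);
            }
        }
    }
    return dist;
}
```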

Time Complexity: \mathcal{O}(M + N \log N)

Subtask 3

Before moving on to the actual solution, it is useful to establish some bounds on the worst-case distances of nodes. First, every node has distance at most proportional to \sqrt{M+Q}. This is because a node with distance x must have at least x-1 of the M+Q edges (the given edges plus the query edges) going from it to nodes with a smaller distance — it must be joined by such an edge to every node with distance at most x-2, or it would have a shorter path — and there is a node at every distance below x. Second, the sum over all nodes of dist_i - 1 is at most proportional to M+Q, for the same reason.
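
Spelling the counting argument out: pick one node at each distance 1, 2, \ldots, x. The node at distance d accounts for at least d-1 edges, and no edge is counted twice, since each one is counted only from its endpoint with the larger distance. Therefore

M + Q \ge \sum_{d=1}^{x} (d-1) = \frac{x(x-1)}{2}, \text{ and hence } x = \mathcal{O}(\sqrt{M+Q}).

Summing d-1 over every node instead of over one node per level gives the second bound in the same way.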

For this subtask, we need to dynamically maintain the shortest distance to each node. We will process the queries in reverse order, since that is more convenient for this solution: each query then adds an edge to our almost-complete graph instead of removing one, which means the distances of nodes only ever decrease.

Whenever adding an edge decreases the distance of a node, we can start a mini-BFS at that node to determine which other nodes need their distance values reduced. To make it more efficient than the plain complement BFS outlined in subtask 2, we only iterate through nodes whose distances could actually decrease, i.e. nodes whose current distance exceeds the new candidate distance. For this, we also maintain a list of nodes for each distance value. The rest of the BFS works roughly the same way as before and is still efficient: whenever we iterate over a node, either its distance decreases, which happens at most \mathcal{O}(M+Q) times in total, or it is connected to the node whose distance just decreased by one of the given or query edges, so its distance cannot improve through that node. The latter happens at most \mathcal{O}((M+Q)^{1.5}) times, because each node's distance decreases at most \mathcal{O}(\sqrt{M+Q}) times, each decrease causes at most one unsuccessful scan per edge incident to that node, and summing over all M+Q edges gives \mathcal{O}((M+Q)^{1.5}).
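
A possible shape of this update, as a hedged sketch rather than the author's implementation: the almost-complete graph is represented by its missing pairs (the given edges plus the query edges) stored in hash sets nonAdj, the initial distances come from the subtask 2 BFS, and for simplicity every node is assumed to stay reachable, so distances remain below n. The struct name DecrementalDistances is made up for illustration.

```cpp
#include <bits/stdc++.h>
using namespace std;

struct DecrementalDistances {
    int n, maxD = 0;
    vector<unordered_set<int>> nonAdj;      // pairs missing from the graph
    vector<int> dist;                       // current distance of each node
    vector<list<int>> bucket;               // bucket[d] = nodes at distance d
    vector<list<int>::iterator> where;      // position of a node in its bucket

    DecrementalDistances(int n, const vector<pair<int,int>>& nonEdges,
                         const vector<int>& initDist)
        : n(n), nonAdj(n), dist(initDist), bucket(n), where(n) {
        for (auto [u, v] : nonEdges) { nonAdj[u].insert(v); nonAdj[v].insert(u); }
        for (int v = 0; v < n; v++) {
            where[v] = bucket[dist[v]].insert(bucket[dist[v]].end(), v);
            maxD = max(maxD, dist[v]);
        }
    }

    void moveTo(int v, int d) {             // relocate v to bucket d in O(1)
        bucket[dist[v]].erase(where[v]);
        dist[v] = d;
        where[v] = bucket[d].insert(bucket[d].end(), v);
    }

    // Add edge (u, v) to the almost-complete graph; dist[] stays up to date.
    void addEdge(int u, int v) {
        nonAdj[u].erase(v); nonAdj[v].erase(u);
        if (dist[u] > dist[v]) swap(u, v);
        if (dist[u] + 1 >= dist[v]) return; // the new edge changes nothing
        moveTo(v, dist[u] + 1);
        queue<int> q; q.push(v);            // mini-BFS over improved nodes
        while (!q.empty()) {
            int x = q.front(); q.pop();
            int nd = dist[x] + 1;           // candidates can drop to nd
            // Only nodes whose distance exceeds nd can improve, so we scan
            // buckets nd+1..maxD; maxD itself is O(sqrt(M+Q)).
            for (int d = nd + 1; d <= maxD; d++) {
                for (auto it = bucket[d].begin(); it != bucket[d].end(); ) {
                    int y = *it; ++it;      // advance before a possible move
                    if (!nonAdj[x].count(y)) {  // real edge: distance improves
                        moveTo(y, nd);
                        q.push(y);
                    }
                    // otherwise the scan is charged to the missing pair (x, y)
                }
            }
            while (maxD > 0 && bucket[maxD].empty()) maxD--;
        }
    }
};
```

std::list together with a stored iterator gives O(1) relocation of a node between distance buckets; dynamically sized arrays with swap-and-pop removal achieve the same, as the editorial notes.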

This solution can be implemented efficiently with dynamically sized arrays or linked lists storing the list of nodes for each distance. The constant factor isn't bad in practice, since the analysis above is pessimistic: whenever we scan a node whose distance does not decrease because of an edge between it and the current node, that node must itself "use" many edges in order to keep its large distance. However, the lower bound of the solution matches the upper bound: consider a graph whose number of nodes is proportional to \sqrt{M+Q}. For such a graph, working on the complement doesn't help, because the complement can contain as many edges as the original graph.

Time Complexity: \mathcal{O}((M+Q)^{1.5})

Subtask 4

The solution to the final subtask is much the same as the third. The only difference is that we need a data structure to calculate the total cost from the a values and the constantly updating distances. This can be done offline with a Binary Indexed Tree (also known as a Fenwick tree) or online with a balanced binary search tree.
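
For reference, a minimal point-update / prefix-sum Fenwick tree is sketched below. Exactly which prefix sums over the nodes (e.g. in the order from subtask 1) are maintained depends on how the total cost is decomposed, which is not spelled out here, so only the primitive itself is shown.

```cpp
#include <bits/stdc++.h>
using namespace std;

// Fenwick (Binary Indexed) tree: point update and prefix sum in O(log n).
struct Fenwick {
    int n;
    vector<long long> tree;
    Fenwick(int n) : n(n), tree(n + 1, 0) {}
    void add(int i, long long delta) {      // a[i] += delta, 1-indexed
        for (; i <= n; i += i & -i) tree[i] += delta;
    }
    long long prefix(int i) const {         // sum of a[1..i]
        long long s = 0;
        for (; i > 0; i -= i & -i) s += tree[i];
        return s;
    }
};
```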

Time Complexity: \mathcal{O}((M+Q)^{1.5})

