You’re the owner of a concert hall. You have a list of performances, each with a start time s_i, an end time e_i, and a price p_i. We want to pick the set of performances to rent out to such that we collect the maximum total amount of rent.

This problem can be solved with DP. We consider the following properties:

  • The solution space is given by all the subsets of performances.
  • The objective function says that for each subset S, we have value(S) = Σ_{i ∈ S} p_i, the total price of the chosen performances.
  • The constraint here is that the chosen performances cannot overlap in time.


Input: n tuples (s_i, e_i, p_i), one per performance i.

Output: a subset S of the performances such that the performances in S don’t overlap, and we get the maximum total price Σ_{i ∈ S} p_i.

Naive solution

The candidate solutions are all the subsets of the n performances, which is given by the power set, of size 2^n. We could iterate through each one and calculate the collected rent for each. But this has an exponential runtime.
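A brute-force sketch of this enumeration in Python (the instance data and helper names here are illustrative, not from the original):

```python
from itertools import combinations

# Hypothetical example instance: (start, end, price) tuples.
performances = [(0, 3, 50), (2, 5, 20), (4, 7, 70), (6, 9, 60)]

def overlaps(a, b):
    # Two half-open intervals [s, e) overlap iff each starts before the other ends.
    return a[0] < b[1] and b[0] < a[1]

def naive_max_rent(perfs):
    # Try every subset (the power set) and keep the best feasible one.
    best = 0
    for r in range(len(perfs) + 1):
        for subset in combinations(perfs, r):
            if all(not overlaps(a, b) for a, b in combinations(subset, 2)):
                best = max(best, sum(p for _, _, p in subset))
    return best
```

On the four-performance instance above, `naive_max_rent(performances)` returns 120 (booking the first and third performances), but the double loop over `combinations` visits all 2^n subsets.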

Potential high level questions

  • We’re maximizing total rent collected, so maybe we should start with the performances with the highest prices
  • Performances can’t overlap, so maybe we should keep track of the max possible money to be made in a given time period
  • Sort the performances by price so we can consider the higher prices first
  • Sort w.r.t. time so that we can easily check in linear time whether performances overlap

Some intuition

Let’s say that we process inputs sorted by their starting times, so s_1 ≤ s_2 ≤ … ≤ s_n. The question we should ask is: do I rent out to performance 1?

  • If yes, then we find the first performance j where s_j ≥ e_1, because anyone who starts before the first performance’s end time cannot be booked. Then we can say the total reward is given by p_1 + OPT(j), where OPT(j) is the best reward obtainable from performances j through n, as we’re considering the rest of the performers recursively.
  • If no, then the total reward is given by OPT(2), the best reward obtainable from performances 2 through n. Performance 1 simply goes unbooked, and we recurse on the rest of the performances.
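The two cases above combine into a single recurrence. Writing OPT(i) for the best reward obtainable from performances i through n (my notation, reconstructed from the description above):

```latex
\mathrm{OPT}(i) = \max\bigl(\, p_i + \mathrm{OPT}(j),\ \ \mathrm{OPT}(i+1) \,\bigr),
\qquad \text{where } j = \min\{\, k : s_k \ge e_i \,\}
```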

Greedily solving without caching

Similar to divide and conquer, we recurse into both options (yes or no), and then calculate the max reward we can get for each. We then compare which one returns the higher reward, p_1 + OPT(j) or OPT(2) (where j is the first performance starting at or after e_1), and choose that.

The base case is when the array only has one element (some performance i), in which case we rent it out to them and return p_i.

  1. Find j, the successor to the current first performance in the list: the first performance with s_j ≥ e_1.
  2. Run the algorithm on the subarray of performances j through n. Return yes = p_1 plus that result.
  3. Run the algorithm on the subarray of performances 2 through n. Return no = that result.
  4. Output max(yes, no).
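A direct, uncached implementation of these steps might look like the following. I’ve used an empty-suffix base case of 0 (which subsumes the single-performance case), and the names are my own:

```python
from bisect import bisect_left

def max_rent(perfs):
    # Sort by start time, as in the intuition section.
    perfs = sorted(perfs)
    starts = [s for s, _, _ in perfs]     # start times, for binary search

    def rec(i):
        if i >= len(perfs):
            return 0                      # no performances left
        s, e, p = perfs[i]
        # First performance j that starts at or after our end time.
        j = bisect_left(starts, e)
        yes = p + rec(j)                  # rent to performance i
        no = rec(i + 1)                   # skip performance i
        return max(yes, no)

    return rec(0)
```

This matches the brute-force answer on small inputs but still branches twice per call, which is the exponential behavior analyzed next.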


We can use binary search to find j, which runs in O(log n). Then, we are recursing onto an array of size n − j + 1 (if we chose yes), as well as an array of size n − 1 (if we chose no). The recurrence is then given by

T(n) = T(n − j + 1) + T(n − 1) + O(log n)

Which must be greater than or equal to

T(n) ≥ 2T(n − j + 1)

Worst case, j = 2, so both subarrays have size n − 1 and T(n) ≥ 2T(n − 1). We decrement by 1 each time so we will recurse n times, which means we multiply by 2 a total of n times. Therefore the runtime is O(2^n). This… isn’t any better than naive.
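Unrolling that worst case makes the blow-up explicit (the O(log n) term is dominated and dropped here):

```latex
T(n) \;\ge\; 2\,T(n-1) \;\ge\; 4\,T(n-2) \;\ge\; \dots \;\ge\; 2^{\,n-1}\,T(1) \;=\; \Theta(2^n)
```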

Memoizing the previous algorithm

Using divide and conquer like this repeats calls on the same subarrays, and so we waste time computing the same things over and over.

Let us cache the values that we’ve computed: before we execute a recursive call, we first check whether we’ve already computed its result, and if so we reuse the stored value instead of recursing.

Let M be an array that stores our computed values. If M[i] is non-null, this means we’ve already computed the max reward for the subarray of performances i through n. More formally, we can say M[i] = OPT(i), the maximum reward obtainable from performances i through n.

  1. Find j using binary search.
  2. Check if M[j] is non-null. If yes, then use that value. If not, then we actually recurse on performances j through n, and store the result into M[j].
  3. Do the same thing for the “no” branch: check M[2]; if it’s non-null use the stored value, and if it’s null then we recurse into performances 2 through n to calculate the value and store it in M[2].
  4. Output max(p_1 + M[j], M[2]).
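Putting the cache together with the earlier recursion, a sketch in Python (the array name M follows the description above; the other names are my own):

```python
from bisect import bisect_left

def max_rent_memo(perfs):
    perfs = sorted(perfs)                 # sort by start time
    starts = [s for s, _, _ in perfs]     # start times, for binary search
    M = [None] * (len(perfs) + 1)         # M[i] caches OPT(i)
    M[len(perfs)] = 0                     # empty suffix earns nothing

    def opt(i):
        if M[i] is None:                  # only compute each suffix once
            s, e, p = perfs[i]
            j = bisect_left(starts, e)    # first performance starting at/after e
            M[i] = max(p + opt(j), opt(i + 1))
        return M[i]

    return opt(0)
```

Each of the n cache slots is filled at most once, and filling one costs a single binary search, which is where the O(n log n) bound below comes from.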


We do an O(log n) binary search in each subproblem, of which there are n. Therefore, the runtime is O(n log n).