Fast Algorithms for Knapsack via Convolution and Prediction
Abstract
The knapsack problem is a fundamental problem in combinatorial optimization. It has been studied extensively from theoretical as well as practical perspectives, as it is one of the best-known NP-hard problems. The goal is to pack a knapsack of size t with the maximum total value from a collection of n items with given sizes and values.
Recent evidence suggests that the classic O(nt) dynamic-programming solution for the knapsack problem might be the fastest in the worst case. In fact, solving the knapsack problem was shown to be equivalent to the (min,+) convolution problem (Cygan et al., ICALP 2017), which is conjectured to face a quadratic-time barrier. This hardness is in contrast to the more famous (+,·) convolution (generally known as polynomial multiplication), which has an O(n log n)-time solution via the Fast Fourier Transform.
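As a point of reference, the sketch below (in Python, with names of my own choosing; it is not taken from the paper) shows the classic O(nt) dynamic program alongside the naive quadratic (max,+) convolution that the hardness result relates it to.

    def knapsack_dp(sizes, values, t):
        # Classic O(nt) dynamic program: best[c] is the maximum value
        # achievable with total size at most c.
        best = [0] * (t + 1)
        for s, v in zip(sizes, values):
            # Scan capacities downward so that each item is used at most once.
            for c in range(t, s - 1, -1):
                best[c] = max(best[c], best[c - s] + v)
        return best[t]

    def maxplus_convolution(a, b):
        # Naive (max,+) convolution: c[k] = max over i + j = k of a[i] + b[j].
        # This takes O(n^2) time; no truly subquadratic algorithm is known,
        # unlike (+,*) convolution, which FFT solves in O(n log n) time.
        n, m = len(a), len(b)
        c = [float("-inf")] * (n + m - 1)
        for i in range(n):
            for j in range(m):
                c[i + j] = max(c[i + j], a[i] + b[j])
        return c

    print(knapsack_dp([3, 4, 2], [4, 5, 3], 6))       # 8 (take the items of size 4 and 2)
    print(maxplus_convolution([0, 4, 5], [0, 3, 7]))  # [0, 4, 7, 11, 12]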
Our main results are algorithms with near-linear running times for the knapsack problem when either the values or the sizes of the items are small integers. More specifically, if the item sizes are integers bounded by s_max, the running time of our algorithm is O~((n + t) s_max). If the item values are integers bounded by v_max, our algorithm runs in time O~(n + t v_max). The best previously known running times were O(nt), O(n^2 s_max), and O(n s_max v_max) (Pisinger, J. of Alg., 1999).
At the core of our algorithms lies the prediction technique: roughly speaking, this new technique enables us to compute the convolution of two vectors in time O(n e_max) when an approximation of the solution within an additive error of e_max is available.
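To convey the flavor of this guarantee, here is a toy sketch in Python; it is my own simplification under an assumed form of the prediction (a window of at most e_max candidate split points per output index, guaranteed to contain an optimal one), not the paper's actual algorithm.

    def convolve_with_prediction(a, b, predicted_start, e_max):
        # Assumed prediction: for each output index k, some optimal split j of
        # c[k] = max over j of a[j] + b[k - j] lies in the window
        # [predicted_start[k], predicted_start[k] + e_max). Checking only that
        # window gives O(n * e_max) total work instead of O(n^2).
        n, m = len(a), len(b)
        c = [float("-inf")] * (n + m - 1)
        for k in range(n + m - 1):
            lo = max(0, k - (m - 1), predicted_start[k])
            hi = min(n - 1, k, predicted_start[k] + e_max - 1)
            for j in range(lo, hi + 1):  # at most e_max candidates per entry
                c[k] = max(c[k], a[j] + b[k - j])
        return c

    # A trivially valid prediction: every window covers the whole input.
    print(convolve_with_prediction([0, 4, 5], [0, 3, 7], [0] * 5, 3))  # [0, 4, 7, 11, 12]

The point of the sketch is only that a reliable per-entry hint shrinks the search space; how such hints are obtained and exploited in the paper is more involved.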
Our results also have implications for algorithms for several other problems, including tree sparsity, tree separability, and the unbounded knapsack problem, when some of the relevant numerical input values are bounded.