>It'd have been good to mention the important bit ("the summed log-energy after quantization was the same for both") in the article.
Perhaps you're right. I was trying to err on the side of not going down too many technical ratholes, but I might have kept it too simple there. What I was worried about: the figure is dominated by the DC components, and so that quickly devolves into other questions about 'why no DC prediction?' or 'why didn't you leave the DC out of the figure, since it's the AC behavior that's interesting?' and so on.
>That said, that particular example uses very high quantization, and from what I've heard the problem with lapping is that while it helps at low rates it hurts at higher ones.
We've gotten far enough to see that it helps with both, at least in Daala. You can find a number of researchers on the net who say 'lapped transforms don't really help' but not much documentation of what they tried that didn't work... Tim mentions a bit of this in his slide deck.
>Apparently the issue is that they spread detail across more coefficients, so the end result is more expensive to code - hence my original question.
I would think the problem is not that they spread detail across more coefficients in a block (this is the problem with wavelets), but that they duplicate edge detail into multiple blocks because blocks overlap. Perhaps we're winning because our predictors are able to account for that.
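To make that concrete, here's a minimal numpy sketch: a single isolated detail lands in exactly one block under a plain 8-point DCT, but under an 8-coefficient lapped basis with 16-sample support it shows up in the two blocks that overlap it. The textbook MLT/MDCT basis stands in for the lapped transform here; it is not Daala's actual lapping, and the signal, block size, and threshold are just illustration choices.

    import numpy as np

    N = 8                               # coefficients per block
    x = np.zeros(64)
    x[35] = 1.0                         # one isolated detail, away from block edges

    # Orthonormal 8-point DCT-II (rows are basis functions).
    n = np.arange(N)
    dct = np.sqrt(2.0 / N) * np.cos(np.pi * (n[None, :] + 0.5) * n[:, None] / N)
    dct[0, :] /= np.sqrt(2.0)

    # Stand-in lapped basis: MLT/MDCT, 8 functions of length 16
    # (sine window times cosine modulation).  Not Daala's actual lapping.
    m = np.arange(2 * N)
    win = np.sin(np.pi * (m + 0.5) / (2 * N))
    mlt = np.sqrt(2.0 / N) * win[None, :] * np.cos(
        np.pi * (m[None, :] + 0.5 + N / 2.0) * (n[:, None] + 0.5) / N)

    # Per-block coefficient energy: the DCT hops 8 with 8-sample support,
    # the lapped basis hops 8 with 16-sample support.
    dct_energy = [np.sum((dct @ x[b:b + N]) ** 2) for b in range(0, 64, N)]
    mlt_energy = [np.sum((mlt @ x[b:b + 2 * N]) ** 2) for b in range(0, 64 - N, N)]

    print("blocks touched, 8-point DCT:    ", sum(e > 1e-12 for e in dct_energy))
    print("blocks touched, lapped (16-tap):", sum(e > 1e-12 for e in mlt_energy))

It should print 1 block touched for the DCT and 2 for the lapped basis.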
Another potential drawback is that lapping is going to spread ringing farther; an 8x8 lapped transform operating with 16x16 support is going to have ringing somewhere between an 8x8 and 16x16 DCT. That said, a 4x4 or 8x8 lapped transform is going to have as much coding gain as an 8x8 or 16x16 unlapped DCT, so we still theoretically come out ahead. Of course, you have to use a transform coefficient set that actually delivers that much coding gain, and we've spent quite a lot of time looking for them.
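For anyone who wants to put rough numbers on that, the usual exercise is transform coding gain for an AR(1) source (rho = 0.95 is the classic test value). The sketch below computes it for unlapped DCTs of size 4, 8, and 16 and, as a stand-in lapped basis with 2N support, the same MLT/MDCT as above; that basis is not Daala's coefficient set or an optimized LOT, so its numbers are only a floor on what a tuned lapped transform with the same support can reach.

    import numpy as np

    RHO = 0.95      # the classic AR(1) correlation used in coding-gain tables

    def dct_basis(N):
        # Orthonormal DCT-II, rows are length-N basis functions.
        k = np.arange(N)[:, None]
        n = np.arange(N)[None, :]
        t = np.sqrt(2.0 / N) * np.cos(np.pi * (n + 0.5) * k / N)
        t[0, :] /= np.sqrt(2.0)
        return t

    def mlt_basis(N):
        # Stand-in lapped basis (MLT/MDCT): N functions of length 2N,
        # sine window times cosine modulation.  Not Daala's coefficient set.
        k = np.arange(N)[:, None]
        m = np.arange(2 * N)[None, :]
        win = np.sin(np.pi * (m + 0.5) / (2 * N))
        return np.sqrt(2.0 / N) * win * np.cos(
            np.pi * (m + 0.5 + N / 2.0) * (k + 0.5) / N)

    def coding_gain_db(basis):
        # Coefficient variances p_k' R p_k under a unit-variance AR(1) input,
        # then the usual gain-over-PCM formula for an orthogonal transform:
        # G = sigma_x^2 / geometric_mean(variances).
        L = basis.shape[1]
        R = RHO ** np.abs(np.arange(L)[:, None] - np.arange(L)[None, :])
        var = np.einsum('kl,lm,km->k', basis, R, basis)
        return 10.0 * np.log10(1.0 / np.exp(np.log(var).mean()))

    for N in (4, 8, 16):
        print(f"DCT-{N}: {coding_gain_db(dct_basis(N)):.2f} dB")
    for N in (4, 8):
        print(f"lapped {N} ({2 * N}-tap MLT): {coding_gain_db(mlt_basis(N)):.2f} dB")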