Seam Generation Using A*

The key to content-aware image resizing (and several other recent image algorithms) is the discovery of "seams" -- paths of pixels that differ little from their neighbors. Such seams are places where the picture can be "pulled apart" gracefully. In image resizing, the seam is discarded and the two remaining pieces are glued together; repeated over and over, this produces the astonishing resizing behavior.
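To make the discard-and-glue step concrete, here is a minimal sketch (not the code behind this post) of the classic approach: given an energy map, find the cheapest top-to-bottom vertical seam by dynamic programming, then splice one pixel out of each row.

```python
def find_vertical_seam(energy):
    """Return, for each row, the column of the minimum-cost
    top-to-bottom seam (dynamic programming, 8-connected moves)."""
    rows, cols = len(energy), len(energy[0])
    # cost[r][c] = cheapest seam ending at (r, c)
    cost = [row[:] for row in energy]
    for r in range(1, rows):
        for c in range(cols):
            best = cost[r - 1][c]
            if c > 0:
                best = min(best, cost[r - 1][c - 1])
            if c < cols - 1:
                best = min(best, cost[r - 1][c + 1])
            cost[r][c] += best
    # Walk back up from the cheapest bottom-row cell.
    seam = [min(range(cols), key=lambda c: cost[-1][c])]
    for r in range(rows - 2, -1, -1):
        c = seam[-1]
        candidates = [cc for cc in (c - 1, c, c + 1) if 0 <= cc < cols]
        seam.append(min(candidates, key=lambda cc: cost[r][cc]))
    seam.reverse()
    return seam

def remove_seam(image, seam):
    """Delete one pixel per row and glue the two halves together."""
    return [row[:c] + row[c + 1:] for row, c in zip(image, seam)]
```

With a toy energy grid whose left column is cheap, the seam runs straight down the left side -- exactly the "R" example below.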

In the image in this post, a blown-up anti-aliased letter "R", an obvious seam exists straight down the left side. A not-quite-as-good seam might run down the right side, cutting diagonally back across the tail of the "R."

Other people have used seams to automate collaging and masking. All very interesting, but the point of this post is to talk very briefly about the algorithms used to generate the seams.

Optimal seam generation is NP-hard, so it boils down to what works efficiently. Graph cuts are the preferred method, but I just can't get out of my head the idea that the simple path-finding method called A* might be effective.

The thing that is nagging at me is that A* uses a cost approximation to figure out the next step forward; it seems to me that when manipulating many seams, using approximations that might last for dozens of seams would have a big benefit. The other obvious thing about A* is that the approximations are easy to build: recursively develop lower-and-lower resolution images.
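Here is a sketch of what that A* search could look like -- states are pixels, moves go down to the three 8-connected neighbors, and the heuristic is left as a parameter. The `heuristic` function is an assumption of this sketch (e.g., the cheapest completion cost measured in a coarser image, scaled back up); with `heuristic = lambda r, c: 0` the search degenerates to Dijkstra.

```python
import heapq

def astar_vertical_seam(energy, heuristic):
    """A* over the pixel graph: g = summed energy so far,
    h = heuristic(row, col), an (ideally admissible) estimate of
    the cost to finish the seam at the bottom row."""
    rows, cols = len(energy), len(energy[0])
    # Frontier entries: (g + h, g, row, col, path-so-far).
    frontier = [(heuristic(0, c) + energy[0][c], energy[0][c], 0, c, (c,))
                for c in range(cols)]
    heapq.heapify(frontier)
    best_g = {}
    while frontier:
        f, g, r, c, path = heapq.heappop(frontier)
        if best_g.get((r, c), float("inf")) <= g:
            continue  # already reached this pixel more cheaply
        best_g[(r, c)] = g
        if r == rows - 1:
            return list(path)  # cheapest seam found
        for cc in (c - 1, c, c + 1):
            if 0 <= cc < cols:
                ng = g + energy[r + 1][cc]
                heapq.heappush(frontier,
                               (ng + heuristic(r + 1, cc), ng,
                                r + 1, cc, path + (cc,)))
    return None
```

The whole question is whether a pyramid-derived heuristic prunes enough of the frontier, and stays valid across enough successive seam removals, to beat recomputing from scratch.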

For instance, when the above image is converted into brightness values, you can average neighbors to create the values shown in yellow, and average those to create the values in red. (Did you think "Haar wavelet"?)
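That averaging step is just a 2x2 block mean, recursively applied -- the low-pass half of a Haar decomposition. A minimal sketch (assuming even dimensions):

```python
def downsample(values):
    """Halve the resolution of a brightness grid by averaging
    each 2x2 block -- the low-pass half of a Haar step."""
    return [[(values[r][c] + values[r][c + 1]
              + values[r + 1][c] + values[r + 1][c + 1]) / 4
             for c in range(0, len(values[0]) - 1, 2)]
            for r in range(0, len(values) - 1, 2)]
```

Applying it once gives the "yellow" values; applying it again gives the "red" ones.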

Initially, this looks promising (the left-most yellow column is a clear winner), but at coarse levels it can be deceptive: in the red, lowest-resolution "image," you would clearly be drawn toward the right side for initial processing. Intuitively, though, most images should have several side-by-side seams, and those would show up at coarser levels. Also, after a seam is removed, you could "mark dirty" the approximations (i.e., the coarser calculations) and trigger recalculation only once the number of removed seams became significant -- and, obviously, skip recalculating areas the seam did not pass through.
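One way to realize the dirty-marking idea -- this is a hypothetical helper, not something from the post, and it ignores the column shifts that accumulate after removal -- is to map each removed seam pixel to its cell in the coarse grid and invalidate only those cells:

```python
def dirty_cells(seam, level):
    """Coarse-grid cells (downsampled by 2**level) touched by a
    removed vertical seam, given as one column index per row.
    Only these cached approximations would need recomputing."""
    return {(r >> level, c >> level) for r, c in enumerate(seam)}
```

Cells the seam never crossed keep their cached averages until enough nearby seams have been pulled out to make them stale.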

This is kind of killing me, because I don't have time to explore an algorithm that I only suspect will be efficient.