computer vision - Assessing the quality of an image with respect to compression?


I have a set of images that I am using for a computer vision task, and the task is sensitive to image quality. I'd like to remove all images that fall below a certain quality threshold, but I am unsure whether there is a method/heuristic to automatically detect images that have been heavily compressed via JPEG. Does anyone have an idea?

Image quality assessment is a rapidly developing research field. Since you don't mention being able to access the original (uncompressed) images, you are interested in no-reference image quality assessment. This is a pretty hard problem, but here are some points to get you started:

  • Since you mention JPEG, there are two major degradation features that manifest in JPEG-compressed images: blocking and blurring.
  • No-reference image quality assessment metrics typically look for those two features.
  • Blocking is fairly easy to pick up, as it appears only on macroblock boundaries. Macroblocks are of a fixed size: 8x8 or 16x16 pixels, depending on what the image was encoded with. (See the sketch after this list.)
  • Blurring is a bit more difficult. It occurs because the higher frequencies in the image have been attenuated (removed). You can break the image into blocks, DCT (Discrete Cosine Transform) each block, and look at the high-frequency components of the DCT result. If the high-frequency components are lacking in a majority of blocks, then you are probably looking at a blurry image. (Also sketched below.)
  • Another approach to blur detection is to measure the average width of the edges in the image. Perform Sobel edge detection on the image, then measure the distance between the local minima/maxima on each side of each edge. Google for "A no-reference perceptual blur metric" by Marziliano -- it's a famous approach. "No Reference Block Based Blur Detection" by Debing is a more recent paper.
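
To make the two block-based cues above concrete, here is a minimal Python/OpenCV sketch. It assumes grayscale input and an 8x8 block grid; the function names and the normalisation are my own, not taken from any standard metric:

    import numpy as np
    import cv2

    def blockiness(gray, block=8):
        # Ratio of the mean luminance jump across assumed 8x8 block
        # boundaries to the mean jump everywhere else; values well
        # above 1 suggest visible JPEG blocking.
        g = gray.astype(np.float32)
        col_diff = np.abs(np.diff(g, axis=1))   # horizontal neighbour differences
        row_diff = np.abs(np.diff(g, axis=0))   # vertical neighbour differences
        at_bounds = (col_diff[:, block - 1::block].mean() +
                     row_diff[block - 1::block, :].mean()) / 2
        overall = (col_diff.mean() + row_diff.mean()) / 2
        return at_bounds / (overall + 1e-6)

    def high_freq_fraction(gray, block=8):
        # Fraction of per-block DCT energy outside the low-frequency
        # (top-left) quadrant; a small value hints at a blurry image.
        h = gray.shape[0] - gray.shape[0] % block
        w = gray.shape[1] - gray.shape[1] % block
        g = gray[:h, :w].astype(np.float32)
        total = high = 0.0
        for y in range(0, h, block):
            for x in range(0, w, block):
                energy = cv2.dct(g[y:y + block, x:x + block]) ** 2
                total += energy.sum()
                high += energy.sum() - energy[:block // 2, :block // 2].sum()
        return high / (total + 1e-6)

    gray = cv2.imread("photo.jpg", cv2.IMREAD_GRAYSCALE)
    print(blockiness(gray), high_freq_fraction(gray))

Any decision thresholds on these two numbers would have to be tuned on your own image set.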

Regardless of the metric you use, think about how you will deal with false positives/negatives. Rather than simple thresholding, I'd use the metric result to sort the images and then snip off the end of the list that looks like it contains the blurriest images, for example as sketched below.
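
A trivial sketch of that sorting step, assuming a hypothetical scores dict that maps each filename to its metric value (lower meaning worse quality):

    # scores is hypothetical: {filename: metric_value}, lower = worse quality.
    ranked = sorted(scores.items(), key=lambda item: item[1])
    for name, score in ranked[:20]:   # inspect the worst 20 by eye
        print(f"{score:.3f}  {name}")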

Your task will be a lot simpler if your image set contains fairly similar content (e.g. faces only), because image quality assessment metrics can unfortunately be influenced by image content.

Google Scholar is truly your friend here. I wish I could give you a concrete solution, but I don't have one yet -- if I did, I'd be a very successful masters student.

Update:

I just thought of another idea: for each image, re-compress the image with JPEG and examine the change in file size before and after re-compression. If the file size after re-compression is significantly smaller than before, then the image probably was not heavily compressed, because it still had significant detail that was removed by the re-compression. Otherwise (very little difference, or a file size that is even greater after re-compression), the image was probably heavily compressed already.

The quality setting you use during re-compression will allow you to determine what "heavily compressed" means.

If you're on Linux, this shouldn't be hard to implement using bash and ImageMagick's convert utility.
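
If you'd rather stay in one script, here is a minimal Python sketch of the same idea using Pillow instead of shelling out to convert; the quality setting of 75 and the interpretation of the ratio are arbitrary assumptions to tune on your data:

    import os
    from PIL import Image

    def recompression_ratio(path, quality=75, tmp="recompressed.jpg"):
        # Re-save the image as JPEG and compare file sizes.
        # Ratio well below 1.0: re-compression removed detail, so the
        # input was probably not heavily compressed. Ratio near or
        # above 1.0: the input was probably heavily compressed already.
        Image.open(path).convert("RGB").save(tmp, "JPEG", quality=quality)
        ratio = os.path.getsize(tmp) / os.path.getsize(path)
        os.remove(tmp)
        return ratio

    print(recompression_ratio("photo.jpg"))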

You can also try other variations of this approach:

  • Instead of JPEG compression, try another form of degradation, such as Gaussian blurring.
  • Instead of merely comparing file sizes, try a full-reference metric such as SSIM -- there's an OpenCV implementation freely available, and other implementations (e.g. MATLAB, C#) exist too, so shop around. (A sketch follows this list.)
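
As an illustration of the SSIM variation, here is a sketch that degrades each image in memory and scores the change; it substitutes scikit-image's structural_similarity for the OpenCV sample mentioned above:

    import cv2
    from skimage.metrics import structural_similarity

    def self_degradation_ssim(path, quality=75):
        # SSIM between an image and a deliberately re-compressed copy.
        # Heavily compressed inputs change little under further
        # compression, so they score closer to 1.0.
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        ok, buf = cv2.imencode(".jpg", gray, [cv2.IMWRITE_JPEG_QUALITY, quality])
        degraded = cv2.imdecode(buf, cv2.IMREAD_GRAYSCALE)
        return structural_similarity(gray, degraded)

    print(self_degradation_ssim("photo.jpg"))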

Let me know how you go.

