An implementation of the Robertson HDR algorithm as a Core Image filter. Use it to create HDR images (without tone mapping) from a bracket of processed exposures. This algorithm is not meant to be used with RAW images.
When generating an HDR image, the result appears overexposed. The cause is that the HDR algorithm does not scale pixel values into the interval [0...1]. Therefore, the maximum pixel value has to be found so that all pixels can be scaled by the factor 1/max.
Better: compute a histogram, find the lower and upper 1% of pixel brightness values and clip them, so that numerical outliers are eliminated and the interval [0...1] is filled with meaningful data.
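The clipping-and-rescaling step described above can be sketched as follows. This is an illustrative Python version of the math only, not the filter's actual code; the function name `normalize_hdr` and the `clip_fraction` parameter are assumptions:

```python
def normalize_hdr(pixels, clip_fraction=0.01):
    """Scale pixel values into [0, 1] after clipping the darkest and
    brightest `clip_fraction` of values, so numerical outliers do not
    dominate the range (illustrative sketch, not the project's API)."""
    ordered = sorted(pixels)
    n = len(ordered)
    lo = ordered[int(clip_fraction * (n - 1))]
    hi = ordered[int((1.0 - clip_fraction) * (n - 1))]
    span = hi - lo or 1.0  # avoid division by zero for flat images
    # Values outside [lo, hi] are clamped to the ends of the interval.
    return [min(max((p - lo) / span, 0.0), 1.0) for p in pixels]
```

In the actual filter, the histogram and maximum would of course be computed on the GPU rather than over a Python list; the sketch only shows the intended mapping.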
When using photos downscaled to 1024x688, the response function can be estimated; this does not work for the full-resolution versions.
The HDR and response-summation shaders both expect a fixed number of input images, which is not made clear when using the meta computer. It only limits the number of images to a maximum (currently 5), but does not warn the user when fewer are supplied.
As a result, the algorithm only works if the number of images is exactly 5. This number must be made flexible.
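One way to remove the fixed image count would be to generate the kernel source for the actual number of inputs at runtime instead of hard-coding five samplers. The following Python sketch only illustrates the templating idea; the kernel language, function names (`sample`, `weight`) and weighting shown here are assumptions, not the project's real shader source:

```python
def make_hdr_kernel_source(n_images):
    """Build a pseudo-kernel that sums over exactly `n_images` inputs.
    Hypothetical sketch: the real shader source differs."""
    args = ", ".join(f"sampler img{i}" for i in range(n_images))
    terms = "\n    ".join(
        f"sum += weight(img{i}) * sample(img{i}, coord);"
        for i in range(n_images)
    )
    return (
        f"kernel vec4 hdr({args}) {{\n"
        f"    vec4 sum = vec4(0.0);\n"
        f"    {terms}\n"
        f"    return sum;\n"
        f"}}\n"
    )
```

An alternative would be to keep the five-input kernel and pad missing slots with zero-weight dummy images, but generating the source keeps the shader honest about how many images it actually consumes.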
Right now, all shaders reside in a single file, which makes the code hard to read and hard to maintain. The shaders should be moved into separate files with more descriptive filenames.
When calculating the response, even with high-quality input images, the plot of the resulting curve is neither monotonically increasing, nor does the smooth shader have any visible effect.
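Until the cause is found, the estimated response curve could be post-processed on the CPU: smooth it with a moving average, then enforce monotonicity with a running maximum. This is a workaround sketch in Python, not a description of how the smooth shader is supposed to work:

```python
def monotone_smooth(response, window=5):
    """Smooth a camera response curve with a centered moving average,
    then force it to be non-decreasing via a running maximum.
    Illustrative workaround; names and window size are assumptions."""
    n = len(response)
    half = window // 2
    smoothed = []
    for i in range(n):
        chunk = response[max(0, i - half):min(n, i + half + 1)]
        smoothed.append(sum(chunk) / len(chunk))
    out = []
    running = float("-inf")
    for v in smoothed:
        running = max(running, v)  # enforce monotonicity
        out.append(running)
    return out
```

Forcing monotonicity this way flattens any genuinely decreasing segments, so it masks the bug rather than fixing it; it would only make the curve usable as a lookup table in the meantime.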