Set colour axis limits in OpenCV 4 (C++) akin to MATLAB's CAXIS
MATLAB offers the ability to set colour limits for the current axes using CAXIS. OpenCV has applyColorMap, which can be used to highlight differences in pixel intensity in a greyscale image and which, I believe, maps pixels from 0 to 255.



I am new to MATLAB/image processing and have been asked to port a simple program from MATLAB that uses the CAXIS function to change the "brightness" of a colour map. I have no experience in MATLAB, but it appears this function "lowers" the intensity a pixel needs in order to be mapped to a more intense colour on the map,



e.g., with the "JET" colour map:




  • When brightness = 1, red = 255

  • When brightness = 10, red >= 25


The MATLAB program allows 16-bit images to be read in and displayed, which obviously gives higher pixel values, whereas everything I've read and tried indicates OpenCV only supports 8-bit images (for colour maps).



Therefore my question is: is it possible to provide similar functionality in OpenCV? How do you set the axis limit for a colour map, or how do you scale the colour map lookup table so that "less" intense pixels are mapped to the more intense regions?



A similar question was asked, with a reply stating the array needs to be "normalised", but unfortunately I don't quite know how to achieve this and can't reply to the answer as I don't have enough rep!



I have gone ahead and used cv::normalize to set the maximum value in the array to maxPixelValue/brightness, but that doesn't work at all.



I have also experimented with converting my 16-bit image into CV_8UC1 with a scale factor, to no avail. Any help would be greatly appreciated!
      c++ matlab opencv image-processing
asked 2 days ago by Christopher Whiting
2 Answers
In my opinion you can use cv::normalize to "crop" the values in the source picture to the corresponding range of the colour map you are interested in. Say you want your image mapped to the blue-ish region of the Jet colour map; then you should do something like:



int minVal = 0, maxVal = 80;
cv::normalize(src, dst, minVal, maxVal, cv::NORM_MINMAX);


If you plan to apply some kind of custom map, it's fairly easy for a 1- or 3-channel 8-bit image: you only need to create a LUT with 256 values (with the proper number of channels) and apply it using cv::LUT; more about it in this blog, and also see the docs about LUT.
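As a concrete illustration of such a table, here is a minimal sketch (my own, not from the answer; the function name and the idea of anchoring the scale at a `caxisMax` value are assumptions) that builds a 256-entry LUT which stretches the range [0, caxisMax] over the full 0-255 output, saturating everything above it, roughly like MATLAB's caxis([0 caxisMax]):

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Build a 256-entry "caxis"-style lookup table: input values at or above
// caxisMax saturate to 255, values below are linearly stretched.
std::vector<std::uint8_t> makeCaxisLut(int caxisMax)
{
    std::vector<std::uint8_t> lut(256);
    for (int i = 0; i < 256; ++i)
    {
        // Linear stretch, clamped to the 8-bit maximum.
        int v = static_cast<int>(i * 255.0 / caxisMax);
        lut[i] = static_cast<std::uint8_t>(std::min(v, 255));
    }
    return lut;
}
```

The table could then be wrapped in a 1x256 CV_8U cv::Mat and applied with cv::LUT before calling applyColorMap.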



If the image you are working on is of a different depth, 16-bit or even floating-point data, I guess all you need to do is write a function like:



template<class T>
T customColorMapper(T input_pixel)
{
    T output_pixel = 0;
    // do something with output_pixel based on input_pixel
    return output_pixel;
}


          and apply it to each source image pixel like:



cv::Mat dst_image = src_image.clone(); // copy data
dst_image.forEach<TYPE>([](TYPE& input_pixel, const int* pos_row_col) -> void {
    input_pixel = customColorMapper<TYPE>(input_pixel);
});


Of course, TYPE needs to be a valid type. Maybe a specialized version of this function taking cv::Scalar or cv::Vec3-something would be nice if you need to work with multiple channels.



          Hope this helps!
answered 2 days ago by michelson
  • Thanks for taking the time to reply. I tried some of the techniques you mentioned, but to no avail. Unfortunately I'm having to be a little vague with the requirements, otherwise I would have shared more code/screenshots. I did manage to get it working though! I resorted to a manual loop, setting each pixel to the MAX value if it was above the calculated threshold; if not, I multiplied it by a scale factor. See my updated post for more info.
    – Christopher Whiting
    yesterday
I managed to replicate the MATLAB behaviour, but I had to resort to manually iterating over each pixel, either setting the value to the maximum value for the image depth or scaling the value where needed.



My code looked something like this:



double min, max;
cv::minMaxLoc(dst, &min, &max);

double axisThreshold = floor(max / contrastLevel);

for (int i = 0; i < dst.rows; i++)
{
    for (int j = 0; j < dst.cols; j++)
    {
        // The image is 16-bit unsigned, so read the pixel as ushort
        // (not short), otherwise values above 32767 would appear negative.
        ushort pixel = dst.at<ushort>(i, j);
        if (pixel >= axisThreshold)
        {
            pixel = USHRT_MAX;
        }
        else
        {
            pixel = cv::saturate_cast<ushort>(pixel * (USHRT_MAX / axisThreshold));
        }
        dst.at<ushort>(i, j) = pixel;
    }
}


In my example I had a slider which adjusted the contrast/brightness (we called it contrast; the original implementation called it brightness).



When the contrast/brightness was changed, the program would retrieve the maximum pixel value and then compute the axis limit as:



          calculatedThreshold = Max pixel value / contrast



Each pixel at or above the threshold is set to MAX; each pixel below the threshold is multiplied by a scale factor calculated as:



          scale = MAX Pixel Value / calculatedThreshold.
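Taken together, the two formulas above can be sketched as a single per-pixel helper (a hypothetical function of my own; the name and the 16-bit maximum of 65535 are assumptions, not part of the original program):

```cpp
#include <cmath>
#include <cstdint>

// Sketch of the per-pixel rule described above for 16-bit unsigned data:
// pixels at or above maxVal/contrastLevel clamp to 65535; the rest are
// stretched by 65535 / threshold.
std::uint16_t applyContrast(std::uint16_t pixel, double maxVal, double contrastLevel)
{
    const double threshold = std::floor(maxVal / contrastLevel);
    if (pixel >= threshold)
        return 65535;
    return static_cast<std::uint16_t>(pixel * (65535.0 / threshold));
}
```

With contrastLevel = 1 the mapping is the identity; raising the contrast lowers the threshold, so ever dimmer pixels get pushed to the top of the range.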



To be honest, I can't say I fully understand the maths behind it; I just used trial and error until it worked, so any help in that department would be appreciated. However, it seems to do what I want!



My understanding of the initial MATLAB implementation and the terminology "brightness" is that it is in fact an attempt to scale the colour map so that the "brighter" the image, the less intense each pixel has to be to map to a particular colour in the colour map.



Since applyColorMap only works on 8-bit images, when the brightness increases and the colour map axis values decrease, we need to ensure the pixel values scale accordingly so that they match up with the "higher" intensity values in the map.



I have seen numerous OpenCV tutorials which use this approach to change the contrast/brightness, but they often promote the use of the optimised convertTo (especially if you're trying to use the GPU). However, as far as I can see, convertTo applies the alpha/beta values uniformly and not on a pixel-by-pixel basis, therefore I couldn't use that approach.



I will update this answer if I find more suitable OpenCV functions to achieve what I want.
            Your Answer






            StackExchange.ifUsing("editor", function () {
            StackExchange.using("externalEditor", function () {
            StackExchange.using("snippets", function () {
            StackExchange.snippets.init();
            });
            });
            }, "code-snippets");

            StackExchange.ready(function() {
            var channelOptions = {
            tags: "".split(" "),
            id: "1"
            };
            initTagRenderer("".split(" "), "".split(" "), channelOptions);

            StackExchange.using("externalEditor", function() {
            // Have to fire editor after snippets, if snippets enabled
            if (StackExchange.settings.snippets.snippetsEnabled) {
            StackExchange.using("snippets", function() {
            createEditor();
            });
            }
            else {
            createEditor();
            }
            });

            function createEditor() {
            StackExchange.prepareEditor({
            heartbeatType: 'answer',
            autoActivateHeartbeat: false,
            convertImagesToLinks: true,
            noModals: true,
            showLowRepImageUploadWarning: true,
            reputationToPostImages: 10,
            bindNavPrevention: true,
            postfix: "",
            imageUploader: {
            brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
            contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
            allowUrls: true
            },
            onDemand: true,
            discardSelector: ".discard-answer"
            ,immediatelyShowMarkdownHelp:true
            });


            }
            });














            draft saved

            draft discarded


















            StackExchange.ready(
            function () {
            StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fstackoverflow.com%2fquestions%2f53945145%2fset-colour-limit-axis-in-opencv-4-c-akin-to-matlabs-caxis%23new-answer', 'question_page');
            }
            );

            Post as a guest















            Required, but never shown

























            2 Answers
            2






            active

            oldest

            votes








            2 Answers
            2






            active

            oldest

            votes









            active

            oldest

            votes






            active

            oldest

            votes









            1














            In my opinion you can use cv::normalize to "crop" values in the source picture to the corresponding ones in color map you are interested in. Say you want your image to be mapped to the blue-ish region of Jet colormap then you should do something like:



            int minVal = 0, maxVal = 80;
            cv::normalize(src,dst, minVal, maxVal, cv::NORM_MINMAX);


            If you plan to apply some kind of custom map it's fairly easy for 1-or3-channel 8-bit image, you only need to create LUT with 255 values (with proper number of channels) and apply it using cv::LUT, more about it in this blog, also see the dosc about LUT



            If the image you are working is of different depth, 16-bit or even floating point data I guess all you need to do is write a function like:



            template<class T> 
            T customColorMapper(T input_pixel)
            {
            T output_pixel = 0;
            // do something with output_pixel basing on intput_pixel
            return output_pixel;
            }


            and apply it to each source image pixel like:



            cv::Mat dst_image = src_image.clone(); //copy data
            dst_image.forEach<TYPE>((TYPE& input_pixel, const int* pos_row_col) -> void {
            input_pixel = customColorMapper<TYPE>(input_pixel);
            });


            of course TYPE need to be a valid type. Maybe specialized version of this function taking cv::Scalar or cv::Vec3-something would be nice if you need to work with multiple channels.



            Hope this helps!






            share|improve this answer





















            • Thanks for taking the time to reply. I tried some of the techniques you mentioned but to no avail. Unfortunately I'm having to be a little vague with the requirements otherwise I would have shared more code/printscreens. I did manage to get it working through! I resorted to using a manual loop setting each pixel to the MAX value if it was more than the calculated threshold, if not i multiplied by a scale factor. See my updated post for more info.
              – Christopher Whiting
              yesterday


















            1














            In my opinion you can use cv::normalize to "crop" values in the source picture to the corresponding ones in color map you are interested in. Say you want your image to be mapped to the blue-ish region of Jet colormap then you should do something like:



            int minVal = 0, maxVal = 80;
            cv::normalize(src,dst, minVal, maxVal, cv::NORM_MINMAX);


            If you plan to apply some kind of custom map it's fairly easy for 1-or3-channel 8-bit image, you only need to create LUT with 255 values (with proper number of channels) and apply it using cv::LUT, more about it in this blog, also see the dosc about LUT



            If the image you are working is of different depth, 16-bit or even floating point data I guess all you need to do is write a function like:



            template<class T> 
            T customColorMapper(T input_pixel)
            {
            T output_pixel = 0;
            // do something with output_pixel basing on intput_pixel
            return output_pixel;
            }


            and apply it to each source image pixel like:



            cv::Mat dst_image = src_image.clone(); //copy data
            dst_image.forEach<TYPE>((TYPE& input_pixel, const int* pos_row_col) -> void {
            input_pixel = customColorMapper<TYPE>(input_pixel);
            });


            of course TYPE need to be a valid type. Maybe specialized version of this function taking cv::Scalar or cv::Vec3-something would be nice if you need to work with multiple channels.



            Hope this helps!






            share|improve this answer





















            • Thanks for taking the time to reply. I tried some of the techniques you mentioned but to no avail. Unfortunately I'm having to be a little vague with the requirements otherwise I would have shared more code/printscreens. I did manage to get it working through! I resorted to using a manual loop setting each pixel to the MAX value if it was more than the calculated threshold, if not i multiplied by a scale factor. See my updated post for more info.
              – Christopher Whiting
              yesterday
















            1












            1








            1






            In my opinion you can use cv::normalize to "crop" values in the source picture to the corresponding ones in color map you are interested in. Say you want your image to be mapped to the blue-ish region of Jet colormap then you should do something like:



            int minVal = 0, maxVal = 80;
            cv::normalize(src,dst, minVal, maxVal, cv::NORM_MINMAX);


            If you plan to apply some kind of custom map it's fairly easy for 1-or3-channel 8-bit image, you only need to create LUT with 255 values (with proper number of channels) and apply it using cv::LUT, more about it in this blog, also see the dosc about LUT



            If the image you are working is of different depth, 16-bit or even floating point data I guess all you need to do is write a function like:



            template<class T> 
            T customColorMapper(T input_pixel)
            {
            T output_pixel = 0;
            // do something with output_pixel basing on intput_pixel
            return output_pixel;
            }


            and apply it to each source image pixel like:



            cv::Mat dst_image = src_image.clone(); //copy data
            dst_image.forEach<TYPE>((TYPE& input_pixel, const int* pos_row_col) -> void {
            input_pixel = customColorMapper<TYPE>(input_pixel);
            });


            of course TYPE need to be a valid type. Maybe specialized version of this function taking cv::Scalar or cv::Vec3-something would be nice if you need to work with multiple channels.



            Hope this helps!






            share|improve this answer












            In my opinion you can use cv::normalize to "crop" values in the source picture to the corresponding ones in color map you are interested in. Say you want your image to be mapped to the blue-ish region of Jet colormap then you should do something like:



            int minVal = 0, maxVal = 80;
            cv::normalize(src,dst, minVal, maxVal, cv::NORM_MINMAX);


            If you plan to apply some kind of custom map it's fairly easy for 1-or3-channel 8-bit image, you only need to create LUT with 255 values (with proper number of channels) and apply it using cv::LUT, more about it in this blog, also see the dosc about LUT



            If the image you are working is of different depth, 16-bit or even floating point data I guess all you need to do is write a function like:



            template<class T> 
            T customColorMapper(T input_pixel)
            {
            T output_pixel = 0;
            // do something with output_pixel basing on intput_pixel
            return output_pixel;
            }


            and apply it to each source image pixel like:



            cv::Mat dst_image = src_image.clone(); //copy data
            dst_image.forEach<TYPE>((TYPE& input_pixel, const int* pos_row_col) -> void {
            input_pixel = customColorMapper<TYPE>(input_pixel);
            });


            of course TYPE need to be a valid type. Maybe specialized version of this function taking cv::Scalar or cv::Vec3-something would be nice if you need to work with multiple channels.



            Hope this helps!







            share|improve this answer












            share|improve this answer



            share|improve this answer










            answered 2 days ago









            michelson

            83110




            83110












            • Thanks for taking the time to reply. I tried some of the techniques you mentioned but to no avail. Unfortunately I'm having to be a little vague with the requirements otherwise I would have shared more code/printscreens. I did manage to get it working through! I resorted to using a manual loop setting each pixel to the MAX value if it was more than the calculated threshold, if not i multiplied by a scale factor. See my updated post for more info.
              – Christopher Whiting
              yesterday




















            • Thanks for taking the time to reply. I tried some of the techniques you mentioned but to no avail. Unfortunately I'm having to be a little vague with the requirements otherwise I would have shared more code/printscreens. I did manage to get it working through! I resorted to using a manual loop setting each pixel to the MAX value if it was more than the calculated threshold, if not i multiplied by a scale factor. See my updated post for more info.
              – Christopher Whiting
              yesterday


















            Thanks for taking the time to reply. I tried some of the techniques you mentioned but to no avail. Unfortunately I'm having to be a little vague with the requirements otherwise I would have shared more code/printscreens. I did manage to get it working through! I resorted to using a manual loop setting each pixel to the MAX value if it was more than the calculated threshold, if not i multiplied by a scale factor. See my updated post for more info.
            – Christopher Whiting
            yesterday






            Thanks for taking the time to reply. I tried some of the techniques you mentioned but to no avail. Unfortunately I'm having to be a little vague with the requirements otherwise I would have shared more code/printscreens. I did manage to get it working through! I resorted to using a manual loop setting each pixel to the MAX value if it was more than the calculated threshold, if not i multiplied by a scale factor. See my updated post for more info.
            – Christopher Whiting
            yesterday















            0














            I managed to replicate the MATLAB behaviour but had to resort to manually iterating over each pixel and setting the value to the maximum value for the image depth or scaling the value where needed.



            my code looked something like this



            cv::minMaxLoc(dst, &min, &max);

            double axisThreshold = floor(max / contrastLevel);

            for (int i = 0; i < dst.rows; i++)
            {
            for (int j = 0; j < dst.cols; j++)
            {
            short pixel = dst.at<short>(i, j);
            if (pixel >= axisThreshold)
            {
            pixel = USHRT_MAX;
            }
            else
            {
            pixel *= (USHRT_MAX / axisThreshold);
            }
            dst.at<short>(i, j) = cv::saturate_cast<short>(pixel);
            }
            }


            In my example I had a slider which adjusted the contrast/brightness (we called it contrast, the original implementation called it brightness).



            When the contrast/brightness was changed, the program would retrieve the maximum pixel value and then compute the axis limit by doing



            calculatedThreshold = Max pixel value / contrast



            Each pixel more than the threshold gets set to MAX, each pixel lower than the threshold gets multiplied by a scale factor calculated by



            scale = MAX Pixel Value / calculatedThreshold.



            TBH i can't say I fully understand the maths behind it. I just used trial and error until it worked; any help in that department would be appreciated HOWEVER it seems to do what i want to!



            My understanding of the initial matlab implementation and the terminology "brightness" is in fact their attempt to scale the colourmap so that the "brighter" the image, the less intense each pixel had to be to map to a particular colour in the colourmap.



            Since applycolourmap only works on 8 bit images, when the brightness increases and the colourmap axis values decrease, we need to ensure the values of the pixels scale accordingly so that they now match up with the "higher" intensity values in the map.



            I have seen numerous OPENCV tutorials which use this approach to changing the contrast/brightness but they often promote the use of optimised convertTo (especially if you're trying to use the GPU). However as far as I can see, convertTo applies the aplha/beta values uniformly and not on a pixel by pixel basis therefore I can't use that approach.



            I will update this question If i found more suitable OPENCV functions to achieve what I want.






            share|improve this answer




























              0














              I managed to replicate the MATLAB behaviour but had to resort to manually iterating over each pixel and setting the value to the maximum value for the image depth or scaling the value where needed.



              my code looked something like this



              cv::minMaxLoc(dst, &min, &max);

              double axisThreshold = floor(max / contrastLevel);

              for (int i = 0; i < dst.rows; i++)
              {
              for (int j = 0; j < dst.cols; j++)
              {
              short pixel = dst.at<short>(i, j);
              if (pixel >= axisThreshold)
              {
              pixel = USHRT_MAX;
              }
              else
              {
              pixel *= (USHRT_MAX / axisThreshold);
              }
              dst.at<short>(i, j) = cv::saturate_cast<short>(pixel);
              }
              }


              In my example I had a slider which adjusted the contrast/brightness (we called it contrast, the original implementation called it brightness).



              When the contrast/brightness was changed, the program would retrieve the maximum pixel value and then compute the axis limit by doing



              calculatedThreshold = Max pixel value / contrast



              Each pixel more than the threshold gets set to MAX, each pixel lower than the threshold gets multiplied by a scale factor calculated by



              scale = MAX Pixel Value / calculatedThreshold.



              TBH i can't say I fully understand the maths behind it. I just used trial and error until it worked; any help in that department would be appreciated HOWEVER it seems to do what i want to!



              My understanding of the initial matlab implementation and the terminology "brightness" is in fact their attempt to scale the colourmap so that the "brighter" the image, the less intense each pixel had to be to map to a particular colour in the colourmap.



              Since applycolourmap only works on 8 bit images, when the brightness increases and the colourmap axis values decrease, we need to ensure the values of the pixels scale accordingly so that they now match up with the "higher" intensity values in the map.



              I have seen numerous OPENCV tutorials which use this approach to changing the contrast/brightness but they often promote the use of optimised convertTo (especially if you're trying to use the GPU). However as far as I can see, convertTo applies the aplha/beta values uniformly and not on a pixel by pixel basis therefore I can't use that approach.



              I will update this question If i found more suitable OPENCV functions to achieve what I want.






              share|improve this answer


























                0












                0








                0






                I managed to replicate the MATLAB behaviour but had to resort to manually iterating over each pixel and setting the value to the maximum value for the image depth or scaling the value where needed.



                my code looked something like this



                cv::minMaxLoc(dst, &min, &max);

                double axisThreshold = floor(max / contrastLevel);

                for (int i = 0; i < dst.rows; i++)
                {
                for (int j = 0; j < dst.cols; j++)
                {
                short pixel = dst.at<short>(i, j);
                if (pixel >= axisThreshold)
                {
                pixel = USHRT_MAX;
                }
                else
                {
                pixel *= (USHRT_MAX / axisThreshold);
                }
                dst.at<short>(i, j) = cv::saturate_cast<short>(pixel);
                }
                }


                In my example I had a slider which adjusted the contrast/brightness (we called it contrast, the original implementation called it brightness).



                When the contrast/brightness was changed, the program would retrieve the maximum pixel value and then compute the axis limit by doing



                calculatedThreshold = Max pixel value / contrast



Each pixel above the threshold gets set to MAX; each pixel below the threshold gets multiplied by a scale factor calculated as



scale = maxPixelValue / calculatedThreshold



TBH I can't say I fully understand the maths behind it; I just used trial and error until it worked. Any help in that department would be appreciated, but it seems to do what I want!



My understanding of the initial MATLAB implementation and the terminology "brightness" is that it is their attempt to scale the colourmap so that the "brighter" the image, the less intense each pixel has to be to map to a particular colour in the colourmap.



Since applyColorMap only works on 8-bit images, when the brightness increases and the colourmap axis values decrease, we need to ensure the pixel values scale accordingly so that they match up with the "higher" intensity values in the map.



I have seen numerous OpenCV tutorials which use this approach to changing the contrast/brightness, but they often promote the use of the optimised convertTo (especially if you're trying to use the GPU). However, as far as I can see, convertTo applies the alpha/beta values uniformly and not on a pixel-by-pixel basis, so I can't use that approach.



I will update this answer if I find more suitable OpenCV functions to achieve what I want.














                edited yesterday

























                answered yesterday









                Christopher Whiting

                214

































