Reverse order sequential digits



























I have a very long array, and I'm trying to do the following as efficiently as possible:



For each consecutively incrementing chunk in the list I have to reverse its order.



So, for the following array:



a = np.array([1,5,7,3,2,5,4,45,1,5,10,12])


I would like to obtain:



array([7,5,1,3,5,2,45,4,12,10,5,1])


I was wondering if this could be vectorized, using numpy perhaps?



I have already received some answers in a previous question, and while they are big improvements, the results are still a little slow.










      python numpy






      edited Jan 2 at 23:48

























      asked Jan 2 at 21:20 by user10652346































          4 Answers
































          Another option, without dependencies:



          array = [1,5,7,3,2,5,4,45,1,5,10,12]

          res, memo = [], []
          for e in array:
              if len(memo) == 0 or e > memo[-1]:
                  memo.append(e)
              else:
                  res.extend(reversed(memo))
                  memo = [e]
          res.extend(reversed(memo))

          res # => [7, 5, 1, 3, 5, 2, 45, 4, 12, 10, 5, 1]





          Modified version which is a bit faster:

          def reverse_if_chunck_increases(array):
              res, memo, last_memo = [], [], None
              for e in array:
                  # "is None" rather than "not last_memo", so a value of 0 is handled correctly
                  if last_memo is None or e > last_memo:
                      last_memo = e
                      memo.append(e)
                  else:
                      res.extend(memo[::-1])
                      last_memo, memo = e, [e]
              res.extend(memo[::-1])
              return res

          print(reverse_if_chunck_increases(array) == [7, 5, 1, 3, 5, 2, 45, 4, 12, 10, 5, 1])
          #=> True





          EDIT after the answer was accepted (maybe it's useful):

          In Ruby I was able to get the result much more easily, and apparently a bit faster, as:



          array.chunk_while { |x, y| x < y }.flat_map{ |chunk| chunk.reverse }


          So I wondered why there isn't an itertools recipe like chunk_while, and tried to write something similar using yield:



          def reverse_if_chunk_increases(array):
              i, x, size = 0, 0, len(array)
              while i < size - 1:
                  if array[i] > array[i+1]:
                      yield array[x:i+1][::-1]
                      x = i + 1
                  i += 1
              yield array[x:size][::-1]


          The execution is very fast, but it returns a generator to iterate over instead of a list:



          chunks = reverse_if_chunk_increases(array)
          for chunk in chunks:
              print(chunk)
          # [7, 5, 1]
          # [3]
          # [5, 2]
          # [45, 4]
          # [12, 10, 5, 1]


          It can be converted to a list, but that conversion is the slowest part.
          Note that a generator can only be consumed once.
          Removing [::-1] gives a result similar to Ruby's chunk_while enumerator.
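          If a flat list is needed rather than the chunks themselves, the generator's output can be flattened with itertools.chain.from_iterable. A small self-contained sketch (the generator is repeated here so the snippet runs on its own):

```python
from itertools import chain

def reverse_if_chunk_increases(array):
    # yield each maximal increasing run, reversed
    i, x, size = 0, 0, len(array)
    while i < size - 1:
        if array[i] > array[i + 1]:
            yield array[x:i + 1][::-1]
            x = i + 1
        i += 1
    yield array[x:size][::-1]

array = [1, 5, 7, 3, 2, 5, 4, 45, 1, 5, 10, 12]
flat = list(chain.from_iterable(reverse_if_chunk_increases(array)))
# flat == [7, 5, 1, 3, 5, 2, 45, 4, 12, 10, 5, 1]
```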






          answered Jan 2 at 22:05, edited Jan 3 at 13:05 – iGian


























          • @Gerald, I was still trying to find a faster way, but the fastest I get is using generators; see my edit. Thanks for the choice, by the way. – iGian, Jan 3 at 13:07

































          Can you use pandas?



          import pandas as pd
          a = [1,5,7,3,2,5,4,45,1,5,10,12]
          aa = pd.Series(a)
          aa.groupby(aa.diff().bfill().lt(0).cumsum()).apply(lambda x: x[::-1]).tolist()


          Output:



          [7, 5, 1, 3, 5, 2, 45, 4, 12, 10, 5, 1]
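          To see how the grouping works, the key can be computed on its own: a new label starts wherever the series decreases, so each increasing run gets its own group id, which groupby then uses. A small sketch:

```python
import pandas as pd

a = [1, 5, 7, 3, 2, 5, 4, 45, 1, 5, 10, 12]
aa = pd.Series(a)
# diff() is negative at the start of each new run; bfill() handles the leading NaN
key = aa.diff().bfill().lt(0).cumsum()
# key.tolist() == [0, 0, 0, 1, 2, 2, 3, 3, 4, 4, 4, 4]
```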





          answered Jan 2 at 21:48 – Scott Boston
























          • Yes, too slow. 5 M rows.

            – Scott Boston
            Jan 2 at 21:55











          • Appreciate the effort though. Perhaps with a simple python loop is as good as it can get

            – user10652346
            Jan 2 at 21:56

































          How is this? It seems to be faster as far as I can tell, but without knowing how fast is fast enough, it's tough to say for sure.



          all_l = []
          sub_l = []
          for i in a:
              if sub_l:
                  if sub_l[0] > i:
                      all_l.extend(sub_l)
                      sub_l = [i]
                  else:
                      sub_l.insert(0, i)
              else:
                  sub_l = [i]
          all_l.extend(sub_l)
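          Wrapped as a function so it can be called and timed directly (the wrapper is my own; the name matches the one used in the benchmarks in another answer):

```python
def reverse_increasing_sequences_hchw(a):
    # one pass: insert at index 0 so each increasing run is built already reversed
    all_l = []
    sub_l = []
    for i in a:
        if sub_l:
            if sub_l[0] > i:
                # run ended: flush the reversed run and start a new one
                all_l.extend(sub_l)
                sub_l = [i]
            else:
                sub_l.insert(0, i)
        else:
            sub_l = [i]
    all_l.extend(sub_l)
    return all_l

print(reverse_increasing_sequences_hchw([1, 5, 7, 3, 2, 5, 4, 45, 1, 5, 10, 12]))
# [7, 5, 1, 3, 5, 2, 45, 4, 12, 10, 5, 1]
```

          Note that list.insert(0, i) is O(k) in the current run length, so very long increasing runs would cost more than appending and reversing; for short runs it is fast in practice.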





          answered Jan 2 at 21:30 – hchw
























          • Ideally less than a sec for an array of the order of np.random.choice(10,5_000_000)

            – user10652346
            Jan 2 at 22:20

































          I don't think you're going to get much faster than using a pure python loop.



          For instance, here is a numpy+itertools solution:



          import numpy as np
          from itertools import chain, groupby
          from operator import itemgetter

          def reverse_increasing_sequences_numpy(a):
              idx = (np.diff(np.concatenate([[a[0]], a])) < 0).cumsum()
              return list(
                  chain.from_iterable(
                      reversed([x[0] for x in g]) for v, g in groupby(zip(a, idx), itemgetter(1))
                  )
              )

          print(reverse_increasing_sequences_numpy(a))
          # [7, 5, 1, 3, 5, 2, 45, 4, 12, 10, 5, 1]


          But looking at the speed test results:



          b = np.random.choice(10, 100000)

          %%timeit
          reverse_increasing_sequences_numpy(b)
          #47.1 ms ± 778 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)

          %%timeit
          reverse_increasing_sequences_iGian(b)
          #40.3 ms ± 1.31 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)

          %%timeit
          reverse_increasing_sequences_hchw(b)
          #26.1 ms ± 1.35 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)


          @hchw's solution is almost 2 times faster than my numpy version.
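          For what it's worth, the reversal can also be expressed as a single fancy-indexing step, with no Python-level loop over elements at all. A sketch (the function name is my own), which may be worth timing against the loops above:

```python
import numpy as np

def reverse_increasing_runs_vectorized(a):
    a = np.asarray(a)
    # a new increasing run starts right after each strict decrease
    starts = np.flatnonzero(np.diff(a) < 0) + 1
    bounds = np.concatenate([[0], starts, [len(a)]])
    lengths = np.diff(bounds)
    # per-element start and end (exclusive) of the run it belongs to
    run_start = np.repeat(bounds[:-1], lengths)
    run_end = np.repeat(bounds[1:], lengths)
    # mirror each index inside its own run
    idx = run_start + run_end - 1 - np.arange(len(a))
    return a[idx]

a = np.array([1, 5, 7, 3, 2, 5, 4, 45, 1, 5, 10, 12])
print(reverse_increasing_runs_vectorized(a).tolist())
# [7, 5, 1, 3, 5, 2, 45, 4, 12, 10, 5, 1]
```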






























          • Thank you very much @pault! Yes, I was under the impression that perhaps a python loop was the best solution. Very nice solution anyway, and thanks again for the effort and the timing comparison

            – user10652346
            Jan 2 at 23:47











          • Nice numpy solution @pault. Interesting that numpy couldn't outperform straight python loops.

            – Scott Boston
            Jan 3 at 14:24











          • @ScottBoston I can't think of a way to optimize this in any way using numpy. The fastest solution is one pass through the list, building the output as you go.

            – pault
            Jan 3 at 22:00













          • @pault Yes, breaking the list part and reassembling is time consuming.

            – Scott Boston
            Jan 3 at 22:02











          Your Answer






          StackExchange.ifUsing("editor", function () {
          StackExchange.using("externalEditor", function () {
          StackExchange.using("snippets", function () {
          StackExchange.snippets.init();
          });
          });
          }, "code-snippets");

          StackExchange.ready(function() {
          var channelOptions = {
          tags: "".split(" "),
          id: "1"
          };
          initTagRenderer("".split(" "), "".split(" "), channelOptions);

          StackExchange.using("externalEditor", function() {
          // Have to fire editor after snippets, if snippets enabled
          if (StackExchange.settings.snippets.snippetsEnabled) {
          StackExchange.using("snippets", function() {
          createEditor();
          });
          }
          else {
          createEditor();
          }
          });

          function createEditor() {
          StackExchange.prepareEditor({
          heartbeatType: 'answer',
          autoActivateHeartbeat: false,
          convertImagesToLinks: true,
          noModals: true,
          showLowRepImageUploadWarning: true,
          reputationToPostImages: 10,
          bindNavPrevention: true,
          postfix: "",
          imageUploader: {
          brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
          contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
          allowUrls: true
          },
          onDemand: true,
          discardSelector: ".discard-answer"
          ,immediatelyShowMarkdownHelp:true
          });


          }
          });














          draft saved

          draft discarded


















          StackExchange.ready(
          function () {
          StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fstackoverflow.com%2fquestions%2f54013340%2freverse-order-sequential-digits%23new-answer', 'question_page');
          }
          );

          Post as a guest















          Required, but never shown
























          4 Answers
          4






          active

          oldest

          votes








          4 Answers
          4






          active

          oldest

          votes









          active

          oldest

          votes






          active

          oldest

          votes









          2














          Other option without dependencies:



          array = [1,5,7,3,2,5,4,45,1,5,10,12]

          res, memo = ,
          for e in array:
          if len(memo) == 0 or e > memo[-1]: memo.append(e)
          else:
          res.extend(reversed(memo))
          memo = [e]
          res.extend(reversed(memo))

          res # => [7, 5, 1, 3, 5, 2, 45, 4, 12, 10, 5, 1]





          Modified version which is a bit faster:

          def reverse_if_chunck_increases(array):
          res, memo, last_memo = , , None
          for e in array:
          if not last_memo or e > last_memo:
          last_memo = e
          memo.append(e)
          else:
          res.extend(memo[::-1])
          last_memo, memo = e, [e]
          res.extend(memo[::-1])
          return res

          print(reverse_if_chunck_increases(array) == [7, 5, 1, 3, 5, 2, 45, 4, 12, 10, 5, 1])
          #=> True





          EDIT after the answer was accepted (Maybe it's useful.)

          I was able to get the result so easily and apparently a bit faster coding in Ruby as:



          array.chunk_while { |x, y| x < y }.flat_map{ |chunk| chunk.reverse }


          So, I wondered why there isn't an itertool like chunk_while. Then I tried to code one similar using yield:



          def reverse_if_chunk_increases(array):
          i, x, size, res = 0, 0, len(array),
          while i < size-1:
          if array[i] > array[i+1]:
          yield array[x:i+1][::-1]
          x = i +1
          i += 1
          yield array[x:size][::-1]


          The execution is superfast but it return a generator to iterate instead of a list:



          chunks = reverse_if_chunk_increases(array)
          for chunk in chunks:
          print(chunk)
          # [7, 5, 1]
          # [3]
          # [5, 2]
          # [45, 4]
          # [12, 10, 5, 1]


          It can be converted to a list, which is the slowest process.
          Note that a generator can be called once.
          Removing [::-1] you get a result similar to Ruby enumerator/generator chunk_while.






          share|improve this answer





















          • 1





            @Gerald, I was still trying to find a faster way, but the fastest I get is using generators, see my edit. Thanks for the choice, by the way.

            – iGian
            Jan 3 at 13:07
















          2














          Other option without dependencies:



          array = [1,5,7,3,2,5,4,45,1,5,10,12]

          res, memo = ,
          for e in array:
          if len(memo) == 0 or e > memo[-1]: memo.append(e)
          else:
          res.extend(reversed(memo))
          memo = [e]
          res.extend(reversed(memo))

          res # => [7, 5, 1, 3, 5, 2, 45, 4, 12, 10, 5, 1]





          Modified version which is a bit faster:

          def reverse_if_chunck_increases(array):
          res, memo, last_memo = , , None
          for e in array:
          if not last_memo or e > last_memo:
          last_memo = e
          memo.append(e)
          else:
          res.extend(memo[::-1])
          last_memo, memo = e, [e]
          res.extend(memo[::-1])
          return res

          print(reverse_if_chunck_increases(array) == [7, 5, 1, 3, 5, 2, 45, 4, 12, 10, 5, 1])
          #=> True





          EDIT after the answer was accepted (Maybe it's useful.)

          I was able to get the result so easily and apparently a bit faster coding in Ruby as:



          array.chunk_while { |x, y| x < y }.flat_map{ |chunk| chunk.reverse }


          So, I wondered why there isn't an itertool like chunk_while. Then I tried to code one similar using yield:



          def reverse_if_chunk_increases(array):
          i, x, size, res = 0, 0, len(array),
          while i < size-1:
          if array[i] > array[i+1]:
          yield array[x:i+1][::-1]
          x = i +1
          i += 1
          yield array[x:size][::-1]


          The execution is superfast but it return a generator to iterate instead of a list:



          chunks = reverse_if_chunk_increases(array)
          for chunk in chunks:
          print(chunk)
          # [7, 5, 1]
          # [3]
          # [5, 2]
          # [45, 4]
          # [12, 10, 5, 1]


          It can be converted to a list, which is the slowest process.
          Note that a generator can be called once.
          Removing [::-1] you get a result similar to Ruby enumerator/generator chunk_while.






          share|improve this answer





















          • 1





            @Gerald, I was still trying to find a faster way, but the fastest I get is using generators, see my edit. Thanks for the choice, by the way.

            – iGian
            Jan 3 at 13:07














          2












          2








          2







          Other option without dependencies:



          array = [1,5,7,3,2,5,4,45,1,5,10,12]

          res, memo = ,
          for e in array:
          if len(memo) == 0 or e > memo[-1]: memo.append(e)
          else:
          res.extend(reversed(memo))
          memo = [e]
          res.extend(reversed(memo))

          res # => [7, 5, 1, 3, 5, 2, 45, 4, 12, 10, 5, 1]





          Modified version which is a bit faster:

          def reverse_if_chunck_increases(array):
          res, memo, last_memo = , , None
          for e in array:
          if not last_memo or e > last_memo:
          last_memo = e
          memo.append(e)
          else:
          res.extend(memo[::-1])
          last_memo, memo = e, [e]
          res.extend(memo[::-1])
          return res

          print(reverse_if_chunck_increases(array) == [7, 5, 1, 3, 5, 2, 45, 4, 12, 10, 5, 1])
          #=> True





          EDIT after the answer was accepted (Maybe it's useful.)

          I was able to get the result so easily and apparently a bit faster coding in Ruby as:



          array.chunk_while { |x, y| x < y }.flat_map{ |chunk| chunk.reverse }


          So, I wondered why there isn't an itertool like chunk_while. Then I tried to code one similar using yield:



          def reverse_if_chunk_increases(array):
          i, x, size, res = 0, 0, len(array),
          while i < size-1:
          if array[i] > array[i+1]:
          yield array[x:i+1][::-1]
          x = i +1
          i += 1
          yield array[x:size][::-1]


          The execution is superfast but it return a generator to iterate instead of a list:



          chunks = reverse_if_chunk_increases(array)
          for chunk in chunks:
          print(chunk)
          # [7, 5, 1]
          # [3]
          # [5, 2]
          # [45, 4]
          # [12, 10, 5, 1]


          It can be converted to a list, which is the slowest process.
          Note that a generator can be called once.
          Removing [::-1] you get a result similar to Ruby enumerator/generator chunk_while.






          share|improve this answer















          Other option without dependencies:



          array = [1,5,7,3,2,5,4,45,1,5,10,12]

          res, memo = ,
          for e in array:
          if len(memo) == 0 or e > memo[-1]: memo.append(e)
          else:
          res.extend(reversed(memo))
          memo = [e]
          res.extend(reversed(memo))

          res # => [7, 5, 1, 3, 5, 2, 45, 4, 12, 10, 5, 1]





          Modified version which is a bit faster:

          def reverse_if_chunck_increases(array):
          res, memo, last_memo = , , None
          for e in array:
          if not last_memo or e > last_memo:
          last_memo = e
          memo.append(e)
          else:
          res.extend(memo[::-1])
          last_memo, memo = e, [e]
          res.extend(memo[::-1])
          return res

          print(reverse_if_chunck_increases(array) == [7, 5, 1, 3, 5, 2, 45, 4, 12, 10, 5, 1])
          #=> True





          EDIT after the answer was accepted (Maybe it's useful.)

          I was able to get the result so easily and apparently a bit faster coding in Ruby as:



          array.chunk_while { |x, y| x < y }.flat_map{ |chunk| chunk.reverse }


          So, I wondered why there isn't an itertool like chunk_while. Then I tried to code one similar using yield:



          def reverse_if_chunk_increases(array):
          i, x, size, res = 0, 0, len(array),
          while i < size-1:
          if array[i] > array[i+1]:
          yield array[x:i+1][::-1]
          x = i +1
          i += 1
          yield array[x:size][::-1]


          The execution is superfast but it return a generator to iterate instead of a list:



          chunks = reverse_if_chunk_increases(array)
          for chunk in chunks:
          print(chunk)
          # [7, 5, 1]
          # [3]
          # [5, 2]
          # [45, 4]
          # [12, 10, 5, 1]


          It can be converted to a list, which is the slowest process.
          Note that a generator can be called once.
          Removing [::-1] you get a result similar to Ruby enumerator/generator chunk_while.







          share|improve this answer














          share|improve this answer



          share|improve this answer








          edited Jan 3 at 13:05

























          answered Jan 2 at 22:05









          iGianiGian

          4,7192725




          4,7192725








          • 1





            @Gerald, I was still trying to find a faster way, but the fastest I get is using generators, see my edit. Thanks for the choice, by the way.

            – iGian
            Jan 3 at 13:07














          • 1





            @Gerald, I was still trying to find a faster way, but the fastest I get is using generators, see my edit. Thanks for the choice, by the way.

            – iGian
            Jan 3 at 13:07








          1




          1





          @Gerald, I was still trying to find a faster way, but the fastest I get is using generators, see my edit. Thanks for the choice, by the way.

          – iGian
          Jan 3 at 13:07





          @Gerald, I was still trying to find a faster way, but the fastest I get is using generators, see my edit. Thanks for the choice, by the way.

          – iGian
          Jan 3 at 13:07













          2














          Can you use pandas?



          import pandas as pd
          a = [1,5,7,3,2,5,4,45,1,5,10,12]
          aa = pd.Series(a)
          aa.groupby(aa.diff().bfill().lt(0).cumsum()).apply(lambda x: x[::-1]).tolist()


          Output:



          [7, 5, 1, 3, 5, 2, 45, 4, 12, 10, 5, 1]





          share|improve this answer
























          • Yes, too slow. 5 M rows.

            – Scott Boston
            Jan 2 at 21:55











          • Appreciate the effort though. Perhaps with a simple python loop is as good as it can get

            – user10652346
            Jan 2 at 21:56
















          2














          Can you use pandas?



          import pandas as pd
          a = [1,5,7,3,2,5,4,45,1,5,10,12]
          aa = pd.Series(a)
          aa.groupby(aa.diff().bfill().lt(0).cumsum()).apply(lambda x: x[::-1]).tolist()


          Output:



          [7, 5, 1, 3, 5, 2, 45, 4, 12, 10, 5, 1]





          share|improve this answer
























          • Yes, too slow. 5 M rows.

            – Scott Boston
            Jan 2 at 21:55











          • Appreciate the effort though. Perhaps with a simple python loop is as good as it can get

            – user10652346
            Jan 2 at 21:56














          2












          2








          2







          Can you use pandas?



          import pandas as pd
          a = [1,5,7,3,2,5,4,45,1,5,10,12]
          aa = pd.Series(a)
          aa.groupby(aa.diff().bfill().lt(0).cumsum()).apply(lambda x: x[::-1]).tolist()


          Output:



          [7, 5, 1, 3, 5, 2, 45, 4, 12, 10, 5, 1]





          share|improve this answer













          Can you use pandas?



          import pandas as pd
          a = [1,5,7,3,2,5,4,45,1,5,10,12]
          aa = pd.Series(a)
          aa.groupby(aa.diff().bfill().lt(0).cumsum()).apply(lambda x: x[::-1]).tolist()


          Output:



          [7, 5, 1, 3, 5, 2, 45, 4, 12, 10, 5, 1]






          share|improve this answer












          share|improve this answer



          share|improve this answer










          answered Jan 2 at 21:48









          Scott BostonScott Boston

          56.8k73158




          56.8k73158













          • Yes, too slow. 5 M rows.

            – Scott Boston
            Jan 2 at 21:55











          • Appreciate the effort though. Perhaps with a simple python loop is as good as it can get

            – user10652346
            Jan 2 at 21:56



















          • Yes, too slow. 5 M rows.

            – Scott Boston
            Jan 2 at 21:55











          • Appreciate the effort though. Perhaps with a simple python loop is as good as it can get

            – user10652346
            Jan 2 at 21:56

















          Yes, too slow. 5 M rows.

          – Scott Boston
          Jan 2 at 21:55





          Yes, too slow. 5 M rows.

          – Scott Boston
          Jan 2 at 21:55













          Appreciate the effort though. Perhaps with a simple python loop is as good as it can get

          – user10652346
          Jan 2 at 21:56





          Appreciate the effort though. Perhaps with a simple python loop is as good as it can get

          – user10652346
          Jan 2 at 21:56











          1














          How is this? Seems to be faster afaik but without knowing how fast is fast enough its tough to tell for sure



          all_l = 
          sub_l =
          for i in a:
          if sub_l:
          if sub_l[0] > i:
          all_l.extend(sub_l)
          sub_l = [i]
          else:
          sub_l.insert(0, i)
          else:
          sub_l = [i]
          all_l.extend(sub_l)





          share|improve this answer
























          • Ideally less than a sec for an array of the order of np.random.choice(10,5_000_000)

            – user10652346
            Jan 2 at 22:20
















          1














          How is this? Seems to be faster afaik but without knowing how fast is fast enough its tough to tell for sure



          all_l = 
          sub_l =
          for i in a:
          if sub_l:
          if sub_l[0] > i:
          all_l.extend(sub_l)
          sub_l = [i]
          else:
          sub_l.insert(0, i)
          else:
          sub_l = [i]
          all_l.extend(sub_l)





          share|improve this answer
























          • Ideally less than a sec for an array of the order of np.random.choice(10,5_000_000)

            – user10652346
            Jan 2 at 22:20














          1












          1








          1







          How is this? Seems to be faster afaik but without knowing how fast is fast enough its tough to tell for sure



          all_l = 
          sub_l =
          for i in a:
          if sub_l:
          if sub_l[0] > i:
          all_l.extend(sub_l)
          sub_l = [i]
          else:
          sub_l.insert(0, i)
          else:
          sub_l = [i]
          all_l.extend(sub_l)





          share|improve this answer













          How is this? Seems to be faster afaik but without knowing how fast is fast enough its tough to tell for sure



          all_l = 
          sub_l =
          for i in a:
          if sub_l:
          if sub_l[0] > i:
          all_l.extend(sub_l)
          sub_l = [i]
          else:
          sub_l.insert(0, i)
          else:
          sub_l = [i]
          all_l.extend(sub_l)






          share|improve this answer












          share|improve this answer



          share|improve this answer










          answered Jan 2 at 21:30









          hchwhchw

          34318




          34318













          • Ideally less than a sec for an array of the order of np.random.choice(10,5_000_000)

            – user10652346
            Jan 2 at 22:20



















          • Ideally less than a sec for an array of the order of np.random.choice(10,5_000_000)

            – user10652346
            Jan 2 at 22:20

















          Ideally less than a sec for an array of the order of np.random.choice(10,5_000_000)

          – user10652346
          Jan 2 at 22:20





          Ideally less than a sec for an array of the order of np.random.choice(10,5_000_000)

          – user10652346
          Jan 2 at 22:20











          1














          I don't think you're going to get much faster than using a pure python loop.



          For instance, here is a numpy+itertools solution:



          import numpy as np
          from itertools import chain, groupby
          from operator import itemgetter

          def reverse_increasing_sequences_numpy(a):
          idx = (np.diff(np.concatenate([[a[0]], a]))<0).cumsum()
          return list(
          chain.from_iterable(
          (reversed([x[0] for x in g]) for v, g in groupby(zip(a, idx), itemgetter(1)))
          )
          )

          print(reverse_increasing_sequences(a))
          #[7, 5, 1, 3, 5, 2, 45, 4, 12, 10, 5, 1]


          But looking at the speed test results:



          b = np.random.choice(10, 100000)

          %%timeit
          reverse_increasing_sequences_numpy(b)
          #47.1 ms ± 778 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)

          %%timeit
          reverse_increasing_sequences_iGian(b)
          #40.3 ms ± 1.31 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)

          %%%timeit
          reverse_increasing_sequences_hchw(b)
          #26.1 ms ± 1.35 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)


          @hchw's solution ran is almost 2 times faster than my numpy version.






          share|improve this answer
























          • Thank you very much @pault! Yes, I was under the impression that perhaps a python loop was the best solution. Very nice solution anyway and thanks again for the effort and time complexity comparisson

            – user10652346
            Jan 2 at 23:47











          • Nice numpy solution @pault. Interesting that numpy couldn't outperform straight python loops.

            – Scott Boston
            Jan 3 at 14:24











          • @ScottBoston I can't think of a way to optimize this in any way using numpy. The fastest solution is one pass through the list, building the output as you go.

            – pault
            Jan 3 at 22:00













          • @pault Yes, breaking the list part and reassembling is time consuming.

            – Scott Boston
            Jan 3 at 22:02
















          1














          I don't think you're going to get much faster than using a pure python loop.



          For instance, here is a numpy+itertools solution:



          import numpy as np
          from itertools import chain, groupby
          from operator import itemgetter

          def reverse_increasing_sequences_numpy(a):
          idx = (np.diff(np.concatenate([[a[0]], a]))<0).cumsum()
          return list(
          chain.from_iterable(
          (reversed([x[0] for x in g]) for v, g in groupby(zip(a, idx), itemgetter(1)))
          )
          )

          print(reverse_increasing_sequences(a))
          #[7, 5, 1, 3, 5, 2, 45, 4, 12, 10, 5, 1]


          But looking at the speed test results:



          b = np.random.choice(10, 100000)

          %%timeit
          reverse_increasing_sequences_numpy(b)
          #47.1 ms ± 778 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)

          %%timeit
          reverse_increasing_sequences_iGian(b)
          #40.3 ms ± 1.31 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)

          %%%timeit
          reverse_increasing_sequences_hchw(b)
          #26.1 ms ± 1.35 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)


          @hchw's solution ran is almost 2 times faster than my numpy version.






          share|improve this answer
























          • Thank you very much @pault! Yes, I was under the impression that perhaps a python loop was the best solution. Very nice solution anyway and thanks again for the effort and time complexity comparisson

            – user10652346
            Jan 2 at 23:47











          • Nice numpy solution @pault. Interesting that numpy couldn't outperform straight Python loops.

            – Scott Boston
            Jan 3 at 14:24











          • @ScottBoston I can't think of a way to optimize this in any way using numpy. The fastest solution is one pass through the list, building the output as you go.

            – pault
            Jan 3 at 22:00













          • @pault Yes, breaking the list part and reassembling is time consuming.

            – Scott Boston
            Jan 3 at 22:02
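Regarding pault's point that a single pass building the output seems unavoidable: the reversal can in fact be expressed with no Python-level loop at all, by computing the run boundaries and building a reversed index array. The following is an illustrative sketch added here (names and approach are mine, not from the thread), with no benchmarked claim that it beats the loop:

```python
import numpy as np

def reverse_increasing_sequences_vectorized(a):
    # Boundaries of the increasing runs: a run starts at index 0
    # and after every position where the sequence decreases.
    starts = np.flatnonzero(np.diff(a) < 0) + 1
    bounds = np.concatenate(([0], starts, [len(a)]))
    # For each element, look up its run's [lo, hi) bounds ...
    run_id = np.repeat(np.arange(len(bounds) - 1), np.diff(bounds))
    lo, hi = bounds[run_id], bounds[run_id + 1]
    # ... and map index i to its mirror position lo + hi - 1 - i.
    return a[lo + hi - 1 - np.arange(len(a))]

a = np.array([1, 5, 7, 3, 2, 5, 4, 45, 1, 5, 10, 12])
print(reverse_increasing_sequences_vectorized(a))
# [ 7  5  1  3  5  2 45  4 12 10  5  1]
```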














          answered Jan 2 at 23:16 by pault (16.2k rep)




          16.2k32652













          • Thank you very much @pault! Yes, I was under the impression that perhaps a python loop was the best solution. Very nice solution anyway and thanks again for the effort and time complexity comparisson

            – user10652346
            Jan 2 at 23:47











          • Nice numpy solution @pault. Interesting that numpy couldn't outperform straight python loops.

            – Scott Boston
            Jan 3 at 14:24











          • @ScottBoston I can't think of a way to optimize this in any way using numpy. The fastest solution is one pass through the list, building the output as you go.

            – pault
            Jan 3 at 22:00













          • @pault Yes, breaking the list part and reassembling is time consuming.

            – Scott Boston
            Jan 3 at 22:02



















          • Thank you very much @pault! Yes, I was under the impression that perhaps a python loop was the best solution. Very nice solution anyway and thanks again for the effort and time complexity comparisson

            – user10652346
            Jan 2 at 23:47











          • Nice numpy solution @pault. Interesting that numpy couldn't outperform straight python loops.

            – Scott Boston
            Jan 3 at 14:24











          • @ScottBoston I can't think of a way to optimize this in any way using numpy. The fastest solution is one pass through the list, building the output as you go.

            – pault
            Jan 3 at 22:00













          • @pault Yes, breaking the list part and reassembling is time consuming.

            – Scott Boston
            Jan 3 at 22:02

















          Thank you very much @pault! Yes, I was under the impression that perhaps a python loop was the best solution. Very nice solution anyway and thanks again for the effort and time complexity comparisson

          – user10652346
          Jan 2 at 23:47





          Thank you very much @pault! Yes, I was under the impression that perhaps a python loop was the best solution. Very nice solution anyway and thanks again for the effort and time complexity comparisson

          – user10652346
          Jan 2 at 23:47













          Nice numpy solution @pault. Interesting that numpy couldn't outperform straight python loops.

          – Scott Boston
          Jan 3 at 14:24





          Nice numpy solution @pault. Interesting that numpy couldn't outperform straight python loops.

          – Scott Boston
          Jan 3 at 14:24













          @ScottBoston I can't think of a way to optimize this in any way using numpy. The fastest solution is one pass through the list, building the output as you go.

          – pault
          Jan 3 at 22:00







          @ScottBoston I can't think of a way to optimize this in any way using numpy. The fastest solution is one pass through the list, building the output as you go.

          – pault
          Jan 3 at 22:00















          @pault Yes, breaking the list part and reassembling is time consuming.

          – Scott Boston
          Jan 3 at 22:02





          @pault Yes, breaking the list part and reassembling is time consuming.

          – Scott Boston
          Jan 3 at 22:02


















          draft saved

          draft discarded




















































          Thanks for contributing an answer to Stack Overflow!


          • Please be sure to answer the question. Provide details and share your research!

          But avoid



          • Asking for help, clarification, or responding to other answers.

          • Making statements based on opinion; back them up with references or personal experience.


          To learn more, see our tips on writing great answers.




          draft saved


          draft discarded














          StackExchange.ready(
          function () {
          StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fstackoverflow.com%2fquestions%2f54013340%2freverse-order-sequential-digits%23new-answer', 'question_page');
          }
          );

          Post as a guest















          Required, but never shown





















































          Required, but never shown














          Required, but never shown












          Required, but never shown







          Required, but never shown

































          Required, but never shown














          Required, but never shown












          Required, but never shown







          Required, but never shown







          Popular posts from this blog

          Monofisismo

          Angular Downloading a file using contenturl with Basic Authentication

          Olmecas