How to keep original page's elements with selenium after opening a link generated by javascript and coming...












It seems impossible, or at least very complicated, to keep the original page's elements in a Selenium WebDriver after moving to another page via a link generated by JavaScript. How can I do this?



I'm trying to do web scraping for a particular web page using the following components:




  • Ubuntu 18.04.1 LTS

  • Python 3.6.1

  • Selenium (Python package) 3.141.0

  • Google Chrome 71.0.3578.98

  • ChromeDriver 2.45.615279


The web page includes links whose href is a JavaScript function call, like the following:



<a href="javascript:funcName(10, 24, 100)"></a>


The definition of the function is something like this.



var funcName = function(arg1, arg2, arg3) {
    var url = 'XXXXXXXX'; // dynamically generated from the arguments
    var form = $('<form>', {
        name: 'formName',
        action: url,
        method: 'post'
    });
    // Some procedure to enhance the form element with input arguments.
    form.submit();
};


The above POST request redirects me to another page, which I'd like to scrape.



The thing is, the original web page includes many links and I'd like to scrape the redirected pages one by one. However, it seems impossible to get a redirected page's URL without actually clicking the link (<a>), because the redirect happens through a dynamically generated POST request. On the other hand, if I click the link and move to the redirected page, the elements I found on the original web page can no longer be used, so after coming back to the original page I have to locate the next link from scratch. This feels very redundant.



Python code example



for a in driver.find_elements_by_css_selector('.some-class-name'):
    a.click()  # this redirects me to another page
    print(driver.current_url)  # this shows the redirected page
    driver.back()
    print(driver.current_url)  # this shows the original page
    # After coming back to the original page, the next loop iteration raises
    # StaleElementReferenceException, because a is attached to the page as it
    # was before the redirect.


Things I tried in order to keep the original page's elements, which did not work:



1. Copy the element (or driver) object



from copy import deepcopy

for a in driver.find_elements_by_css_selector('.some-class-name'):
    a2 = deepcopy(a)
    a2.click()  # this redirects me to another page
    print(driver.current_url)  # expected this to remain the original page's URL, but it didn't


I also tried deepcopy on the driver itself, but that didn't work either.
The returned error is:



TypeError: can't pickle _thread.lock objects
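Note: as far as I understand, a WebElement is only a remote reference (a session id plus a server-side element handle), so copying the Python object cannot preserve the page it came from. A small illustration:

# Illustration only: the element object carries no page data, just identifiers
# pointing back into the live browser session, so deepcopy cannot "save" it.
a = driver.find_element_by_css_selector('.some-class-name')
print(a.parent.session_id)  # the WebDriver session this element belongs to
print(a.id)                 # server-side handle; goes stale once the DOM is replaced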


2. Open the redirected page in a new tab



from selenium.webdriver import ActionChains
from selenium.webdriver.common.keys import Keys

for a in driver.find_elements_by_css_selector('.some-class-name'):
    action = ActionChains(driver)
    # Expected: the following opens the redirected page in a new tab,
    # and CONTROL + TAB switches between tabs
    action.key_down(Keys.CONTROL).click(a).key_up(Keys.CONTROL).perform()
    driver.send_keys(Keys.CONTROL + Keys.TAB)  # note: the WebDriver object has no send_keys method


However, this didn't open a new tab; it just moved to the redirected page in the same tab.
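Note: as far as I know, WebDriver cannot send CONTROL + TAB to the browser itself; switching tabs is done through window handles instead. A rough sketch of that pattern (it only helps if a second tab actually gets opened, which the javascript: link above does not do by itself):

original_handle = driver.current_window_handle

# Open a blank tab via JavaScript; the javascript: href itself cannot simply be
# opened in a new tab because the navigation happens through a POSTed form.
driver.execute_script("window.open('about:blank', '_blank');")

new_handle = [h for h in driver.window_handles if h != original_handle][0]
driver.switch_to.window(new_handle)        # work in the new tab
driver.close()                             # close it when done
driver.switch_to.window(original_handle)   # the original page was never navigated, so its elements stay valid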



If there is no simple way, I will fall back to creating a list or dictionary that records which links I've already scraped; after scraping each redirected page I'll parse the original page again and skip the links that have already been checked. But I don't want to do that because it's very redundant.
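A rough sketch of that fallback, with the same placeholder selector as above and assuming the javascript: href string is unique enough to act as a key:

visited = set()

while True:
    # Re-find the links after every navigation back, because the old references go stale.
    links = driver.find_elements_by_css_selector('.some-class-name')
    # Pick the first link that has not been scraped yet.
    target = next((a for a in links if a.get_attribute('href') not in visited), None)
    if target is None:
        break  # every link has been handled
    visited.add(target.get_attribute('href'))
    target.click()   # triggers the POSTing form and navigates away
    # ... scrape the redirected page here ...
    driver.back()    # return to the original page; previous element references are now stale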










javascript python selenium web-scraping selenium-chromedriver






asked Jan 3 at 3:57 by Ken





















  • In Java, getCurrentUrl() gives the URL of the page the driver is currently focused on; even if clicks open new pages, the driver will not move to them, so getCurrentUrl() will not return their URL.

    – murali selenium
    Jan 3 at 4:14











  • Do you have a test url?

    – QHarr
    Jan 3 at 6:43











  • This seems like something I would use fetch for.

    – pguardiario
    Jan 3 at 8:31











  • @QHarr Sorry I don't.

    – Ken
    Jan 3 at 11:28











  • @pguardiario Is fetch a method of some package? If I understand correctly, selenium webdriver doesn't have such a method.

    – Ken
    Jan 3 at 11:29
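
Note on the fetch suggestion above: fetch here is the browser's Fetch API, not a Selenium method. It can be driven through execute_async_script, so the POST happens inside the page and the original page (and its elements) never changes. A rough, untested sketch; the form field names and the URL are placeholders, not the real ones:

script = """
    var done = arguments[arguments.length - 1];  // async callback provided by Selenium
    var body = new URLSearchParams({arg1: '10', arg2: '24', arg3: '100'});  // placeholder field names
    fetch(arguments[0], {method: 'POST', body: body})
        .then(function (r) { return r.text(); })
        .then(done);
"""
driver.set_script_timeout(30)  # give the request time to finish
html = driver.execute_async_script(script, 'https://example.com/placeholder-target')
# html now holds the response body of the redirected page, ready to parse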



















2 Answers
































Even though you return to the same page, Selenium doesn't know it's the same page; it treats it as a new page. The links found before the for loop do not belong to this new page. You need to find the links again on the new page and assign them to the same variable links inside the for loop, and use an index to iterate to the next link.



links = driver.find_elements_by_css_selector('.some-class-name')

for i in range(0, len(links)):
    links[i].click()  # this redirects me to another page
    print(driver.current_url)  # this shows the redirected page
    driver.back()
    print(driver.current_url)

    # Important: find the links again on the page you come back to from the
    # redirected page, to resolve the StaleElementReferenceException.
    links = driver.find_elements_by_css_selector('.some-class-name')





answered Jan 3 at 9:08 by yong

  • Thank you, @yong. Actually, my code includes several loops, so in my case I would need to re-find the elements used in each loop, which may not be a good idea. But in some cases your solution will be helpful.

    – Ken
    Jan 3 at 11:32

































I chose to create another WebDriver instance.



from selenium import webdriver

driver = webdriver.Chrome()
driver_sub = webdriver.Chrome()

driver.get(url)
driver_sub.get(url)  # access the same page with a different instance

for a in driver.find_elements_by_css_selector('.some-class-name'):
    script = a.get_attribute('href')  # the "javascript:funcName(...)" string
    driver_sub.execute_script(script)
    # do some work on the redirected page with driver_sub
    driver_sub.execute_script('window.history.go(-1)')  # roughly the same as driver_sub.back()
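
One possible refinement (not part of the original answer): wait explicitly for the redirected page to finish loading in driver_sub before scraping it. A sketch assuming there is some element with class .result on the redirected page:

from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

# '.result' is an assumed selector for something that appears on the redirected page.
WebDriverWait(driver_sub, 10).until(
    EC.presence_of_element_located((By.CSS_SELECTOR, '.result'))
)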





answered Jan 3 at 5:01 by Ken (the asker), edited Jan 3 at 11:33