How to save all of the content on a long Firefox webpage? (Solved)
Fredx, do you know of any way of saving all of your content on a long 32-bit Firefox page, other than 'Save Page As'?
In the default Firefox for Slacko 7.0.
Could you explain what you mean by 'content'? Are you composing a web page? If so, SeaMonkey has a suite that includes a composer.
Tell the browser to print it to cups-pdf; that gives you a .pdf you can use. Maybe not what you need, though.
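For what it's worth, a rough sketch of that workflow, assuming the cups-pdf package is installed (the queue name and output folder vary by setup):

# list print queues; a virtual PDF printer (often named "CUPS-PDF"
# or "PDF") should appear once cups-pdf is installed
lpstat -p
# print the page from Firefox (File > Print, select the PDF queue),
# then look for the output; it typically lands in ~/PDF
# (configurable in /etc/cups/cups-pdf.conf)
ls ~/PDF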
wizard
kuman11 wrote: Fredx, do you know of any way of saving all of your content ...
Please explain "all of your content".
@Flash I see you moved this thread. It can be very difficult for an OP to find his/her thread if you move it; perhaps inform the user through PM in that case?
EDIT: Probably wrong assuming you moved the thread, Flash, sorry.
Fred, by "all of my content", I meant text & pics on a particular webpage, one with a lot of that, it takes time to fully open. Is that 'print to .pdf', a good idea for that? The browser's Firefox.
p.s. bigpup sent me a pm about that.
Bionicpup64 has a PMirrorget website downloader, which I have used to download our complete website for archival storage.
Menu --> Internet --> PMirrorget website downloader
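As far as I know, PMirrorget is a front-end for wget's mirroring mode, so a roughly equivalent command from a terminal (the URL below is only a placeholder) would be:

# recursively mirror a site for offline viewing
#   --mirror          recursive download with timestamping
#   --convert-links   rewrite links so they work offline
#   --page-requisites also fetch the images/CSS each page needs
#   --no-parent       don't climb above the starting directory
wget --mirror --convert-links --page-requisites --no-parent https://example.com/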
There should be an option when right-clicking on the page and selecting 'Save As' or something along that line. There will be an option in the following GTK dialog as to how it will be saved: you can choose either 'Web Page, Complete' or just the page itself.
I didn't move it. I might have if I could have figured out what it was about.
fredx181 wrote: ↑ Tue Jan 18, 2022 8:05 pm
Please explain "all of your content".
@Flash I guess you have no imagination ...
This was mainly for Fredx to answer.
It seems it saves the RSS feed of the page. Can you restore the whole page from it offline after that?
kuman11 wrote: This was mainly for Fredx to answer.
I wonder why you ask me specifically; I'm no expert in that, and it's still not really clear to me what exactly your goal is (giving an example may be helpful).
Fredx,
ndujoe's suggestion comes closest to what I need. Have a look at it.
This works for me. I have over a thousand PDF files of webpage content containing text, pictures, and links.
It depends on what you want to save, i.e. only the current webpage or all the webpages for that internet address. To save only the current webpage displayed, select Save Page (or Save Page As) from your browser's menu and then select 'Web Page, Complete'. Best is to create your own named folder first and save the contents there. You can then access the webpage offline by opening the appropriate .html file from the folder you saved to. If you want to save the contents of all the webpages for a specific address, you need to use something like pmirror (this could take long).
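For the single-page case there is also a command-line fallback, assuming wget is available (the URL is only a placeholder):

# fetch one page plus the images/CSS it needs to render,
# rewriting links for offline viewing; -E adds .html
# extensions where the server didn't provide them
wget -p -k -E https://example.com/some-long-page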
amethyst wrote: ↑ Fri Jan 21, 2022 1:53 am
It depends on what you want to save ...
amethyst, I used pmirrorget; it saved most of the content of that address, though it didn't go to the bottom of the page. Do you have some clue why?
It has been some time since I archived our club page with this PMirrorget utility. As I recall, the top-branch HTML was able to recreate the web page and associated web content from the saved download.
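To check such an archive offline, you can point the browser at the saved top-level page directly; the path below is only an illustration and depends on where PMirrorget put the download:

# open the archived top-level page offline (hypothetical path;
# use wherever the mirror was actually saved on your system)
firefox file:///root/websites/forum.puppylinux.com/index.html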