Re: Download tool for multiple files or wildcards in http

> Let's say you know that you want a sequence of brf files--
> for example, 12260-12269;

If you don't want a contiguous range, but instead have a fixed list of
those "root" numbers (e.g. 12260, 17800, 31415), you can loop over them
instead:

    for x in 12260 17800 31415; do
        curl -u username:password \
            "http://www.loc.gov/nls/braille/${x}v[01-05].brf"
    done

(You can put it all on one line if you like; if you split it across
lines as above, the trailing backslash is what tells Bash the command
continues.)

Alternatively, if you have a big list of those 5-digit text numbers
stored in a file, you can just do

    for x in $(cat list_of_5_digit_numbers.txt); do
        curl -u username:password \
            "http://www.loc.gov/nls/braille/${x}v[01-05].brf"
    done

Something like this could easily be put in a shell script, perhaps
taking the 5-digit numbers on the command line, or reading them from the
same file, which you could simply update with the new books you want.
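Such a script might look something like this (a minimal sketch; the
fetch_books name, the list filename, and the username:password
placeholder are all illustrative, not part of the original):

```shell
# fetch_books: hypothetical helper. Takes 5-digit book numbers as
# arguments; if none are given, falls back to reading them from
# list_of_5_digit_numbers.txt (one or more numbers per line).
fetch_books() {
    if [ "$#" -gt 0 ]; then
        numbers="$*"
    else
        numbers=$(cat list_of_5_digit_numbers.txt)
    fi
    for x in $numbers; do
        # Quoting keeps the [01-05] range out of the shell's hands,
        # so curl itself expands it into five separate requests.
        curl -u username:password \
            "http://www.loc.gov/nls/braille/${x}v[01-05].brf"
    done
}
```

Then `fetch_books 12260 17800 31415` grabs just those books, and a bare
`fetch_books` falls back to the list file.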

This would be faster than pulling down a whole range of texts, whether
because the range has gaps (each missing file wastes a request that
just returns a 404) or because it simply contains texts you're not
interested in.

Thanks to Bjoern for pointing out curl... a very handy tool indeed, and
it may even end up replacing wget in my toolbox of command-line utils.

-tim


_______________________________________________

Blinux-list@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/blinux-list
