Re: [PATCH v2 1/3] remote-curl: add testing for intelligent retry for HTTP

> Sean McAllister <smcallis@xxxxxxxxxx> writes:
>
> > +# generate a process unique one-up value
> > +global_counter_for_nonce=0
> > +gen_nonce () {
> > +     global_counter_for_nonce=$((global_counter_for_nonce + 1)) &&
> > +     echo "$global_counter_for_nonce"
> > +}
>
> This must not be called in a subprocess if the caller truly wants
> uniqueness.  May want to be described in a comment.
>
Done.
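Something along these lines (a sketch; the exact wording in v3 may
differ slightly):

    # generate a process unique one-up value
    #
    # NOTE: the counter lives in this shell process; callers that want
    # truly unique values must not invoke this from a subshell or other
    # subprocess, since the increment would be lost when it exits.
    global_counter_for_nonce=0
    gen_nonce () {
            global_counter_for_nonce=$((global_counter_for_nonce + 1)) &&
            echo "$global_counter_for_nonce"
    }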

> > diff --git a/t/lib-httpd/error-ntime.sh b/t/lib-httpd/error-ntime.sh
> > new file mode 100755
> > index 0000000000..64dc878746
> > --- /dev/null
> > +++ b/t/lib-httpd/error-ntime.sh
> > @@ -0,0 +1,80 @@
> > +#!/bin/sh
> > +
> > +# Script to simulate a transient error code with Retry-After header set.
> > +#
> > +# PATH_INFO must be of the form /<nonce>/<times>/<retcode>/<retry-after>/<path>
> > +#   (eg: /dc724af1/3/429/10/some/url)
> > +#
> > +# The <nonce> value uniquely identifies the URL, since we're simulating
> > +# a stateful operation using a stateless protocol, we need a way to "namespace"
> > +# URLs so that they don't step on each other.
> > +#
> > +# The first <times> times this endpoint is called, it will return the given
> > +# <retcode>, and if the <retry-after> is non-negative, it will set the
> > +# Retry-After head to that value.
> > +#
> > +# Subsequent calls will return a 302 redirect to <path>.
> > +#
> > +# Supported error codes are 429,502,503, and 504
> > +
>
> I thought "error codes" were rephrased after the first round's
> review to some other term (which I do not recall--was it status?)?
>
Yes, you're right; fixed to use that term and adjusted the formatting.

> > +print_status() {
> > +     if [ "$1" -eq "302" ]; then
> > +             printf "Status: 302 Found\n"
> > +     elif [ "$1" -eq "429" ]; then
> > +             printf "Status: 429 Too Many Requests\n"
> > +     elif [ "$1" -eq "502" ]; then
> > +             printf "Status: 502 Bad Gateway\n"
> > +     elif [ "$1" -eq "503" ]; then
> > +             printf "Status: 503 Service Unavailable\n"
> > +     elif [ "$1" -eq "504" ]; then
> > +             printf "Status: 504 Gateway Timeout\n"
> > +     else
> > +             printf "Status: 500 Internal Server Error\n"
> > +     fi
> > +     printf "Content-Type: text/plain\n"
> > +}
>
> Style (Documentation/CodingGuidelines).
>
>         print_status () {
>                 if test "$1" = "302"
>                 then
>                         printf "...";
>                 ...
>         }
>
> but in this particular case, I do not see why we want if/else
> cascade.  Perhaps
>
>         print_status () {
>                 case "$1" in
>                 302)    printf "Status: 302 Found\n" ;;
>                 429)    ... ;;
>                 ...
>                 *)      printf "Status: 500 Internal Server Error\n" ;;
>                 esac
>                 printf "Content-Type: text/plain\n";
>         }
>
> would be more standard?
>
> Also I am not sure why we want "printf ...\n" not "echo" here.  If
> we want to talk HTTP ourselves strictly, I would understand avoiding
> "echo" and doing "printf ...\r\n", though.  If we fear DOS line
> endings coming out of localized "echo", and ensure we use LF line
> ending even on Windows and Cygwin, it is sort of understandable but
> if that is what is going on, that does not explain a lone "echo"
> at the end of the caller.
>
> Puzzled.
>
I modified it to use echo as the standard; it turns out Apache handles
terminating the lines with CRLF for you.  As for the lone echo, a double
CRLF signals the end of the response headers and the start of the body,
and curl doesn't behave properly without it.
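
Roughly, the helper now looks like this (using the case/esac form you
suggested, with echo and Apache's CGI layer supplying the CRLF on the
wire):

    print_status () {
            case "$1" in
            302)    echo "Status: 302 Found" ;;
            429)    echo "Status: 429 Too Many Requests" ;;
            502)    echo "Status: 502 Bad Gateway" ;;
            503)    echo "Status: 503 Service Unavailable" ;;
            504)    echo "Status: 504 Gateway Timeout" ;;
            *)      echo "Status: 500 Internal Server Error" ;;
            esac
            echo "Content-Type: text/plain"
    }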


>   +oIFS="$IFS"
> > +IFS='/'
> > +set -f
> > +set -- $PATH_INFO
> > +set +f
>   +IFS="$oIFS"
> > +
> > +# pull out first four path components
> > +shift
> > +nonce=$1
> > +times=$2
> > +code=$3
> > +retry=$4
> > +shift 4
> > +
> > +# glue the rest back together as redirect path
> > +path=""
> > +while [ "$#" -gt "0" ]; do
> > +     path="${path}/$1"
> > +     shift
> > +done
>
> Hmph.  Would this work better, I wonder?
>
>         path=${PATH_INFO#*/}    ;# discard leading '/'
>         nonce=${path%%/*}       path=${path#*/}
>         times=${path%%/*}       path=${path#*/}
>         code=${path%%/*}        path=${path#*/}
>         retry=${path%%/*}       path=${path#*/}
>
> At least it is shorter and easier to read.
>
I agree it's better; changed.


> > +# leave a cookie for this request/retry count
> > +state_file="request_${REMOTE_ADDR}_${nonce}_${times}_${code}_${retry}"
> > +
> > +if [ ! -f "$state_file" ]; then
> > +     echo 0 > "$state_file"
> > +fi
>
> Style (Documentation/CodingGuidelines, look for "For shell scripts
> specifically").
>
>  - use "test" not "[]";
>
>  - don't write ";then" on the same line (rule of thumb. you should
>    be able to write your shell scripts without semicolon except for
>    double-semicolons in case/esac statements)
>
>  - don't leave SP between redirection operator '>' and its target
>    file, i.e. write 'echo 0 >"$state_file"'.
>
Done.  I went back over the guidelines and tried to follow all of them.
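For example, the state-file check now reads roughly:

    # leave a cookie for this request/retry count
    state_file="request_${REMOTE_ADDR}_${nonce}_${times}_${code}_${retry}"

    if ! test -f "$state_file"
    then
            echo 0 >"$state_file"
    fi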

> > +read -r cnt < "$state_file"
> > +if [ "$cnt" -lt "$times" ]; then
> > +     echo $((cnt+1)) > "$state_file"
> > +
> > +     # return error
> > +     print_status "$code"
> > +     if [ "$retry" -ge "0" ]; then
> > +             printf "Retry-After: %s\n" "$retry"
> > +     fi
> > +else
> > +     # redirect
> > +     print_status 302
> > +     printf "Location: %s?%s\n" "$path" "${QUERY_STRING}"
> > +fi
> > +
> > +echo
>
> This "echo" to the client also puzzles me further, after seeing
> puzzling use of "printf ...\n" earlier.
>
See the earlier comment; we need a second CRLF to end the HTTP headers.
Before I added this, curl was very unhappy.
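
Concretely, the script now carries a comment above that final echo,
roughly:

    # the blank line (CRLF once Apache converts it) terminates the CGI
    # response headers and starts the body
    echo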

> > diff --git a/t/t5601-clone.sh b/t/t5601-clone.sh
> > index 7df3c5373a..72aaed5a93 100755
> > --- a/t/t5601-clone.sh
> > +++ b/t/t5601-clone.sh
> > @@ -756,6 +756,15 @@ test_expect_success 'partial clone using HTTP' '
> >       partial_clone "$HTTPD_DOCUMENT_ROOT_PATH/server" "$HTTPD_URL/smart/server"
> >  '
> >
> > +test_expect_success 'partial clone using HTTP with redirect' '
> > +     _NONCE=`gen_nonce` && export _NONCE &&
> > +     curl "$HTTPD_URL/error_ntime/${_NONCE}/3/502/10/smart/server" &&
> > +     curl "$HTTPD_URL/error_ntime/${_NONCE}/3/502/10/smart/server" &&
> > +     curl "$HTTPD_URL/error_ntime/${_NONCE}/3/502/10/smart/server" &&
>
> Why do we need to test "curl" here?  Is this remnant of debugging of
> the server side?
>
At this point in the patch set the retry logic isn't implemented yet,
so these calls trigger the error responses manually; the clone that
follows then uses the same URL to verify that clone still works when
the server answers with a 302 redirect.  I've modified it in v3 so that
it only has to make the manual call once, though.
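
So the v3 test looks roughly like this (with <times> reduced to 1 so a
single priming request is enough; the exact shape may differ slightly):

    test_expect_success 'partial clone using HTTP with redirect' '
            _NONCE=$(gen_nonce) && export _NONCE &&
            curl "$HTTPD_URL/error_ntime/${_NONCE}/1/502/10/smart/server" &&
            partial_clone "$HTTPD_DOCUMENT_ROOT_PATH/server" "$HTTPD_URL/error_ntime/${_NONCE}/1/502/10/smart/server"
    '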

> > +     partial_clone "$HTTPD_DOCUMENT_ROOT_PATH/server" "$HTTPD_URL/error_ntime/${_NONCE}/3/502/10/smart/server"
> > +'
> > +
> > +
> >  # DO NOT add non-httpd-specific tests here, because the last part of this
> >  # test script is only executed when httpd is available and enabled.


