curlHandle configure ?options?
curlHandle perform
curlHandle getinfo curlinfo_option
curlHandle cleanup
curlHandle reset
curlHandle duphandle
curlHandle pause
curlHandle resume
curl::transfer ?options?
curl::version
curl::escape url
curl::unescape url
curl::curlConfig option
curl::versioninfo option
curl::easystrerror errorCode
RETURN VALUE
configure is called to set the options for the transfer. Most operations in TclCurl have default actions, and by using the appropriate options you can make them behave differently (as documented). All options are set with the option followed by a parameter.
Note: the options set with this procedure are valid for the forthcoming data transfers that are performed when you invoke perform.
The options are not reset between transfers (except where noted), so if you want subsequent transfers with different options, you must change them between the transfers. You can optionally reset all options back to the internal default with curlHandle reset.
curlHandle is the return code from the curl::init call.
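As a minimal sketch of this configure/perform/cleanup cycle (assuming the TclCurl package is installed and the URL is reachable; -bodyvar collects the body into a Tcl variable):

```tcl
package require TclCurl

# Create a handle, set options, run the transfer, then release it.
set curlHandle [curl::init]
$curlHandle configure -url "http://example.com/" -bodyvar html

# perform raises a Tcl error carrying the libcurl error number.
if {[catch {$curlHandle perform} curlErrorNumber]} {
    puts "transfer failed: [curl::easystrerror $curlErrorNumber]"
} else {
    puts "received [string length $html] bytes"
}
$curlHandle cleanup
```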
OPTIONS
You hardly ever want this set in production use; you will almost always want it when you debug or report problems. Another neat option for debugging is -debugproc.
If this option is set and libcurl has been built with the standard name resolver, timeouts will not occur while the name resolve takes place. Consider building libcurl with c-ares support to enable asynchronous DNS lookups, which enables nice timeouts for name resolves without signals.
Setting nosignal to 1 makes libcurl NOT ask the system to ignore SIGPIPE signals, which otherwise are sent by the system when trying to send data to a socket that is closed at the other end. libcurl makes an effort to never cause such SIGPIPEs to trigger, but some operating systems have no way to avoid them, and even on those that do there are corner cases where they may still happen, contrary to our desire. In addition, using ntlm_wb authentication could cause a SIGCHLD signal to be raised.
By default, TclCurl uses its internal wildcard matching implementation. You can provide your own matching function with the -fnmatchproc option.
This feature is currently only supported for FTP downloads.
A brief introduction of its syntax follows:
ftp://example.com/some/path/photo?.jpeg
[a-zA-Z0-9] or [f-gF-G] - character interval
[abc] - character enumeration
[^abc] or [!abc] - negation
[[:name:]] - class expression. Supported classes are: alnum, lower, space, alpha, digit, print, upper, blank, graph, xdigit.
[][-!^] - special case - matches only '-', ']', '[', '!' or '^'. These characters have no special purpose.
[\[\]\\] - escape syntax. Matches '[', ']' or '\'.
Using the rules above, a file name pattern can be constructed:
ftp://example.com/some/path/[a-z[:upper:]\\].jpeg
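Putting the pattern to work might look like the sketch below (hypothetical server and path; -wildcardmatch is the option that enables the feature, as noted later in this page). The URL is brace-quoted so Tcl does not try to evaluate the brackets as command substitution:

```tcl
package require TclCurl

set curlHandle [curl::init]
# Fetch every .jpeg whose name starts with a lowercase or uppercase letter.
$curlHandle configure -url {ftp://example.com/some/path/[a-z[:upper:]]*.jpeg} \
    -wildcardmatch 1 -file "download.dat"
catch {$curlHandle perform} curlErrorNumber
$curlHandle cleanup
```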
NOTE: you will be passed as much data as possible in each invocation, but you cannot make any assumptions about the amount. It may be nothing if the file is empty, or it may be thousands of bytes.
If you stop the current transfer by returning 0 "prematurely" (i.e. before the server expected it, such as when you have said you will upload N bytes and you upload fewer than N bytes), you may find that the server "hangs" waiting for the rest of the data that won't come.
Bugs: when doing TFTP uploads, you must return the exact amount of data that the callback wants, or it will be considered the final packet by the server end and the transfer will end there.
proc ProgressCallback {dltotal dlnow ultotal ulnow}
For this option to work you have to set the noprogress option to '0'. Setting this option to the empty string will restore the original progress function.
If you transfer data with the multi interface, this procedure will not be called during periods of idleness unless you call the appropriate procedure that performs transfers.
You can pause and resume a transfer from within this procedure using the pause and resume commands.
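A minimal callback matching the signature above could be sketched as follows (the option name -progressproc is an assumption; remember that -noprogress must be 0 or the callback is never invoked):

```tcl
proc ProgressCallback {dltotal dlnow ultotal ulnow} {
    if {$dltotal > 0} {
        puts -nonewline "\rdownloaded $dlnow of $dltotal bytes"
        flush stdout
    }
    return 0   ;# a non-zero return would abort the transfer
}

$curlHandle configure -progressproc ProgressCallback -noprogress 0
```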
See also the headervar option to get the headers into an array.
debugProc {infoType data}
where infoType specifies what kind of information it is (0 text, 1 incoming header, 2 outgoing header, 3 incoming data, 4 outgoing data, 5 incoming SSL data, 6 outgoing SSL data).
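A sketch of such a procedure, mapping the numeric infoType to a readable label (the requirement that verbose be enabled is an assumption carried over from libcurl's debug callback):

```tcl
proc debugProc {infoType data} {
    # Labels follow the infoType values documented above.
    array set kind {0 text 1 header-in 2 header-out 3 data-in
                    4 data-out 5 ssl-in 6 ssl-out}
    puts "\[$kind($infoType)\] $data"
}

$curlHandle configure -debugproc debugProc -verbose 1
```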
ChunkBgnProc {remains}
Where remains is the number of files left to be transferred (or skipped).
This callback makes sense only when using the -wildcard option.
The available data is: filename, filetype (file, directory, symlink, device block, device char, named pipe, socket, door or error if it couldn't be identified), time, perm, uid, gid, size, hardlinks and flags.
ChunkEndProc {}
It should return '0' if everything is fine and '1' if some error occurred.
FnMatchProc {pattern string}
Returns '0' if it matches, '1' if it doesn't.
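Since the callback returns '0' for a match and '1' otherwise, a sketch backed by Tcl's own glob-style matcher could look like this (note that string match supports a slightly different pattern dialect than curl's fnmatch, so this is illustrative only):

```tcl
proc FnMatchProc {pattern string} {
    # string match returns 1 on match, so invert to the 0/1
    # convention the callback expects.
    expr {[string match $pattern $string] ? 0 : 1}
}

$curlHandle configure -fnmatchproc FnMatchProc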
This method is not fail-safe and there are occasions where non-successful response codes will slip through, especially when authentication is involved (response codes 401 and 407).
You might get some amount of headers transferred before this situation is detected, such as when a "100-continue" is received as a response to a POST/PUT and a 401 or 407 is received immediately afterwards.
If the given URL lacks the protocol part ("http://" or "ftp://" etc), it will attempt to guess which protocol to use based on the given host name. If the given protocol of the set URL is not supported, TclCurl will return the unsupported protocol error when you call perform. Use curl::versioninfo for detailed info on which protocols are supported.
Starting with version 7.22.0, the fragment part of the URI will not be sent as part of the path, as was the case previously.
NOTE: this is the one option required to be set before perform is called.
Accepted protocols are 'http', 'https', 'ftp', 'ftps', 'scp', 'sftp', 'telnet', 'ldap', and 'all'.
By default TclCurl will allow all protocols except for FILE and SCP. This is a difference compared to pre-7.19.4 versions, which would unconditionally follow all supported protocols.
When you tell the extension to use an HTTP proxy, TclCurl will transparently convert operations to HTTP even if you specify an FTP URL etc. This may have an impact on what other features of the library you can use, such as quote and similar FTP specifics, which will not work unless you tunnel through the HTTP proxy. Such tunneling is activated with proxytunnel.
TclCurl respects the environment variables http_proxy, ftp_proxy, all_proxy etc, if any of those are set. The use of this option does however override any possibly set environment variables.
Setting the proxy string to "" (an empty string) will explicitly disable the use of a proxy, even if there is an environment variable set for it.
The proxy host string can be specified exactly the same way as the proxy environment variables, including the protocol prefix (http://) and embedded user + password.
Since 7.22.0, the proxy string may be specified with a protocol:// prefix to specify alternative proxy protocols. Use socks4://, socks4a://, socks5:// or socks5h:// (the last one to enable socks5 and ask the proxy to do the resolving) to request a specific SOCKS version. If no protocol is specified, the proxy is treated as an HTTP proxy, as are http:// and all other prefixes.
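For example, a transfer can be routed through a SOCKS5 proxy, or any proxy use can be disabled explicitly (a sketch; address and port are hypothetical):

```tcl
# socks5h:// asks the proxy itself to resolve host names.
$curlHandle configure -url "http://example.com/" \
    -proxy "socks5h://127.0.0.1:1080"

# An empty string disables any proxy, overriding environment
# variables such as http_proxy.
$curlHandle configure -proxy ""
```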
If you set it to http1.0, it will only affect how libcurl speaks to a proxy when CONNECT is used. The HTTP version used for "regular" HTTP requests is instead controlled with httpversion.
The name resolve functions of various libc implementations don't re-read name server information unless explicitly told to (for example, by calling res_init(3)). This may cause TclCurl to keep using the older server even if DHCP has updated the server info, which may look like a DNS cache issue.
WARNING: this option is considered obsolete. Stop using it. Switch over to using the share interface instead! See tclcurl_share.
Pass the number specifying what remote port to connect to, instead of the one specified in the URL or the default port for the used protocol.
Pass a number to specify whether the TCP_NODELAY option should be set or cleared (1 = set, 0 = clear). The option is cleared by default. This will have no effect after the connection has been established.
Setting this option will disable TCP's Nagle algorithm. The purpose of this algorithm is to try to minimize the number of small packets on the network (where "small packets" means TCP segments less than the Maximum Segment Size (MSS) for the network).
Maximizing the amount of data sent per TCP segment is good because it amortizes the overhead of the send. However, in some cases (most notably telnet or rlogin) small segments may need to be sent without delay. This is less efficient than sending larger amounts of data at a time, and can contribute to congestion on the network if overdone.
You can set it to the following values:
Undefined values of the option will have this effect.
When using NTLM, you can set domain by prepending it to the user name and separating the domain and name with a forward (/) or backward slash (\). Like this: "domain/user:password" or "domain\user:password". Some HTTP servers (on Windows) support this style even for Basic authentication.
When using HTTP and -followlocation, TclCurl might perform several requests to possibly different hosts. TclCurl will only send this user and password information to hosts using the initial host name (unless -unrestrictedauth is set), so if TclCurl follows locations to other hosts it will not send the user and password to those. This is enforced to prevent accidental information leakage.
In order to specify the password to be used in conjunction with the user name use the -password option.
It should be used in conjunction with the -username option.
It should be used in the same way as -proxyuserpwd, except that it allows the username to contain a colon, as in the following example: "sip:user@example.com".
Note that the -proxyusername option is an alternative way to set the user name when connecting to a proxy. It doesn't make sense to use them together.
Note that libcurl will fork when necessary to run the winbind application and kill it when complete, calling waitpid() to await its exit when done. On POSIX operating systems, killing the process will cause a SIGCHLD signal to be raised (regardless of whether -nosignal is set). This behavior is subject to change in future versions of libcurl.
You need to build libcurl with GnuTLS or OpenSSL with TLS-SRP support for this to work.
The methods are those listed above for the httpauth option. As of this writing, only Basic and NTLM work.
This is a request, not an order; the server may or may not do it. This option must be set or else any unsolicited encoding done by the server is ignored. See the special file lib/README.encoding in libcurl docs for details.
Transfer-Encoding differs slightly from the Content-Encoding you ask for with -encoding in that a Transfer-Encoding is strictly meant to be for the transfer and thus MUST be decoded before the data arrives in the client. Traditionally, Transfer-Encoding has been much less used and supported by both HTTP clients and HTTP servers.
This means that the extension will re-send the same request on the new location and follow new Location: headers all the way until no more such headers are returned. -maxredirs can be used to limit the number of redirects TclCurl will follow.
Since 7.19.4, TclCurl can limit what protocols it will automatically follow. The accepted protocols are set with -redirprotocols and it excludes the FILE protocol by default.
The non-RFC behaviour is ubiquitous in web browsers, so the extension does the conversion by default to maintain consistency. However, a server may require a POST to remain a POST after such a redirection.
This option is meaningful only when setting -followlocation
The option used to be known as -post301, which should still work but is now deprecated.
This option is deprecated starting with version 0.12.1, you should use -upload.
This option does not limit how much data TclCurl will actually send, as that is controlled entirely by what the read callback returns.
Use the -postfields option to specify what data to post and -postfieldsize to set the data size. Optionally, you can provide data to POST using the -readproc option.
You can override the default POST Content-Type: header by setting your own with -httpheader.
Using POST with HTTP 1.1 implies the use of a "Expect: 100-continue" header. You can disable this header with -httpheader as usual.
If you use POST to a HTTP 1.1 server, you can send data without knowing the size before starting the POST if you use chunked encoding. You enable this by adding a header like "Transfer-Encoding: chunked" with -httpheader. With HTTP 1.0 or without chunked transfer, you must specify the size in the request.
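A simple urlencoded POST can be sketched like this (hypothetical URL and form fields; the Content-Type defaults to application/x-www-form-urlencoded as noted below):

```tcl
package require TclCurl

set curlHandle [curl::init]
$curlHandle configure -url "http://example.com/form" \
    -post 1 -postfields "name=admin&phone=555-1212"
catch {$curlHandle perform} curlErrorNumber
$curlHandle cleanup
```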
When setting post to 1, it will automatically set nobody to 0.
NOTE: if you have issued a POST request and want to make a HEAD or GET request instead, you must explicitly pick the new request type using -nobody, -httpget or similar.
This is a normal application/x-www-form-urlencoded kind, which is the most commonly used one by HTML forms.
If you want to do a zero-byte POST, you need to set -postfieldsize explicitly to zero, as simply setting -postfields to NULL or "" effectively disables the sending of the specified string. TclCurl will instead assume that the POST data will be sent using the read callback!
Using POST with HTTP 1.1 implies the use of a "Expect: 100-continue" header. You can disable this header with -httpheader as usual.
Note: to make multipart/formdata posts (aka rfc1867-posts), check out -httppost option.
This is the only case where the data is reset after a transfer.
First, there are some basics you need to understand about multipart/formdata posts. Each part consists of at least a NAME and a CONTENTS part. If the part is made for file upload, there are also a stored CONTENT-TYPE and a FILENAME. Below, we'll discuss what options you use to set these properties in the parts you want to add to your post.
The list must contain a 'name' tag with the name of the section, followed by a string with the name. There are three tags to indicate the value of the section: 'value' followed by a string with the data to post, 'file' followed by the name of the file to post, and 'contenttype' with the type of the data (text/plain, image/jpg, ...). You can also indicate a false file name with 'filename'; this is useful in case the server checks whether the given file name is valid, for example by testing whether it starts with 'c:\' as any real file name does, or if you want to include the full path of the file to post. You can also post the content of a variable as if it were a file with the options 'bufferName' and 'buffer', or use 'filecontent' followed by a file name to read that file and use its contents as data.
Should you need to specify extra headers for the form POST section, use 'contentheader' followed by a list with the headers to post.
Please see 'httpPost.tcl' and 'httpBufferPost.tcl' for examples.
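As a sketch of the tag list described above (file names and section names are hypothetical; one -httppost list is given per section of the multipart body):

```tcl
# A plain text section.
$curlHandle configure -httppost \
    [list name "description" value "monthly report" contenttype "text/plain"]

# A file-upload section with a false file name, as the server
# may check that the name looks like a real path.
$curlHandle configure -httppost \
    [list name "attachment" file "report.pdf" \
          contenttype "application/pdf" filename "c:\\report.pdf"]
```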
If TclCurl can't set the data to post an error will be returned:
The headers included in the linked list must not be CRLF-terminated, because TclCurl adds CRLF after each header item. Failure to comply with this will result in strange bugs because the server will most likely ignore part of the headers you specified.
The first line in a request (containing the method, usually a GET or POST) is not a header and cannot be replaced using this option. Only the lines following the request-line are headers. Adding this method line in this list of headers will only cause your request to send an invalid header.
NOTE: The most commonly replaced headers have "shortcuts" in the options: cookie, useragent, and referer.
NOTE: The alias itself is not parsed for any version strings. Before version 7.16.3, TclCurl used the value set by the httpversion option, but starting with 7.16.3 the protocol is assumed to match HTTP 1.0 when an alias matches.
If you need to set multiple cookies, you need to set them all using a single option, concatenated into one single string. Set multiple cookies in one string like this: "name1=content1; name2=content2;" etc.
This option sets the cookie header explicitly in the outgoing request(s). If multiple requests are done due to authentication, followed redirections or similar, they will all get this cookie passed on.
Using this option multiple times will only make the latest string override the previous ones.
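For instance (hypothetical cookie names; both cookies must go in the single -cookie string, since a second use of the option would replace the first):

```tcl
$curlHandle configure -url "http://example.com/account" \
    -cookie "sessionid=abc123; theme=dark;"
```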
Given an empty or non-existing file, this option will enable cookies for this curl handle, making it understand and parse received cookies and then use matching cookies in future requests.
If you use this option multiple times, you add more files to read.
Using this option also enables cookies for this session, so if you, for example, follow a location it will make matching cookies get sent accordingly.
TclCurl will not and cannot report an error for this. Using 'verbose' will get a warning to display, but that is the only visible feedback you get about this possibly lethal situation.
When setting httpget to 1, nobody will automatically be set to 0.
Each recipient in SMTP lingo is specified with angle brackets (<>), but should you not use an angle bracket as the first letter, TclCurl will assume you are providing a single email address and enclose it in angle brackets for you.
Specify the block size to use for TFTP data transmission. The valid range as per RFC 2348 is 8-65464 bytes. The default of 512 bytes will be used if this option is not specified. The specified block size will only be used if supported by the remote server. If the server does not return an option acknowledgement, or returns an option acknowledgement with no blksize, the default of 512 bytes will be used.
The address can be followed by a ':' to specify a port, optionally followed by a '-' to specify a port range. If the port specified is 0, the operating system will pick a free port. If a range is provided and no port in the range is available, libcurl will report CURLE_FTP_PORT_FAILED for the handle. Invalid port/range settings are ignored. IPv6 addresses followed by a port or port range have to be in brackets. IPv6 addresses without a port/range specifier can be in brackets.
Examples with specified ports:
eth0:0 192.168.1.2:32000-33000 curl.se:32123 [::1]:1234-4567
You disable PORT again and go back to using the passive version by setting this option to an empty string.
Prefix the command with an asterisk (*) to make TclCurl continue even if the command fails, as by default TclCurl will stop.
Disable this operation again by setting an empty string to this option.
Keep in mind the commands to send must be 'raw' ftp commands, for example, to create a directory you need to send mkd Test, not mkdir Test.
Valid SFTP commands are: chgrp, chmod, chown, ln, mkdir, pwd, rename, rm, rmdir and symlink.
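As a sketch (assuming the option is -quote and that it takes a Tcl list of raw commands; the directory names are hypothetical), the asterisk lets the transfer continue even if removing the old directory fails:

```tcl
$curlHandle configure -url "ftp://example.com/" \
    -quote [list "*rmd OldBackup" "mkd Backup"]
```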
This causes an FTP NLST command to be sent. Beware that some FTP servers list only files in their response to NLST; they might not include subdirectories and symbolic links.
Setting this option to 1 also implies a directory listing even if the URL doesn't end with a slash, which otherwise is necessary.
Do NOT use this option if you also use -wildcardmatch as it will effectively break that feature.
Set to 1 to tell TclCurl to send a PRET command before PASV (and EPSV). Certain FTP servers, mainly drftpd, require this non-standard command for directory listings as well as uploads and downloads in PASV mode. It has no effect when using the active FTP transfer mode.
This setting also applies to SFTP connections. TclCurl will attempt to create the remote directory if it can't obtain a handle to the target location. The creation will fail if a file of the same name as the directory to create already exists, or if lack of permissions prevents creation.
If set to 2, TclCurl will retry the CWD command again if the subsequent MKD command fails. This is especially useful if you're doing many simultaneous connections against the same server and they all have this option enabled: CWD may first fail, then another connection does MKD before this one so this connection's MKD fails, but retrying CWD works.
This option has no effect if PORT, EPRT or EPSV is used instead of PASV.
Pass TclCurl one of the values from below, to alter how TclCurl issues "AUTH TLS" or "AUTH SSL" when FTP over SSL is activated (see -ftpssl).
You may need this option because of servers like BSDFTPD-SSL which won't work properly when "AUTH SSL" is issued (although the server responds fine and everything) but requires "AUTH TLS" instead.
NOTE: TclCurl does not do a complete ASCII conversion when doing ASCII transfers over FTP. This is a known limitation/flaw that nobody has rectified. TclCurl simply sets the mode to ascii and performs a standard transfer.
Ranges only work on HTTP, FTP and FILE transfers.
For FTP, set this option to -1 to make the transfer start from the end of the target file (useful to continue an interrupted upload).
When doing uploads with FTP, the resume position is where in the local/source file TclCurl should try to resume the upload from and it will then append the source file to the remote target file.
Note that TclCurl will still act on and make assumptions based on the method it would use if you did not set your custom one, and will behave accordingly. Thus, changing this to HEAD when TclCurl otherwise would do a GET might cause TclCurl to act funny, and similar. To switch to a proper HEAD, use -nobody; to switch to a proper POST, use -post or -postfields, and so on.
To change the request to GET, you should use httpget. Change the request to POST with post, etc.
This option is mandatory for uploading using SCP.
Using PUT with HTTP 1.1 implies the use of a "Expect: 100-continue" header. You can disable this header with -httpheader as usual.
If you use PUT to a HTTP 1.1 server, you can upload data without knowing the size before starting the transfer if you use chunked encoding. You enable this by adding a header like "Transfer-Encoding: chunked" with -httpheader. With HTTP 1.0 or without chunked transfer, you must specify the size.
NOTE: The file size is not always known prior to download, and for such files this option has no effect even if the file transfer ends up being larger than this given limit. This concerns both FTP and HTTP transfers.
In unix-like systems, this might cause signals to be used unless -nosignal is used.
When reaching the maximum limit, TclCurl closes the oldest connection in the cache to prevent the number of open connections from increasing.
Note: if you have already performed transfers with this curl handle, setting a smaller maxconnects than before may cause open connections to unnecessarily get closed.
If you add this easy handle to a multi handle, this setting is not acknowledged; instead you must configure the multi handle's own maxconnects option.
In unix-like systems, this might cause signals to be used unless -nosignal is set.
Each single name resolve string should be written using the format HOST:PORT:ADDRESS where HOST is the name TclCurl will try to resolve, PORT is the port number of the service where TclCurl wants to connect to the HOST and ADDRESS is the numerical IP address. If libcurl is built to support IPv6, ADDRESS can be either IPv4 or IPv6 style addressing.
This option effectively pre-populates the DNS cache with entries for the host+port pair, so redirects and all operations against that HOST+PORT will instead use your provided ADDRESS.
You can remove names from the DNS cache again, to stop providing these fake resolves, by including a string in the list that uses the format "-HOST:PORT". The host name must be prefixed with a dash, and the host name and port number must exactly match what was already added previously.
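Both forms can be sketched as follows (assuming the option is named -resolve and takes a Tcl list; the address is hypothetical):

```tcl
# Pin example.com:443 to a fixed address for this handle.
$curlHandle configure -resolve [list "example.com:443:203.0.113.7"]

# Later, remove the fake entry again with the "-HOST:PORT" form.
$curlHandle configure -resolve [list "-example.com:443"]
```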
You can use ftps:// URLs to explicitly switch on SSL/TLS for the control connection and the data connection.
Alternatively you can set the option to one of these values:
With NSS this is the nickname of the certificate you wish to authenticate with. If you want to use a file from the current directory, please precede it with the "./" prefix, in order to avoid confusion with a nickname.
NOTE: The format "ENG" enables you to load the private key from a crypto engine. In this case -sslkey is used as an identifier passed to the engine. You have to set the crypto engine with -sslengine. The "DER" format key file currently does not work because of a bug in OpenSSL.
You never need a pass phrase to load a certificate, but you need one to load your private key.
This option used to be known as -sslkeypasswd and -sslcertpasswd.
NOTE: If the crypto device cannot be loaded, an error will be returned.
NOTE: If the crypto device cannot be set, an error will be returned.
When negotiating an SSL connection, the server sends a certificate indicating its identity. TclCurl verifies whether the certificate is authentic, i.e. that you can trust that the server is who the certificate says it is. This trust is based on a chain of digital signatures, rooted in certification authority (CA) certificates you supply.
TclCurl uses a default bundle of CA certificates that comes with libcurl but you can specify alternate certificates with the -cainfo or the -capath options.
When -sslverifypeer is nonzero, and the verification fails to prove that the certificate is authentic, the connection fails. When the option is zero, the peer certificate verification succeeds regardless.
Authenticating the certificate is not by itself very useful. You typically want to ensure that the server, as authentically identified by its certificate, is the server you mean to be talking to; use -sslverifyhost to control that. The check that the host name in the certificate is valid for the host name you're connecting to is done independently of this option.
This option is by default set to the system path where libcurl's cacert bundle is assumed to be stored, as established at build time.
When built against NSS this is the directory that the NSS certificate database resides in.
This option apparently does not work in Windows due to some limitation in openssl.
This option is OpenSSL-specific and does nothing if libcurl is built to use GnuTLS. NSS-powered libcurl provides the option only for backward compatibility.
A specific error code (CURLE_SSL_CRL_BADFILE) is defined with the option. It is returned when the SSL exchange fails because the CRL file cannot be loaded. A failure in certificate verification due to a revocation information found in the CRL does not trigger this specific error.
When negotiating an SSL connection, the server sends a certificate indicating its identity.
When -sslverifyhost is set to 2, that certificate must indicate that the server is the server to which you meant to connect, or the connection fails.
TclCurl considers the server the intended one when the Common Name field or a Subject Alternate Name field in the certificate matches the host name in the URL to which you told TclCurl to connect.
When set to 1, the certificate must contain a Common Name field, but it does not matter what name it says. (This is not ordinarily a useful setting).
When the value is 0, the connection succeeds regardless of the names in the certificate.
The default value for this option is 2.
This option controls checking the identity that the server claims. The server could be lying. To control lying, see -sslverifypeer. If libcurl is built against NSS and -sslverifypeer is zero, -sslverifyhost is ignored.
For OpenSSL and GnuTLS, valid examples of cipher lists include 'RC4-SHA' and 'SHA1+DES'. You will find more details about cipher lists at this URL:

http://www.openssl.org/docs/apps/ciphers.html

For NSS, valid examples of cipher lists include 'rsa_rc4_128_md5' and 'rsa_aes_128_sha'. With NSS you don't add or remove ciphers: if you use this option, all known ciphers are disabled and only those passed in are enabled.

You'll find more details about the NSS cipher lists at this URL:

http://directory.fedora.redhat.com/docs/mod_nss.html
It gets passed a list with three elements: the first is a list with the type of the key from the known_hosts file and the key itself; the second is another list with the type of the key from the remote site and the key itself; the third tells you what TclCurl thinks about the match.
The known key types are: "rsa", "rsa1" and "dss", in any other case "unknown" is given.
TclCurl's opinion about how they match may be: "match", "mismatch", "missing" or "error".
The procedure must return:
Any other value will cause the connection to be closed.
CURLOPT_FRESH_CONNECT, CURLOPT_FORBID_REUSE, CURLOPT_PRIVATE, CURLOPT_SSL_CTX_FUNCTION, CURLOPT_SSL_CTX_DATA, CURLOPT_CONNECT_ONLY, CURLOPT_OPENSOCKETFUNCTION and CURLOPT_OPENSOCKETDATA.
It must be called with the same curlHandle that the curl::init call returned. You can make any number of calls to perform with the same handle. If you intend to transfer more than one file, you are even encouraged to do so: TclCurl will then attempt to re-use the same connection for the following transfers, making the operations faster, less CPU intensive and using fewer network resources. Just note that you will have to use configure between the invocations to set options for the following perform.
You must never call this procedure simultaneously from two places using the same handle. Let it return first before invoking it another time. If you want parallel transfers, you must use several curl handles.
The following information can be extracted:
In order for this to work you have to set the -filetime option before the transfer.
NOTE: this option is only available in libcurl built with OpenSSL support.
Re-initializes all options previously set on a specified handle to the default values.
This puts back the handle to the same state as it was in when it was just created with curl::init.
It does not change the following information kept in the handle: live connections, the Session ID cache, the DNS cache, the cookies and shares.
You can also get the getinfo information by using -infooption variable pairs; after the transfer, the variable will contain the value that would have been returned by $curlHandle getinfo option.
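Querying a finished transfer can be sketched as follows (assuming the TclCurl package is available and that responsecode, totaltime and effectiveurl are among the supported curlinfo_option names):

```tcl
package require TclCurl

set curlHandle [curl::init]
$curlHandle configure -url "http://example.com/" -bodyvar page
catch {$curlHandle perform}

# Inspect the completed transfer.
puts "HTTP status: [$curlHandle getinfo responsecode]"
puts "total time:  [$curlHandle getinfo totaltime] s"
puts "final URL:   [$curlHandle getinfo effectiveurl]"
$curlHandle cleanup
```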
Applications should use this information to judge whether things are possible to do or not, instead of using compile-time checks, as dynamic/DLL libraries can be changed independently of applications.