FAQ/TODO/KNOWN_BUGS: convert to markdown

- convert to markdown
- auto-generate the TOCs on the website, remove them from the docs
- cleanups
- spellchecked
- updated links

Closes #19875
diff --git a/.github/scripts/badwords.ok b/.github/scripts/badwords.ok
index d5401a8..fe2d9cf 100644
--- a/.github/scripts/badwords.ok
+++ b/.github/scripts/badwords.ok
@@ -4,4 +4,4 @@
 #
 # whitelisted uses of bad words
 # file:[line]:rule
-docs/FAQ::\bwill\b
+docs/FAQ.md::\bwill\b
diff --git a/.github/scripts/pyspelling.words b/.github/scripts/pyspelling.words
index 9844242..6b755d2 100644
--- a/.github/scripts/pyspelling.words
+++ b/.github/scripts/pyspelling.words
@@ -53,6 +53,7 @@
 backend
 backends
 backoff
+backtick
 backticks
 balancers
 Baratov
@@ -171,8 +172,10 @@
 Debian
 DEBUGBUILD
 decrypt
+decrypted
 decrypting
 deepcode
+defacto
 DELE
 DER
 dereference
@@ -201,6 +204,7 @@
 DNS
 dns
 dnsop
+DNSSEC
 DoH
 DoT
 doxygen
@@ -214,6 +218,7 @@
 EAGAIN
 EBCDIC
 ECC
+ECCN
 ECDHE
 ECH
 ECHConfig
@@ -258,6 +263,7 @@
 Feltzing
 ffi
 filesize
+filesystem
 FindCURL
 FLOSS
 fnmatch
@@ -299,6 +305,7 @@
 giga
 Gisle
 Glesys
+glibc
 globbed
 globbing
 gmail
@@ -380,6 +387,7 @@
 IMAPS
 imaps
 impacket
+implementers
 init
 initializer
 inlined
@@ -433,6 +441,7 @@
 LDAPS
 ldaps
 LF
+LGPL
 LGTM
 libbacktrace
 libbrotlidec
@@ -568,6 +577,7 @@
 NetBSD
 netrc
 netstat
+NetWare
 Netware
 NFS
 nghttp
@@ -711,6 +721,7 @@
 Roadmap
 Rockbox
 roffit
+RPC
 RPG
 RR
 RRs
@@ -798,6 +809,7 @@
 src
 SRP
 SRWLOCK
+SSI
 SSL
 ssl
 SSLeay
@@ -962,8 +974,10 @@
 WebDAV
 WebOS
 webpage
+webpages
 WebSocket
 WEBSOCKET
+Wget
 WHATWG
 whitespace
 Whitespaces
@@ -982,6 +996,7 @@
 Xbox
 XDG
 xdigit
+XHTML
 Xilinx
 xmllint
 XP
diff --git a/.github/workflows/checkdocs.yml b/.github/workflows/checkdocs.yml
index ba3858f..a079aea 100644
--- a/.github/workflows/checkdocs.yml
+++ b/.github/workflows/checkdocs.yml
@@ -138,7 +138,7 @@
           persist-credentials: false
 
       - name: 'badwords'
-        run: .github/scripts/badwords.pl -w .github/scripts/badwords.ok '**.md' docs/FAQ docs/KNOWN_BUGS docs/TODO packages/OS400/README.OS400 < .github/scripts/badwords.txt
+        run: .github/scripts/badwords.pl -w .github/scripts/badwords.ok '**.md' packages/OS400/README.OS400 < .github/scripts/badwords.txt
 
       - name: 'verify synopsis'
         run: .github/scripts/verify-synopsis.pl docs/libcurl/curl*.md
diff --git a/README b/README
index 2f68ef0..9401434 100644
--- a/README
+++ b/README
@@ -15,7 +15,8 @@
   available to be used by your software. Read the libcurl.3 man page to
   learn how.
 
-  You find answers to the most frequent questions we get in the FAQ document.
+  You find answers to the most frequent questions we get in the FAQ.md
+  document.
 
   Study the COPYING file for distribution terms.
 
diff --git a/REUSE.toml b/REUSE.toml
index e9e9ecf..e341973 100644
--- a/REUSE.toml
+++ b/REUSE.toml
@@ -13,14 +13,11 @@
 
 [[annotations]]
 path = [
-  "docs/FAQ",
   "docs/INSTALL",
-  "docs/KNOWN_BUGS",
   "docs/libcurl/symbols-in-versions",
   "docs/MAIL-ETIQUETTE",
   "docs/options-in-versions",
   "docs/THANKS",
-  "docs/TODO",
   "lib/libcurl.vers.in",
   "lib/libcurl.def",
   "packages/OS400/README.OS400",
diff --git a/docs/FAQ b/docs/FAQ
deleted file mode 100644
index ef93c91..0000000
--- a/docs/FAQ
+++ /dev/null
@@ -1,1559 +0,0 @@
-                                  _   _ ____  _
-                              ___| | | |  _ \| |
-                             / __| | | | |_) | |
-                            | (__| |_| |  _ <| |___
-                             \___|\___/|_| \_\_____|
-
-FAQ
-
- 1. Philosophy
-  1.1 What is curl?
-  1.2 What is libcurl?
-  1.3 What is curl not?
-  1.4 When will you make curl do XXXX ?
-  1.5 Who makes curl?
-  1.6 What do you get for making curl?
-  1.7 What about CURL from curl.com?
-  1.8 I have a problem, who do I mail?
-  1.9 Where do I buy commercial support for curl?
-  1.10 How many are using curl?
-  1.11 Why do you not update ca-bundle.crt
-  1.12 I have a problem, who can I chat with?
-  1.13 curl's ECCN number?
-  1.14 How do I submit my patch?
-  1.15 How do I port libcurl to my OS?
-
- 2. Install Related Problems
-  2.1 configure fails when using static libraries
-  2.2 Does curl work/build with other SSL libraries?
-  2.3 How do I upgrade curl.exe in Windows?
-  2.4 Does curl support SOCKS (RFC 1928) ?
-
- 3. Usage Problems
-  3.1 curl: (1) SSL is disabled, https: not supported
-  3.2 How do I tell curl to resume a transfer?
-  3.3 Why does my posting using -F not work?
-  3.4 How do I tell curl to run custom FTP commands?
-  3.5 How can I disable the Accept: */* header?
-  3.6 Does curl support ASP, XML, XHTML or HTML version Y?
-  3.7 Can I use curl to delete/rename a file through FTP?
-  3.8 How do I tell curl to follow HTTP redirects?
-  3.9 How do I use curl in my favorite programming language?
-  3.10 What about SOAP, WebDAV, XML-RPC or similar protocols over HTTP?
-  3.11 How do I POST with a different Content-Type?
-  3.12 Why do FTP-specific features over HTTP proxy fail?
-  3.13 Why do my single/double quotes fail?
-  3.14 Does curl support JavaScript or PAC (automated proxy config)?
-  3.15 Can I do recursive fetches with curl?
-  3.16 What certificates do I need when I use SSL?
-  3.17 How do I list the root directory of an FTP server?
-  3.18 Can I use curl to send a POST/PUT and not wait for a response?
-  3.19 How do I get HTTP from a host using a specific IP address?
-  3.20 How to SFTP from my user's home directory?
-  3.21 Protocol xxx not supported or disabled in libcurl
-  3.22 curl -X gives me HTTP problems
-
- 4. Running Problems
-  4.2 Why do I get problems when I use & or % in the URL?
-  4.3 How can I use {, }, [ or ] to specify multiple URLs?
-  4.4 Why do I get downloaded data even though the webpage does not exist?
-  4.5 Why do I get return code XXX from an HTTP server?
-   4.5.1 "400 Bad Request"
-   4.5.2 "401 Unauthorized"
-   4.5.3 "403 Forbidden"
-   4.5.4 "404 Not Found"
-   4.5.5 "405 Method Not Allowed"
-   4.5.6 "301 Moved Permanently"
-  4.6 Can you tell me what error code 142 means?
-  4.7 How do I keep usernames and passwords secret in curl command lines?
-  4.8 I found a bug
-  4.9 curl cannot authenticate to a server that requires NTLM?
-  4.10 My HTTP request using HEAD, PUT or DELETE does not work
-  4.11 Why do my HTTP range requests return the full document?
-  4.12 Why do I get "certificate verify failed" ?
-  4.13 Why is curl -R on Windows one hour off?
-  4.14 Redirects work in browser but not with curl
-  4.15 FTPS does not work
-  4.16 My HTTP POST or PUT requests are slow
-  4.17 Non-functional connect timeouts on Windows
-  4.18 file:// URLs containing drive letters (Windows, NetWare)
-  4.19 Why does curl not return an error when the network cable is unplugged?
-  4.20 curl does not return error for HTTP non-200 responses
-
- 5. libcurl Issues
-  5.1 Is libcurl thread-safe?
-  5.2 How can I receive all data into a large memory chunk?
-  5.3 How do I fetch multiple files with libcurl?
-  5.4 Does libcurl do Winsock initialization on Win32 systems?
-  5.5 Do CURLOPT_WRITEDATA and CURLOPT_READDATA work on Win32 ?
-  5.6 What about Keep-Alive or persistent connections?
-  5.7 Link errors when building libcurl on Windows
-  5.8 libcurl.so.X: open failed: No such file or directory
-  5.9 How does libcurl resolve hostnames?
-  5.10 How do I prevent libcurl from writing the response to stdout?
-  5.11 How do I make libcurl not receive the whole HTTP response?
-  5.12 Can I make libcurl fake or hide my real IP address?
-  5.13 How do I stop an ongoing transfer?
-  5.14 Using C++ non-static functions for callbacks?
-  5.15 How do I get an FTP directory listing?
-  5.16 I want a different time-out
-  5.17 Can I write a server with libcurl?
-  5.18 Does libcurl use threads?
-
- 6. License Issues
-  6.1 I have a GPL program, can I use the libcurl library?
-  6.2 I have a closed-source program, can I use the libcurl library?
-  6.3 I have a BSD licensed program, can I use the libcurl library?
-  6.4 I have a program that uses LGPL libraries, can I use libcurl?
-  6.5 Can I modify curl/libcurl for my program and keep the changes secret?
-  6.6 Can you please change the curl/libcurl license to XXXX?
-  6.7 What are my obligations when using libcurl in my commercial apps?
-
- 7. PHP/CURL Issues
-  7.1 What is PHP/CURL?
-  7.2 Who wrote PHP/CURL?
-  7.3 Can I perform multiple requests using the same handle?
-  7.4 Does PHP/CURL have dependencies?
-
- 8. Development
-  8.1 Why does curl use C89?
-  8.2 Will curl be rewritten?
-
-==============================================================================
-
-1. Philosophy
-
-  1.1 What is curl?
-
-  curl is the name of the project. The name is a play on 'Client for URLs',
-  originally with URL spelled in uppercase to make it obvious it deals with
-  URLs. The fact that it can also be read as 'see URL' also helped; it works
-  as an abbreviation for "Client URL Request Library" or why not the recursive
-  version: "curl URL Request Library".
-
-  The curl project produces two products:
-
-  libcurl
-
-    A client-side URL transfer library, supporting DICT, FILE, FTP, FTPS,
-    GOPHER, GOPHERS, HTTP, HTTPS, IMAP, IMAPS, LDAP, LDAPS, MQTT, POP3, POP3S,
-    RTMP, RTMPS, RTSP, SCP, SFTP, SMB, SMBS, SMTP, SMTPS, TELNET, TFTP, WS
-    and WSS.
-
-    libcurl supports HTTPS certificates, HTTP POST, HTTP PUT, FTP uploading,
-    Kerberos, SPNEGO, HTTP form based upload, proxies, cookies, user+password
-    authentication, file transfer resume, http proxy tunneling and more.
-
-    libcurl is highly portable, it builds and works identically on numerous
-    platforms, including Solaris, NetBSD, FreeBSD, OpenBSD, Darwin, HP-UX,
-    IRIX, AIX, Tru64, Linux, UnixWare, HURD, Windows, Amiga, OS/2, macOS,
-    Ultrix, QNX, OpenVMS, RISC OS, Novell NetWare, DOS, Symbian, OSF, Android,
-    Minix, IBM TPF and more...
-
-    libcurl is free, thread-safe, IPv6 compatible, feature rich, well
-    supported and fast.
-
-  curl
-
-    A command line tool for getting or sending data using URL syntax.
-
-    Since curl uses libcurl, curl supports the same wide range of common
-    Internet protocols that libcurl does.
-
-  We pronounce curl with an initial k sound. It rhymes with words like girl
-  and earl. This is a short WAV file to help you:
-
-     https://media.merriam-webster.com/soundc11/c/curl0001.wav
-
-  There are numerous sub-projects and related projects that also use the word
-  curl in the project names in various combinations, but you should take
-  notice that this FAQ is directed at the command-line tool named curl (and
-  libcurl the library), and may therefore not be valid for other curl-related
-  projects. (There is however a small section about PHP/CURL in this FAQ.)
-
-  1.2 What is libcurl?
-
-  libcurl is a reliable and portable library for doing Internet data transfers
-  using one or more of its supported Internet protocols.
-
-  You can use libcurl freely in your application, be it open source,
-  commercial or closed-source.
-
-  libcurl is most probably the most portable, most powerful and most often
-  used C-based multi-platform file transfer library on this planet - be it
-  open source or commercial.
-
-  1.3 What is curl not?
-
-  curl is not a wget clone. That is a common misconception. Never, during
-  curl's development, have we intended curl to replace wget or compete on its
-  market. curl is targeted at single-shot file transfers.
-
-  curl is not a website mirroring program. If you want to use curl to mirror
-  something: fine, go ahead and write a script that wraps around curl or use
-  libcurl to make it reality.
-
-  curl is not an FTP site mirroring program. Sure, get and send FTP with curl
-  but if you want systematic and sequential behavior you should write a
-  script (or write a new program that interfaces libcurl) and do it.
-
-  curl is not a PHP tool, even though it works perfectly well when used from
-  or with PHP (when using the PHP/CURL module).
-
-  curl is not a program for a single operating system. curl exists, compiles,
-  builds and runs under a wide range of operating systems, including all
-  modern Unixes (and a bunch of older ones too), Windows, Amiga, OS/2, macOS,
-  QNX etc.
-
-  1.4 When will you make curl do XXXX ?
-
-  We love suggestions of what to change in order to make curl and libcurl
-  better. We do however believe in a few rules when it comes to the future of
-  curl:
-
-  curl -- the command line tool -- is to remain a non-graphical command line
-  tool. If you want GUIs or fancy scripting capabilities, you should look for
-  another tool that uses libcurl.
-
-  We do not add things to curl that other small and available tools already do
-  well at the side. curl's output can be piped into another program or
-  redirected to another file for the next program to interpret.
-
-  We focus on protocol related issues and improvements. If you want to do more
-  magic with the supported protocols than curl currently does, chances are
-  good we will agree. If you want to add more protocols, we may agree.
-
-  If you want someone else to do all the work while you wait for us to
-  implement it for you, that is not a friendly attitude. We spend a
-  considerable time already on maintaining and developing curl. In order to
-  get more out of us, you should consider trading in some of your time and
-  effort in return. Simply go to the GitHub repository which resides at
-  https://github.com/curl/curl, fork the project, and create pull requests
-  with your proposed changes.
-
-  If you write the code, chances are better that it will get into curl faster.
-
-  1.5 Who makes curl?
-
-  curl and libcurl are not made by any single individual. Daniel Stenberg is
-  project leader and main developer, but other persons' submissions are
-  important and crucial. Anyone can contribute and post their changes and
-  improvements and have them inserted in the main sources (of course on the
-  condition that developers agree that the fixes are good).
-
-  The full list of all contributors is found in the docs/THANKS file.
-
-  curl is developed by a community, with Daniel at the wheel.
-
-  1.6 What do you get for making curl?
-
-  Project curl is entirely free and open. We do this voluntarily, mostly in
-  our spare time. Companies may pay individual developers to work on curl.
-  This is not controlled by nor supervised in any way by the curl project.
-
-  We get help from companies. Haxx provides website, bandwidth, mailing lists
-  etc, GitHub hosts the primary git repository and other services like the bug
-  tracker at https://github.com/curl/curl. In addition, some companies have
-  sponsored certain parts of the development in the past and we hope some will
-  continue to do so in the future.
-
-  If you want to support our project, consider a donation or a banner-program,
-  or even better: help us with coding, documenting or testing etc.
-
-  See also: https://curl.se/sponsors.html
-
-  1.7 What about CURL from curl.com?
-
-  During the summer of 2001, curl.com was busy advertising their client-side
-  programming language for the web, named CURL.
-
-  We are in no way associated with curl.com or their CURL programming
-  language.
-
-  Our project name curl has been in effective use since 1998. We were not the
-  first computer related project to use the name "curl" and do not claim any
-  rights to the name.
-
-  We recognize that we will be living in parallel with curl.com and wish them
-  every success.
-
-  1.8 I have a problem, who do I mail?
-
-  Please do not mail any single individual unless you really need to. Keep
-  curl-related questions on a suitable mailing list. All available mailing
-  lists are listed in the MANUAL document and online at
-  https://curl.se/mail/
-
-  Keeping curl-related questions and discussions on mailing lists allows
-  others to join in and help, to share their ideas, to contribute their
-  suggestions and to spread their wisdom. Keeping discussions on public mailing
-  lists also allows for others to learn from this (both current and future
-  users thanks to the web based archives of the mailing lists), thus saving us
-  from having to repeat ourselves even more. Thanks for respecting this.
-
-  If you have found or simply suspect a security problem in curl or libcurl,
-  submit all the details at https://hackerone.com/curl. There we keep the
-  issue private while we investigate, confirm it, work on and validate a fix
-  and agree on a time schedule for publication etc. That way we produce a fix
-  in a timely manner before the flaw is announced to the world, reducing the
-  impact the problem risks having on existing users.
-
-  Security issues can also be taken to the curl security team by emailing
-  security at curl.se (closed list of receivers, mails are not disclosed).
-
-  1.9 Where do I buy commercial support for curl?
-
-  curl is fully open source. It means you can hire any skilled engineer to fix
-  your curl-related problems.
-
-  We list available alternatives on the curl website:
-  https://curl.se/support.html
-
-  1.10 How many are using curl?
-
-  It is impossible to tell.
-
-  We do not know how many users have knowingly installed and use curl.
-
-  We do not know how many users use curl without knowing that they are in fact
-  using it.
-
-  We do not know how many users downloaded or installed curl and then never
-  use it.
-
-  In 2020, we estimate that curl runs in roughly ten billion installations
-  worldwide.
-
-  1.11 Why do you not update ca-bundle.crt
-
-  In the curl project we have decided not to attempt to keep this file updated
-  (or even present) since deciding what to add to a ca cert bundle is an
-  undertaking we have not been ready to accept, and the one we can get from
-  Mozilla is perfectly fine so there is no need to duplicate that work.
-
-  Today, with many services performed over HTTPS, every operating system
-  should come with a default ca cert bundle that can be deemed somewhat
-  trustworthy and that collection (if reasonably updated) should be deemed to
-  be a lot better than a private curl version.
-
-  If you want the most recent collection of ca certs that Mozilla Firefox
-  uses, we recommend that you extract the collection yourself from Mozilla
-  Firefox (by running 'make ca-bundle'), or by using our online service setup
-  for this purpose: https://curl.se/docs/caextract.html
-
-  1.12 I have a problem, who can I chat with?
-
-  There is a bunch of friendly people hanging out in the #curl channel on the
-  IRC network libera.chat. If you are polite and nice, chances are good that
-  you can get -- or provide -- help instantly.
-
-  1.13 curl's ECCN number?
-
-  The US government restricts exports of software that contains or uses
-  cryptography. When doing so, the Export Control Classification Number (ECCN)
-  is used to identify the level of export control etc.
-
-  Apache Software Foundation gives a good explanation of ECCNs at
-  https://www.apache.org/dev/crypto.html
-
-  We believe curl's number might be ECCN 5D002; another possibility is
-  5D992. It seems necessary to write to them (the authority that administers
-  ECCN numbers) and ask them to confirm.
-
-  Comprehensible explanations of the meaning of such numbers and how to
-  obtain them, respectively, are here:
-
-  https://www.bis.doc.gov/licensing/exportingbasics.htm
-  https://www.bis.doc.gov/licensing/do_i_needaneccn.html
-
-  An incomprehensible description of the two numbers above is here
-  https://www.bis.doc.gov/index.php/documents/new-encryption/1653-ccl5-pt2-3
-
-  1.14 How do I submit my patch?
-
-  We strongly encourage you to submit changes and improvements directly as
-  "pull requests" on GitHub: https://github.com/curl/curl/pulls
-
-  If you for any reason cannot or will not deal with GitHub, send your patch to
-  the curl-library mailing list. There are many subscribers there, with lots
-  of people who can review patches, comment on them and "receive" them
-  properly.
-
-  Lots more details are found in the CONTRIBUTE.md and INTERNALS.md
-  documents.
-
-  1.15 How do I port libcurl to my OS?
-
-  Here's a rough step-by-step:
-
-  1. copy a suitable lib/config-*.h file as a start to lib/config-[youros].h
-
-  2. edit lib/config-[youros].h to match your OS and setup
-
-  3. edit lib/curl_setup.h to include config-[youros].h when your OS is
-     detected by the preprocessor, in the same style as the others that
-     already exist (a sketch follows below)
-
-  4. compile lib/*.c and make them into a library
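-
-  As an illustration of step 3, here is a minimal sketch of the kind of
-  preprocessor chain used to pick a config header. The __MYOS__ macro and the
-  config-myos.h filename are made-up placeholders; check the real
-  lib/curl_setup.h for the exact existing pattern:
-
-        /* sketch only - not the actual lib/curl_setup.h contents */
-        #ifdef HAVE_CONFIG_H
-        #include "curl_config.h"   /* generated by configure or CMake */
-        #else
-        #ifdef __MYOS__            /* hypothetical macro your compiler sets */
-        #include "config-myos.h"   /* the hand-maintained config from step 2 */
-        #endif
-        #endif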
-
-
-2. Install Related Problems
-
-  2.1 configure fails when using static libraries
-
-  You may find that configure fails to properly detect the entire dependency
-  chain of libraries when you provide static versions of the libraries that
-  configure checks for.
-
-  The reason why static libraries are much harder to deal with is that for
-  them we do not get any help; the script itself must know or check which
-  additional libraries are needed (with shared libraries, that dependency
-  "chain" is handled automatically). This is an error-prone process and one
-  that also tends to vary over time depending on the release versions of the
-  involved components and may also differ between operating systems.
-
-  For that reason, configure makes few attempts to actually figure this out
-  and you are instead encouraged to set LIBS and LDFLAGS accordingly when you
-  invoke configure, to point out the needed libraries and set the necessary
-  flags yourself.
-
-  2.2 Does curl work/build with other SSL libraries?
-
-  curl has been written to use a generic SSL function layer internally, and
-  that SSL functionality can then be provided by one out of many different SSL
-  backends.
-
-  curl can be built to use one of the following SSL alternatives: OpenSSL,
-  LibreSSL, BoringSSL, AWS-LC, GnuTLS, wolfSSL, mbedTLS, Schannel (native
-  Windows) or Rustls. They all have their pros and cons, and we try to
-  maintain a comparison of them here: https://curl.se/docs/ssl-compared.html
-
-  2.3 How do I upgrade curl.exe in Windows?
-
-  The curl tool that is shipped as an integrated component of Windows 10 and
-  Windows 11 is managed by Microsoft. If you were to delete the file or
-  replace it with a newer version downloaded from https://curl.se/windows,
-  then Windows Update will cease to work on your system.
-
-  There is no way to independently force an upgrade of the curl.exe that is
-  part of Windows other than through the regular Windows update process. There
-  is also nothing the curl project itself can do about this, since this is
-  managed and controlled entirely by Microsoft as owners of the operating
-  system.
-
-  You can always download and install the latest version of curl for Windows
-  from https://curl.se/windows into a separate location.
-
-  2.4 Does curl support SOCKS (RFC 1928) ?
-
-  Yes, SOCKS 4 and 5 are supported.
-
-3. Usage Problems
-
-  3.1 curl: (1) SSL is disabled, https: not supported
-
-  If you get this output when trying to get anything from an HTTPS server, it
-  means that the instance of curl/libcurl that you are using was built without
-  support for this protocol.
-
-  This could have happened if the configure script that was run at build time
-  could not find all libs and include files curl requires for SSL to work. If
-  the configure script fails to find them, curl is simply built without SSL
-  support.
-
-  To get HTTPS support into a curl that was previously built but that reports
-  that HTTPS is not supported, you should dig through the document and logs
-  and check out why the configure script does not find the SSL libs and/or
-  include files.
-
-  Also, check out the other paragraph in this FAQ labeled "configure does not
-  find OpenSSL even when it is installed".
-
-  3.2 How do I tell curl to resume a transfer?
-
-  curl supports resumed transfers both ways on both FTP and HTTP.
-  Try the -C option.
-
-  3.3 Why does my posting using -F not work?
-
-  You cannot arbitrarily use -F or -d; the choice between them depends on the
-  HTTP operation you need curl to do and what the web server that will receive
-  your post expects.
-
-  If the form you are trying to submit uses the type 'multipart/form-data',
-  then and only then you must use the -F type. In all the most common cases,
-  you should use -d which then causes a posting with the type
-  'application/x-www-form-urlencoded'.
-
-  This is described in some detail in the MANUAL and TheArtOfHttpScripting
-  documents, and if you do not understand it the first time, read it again
-  before you post questions about this to the mailing list. Also, try reading
-  through the mailing list archives for old postings and questions regarding
-  this.
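-
-  For libcurl users, the same distinction maps roughly to CURLOPT_POSTFIELDS
-  (an urlencoded post, like -d) versus the MIME API (a multipart formpost,
-  like -F). A minimal sketch, where the URL, field name and filename are
-  placeholders and error checking is left out:
-
-        CURL *curl = curl_easy_init();
-        if(curl) {
-          curl_mime *mime;
-          curl_mimepart *part;
-
-          /* -d style: application/x-www-form-urlencoded */
-          curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/submit");
-          curl_easy_setopt(curl, CURLOPT_POSTFIELDS, "name=daniel&tool=curl");
-          curl_easy_perform(curl);
-
-          /* -F style: multipart/form-data */
-          mime = curl_mime_init(curl);
-          part = curl_mime_addpart(mime);
-          curl_mime_name(part, "upload");
-          curl_mime_filedata(part, "localfile.txt");
-          curl_easy_setopt(curl, CURLOPT_MIMEPOST, mime);
-          curl_easy_perform(curl);
-
-          curl_mime_free(mime);
-          curl_easy_cleanup(curl);
-        }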
-
-  3.4 How do I tell curl to run custom FTP commands?
-
-  You can tell curl to perform optional commands both before and/or after a
-  file transfer. Study the -Q/--quote option.
-
-  Since curl is used for file transfers, you do not normally use curl to
-  perform FTP commands without transferring anything. Therefore you must
-  always specify a URL to transfer to/from even when doing custom FTP
-  commands, or use -I which implies the "no body" option sent to libcurl.
-
-  3.5 How can I disable the Accept: */* header?
-
-  You can change all internally generated headers by adding a replacement with
-  the -H/--header option. By adding a header with empty contents you safely
-  disable that one. Use -H "Accept:" to disable that specific header.
-
-  3.6 Does curl support ASP, XML, XHTML or HTML version Y?
-
-  To curl, all contents are alike. It does not matter how the page was
-  generated. It may be ASP, PHP, Perl, shell-script, SSI or plain HTML
-  files. There is no difference to curl and it does not even know what kind
-  of language generated the page.
-
-  See also item 3.14 regarding JavaScript.
-
-  3.7 Can I use curl to delete/rename a file through FTP?
-
-  Yes. You specify custom FTP commands with -Q/--quote.
-
-  One example would be to delete a file after you have downloaded it:
-
-     curl -O ftp://example.com/coolfile -Q '-DELE coolfile'
-
-  or rename a file after upload:
-
-     curl -T infile ftp://example.com/dir/ -Q "-RNFR infile" -Q "-RNTO newname"
-
-  3.8 How do I tell curl to follow HTTP redirects?
-
-  curl does not follow so-called redirects by default. The Location: header
-  that informs the client about this is only interpreted if you are using the
-  -L/--location option. As in:
-
-     curl -L https://example.com
-
-  Not all redirects are HTTP ones, see 4.14
-
-  3.9 How do I use curl in my favorite programming language?
-
-  Many programming languages have interfaces/bindings that allow you to use
-  curl without having to use the command line tool. If you are fluent in such
-  a language, you may prefer to use one of these interfaces instead.
-
-  Find out more about which languages support curl directly, and how to
-  install and use them, in the libcurl section of the curl website:
-  https://curl.se/libcurl/
-
-  All the various bindings to libcurl are made by other projects and people,
-  outside of the curl project. The curl project itself only produces libcurl
-  with its plain C API. If you do not find anywhere else to ask you can ask
-  about bindings on the curl-library list too, but be prepared that people on
-  that list may not know anything about bindings.
-
-  In December 2021, there were interfaces available for the following
-  languages: Ada95, Basic, C, C++, Ch, Cocoa, D, Delphi, Dylan, Eiffel,
-  Euphoria, Falcon, Ferite, Gambas, glib/GTK+, Go, Guile, Harbour, Haskell,
-  Java, Julia, Lisp, Lua, Mono, .NET, node.js, Object-Pascal, OCaml, Pascal,
-  Perl, PHP, PostgreSQL, Python, R, Rexx, Ring, RPG, Ruby, Rust, Scheme,
-  Scilab, S-Lang, Smalltalk, SP-Forth, SPL, Tcl, Visual Basic, Visual FoxPro,
-  Q, wxwidgets, XBLite and Xojo. By the time you read this, additional ones
-  may have appeared.
-
-  3.10 What about SOAP, WebDAV, XML-RPC or similar protocols over HTTP?
-
-  curl adheres to the HTTP spec, which basically means you can play with *any*
-  protocol that is built on top of HTTP. Protocols such as SOAP, WebDAV and
-  XML-RPC are all such ones. You can use -X to set custom requests and -H to
-  set custom headers (or replace internally generated ones).
-
-  Using libcurl is of course just as good and you would just use the proper
-  library options to do the same.
-
-  3.11 How do I POST with a different Content-Type?
-
-  You can always replace the internally generated headers with -H/--header.
-  To make a simple HTTP POST with text/xml as content-type, do something like:
-
-        curl -d "datatopost" -H "Content-Type: text/xml" [URL]
-
-  3.12 Why do FTP-specific features over HTTP proxy fail?
-
-  Because when you use an HTTP proxy, the protocol spoken on the network will
-  be HTTP, even if you specify an FTP URL. This effectively means that you
-  normally cannot use FTP-specific features such as FTP upload and FTP quote
-  etc.
-
-  There is one exception to this rule, and that is if you can "tunnel through"
-  the given HTTP proxy. Proxy tunneling is enabled with a special option (-p)
-  and is generally not available as proxy admins usually disable tunneling to
-  ports other than 443 (which is used for HTTPS access through proxies).
-
-  3.13 Why do my single/double quotes fail?
-
-  To specify a command line option that includes spaces, you might need to
-  put the entire option within quotes. Like in:
-
-   curl -d " with spaces " example.com
-
-  or perhaps
-
-   curl -d ' with spaces ' example.com
-
-  Exactly what kind of quotes and how to do this is entirely up to the shell
-  or command line interpreter that you are using. For most Unix shells, you
-  can more or less pick either single (') or double (") quotes. For
-  Windows/DOS command prompts you must use double (") quotes, and if the
-  option string contains inner double quotes you can escape them with a
-  backslash.
-
-  For Windows PowerShell the arguments are not always passed on as expected
-  because curl is not a PowerShell script. You may or may not be able to use
-  single quotes. Escaping inner double quotes seems to require a
-  backslash-backtick escape sequence, with the outer quotes being double
-  quotes.
-
-  Please study the documentation for your particular environment. Examples in
-  the curl docs will use a mix of both of these as shown above. You must
-  adjust them to work in your environment.
-
-  Remember that curl works and runs on more operating systems than most single
-  individuals have ever tried.
-
-  3.14 Does curl support JavaScript or PAC (automated proxy config)?
-
-  Many webpages do magic stuff using embedded JavaScript. curl and libcurl
-  have no built-in support for that, so it is treated just like any other
-  content.
-
-  .pac files are a Netscape invention and are sometimes used by organizations
-  to allow them to differentiate which proxies to use. The .pac content is
-  just a JavaScript program that gets invoked by the browser and that returns
-  the name of the proxy to connect to. Since curl does not support JavaScript,
-  it cannot support .pac proxy configuration either.
-
-  Some workarounds usually suggested to overcome this JavaScript dependency:
-
-  Depending on the JavaScript complexity, write up a script that translates it
-  to another language and execute that.
-
-  Read the JavaScript code and rewrite the same logic in another language.
-
-  Implement a JavaScript interpreter, people have successfully used the
-  Mozilla JavaScript engine in the past.
-
-  Ask your admins to stop this, in favor of a static proxy setup or similar.
-
-  3.15 Can I do recursive fetches with curl?
-
-  No. curl itself has no code that performs recursive operations, such as
-  those performed by wget and similar tools.
-
-  There exist wrapper scripts with that functionality (for example the
-  curlmirror perl script), and you can write programs based on libcurl to do
-  it, but the command line tool curl itself cannot.
-
-  3.16 What certificates do I need when I use SSL?
-
-  There are three different kinds of "certificates" to keep track of when we
-  talk about using SSL-based protocols (HTTPS or FTPS) using curl or libcurl.
-
-  CLIENT CERTIFICATE
-
-  The server you communicate with may require that you can provide this in
-  order to prove that you actually are who you claim to be. If the server
-  does not require this, you do not need a client certificate.
-
-  A client certificate is always used together with a private key, and the
-  private key has a passphrase that protects it.
-
-  SERVER CERTIFICATE
-
-  The server you communicate with has a server certificate. You can and should
-  verify this certificate to make sure that you are truly talking to the real
-  server and not a server impersonating it.
-
-  CERTIFICATE AUTHORITY CERTIFICATE ("CA cert")
-
-  You often have several CA certs in a CA cert bundle that can be used to
-  verify a server certificate that was signed by one of the authorities in the
-  bundle. curl does not come with a CA cert bundle but most curl installs
-  provide one. You can also override the default.
-
-  The server certificate verification process is done by using a Certificate
-  Authority certificate ("CA cert") that was used to sign the server
-  certificate. Server certificate verification is enabled by default in curl
-  and libcurl and is often the reason for problems as explained in FAQ entry
-  4.12 and the SSLCERTS document
-  (https://curl.se/docs/sslcerts.html). Server certificates that are
-  "self-signed" or otherwise signed by a CA that you do not have a CA cert
-  for, cannot be verified. If the verification during a connect fails, you are
-  refused access. You then need to explicitly disable the verification to
-  connect to the server.
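-
-  In libcurl terms, assuming 'curl' is an already created easy handle, the CA
-  bundle and the verification switches look roughly like this (the bundle path
-  is only an example, and disabling verification makes the transfer insecure):
-
-        /* point libcurl at a CA cert bundle of your choosing */
-        curl_easy_setopt(curl, CURLOPT_CAINFO, "/path/to/ca-bundle.crt");
-
-        /* or, as a last resort, disable the verification (INSECURE) */
-        curl_easy_setopt(curl, CURLOPT_SSL_VERIFYPEER, 0L);
-        curl_easy_setopt(curl, CURLOPT_SSL_VERIFYHOST, 0L);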
-
-  3.17 How do I list the root directory of an FTP server?
-
-  There are two ways. The way defined in the RFC is to use an encoded slash
-  in the first path part. List the "/tmp" directory like this:
-
-     curl ftp://ftp.example.com/%2ftmp/
-
-  or the not-quite-kosher-but-more-readable way, by simply starting the path
-  section of the URL with a slash:
-
-     curl ftp://ftp.example.com//tmp/
-
-  3.18 Can I use curl to send a POST/PUT and not wait for a response?
-
-  No.
-
-  You can easily write your own program using libcurl to do such stunts.
-
-  3.19 How do I get HTTP from a host using a specific IP address?
-
-  For example, you may be trying out a website installation that is not yet in
-  the DNS. Or you have a site using multiple IP addresses for a given host
-  name and you want to address a specific one out of the set.
-
-  Set a custom Host: header that identifies the server name you want to reach
-  but use the target IP address in the URL:
-
-    curl --header "Host: www.example.com" https://somewhere.example/
-
-  You can also opt to add faked hostname entries to curl with the --resolve
-  option. That has the added benefit that things like redirects will also work
-  properly. The above operation would instead be done as:
-
-    curl --resolve www.example.com:80:127.0.0.1 https://www.example.com/
-
-  3.20 How to SFTP from my user's home directory?
-
-  Contrary to how FTP works, SFTP and SCP URLs specify the exact directory to
-  work with. It means that if you do not specify that you want the user's home
-  directory, you get the actual root directory.
-
-  To specify a file in your user's home directory, you need to use the correct
-  URL syntax which for SFTP might look similar to:
-
-    curl -O -u user:password sftp://example.com/~/file.txt
-
-  and for SCP it is just a different protocol prefix:
-
-    curl -O -u user:password scp://example.com/~/file.txt
-
-  3.21 Protocol xxx not supported or disabled in libcurl
-
-  When you pass a URL to curl, it may respond that the particular protocol is
-  not supported or disabled. The error message is phrased that way because
-  curl makes no distinction internally between a protocol that is not
-  supported (i.e. never got any code added that knows how to speak it) and one
-  that was explicitly disabled. curl can be built to only support a given set
-  of protocols, and the rest would then be disabled or not supported.
-
-  Note that this error will also occur if you pass a wrongly spelled protocol
-  part as in "htpts://example.com" or as in the less evident case if you
-  prefix the protocol part with a space as in " https://example.com/".
-
-  3.22 curl -X gives me HTTP problems
-
-  In normal circumstances, -X should hardly ever be used.
-
-  By default you use curl without explicitly saying which request method to
-  use when the URL identifies an HTTP transfer. If you just pass in a URL like
-  "curl https://example.com" it will use GET. If you use -d or -F curl will use
-  POST, -I will cause a HEAD and -T will make it a PUT.
-
-  If for whatever reason you are not happy with these default choices that curl
-  does for you, you can override those request methods by specifying -X
-  [WHATEVER]. This way you can for example send a DELETE by doing "curl -X
-  DELETE [URL]".
-
-  It is thus pointless to do "curl -XGET [URL]" as GET would be used anyway.
-  In the same vein it is pointless to do "curl -X POST -d data [URL]". You can
-  make a fun and somewhat rare request that sends a request-body in a GET
-  request with something like "curl -X GET -d data [URL]"
-
-  Note that -X does not actually change curl's behavior as it only modifies the
-  actual string sent in the request, but that may of course trigger a
-  different set of events.
-
-  Accordingly, by using -XPOST on a command line that for example would follow
-  a 303 redirect, you will effectively prevent curl from behaving
-  correctly. Be aware.
-
-
-4. Running Problems
-
-  4.2 Why do I get problems when I use & or % in the URL?
-
-  In general Unix shells, the & symbol is treated specially and when used, it
-  runs the specified command in the background. To safely send the & as a part
-  of a URL, you should quote the entire URL by using single (') or double (")
-  quotes around it. Similar problems can also occur on some shells with other
-  characters, including ?*!$~(){}<>\|;`. When in doubt, quote the URL.
-
-  An example that would invoke a remote CGI that uses &-symbols could be:
-
-     curl 'https://www.example.com/cgi-bin/query?text=yes&q=curl'
-
-  In Windows, the standard DOS shell treats the percent sign specially and you
-  need to use TWO percent signs for each single one you want to use in the
-  URL.
-
-  If you want a literal percent sign to be part of the data you pass in a POST
-  using -d/--data you must encode it as '%25' (which then also needs the
-  percent sign doubled on Windows machines).
-
-  4.3 How can I use {, }, [ or ] to specify multiple URLs?
-
-  Because those letters have a special meaning to the shell, to be used in
-  a URL specified to curl you must quote them.
-
-  An example that downloads two URLs (sequentially) would be:
-
-    curl '{curl,www}.haxx.se'
-
-  To be able to use those characters as actual parts of the URL (without using
-  them for the curl URL "globbing" system), use the -g/--globoff option:
-
-    curl -g 'www.example.com/weirdname[].html'
-
-  4.4 Why do I get downloaded data even though the webpage does not exist?
-
-  curl asks remote servers for the page you specify. If the page does not exist
-  at the server, the HTTP protocol defines how the server should respond and
-  that means that headers and a "page" will be returned. That is simply how
-  HTTP works.
-
-  By using the --fail option you can tell curl explicitly to not get any data
-  if the HTTP return code does not say success.
-
-  4.5 Why do I get return code XXX from an HTTP server?
-
-  RFC 2616 clearly explains the return codes. This is a short transcript. Go
-  read the RFC for exact details:
-
-    4.5.1 "400 Bad Request"
-
-    The request could not be understood by the server due to malformed
-    syntax. The client SHOULD NOT repeat the request without modifications.
-
-    4.5.2 "401 Unauthorized"
-
-    The request requires user authentication.
-
-    4.5.3 "403 Forbidden"
-
-    The server understood the request, but is refusing to fulfill it.
-    Authorization will not help and the request SHOULD NOT be repeated.
-
-    4.5.4 "404 Not Found"
-
-    The server has not found anything matching the Request-URI. No indication
-    is given as to whether the condition is temporary or permanent.
-
-    4.5.5 "405 Method Not Allowed"
-
-    The method specified in the Request-Line is not allowed for the resource
-    identified by the Request-URI. The response MUST include an Allow header
-    containing a list of valid methods for the requested resource.
-
-    4.5.6 "301 Moved Permanently"
-
-    If you get this return code and an HTML output similar to this:
-
-       <H1>Moved Permanently</H1> The document has moved <A
-       HREF="https://same_url_now_with_a_trailing_slash.example/">here</A>.
-
-    it might be because you requested a directory URL but without the trailing
-    slash. Try the same operation again _with_ the trailing slash, or use the
-    -L/--location option to follow the redirection.
-
-  4.6 Can you tell me what error code 142 means?
-
-  All curl error codes are described at the end of the man page, in the
-  section called "EXIT CODES".
-
-  Error codes that are larger than the highest documented error code mean
-  that curl has exited due to a crash. This is a serious error, and we
-  appreciate a detailed bug report from you that describes how we can
-  reproduce it.
-
-  4.7 How do I keep usernames and passwords secret in curl command lines?
-
-  This problem has two sides:
-
-  The first part is to avoid having clear-text passwords in the command line
-  so that they do not appear in 'ps' outputs and similar. That is easily
-  avoided by using the "-K" option to tell curl to read parameters from a file
-  or stdin to which you can pass the secret info. curl itself will also
-  attempt to "hide" the given password by blanking out the option - this
-  does not work on all platforms.
-
-  To keep the passwords in your account secret from the rest of the world is
-  not a task that curl addresses. You could of course encrypt them somehow to
-  at least hide them from being read by human eyes, but that is not what
-  anyone would call security.
-
-  Also note that regular HTTP (using Basic authentication) and FTP passwords
-  are sent as cleartext across the network. All it takes for anyone to fetch
-  them is to listen on the network. Eavesdropping is easy. Use more secure
-  authentication methods (like Digest, Negotiate or even NTLM) or consider the
-  SSL-based alternatives HTTPS and FTPS.
-
-  4.8 I found a bug
-
-  It is not a bug if the behavior is documented. Read the docs first.
-  Especially check out the KNOWN_BUGS file, it may be a documented bug.
-
-  If it is a problem with a binary you have downloaded or a package for your
-  particular platform, try contacting the person who built the package/archive
-  you have.
-
-  If there is a bug, read the BUGS document first. Then report it as described
-  in there.
-
-  4.9 curl cannot authenticate to a server that requires NTLM?
-
-  NTLM support requires OpenSSL, GnuTLS, mbedTLS or Microsoft Windows
-  libraries at build-time to provide this functionality.
-
-  4.10 My HTTP request using HEAD, PUT or DELETE does not work
-
-  Many web servers allow or demand that the administrator configures the
-  server properly for these request methods to work.
-
-  Some servers seem to support HEAD only on certain kinds of URLs.
-
-  To fully grasp this, check the documentation for the particular server
-  software you are trying to interact with. There is nothing curl can do
-  about this.
-
-  4.11 Why do my HTTP range requests return the full document?
-
-  Because the range may not be supported by the server, or the server may
-  choose to ignore it and return the full document anyway.
-
-  4.12 Why do I get "certificate verify failed" ?
-
-  When you invoke curl and get error 60 back, it means that curl could not
-  verify that the server's certificate was good. curl verifies the certificate
-  using the CA cert bundle and by checking for which names the certificate has
-  been granted.
-
-  To completely disable the certificate verification, use -k. This does
-  however enable man-in-the-middle attacks and makes the transfer INSECURE.
-  We strongly advise against doing this for more than experiments.
-
-  If you get this failure with a CA cert bundle installed and used, the
-  server's certificate might not be signed by one of the CA's in your CA
-  store. It might for example be self-signed. You then correct this problem by
-  obtaining a valid CA cert for the server. Or again, decrease the security by
-  disabling this check.
-
-  At times, you find that the verification works in your favorite browser but
-  fails in curl. When this happens, the reason is usually that the server
-  sends an incomplete cert chain. The server is mandated to send all
-  "intermediate certificates" but does not. This typically works with browsers
-  anyway since they A) cache such certs and B) support AIA which downloads
-  such missing certificates on demand. This is a server misconfiguration. A
-  good way to figure out if this is the case is to use the SSL Labs server
-  test and check the certificate chain: https://www.ssllabs.com/ssltest/
-
-  Details are also in the SSLCERTS.md document, found online here:
-  https://curl.se/docs/sslcerts.html
-
-  4.13 Why is curl -R on Windows one hour off?
-
-  Since curl 7.53.0 this issue should be fixed as long as curl was built with
-  any modern compiler that allows for a 64-bit curl_off_t type. For older
-  compilers or prior curl versions it may set a time that appears one hour off.
-  This happens due to a flaw in how Windows stores and uses file modification
-  times and it is not easily worked around. For more details read this:
-  https://www.codeproject.com/Articles/1144/Beating-the-Daylight-Savings-Time-bug-and-getting
-
-  4.14 Redirects work in browser but not with curl
-
-  curl supports HTTP redirects well (see item 3.8). Browsers generally support
-  at least two other ways to perform redirects that curl does not:
-
-  Meta tags. You can write an HTML tag that will cause the browser to redirect
-  to another given URL after a certain time.
-
-  JavaScript. You can write a JavaScript program embedded in an HTML page that
-  redirects the browser to another given URL.
-
-  There is no way to make curl follow these redirects. You must either
-  manually figure out what the page is set to do, or write a script that parses
-  the results and fetches the new URL.
-
-  4.15 FTPS does not work
-
-  curl supports FTPS (sometimes known as FTP-SSL) both implicit and explicit
-  mode.
-
-  When a URL is used that starts with FTPS://, curl assumes implicit SSL on
-  the control connection and will therefore immediately connect and try to
-  speak SSL. FTPS:// connections default to port 990.
-
-  To use explicit FTPS, you use an FTP:// URL and the --ssl-reqd option (or one
-  of its related flavors). This is the most common method, and the one
-  mandated by RFC 4217. This kind of connection will then of course use the
-  standard FTP port 21 by default.
-
-  4.16 My HTTP POST or PUT requests are slow
-
-  libcurl makes all POST and PUT requests (except for requests with a small
-  request body) use the "Expect: 100-continue" header. This header allows the
-  server to deny the operation early so that libcurl can bail out before having
-  to send any data. This is useful in authentication cases and others.
-
-  However, many servers do not implement the Expect: stuff properly and if the
-  server does not respond (positively) within 1 second libcurl will continue
-  and send off the data anyway.
-
-  You can disable libcurl's use of the Expect: header the same way you disable
-  any header, using -H / CURLOPT_HTTPHEADER, or by forcing it to use HTTP 1.0.
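-
-  In libcurl, assuming 'curl' is an already created easy handle, a minimal
-  sketch of disabling the header with CURLOPT_HTTPHEADER (error checking
-  omitted):
-
-        struct curl_slist *headers = NULL;
-        headers = curl_slist_append(headers, "Expect:");
-        curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);
-        /* ... set up and perform the POST/PUT here ... */
-        curl_slist_free_all(headers);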
-
-  4.17 Non-functional connect timeouts on Windows
-
-  In most Windows setups, having a timeout longer than 21 seconds makes no
-  difference, as it only sends 3 TCP SYN packets and no more. The second
-  packet is sent three seconds after the first and the third six seconds after
-  the second. No more than three packets are sent, no matter how long the
-  timeout is set.
-
-  See option TcpMaxConnectRetransmissions on this page:
-  https://web.archive.org/web/20160819015101/support.microsoft.com/en-us/kb/175523
-
-  Also, even on non-Windows systems there may be a firewall or anti-virus
-  software or similar running that accepts the connection but does not
-  actually do anything else. This makes (lib)curl consider the connection
-  established and thus the connect timeout does not trigger.
-
-  4.18 file:// URLs containing drive letters (Windows, NetWare)
-
-  When using curl to try to download a local file, one might use a URL
-  in this format:
-
-  file://D:/blah.txt
-
-  You may then find that even if D:\blah.txt does exist, curl returns a 'file
-  not found' error.
-
-  According to RFC 1738 (https://datatracker.ietf.org/doc/html/rfc1738),
-  file:// URLs must contain a host component, but it is ignored by
-  most implementations. In the above example, 'D:' is treated as the
-  host component, and is taken away. Thus, curl tries to open '/blah.txt'.
-  If your system is installed to drive C:, that will resolve to 'C:\blah.txt',
-  and if that does not exist you will get the not found error.
-
-  To fix this problem, use file:// URLs with *three* leading slashes:
-
-  file:///D:/blah.txt
-
-  Alternatively, if it makes more sense, specify 'localhost' as the host
-  component:
-
-  file://localhost/D:/blah.txt
-
-  In either case, curl should now be looking for the correct file.
-
-  4.19 Why does curl not return an error when the network cable is unplugged?
-
-  Unplugging a cable is not an error situation. The TCP/IP protocol stack
-  was designed to be fault tolerant, so even though there may be a physical
-  break somewhere the connection should not be affected, just possibly
-  delayed. Eventually, the physical break will be fixed or the data will be
-  re-routed around the physical problem through another path.
-
-  In such cases, the TCP/IP stack is responsible for detecting when the
-  network connection is irrevocably lost. Since with some protocols it is
-  perfectly legal for the client to wait indefinitely for data, the stack may
-  never report a problem, and even when it does, it can take up to 20 minutes
-  for it to detect an issue. The curl option --keepalive-time enables
-  keep-alive support in the TCP/IP stack which makes it periodically probe the
-  connection to make sure it is still available to send data. That should
-  reliably detect any TCP/IP network failure.
-
-  TCP keep alive will not detect the network going down before the TCP/IP
-  connection is established (e.g. during a DNS lookup) or using protocols that
-  do not use TCP. To handle those situations, curl offers a number of timeouts
-  on its own. --speed-limit/--speed-time will abort if the data transfer rate
-  falls too low, and --connect-timeout and --max-time can be used to put an
-  overall timeout on the connection phase or the entire transfer.
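-
-  For libcurl users, the rough equivalents of those command line options are,
-  as a sketch (the values are only examples and 'curl' is an already created
-  easy handle):
-
-        /* --connect-timeout 30 */
-        curl_easy_setopt(curl, CURLOPT_CONNECTTIMEOUT, 30L);
-        /* --max-time 300 */
-        curl_easy_setopt(curl, CURLOPT_TIMEOUT, 300L);
-        /* --speed-limit 1000 --speed-time 60: give up if the transfer runs
-           slower than 1000 bytes/second for 60 seconds */
-        curl_easy_setopt(curl, CURLOPT_LOW_SPEED_LIMIT, 1000L);
-        curl_easy_setopt(curl, CURLOPT_LOW_SPEED_TIME, 60L);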
-
-  A libcurl-using application running in a known physical environment (e.g.
-  an embedded device with only a single network connection) may want to act
-  immediately if its lone network connection goes down. That can be achieved
-  by having the application monitor the network connection on its own using an
-  OS-specific mechanism, then signaling libcurl to abort (see also item 5.13).
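-
-  One way to do that signaling, sketched with a progress callback and an
-  application-owned flag; the flag and handle names are made up, and returning
-  a non-zero value from the callback aborts the transfer:
-
-        static int
-        xferinfo_cb(void *clientp, curl_off_t dltotal, curl_off_t dlnow,
-                    curl_off_t ultotal, curl_off_t ulnow)
-        {
-          /* an int the application sets when its network monitor fires */
-          int *network_is_down = (int *)clientp;
-          return *network_is_down; /* non-zero aborts the transfer */
-        }
-
-        /* in the setup code */
-        curl_easy_setopt(curl, CURLOPT_XFERINFOFUNCTION, xferinfo_cb);
-        curl_easy_setopt(curl, CURLOPT_XFERINFODATA, &network_down_flag);
-        curl_easy_setopt(curl, CURLOPT_NOPROGRESS, 0L);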
-
-  4.20 curl does not return error for HTTP non-200 responses
-
-  Correct. Unless you use -f (--fail).
-
-  When doing HTTP transfers, curl will perform exactly what you are asking it
-  to do and if successful it will not return an error. You can use curl to
-  test your web server's "file not found" page (that gets 404 back), you can
-  use it to check your authentication protected webpages (that gets a 401
-  back) and so on.
-
-  The specific HTTP response code does not constitute a problem or error for
-  curl. It simply sends and delivers HTTP as you asked and if that worked,
-  everything is fine and dandy. The response code generally provides
-  higher-level error information that curl does not care about. The error was
-  not in the HTTP transfer.
-
-  If you want your command line to treat error codes in the 400 and up range
-  as errors and thus return a non-zero value and possibly show an error
-  message, curl has a dedicated option for that: -f (CURLOPT_FAILONERROR in
-  libcurl speak).
-
-  You can also use the -w option and the variable %{response_code} to extract
-  the exact response code that was returned in the response.
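-
-  The libcurl counterparts of -f and -w '%{response_code}' look roughly like
-  this, assuming 'curl' is an already created easy handle:
-
-        long response_code = 0;
-        CURLcode res;
-
-        curl_easy_setopt(curl, CURLOPT_FAILONERROR, 1L);  /* like -f */
-        res = curl_easy_perform(curl);
-
-        /* like -w '%{response_code}' */
-        curl_easy_getinfo(curl, CURLINFO_RESPONSE_CODE, &response_code);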
-
-5. libcurl Issues
-
-  5.1 Is libcurl thread-safe?
-
-  Yes.
-
-  We have written the libcurl code specifically adjusted for multi-threaded
-  programs. libcurl will use thread-safe functions instead of non-safe ones if
-  your system has such. Note that you must never share the same handle in
-  multiple threads.
-
-  There may be some exceptions to thread safety depending on how libcurl was
-  built. Please review the guidelines for thread safety to learn more:
-  https://curl.se/libcurl/c/threadsafe.html
-
-  5.2 How can I receive all data into a large memory chunk?
-
-  [ See also the examples/getinmemory.c source ]
-
-  You are in full control of the callback function that gets called every time
-  there is data received from the remote server. You can make that callback do
-  whatever you want. You do not have to write the received data to a file.
-
-  One solution to this problem could be to have a pointer to a struct that you
-  pass to the callback function. You set the pointer using the
-  CURLOPT_WRITEDATA option. Then that pointer will be passed to the callback
-  instead of a FILE * to a file:
-
-        /* imaginary struct */
-        struct MemoryStruct {
-          char *memory;
-          size_t size;
-        };
-
-        /* imaginary callback function */
-        size_t
-        WriteMemoryCallback(void *ptr, size_t size, size_t nmemb, void *data)
-        {
-          size_t realsize = size * nmemb;
-          struct MemoryStruct *mem = (struct MemoryStruct *)data;
-          char *bigger;
-
-          /* use a temporary pointer so the existing buffer is not lost if
-             realloc() fails */
-          bigger = (char *)realloc(mem->memory, mem->size + realsize + 1);
-          if(!bigger)
-            return 0; /* out of memory - returning 0 aborts the transfer */
-          mem->memory = bigger;
-          memcpy(&(mem->memory[mem->size]), ptr, realsize);
-          mem->size += realsize;
-          mem->memory[mem->size] = 0;
-          return realsize;
-        }
-
-  5.3 How do I fetch multiple files with libcurl?
-
-  libcurl has excellent support for transferring multiple files. You should
-  just repeatedly set new URLs with curl_easy_setopt() and then perform the
-  transfer with curl_easy_perform(). The handle you get from curl_easy_init()
-  is not
-  only reusable, but you are even encouraged to reuse it if you can, as that
-  will enable libcurl to use persistent connections.
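-
-  A minimal sketch of that reuse pattern (the URLs are placeholders):
-
-        CURL *curl = curl_easy_init();
-        if(curl) {
-          curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/one");
-          curl_easy_perform(curl);        /* first transfer */
-
-          /* same handle, new URL: the connection can get reused */
-          curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/two");
-          curl_easy_perform(curl);        /* second transfer */
-
-          curl_easy_cleanup(curl);
-        }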
-
-  5.4 Does libcurl do Winsock initialization on Win32 systems?
-
-  Yes, if told to in the curl_global_init() call.
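-
-  That is, something along these lines (CURL_GLOBAL_ALL includes the
-  CURL_GLOBAL_WIN32 bit that triggers the Winsock init):
-
-        curl_global_init(CURL_GLOBAL_ALL); /* calls WSAStartup() on Windows */
-        /* ... use libcurl ... */
-        curl_global_cleanup();             /* calls WSACleanup() on Windows */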
-
-  5.5 Do CURLOPT_WRITEDATA and CURLOPT_READDATA work on Win32 ?
-
-  Yes, but you cannot open a FILE * and pass the pointer to a DLL and have
-  that DLL use the FILE * (as the DLL and the client application cannot access
-  each other's variable memory areas). If you set CURLOPT_WRITEDATA you must
-  also use CURLOPT_WRITEFUNCTION to set a function that writes the file, even
-  if that simply writes the data to the specified FILE *.
-  Similarly, if you use CURLOPT_READDATA you must also specify
-  CURLOPT_READFUNCTION.
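-
-  A sketch of that arrangement, keeping the FILE * handling on the
-  application's side (the filename is a placeholder and 'curl' is an already
-  created easy handle):
-
-        static size_t
-        write_cb(void *ptr, size_t size, size_t nmemb, void *stream)
-        {
-          /* runs in the application, so the FILE * stays on this side */
-          return fwrite(ptr, size, nmemb, (FILE *)stream);
-        }
-
-        /* in the setup code */
-        FILE *out = fopen("saved.dat", "wb");
-        curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, write_cb);
-        curl_easy_setopt(curl, CURLOPT_WRITEDATA, out);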
-
-  5.6 What about Keep-Alive or persistent connections?
-
-  curl and libcurl have excellent support for persistent connections when
-  transferring several files from the same server. curl will attempt to reuse
-  connections for all URLs specified on the same command line/config file, and
-  libcurl will reuse connections for all transfers that are made using the
-  same libcurl handle.
-
-  When you use the easy interface the connection cache is kept within the easy
-  handle. If you instead use the multi interface, the connection cache will be
-  kept within the multi handle and will be shared among all the easy handles
-  that are used within the same multi handle.
-
-  5.7 Link errors when building libcurl on Windows
-
-  You need to make sure that your project, and all the libraries (both static
-  and dynamic) that it links against, are compiled/linked against the same run
-  time library.
-
-  This is determined by the /MD, /ML, /MT (and their corresponding /M?d)
-  options to the command line compiler. /MD (linking against MSVCRT dll) seems
-  to be the most commonly used option.
-
-  When building an application that uses the static libcurl library, you must
-  add -DCURL_STATICLIB to your CFLAGS. Otherwise the linker will look for
-  dynamic import symbols. If you are using Visual Studio, you need to instead
-  add CURL_STATICLIB in the "Preprocessor Definitions" section.
-
-  If you get a linker error like "unknown symbol __imp__curl_easy_init ..." you
-  have linked against the wrong (static) library. If you want to use the
-  libcurl.dll and import lib, you do not need any extra CFLAGS, but use one of
-  the import libraries below. These are the libraries produced by the various
-  lib/Makefile.* files:
-
-       Target:          static lib.   import lib for libcurl*.dll.
-       -----------------------------------------------------------
-       MinGW:           libcurl.a     libcurldll.a
-       MSVC (release):  libcurl.lib   libcurl_imp.lib
-       MSVC (debug):    libcurld.lib  libcurld_imp.lib
-       Borland:         libcurl.lib   libcurl_imp.lib
-
-  5.8 libcurl.so.X: open failed: No such file or directory
-
-  This is an error message you might get when you try to run a program linked
-  with a shared version of libcurl and your runtime linker (ld.so) could not
-  find the shared library named libcurl.so.X. (Where X is the number of the
-  current libcurl ABI, typically 3 or 4).
-
-  You need to make sure that ld.so finds libcurl.so.X. You can do that
-  multiple ways, and it differs somewhat between different operating systems.
-  They are usually:
-
-  * Add an option to the linker command line that specify the hard-coded path
-    the runtime linker should check for the lib (usually -R)
-
-  * Set an environment variable (LD_LIBRARY_PATH for example) where ld.so
-    should check for libs
-
-  * Adjust the system's config to check for libs in the directory where you have
-    put the library (like Linux's /etc/ld.so.conf)
-
-  'man ld.so' and 'man ld' will tell you more details
-
-  5.9 How does libcurl resolve hostnames?
-
-  libcurl supports a large number of name resolve functions. One of them is
-  picked at build-time and will be used unconditionally. Thus, if you want to
-  change name resolver function you must rebuild libcurl and tell it to use a
-  different function.
-
-  - The non-IPv6 resolver that can use one of four different hostname resolve
-  calls (depending on what your system supports):
-
-      A - gethostbyname()
-      B - gethostbyname_r() with 3 arguments
-      C - gethostbyname_r() with 5 arguments
-      D - gethostbyname_r() with 6 arguments
-
-  - The IPv6-resolver that uses getaddrinfo()
-
-  - The c-ares based name resolver that uses the c-ares library for resolves.
-    Using this offers asynchronous name resolves.
-
-  - The threaded resolver (default option on Windows). It uses:
-
-      A - gethostbyname() on plain IPv4 hosts
-      B - getaddrinfo() on IPv6 enabled hosts
-
-  Also note that libcurl never resolves or reverse-lookups addresses given as
-  pure numbers, such as 127.0.0.1 or ::1.
-
-  5.10 How do I prevent libcurl from writing the response to stdout?
-
-  libcurl provides a default built-in write function that writes received data
-  to stdout. Set the CURLOPT_WRITEFUNCTION to receive the data, or possibly
-  set CURLOPT_WRITEDATA to a different FILE * handle.
-
-  5.11 How do I make libcurl not receive the whole HTTP response?
-
-  You make the write callback (or progress callback) return an error and
-  libcurl will then abort the transfer.
-
-  5.12 Can I make libcurl fake or hide my real IP address?
-
-  No. libcurl operates on a higher level. Besides, faking IP address would
-  imply sending IP packets with a made-up source address, and then you normally
-  get a problem with receiving the packet sent back as they would then not be
-  routed to you.
-
-  If you use a proxy to access remote sites, the sites will not see your local
-  IP address but instead the address of the proxy.
-
-  Also note that on many networks NATs or other IP-munging techniques are used
-  that makes you see and use a different IP address locally than what the
-  remote server will see you coming from. You may also consider using
-  https://www.torproject.org/ .
-
-  5.13 How do I stop an ongoing transfer?
-
-  With the easy interface you make sure to return the correct error code from
-  one of the callbacks, but none of them are instant. There is no function you
-  can call from another thread or similar that will stop it immediately.
-  Instead, you need to make sure that one of the callbacks you use returns an
-  appropriate value that will stop the transfer. Suitable callbacks that you
-  can do this with include the progress callback, the read callback and the
-  write callback.
-
-  If you are using the multi interface, you can also stop a transfer by
-  removing the particular easy handle from the multi stack at any moment you
-  think the transfer is done or when you wish to abort the transfer.
-
-  5.14 Using C++ non-static functions for callbacks?
-
-  libcurl is a C library, it does not know anything about C++ member functions.
-
-  You can overcome this "limitation" with relative ease using a static
-  member function that is passed a pointer to the class:
-
-     // f is the pointer to your object.
-     static size_t YourClass::func(void *buffer, size_t sz, size_t n, void *f)
-     {
-       // Call non-static member function.
-       static_cast<YourClass*>(f)->nonStaticFunction();
-     }
-
-     // This is how you pass pointer to the static function:
-     curl_easy_setopt(hcurl, CURLOPT_WRITEFUNCTION, YourClass::func);
-     curl_easy_setopt(hcurl, CURLOPT_WRITEDATA, this);
-
-  5.15 How do I get an FTP directory listing?
-
-  If you end the FTP URL you request with a slash, libcurl will provide you
-  with a directory listing of that given directory. You can also set
-  CURLOPT_CUSTOMREQUEST to alter what exact listing command libcurl would use
-  to list the files.
-
-  The follow-up question tends to be how is a program supposed to parse the
-  directory listing. How does it know what's a file and what's a directory and
-  what's a symlink etc. If the FTP server supports the MLSD command then it
-  will return data in a machine-readable format that can be parsed for type.
-  The types are specified by RFC 3659 section 7.5.1. If MLSD is not supported
-  then you have to work with what you are given. The LIST output format is
-  entirely at the server's own liking and the NLST output does not reveal any
-  types and in many cases does not even include all the directory entries.
-  Also, both LIST and NLST tend to hide Unix-style hidden files (those that
-  start with a dot) by default so you need to do "LIST -a" or similar to see
-  them.
-
-  Example - List only directories.
-  ftp.funet.fi supports MLSD and ftp.kernel.org does not:
-
-     curl -s ftp.funet.fi/pub/ -X MLSD | \
-       perl -lne 'print if s/(?:^|;)type=dir;[^ ]+ (.+)$/$1/'
-
-     curl -s ftp.kernel.org/pub/linux/kernel/ | \
-       perl -lne 'print if s/^d[-rwx]{9}(?: +[^ ]+){7} (.+)$/$1/'
-
-  If you need to parse LIST output in libcurl one such existing
-  list parser is available at https://cr.yp.to/ftpparse.html  Versions of
-  libcurl since 7.21.0 also provide the ability to specify a wildcard to
-  download multiple files from one FTP directory.
-
-  5.16 I want a different time-out
-
-  Sometimes users realize that CURLOPT_TIMEOUT and CURLOPT_CONNECTIMEOUT are
-  not sufficiently advanced or flexible to cover all the various use cases and
-  scenarios applications end up with.
-
-  libcurl offers many more ways to time-out operations. A common alternative
-  is to use the CURLOPT_LOW_SPEED_LIMIT and CURLOPT_LOW_SPEED_TIME options to
-  specify the lowest possible speed to accept before to consider the transfer
-  timed out.
-
-  The most flexible way is by writing your own time-out logic and using
-  CURLOPT_XFERINFOFUNCTION (perhaps in combination with other callbacks) and
-  use that to figure out exactly when the right condition is met when the
-  transfer should get stopped.
-
-  5.17 Can I write a server with libcurl?
-
-  No. libcurl offers no functions or building blocks to build any kind of
-  Internet protocol server. libcurl is only a client-side library. For server
-  libraries, you need to continue your search elsewhere but there exist many
-  good open source ones out there for most protocols you could want a server
-  for. There are also really good stand-alone servers that have been tested
-  and proven for many years. There is no need for you to reinvent them.
-
-  5.18 Does libcurl use threads?
-
-  Put simply: no, libcurl will execute in the same thread you call it in. All
-  callbacks will be called in the same thread as the one you call libcurl in.
-
-  If you want to avoid your thread to be blocked by the libcurl call, you make
-  sure you use the non-blocking multi API which will do transfers
-  asynchronously - still in the same single thread.
-
-  libcurl will potentially internally use threads for name resolving, if it
-  was built to work like that, but in those cases it will create the child
-  threads by itself and they will only be used and then killed internally by
-  libcurl and never exposed to the outside.
-
-6. License Issues
-
-  curl and libcurl are released under an MIT/X derivative license. The license
-  is liberal and should not impose a problem for your project. This section is
-  just a brief summary for the cases we get the most questions. (Parts of this
-  section was much enhanced by Bjorn Reese.)
-
-  We are not lawyers and this is not legal advice. You should probably consult
-  one if you want true and accurate legal insights without our prejudice. Note
-  especially that this section concerns the libcurl license only; compiling in
-  features of libcurl that depend on other libraries (e.g. OpenSSL) may affect
-  the licensing obligations of your application.
-
-  6.1 I have a GPL program, can I use the libcurl library?
-
-  Yes
-
-  Since libcurl may be distributed under the MIT/X derivative license, it can
-  be used together with GPL in any software.
-
-  6.2 I have a closed-source program, can I use the libcurl library?
-
-  Yes
-
-  libcurl does not put any restrictions on the program that uses the library.
-
-  6.3 I have a BSD licensed program, can I use the libcurl library?
-
-  Yes
-
-  libcurl does not put any restrictions on the program that uses the library.
-
-  6.4 I have a program that uses LGPL libraries, can I use libcurl?
-
-  Yes
-
-  The LGPL license does not clash with other licenses.
-
-  6.5 Can I modify curl/libcurl for my program and keep the changes secret?
-
-  Yes
-
-  The MIT/X derivative license practically allows you to do almost anything
-  with the sources, on the condition that the copyright texts in the sources
-  are left intact.
-
-  6.6 Can you please change the curl/libcurl license to XXXX?
-
-  No.
-
-  We have carefully picked this license after years of development and
-  discussions and a large amount of people have contributed with source code
-  knowing that this is the license we use. This license puts the restrictions
-  we want on curl/libcurl and it does not spread to other programs or
-  libraries that use it. It should be possible for everyone to use libcurl or
-  curl in their projects, no matter what license they already have in use.
-
-  6.7 What are my obligations when using libcurl in my commercial apps?
-
-  Next to none. All you need to adhere to is the MIT-style license (stated in
-  the COPYING file) which basically says you have to include the copyright
-  notice in "all copies" and that you may not use the copyright holder's name
-  when promoting your software.
-
-  You do not have to release any of your source code.
-
-  You do not have to reveal or make public any changes to the libcurl source
-  code.
-
-  You do not have to broadcast to the world that you are using libcurl within
-  your app.
-
-  All we ask is that you disclose "the copyright notice and this permission
-  notice" somewhere. Most probably like in the documentation or in the section
-  where other third party dependencies already are mentioned and acknowledged.
-
-  As can be seen here: https://curl.se/docs/companies.html and elsewhere,
-  more and more companies are discovering the power of libcurl and take
-  advantage of it even in commercial environments.
-
-
-7. PHP/CURL Issues
-
-  7.1 What is PHP/CURL?
-
-  The module for PHP that makes it possible for PHP programs to access curl-
-  functions from within PHP.
-
-  In the curl project we call this module PHP/CURL to differentiate it from
-  curl the command line tool and libcurl the library. The PHP team however
-  does not refer to it like this (for unknown reasons). They call it plain
-  CURL (often using all caps) or sometimes ext/curl, but both cause much
-  confusion to users which in turn gives us a higher question load.
-
-  7.2 Who wrote PHP/CURL?
-
-  PHP/CURL was initially written by Sterling Hughes.
-
-  7.3 Can I perform multiple requests using the same handle?
-
-  Yes - at least in PHP version 4.3.8 and later (this has been known to not
-  work in earlier versions, but the exact version when it started to work is
-  unknown to me).
-
-  After a transfer, you just set new options in the handle and make another
-  transfer. This will make libcurl reuse the same connection if it can.
-
-  7.4 Does PHP/CURL have dependencies?
-
-  PHP/CURL is a module that comes with the regular PHP package. It depends on
-  and uses libcurl, so you need to have libcurl installed properly before
-  PHP/CURL can be used.
-
-8. Development
-
- 8.1 Why does curl use C89?
-
- As with everything in curl, there is a history and we keep using what we have
- used before until someone brings up the subject and argues for and works on
- changing it.
-
- We started out using C89 in the 1990s because that was the only way to write
- a truly portable C program and have it run as widely as possible. C89 was for
- a long time even necessary to make things work on otherwise considered modern
- platforms such as Windows. Today, we do not really know how many users that
- still require the use of a C89 compiler.
-
- We will continue to use C89 for as long as nobody brings up a strong enough
- reason for us to change our minds. The core developers of the project do not
- feel restricted by this and we are not convinced that going C99 will offer us
- enough of a benefit to warrant the risk of cutting off a share of users.
-
- 8.2 Will curl be rewritten?
-
- In one go: no. Little by little over time? Maybe.
-
- Over the years, new languages and clever operating environments come and go.
- Every now and then the urge apparently arises to request that we rewrite curl
- in another language.
-
- Some the most important properties in curl are maintaining the API and ABI
- for libcurl and keeping the behavior for the command line tool. As long as we
- can do that, everything else is up for discussion. To maintain the ABI, we
- probably have to maintain a certain amount of code in C, and to remain rock
- stable, we will never risk anything by rewriting a lot of things in one go.
- That said, we can certainly offer more and more optional backends written in
- other languages, as long as those backends can be plugged in at build-time.
- Backends can be written in any language, but should probably provide APIs
- usable from C to ease integration and transition.
diff --git a/docs/FAQ.md b/docs/FAQ.md
new file mode 100644
index 0000000..ffef644
--- /dev/null
+++ b/docs/FAQ.md
@@ -0,0 +1,1432 @@
+<!--
+Copyright (C) Daniel Stenberg, <daniel@haxx.se>, et al.
+
+SPDX-License-Identifier: curl
+-->
+
+# Frequently Asked Questions
+
+# Philosophy
+
+## What is curl?
+
+curl is the name of the project. The name is a play on *Client for URLs*,
+originally with URL spelled in uppercase to make it obvious it deals with
+URLs. The fact it can also be read as *see URL* also helped, it works as an
+abbreviation for *Client URL Request Library* or why not the recursive
+version: *curl URL Request Library*.
+
+The curl project produces two products:
+
+### libcurl
+
+A client-side URL transfer library, supporting DICT, FILE, FTP, FTPS, GOPHER,
+GOPHERS, HTTP, HTTPS, IMAP, IMAPS, LDAP, LDAPS, MQTT, POP3, POP3S, RTMP,
+RTMPS, RTSP, SCP, SFTP, SMB, SMBS, SMTP, SMTPS, TELNET, TFTP, WS and WSS.
+
+libcurl supports HTTPS certificates, HTTP POST, HTTP PUT, FTP uploading,
+Kerberos, SPNEGO, HTTP form based upload, proxies, cookies, user+password
+authentication, file transfer resume, http proxy tunneling and more.
+
+libcurl is highly portable, it builds and works identically on numerous
+platforms. The [install document](https://curl.se/docs/install.html#Ports)
+lists more than 110 operating systems and 28 CPU architectures on which curl
+has been reported to run.
+
+libcurl is free, thread-safe, IPv6 compatible, feature rich, well supported
+and fast.
+
+### curl
+
+A command line tool for getting or sending data using URL syntax.
+
+Since curl uses libcurl, curl supports the same wide range of common Internet
+protocols that libcurl does.
+
+We pronounce curl with an initial k sound. It rhymes with words like girl and
+earl. [This is a short WAV
+file](https://media.merriam-webster.com/soundc11/c/curl0001.wav) to help you.
+
+There are numerous sub-projects and related projects that also use the word
+curl in the project names in various combinations, but you should take notice
+that this FAQ is directed at the command-line tool named curl (and libcurl the
+library), and may therefore not be valid for other curl-related projects.
+(There is however a small section for the PHP/CURL in this FAQ.)
+
+## What is libcurl?
+
+libcurl is a reliable and portable library for doing Internet data transfers
+using one or more of its supported Internet protocols.
+
+You can use libcurl freely in your application, be it open source, commercial
+or closed-source.
+
+libcurl is most probably the most portable, most powerful and most often used
+C-based multi-platform file transfer library on this planet - be it open
+source or commercial.
+
+## What is curl not?
+
+curl is not a Wget clone. That is a common misconception. Never, during curl's
+development, have we intended curl to replace Wget or compete on its market.
+curl is targeted at single-shot file transfers.
+
+curl is not a website mirroring program. If you want to use curl to mirror
+something: fine, go ahead and write a script that wraps around curl or use
+libcurl to make it reality.
+
+curl is not an FTP site mirroring program. Sure, get and send FTP with curl
+but if you want systematic and sequential behavior you should write a script
+(or write a new program that interfaces libcurl) and do it.
+
+curl is not a PHP tool, even though it works perfectly well when used from or
+with PHP (when using the PHP/CURL module).
+
+curl is not a program for a single operating system. curl exists, compiles,
+builds and runs under a wide range of operating systems, including all modern
+Unixes (and a bunch of older ones too), Windows, Amiga, OS/2, macOS, QNX etc.
+
+## When will you make curl do ... ?
+
+We love suggestions of what to change in order to make curl and libcurl
+better. We do however believe in a few rules when it comes to the future of
+curl:
+
+curl the command line tool is to remain a non-graphical command line tool. If
+you want GUIs or fancy scripting capabilities, you should look for another
+tool that uses libcurl.
+
+We do not add things to curl that other small and available tools already do
+well at the side. curl's output can be piped into another program or
+redirected to another file for the next program to interpret.
+
+We focus on protocol related issues and improvements. If you want to do more
+magic with the supported protocols than curl currently does, chances are good
+we will agree. If you want to add more protocols, we may agree.
+
+If you want someone else to do all the work while you wait for us to implement
+it for you, that is not a friendly attitude. We spend a considerable time
+already on maintaining and developing curl. In order to get more out of us,
+you should consider trading in some of your time and effort in return. Simply
+go to the [GitHub repository](https://github.com/curl/curl), fork the project,
+and create pull requests with your proposed changes.
+
+If you write the code, chances are better that it will get into curl faster.
+
+## Who makes curl?
+
+curl and libcurl are not made by any single individual. Daniel Stenberg is
+project leader and main developer, but other persons' submissions are
+important and crucial. Anyone can contribute and post their changes and
+improvements and have them inserted in the main sources (of course on the
+condition that developers agree that the fixes are good).
+
+The full list of all contributors is found in the docs/THANKS file.
+
+curl is developed by a community, with Daniel at the wheel.
+
+## What do you get for making curl?
+
+Project curl is entirely free and open. We do this voluntarily, mostly in our
+spare time. Companies may pay individual developers to work on curl. This is
+not controlled by nor supervised in any way by the curl project.
+
+We get help from companies. Haxx provides website, bandwidth, mailing lists
+etc, GitHub hosts [the primary git repository](https://github.com/curl/curl)
+and other services like the bug tracker. Also again, some companies have
+sponsored certain parts of the development in the past and I hope some will
+continue to do so in the future.
+
+If you want to [support our project](https://curl.se/sponsors.html), consider
+a donation or a banner-program or even better: by helping us with coding,
+documenting or testing etc.
+
+## What about CURL from curl.com?
+
+During the summer of 2001, curl.com was busy advertising their client-side
+programming language for the web, named CURL.
+
+We are in no way associated with curl.com or their CURL programming language.
+
+Our project name curl has been in effective use since 1998. We were not the
+first computer related project to use the name *curl* and do not claim any
+rights to the name.
+
+We recognize that we will be living in parallel with curl.com and wish them
+every success.
+
+## I have a problem, who do I mail?
+
+Please do not mail any single individual unless you really need to. Keep
+curl-related questions on a suitable mailing list. All available mailing lists
+are listed [online](https://curl.se/mail/).
+
+Keeping curl-related questions and discussions on mailing lists allows others
+to join in and help, to share their ideas, to contribute their suggestions and
+to spread their wisdom. Keeping discussions on public mailing lists also
+allows for others to learn from this (both current and future users thanks to
+the web based archives of the mailing lists), thus saving us from having to
+repeat ourselves even more. Thanks for respecting this.
+
+If you have found or simply suspect a security problem in curl or libcurl,
+submit all the details at [HackerOne](https://hackerone.com/curl). On there we
+keep the issue private while we investigate, confirm it, work and validate a
+fix and agree on a time schedule for publication etc. That way we produce a
+fix in a timely manner before the flaw is announced to the world, reducing the
+impact the problem risks having on existing users.
+
+Security issues can also be taken to the curl security team by emailing
+security at curl.se (closed list of receivers, mails are not disclosed).
+
+## Where do I buy commercial support for curl?
+
+curl is fully open source. It means you can hire any skilled engineer to fix
+your curl-related problems.
+
+We list [available alternatives](https://curl.se/support.html).
+
+## How many are using curl?
+
+It is impossible to tell.
+
+We do not know how many users that knowingly have installed and use curl.
+
+We do not know how many users that use curl without knowing that they are in
+fact using it.
+
+We do not know how many users that downloaded or installed curl and then never
+use it.
+
+In 2025, we estimate that curl runs in roughly thirty billion installations
+worldwide.
+
+## Why do you not update ca-bundle.crt
+
+In the curl project we have decided not to attempt to keep this file updated
+(or even present) since deciding what to add to a ca cert bundle is an
+undertaking we have not been ready to accept, and the one we can get from
+Mozilla is perfectly fine so there is no need to duplicate that work.
+
+Today, with many services performed over HTTPS, every operating system should
+come with a default ca cert bundle that can be deemed somewhat trustworthy and
+that collection (if reasonably updated) should be deemed to be a lot better
+than a private curl version.
+
+If you want the most recent collection of ca certs that Mozilla Firefox uses,
+we recommend using our online [CA certificate
+service](https://curl.se/docs/caextract.html), set up for this purpose.
+
+## I have a problem, who can I chat with?
+
+There is a bunch of friendly people hanging out in the #curl channel on the
+IRC network libera.chat. If you are polite and nice, chances are good that you
+can get -- or provide -- help instantly.
+
+## curl's ECCN number?
+
+The US government restricts exports of software that contains or uses
+cryptography. When doing so, the Export Control Classification Number (ECCN)
+is used to identify the level of export control etc.
+
+Apache Software Foundation has [a good explanation of
+ECCN](https://www.apache.org/dev/crypto.html).
+
+We believe curl's number might be ECCN 5D002; another possibility is 5D992.
+It seems necessary to write to them (the authority that administers ECCN
+numbers) and ask them to confirm.
+
+Comprehensible explanations of the meaning of such numbers and how to obtain
+them (resp.) are [here](https://www.bis.gov/licensing/classify-your-item)
+and [here](https://www.bis.gov/licensing/classify-your-item/publicly-available-classification-information).
+
+An incomprehensible description of the two numbers above is available on
+[bis.doc.gov](https://www.bis.doc.gov/index.php/documents/new-encryption/1653-ccl5-pt2-3)
+
+## How do I submit my patch?
+
+We strongly encourage you to submit changes and improvements directly as [pull
+requests on GitHub](https://github.com/curl/curl/pulls).
+
+If you for any reason cannot or will not deal with GitHub, send your patch to
+the curl-library mailing list. We are many subscribers there and there are
+lots of people who can review patches, comment on them and receive them
+properly.
+
+Lots more details are found in the
+[contribute](https://curl.se/dev/contribute.html) and
+[internals](https://curl.se/dev/internals.html)
+documents.
+
+## How do I port libcurl to my OS?
+
+Here's a rough step-by-step:
+
+1. copy a suitable lib/config-*.h file as a start to `lib/config-[youros].h`
+2. edit `lib/config-[youros].h` to match your OS and setup
+3. edit `lib/curl_setup.h` to include `config-[youros].h` when your OS is
+   detected by the preprocessor, in the same style as the existing entries
+   (see the sketch after this list)
+4. compile `lib/*.c` and make them into a library
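+
+A hedged sketch of step 3, assuming a hypothetical `__MYOS__` preprocessor
+symbol that your compiler defines for your platform:
+
+~~~c
+/* hypothetical addition to lib/curl_setup.h */
+#ifdef __MYOS__
+#  include "config-myos.h"
+#endif
+~~~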
+
+# Install
+
+## configure fails when using static libraries
+
+You may find that configure fails to properly detect the entire dependency
+chain of libraries when you provide static versions of the libraries that
+configure checks for.
+
+The reason why static libraries are much harder to deal with is that for them
+we do not get any help; the script itself must know or check which additional
+libraries are needed (with shared libraries, that dependency chain is handled
+automatically). This is an error-prone process and one that also tends to vary
+over time depending on the release versions of the involved components, and it
+may also differ between operating systems.
+
+For that reason, configure makes only limited attempts to figure this out and
+you are instead encouraged to set `LIBS` and `LDFLAGS` accordingly when you
+invoke configure, pointing out the needed libraries and setting the necessary
+flags yourself.
+
+## Does curl work with other SSL libraries?
+
+curl has been written to use a generic SSL function layer internally, and
+that SSL functionality can then be provided by one out of many different SSL
+backends.
+
+curl can be built to use one of the following SSL alternatives: OpenSSL,
+LibreSSL, BoringSSL, AWS-LC, GnuTLS, wolfSSL, mbedTLS, Schannel (native
+Windows) or Rustls. They all have their pros and cons, and we maintain [a TLS
+library comparison](https://curl.se/docs/ssl-compared.html).
+
+## How do I upgrade curl.exe in Windows?
+
+The curl tool that is shipped as an integrated component of Windows 10 and
+Windows 11 is managed by Microsoft. If you were to delete the file or replace
+it with a newer version downloaded from [the curl
+website](https://curl.se/windows), then Windows Update will cease to work on
+your system.
+
+There is no way to independently force an upgrade of the curl.exe that is part
+of Windows other than through the regular Windows update process. There is
+also nothing the curl project itself can do about this, since this is managed
+and controlled entirely by Microsoft as owners of the operating system.
+
+You can always download and install [the latest version of curl for
+Windows](https://curl.se/windows) into a separate location.
+
+## Does curl support SOCKS (RFC 1928) ?
+
+Yes, SOCKS 4 and 5 are supported.
+
+# Usage
+
+## curl: (1) SSL is disabled, https: not supported
+
+If you get this output when trying to get anything from an HTTPS server, it
+means that the instance of curl/libcurl that you are using was built without
+support for this protocol.
+
+This could have happened if the configure script that was run at build time
+could not find all libs and include files curl requires for SSL to work. If
+the configure script fails to find them, curl is simply built without SSL
+support.
+
+To get HTTPS support into a curl that was previously built but that reports
+that HTTPS is not supported, you should dig through the document and logs and
+check out why the configure script does not find the SSL libs and/or include
+files.
+
+## How do I tell curl to resume a transfer?
+
+curl supports resumed transfers both ways on both FTP and HTTP. Try the `-C`
+option.
+
+## Why does my posting using -F not work?
+
+You cannot arbitrarily use `-F` or `-d`, the choice between `-F` or `-d`
+depends on the HTTP operation you need curl to do and what the web server that
+will receive your post expects.
+
+If the form you are trying to submit uses the type `multipart/form-data`,
+then and only then you must use the `-F` type. In all the most common cases,
+you should use `-d` which then causes a posting with the type
+`application/x-www-form-urlencoded`.
+
+This is described in some detail in the
+[Manual](https://curl.se/docs/tutorial.html) and [The Art Of HTTP
+Scripting](https://curl.se/docs/httpscripting.html) documents, and if you do
+not understand it the first time, read it again before you post questions
+about this to the mailing list. Also, try reading through the mailing list
+archives for old postings and questions regarding this.
+
+## How do I tell curl to run custom FTP commands?
+
+You can tell curl to perform optional commands both before and/or after a file
+transfer. Study the `-Q`/`--quote` option.
+
+Since curl is used for file transfers, you do not normally use curl to perform
+FTP commands without transferring anything. Therefore you must always specify
+a URL to transfer to/from even when doing custom FTP commands, or use `-I`
+which implies the *no body* option sent to libcurl.
+
+## How can I disable the Accept: header?
+
+You can change this and all internally generated headers by adding a
+replacement with the `-H`/`--header` option. By adding a header with empty
+contents you safely disable that one. Use `-H Accept:` to disable that
+specific header.
+
+## Does curl support ASP, XML, XHTML or HTML version Y?
+
+To curl, all contents are alike. It does not matter how the page was
+generated. It may be ASP, PHP, Perl, shell-script, SSI or plain HTML
+files. There is no difference to curl and it does not even know what kind of
+language that generated the page.
+
+See also the separate question about JavaScript.
+
+## Can I use curl to delete/rename a file through FTP?
+
+Yes. You specify custom FTP commands with `-Q`/`--quote`.
+
+One example would be to delete a file after you have downloaded it:
+
+    curl -O ftp://example.com/coolfile -Q '-DELE coolfile'
+
+or rename a file after upload:
+
+    curl -T infile ftp://example.com/dir/ -Q "-RNFR infile" -Q "-RNTO newname"
+
+## How do I tell curl to follow HTTP redirects?
+
+curl does not follow so-called redirects by default. The `Location:` header that
+informs the client about this is only interpreted if you are using the
+`-L`/`--location` option. As in:
+
+    curl -L https://example.com
+
+Not all redirects are HTTP ones. See [Redirects work in browser but not with
+curl](#redirects-work-in-browser-but-not-with-curl)
+
+## How do I use curl in my favorite programming language?
+
+Many programming languages have interfaces and bindings that allow you to use
+curl without having to use the command line tool. If you are fluent in such a
+language, you may prefer to use one of these interfaces instead.
+
+Find out more about which languages that support curl directly, and how to
+install and use them, in the [libcurl section of the curl
+website](https://curl.se/libcurl/).
+
+All the various bindings to libcurl are made by other projects and people,
+outside of the curl project. The curl project itself only produces libcurl
+with its plain C API. If you do not find anywhere else to ask you can ask
+about bindings on the curl-library list too, but be prepared that people on
+that list may not know anything about bindings.
+
+In December 2025 there were around **60** different [interfaces
+available](https://curl.se/libcurl/bindings.html) for just about all the
+languages you can imagine.
+
+## What about SOAP, WebDAV, XML-RPC or similar protocols over HTTP?
+
+curl adheres to the HTTP spec, which basically means you can play with *any*
+protocol that is built on top of HTTP. Protocols such as SOAP, WebDAV and
+XML-RPC are all such ones. You can use `-X` to set custom requests and -H to
+set custom headers (or replace internally generated ones).
+
+Using libcurl is of course just as good and you would just use the proper
+library options to do the same.
+
+## How do I POST with a different Content-Type?
+
+You can always replace the internally generated headers with `-H`/`--header`.
+To make a simple HTTP POST with `text/xml` as content-type, do something like:
+
+    curl -d "datatopost" -H "Content-Type: text/xml" [URL]
+
+## Why do FTP-specific features over HTTP proxy fail?
+
+Because when you use an HTTP proxy, the protocol spoken on the network will be
+HTTP, even if you specify an FTP URL. This effectively means that you normally
+cannot use FTP-specific features such as FTP upload and FTP quote etc.
+
+There is one exception to this rule, and that is if you can *tunnel through*
+the given HTTP proxy. Proxy tunneling is enabled with a special option (`-p`)
+and is generally not available as proxy admins usually disable tunneling to
+ports other than 443 (which is used for HTTPS access through proxies).
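+
+For libcurl users, the rough counterpart of `-p` is the CURLOPT_HTTPPROXYTUNNEL
+option (the proxy name and port below are placeholders):
+
+~~~c
+#include <curl/curl.h>
+
+/* ask libcurl to tunnel the protocol through the HTTP proxy instead of
+   letting the proxy translate the request to HTTP */
+static void tunnel_through_proxy(CURL *curl)
+{
+  curl_easy_setopt(curl, CURLOPT_PROXY, "http://proxy.example.com:3128");
+  curl_easy_setopt(curl, CURLOPT_HTTPPROXYTUNNEL, 1L);
+}
+~~~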
+
+## Why do my single/double quotes fail?
+
+To specify a command line option that includes spaces, you might need to put
+the entire option within quotes. Like in:
+
+    curl -d " with spaces " example.com
+
+or perhaps
+
+    curl -d ' with spaces ' example.com
+
+Exactly what kind of quotes and how to do this is entirely up to the shell or
+command line interpreter that you are using. For most Unix shells, you can
+more or less pick either single (`'`) or double (`"`) quotes. For Windows/DOS
+command prompts you must use double (") quotes, and if the option string
+contains inner double quotes you can escape them with a backslash.
+
+For Windows PowerShell the arguments are not always passed on as expected
+because curl is not a PowerShell script. You may or may not be able to use
+single quotes. Escaping inner double quotes seems to require a
+backslash-backtick escape sequence, with the outer quotes being double quotes.
+
+Please study the documentation for your particular environment. Examples in
+the curl docs will use a mix of both of these as shown above. You must adjust
+them to work in your environment.
+
+Remember that curl works and runs on more operating systems than most single
+individuals have ever tried.
+
+## Does curl support JavaScript or PAC (automated proxy config)?
+
+Many webpages do magic stuff using embedded JavaScript. curl and libcurl have
+no built-in support for that, so it will be treated just like any other
+contents.
+
+`.pac` files are a Netscape invention and are sometimes used by organizations
+to allow them to differentiate which proxies to use. The `.pac` contents is
+just a JavaScript program that gets invoked by the browser and that returns
+the name of the proxy to connect to. Since curl does not support JavaScript,
+it cannot support .pac proxy configuration either.
+
+Some workarounds usually suggested to overcome this JavaScript dependency:
+
+- Depending on the JavaScript complexity, write up a script that translates
+  it to another language and execute that.
+
+- Read the JavaScript code and rewrite the same logic in another language.
+
+- Implement a JavaScript interpreter; people have successfully used the
+  Mozilla JavaScript engine in the past.
+
+- Ask your admins to stop this, for a static proxy setup or similar.
+
+## Can I do recursive fetches with curl?
+
+No. curl itself has no code that performs recursive operations, such as those
+performed by Wget and similar tools.
+
+There exists curl using scripts with that functionality, and you can write
+programs based on libcurl to do it, but the command line tool curl itself
+cannot.
+
+## What certificates do I need when I use SSL?
+
+There are three different kinds of certificates to keep track of when we talk
+about using SSL-based protocols (HTTPS or FTPS) using curl or libcurl.
+
+### Client certificate
+
+The server you communicate with may require that you can provide this in
+order to prove that you actually are who you claim to be. If the server
+does not require this, you do not need a client certificate.
+
+A client certificate is always used together with a private key, and the
+private key has a passphrase that protects it.
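+
+With libcurl, a minimal sketch of providing a client certificate could look
+like this (the file names and passphrase are placeholders):
+
+~~~c
+#include <curl/curl.h>
+
+static void set_client_cert(CURL *curl)
+{
+  curl_easy_setopt(curl, CURLOPT_SSLCERT, "client-cert.pem");
+  curl_easy_setopt(curl, CURLOPT_SSLKEY, "client-key.pem");
+  curl_easy_setopt(curl, CURLOPT_KEYPASSWD, "secret");
+}
+~~~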
+
+### Server certificate
+
+The server you communicate with has a server certificate. You can and should
+verify this certificate to make sure that you are truly talking to the real
+server and not a server impersonating it.
+
+Servers often also provide an intermediate certificate. It acts as a bridge
+between a website's SSL certificate and a Certificate Authority's (CA) root
+certificate, creating a "chain of trust".
+
+### Certificate Authority Certificate ("CA cert")
+
+You often have several CA certs in a CA cert bundle that can be used to verify
+a server certificate that was signed by one of the authorities in the bundle.
+curl does not come with a CA cert bundle but most curl installs provide one.
+You can also override the default.
+
+Server certificate verification is enabled by default in curl and libcurl.
+Server certificates that are *self-signed* or otherwise signed by a CA that
+you do not have a CA cert for, cannot be verified. If the verification during
+a connect fails, you are refused access. You then might have to explicitly
+disable the verification to connect to the server.
+
+## How do I list the root directory of an FTP server?
+
+There are two ways. The way defined in the RFC is to use an encoded slash in
+the first path part. List the `/tmp` directory like this:
+
+    curl ftp://ftp.example.com/%2ftmp/
+
+or the not-quite-kosher-but-more-readable way, by simply starting the path
+section of the URL with a slash:
+
+    curl ftp://ftp.example.com//tmp/
+
+## Can I use curl to send a POST/PUT and not wait for a response?
+
+No.
+
+You can easily write your own program using libcurl to do such stunts.
+
+## How do I get HTTP from a host using a specific IP address?
+
+For example, you may be trying out a website installation that is not yet in
+the DNS. Or you have a site using multiple IP addresses for a given host
+name and you want to address a specific one out of the set.
+
+Set a custom `Host:` header that identifies the server name you want to reach
+but use the target IP address in the URL:
+
+    curl --header "Host: www.example.com" https://somewhere.example/
+
+You can also opt to add faked hostname entries to curl with the --resolve
+option. That has the added benefit that things like redirects will also work
+properly. The above operation would instead be done as:
+
+    curl --resolve www.example.com:443:127.0.0.1 https://www.example.com/
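+
+If you do the same from libcurl, a rough equivalent of `--resolve` is the
+CURLOPT_RESOLVE option, here mirroring the example above:
+
+~~~c
+#include <curl/curl.h>
+
+int main(void)
+{
+  CURL *curl = curl_easy_init();
+  if(curl) {
+    /* "hostname:port:address" entries */
+    struct curl_slist *resolve =
+      curl_slist_append(NULL, "www.example.com:443:127.0.0.1");
+
+    curl_easy_setopt(curl, CURLOPT_RESOLVE, resolve);
+    curl_easy_setopt(curl, CURLOPT_URL, "https://www.example.com/");
+    curl_easy_perform(curl);
+
+    curl_slist_free_all(resolve);
+    curl_easy_cleanup(curl);
+  }
+  return 0;
+}
+~~~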
+
+## How to SFTP from my user's home directory?
+
+Contrary to how FTP works, SFTP and SCP URLs specify the exact directory to
+work with. It means that if you do not specify that you want the user's home
+directory, you get the actual root directory.
+
+To specify a file in your user's home directory, you need to use the correct
+URL syntax which for SFTP might look similar to:
+
+    curl -O -u user:password sftp://example.com/~/file.txt
+
+and for SCP it is just a different protocol prefix:
+
+    curl -O -u user:password scp://example.com/~/file.txt
+
+## Protocol xxx not supported or disabled in libcurl
+
+When passing on a URL to curl to use, it may respond that the particular
+protocol is not supported or disabled. The particular way this error message
+is phrased is because curl does not make a distinction internally of whether a
+particular protocol is not supported (i.e. never got any code added that knows
+how to speak that protocol) or if it was explicitly disabled. curl can be
+built to only support a given set of protocols, and the rest would then be
+disabled or not supported.
+
+Note that this error will also occur if you pass a wrongly spelled protocol
+part as in `htpts://example.com` or as in the less evident case if you prefix
+the protocol part with a space as in `" https://example.com/"`.
+
+## curl `-X` gives me HTTP problems
+
+In normal circumstances, `-X` should hardly ever be used.
+
+By default you use curl without explicitly saying which request method to use
+when the URL identifies an HTTP transfer. If you just pass in a URL like `curl
+https://example.com` it will use GET. If you use `-d` or `-F`, curl will use
+POST, `-I` will cause a HEAD and `-T` will make it a PUT.
+
+If for whatever reason you are not happy with these default choices that curl
+does for you, you can override those request methods by specifying `-X
+[WHATEVER]`. This way you can for example send a DELETE by doing
+`curl -X DELETE [URL]`.
+
+It is thus pointless to do `curl -XGET [URL]` as GET would be used anyway. In
+the same vein it is pointless to do `curl -X POST -d data [URL]`. You can make
+a fun and somewhat rare request that sends a request-body in a GET request
+with something like `curl -X GET -d data [URL]`.
+
+Note that `-X` does not actually change curl's behavior as it only modifies
+the actual string sent in the request, but that may of course trigger a
+different set of events.
+
+Accordingly, by using `-XPOST` on a command line that for example would follow
+a 303 redirect, you will effectively prevent curl from behaving correctly. Be
+aware.
+
+# Running
+
+## Why do I get problems when I use & or % in the URL?
+
+In general Unix shells, the & symbol is treated specially and when used, it
+runs the specified command in the background. To safely send the & as a part
+of a URL, you should quote the entire URL by using single (`'`) or double
+(`"`) quotes around it. Similar problems can also occur on some shells with
+other characters, including ?*!$~(){}<>\|;`. When in doubt, quote the URL.
+
+An example that would invoke a remote CGI that uses &-symbols could be:
+
+    curl 'https://www.example.com/cgi-bin/query?text=yes&q=curl'
+
+In Windows, the standard DOS shell treats the percent sign specially and you
+need to use TWO percent signs for each single one you want to use in the URL.
+
+If you want a literal percent sign to be part of the data you pass in a POST
+using `-d`/`--data` you must encode it as `%25` (which then also needs the
+percent sign doubled on Windows machines).
+
+## How can I use {, }, [ or ] to specify multiple URLs?
+
+Because those letters have a special meaning to the shell, to be used in a URL
+specified to curl you must quote them.
+
+An example that downloads two URLs (sequentially) would be:
+
+    curl '{curl,www}.haxx.se'
+
+To be able to use those characters as actual parts of the URL (without using
+them for the curl URL *globbing* system), use the `-g`/`--globoff` option:
+
+    curl -g 'www.example.com/weirdname[].html'
+
+## Why do I get downloaded data even though the webpage does not exist?
+
+curl asks remote servers for the page you specify. If the page does not exist
+at the server, the HTTP protocol defines how the server should respond and
+that means that headers and a page will be returned. That is simply how HTTP
+works.
+
+By using the `--fail` option you can tell curl explicitly to not get any data
+if the HTTP return code does not say success.
+
+## Why do I get return code XXX from an HTTP server?
+
+RFC 2616 clearly explains the return codes. This is a short transcript. Go
+read the RFC for exact details:
+
+### 400 Bad Request
+
+The request could not be understood by the server due to malformed
+syntax. The client SHOULD NOT repeat the request without modifications.
+
+### 401 Unauthorized
+
+The request requires user authentication.
+
+### 403 Forbidden
+
+The server understood the request, but is refusing to fulfill it.
+Authorization will not help and the request SHOULD NOT be repeated.
+
+### 404 Not Found
+
+The server has not found anything matching the Request-URI. No indication is
+given as to whether the condition is temporary or permanent.
+
+### 405 Method Not Allowed
+
+The method specified in the Request-Line is not allowed for the resource
+identified by the Request-URI. The response MUST include an `Allow:` header
+containing a list of valid methods for the requested resource.
+
+### 301 Moved Permanently
+
+If you get this return code and an HTML output similar to this:
+
+    <H1>Moved Permanently</H1> The document has moved <A
+    HREF="https://same_url_now_with_a_trailing_slash.example/">here</A>.
+
+it might be because you requested a directory URL but without the trailing
+slash. Try the same operation again _with_ the trailing slash, or use the
+`-L`/`--location` option to follow the redirection.
+
+## Can you tell me what error code 142 means?
+
+All curl error codes are described at the end of the man page, in the section
+called **EXIT CODES**.
+
+Error codes that are larger than the highest documented error code means that
+curl has exited due to a crash. This is a serious error, and we appreciate a
+detailed bug report from you that describes how we could go ahead and repeat
+this.
+
+## How do I keep usernames and passwords secret in curl command lines?
+
+This problem has two sides:
+
+The first part is to avoid having clear-text passwords in the command line so
+that they do not appear in *ps* outputs and similar. That is easily avoided by
+using the `-K` option to tell curl to read parameters from a file or stdin to
+which you can pass the secret info. curl itself will also attempt to hide the
+given password by blanking out the option - this does not work on all
+platforms.
+
+To keep the passwords in your account secret from the rest of the world is
+not a task that curl addresses. You could of course encrypt them somehow to
+at least hide them from being read by human eyes, but that is not what
+anyone would call security.
+
+Also note that regular HTTP (using Basic authentication) and FTP passwords are
+sent as cleartext across the network. All it takes for anyone to fetch them is
+to listen on the network. Eavesdropping is easy. Use more secure
+authentication methods (like Digest, Negotiate or even NTLM) or consider the
+SSL-based alternatives HTTPS and FTPS.
+
+## I found a bug
+
+It is not a bug if the behavior is documented. Read the docs first. Especially
+check out the KNOWN_BUGS file, it may be a documented bug.
+
+If it is a problem with a binary you have downloaded or a package for your
+particular platform, try contacting the person who built the package/archive
+you have.
+
+If there is a bug, read the BUGS document first. Then report it as described
+in there.
+
+## curl cannot authenticate to a server that requires NTLM?
+
+NTLM support requires OpenSSL, GnuTLS, mbedTLS or Microsoft Windows libraries
+at build-time to provide this functionality.
+
+## My HTTP request using HEAD, PUT or DELETE does not work
+
+Many web servers allow or demand that the administrator configures the server
+properly for these requests to work on the web server.
+
+Some servers seem to support HEAD only on certain kinds of URLs.
+
+To fully grasp this, try the documentation for the particular server software
+you are trying to interact with. There is nothing curl can do about this.
+
+## Why do my HTTP range requests return the full document?
+
+Because the range may not be supported by the server, or the server may choose
+to ignore it and return the full document anyway.
+
+## Why do I get "certificate verify failed" ?
+
+When you invoke curl and get error 60 back, it means that curl could not
+verify that the server's certificate was good. curl verifies the certificate
+using the CA cert bundle and checks for which names the certificate has been
+granted.
+
+To completely disable the certificate verification, use `-k`. This does
+however enable man-in-the-middle attacks and makes the transfer **insecure**.
+We strongly advise against doing this for more than experiments.
+
+If you get this failure with a CA cert bundle installed and used, the server's
+certificate might not be signed by one of the certificate authorities in your
+CA store. It might for example be self-signed. You then correct this problem
+by obtaining a valid CA cert for the server. Or again, decrease the security
+by disabling this check.
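+
+In libcurl, the CA cert bundle to verify against is pointed out with the
+CURLOPT_CAINFO option (the path below is a placeholder):
+
+~~~c
+#include <curl/curl.h>
+
+static void setup_verification(CURL *curl)
+{
+  /* verify the server certificate against this CA bundle */
+  curl_easy_setopt(curl, CURLOPT_CAINFO, "/path/to/ca-bundle.crt");
+
+  /* for experiments only: disabling verification is insecure */
+  /* curl_easy_setopt(curl, CURLOPT_SSL_VERIFYPEER, 0L); */
+}
+~~~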
+
+At times, you find that the verification works in your favorite browser but
+fails in curl. When this happens, the reason is usually that the server sends
+an incomplete cert chain. The server is mandated to send all *intermediate
+certificates* but does not. This typically works with browsers anyway since
+they A) cache such certs and B) support AIA, which downloads such missing
+certificates on demand. This is a bad server configuration. A good way to
+figure out if this is the case is to use [the SSL Labs
+server](https://www.ssllabs.com/ssltest/) test and check the certificate
+chain.
+
+Details are also in [the SSL certificates
+document](https://curl.se/docs/sslcerts.html).
+
+## Why is curl -R on Windows one hour off?
+
+Since curl 7.53.0 this issue should be fixed as long as curl was built with
+any modern compiler that allows for a 64-bit curl_off_t type. For older
+compilers or prior curl versions it may set a time that appears one hour off.
+This happens due to a flaw in how Windows stores and uses file modification
+times and it is not easily worked around. For more details [read
+this](https://www.codeproject.com/articles/Beating-the-Daylight-Savings-Time-Bug-and-Getting#comments-section).
+
+## Redirects work in browser but not with curl
+
+curl supports HTTP redirects well (see a previous question above). Browsers
+generally support at least two other ways to perform redirects that curl does
+not:
+
+- Meta tags. You can write an HTML tag that will cause the browser to
+  redirect to another given URL after a certain time.
+
+- JavaScript. You can write a JavaScript program embedded in an HTML page
+  that redirects the browser to another given URL.
+
+There is no way to make curl follow these redirects. You must either manually
+figure out what the page is set to do, or write a script that parses the
+results and fetches the new URL.
+
+## FTPS does not work
+
+curl supports FTPS (sometimes known as FTP-SSL) both implicit and explicit
+mode.
+
+When a URL is used that starts with `FTPS://`, curl assumes implicit SSL on
+the control connection and will therefore immediately connect and try to speak
+SSL. `FTPS://` connections default to port 990.
+
+To use explicit FTPS, you use an `FTP://` URL and the `--ssl-reqd` option (or
+one of its related flavors). This is the most common method, and the one
+mandated by RFC 4217. This kind of connection will then of course use the
+standard FTP port 21 by default.
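+
+With libcurl, explicit FTPS can be requested with the CURLOPT_USE_SSL option
+on a regular `ftp://` URL, roughly like this (the URL is a placeholder):
+
+~~~c
+#include <curl/curl.h>
+
+static void require_ftps(CURL *curl)
+{
+  curl_easy_setopt(curl, CURLOPT_URL, "ftp://ftp.example.com/file.txt");
+  /* require TLS for both the control and the data connections */
+  curl_easy_setopt(curl, CURLOPT_USE_SSL, (long)CURLUSESSL_ALL);
+}
+~~~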
+
+## My HTTP POST or PUT requests are slow
+
+libcurl makes all POST and PUT requests (except for requests with a small
+request body) use the `Expect: 100-continue` header. This header allows the
+server to deny the operation early so that libcurl can bail out before having
+to send any data. This is useful in authentication cases and others.
+
+However, many servers do not implement the `Expect:` stuff properly and if the
+server does not respond (positively) within 1 second libcurl will continue and
+send off the data anyway.
+
+You can disable libcurl's use of the `Expect:` header the same way you disable
+any header, using `-H` / `CURLOPT_HTTPHEADER`, or by forcing it to use HTTP
+1.0.
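+
+For libcurl users, a minimal sketch of removing the `Expect:` header through
+CURLOPT_HTTPHEADER (the URL and data are placeholders):
+
+~~~c
+#include <curl/curl.h>
+
+int main(void)
+{
+  CURL *curl = curl_easy_init();
+  if(curl) {
+    /* a header name followed by a colon and no value removes the
+       internally generated header with that name */
+    struct curl_slist *headers = curl_slist_append(NULL, "Expect:");
+
+    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/upload");
+    curl_easy_setopt(curl, CURLOPT_POSTFIELDS, "datatopost");
+    curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);
+
+    curl_easy_perform(curl);
+
+    curl_slist_free_all(headers);
+    curl_easy_cleanup(curl);
+  }
+  return 0;
+}
+~~~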
+
+## Non-functional connect timeouts
+
+In most Windows setups, having a timeout longer than 21 seconds makes no
+difference, as only 3 TCP SYN packets are sent. The second packet is sent
+three seconds after the first and the third six seconds after the second. No
+more than three packets are sent, no matter how long the timeout is set.
+
+See option `TcpMaxConnectRetransmissions` on [this
+page](https://support.microsoft.com/bg-bg/topic/hotfix-enables-the-configuration-of-the-tcp-maximum-syn-retransmission-amount-in-windows-7-or-windows-server-2008-r2-1b6f8352-2c5f-58bb-ead7-2cf021407c8e).
+
+Also, even on non-Windows systems, a firewall or anti-virus software or
+similar may accept the connection but not actually do anything else. This
+will make (lib)curl consider the connection established and thus the connect
+timeout will not trigger.
+
+## file:// URLs containing drive letters (Windows, NetWare)
+
+When using curl to try to download a local file, one might use a URL in this
+format:
+
+    file://D:/blah.txt
+
+You then find that even if `D:\blah.txt` does exist, curl returns a 'file not
+found' error.
+
+According to [RFC 1738](https://www.ietf.org/rfc/rfc1738.txt), `file://` URLs
+must contain a host component, but it is ignored by most implementations. In
+the above example, `D:` is treated as the host component, and is taken away.
+Thus, curl tries to open `/blah.txt`. If your system is installed to drive C:,
+that will resolve to `C:\blah.txt`, and if that does not exist you will get
+the not found error.
+
+To fix this problem, use `file://` URLs with *three* leading slashes:
+
+    file:///D:/blah.txt
+
+Alternatively, if it makes more sense, specify `localhost` as the host
+component:
+
+    file://localhost/D:/blah.txt
+
+In either case, curl should now be looking for the correct file.
+
+## Why does curl not return an error when the network cable is unplugged?
+
+Unplugging a cable is not an error situation. The TCP/IP protocol stack was
+designed to be fault tolerant, so even though there may be a physical break
+somewhere the connection should not be affected, just possibly delayed.
+Eventually, the physical break will be fixed or the data will be re-routed
+around the physical problem through another path.
+
+In such cases, the TCP/IP stack is responsible for detecting when the network
+connection is irrevocably lost. Since with some protocols it is perfectly
+legal for the client to wait indefinitely for data, the stack may never report
+a problem, and even when it does, it can take up to 20 minutes for it to
+detect an issue. The curl option `--keepalive-time` enables keep-alive support
+in the TCP/IP stack which makes it periodically probe the connection to make
+sure it is still available to send data. That should reliably detect any
+TCP/IP network failure.
+
+TCP keep alive will not detect the network going down before the TCP/IP
+connection is established (e.g. during a DNS lookup) or using protocols that
+do not use TCP. To handle those situations, curl offers a number of timeouts
+on its own. `--speed-limit`/`--speed-time` will abort if the data transfer
+rate falls too low, and `--connect-timeout` and `--max-time` can be used to
+put an overall timeout on the connection phase or the entire transfer.
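+
+For example, a command line that combines these timeouts (the URL and the
+numbers are just placeholders):
+
+    curl --connect-timeout 10 --max-time 300 \
+      --speed-limit 1000 --speed-time 30 https://example.com/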
+
+A libcurl-using application running in a known physical environment (e.g. an
+embedded device with only a single network connection) may want to act
+immediately if its lone network connection goes down. That can be achieved by
+having the application monitor the network connection on its own using an
+OS-specific mechanism, then signaling libcurl to abort.
+
+## curl does not return error for HTTP non-200 responses
+
+Correct. Unless you use `-f` (`--fail`) or `--fail-with-body`.
+
+When doing HTTP transfers, curl will perform exactly what you are asking it to
+do and if successful it will not return an error. You can use curl to test
+your web server's "file not found" page (that gets 404 back), you can use it
+to check your authentication protected webpages (that gets a 401 back) and so
+on.
+
+The specific HTTP response code does not constitute a problem or error for
+curl. It simply sends and delivers HTTP as you asked and if that worked,
+everything is fine and dandy. The response code generally provides higher
+level error information that curl does not care about. The error was not in
+the HTTP transfer.
+
+If you want your command line to treat error codes in the 400 and up range as
+errors and thus return a non-zero value and possibly show an error message,
+curl has a dedicated option for that: `-f` (`CURLOPT_FAILONERROR` in libcurl
+speak).
+
+You can also use the `-w` option and the variable `%{response_code}` to
+extract the exact response code that was returned in the response.
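+
+For example, to print only the response code and discard the body (the URL is
+a placeholder):
+
+    curl -s -o /dev/null -w "%{response_code}\n" https://example.com/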
+
+# libcurl
+
+## Is libcurl thread-safe?
+
+Yes.
+
+We have written the libcurl code specifically adjusted for multi-threaded
+programs. libcurl will use thread-safe functions instead of non-safe ones if
+your system has such. Note that you must never share the same handle in
+multiple threads.
+
+There may be some exceptions to thread safety depending on how libcurl was
+built. Please review [the guidelines for thread
+safety](https://curl.se/libcurl/c/threadsafe.html) to learn more.
+
+## How can I receive all data into a large memory chunk?
+
+(See the [get in memory](https://curl.se/libcurl/c/getinmemory.html) example.)
+
+You are in full control of the callback function that gets called every time
+there is data received from the remote server. You can make that callback do
+whatever you want. You do not have to write the received data to a file.
+
+One solution to this problem could be to have a pointer to a struct that you
+pass to the callback function. You set the pointer using the
+`CURLOPT_WRITEDATA` option. Then that pointer will be passed to the callback
+instead of a FILE * to a file:
+
+~~~c
+/* store data in this struct */
+struct MemoryStruct {
+  char *memory;
+  size_t size;
+};
+
+/* imaginary callback function */
+size_t
+WriteMemoryCallback(void *ptr, size_t size, size_t nmemb, void *data)
+{
+  size_t realsize = size * nmemb;
+  struct MemoryStruct *mem = (struct MemoryStruct *)data;
+
+  /* use a temporary pointer so the old buffer is not lost if realloc fails */
+  char *bigger = (char *)realloc(mem->memory, mem->size + realsize + 1);
+  if(!bigger)
+    return 0; /* out of memory - returning 0 makes libcurl abort the transfer */
+
+  mem->memory = bigger;
+  memcpy(&(mem->memory[mem->size]), ptr, realsize);
+  mem->size += realsize;
+  mem->memory[mem->size] = 0; /* keep the buffer null-terminated */
+  return realsize;
+}
+~~~
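+
+Wiring it up could then look like this (a sketch; the `curl` handle is assumed
+and error checking is omitted):
+
+~~~c
+struct MemoryStruct chunk;
+chunk.memory = NULL; /* realloc(NULL, size) behaves like malloc(size) */
+chunk.size = 0;
+
+curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, WriteMemoryCallback);
+curl_easy_setopt(curl, CURLOPT_WRITEDATA, (void *)&chunk);
+
+/* after curl_easy_perform(), chunk.memory holds chunk.size bytes */
+~~~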
+
+## How do I fetch multiple files with libcurl?
+
+libcurl has excellent support for transferring multiple files. You should just
+repeatedly set new URLs with `curl_easy_setopt()` and then transfer each of
+them with `curl_easy_perform()`. The handle you get from `curl_easy_init()` is
+not only reusable, but you are even encouraged to reuse it if you can, as that
+will enable libcurl to use persistent connections.
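+
+A minimal sketch (the URLs are placeholders and error handling is omitted):
+
+~~~c
+CURL *curl = curl_easy_init();
+if(curl) {
+  curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/one.html");
+  curl_easy_perform(curl);
+
+  /* same handle, new URL - the connection can be reused */
+  curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/two.html");
+  curl_easy_perform(curl);
+
+  curl_easy_cleanup(curl);
+}
+~~~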
+
+## Does libcurl do Winsock initialization on Win32 systems?
+
+Yes, if told to in the `curl_global_init()` call.
+
+## Do `CURLOPT_WRITEDATA` and `CURLOPT_READDATA` work on Win32?
+
+Yes, but you cannot open a FILE * and pass the pointer to a DLL and have that
+DLL use the FILE * (as the DLL and the client application cannot access each
+other's variable memory areas). If you set `CURLOPT_WRITEDATA` you must also
+set `CURLOPT_WRITEFUNCTION` to a function that writes the file, even if that
+simply writes the data to the specified FILE *. Similarly, if you use
+`CURLOPT_READDATA` you must also specify `CURLOPT_READFUNCTION`.
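+
+A sketch of such a pair, where both the FILE * and the callback live in the
+application rather than in the libcurl DLL (names are examples):
+
+~~~c
+static size_t write_to_file(void *ptr, size_t size, size_t nmemb, void *userp)
+{
+  /* fwrite() runs on the application's side of the DLL boundary */
+  return fwrite(ptr, size, nmemb, (FILE *)userp) * size;
+}
+
+FILE *out = fopen("download.bin", "wb");
+curl_easy_setopt(curl, CURLOPT_WRITEDATA, out);
+curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, write_to_file);
+~~~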
+
+## What about Keep-Alive or persistent connections?
+
+curl and libcurl have excellent support for persistent connections when
+transferring several files from the same server. curl will attempt to reuse
+connections for all URLs specified on the same command line/config file, and
+libcurl will reuse connections for all transfers that are made using the same
+libcurl handle.
+
+When you use the easy interface the connection cache is kept within the easy
+handle. If you instead use the multi interface, the connection cache will be
+kept within the multi handle and will be shared among all the easy handles
+that are used within the same multi handle.
+
+## Link errors when building libcurl on Windows
+
+You need to make sure that your project, and all the libraries (both static
+and dynamic) that it links against, are compiled/linked against the same run
+time library.
+
+This is determined by the `/MD`, `/ML`, `/MT` (and their corresponding `/M?d`)
+options to the command line compiler. `/MD` (linking against `MSVCRT.dll`)
+seems to be the most commonly used option.
+
+When building an application that uses the static libcurl library, you must
+add `-DCURL_STATICLIB` to your `CFLAGS`. Otherwise the linker will look for
+dynamic import symbols. If you are using Visual Studio, you need to instead
+add `CURL_STATICLIB` in the "Preprocessor Definitions" section.
+
+If you get a linker error like `unknown symbol __imp__curl_easy_init ...` you
+have linked against the wrong (static) library. If you want to use the
+libcurl.dll and import lib, you do not need any extra `CFLAGS`, but use one of
+the import libraries below. These are the libraries produced by the various
+lib/Makefile.* files:
+
+| Target         | static lib     | import lib for DLL |
+|----------------|----------------|--------------------|
+| MinGW          | `libcurl.a`    | `libcurldll.a`     |
+| MSVC (release) | `libcurl.lib`  | `libcurl_imp.lib`  |
+| MSVC (debug)   | `libcurld.lib` | `libcurld_imp.lib` |
+
+## libcurl.so.X: open failed: No such file or directory
+
+This is an error message you might get when you try to run a program linked
+with a shared version of libcurl and your runtime linker (`ld.so`) could not
+find the shared library named `libcurl.so.X`. (Where X is the number of the
+current libcurl ABI, typically 3 or 4).
+
+You need to make sure that `ld.so` finds `libcurl.so.X`. You can do that
+multiple ways, and it differs somewhat between different operating systems.
+They are usually:
+
+* Add an option to the linker command line that specifies the hard-coded path
+  the runtime linker should check for the lib (usually `-R`)
+* Set an environment variable (`LD_LIBRARY_PATH` for example) where `ld.so`
+  should check for libs
+* Adjust the system's config to check for libs in the directory where you have
+  put the library (like Linux's `/etc/ld.so.conf`)
+
+`man ld.so` and `man ld` have more details.
+
+## How does libcurl resolve hostnames?
+
+libcurl supports a large number of name resolve functions. One of them is
+picked at build-time and will be used unconditionally. Thus, if you want to
+change name resolver function you must rebuild libcurl and tell it to use a
+different function.
+
+### The non-IPv6 resolver
+
+The non-IPv6 resolver can use one of four different hostname resolve calls,
+depending on what your system supports:
+
+1. gethostbyname()
+2. gethostbyname_r() with 3 arguments
+3. gethostbyname_r() with 5 arguments
+4. gethostbyname_r() with 6 arguments
+
+### The IPv6 resolver
+
+Uses getaddrinfo()
+
+### The c-ares resolver
+
+The c-ares based name resolver that uses the c-ares library for resolves.
+Using this offers asynchronous name resolves.
+
+### The threaded resolver
+
+It uses the IPv6 or the non-IPv6 resolver solution in a temporary thread.
+
+## How do I prevent libcurl from writing the response to stdout?
+
+libcurl provides a default built-in write function that writes received data
+to stdout. Set the `CURLOPT_WRITEFUNCTION` to receive the data, or possibly
+set `CURLOPT_WRITEDATA` to a different FILE * handle.
+
+## How do I make libcurl not receive the whole HTTP response?
+
+You make the write callback (or progress callback) return an error and libcurl
+will then abort the transfer.
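+
+A sketch of a write callback that gives up after a chosen amount of data (the
+16 KB limit and the byte counter are just examples):
+
+~~~c
+static size_t partial_write(void *ptr, size_t size, size_t nmemb, void *userp)
+{
+  size_t *received = (size_t *)userp;
+  *received += size * nmemb;
+  if(*received > 16384)
+    return 0; /* a short return makes libcurl fail with CURLE_WRITE_ERROR */
+  return size * nmemb;
+}
+~~~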
+
+## Can I make libcurl fake or hide my real IP address?
+
+No. libcurl operates on a higher level. Besides, faking an IP address would
+imply sending IP packets with a made-up source address, and then you normally
+get a problem with receiving the packets sent back, as they would not be
+routed to you.
+
+If you use a proxy to access remote sites, the sites will not see your local
+IP address but instead the address of the proxy.
+
+Also note that on many networks NATs or other IP-munging techniques are used
+that make you see and use a different IP address locally than what the remote
+server sees you coming from. You may also consider using
+[Tor](https://www.torproject.org/).
+
+## How do I stop an ongoing transfer?
+
+With the easy interface, you stop a transfer by returning an appropriate error
+code from one of the callbacks, but none of them takes effect instantly. There
+is no function you can call from another thread or similar that stops it
+immediately. Suitable callbacks to do this with include the progress callback,
+the read callback and the write callback.
+
+If you are using the multi interface, you can also stop a transfer by removing
+the particular easy handle from the multi stack at any moment you think the
+transfer is done or when you wish to abort the transfer.
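+
+A sketch using the progress callback (the `stop_transfer` flag is an example
+that the rest of the application sets when it wants the transfer gone):
+
+~~~c
+static int xferinfo_cb(void *clientp, curl_off_t dltotal, curl_off_t dlnow,
+                       curl_off_t ultotal, curl_off_t ulnow)
+{
+  int *stop_transfer = (int *)clientp;
+  /* any nonzero return aborts with CURLE_ABORTED_BY_CALLBACK */
+  return *stop_transfer;
+}
+
+curl_easy_setopt(curl, CURLOPT_XFERINFOFUNCTION, xferinfo_cb);
+curl_easy_setopt(curl, CURLOPT_XFERINFODATA, &stop_transfer);
+curl_easy_setopt(curl, CURLOPT_NOPROGRESS, 0L); /* enable the callback */
+~~~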
+
+## Using C++ non-static functions for callbacks?
+
+libcurl is a C library, it does not know anything about C++ member functions.
+
+You can overcome this limitation with relative ease using a static member
+function that is passed a pointer to the class:
+
+~~~c++
+// Declared in the class as:
+//   static size_t func(void *buffer, size_t sz, size_t n, void *f);
+// f is the pointer to your object.
+size_t YourClass::func(void *buffer, size_t sz, size_t n, void *f)
+{
+  // Call non-static member function.
+  static_cast<YourClass*>(f)->nonStaticFunction();
+  return sz * n; // tell libcurl all data was taken care of
+}
+
+// This is how you pass the pointer to the static function:
+curl_easy_setopt(hcurl, CURLOPT_WRITEFUNCTION, YourClass::func);
+curl_easy_setopt(hcurl, CURLOPT_WRITEDATA, this);
+~~~
+
+## How do I get an FTP directory listing?
+
+If you end the FTP URL you request with a slash, libcurl will provide you with
+a directory listing of that given directory. You can also set
+`CURLOPT_CUSTOMREQUEST` to alter what exact listing command libcurl would use
+to list the files.
+
+The follow-up question tends to be how is a program supposed to parse the
+directory listing. How does it know what's a file and what's a directory and
+what's a symlink etc. If the FTP server supports the `MLSD` command then it
+will return data in a machine-readable format that can be parsed for type. The
+types are specified by RFC 3659 section 7.5.1. If `MLSD` is not supported then
+you have to work with what you are given. The `LIST` output format is entirely
+at the server's own liking and the `NLST` output does not reveal any types and
+in many cases does not even include all the directory entries. Also, both
+`LIST` and `NLST` tend to hide Unix-style hidden files (those that start with
+a dot) by default so you need to do `LIST -a` or similar to see them.
+
+Example - List only directories. `ftp.funet.fi` supports `MLSD` and
+`ftp.kernel.org` does not:
+
+    curl -s ftp.funet.fi/pub/ -X MLSD | \
+      perl -lne 'print if s/(?:^|;)type=dir;[^ ]+ (.+)$/$1/'
+
+    curl -s ftp.kernel.org/pub/linux/kernel/ | \
+      perl -lne 'print if s/^d[-rwx]{9}(?: +[^ ]+){7} (.+)$/$1/'
+
+If you need to parse LIST output, libcurl provides the ability to specify a
+wildcard to download multiple files from an FTP directory.
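+
+A sketch of the libcurl wildcard feature (the pattern is an example; per-file
+decisions normally go into the chunk callbacks):
+
+~~~c
+curl_easy_setopt(curl, CURLOPT_WILDCARDMATCH, 1L);
+curl_easy_setopt(curl, CURLOPT_URL, "ftp://example.com/pub/*.txt");
+/* set CURLOPT_CHUNK_BGN_FUNCTION / CURLOPT_CHUNK_END_FUNCTION to decide
+   what to do with each matched file */
+curl_easy_perform(curl);
+~~~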
+
+## I want a different time-out
+
+Sometimes users realize that `CURLOPT_TIMEOUT` and `CURLOPT_CONNECTIMEOUT` are
+not sufficiently advanced or flexible to cover all the various use cases and
+scenarios applications end up with.
+
+libcurl offers many more ways to time-out operations. A common alternative is
+to use the `CURLOPT_LOW_SPEED_LIMIT` and `CURLOPT_LOW_SPEED_TIME` options to
+specify the lowest transfer speed to accept before considering the transfer
+timed out.
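+
+For example, to give up if the transfer averages less than 1000 bytes/second
+over a 30 second period (the numbers are just examples):
+
+~~~c
+curl_easy_setopt(curl, CURLOPT_LOW_SPEED_LIMIT, 1000L);
+curl_easy_setopt(curl, CURLOPT_LOW_SPEED_TIME, 30L);
+~~~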
+
+The most flexible way is to write your own time-out logic using
+`CURLOPT_XFERINFOFUNCTION` (perhaps in combination with other callbacks) and
+use that to decide exactly when the transfer should be stopped.
+
+## Can I write a server with libcurl?
+
+No. libcurl offers no functions or building blocks to build any kind of
+Internet protocol server. libcurl is only a client-side library. For server
+libraries, you need to continue your search elsewhere but there exist many
+good open source ones out there for most protocols you could want a server
+for. There are also really good stand-alone servers that have been tested and
+proven for many years. There is no need for you to reinvent them.
+
+## Does libcurl use threads?
+
+Put simply: no, libcurl will execute in the same thread you call it in. All
+callbacks will be called in the same thread as the one you call libcurl in.
+
+If you want to avoid having your thread blocked by the libcurl call, use the
+non-blocking multi API, which does transfers asynchronously - still in the
+same single thread.
+
+libcurl may internally use threads for name resolving, if it was built to work
+like that, but in those cases it creates the child threads by itself and they
+are only used and then terminated internally by libcurl, never exposed to the
+outside.
+
+# License
+
+curl and libcurl are released under an MIT/X derivative license. The license
+is liberal and should not impose a problem for your project. This section is
+just a brief summary for the cases we get the most questions.
+
+We are not lawyers and this is not legal advice. You should probably consult
+one if you want true and accurate legal insights without our prejudice. Note
+especially that this section concerns the libcurl license only; compiling in
+features of libcurl that depend on other libraries (e.g. OpenSSL) may affect
+the licensing obligations of your application.
+
+## I have a GPL program, can I use the libcurl library?
+
+Yes
+
+Since libcurl may be distributed under the MIT/X derivative license, it can be
+used together with GPL in any software.
+
+## I have a closed-source program, can I use the libcurl library?
+
+Yes
+
+libcurl does not put any restrictions on the program that uses the library.
+
+## I have a BSD licensed program, can I use the libcurl library?
+
+Yes
+
+libcurl does not put any restrictions on the program that uses the library.
+
+## I have a program that uses LGPL libraries, can I use libcurl?
+
+Yes
+
+The LGPL license does not clash with other licenses.
+
+## Can I modify curl/libcurl for my program and keep the changes secret?
+
+Yes
+
+The MIT/X derivative license practically allows you to do almost anything with
+the sources, on the condition that the copyright texts in the sources are left
+intact.
+
+## Can you please change the curl/libcurl license?
+
+No.
+
+We have carefully picked this license after years of development and
+discussions, and a large number of people have contributed source code
+knowing that this is the license we use. This license puts the restrictions we
+want on curl/libcurl and it does not spread to other programs or libraries
+that use it. It should be possible for everyone to use libcurl or curl in
+their projects, no matter what license they already have in use.
+
+## What are my obligations when using libcurl in my commercial apps?
+
+Next to none. All you need to adhere to is the MIT-style license (stated in
+the COPYING file) which basically says you have to include the copyright
+notice in *all copies* and that you may not use the copyright holder's name
+when promoting your software.
+
+You do not have to release any of your source code.
+
+You do not have to reveal or make public any changes to the libcurl source
+code.
+
+You do not have to broadcast to the world that you are using libcurl within
+your app.
+
+All we ask is that you disclose *the copyright notice and this permission
+notice* somewhere, most likely in the documentation or in the section where
+other third party dependencies already are mentioned and acknowledged.
+
+As can be seen [here](https://curl.se/docs/companies.html) and elsewhere, more
+and more companies are discovering the power of libcurl and taking advantage
+of it even in commercial environments.
+
+## What license does curl use exactly?
+
+curl is released under an [MIT derivative
+license](https://curl.se/docs/copyright.html). It is similar but not identical
+to the MIT license.
+
+The difference is considered big enough to make SPDX list it under its own
+identifier: [curl](https://spdx.org/licenses/curl.html).
+
+The changes done to the license that make it uniquely curl were tiny and
+well-intended, but the reasons for them have been forgotten and we strongly
+discourage others from doing the same thing.
+
+# PHP/CURL
+
+## What is PHP/CURL?
+
+The module for PHP that makes it possible for PHP programs to access curl
+functions from within PHP.
+
+In the curl project we call this module PHP/CURL to differentiate it from curl
+the command line tool and libcurl the library. The PHP team however does not
+refer to it like this (for unknown reasons). They call it plain CURL (often
+using all caps) or sometimes ext/curl, but both cause much confusion to users
+which in turn gives us a higher question load.
+
+## Who wrote PHP/CURL?
+
+PHP/CURL was initially written by Sterling Hughes.
+
+## Can I perform multiple requests using the same handle?
+
+Yes.
+
+After a transfer, you just set new options in the handle and make another
+transfer. This will make libcurl reuse the same connection if it can.
+
+## Does PHP/CURL have dependencies?
+
+PHP/CURL is a module that comes with the regular PHP package. It depends on
+and uses libcurl, so you need to have libcurl installed properly before
+PHP/CURL can be used.
+
+# Development
+
+## Why does curl use C89?
+
+As with everything in curl, there is a history and we keep using what we have
+used before until someone brings up the subject and argues for and works on
+changing it.
+
+We started out using C89 in the 1990s because that was the only way to write a
+truly portable C program and have it run as widely as possible. C89 was for a
+long time even necessary to make things work on platforms otherwise considered
+modern, such as Windows. Today, we do not really know how many users still
+require the use of a C89 compiler.
+
+We will continue to use C89 for as long as nobody brings up a strong enough
+reason for us to change our minds. The core developers of the project do not
+feel restricted by this and we are not convinced that going C99 will offer us
+enough of a benefit to warrant the risk of cutting off a share of users.
+
+## Will curl be rewritten?
+
+In one go: no. Little by little over time? Sure.
+
+Over the years, new languages and clever operating environments come and go.
+Every now and then the urge apparently arises to request that we rewrite curl
+in another language.
+
+Some of the most important properties of curl are maintaining the API and ABI
+for libcurl and keeping the behavior of the command line tool. As long as we
+can do that, everything else is up for discussion. To maintain the ABI, we
+probably have to maintain a certain amount of code in C, and to remain rock
+stable, we will never risk anything by rewriting a lot of things in one go.
+That said, we can certainly offer more and more optional backends written in
+other languages, as long as those backends can be plugged in at build-time.
+Backends can be written in any language, but should probably provide APIs
+usable from C to ease integration and transition.
diff --git a/docs/KNOWN_BUGS b/docs/KNOWN_BUGS
deleted file mode 100644
index f65e9f1..0000000
--- a/docs/KNOWN_BUGS
+++ /dev/null
@@ -1,663 +0,0 @@
-                                  _   _ ____  _
-                              ___| | | |  _ \| |
-                             / __| | | | |_) | |
-                            | (__| |_| |  _ <| |___
-                             \___|\___/|_| \_\_____|
-
-                                  Known Bugs
-
-These are problems and bugs known to exist at the time of this release. Feel
-free to join in and help us correct one or more of these. Also be sure to
-check the changelog of the current development status, as one or more of these
-problems may have been fixed or changed somewhat since this was written.
-
- 1. HTTP
-
- 2. TLS
- 2.1 IMAPS connection fails with Rustls error
- 2.2 Access violation sending client cert with Schannel
- 2.5 Client cert handling with Issuer DN differs between backends
- 2.7 Client cert (MTLS) issues with Schannel
- 2.11 Schannel TLS 1.2 handshake bug in old Windows versions
- 2.13 CURLOPT_CERTINFO results in CURLE_OUT_OF_MEMORY with Schannel
- 2.14 mbedTLS and CURLE_AGAIN handling
-
- 3. Email protocols
- 3.1 IMAP SEARCH ALL truncated response
- 3.2 No disconnect command
- 3.4 AUTH PLAIN for SMTP is not working on all servers
- 3.5 APOP authentication fails on POP3
- 3.6 POP3 issue when reading small chunks
-
- 4. Command line
- 4.1 -T /dev/stdin may upload with an incorrect content length
- 4.2 -T - always uploads chunked
-
- 5. Build and portability issues
- 5.1 OS400 port requires deprecated IBM library
- 5.2 curl-config --libs contains private details
- 5.3 LDFLAGS passed too late making libs linked incorrectly
- 5.6 Cygwin: make install installs curl-config.1 twice
- 5.12 flaky CI builds
- 5.13 long paths are not fully supported on Windows
- 5.15 Unicode on Windows
-
- 6. Authentication
- 6.1 Digest auth-int for PUT/POST
- 6.2 MIT Kerberos for Windows build
- 6.3 NTLM in system context uses wrong name
- 6.5 NTLM does not support password with Unicode 'SECTION SIGN' character
- 6.6 libcurl can fail to try alternatives with --proxy-any
- 6.7 Do not clear digest for single realm
- 6.9 SHA-256 digest not supported in Windows SSPI builds
- 6.10 curl never completes Negotiate over HTTP
- 6.11 Negotiate on Windows fails
- 6.13 Negotiate against Hadoop HDFS
-
- 7. FTP
- 7.4 FTP with ACCT
- 7.12 FTPS directory listing hangs on Windows with Schannel
-
- 9. SFTP and SCP
- 9.1 SFTP does not do CURLOPT_POSTQUOTE correct
- 9.3 Remote recursive folder creation with SFTP
- 9.4 libssh blocking and infinite loop problem
- 9.5 Cygwin: "WARNING: UNPROTECTED PRIVATE KEY FILE!"
-
- 10. Connection
- 10.1 --interface with link-scoped IPv6 address
- 10.2 Does not acknowledge getaddrinfo sorting policy
- 10.3 SOCKS-SSPI discards the security context
-
- 11. Internals
- 11.1 gssapi library name + version is missing in curl_version_info()
- 11.2 error buffer not set if connection to multiple addresses fails
- 11.4 HTTP test server 'connection-monitor' problems
- 11.5 Connection information when using TCP Fast Open
- 11.6 test cases sometimes timeout
- 11.7 CURLOPT_CONNECT_TO does not work for HTTPS proxy
- 11.8 WinIDN test failures
- 11.9 setting a disabled option should return CURLE_NOT_BUILT_IN
-
- 12. LDAP
- 12.1 OpenLDAP hangs after returning results
- 12.2 LDAP on Windows does authentication wrong?
- 12.3 LDAP on Windows does not work
- 12.4 LDAPS requests to ActiveDirectory server hang
-
- 13. TCP/IP
- 13.1 telnet code does not handle partial writes properly
- 13.2 Trying local ports fails on Windows
-
- 15. CMake
- 15.1 cmake outputs: no version information available
- 15.6 uses -lpthread instead of Threads::Threads
- 15.7 generated .pc file contains strange entries
- 15.13 CMake build with MIT Kerberos does not work
-
- 16. aws-sigv4
- 16.2 aws-sigv4 does not handle multipart/form-data correctly
-
- 17. HTTP/2
- 17.1 HTTP/2 prior knowledge over proxy
- 17.2 HTTP/2 frames while in the connection pool kill reuse
- 17.3 ENHANCE_YOUR_CALM causes infinite retries
- 17.4 HTTP/2 + TLS spends a lot of time in recv
-
- 18. HTTP/3
- 18.1 connection migration does not work
- 18.2 quiche: QUIC connection is draining
- 18.3 OpenSSL-QUIC problems on google.com
-
- 19. RTSP
- 19.1 Some methods do not support response bodies
-
-==============================================================================
-
-1. HTTP
-
-2. TLS
-
-2.1 IMAPS connection fails with Rustls error
-
- https://github.com/curl/curl/issues/10457
-
-2.2 Access violation sending client cert with Schannel
-
- When using Schannel to do client certs, curl sets PKCS12_NO_PERSIST_KEY to
- avoid leaking the private key into the filesystem. Unfortunately that flag
- instead seems to trigger a crash.
-
- See https://github.com/curl/curl/issues/17626
-
-2.5 Client cert handling with Issuer DN differs between backends
-
- When the specified client certificate does not match any of the
- server-specified DNs, the OpenSSL and GnuTLS backends behave differently.
- The github discussion may contain a solution.
-
- See https://github.com/curl/curl/issues/1411
-
-2.7 Client cert (MTLS) issues with Schannel
-
- See https://github.com/curl/curl/issues/3145
-
-2.11 Schannel TLS 1.2 handshake bug in old Windows versions
-
- In old versions of Windows such as 7 and 8.1 the Schannel TLS 1.2 handshake
- implementation likely has a bug that can rarely cause the key exchange to
- fail, resulting in error SEC_E_BUFFER_TOO_SMALL or SEC_E_MESSAGE_ALTERED.
-
- https://github.com/curl/curl/issues/5488
-
-2.13 CURLOPT_CERTINFO results in CURLE_OUT_OF_MEMORY with Schannel
-
- https://github.com/curl/curl/issues/8741
-
-2.14 mbedTLS and CURLE_AGAIN handling
-
- https://github.com/curl/curl/issues/15801
-
-3. Email protocols
-
-3.1 IMAP SEARCH ALL truncated response
-
- IMAP "SEARCH ALL" truncates output on large boxes. "A quick search of the
- code reveals that pingpong.c contains some truncation code, at line 408, when
- it deems the server response to be too large truncating it to 40 characters"
- https://curl.se/bug/view.cgi?id=1366
-
-3.2 No disconnect command
-
- The disconnect commands (LOGOUT and QUIT) may not be sent by IMAP, POP3 and
- SMTP if a failure occurs during the authentication phase of a connection.
-
-3.4 AUTH PLAIN for SMTP is not working on all servers
-
- Specifying "--login-options AUTH=PLAIN" on the command line does not seem to
- work correctly.
-
- See https://github.com/curl/curl/issues/4080
-
-3.5 APOP authentication fails on POP3
-
- See https://github.com/curl/curl/issues/10073
-
-3.6 POP3 issue when reading small chunks
-
- CURL_DBG_SOCK_RMAX=4 ./runtests.pl -v 982
-
- See https://github.com/curl/curl/issues/12063
-
-4. Command line
-
-4.1 -T /dev/stdin may upload with an incorrect content length
-
- -T stats the path to figure out its size in bytes to use it as Content-Length
- if it is a regular file.
-
- The problem with that is that, on BSDs and some other UNIXes (not Linux),
- open(path) may not give you a file descriptor with a 0 offset from the start
- of the file.
-
- See https://github.com/curl/curl/issues/12177
-
-4.2 -T - always uploads chunked
-
- When the `<` shell operator is used. curl should realise that stdin is a
- regular file in this case, and that it can do a non-chunked upload, like it
- would do if you used -T file.
-
- See https://github.com/curl/curl/issues/12171
-
-5. Build and portability issues
-
-5.1 OS400 port requires deprecated IBM library
-
- curl for OS400 requires QADRT to build, which provides ASCII wrappers for
- libc/POSIX functions in the ILE, but IBM no longer supports or even offers
- this library to download.
-
- See https://github.com/curl/curl/issues/5176
-
-5.2 curl-config --libs contains private details
-
- "curl-config --libs" include details set in LDFLAGS when configure is run
- that might be needed only for building libcurl. Further, curl-config --cflags
- suffers from the same effects with CFLAGS/CPPFLAGS.
-
-5.3 LDFLAGS passed too late making libs linked incorrectly
-
- Compiling latest curl on HP-UX and linking against a custom OpenSSL (which is
- on the default loader/linker path), fails because the generated Makefile has
- LDFLAGS passed on after LIBS.
-
- See https://github.com/curl/curl/issues/14893
-
-5.6 Cygwin: make install installs curl-config.1 twice
-
- https://github.com/curl/curl/issues/8839
-
-5.12 flaky CI builds
-
- We run many CI builds for each commit and PR on github, and especially a
- number of the Windows builds are flaky. This means that we rarely get all CI
- builds go green and complete without errors. This is unfortunate as it makes
- us sometimes miss actual build problems and it is surprising to newcomers to
- the project who (rightfully) do not expect this.
-
- See https://github.com/curl/curl/issues/6972
-
-5.13 long paths are not fully supported on Windows
-
- curl on Windows cannot access long paths (paths longer than 260 characters).
- However, as a workaround, the Windows path prefix \\?\ which disables all
- path interpretation may work to allow curl to access the path. For example:
- \\?\c:\longpath.
-
- See https://github.com/curl/curl/issues/8361
-
-5.15 Unicode on Windows
-
- Passing in a Unicode filename with -o:
-
- https://github.com/curl/curl/issues/11461
-
- Passing in Unicode character with -d:
-
- https://github.com/curl/curl/issues/12231
-
- Windows Unicode builds use homedir in current locale
-
- The Windows Unicode builds of curl use the current locale, but expect Unicode
- UTF-8 encoded paths for internal use such as open, access and stat. The
- user's home directory is retrieved via curl_getenv in the current locale and
- not as UTF-8 encoded Unicode.
-
- See https://github.com/curl/curl/pull/7252 and
-     https://github.com/curl/curl/pull/7281
-
- Cannot handle Unicode arguments in non-Unicode builds on Windows
-
- If a URL or filename cannot be encoded using the user's current codepage then
- it can only be encoded properly in the Unicode character set. Windows uses
- UTF-16 encoding for Unicode and stores it in wide characters, however curl
- and libcurl are not equipped for that at the moment except when built with
- _UNICODE and UNICODE defined. Except for Cygwin, Windows cannot use UTF-8 as
- a locale.
-
-  https://curl.se/bug/?i=345
-  https://curl.se/bug/?i=731
-  https://curl.se/bug/?i=3747
-
- NTLM authentication and Unicode
-
- NTLM authentication involving Unicode username or password only works
- properly if built with UNICODE defined together with the Schannel backend.
- The original problem was mentioned in:
- https://curl.se/mail/lib-2009-10/0024.html
- https://curl.se/bug/view.cgi?id=896
-
- The Schannel version verified to work as mentioned in
- https://curl.se/mail/lib-2012-07/0073.html
-
-6. Authentication
-
-6.1 Digest auth-int for PUT/POST
-
- We do not support auth-int for Digest using PUT or POST
-
-6.2 MIT Kerberos for Windows build
-
- libcurl fails to build with MIT Kerberos for Windows (KfW) due to KfW's
- library header files exporting symbols/macros that should be kept private to
- the KfW library.
-
-6.3 NTLM in system context uses wrong name
-
- NTLM authentication using SSPI (on Windows) when (lib)curl is running in
- "system context" makes it use wrong(?) username - at least when compared to
- what winhttp does. See https://curl.se/bug/view.cgi?id=535
-
-6.5 NTLM does not support password with Unicode 'SECTION SIGN' character
-
- Code point: U+00A7
-
- https://en.wikipedia.org/wiki/Section_sign
- https://github.com/curl/curl/issues/2120
-
-6.6 libcurl can fail to try alternatives with --proxy-any
-
- When connecting via a proxy using --proxy-any, a failure to establish an
- authentication causes libcurl to abort trying other options if the failed
- method has a higher preference than the alternatives. As an example,
- --proxy-any against a proxy which advertise Negotiate and NTLM, but which
- fails to set up Kerberos authentication does not proceed to try
- authentication using NTLM.
-
- https://github.com/curl/curl/issues/876
-
-6.7 Do not clear digest for single realm
-
- https://github.com/curl/curl/issues/3267
-
-6.9 SHA-256 digest not supported in Windows SSPI builds
-
- Windows builds of curl that have SSPI enabled use the native Windows API calls
- to create authentication strings. The call to InitializeSecurityContext fails
- with SEC_E_QOP_NOT_SUPPORTED which causes curl to fail with CURLE_AUTH_ERROR.
-
- Microsoft does not document supported digest algorithms and that SEC_E error
- code is not a documented error for InitializeSecurityContext (digest).
-
- https://github.com/curl/curl/issues/6302
-
-6.10 curl never completes Negotiate over HTTP
-
- Apparently it is not working correctly...?
-
- See https://github.com/curl/curl/issues/5235
-
-6.11 Negotiate on Windows fails
-
- When using --negotiate (or NTLM) with curl on Windows, SSL/TLS handshake
- fails despite having a valid kerberos ticket cached. Works without any issue
- in Unix/Linux.
-
- https://github.com/curl/curl/issues/5881
-
-6.13 Negotiate authentication against Hadoop HDFS
-
- https://github.com/curl/curl/issues/8264
-
-7. FTP
-
-7.4 FTP with ACCT
-
- When doing an operation over FTP that requires the ACCT command (but not when
- logging in), the operation fails since libcurl does not detect this and thus
- fails to issue the correct command: https://curl.se/bug/view.cgi?id=635
-
-7.12 FTPS server compatibility on Windows with Schannel
-
- FTPS is not widely used with the Schannel TLS backend and so there may be
- more bugs compared to other TLS backends such as OpenSSL. In the past users
- have reported hanging and failed connections. It is likely some changes to
- curl since then fixed the issues. None of the reported issues can be
- reproduced any longer.
-
- If you encounter an issue connecting to your server via FTPS with the latest
- curl and Schannel then please search for open issues or file a new issue.
-
-9. SFTP and SCP
-
-9.1 SFTP does not do CURLOPT_POSTQUOTE correct
-
- When libcurl sends CURLOPT_POSTQUOTE commands when connected to an SFTP
- server using the multi interface, the commands are not being sent correctly
- and instead the connection is "cancelled" (the operation is considered done)
- prematurely. There is a half-baked (busy-looping) patch provided in the bug
- report but it cannot be accepted as-is. See
- https://curl.se/bug/view.cgi?id=748
-
-9.3 Remote recursive folder creation with SFTP
-
- On this servers, the curl fails to create directories on the remote server
- even when the CURLOPT_FTP_CREATE_MISSING_DIRS option is set.
-
- See https://github.com/curl/curl/issues/5204
-
-9.4 libssh blocking and infinite loop problem
-
- In the SSH_SFTP_INIT state for libssh, the ssh session working mode is set to
- blocking mode. If the network is suddenly disconnected during sftp
- transmission, curl is stuck, even if curl is configured with a timeout.
-
- https://github.com/curl/curl/issues/8632
-
-9.5 Cygwin: "WARNING: UNPROTECTED PRIVATE KEY FILE!"
-
- Running SCP and SFTP tests on Cygwin makes this warning message appear.
-
- https://github.com/curl/curl/issues/11244
-
-10. Connection
-
-10.1 --interface with link-scoped IPv6 address
-
- When you give the `--interface` option telling curl to use a specific
- interface for its outgoing traffic in combination with an IPv6 address in the
- URL that uses a link-local scope, curl might pick the wrong address from the
- named interface and the subsequent transfer fails.
-
- Example command line:
-
-    curl --interface eth0 'http://[fe80:928d:xxff:fexx:xxxx]/'
-
- The fact that the given IP address is link-scoped should probably be used as
- input to somehow make curl make a better choice for this.
-
- https://github.com/curl/curl/issues/14782
-
-10.2 Does not acknowledge getaddrinfo sorting policy
-
- Even if a user edits /etc/gai.conf to prefer IPv4, curl still prefers and
- tries IPv6 addresses first.
-
- https://github.com/curl/curl/issues/16718
-
-
-10.3 SOCKS-SSPI discards the security context
-
- After a successful SSPI/GSS-API exchange, the function queries and logs the
- authenticated username and reports the supported data-protection level, but
- then immediately deletes the negotiated SSPI security context and frees the
- credentials before returning. The negotiated context is not stored on the
- connection and is therefore never used to protect later SOCKS5 traffic.
-
-11. Internals
-
-11.1 gssapi library name + version is missing in curl_version_info()
-
- The struct needs to be expanded and code added to store this info.
-
- See https://github.com/curl/curl/issues/13492
-
-11.2 error buffer not set if connection to multiple addresses fails
-
- If you ask libcurl to resolve a hostname like example.com to IPv6 addresses
- when you only have IPv4 connectivity. libcurl fails with
- CURLE_COULDNT_CONNECT, but the error buffer set by CURLOPT_ERRORBUFFER
- remains empty. Issue: https://github.com/curl/curl/issues/544
-
-11.4 HTTP test server 'connection-monitor' problems
-
- The 'connection-monitor' feature of the sws HTTP test server does not work
- properly if some tests are run in unexpected order. Like 1509 and then 1525.
-
- See https://github.com/curl/curl/issues/868
-
-11.5 Connection information when using TCP Fast Open
-
- CURLINFO_LOCAL_PORT (and possibly a few other) fails when TCP Fast Open is
- enabled.
-
- See https://github.com/curl/curl/issues/1332 and
- https://github.com/curl/curl/issues/4296
-
-11.6 test cases sometimes timeout
-
- Occasionally, one of the tests timeouts. Inexplicably.
-
- See https://github.com/curl/curl/issues/13350
-
-11.7 CURLOPT_CONNECT_TO does not work for HTTPS proxy
-
- It is unclear if the same option should even cover the proxy connection or if
- if requires a separate option.
-
- See https://github.com/curl/curl/issues/14481
-
-11.8 WinIDN test failures
-
- Test 165 disabled when built with WinIDN.
-
-11.9 setting a disabled option should return CURLE_NOT_BUILT_IN
-
- When curl has been built with specific features or protocols disabled,
- setting such options with curl_easy_setopt() should rather return
- CURLE_NOT_BUILT_IN instead of CURLE_UNKNOWN_OPTION to signal the difference
- to the application
-
- See https://github.com/curl/curl/issues/15472
-
-12. LDAP
-
-12.1 OpenLDAP hangs after returning results
-
- By configuration defaults, OpenLDAP automatically chase referrals on
- secondary socket descriptors. The OpenLDAP backend is asynchronous and thus
- should monitor all socket descriptors involved. Currently, these secondary
- descriptors are not monitored, causing OpenLDAP library to never receive
- data from them.
-
- As a temporary workaround, disable referrals chasing by configuration.
-
- The fix is not easy: proper automatic referrals chasing requires a
- synchronous bind callback and monitoring an arbitrary number of socket
- descriptors for a single easy handle (currently limited to 5).
-
- Generic LDAP is synchronous: OK.
-
- See https://github.com/curl/curl/issues/622 and
-     https://curl.se/mail/lib-2016-01/0101.html
-
-12.2 LDAP on Windows does authentication wrong?
-
- https://github.com/curl/curl/issues/3116
-
-12.3 LDAP on Windows does not work
-
- A simple curl command line getting "ldap://ldap.forumsys.com" returns an
- error that says "no memory" !
-
- https://github.com/curl/curl/issues/4261
-
-12.4 LDAPS requests to ActiveDirectory server hang
-
- https://github.com/curl/curl/issues/9580
-
-13. TCP/IP
-
-13.1 telnet code does not handle partial writes properly
-
- It probably does not happen too easily because of how slow and infrequent
- sends are normally performed.
-
-13.2 Trying local ports fails on Windows
-
- This makes '--local-port [range]' to not work since curl cannot properly
- detect if a port is already in use, so it tries the first port, uses that and
- then subsequently fails anyway if that was actually in use.
-
- https://github.com/curl/curl/issues/8112
-
-15. CMake
-
-15.1 cmake outputs: no version information available
-
- Something in the SONAME generation seems to be wrong in the cmake build.
-
- https://github.com/curl/curl/issues/11158
-
-15.6 uses -lpthread instead of Threads::Threads
-
- See https://github.com/curl/curl/issues/6166
-
-15.7 generated .pc file contains strange entries
-
- The Libs.private field of the generated .pc file contains -lgcc -lgcc_s -lc
- -lgcc -lgcc_s
-
- See https://github.com/curl/curl/issues/6167
-
-15.13 CMake build with MIT Kerberos does not work
-
- Minimum CMake version was bumped in curl 7.71.0 (#5358) Since CMake 3.2
- try_compile started respecting the CMAKE_EXE_FLAGS. The code dealing with
- MIT Kerberos detection sets few variables to potentially weird mix of space,
- and ;-separated flags. It had to blow up at some point. All the CMake checks
- that involve compilation are doomed from that point, the configured tree
- cannot be built.
-
- https://github.com/curl/curl/issues/6904
-
-16. aws-sigv4
-
-16.2 aws-sigv4 does not handle multipart/form-data correctly
-
- https://github.com/curl/curl/issues/13351
-
-17. HTTP/2
-
-17.1 HTTP/2 prior knowledge over proxy
-
- https://github.com/curl/curl/issues/12641
-
-17.2 HTTP/2 frames while in the connection pool kill reuse
-
- If the server sends HTTP/2 frames (like for example an HTTP/2 PING frame) to
- curl while the connection is held in curl's connection pool, the socket is
- found readable when considered for reuse and that makes curl think it is dead
- and then it is closed and a new connection gets created instead.
-
- This is *best* fixed by adding monitoring to connections while they are kept
- in the pool so that pings can be responded to appropriately.
-
-17.3 ENHANCE_YOUR_CALM causes infinite retries
-
- Infinite retries with 2 parallel requests on one connection receiving GOAWAY
- with ENHANCE_YOUR_CALM error code.
-
- See https://github.com/curl/curl/issues/5119
-
-17.4 HTTP/2 + TLS spends a lot of time in recv
-
- It has been observed that by making the speed limit less accurate we could
- improve this performance. (by reverting
- https://github.com/curl/curl/commit/db5c9f4f9e0779b49624752b135281a0717b277b)
- Can we find a golden middle ground?
-
- See https://curl.se/mail/lib-2024-05/0026.html and
- https://github.com/curl/curl/issues/13416
-
-18. HTTP/3
-
-18.1 connection migration does not work
-
- https://github.com/curl/curl/issues/7695
-
-18.2 quiche: QUIC connection is draining
-
- The transfer ends with error "QUIC connection is draining".
-
- https://github.com/curl/curl/issues/12037
-
-18.3 OpenSSL-QUIC problems on google.com
-
- With some specific Google servers, and seemingly timing dependent, the
- OpenSSL-QUIC backend seems to not actually send off the HTTP/3 request which
- makes the QUIC connection just sit idle until killed by the server. curl or
- OpenSSL bug?
-
- https://github.com/curl/curl/issues/18336
-
-19. RTSP
-
-19.1 Some methods do not support response bodies
-
- The RTSP implementation is written to assume that a number of RTSP methods
- always get responses without bodies, even though there seems to be no
- indication in the RFC that this is always the case.
-
- https://github.com/curl/curl/issues/12414
diff --git a/docs/KNOWN_BUGS.md b/docs/KNOWN_BUGS.md
new file mode 100644
index 0000000..1f5cebe
--- /dev/null
+++ b/docs/KNOWN_BUGS.md
@@ -0,0 +1,546 @@
+<!--
+Copyright (C) Daniel Stenberg, <daniel@haxx.se>, et al.
+
+SPDX-License-Identifier: curl
+-->
+
+# Known bugs intro
+
+These are problems and bugs known to exist at the time of this release. Feel
+free to join in and help us correct one or more of these. Also be sure to
+check the changelog of the current development status, as one or more of these
+problems may have been fixed or changed somewhat since this was written.
+
+# TLS
+
+## IMAPS connection fails with Rustls error
+
+[curl issue 10457](https://github.com/curl/curl/issues/10457)
+
+## Access violation sending client cert with Schannel
+
+When using Schannel to do client certs, curl sets `PKCS12_NO_PERSIST_KEY` to
+avoid leaking the private key into the filesystem. Unfortunately that flag
+instead seems to trigger a crash.
+
+See [curl issue 17626](https://github.com/curl/curl/issues/17626)
+
+## Client cert handling with Issuer `DN` differs between backends
+
+When the specified client certificate does not match any of the
+server-specified `DN` fields, the OpenSSL and GnuTLS backends behave
+differently. The GitHub discussion may contain a solution.
+
+See [curl issue 1411](https://github.com/curl/curl/issues/1411)
+
+## Client cert (MTLS) issues with Schannel
+
+See [curl issue 3145](https://github.com/curl/curl/issues/3145)
+
+## Schannel TLS 1.2 handshake bug in old Windows versions
+
+In old versions of Windows such as 7 and 8.1 the Schannel TLS 1.2 handshake
+implementation likely has a bug that can rarely cause the key exchange to
+fail, resulting in error SEC_E_BUFFER_TOO_SMALL or SEC_E_MESSAGE_ALTERED.
+
+[curl issue 5488](https://github.com/curl/curl/issues/5488)
+
+## `CURLOPT_CERTINFO` results in `CURLE_OUT_OF_MEMORY` with Schannel
+
+[curl issue 8741](https://github.com/curl/curl/issues/8741)
+
+## mbedTLS and CURLE_AGAIN handling
+
+[curl issue 15801](https://github.com/curl/curl/issues/15801)
+
+# Email protocols
+
+## IMAP `SEARCH ALL` truncated response
+
+IMAP `SEARCH ALL` truncates output on large boxes. "A quick search of the code
+reveals that `pingpong.c` contains some truncation code, at line 408, when it
+deems the server response to be too large truncating it to 40 characters"
+
+https://curl.se/bug/view.cgi?id=1366
+
+## No disconnect command
+
+The disconnect commands (`LOGOUT` and `QUIT`) may not be sent by IMAP, POP3
+and SMTP if a failure occurs during the authentication phase of a connection.
+
+## `AUTH PLAIN` for SMTP is not working on all servers
+
+Specifying `--login-options AUTH=PLAIN` on the command line does not seem to
+work correctly.
+
+See [curl issue 4080](https://github.com/curl/curl/issues/4080)
+
+## `APOP` authentication fails on POP3
+
+See [curl issue 10073](https://github.com/curl/curl/issues/10073)
+
+## POP3 issue when reading small chunks
+
+    CURL_DBG_SOCK_RMAX=4 ./runtests.pl -v 982
+
+See [curl issue 12063](https://github.com/curl/curl/issues/12063)
+
+# Command line
+
+## `-T /dev/stdin` may upload with an incorrect content length
+
+`-T` stats the path to figure out its size in bytes to use it as
+`Content-Length` if it is a regular file.
+
+The problem with that is that on BSD and some other UNIX systems (not Linux),
+open(path) may not give you a file descriptor with a 0 offset from the start
+of the file.
+
+See [curl issue 12177](https://github.com/curl/curl/issues/12177)
+
+## `-T -` always uploads chunked
+
+When the `<` shell operator is used, curl should realize that stdin is a
+regular file in this case, and that it can do a non-chunked upload, like it
+would do if you used `-T file`.
+
+See [curl issue 12171](https://github.com/curl/curl/issues/12171)
+
+# Build and portability issues
+
+## OS400 port requires deprecated IBM library
+
+curl for OS400 requires `QADRT` to build, which provides ASCII wrappers for
+libc/POSIX functions in the ILE, but IBM no longer supports or even offers
+this library to download.
+
+See [curl issue 5176](https://github.com/curl/curl/issues/5176)
+
+## `curl-config --libs` contains private details
+
+`curl-config --libs` include details set in `LDFLAGS` when configure is run
+that might be needed only for building libcurl. Further, `curl-config
+--cflags` suffers from the same effects with `CFLAGS`/`CPPFLAGS`.
+
+## `LDFLAGS` passed too late making libs linked incorrectly
+
+Compiling latest curl on HP-UX and linking against a custom OpenSSL (which is
+on the default loader/linker path), fails because the generated Makefile has
+`LDFLAGS` passed on after `LIBS`.
+
+See [curl issue 14893](https://github.com/curl/curl/issues/14893)
+
+## Cygwin: make install installs curl-config.1 twice
+
+[curl issue 8839](https://github.com/curl/curl/issues/8839)
+
+## flaky CI builds
+
+We run many CI builds for each commit and PR on GitHub, and especially a
+number of the Windows builds are flaky. This means that we rarely get all CI
+builds go green and complete without errors. This is unfortunate as it makes
+us sometimes miss actual build problems and it is surprising to newcomers to
+the project who (rightfully) do not expect this.
+
+See [curl issue 6972](https://github.com/curl/curl/issues/6972)
+
+## long paths are not fully supported on Windows
+
+curl on Windows cannot access long paths (paths longer than 260 characters).
+However, as a workaround, the Windows path prefix `\\?\` which disables all
+path interpretation may work to allow curl to access the path. For example:
+`\\?\c:\longpath`.
+
+See [curl issue 8361](https://github.com/curl/curl/issues/8361)
+
+## Unicode on Windows
+
+Passing in a Unicode filename with -o:
+
+[curl issue 11461](https://github.com/curl/curl/issues/11461)
+
+Passing in Unicode character with -d:
+
+[curl issue 12231](https://github.com/curl/curl/issues/12231)
+
+Windows Unicode builds use the home directory in the current locale.
+
+The Windows Unicode builds of curl use the current locale, but expect Unicode
+UTF-8 encoded paths for internal use such as open, access and stat. The user's
+home directory is retrieved via curl_getenv in the current locale and not as
+UTF-8 encoded Unicode.
+
+See [curl pull request 7252](https://github.com/curl/curl/pull/7252) and
+[curl pull request 7281](https://github.com/curl/curl/pull/7281)
+
+Cannot handle Unicode arguments in non-Unicode builds on Windows
+
+If a URL or filename cannot be encoded using the user's current code page then
+it can only be encoded properly in the Unicode character set. Windows uses
+UTF-16 encoding for Unicode and stores it in wide characters, however curl and
+libcurl are not equipped for that at the moment except when built with
+`_UNICODE` and `UNICODE` defined. Except for Cygwin, Windows cannot use UTF-8
+as a locale.
+
+https://curl.se/bug/?i=345, https://curl.se/bug/?i=731 and
+https://curl.se/bug/?i=3747
+
+NTLM authentication and Unicode
+
+NTLM authentication involving Unicode username or password only works properly
+if built with UNICODE defined together with the Schannel backend. The original
+problem was mentioned in: https://curl.se/mail/lib-2009-10/0024.html and
+https://curl.se/bug/view.cgi?id=896
+
+The Schannel version was verified to work, as mentioned in
+https://curl.se/mail/lib-2012-07/0073.html
+
+# Authentication
+
+## Digest `auth-int` for PUT/POST
+
+We do not support auth-int for Digest using PUT or POST
+
+## MIT Kerberos for Windows build
+
+libcurl fails to build with MIT Kerberos for Windows (`KfW`) due to its
+library header files exporting symbols/macros that should be kept private to
+the library.
+
+## NTLM in system context uses wrong name
+
+NTLM authentication using SSPI (on Windows) when (lib)curl is running in
+"system context" makes it use wrong(?) username - at least when compared to
+what `winhttp` does. See https://curl.se/bug/view.cgi?id=535
+
+## NTLM does not support password with Unicode 'SECTION SIGN' character
+
+Code point: U+00A7, see https://en.wikipedia.org/wiki/Section_sign
+
+[curl issue 2120](https://github.com/curl/curl/issues/2120)
+
+## libcurl can fail to try alternatives with `--proxy-any`
+
+When connecting via a proxy using `--proxy-any`, a failure to establish an
+authentication causes libcurl to abort trying other options if the failed
+method has a higher preference than the alternatives. As an example,
+`--proxy-any` against a proxy which advertises Negotiate and NTLM, but which
+fails to set up Kerberos authentication does not proceed to try authentication
+using NTLM.
+
+[curl issue 876](https://github.com/curl/curl/issues/876)
+
+## Do not clear digest for single realm
+
+[curl issue 3267](https://github.com/curl/curl/issues/3267)
+
+## SHA-256 digest not supported in Windows SSPI builds
+
+Windows builds of curl that have SSPI enabled use the native Windows API calls
+to create authentication strings. The call to `InitializeSecurityContext` fails
+with `SEC_E_QOP_NOT_SUPPORTED` which causes curl to fail with
+`CURLE_AUTH_ERROR`.
+
+Microsoft does not document supported digest algorithms and that `SEC_E` error
+code is not a documented error for `InitializeSecurityContext` (digest).
+
+[curl issue 6302](https://github.com/curl/curl/issues/6302)
+
+## curl never completes Negotiate over HTTP
+
+Apparently it is not working correctly...?
+
+See [curl issue 5235](https://github.com/curl/curl/issues/5235)
+
+## Negotiate on Windows fails
+
+When using `--negotiate` (or NTLM) with curl on Windows, SSL/TLS handshake
+fails despite having a valid kerberos ticket cached. Works without any issue
+in Unix/Linux.
+
+[curl issue 5881](https://github.com/curl/curl/issues/5881)
+
+## Negotiate authentication against Hadoop
+
+[curl issue 8264](https://github.com/curl/curl/issues/8264)
+
+# FTP
+
+## FTP with ACCT
+
+When doing an operation over FTP that requires the `ACCT` command (but not when
+logging in), the operation fails since libcurl does not detect this and thus
+fails to issue the correct command: https://curl.se/bug/view.cgi?id=635
+
+## FTPS server compatibility on Windows with Schannel
+
+FTPS is not widely used with the Schannel TLS backend and so there may be more
+bugs compared to other TLS backends such as OpenSSL. In the past users have
+reported hanging and failed connections. It is likely some changes to curl
+since then fixed the issues. None of the reported issues can be reproduced any
+longer.
+
+If you encounter an issue connecting to your server via FTPS with the latest
+curl and Schannel then please search for open issues or file a new issue.
+
+# SFTP and SCP
+
+## SFTP does not do `CURLOPT_POSTQUOTE` correctly
+
+When libcurl sends `CURLOPT_POSTQUOTE` commands when connected to an SFTP
+server using the multi interface, the commands are not being sent correctly
+and instead the connection is canceled (the operation is considered done)
+prematurely. There is a half-baked (busy-looping) patch provided in the bug
+report but it cannot be accepted as-is. See
+https://curl.se/bug/view.cgi?id=748
+
+## Remote recursive folder creation with SFTP
+
+On some servers, curl fails to create directories on the remote server even
+when the `CURLOPT_FTP_CREATE_MISSING_DIRS` option is set.
+
+See [curl issue 5204](https://github.com/curl/curl/issues/5204)
+
+## libssh blocking and infinite loop problem
+
+In the `SSH_SFTP_INIT` state for libssh, the ssh session working mode is set
+to blocking mode. If the network is suddenly disconnected during sftp
+transmission, curl is stuck, even if curl is configured with a timeout.
+
+[curl issue 8632](https://github.com/curl/curl/issues/8632)
+
+## Cygwin: "WARNING: UNPROTECTED PRIVATE KEY FILE!"
+
+Running SCP and SFTP tests on Cygwin makes this warning message appear.
+
+[curl issue 11244](https://github.com/curl/curl/issues/11244)
+
+# Connection
+
+## `--interface` with link-scoped IPv6 address
+
+When you give the `--interface` option telling curl to use a specific
+interface for its outgoing traffic in combination with an IPv6 address in the
+URL that uses a link-local scope, curl might pick the wrong address from the
+named interface and the subsequent transfer fails.
+
+Example command line:
+
+    curl --interface eth0 'http://[fe80:928d:xxff:fexx:xxxx]/'
+
+The fact that the given IP address is link-scoped should probably be used as
+input to somehow make curl make a better choice for this.
+
+[curl issue 14782](https://github.com/curl/curl/issues/14782)
+
+## Does not acknowledge getaddrinfo sorting policy
+
+Even if a user edits `/etc/gai.conf` to prefer IPv4, curl still prefers and
+tries IPv6 addresses first.
+
+[curl issue 16718](https://github.com/curl/curl/issues/16718)
+
+## SOCKS-SSPI discards the security context
+
+After a successful SSPI/GSS-API exchange, the function queries and logs the
+authenticated username and reports the supported data-protection level, but
+then immediately deletes the negotiated SSPI security context and frees the
+credentials before returning. The negotiated context is not stored on the
+connection and is therefore never used to protect later SOCKS5 traffic.
+
+# Internals
+
+## GSSAPI library name + version is missing in `curl_version_info()`
+
+The struct needs to be expanded and code added to store this info.
+
+See [curl issue 13492](https://github.com/curl/curl/issues/13492)
+
+## error buffer not set if connection to multiple addresses fails
+
+If you ask libcurl to resolve a hostname like example.com to IPv6 addresses
+when you only have IPv4 connectivity, libcurl fails with
+`CURLE_COULDNT_CONNECT`, but the error buffer set by `CURLOPT_ERRORBUFFER`
+remains empty.
+
+See [curl issue 544](https://github.com/curl/curl/issues/544)
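+
+Until this is fixed, an application can pre-clear the buffer and fall back to
+`curl_easy_strerror()` when it stays empty. A minimal sketch (URL made up,
+IPv6-only resolution used to provoke the scenario described above):
+
+    #include <stdio.h>
+    #include <curl/curl.h>
+
+    int main(void)
+    {
+      char errbuf[CURL_ERROR_SIZE] = "";
+      CURL *curl = curl_easy_init();
+      if(curl) {
+        CURLcode res;
+        curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
+        curl_easy_setopt(curl, CURLOPT_IPRESOLVE, (long)CURL_IPRESOLVE_V6);
+        curl_easy_setopt(curl, CURLOPT_ERRORBUFFER, errbuf);
+        res = curl_easy_perform(curl);
+        if(res != CURLE_OK)
+          /* the buffer may stay empty for this failure, so fall back */
+          fprintf(stderr, "failed: %s\n",
+                  errbuf[0] ? errbuf : curl_easy_strerror(res));
+        curl_easy_cleanup(curl);
+      }
+      return 0;
+    }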
+
+## HTTP test server 'connection-monitor' problems
+
+The `connection-monitor` feature of the HTTP test server does not work
+properly if some tests are run in an unexpected order, like 1509 and then
+1525.
+
+See [curl issue 868](https://github.com/curl/curl/issues/868)
+
+## Connection information when using TCP Fast Open
+
+`CURLINFO_LOCAL_PORT` (and possibly a few others) fails when TCP Fast Open is
+enabled.
+
+See [curl issue 1332](https://github.com/curl/curl/issues/1332) and
+[curl issue 4296](https://github.com/curl/curl/issues/4296)
+
+## test cases sometimes timeout
+
+Occasionally, one of the tests times out, inexplicably.
+
+See [curl issue 13350](https://github.com/curl/curl/issues/13350)
+
+## `CURLOPT_CONNECT_TO` does not work for HTTPS proxy
+
+It is unclear if the same option should even cover the proxy connection or if
+it requires a separate option.
+
+See [curl issue 14481](https://github.com/curl/curl/issues/14481)
+
+## WinIDN test failures
+
+Test 165 is disabled when curl is built with WinIDN.
+
+## setting a disabled option should return `CURLE_NOT_BUILT_IN`
+
+When curl has been built with specific features or protocols disabled,
+setting such options with `curl_easy_setopt()` should return
+`CURLE_NOT_BUILT_IN` rather than `CURLE_UNKNOWN_OPTION`, to signal the
+difference to the application.
+
+See [curl issue 15472](https://github.com/curl/curl/issues/15472)
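+
+A sketch of how an application could tell the two cases apart if this were
+changed; `CURLOPT_COOKIEFILE` is only used here as an example of an option
+whose backing feature can be disabled at build time:
+
+    #include <stdio.h>
+    #include <curl/curl.h>
+
+    int main(void)
+    {
+      CURL *curl = curl_easy_init();
+      if(curl) {
+        CURLcode rc = curl_easy_setopt(curl, CURLOPT_COOKIEFILE, "");
+        if(rc == CURLE_UNKNOWN_OPTION)
+          fprintf(stderr, "option not known to this libcurl at all\n");
+        else if(rc == CURLE_NOT_BUILT_IN)
+          fprintf(stderr, "option known, but the feature is disabled\n");
+        curl_easy_cleanup(curl);
+      }
+      return 0;
+    }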
+
+# LDAP
+
+## OpenLDAP hangs after returning results
+
+By default, OpenLDAP automatically chases referrals on secondary socket
+descriptors. The OpenLDAP backend is asynchronous and thus should monitor all
+socket descriptors involved. Currently, these secondary descriptors are not
+monitored, causing the OpenLDAP library to never receive data from them.
+
+As a temporary workaround, disable referral chasing by configuration.
+
+The fix is not easy: proper automatic referrals chasing requires a synchronous
+bind callback and monitoring an arbitrary number of socket descriptors for a
+single easy handle (currently limited to 5).
+
+Generic (non-OpenLDAP) LDAP is synchronous and therefore not affected.
+
+See [curl issue 622](https://github.com/curl/curl/issues/622) and
+https://curl.se/mail/lib-2016-01/0101.html
+
+## LDAP on Windows does authentication wrong?
+
+[curl issue 3116](https://github.com/curl/curl/issues/3116)
+
+## LDAP on Windows does not work
+
+A simple curl command line getting `ldap://ldap.forumsys.com` returns an error
+that says `no memory`.
+
+[curl issue 4261](https://github.com/curl/curl/issues/4261)
+
+## LDAPS requests to Active Directory server hang
+
+[curl issue 9580](https://github.com/curl/curl/issues/9580)
+
+# TCP/IP
+
+## telnet code does not handle partial writes properly
+
+It probably does not happen too easily because of how slowly and infrequently
+sends are normally performed.
+
+## Trying local ports fails on Windows
+
+This makes `--local-port [range]` not work, since curl cannot properly detect
+if a port is already in use: it tries the first port, uses that, and then
+subsequently fails anyway if that port was actually in use.
+
+[curl issue 8112](https://github.com/curl/curl/issues/8112)
+
+# CMake
+
+## cmake outputs: no version information available
+
+Something in the SONAME generation seems to be wrong in the CMake build.
+
+[curl issue 11158](https://github.com/curl/curl/issues/11158)
+
+## uses `-lpthread` instead of `Threads::Threads`
+
+See [curl issue 6166](https://github.com/curl/curl/issues/6166)
+
+## generated `.pc` file contains strange entries
+
+The `Libs.private` field of the generated `.pc` file contains `-lgcc -lgcc_s
+-lc -lgcc -lgcc_s`.
+
+See [curl issue 6167](https://github.com/curl/curl/issues/6167)
+
+## CMake build with MIT Kerberos does not work
+
+The minimum CMake version was bumped in curl 7.71.0 (#5358). Since CMake 3.2,
+`try_compile` respects the `CMAKE_EXE_FLAGS`. The code dealing with MIT
+Kerberos detection sets a few variables to a potentially weird mix of space-
+and semicolon-separated flags, so it had to blow up at some point. From that
+point on, all CMake checks that involve compilation fail and the configured
+tree cannot be built.
+
+[curl issue 6904](https://github.com/curl/curl/issues/6904)
+
+# Authentication
+
+## `--aws-sigv4` does not handle multipart/form-data correctly
+
+[curl issue 13351](https://github.com/curl/curl/issues/13351)
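+
+A minimal sketch of the combination the issue concerns (URL, credentials and
+the provider/region/service string are made up); per the report the request is
+not signed correctly when the body is multipart/form-data:
+
+    #include <curl/curl.h>
+
+    int main(void)
+    {
+      CURL *curl = curl_easy_init();
+      if(curl) {
+        curl_mime *form = curl_mime_init(curl);
+        curl_mimepart *part = curl_mime_addpart(form);
+        curl_mime_name(part, "field");
+        curl_mime_data(part, "value", CURL_ZERO_TERMINATED);
+
+        curl_easy_setopt(curl, CURLOPT_URL, "https://service.example.com/upload");
+        curl_easy_setopt(curl, CURLOPT_MIMEPOST, form);
+        /* SigV4 signing; provider/region/service and credentials are made up */
+        curl_easy_setopt(curl, CURLOPT_AWS_SIGV4, "aws:amz:us-east-1:example");
+        curl_easy_setopt(curl, CURLOPT_USERPWD, "ACCESSKEY:SECRETKEY");
+        curl_easy_perform(curl);
+
+        curl_mime_free(form);
+        curl_easy_cleanup(curl);
+      }
+      return 0;
+    }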
+
+# HTTP/2
+
+## HTTP/2 prior knowledge over proxy
+
+[curl issue 12641](https://github.com/curl/curl/issues/12641)
+
+## HTTP/2 frames while in the connection pool kill reuse
+
+If the server sends HTTP/2 frames (like for example an HTTP/2 PING frame) to
+curl while the connection is held in curl's connection pool, the socket is
+found readable when considered for reuse and that makes curl think it is dead
+and then it is closed and a new connection gets created instead.
+
+This is *best* fixed by adding monitoring to connections while they are kept
+in the pool so that pings can be responded to appropriately.
+
+## `ENHANCE_YOUR_CALM` causes infinite retries
+
+Infinite retries with 2 parallel requests on one connection receiving `GOAWAY`
+with `ENHANCE_YOUR_CALM` error code.
+
+See [curl issue 5119](https://github.com/curl/curl/issues/5119)
+
+## HTTP/2 + TLS spends a lot of time in recv
+
+It has been observed that making the speed limit check less accurate (by
+reverting
+[db5c9f4f9e0779](https://github.com/curl/curl/commit/db5c9f4f9e0779b49624752b135281a0717b277b))
+improves this performance. Can we find a golden middle ground?
+
+See https://curl.se/mail/lib-2024-05/0026.html and
+[curl issue 13416](https://github.com/curl/curl/issues/13416)
+
+# HTTP/3
+
+## connection migration does not work
+
+[curl issue 7695](https://github.com/curl/curl/issues/7695)
+
+## quiche: QUIC connection is draining
+
+The transfer ends with error "QUIC connection is draining".
+
+[curl issue 12037](https://github.com/curl/curl/issues/12037)
+
+# RTSP
+
+## Some methods do not support response bodies
+
+The RTSP implementation is written to assume that a number of RTSP methods
+always get responses without bodies, even though there seems to be no
+indication in the RFC that this is always the case.
+
+[curl issue 12414](https://github.com/curl/curl/issues/12414)
diff --git a/docs/Makefile.am b/docs/Makefile.am
index da5812a..0b619b6 100644
--- a/docs/Makefile.am
+++ b/docs/Makefile.am
@@ -93,7 +93,7 @@
  EARLY-RELEASE.md                               \
  ECH.md                                         \
  EXPERIMENTAL.md                                \
- FAQ                                            \
+ FAQ.md                                         \
  FEATURES.md                                    \
  GOVERNANCE.md                                  \
  HELP-US.md                                     \
@@ -108,7 +108,7 @@
  INSTALL.md                                     \
  INTERNALS.md                                   \
  IPFS.md                                        \
- KNOWN_BUGS                                     \
+ KNOWN_BUGS.md                                  \
  KNOWN_RISKS.md                                 \
  MAIL-ETIQUETTE.md                              \
  MANUAL.md                                      \
@@ -121,7 +121,8 @@
  SPONSORS.md                                    \
  SSL-PROBLEMS.md                                \
  SSLCERTS.md                                    \
- THANKS TODO                                    \
+ THANKS                                         \
+ TODO.md                                        \
  TheArtOfHttpScripting.md                       \
  URL-SYNTAX.md                                  \
  VERSIONS.md                                    \
diff --git a/docs/TODO b/docs/TODO
deleted file mode 100644
index 22ca27f..0000000
--- a/docs/TODO
+++ /dev/null
@@ -1,1301 +0,0 @@
-                                  _   _ ____  _
-                              ___| | | |  _ \| |
-                             / __| | | | |_) | |
-                            | (__| |_| |  _ <| |___
-                             \___|\___/|_| \_\_____|
-
-                Things that could be nice to do in the future
-
- Things to do in project curl. Please tell us what you think, contribute and
- send us patches that improve things.
-
- Be aware that these are things that we could do, or have once been considered
- things we could do. If you want to work on any of these areas, please
- consider bringing it up for discussions first on the mailing list so that we
- all agree it is still a good idea for the project.
-
- All bugs documented in the KNOWN_BUGS document are subject for fixing.
-
- 1. libcurl
- 1.1 TFO support on Windows
- 1.2 Consult %APPDATA% also for .netrc
- 1.3 struct lifreq
- 1.4 alt-svc sharing
- 1.5 get rid of PATH_MAX
- 1.6 thread-safe sharing
- 1.10 auto-detect proxy
- 1.12 updated DNS server while running
- 1.13 c-ares and CURLOPT_OPENSOCKETFUNCTION
- 1.15 Monitor connections in the connection pool
- 1.16 Try to URL encode given URL
- 1.17 Add support for IRIs
- 1.18 try next proxy if one does not work
- 1.19 provide timing info for each redirect
- 1.20 SRV and URI DNS records
- 1.22 CURLINFO_PAUSE_STATE
- 1.25 Expose tried IP addresses that failed
- 1.30 config file parsing
- 1.31 erase secrets from heap/stack after use
- 1.32 add asynch getaddrinfo support
- 1.33 make DoH inherit more transfer properties
-
- 2. libcurl - multi interface
- 2.1 More non-blocking
- 2.2 Better support for same name resolves
- 2.3 Non-blocking curl_multi_remove_handle()
- 2.4 Split connect and authentication process
- 2.5 Edge-triggered sockets should work
- 2.6 multi upkeep
- 2.7 Virtual external sockets
- 2.8 dynamically decide to use socketpair
-
- 3. Documentation
- 3.1 Improve documentation about fork safety
-
- 4. FTP
- 4.1 HOST
- 4.2 A fixed directory listing format
- 4.6 GSSAPI via Windows SSPI
- 4.7 STAT for LIST without data connection
- 4.8 Passive transfer could try other IP addresses
-
- 5. HTTP
- 5.1 Provide the error body from a CONNECT response
- 5.2 Obey Retry-After in redirects
- 5.3 Rearrange request header order
- 5.4 Allow SAN names in HTTP/2 server push
- 5.5 auth= in URLs
- 5.6 alt-svc should fallback if alt-svc does not work
- 5.7 Require HTTP version X or higher
-
- 6. TELNET
- 6.1 ditch stdin
- 6.2 ditch telnet-specific select
- 6.3 feature negotiation debug data
- 6.4 exit immediately upon connection if stdin is /dev/null
-
- 7. SMTP
- 7.1 Passing NOTIFY option to CURLOPT_MAIL_RCPT
- 7.2 Enhanced capability support
- 7.3 Add CURLOPT_MAIL_CLIENT option
-
- 8. POP3
- 8.2 Enhanced capability support
-
- 9. IMAP
- 9.1 Enhanced capability support
-
- 10. LDAP
- 10.1 SASL based authentication mechanisms
- 10.2 CURLOPT_SSL_CTX_FUNCTION for LDAPS
- 10.3 Paged searches on LDAP server
- 10.4 Certificate-Based Authentication
-
- 11. SMB
- 11.1 File listing support
- 11.2 Honor file timestamps
- 11.3 Use NTLMv2
- 11.4 Create remote directories
-
- 12. FILE
- 12.1 Directory listing on non-POSIX
-
- 13. TLS
- 13.1 TLS-PSK with OpenSSL
- 13.2 TLS channel binding
- 13.3 Defeat TLS fingerprinting
- 13.4 Consider OCSP stapling by default
- 13.6 Provide callback for cert verification
- 13.7 Less memory massaging with Schannel
- 13.8 Support DANE
- 13.9 TLS record padding
- 13.10 Support Authority Information Access certificate extension (AIA)
- 13.11 Some TLS options are not offered for HTTPS proxies
- 13.13 Make sure we forbid TLS 1.3 post-handshake authentication
- 13.14 Support the clienthello extension
- 13.16 Share the CA cache
- 13.17 Add missing features to TLS backends
-
- 14. Proxy
- 14.1 Retry SOCKS handshake on address type not supported
-
- 15. Schannel
- 15.1 Extend support for client certificate authentication
- 15.2 Extend support for the --ciphers option
- 15.4 Add option to allow abrupt server closure
-
- 16. SASL
- 16.1 Other authentication mechanisms
- 16.2 Add QOP support to GSSAPI authentication
-
- 17. SSH protocols
- 17.1 Multiplexing
- 17.2 Handle growing SFTP files
- 17.3 Read keys from ~/.ssh/id_ecdsa, id_ed25519
- 17.4 Support CURLOPT_PREQUOTE
- 17.5 SSH over HTTPS proxy with more backends
- 17.6 SFTP with SCP://
-
- 18. Command line tool
- 18.1 sync
- 18.2 glob posts
- 18.4 --proxycommand
- 18.5 UTF-8 filenames in Content-Disposition
- 18.6 Option to make -Z merge lined based outputs on stdout
- 18.7 specify which response codes that make -f/--fail return error
- 18.9 Choose the name of file in braces for complex URLs
- 18.10 improve how curl works in a Windows console window
- 18.11 Windows: set attribute 'archive' for completed downloads
- 18.12 keep running, read instructions from pipe/socket
- 18.13 Acknowledge Ratelimit headers
- 18.14 --dry-run
- 18.15 --retry should resume
- 18.17 consider filename from the redirected URL with -O ?
- 18.18 retry on network is unreachable
- 18.20 hostname sections in config files
- 18.21 retry on the redirected-to URL
- 18.23 Set the modification date on an uploaded file
- 18.24 Use multiple parallel transfers for a single download
- 18.25 Prevent terminal injection when writing to terminal
- 18.26 Custom progress meter update interval
- 18.27 -J and -O with %-encoded filenames
- 18.28 -J with -C -
- 18.29 --retry and transfer timeouts
-
- 19. Build
- 19.2 Enable PIE and RELRO by default
- 19.3 Do not use GNU libtool on OpenBSD
- 19.4 Package curl for Windows in a signed installer
- 19.5 make configure use --cache-file more and better
-
- 20. Test suite
- 20.1 SSL tunnel
- 20.2 more protocols supported
- 20.3 more platforms supported
- 20.4 write an SMB test server to replace impacket
- 20.5 Use the RFC 6265 test suite
- 20.6 Run web-platform-tests URL tests
-
- 21. MQTT
- 21.1 Support rate-limiting
- 21.2 Support MQTTS
- 21.3 Handle network blocks
- 21.4 large payloads
-
- 22. TFTP
- 22.1 TFTP does not convert LF to CRLF for mode=netascii
-
- 23. Gopher
- 23.1 Handle network blocks
-
-==============================================================================
-
-1. libcurl
-
-1.1 TFO support on Windows
-
- libcurl supports the CURLOPT_TCP_FASTOPEN option since 7.49.0 for Linux and
- macOS. Windows supports TCP Fast Open starting with Windows 10, version 1607
- and we should add support for it.
-
- TCP Fast Open is supported on several platforms but not on Windows. Work on
- this was once started but never finished.
-
- See https://github.com/curl/curl/pull/3378
-
-1.2 Consult %APPDATA% also for .netrc
-
- %APPDATA%\.netrc is not considered when running on Windows. should not it?
-
- See https://github.com/curl/curl/issues/4016
-
-1.3 struct lifreq
-
- Use 'struct lifreq' and SIOCGLIFADDR instead of 'struct ifreq' and
- SIOCGIFADDR on newer Solaris versions as they claim the latter is obsolete.
- To support IPv6 interface addresses for network interfaces properly.
-
-1.4 alt-svc sharing
-
- The share interface could benefit from allowing the alt-svc cache to be
- possible to share between easy handles.
-
- See https://github.com/curl/curl/issues/4476
-
- The share interface offers CURL_LOCK_DATA_CONNECT to have multiple easy
- handle share a connection cache, but due to how connections are used they are
- still not thread-safe when used shared.
-
- See https://github.com/curl/curl/issues/4915 and lib1541.c
-
- The share interface offers CURL_LOCK_DATA_HSTS to have multiple easy handle
- share an HSTS cache, but this is not thread-safe.
-
-1.5 get rid of PATH_MAX
-
- Having code use and rely on PATH_MAX is not nice:
- https://insanecoding.blogspot.com/2007/11/pathmax-simply-isnt.html
-
- Currently the libssh2 SSH based code uses it, but to remove PATH_MAX from
- there we need libssh2 to properly tell us when we pass in a too small buffer
- and its current API (as of libssh2 1.2.7) does not.
-
-1.6 thread-safe sharing
-
- Using the share interface users can share some data between easy handles but
- several of the sharing options are documented as not safe and supported to
- share between multiple concurrent threads. Fixing this would enable more
- users to share data in more powerful ways.
-
-1.10 auto-detect proxy
-
- libcurl could be made to detect the system proxy setup automatically and use
- that. On Windows, macOS and Linux desktops for example.
-
- The pull-request to use libproxy for this was deferred due to doubts on the
- reliability of the dependency and how to use it:
- https://github.com/curl/curl/pull/977
-
- libdetectproxy is a (C++) library for detecting the proxy on Windows
- https://github.com/paulharris/libdetectproxy
-
-1.12 updated DNS server while running
-
- If /etc/resolv.conf gets updated while a program using libcurl is running, it
- is may cause name resolves to fail unless res_init() is called. We should
- consider calling res_init() + retry once unconditionally on all name resolve
- failures to mitigate against this. Firefox works like that. Note that Windows
- does not have res_init() or an alternative.
-
- https://github.com/curl/curl/issues/2251
-
-1.13 c-ares and CURLOPT_OPENSOCKETFUNCTION
-
- curl creates most sockets via the CURLOPT_OPENSOCKETFUNCTION callback and
- close them with the CURLOPT_CLOSESOCKETFUNCTION callback. However, c-ares
- does not use those functions and instead opens and closes the sockets itself.
- This means that when curl passes the c-ares socket to the
- CURLMOPT_SOCKETFUNCTION it is not owned by the application like other
- sockets.
-
- See https://github.com/curl/curl/issues/2734
-
-1.15 Monitor connections in the connection pool
-
- libcurl's connection cache or pool holds a number of open connections for the
- purpose of possible subsequent connection reuse. It may contain a few up to a
- significant amount of connections. Currently, libcurl leaves all connections
- as they are and first when a connection is iterated over for matching or
- reuse purpose it is verified that it is still alive.
-
- Those connections may get closed by the server side for idleness or they may
- get an HTTP/2 ping from the peer to verify that they are still alive. By
- adding monitoring of the connections while in the pool, libcurl can detect
- dead connections (and close them) better and earlier, and it can handle
- HTTP/2 pings to keep such ones alive even when not actively doing transfers
- on them.
-
-1.16 Try to URL encode given URL
-
- Given a URL that for example contains spaces, libcurl could have an option
- that would try somewhat harder than it does now and convert spaces to %20 and
- perhaps URL encoded byte values over 128 etc (basically do what the redirect
- following code already does).
-
- https://github.com/curl/curl/issues/514
-
-1.17 Add support for IRIs
-
- IRIs (RFC 3987) allow localized, non-ASCII, names in the URL. To properly
- support this, curl/libcurl would need to translate/encode the given input
- from the input string encoding into percent encoded output "over the wire".
-
- To make that work smoothly for curl users even on Windows, curl would
- probably need to be able to convert from several input encodings.
-
-1.18 try next proxy if one does not work
-
- Allow an application to specify a list of proxies to try, and failing to
- connect to the first go on and try the next instead until the list is
- exhausted. Browsers support this feature at least when they specify proxies
- using PACs.
-
- https://github.com/curl/curl/issues/896
-
-1.19 provide timing info for each redirect
-
- curl and libcurl provide timing information via a set of different
- time-stamps (CURLINFO_*_TIME). When curl is following redirects, those
- returned time value are the accumulated sums. An improvement could be to
- offer separate timings for each redirect.
-
- https://github.com/curl/curl/issues/6743
-
-1.20 SRV and URI DNS records
-
- Offer support for resolving SRV and URI DNS records for libcurl to know which
- server to connect to for various protocols (including HTTP).
-
-1.22 CURLINFO_PAUSE_STATE
-
- Return information about the transfer's current pause state, in both
- directions. https://github.com/curl/curl/issues/2588
-
-1.25 Expose tried IP addresses that failed
-
- When libcurl fails to connect to a host, it could offer the application the
- addresses that were used in the attempt. Source + dest IP, source + dest port
- and protocol (UDP or TCP) for each failure. Possibly as a callback. Perhaps
- also provide "reason".
-
- https://github.com/curl/curl/issues/2126
-
-1.30 config file parsing
-
- Consider providing an API, possibly in a separate companion library, for
- parsing a config file like curl's -K/--config option to allow applications to
- get the same ability to read curl options from files.
-
- See https://github.com/curl/curl/issues/3698
-
-1.31 erase secrets from heap/stack after use
-
- Introducing a concept and system to erase secrets from memory after use, it
- could help mitigate and lessen the impact of (future) security problems etc.
- However: most secrets are passed to libcurl as clear text from the
- application and then clearing them within the library adds nothing...
-
- https://github.com/curl/curl/issues/7268
-
-1.32 add asynch getaddrinfo support
-
- Use getaddrinfo_a() to provide an asynch name resolver backend to libcurl
- that does not use threads and does not depend on c-ares. The getaddrinfo_a
- function is (probably?) glibc specific but that is a widely used libc among
- our users.
-
- https://github.com/curl/curl/pull/6746
-
-1.33 make DoH inherit more transfer properties
-
- Some options are not inherited because they are not relevant for the DoH SSL
- connections, or inheriting the option may result in unexpected behavior. For
- example the user's debug function callback is not inherited because it would
- be unexpected for internal handles (ie DoH handles) to be passed to that
- callback.
-
- If an option is not inherited then it is not possible to set it separately
- for DoH without a DoH-specific option. For example:
- CURLOPT_DOH_SSL_VERIFYHOST, CURLOPT_DOH_SSL_VERIFYPEER and
- CURLOPT_DOH_SSL_VERIFYSTATUS.
-
- See https://github.com/curl/curl/issues/6605
-
-2. libcurl - multi interface
-
-2.1 More non-blocking
-
- Make sure we do not ever loop because of non-blocking sockets returning
- EWOULDBLOCK or similar. Blocking cases include:
-
- - Name resolves on non-Windows unless c-ares or the threaded resolver is used.
-
- - The threaded resolver may block on cleanup:
- https://github.com/curl/curl/issues/4852
-
- - file:// transfers
-
- - TELNET transfers
-
- - GSSAPI authentication for FTP transfers
-
- - The "DONE" operation (post transfer protocol-specific actions) for the
- protocols SFTP, SMTP, FTP. Fixing multi_done() for this is a worthy task.
-
- - curl_multi_remove_handle for any of the above. See section 2.3.
-
- - Calling curl_ws_send() from a callback
-
-2.2 Better support for same name resolves
-
- If a name resolve has been initiated for name NN and a second easy handle
- wants to resolve that name as well, make it wait for the first resolve to end
- up in the cache instead of doing a second separate resolve. This is
- especially needed when adding many simultaneous handles using the same host
- name when the DNS resolver can get flooded.
-
-2.3 Non-blocking curl_multi_remove_handle()
-
- The multi interface has a few API calls that assume a blocking behavior, like
- add_handle() and remove_handle() which limits what we can do internally. The
- multi API need to be moved even more into a single function that "drives"
- everything in a non-blocking manner and signals when something is done. A
- remove or add would then only ask for the action to get started and then
- multi_perform() etc still be called until the add/remove is completed.
-
-2.4 Split connect and authentication process
-
- The multi interface treats the authentication process as part of the connect
- phase. As such any failures during authentication does not trigger the
- relevant QUIT or LOGOFF for protocols such as IMAP, POP3 and SMTP.
-
-2.5 Edge-triggered sockets should work
-
- The multi_socket API should work with edge-triggered socket events. One of
- the internal actions that need to be improved for this to work perfectly is
- the 'maxloops' handling in transfer.c:readwrite_data().
-
-2.6 multi upkeep
-
- In libcurl 7.62.0 we introduced curl_easy_upkeep. It unfortunately only works
- on easy handles. We should introduces a version of that for the multi handle,
- and also consider doing "upkeep" automatically on connections in the
- connection pool when the multi handle is in used.
-
- See https://github.com/curl/curl/issues/3199
-
-2.7 Virtual external sockets
-
- libcurl performs operations on the given file descriptor that presumes it is
- a socket and an application cannot replace them at the moment. Allowing an
- application to fully replace those would allow a larger degree of freedom and
- flexibility.
-
- See https://github.com/curl/curl/issues/5835
-
-2.8 dynamically decide to use socketpair
-
- For users who do not use curl_multi_wait() or do not care for
- curl_multi_wakeup(), we could introduce a way to make libcurl NOT
- create a socketpair in the multi handle.
-
- See https://github.com/curl/curl/issues/4829
-
-3. Documentation
-
-3.1 Improve documentation about fork safety
-
- See https://github.com/curl/curl/issues/6968
-
-4. FTP
-
-4.1 HOST
-
- HOST is a command for a client to tell which hostname to use, to offer FTP
- servers named-based virtual hosting:
-
- https://datatracker.ietf.org/doc/html/rfc7151
-
-4.2 A fixed directory listing format
-
- Since listing the contents of a remove directory with FTP is returning the
- list in a format and style the server likes without any estblished or even
- defactor standard existing, it would be a feature to users if curl could
- parse the directory listing and output a general curl format that is fixed
- and the same, independent of the server's choice. This would allow users to
- better and more reliably extract information about remote content via FTP
- directory listings.
-
-4.6 GSSAPI via Windows SSPI
-
- In addition to currently supporting the SASL GSSAPI mechanism (Kerberos V5)
- via third-party GSS-API libraries, such as MIT Kerberos, also add support
- for GSSAPI authentication via Windows SSPI.
-
-4.7 STAT for LIST without data connection
-
- Some FTP servers allow STAT for listing directories instead of using LIST,
- and the response is then sent over the control connection instead of as the
- otherwise usedw data connection: https://www.nsftools.com/tips/RawFTP.htm#STAT
-
- This is not detailed in any FTP specification.
-
-4.8 Passive transfer could try other IP addresses
-
- When doing FTP operations through a proxy at localhost, the reported spotted
- that curl only tried to connect once to the proxy, while it had multiple
- addresses and a failed connect on one address should make it try the next.
-
- After switching to passive mode (EPSV), curl could try all IP addresses for
- "localhost". Currently it tries ::1, but it should also try 127.0.0.1.
-
- See https://github.com/curl/curl/issues/1508
-
-5. HTTP
-
-5.1 Provide the error body from a CONNECT response
-
- When curl receives a body response from a CONNECT request to a proxy, it
- always just reads and ignores it. It would make some users happy if curl
- instead optionally would be able to make that responsible available. Via a
- new callback? Through some other means?
-
- See https://github.com/curl/curl/issues/9513
-
-5.2 Obey Retry-After in redirects
-
- The Retry-After is said to dictate "the minimum time that the user agent is
- asked to wait before issuing the redirected request" and libcurl does not
- obey this.
-
- See https://github.com/curl/curl/issues/11447
-
-5.3 Rearrange request header order
-
- Server implementers often make an effort to detect browser and to reject
- clients it can detect to not match. One of the last details we cannot yet
- control in libcurl's HTTP requests, which also can be exploited to detect
- that libcurl is in fact used even when it tries to impersonate a browser, is
- the order of the request headers. I propose that we introduce a new option in
- which you give headers a value, and then when the HTTP request is built it
- sorts the headers based on that number. We could then have internally created
- headers use a default value so only headers that need to be moved have to be
- specified.
-
-5.4 Allow SAN names in HTTP/2 server push
-
- curl only allows HTTP/2 push promise if the provided :authority header value
- exactly matches the hostname given in the URL. It could be extended to allow
- any name that would match the Subject Alternative Names in the server's TLS
- certificate.
-
- See https://github.com/curl/curl/pull/3581
-
-5.5 auth= in URLs
-
- Add the ability to specify the preferred authentication mechanism to use by
- using ;auth=<mech> in the login part of the URL.
-
- For example:
-
- http://test:pass;auth=NTLM@example.com would be equivalent to specifying
- --user test:pass;auth=NTLM or --user test:pass --ntlm from the command line.
-
- Additionally this should be implemented for proxy base URLs as well.
-
-5.6 alt-svc should fallback if alt-svc does not work
-
- The alt-svc: header provides a set of alternative services for curl to use
- instead of the original. If the first attempted one fails, it should try the
- next etc and if all alternatives fail go back to the original.
-
- See https://github.com/curl/curl/issues/4908
-
-5.7 Require HTTP version X or higher
-
- curl and libcurl provide options for trying higher HTTP versions (for example
- HTTP/2) but then still allows the server to pick version 1.1. We could
- consider adding a way to require a minimum version.
-
- See https://github.com/curl/curl/issues/7980
-
-6. TELNET
-
-6.1 ditch stdin
-
- Reading input (to send to the remote server) on stdin is a crappy solution
- for library purposes. We need to invent a good way for the application to be
- able to provide the data to send.
-
-6.2 ditch telnet-specific select
-
- Move the telnet support's network select() loop go away and merge the code
- into the main transfer loop. Until this is done, the multi interface does not
- work for telnet.
-
-6.3 feature negotiation debug data
-
- Add telnet feature negotiation data to the debug callback as header data.
-
-6.4 exit immediately upon connection if stdin is /dev/null
-
- If it did, curl could be used to probe if there is an server there listening
- on a specific port. That is, the following command would exit immediately
- after the connection is established with exit code 0:
-
-    curl -s --connect-timeout 2 telnet://example.com:80 </dev/null
-
-7. SMTP
-
-7.1 Passing NOTIFY option to CURLOPT_MAIL_RCPT
-
- Is there a way to pass the NOTIFY option to the CURLOPT_MAIL_RCPT option ?  I
- set a string that already contains a bracket. For instance something like
- that: curl_slist_append( recipients, "<foo@bar> NOTIFY=SUCCESS,FAILURE" );
-
- https://github.com/curl/curl/issues/8232
-
-7.2 Enhanced capability support
-
- Add the ability, for an application that uses libcurl, to obtain the list of
- capabilities returned from the EHLO command.
-
-7.3 Add CURLOPT_MAIL_CLIENT option
-
- Rather than use the URL to specify the mail client string to present in the
- HELO and EHLO commands, libcurl should support a new CURLOPT specifically for
- specifying this data as the URL is non-standard and to be honest a bit of a
- hack ;-)
-
- Please see the following thread for more information:
- https://curl.se/mail/lib-2012-05/0178.html
-
-
-8. POP3
-
-8.2 Enhanced capability support
-
- Add the ability, for an application that uses libcurl, to obtain the list of
- capabilities returned from the CAPA command.
-
-9. IMAP
-
-9.1 Enhanced capability support
-
- Add the ability, for an application that uses libcurl, to obtain the list of
- capabilities returned from the CAPABILITY command.
-
-10. LDAP
-
-10.1 SASL based authentication mechanisms
-
- Currently the LDAP module only supports ldap_simple_bind_s() in order to bind
- to an LDAP server. However, this function sends username and password details
- using the simple authentication mechanism (as clear text). However, it should
- be possible to use ldap_bind_s() instead specifying the security context
- information ourselves.
-
-10.2 CURLOPT_SSL_CTX_FUNCTION for LDAPS
-
- CURLOPT_SSL_CTX_FUNCTION works perfectly for HTTPS and email protocols, but
- it has no effect for LDAPS connections.
-
- https://github.com/curl/curl/issues/4108
-
-10.3 Paged searches on LDAP server
-
- https://github.com/curl/curl/issues/4452
-
-10.4 Certificate-Based Authentication
-
- LDAPS not possible with macOS and Windows with Certificate-Based Authentication
-
- https://github.com/curl/curl/issues/9641
-
-11. SMB
-
-11.1 File listing support
-
- Add support for listing the contents of an SMB share. The output should
- probably be the same as/similar to FTP.
-
-11.2 Honor file timestamps
-
- The timestamp of the transferred file should reflect that of the original
- file.
-
-11.3 Use NTLMv2
-
- Currently the SMB authentication uses NTLMv1.
-
-11.4 Create remote directories
-
- Support for creating remote directories when uploading a file to a directory
- that does not exist on the server, just like --ftp-create-dirs.
-
-
-12. FILE
-
-12.1 Directory listing on non-POSIX
-
- Listing the contents of a directory accessed with FILE only works on
- platforms with opendir. Support could be added for more systems, like
- Windows.
-
-13. TLS
-
-13.1 TLS-PSK with OpenSSL
-
- Transport Layer Security pre-shared key ciphersuites (TLS-PSK) is a set of
- cryptographic protocols that provide secure communication based on pre-shared
- keys (PSKs). These pre-shared keys are symmetric keys shared in advance among
- the communicating parties.
-
- https://github.com/curl/curl/issues/5081
-
-13.2 TLS channel binding
-
- TLS 1.2 and 1.3 provide the ability to extract some secret data from the TLS
- connection and use it in the client request (usually in some sort of
- authentication) to ensure that the data sent is bound to the specific TLS
- connection and cannot be successfully intercepted by a proxy. This
- functionality can be used in a standard authentication mechanism such as
- GSS-API or SCRAM, or in custom approaches like custom HTTP Authentication
- headers.
-
- For TLS 1.2, the binding type is usually tls-unique, and for TLS 1.3 it is
- tls-exporter.
-
- https://datatracker.ietf.org/doc/html/rfc5929
- https://datatracker.ietf.org/doc/html/rfc9266
- https://github.com/curl/curl/issues/9226
-
-13.3 Defeat TLS fingerprinting
-
- By changing the order of TLS extensions provided in the TLS handshake, it is
- sometimes possible to circumvent TLS fingerprinting by servers. The TLS
- extension order is of course not the only way to fingerprint a client.
-
-13.4 Consider OCSP stapling by default
-
- Treat a negative response a reason for aborting the connection. Since OCSP
- stapling is presumed to get used much less in the future when Let's Encrypt
- drops the OCSP support, the benefit of this might however be limited.
-
- https://github.com/curl/curl/issues/15483
-
-13.6 Provide callback for cert verification
-
- OpenSSL supports a callback for customised verification of the peer
- certificate, but this does not seem to be exposed in the libcurl APIs. Could
- it be? There is so much that could be done if it were.
-
-13.7 Less memory massaging with Schannel
-
- The Schannel backend does a lot of custom memory management we would rather
- avoid: the repeated alloc + free in sends and the custom memory + realloc
- system for encrypted and decrypted data. That should be avoided and reduced
- for 1) efficiency and 2) safety.
-
-13.8 Support DANE
-
- DNS-Based Authentication of Named Entities (DANE) is a way to provide SSL
- keys and certs over DNS using DNSSEC as an alternative to the CA model.
- https://datatracker.ietf.org/doc/html/rfc6698
-
- An initial patch was posted by Suresh Krishnaswamy on March 7th 2013
- (https://curl.se/mail/lib-2013-03/0075.html) but it was a too simple
- approach. See Daniel's comments:
- https://curl.se/mail/lib-2013-03/0103.html . libunbound may be the
- correct library to base this development on.
-
- Björn Stenberg wrote a separate initial take on DANE that was never
- completed.
-
-13.9 TLS record padding
-
- TLS (1.3) offers optional record padding and OpenSSL provides an API for it.
- I could make sense for libcurl to offer this ability to applications to make
- traffic patterns harder to figure out by network traffic observers.
-
- See https://github.com/curl/curl/issues/5398
-
-13.10 Support Authority Information Access certificate extension (AIA)
-
- AIA can provide various things like CRLs but more importantly information
- about intermediate CA certificates that can allow validation path to be
- fulfilled when the HTTPS server does not itself provide them.
-
- Since AIA is about downloading certs on demand to complete a TLS handshake,
- it is probably a bit tricky to get done right.
-
- See https://github.com/curl/curl/issues/2793
-
-13.11 Some TLS options are not offered for HTTPS proxies
-
- Some TLS related options to the command line tool and libcurl are only
- provided for the server and not for HTTPS proxies. --proxy-tls-max,
- --proxy-tlsv1.3, --proxy-curves and a few more.
- For more Documentation on this see:
- https://curl.se/libcurl/c/tls-options.html
-
- https://github.com/curl/curl/issues/12286
-
-13.13 Make sure we forbid TLS 1.3 post-handshake authentication
-
- RFC 8740 explains how using HTTP/2 must forbid the use of TLS 1.3
- post-handshake authentication. We should make sure to live up to that.
-
- See https://github.com/curl/curl/issues/5396
-
-13.14 Support the clienthello extension
-
- Certain stupid networks and middle boxes have a problem with SSL handshake
- packets that are within a certain size range because how that sets some bits
- that previously (in older TLS version) were not set. The clienthello
- extension adds padding to avoid that size range.
-
- https://datatracker.ietf.org/doc/html/rfc7685
- https://github.com/curl/curl/issues/2299
-
-13.16 Share the CA cache
-
- For TLS backends that supports CA caching, it makes sense to allow the share
- object to be used to store the CA cache as well via the share API. Would
- allow multiple easy handles to reuse the CA cache and save themselves from a
- lot of extra processing overhead.
-
-13.17 Add missing features to TLS backends
-
- The feature matrix at https://curl.se/libcurl/c/tls-options.html shows which
- features are supported by which TLS backends, and thus also where there are
- feature gaps.
-
-14. Proxy
-
-14.1 Retry SOCKS handshake on address type not supported
-
- When curl resolves a hostname, it might get a mix of IPv6 and IPv4 returned.
- curl might then use an IPv6 address with a SOCKS5 proxy, which - if it does
- not support IPv6 - returns "Address type not supported" and curl exits with
- that error.
-
- Perhaps it is preferred if curl would in this situation instead first retry
- the SOCKS handshake again for this case and then use one of the IPv4
- addresses for the target host.
-
- See https://github.com/curl/curl/issues/17222
-
-15. Schannel
-
-15.1 Extend support for client certificate authentication
-
- The existing support for the -E/--cert and --key options could be
- extended by supplying a custom certificate and key in PEM format, see:
- - Getting a Certificate for Schannel
-   https://learn.microsoft.com/windows/win32/secauthn/getting-a-certificate-for-schannel
-
-15.2 Extend support for the --ciphers option
-
- The existing support for the --ciphers option could be extended
- by mapping the OpenSSL/GnuTLS cipher suites to the Schannel APIs, see
- - Specifying Schannel Ciphers and Cipher Strengths
-   https://learn.microsoft.com/windows/win32/secauthn/specifying-schannel-ciphers-and-cipher-strengths
-
-15.4 Add option to allow abrupt server closure
-
- libcurl with Schannel errors without a known termination point from the server
- (such as length of transfer, or SSL "close notify" alert) to prevent against
- a truncation attack. Really old servers may neglect to send any termination
- point. An option could be added to ignore such abrupt closures.
-
- https://github.com/curl/curl/issues/4427
-
-16. SASL
-
-16.1 Other authentication mechanisms
-
- Add support for other authentication mechanisms such as OLP,
- GSS-SPNEGO and others.
-
-16.2 Add QOP support to GSSAPI authentication
-
- Currently the GSSAPI authentication only supports the default QOP of auth
- (Authentication), whilst Kerberos V5 supports both auth-int (Authentication
- with integrity protection) and auth-conf (Authentication with integrity and
- privacy protection).
-
-
-17. SSH protocols
-
-17.1 Multiplexing
-
- SSH is a perfectly fine multiplexed protocols which would allow libcurl to do
- multiple parallel transfers from the same host using the same connection,
- much in the same spirit as HTTP/2 does. libcurl however does not take
- advantage of that ability but does instead always create a new connection for
- new transfers even if an existing connection already exists to the host.
-
- To fix this, libcurl would have to detect an existing connection and "attach"
- the new transfer to the existing one.
-
-17.2 Handle growing SFTP files
-
- The SFTP code in libcurl checks the file size *before* a transfer starts and
- then proceeds to transfer exactly that amount of data. If the remote file
- grows while the transfer is in progress libcurl does not notice and does not
- adapt. The OpenSSH SFTP command line tool does and libcurl could also just
- attempt to download more to see if there is more to get...
-
- https://github.com/curl/curl/issues/4344
-
-17.3 Read keys from ~/.ssh/id_ecdsa, id_ed25519
-
- The libssh2 backend in curl is limited to only reading keys from id_rsa and
- id_dsa, which makes it fail connecting to servers that use more modern key
- types.
-
- https://github.com/curl/curl/issues/8586
-
-17.4 Support CURLOPT_PREQUOTE
-
- The two other QUOTE options are supported for SFTP, but this was left out for
- unknown reasons.
-
-17.5 SSH over HTTPS proxy with more backends
-
- The SSH based protocols SFTP and SCP did not work over HTTPS proxy at
- all until PR https://github.com/curl/curl/pull/6021 brought the
- functionality with the libssh2 backend. Presumably, this support
- can/could be added for the other backends as well.
-
-17.6 SFTP with SCP://
-
- OpenSSH 9 switched their 'scp' tool to speak SFTP under the hood. Going
- forward it might be worth having curl or libcurl attempt SFTP if SCP fails to
- follow suite.
-
-18. Command line tool
-
-18.1 sync
-
- "curl --sync http://example.com/feed[1-100].rss" or
- "curl --sync http://example.net/{index,calendar,history}.html"
-
- Downloads a range or set of URLs using the remote name, but only if the
- remote file is newer than the local file. A Last-Modified HTTP date header
- should also be used to set the mod date on the downloaded file.
-
-18.2 glob posts
-
- Globbing support for -d and -F, as in 'curl -d "name=foo[0-9]" URL'.
- This is easily scripted though.
-
-18.4 --proxycommand
-
- Allow the user to make curl run a command and use its stdio to make requests
- and not do any network connection by itself. Example:
-
-   curl --proxycommand 'ssh pi@raspberrypi.local -W 10.1.1.75 80' \
-        http://some/otherwise/unavailable/service.php
-
- See https://github.com/curl/curl/issues/4941
-
-18.5 UTF-8 filenames in Content-Disposition
-
- RFC 6266 documents how UTF-8 names can be passed to a client in the
- Content-Disposition header, and curl does not support this.
-
- https://github.com/curl/curl/issues/1888
-
-18.6 Option to make -Z merge lined based outputs on stdout
-
- When a user requests multiple lined based files using -Z and sends them to
- stdout, curl does not "merge" and send complete lines fine but may send
- partial lines from several sources.
-
- https://github.com/curl/curl/issues/5175
-
-18.7 specify which response codes that make -f/--fail return error
-
- Allows a user to better specify exactly which error code(s) that are fine
- and which are errors for their specific uses cases
-
-18.9 Choose the name of file in braces for complex URLs
-
- When using braces to download a list of URLs and you use complicated names
- in the list of alternatives, it could be handy to allow curl to use other
- names when saving.
-
- Consider a way to offer that. Possibly like
- {partURL1:name1,partURL2:name2,partURL3:name3} where the name following the
- colon is the output name.
-
- See https://github.com/curl/curl/issues/221
-
-18.10 improve how curl works in a Windows console window
-
- If you pull the scrollbar when transferring with curl in a Windows console
- window, the transfer is interrupted and can get disconnected. This can
- probably be improved. See https://github.com/curl/curl/issues/322
-
-18.11 Windows: set attribute 'archive' for completed downloads
-
- The archive bit (FILE_ATTRIBUTE_ARCHIVE, 0x20) separates files that shall be
- backed up from those that are either not ready or have not changed.
-
- Downloads in progress are neither ready to be backed up, nor should they be
- opened by a different process. Only after a download has been completed it is
- sensible to include it in any integer snapshot or backup of the system.
-
- See https://github.com/curl/curl/issues/3354
-
-18.12 keep running, read instructions from pipe/socket
-
- Provide an option that makes curl not exit after the last URL (or even work
- without a given URL), and then make it read instructions passed on a pipe or
- over a socket to make further instructions so that a second subsequent curl
- invoke can talk to the still running instance and ask for transfers to get
- done, and thus maintain its connection pool, DNS cache and more.
-
-18.13 Acknowledge Ratelimit headers
-
- Consider a command line option that can make curl do multiple serial requests
- while acknowledging server specified rate limits:
- https://datatracker.ietf.org/doc/draft-ietf-httpapi-ratelimit-headers/
-
- See https://github.com/curl/curl/issues/5406
-
-18.14 --dry-run
-
- A command line option that makes curl show exactly what it would do and send
- if it would run for real.
-
- See https://github.com/curl/curl/issues/5426
-
-18.15 --retry should resume
-
- When --retry is used and curl actually retries transfer, it should use the
- already transferred data and do a resumed transfer for the rest (when
- possible) so that it does not have to transfer the same data again that was
- already transferred before the retry.
-
- See https://github.com/curl/curl/issues/1084
-
-18.17 consider filename from the redirected URL with -O ?
-
- When a user gives a URL and uses -O, and curl follows a redirect to a new
- URL, the filename is not extracted and used from the newly redirected-to URL
- even if the new URL may have a much more sensible filename.
-
- This is clearly documented and helps for security since there is no surprise
- to users which filename that might get overwritten, but maybe a new option
- could allow for this or maybe -J should imply such a treatment as well as -J
- already allows for the server to decide what filename to use so it already
- provides the "may overwrite any file" risk.
-
- This is extra tricky if the original URL has no filename part at all since
- then the current code path does error out with an error message, and we
- cannot *know* already at that point if curl is redirected to a URL that has a
- filename...
-
- See https://github.com/curl/curl/issues/1241
-
-18.18 retry on network is unreachable
-
- The --retry option retries transfers on "transient failures". We later added
- --retry-connrefused to also retry for "connection refused" errors.
-
- Suggestions have been brought to also allow retry on "network is unreachable"
- errors and while totally reasonable, maybe we should consider a way to make
- this more configurable than to add a new option for every new error people
- want to retry for?
-
- https://github.com/curl/curl/issues/1603
-
-18.20 hostname sections in config files
-
- config files would be more powerful if they could set different
- configurations depending on used URLs, hostname or possibly origin. Then a
- default .curlrc could a specific user-agent only when doing requests against
- a certain site.
-
-18.21 retry on the redirected-to URL
-
- When curl is told to --retry a failed transfer and follows redirects, it
- might get an HTTP 429 response from the redirected-to URL and not the
- original one, which then could make curl decide to rather retry the transfer
- on that URL only instead of the original operation to the original URL.
-
- Perhaps extra emphasized if the original transfer is a large POST that
- redirects to a separate GET, and that GET is what gets the 529
-
- See https://github.com/curl/curl/issues/5462
-
-18.23 Set the modification date on an uploaded file
-
- For SFTP and possibly FTP, curl could offer an option to set the
- modification time for the uploaded file.
-
- See https://github.com/curl/curl/issues/5768
-
-18.24 Use multiple parallel transfers for a single download
-
- To enhance transfer speed, downloading a single URL can be split up into
- multiple separate range downloads that get combined into a single final
- result.
-
- An ideal implementation would not use a specified number of parallel
- transfers, but curl could:
- - First start getting the full file as transfer A
- - If after N seconds have passed and the transfer is expected to continue for
-   M seconds or more, add a new transfer (B) that asks for the second half of
-   A's content (and stop A at the middle).
- - If splitting up the work improves the transfer rate, it could then be done
-   again. Then again, etc up to a limit.
-
- This way, if transfer B fails (because Range: is not supported) it lets
- transfer A remain the single one. N and M could be set to some sensible
- defaults.
-
- See https://github.com/curl/curl/issues/5774
-
-18.25 Prevent terminal injection when writing to terminal
-
- curl could offer an option to make escape sequence either non-functional or
- avoid cursor moves or similar to reduce the risk of a user getting tricked by
- clever tricks.
-
- See https://github.com/curl/curl/issues/6150
-
-18.26 Custom progress meter update interval
-
- Users who are for example doing large downloads in CI or remote setups might
- want the occasional progress meter update to see that the transfer is
- progressing and has not stuck, but they may not appreciate the
- many-times-a-second frequency curl can end up doing it with now.
-
-18.27 -J and -O with %-encoded filenames
-
- -J/--remote-header-name does not decode %-encoded filenames. RFC 6266 details
- how it should be done. The can of worm is basically that we have no charset
- handling in curl and ASCII >=128 is a challenge for us. Not to mention that
- decoding also means that we need to check for nastiness that is attempted,
- like "../" sequences and the like. Probably everything to the left of any
- embedded slashes should be cut off.
- https://curl.se/bug/view.cgi?id=1294
-
- -O also does not decode %-encoded names, and while it has even less
- information about the charset involved the process is similar to the -J case.
-
- Note that we do not decode -O without the user asking for it with some other
- means, since -O has always been documented to use the name exactly as
- specified in the URL.
-
-18.28 -J with -C -
-
- When using -J (with -O), automatically resumed downloading together with "-C
- -" fails. Without -J the same command line works. This happens because the
- resume logic is worked out before the target filename (and thus its
- pre-transfer size) has been figured out. This can be improved.
-
- https://curl.se/bug/view.cgi?id=1169
-
-18.29 --retry and transfer timeouts
-
- If using --retry and the transfer timeouts (possibly due to using -m or
- -y/-Y) the next attempt does not resume the transfer properly from what was
- downloaded in the previous attempt but truncates and restarts at the original
- position where it was at before the previous failed attempt. See
- https://curl.se/mail/lib-2008-01/0080.html
-
-19. Build
-
-19.2 Enable PIE and RELRO by default
-
- Especially when having programs that execute curl via the command line, PIE
- renders the exploitation of memory corruption vulnerabilities a lot more
- difficult. This can be attributed to the additional information leaks being
- required to conduct a successful attack. RELRO, on the other hand, masks
- different binary sections like the GOT as read-only and thus kills a handful
- of techniques that come in handy when attackers are able to arbitrarily
- overwrite memory. A few tests showed that enabling these features had close
- to no impact, neither on the performance nor on the general functionality of
- curl.
-
-19.3 Do not use GNU libtool on OpenBSD
-
- When compiling curl on OpenBSD with "--enable-debug" it gives linking errors
- when you use GNU libtool. This can be fixed by using the libtool provided by
- OpenBSD itself. However for this the user always needs to invoke make with
- "LIBTOOL=/usr/bin/libtool". It would be nice if the script could have some
- magic to detect if this system is an OpenBSD host and then use the OpenBSD
- libtool instead.
-
- See https://github.com/curl/curl/issues/5862
-
-19.4 Package curl for Windows in a signed installer
-
- See https://github.com/curl/curl/issues/5424
-
-19.5 make configure use --cache-file more and better
-
- The configure script can be improved to cache more values so that repeated
- invokes run much faster.
-
- See https://github.com/curl/curl/issues/7753
-
-20. Test suite
-
-20.1 SSL tunnel
-
- Make our own version of stunnel for simple port forwarding to enable HTTPS
- and FTP-SSL tests without the stunnel dependency, and it could allow us to
- provide test tools built with either OpenSSL or GnuTLS
-
-20.2 more protocols supported
-
- Extend the test suite to include more protocols. The telnet could just do FTP
- or http operations (for which we have test servers).
-
-20.3 more platforms supported
-
- Make the test suite work on more platforms. OpenBSD and macOS. Remove
- fork()s and it should become even more portable.
-
-20.4 write an SMB test server to replace impacket
-
- This would allow us to run SMB tests on more platforms and do better and more
- covering tests.
-
- See https://github.com/curl/curl/issues/15697
-
-20.5 Use the RFC 6265 test suite
-
- A test suite made for HTTP cookies (RFC 6265) by Adam Barth is available at
- https://github.com/abarth/http-state/tree/master/tests
-
- It would be good if someone would write a script/setup that would run curl
- with that test suite and detect deviances. Ideally, that would even be
- incorporated into our regular test suite.
-
-20.6 Run web-platform-tests URL tests
-
- Run web-platform-tests URL tests and compare results with browsers on wpt.fyi
-
- It would help us find issues to fix and help us document where our parser
- differs from the WHATWG URL spec parsers.
-
- See https://github.com/curl/curl/issues/4477
-
-21. MQTT
-
-21.1 Support rate-limiting
-
- The rate-limiting logic is done in the PERFORMING state in multi.c but MQTT
- is not (yet) implemented to use that.
-
-21.2 Support MQTTS
-
-21.3 Handle network blocks
-
- Running test suite with `CURL_DBG_SOCK_WBLOCK=90 ./runtests.pl -a mqtt` makes
- several MQTT test cases fail where they should not.
-
-21.4 large payloads
-
- libcurl unnecessarily allocates heap memory to hold the entire payload to get
- sent, when the data is already perfectly accessible where it is when
- `CURLOPT_POSTFIELDS` is used. This is highly inefficient for larger payloads.
- Additionally, libcurl does not support using the read callback for sending
- MQTT which is yet another way to avoid having to hold large payload in
- memory.
-
-22. TFTP
-
-22.1 TFTP does not convert LF to CRLF for mode=netascii
-
- RFC 3617 defines that an TFTP transfer can be done using "netascii" mode.
- curl does not support extracting that mode from the URL nor does it treat
- such transfers specifically. It should probably do LF to CRLF translations
- for them.
-
- See https://github.com/curl/curl/issues/12655
-
-23. Gopher
-
-23.1 Handle network blocks
-
-  Running test suite with
-  `CURL_DBG_SOCK_WBLOCK=90 ./runtests.pl -a 1200 to 1300` makes several
-  Gopher test cases fail where they should not.
diff --git a/docs/TODO.md b/docs/TODO.md
new file mode 100644
index 0000000..09b4544
--- /dev/null
+++ b/docs/TODO.md
@@ -0,0 +1,1111 @@
+<!--
+Copyright (C) Daniel Stenberg, <daniel@haxx.se>, et al.
+
+SPDX-License-Identifier: curl
+-->
+
+# TODO intro
+
+Things to do in project curl. Please tell us what you think, contribute and
+send us patches that improve things.
+
+Be aware that these are things that we could do, or have once been considered
+things we could do. If you want to work on any of these areas, please consider
+bringing it up for discussions first on the mailing list so that we all agree
+it is still a good idea for the project.
+
+All bugs documented in the [known_bugs
+document](https://curl.se/docs/knownbugs.html) are subject for fixing.
+
+# libcurl
+
+## TCP Fast Open support on Windows
+
+libcurl has supported the `CURLOPT_TCP_FASTOPEN` option since 7.49.0 on Linux
+and macOS. Windows supports TCP Fast Open starting with Windows 10 version
+1607, and we should add support for it there.
+
+TCP Fast Open is supported on several platforms but not on Windows. Work on
+this was once started but never finished.
+
+See [curl pull request 3378](https://github.com/curl/curl/pull/3378)
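+
+As a reference for how applications already ask for TCP Fast Open on the
+platforms where it works, here is a minimal sketch (the URL is just a
+placeholder):
+
+    #include <curl/curl.h>
+
+    int main(void)
+    {
+      CURL *curl = curl_easy_init();
+      if(curl) {
+        curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
+        /* ask for TCP Fast Open; currently only honored on Linux and macOS */
+        curl_easy_setopt(curl, CURLOPT_TCP_FASTOPEN, 1L);
+        curl_easy_perform(curl);
+        curl_easy_cleanup(curl);
+      }
+      return 0;
+    }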
+
+## Consult `%APPDATA%` also for `.netrc`
+
+`%APPDATA%\.netrc` is not considered when running on Windows. Should it not be?
+
+See [curl issue 4016](https://github.com/curl/curl/issues/4016)
+
+## `struct lifreq`
+
+Use `struct lifreq` and `SIOCGLIFADDR` instead of `struct ifreq` and
+`SIOCGIFADDR` on newer Solaris versions as they claim the latter is obsolete.
+To support IPv6 interface addresses for network interfaces properly.
+
+## alt-svc sharing
+
+The share interface could benefit from allowing the alt-svc cache to be
+possible to share between easy handles.
+
+See [curl issue 4476](https://github.com/curl/curl/issues/4476)
+
+The share interface offers CURL_LOCK_DATA_CONNECT to have multiple easy
+handles share a connection cache, but due to how connections are used they are
+still not thread-safe when used shared.
+
+See [curl issue 4915](https://github.com/curl/curl/issues/4915) and lib1541.c
+
+The share interface offers CURL_LOCK_DATA_HSTS to have multiple easy handles
+share an HSTS cache, but this is not thread-safe.
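+
+As a sketch of how alt-svc sharing could look if it followed the existing
+share API pattern: `CURL_LOCK_DATA_ALTSVC` below is hypothetical and does not
+exist today, while the other names are existing API (lock callbacks omitted
+for brevity):
+
+    #include <curl/curl.h>
+
+    int main(void)
+    {
+      CURLSH *share = curl_share_init();
+      CURL *curl = curl_easy_init();
+      if(share && curl) {
+        /* existing option, usable today (though not thread-safe, as noted) */
+        curl_share_setopt(share, CURLSHOPT_SHARE, CURL_LOCK_DATA_HSTS);
+        /* hypothetical, not an existing option:
+           curl_share_setopt(share, CURLSHOPT_SHARE, CURL_LOCK_DATA_ALTSVC); */
+        curl_easy_setopt(curl, CURLOPT_SHARE, share);
+        curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
+        curl_easy_perform(curl);
+        curl_easy_cleanup(curl);
+      }
+      curl_share_cleanup(share);
+      return 0;
+    }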
+
+## get rid of PATH_MAX
+
+Having code use and rely on PATH_MAX is not nice:
+https://insanecoding.blogspot.com/2007/11/pathmax-simply-isnt.html
+
+Currently the libssh2 SSH based code uses it, but to remove PATH_MAX from
+there we need libssh2 to properly tell us when we pass in a too small buffer
+and its current API (as of libssh2 1.2.7) does not.
+
+## thread-safe sharing
+
+Using the share interface, users can share some data between easy handles, but
+several of the sharing options are documented as neither safe nor supported to
+share between multiple concurrent threads. Fixing this would enable more users
+to share data in more powerful ways.
+
+## auto-detect proxy
+
+libcurl could be made to detect the system proxy setup automatically and use
+that. On Windows, macOS and Linux desktops for example.
+
+The [pull-request to use *libproxy*](https://github.com/curl/curl/pull/977)
+for this was deferred due to doubts on the reliability of the dependency and
+how to use it.
+
+[*libdetectproxy*](https://github.com/paulharris/libdetectproxy) is a (C++)
+library for detecting the proxy on Windows.
+
+## updated DNS server while running
+
+If `/etc/resolv.conf` gets updated while a program using libcurl is running,
+it may cause name resolves to fail unless `res_init()` is called. We should
+consider calling `res_init()` + retry once unconditionally on all name resolve
+failures to mitigate against this. Firefox works like that. Note that Windows
+does not have `res_init()` or an alternative.
+
+[curl issue 2251](https://github.com/curl/curl/issues/2251)
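+
+As an illustration of the suggested mitigation, this is roughly what the
+retry-once idea looks like on systems that provide `res_init()`; it is a
+sketch of the concept, not existing libcurl behavior:
+
+    #include <curl/curl.h>
+    #include <resolv.h> /* res_init(); may need -lresolv when linking */
+
+    static CURLcode perform_with_resolv_retry(CURL *curl)
+    {
+      CURLcode result = curl_easy_perform(curl);
+      if(result == CURLE_COULDNT_RESOLVE_HOST) {
+        /* /etc/resolv.conf may have changed; reload it and retry once */
+        res_init();
+        result = curl_easy_perform(curl);
+      }
+      return result;
+    }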
+
+## c-ares and CURLOPT_OPENSOCKETFUNCTION
+
+curl creates most sockets via the CURLOPT_OPENSOCKETFUNCTION callback and
+closes them with the CURLOPT_CLOSESOCKETFUNCTION callback. However, c-ares does
+not use those functions and instead opens and closes the sockets itself. This
+means that when curl passes the c-ares socket to the CURLMOPT_SOCKETFUNCTION
+it is not owned by the application like other sockets.
+
+See [curl issue 2734](https://github.com/curl/curl/issues/2734)
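+
+For background, these are the application callbacks in question; sockets that
+c-ares opens never pass through them. A minimal sketch, assuming POSIX
+sockets:
+
+    #include <curl/curl.h>
+    #include <sys/socket.h>
+    #include <unistd.h>
+
+    /* called for sockets curl opens itself, but not for c-ares sockets */
+    static curl_socket_t opensocket_cb(void *clientp, curlsocktype purpose,
+                                       struct curl_sockaddr *address)
+    {
+      (void)clientp;
+      (void)purpose;
+      /* the application can register the socket in its own bookkeeping here */
+      return socket(address->family, address->socktype, address->protocol);
+    }
+
+    static int closesocket_cb(void *clientp, curl_socket_t item)
+    {
+      (void)clientp;
+      return close(item);
+    }
+
+    /* setup:
+       curl_easy_setopt(curl, CURLOPT_OPENSOCKETFUNCTION, opensocket_cb);
+       curl_easy_setopt(curl, CURLOPT_CLOSESOCKETFUNCTION, closesocket_cb); */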
+
+## Monitor connections in the connection pool
+
+libcurl's connection cache or pool holds a number of open connections for the
+purpose of possible subsequent connection reuse. It may contain anything from
+a few up to a significant number of connections. Currently, libcurl leaves all
+connections as they are and only verifies that a connection is still alive
+when it is iterated over for matching or reuse purposes.
+
+Those connections may get closed by the server side for idleness or they may
+get an HTTP/2 ping from the peer to verify that they are still alive. By
+adding monitoring of the connections while in the pool, libcurl can detect
+dead connections (and close them) better and earlier, and it can handle HTTP/2
+pings to keep such ones alive even when not actively doing transfers on them.
+
+## Try to URL encode given URL
+
+Given a URL that for example contains spaces, libcurl could have an option
+that would try somewhat harder than it does now and convert spaces to %20 and
+perhaps URL encode byte values over 128 etc (basically do what the redirect
+following code already does).
+
+[curl issue 514](https://github.com/curl/curl/issues/514)
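+
+Until such an option exists, an application can percent-encode individual URL
+components itself with the existing API. A small sketch (the string is just an
+example):
+
+    #include <curl/curl.h>
+    #include <stdio.h>
+
+    int main(void)
+    {
+      CURL *curl = curl_easy_init();
+      if(curl) {
+        /* encodes every byte that is not alphanumeric or -._~ so it is only
+           usable for individual components, not for complete URLs */
+        char *part = curl_easy_escape(curl, "name with spaces", 0);
+        if(part) {
+          printf("q=%s\n", part); /* prints q=name%20with%20spaces */
+          curl_free(part);
+        }
+        curl_easy_cleanup(curl);
+      }
+      return 0;
+    }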
+
+## Add support for IRIs
+
+IRIs (RFC 3987) allow localized, non-ASCII, names in the URL. To properly
+support this, curl/libcurl would need to translate/encode the given input
+from the input string encoding into percent encoded output "over the wire".
+
+To make that work smoothly for curl users even on Windows, curl would probably
+need to be able to convert from several input encodings.
+
+## try next proxy if one does not work
+
+Allow an application to specify a list of proxies to try, and if it fails to
+connect to the first, go on and try the next instead until the list is
+exhausted. Browsers support this feature at least when they specify proxies
+using `PAC`.
+
+[curl issue 896](https://github.com/curl/curl/issues/896)
+
+## provide timing info for each redirect
+
+curl and libcurl provide timing information via a set of different time-stamps
+(CURLINFO_*_TIME). When curl is following redirects, those returned time
+values are the accumulated sums. An improvement could be to offer separate
+timings for each redirect.
+
+[curl issue 6743](https://github.com/curl/curl/issues/6743)
+
+## `SRV` and `URI` DNS records
+
+Offer support for resolving `SRV` and `URI` DNS records for libcurl to know which
+server to connect to for various protocols (including HTTP).
+
+## CURLINFO_PAUSE_STATE
+
+Return information about the transfer's current pause state, in both
+directions. See [curl issue 2588](https://github.com/curl/curl/issues/2588)
+
+## Expose tried IP addresses that failed
+
+When libcurl fails to connect to a host, it could offer the application the
+addresses that were used in the attempt. Source + destination IP, source +
+destination port and protocol (UDP or TCP) for each failure. Possibly as a
+callback. Perhaps also provide the reason.
+
+[curl issue 2126](https://github.com/curl/curl/issues/2126)
+
+## config file parsing
+
+Consider providing an API, possibly in a separate companion library, for
+parsing a config file like curl's `-K`/`--config` option to allow applications
+to get the same ability to read curl options from files.
+
+See [curl issue 3698](https://github.com/curl/curl/issues/3698)
+
+## erase secrets from heap/stack after use
+
+Introducing a concept and system to erase secrets from memory after use could
+help mitigate and lessen the impact of (future) security problems etc.
+However: most secrets are passed to libcurl as clear text from the application
+and then clearing them within the library adds nothing...
+
+[curl issue 7268](https://github.com/curl/curl/issues/7268)
+
+## add asynch getaddrinfo support
+
+Use `getaddrinfo_a()` to provide an asynch name resolver backend to libcurl
+that does not use threads and does not depend on c-ares. The `getaddrinfo_a`
+function is (probably?) glibc specific but that is a widely used libc among
+our users.
+
+[curl pull request 6746](https://github.com/curl/curl/pull/6746)
+
+## make DoH inherit more transfer properties
+
+Some options are not inherited because they are not relevant for the DoH SSL
+connections, or inheriting the option may result in unexpected behavior. For
+example the user's debug function callback is not inherited because it would
+be unexpected for internal handles (i.e. DoH handles) to be passed to that
+callback.
+
+If an option is not inherited then it is not possible to set it separately
+for DoH without a DoH-specific option. For example:
+`CURLOPT_DOH_SSL_VERIFYHOST`, `CURLOPT_DOH_SSL_VERIFYPEER` and
+`CURLOPT_DOH_SSL_VERIFYSTATUS`.
+
+See [curl issue 6605](https://github.com/curl/curl/issues/6605)
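+
+For reference, this is the pattern the existing DoH-specific options follow;
+a sketch where the DoH server URL is only a placeholder:
+
+    #include <curl/curl.h>
+
+    int main(void)
+    {
+      CURL *curl = curl_easy_init();
+      if(curl) {
+        curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
+        curl_easy_setopt(curl, CURLOPT_DOH_URL,
+                         "https://dns.example.com/dns-query");
+        /* set separately since the internal DoH handles do not inherit the
+           regular verification options */
+        curl_easy_setopt(curl, CURLOPT_DOH_SSL_VERIFYPEER, 1L);
+        curl_easy_setopt(curl, CURLOPT_DOH_SSL_VERIFYHOST, 2L);
+        curl_easy_perform(curl);
+        curl_easy_cleanup(curl);
+      }
+      return 0;
+    }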
+
+# libcurl - multi interface
+
+## More non-blocking
+
+Make sure we do not ever loop because of non-blocking sockets returning
+`EWOULDBLOCK` or similar. Blocking cases include:
+
+- Name resolves on non-Windows unless c-ares or the threaded resolver is used.
+- The threaded resolver may block on cleanup:
+  [curl issue 4852](https://github.com/curl/curl/issues/4852)
+- `file://` transfers
+- TELNET transfers
+- GSSAPI authentication for FTP transfers
+- The "DONE" operation (post transfer protocol-specific actions) for the
+protocols SFTP, SMTP, FTP. Fixing `multi_done()` for this is a worthy task.
+- `curl_multi_remove_handle()` for any of the above.
+- Calling `curl_ws_send()` from a callback
+
+## Better support for same name resolves
+
+If a name resolve has been initiated for a given name and a second easy handle
+wants to resolve that same name as well, make it wait for the first resolve to
+end up in the cache instead of doing a second separate resolve. This is
+especially needed when adding many simultaneous handles using the same
+hostname, as the DNS resolver can otherwise get flooded.
+
+## Non-blocking `curl_multi_remove_handle()`
+
+The multi interface has a few API calls that assume a blocking behavior, like
+`add_handle()` and `remove_handle()` which limits what we can do internally.
+The multi API needs to be moved even more into a single function that "drives"
+everything in a non-blocking manner and signals when something is done. A
+remove or add would then only ask for the action to get started, and
+`multi_perform()` etc would still be called until the add/remove is completed.
+
+## Split connect and authentication process
+
+The multi interface treats the authentication process as part of the connect
+phase. As such, any failures during authentication do not trigger the
+relevant QUIT or LOGOFF for protocols such as IMAP, POP3 and SMTP.
+
+## Edge-triggered sockets should work
+
+The multi_socket API should work with edge-triggered socket events. One of the
+internal actions that need to be improved for this to work perfectly is the
+`maxloops` handling in `transfer.c:readwrite_data()`.
+
+## multi upkeep
+
+In libcurl 7.62.0 we introduced `curl_easy_upkeep`. It unfortunately only
+works on easy handles. We should introduce a version of that for the multi
+handle, and also consider doing `upkeep` automatically on connections in the
+connection pool when the multi handle is in use.
+
+See [curl issue 3199](https://github.com/curl/curl/issues/3199)
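+
+For reference, this is what the existing easy handle variant looks like; the
+multi handle counterpart mentioned in the comment is hypothetical:
+
+    #include <curl/curl.h>
+
+    /* sketch: call this periodically, e.g. from a timer, to keep an
+       otherwise idle easy handle's connection alive */
+    static void maintain(CURL *curl)
+    {
+      curl_easy_upkeep(curl); /* exists since 7.62.0, easy handles only */
+      /* hypothetical and not existing today:
+         curl_multi_upkeep(multi); */
+    }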
+
+## Virtual external sockets
+
+libcurl performs operations on the given file descriptor that presumes it is a
+socket and an application cannot replace them at the moment. Allowing an
+application to fully replace those would allow a larger degree of freedom and
+flexibility.
+
+See [curl issue 5835](https://github.com/curl/curl/issues/5835)
+
+## dynamically decide to use socketpair
+
+For users who do not use `curl_multi_wait()` or do not care for
+`curl_multi_wakeup()`, we could introduce a way to make libcurl NOT create a
+socketpair in the multi handle.
+
+See [curl issue 4829](https://github.com/curl/curl/issues/4829)
+
+# Documentation
+
+## Improve documentation about fork safety
+
+See [curl issue 6968](https://github.com/curl/curl/issues/6968)
+
+# FTP
+
+## HOST
+
+HOST is a command for a client to tell which hostname to use, to offer FTP
+servers name-based virtual hosting:
+
+https://datatracker.ietf.org/doc/html/rfc7151
+
+## A fixed directory listing format
+
+Since listing the contents of a remote directory with FTP returns the list in
+whatever format and style the server likes, without any established or even
+defacto standard existing, it would be a useful feature for users if curl could
+parse the directory listing and output a general curl format that is fixed and
+the same, independent of the server's choice. This would allow users to better and
+more reliably extract information about remote content via FTP directory
+listings.
+
+## GSSAPI via Windows SSPI
+
+In addition to currently supporting the SASL GSSAPI mechanism (Kerberos V5)
+via third-party GSS-API libraries, such as MIT Kerberos, also add support for
+GSSAPI authentication via Windows SSPI.
+
+## STAT for LIST without data connection
+
+Some FTP servers allow STAT for listing directories instead of using LIST, and
+the response is then sent over the control connection instead of as the
+otherwise used data connection.
+
+This is not detailed in any FTP specification.
+
+## Passive transfer could try other IP addresses
+
+When doing FTP operations through a proxy at localhost, the reporter spotted
+that curl only tried to connect once to the proxy, while it had multiple
+addresses and a failed connect on one address should make it try the next.
+
+After switching to passive mode (EPSV), curl could try all IP addresses for
+`localhost`. Currently it tries `::1`, but it should also try `127.0.0.1`.
+
+See [curl issue 1508](https://github.com/curl/curl/issues/1508)
+
+# HTTP
+
+## Provide the error body from a CONNECT response
+
+When curl receives a body response from a CONNECT request to a proxy, it
+always just reads and ignores it. It would make some users happy if curl
+instead optionally would be able to make that response body available. Via a new
+callback? Through some other means?
+
+See [curl issue 9513](https://github.com/curl/curl/issues/9513)
+
+## Obey `Retry-After` in redirects
+
+The `Retry-After` response header is said to dictate "the minimum time that
+the user agent is asked to wait before issuing the redirected request" and
+libcurl does not obey this.
+
+See [curl issue 11447](https://github.com/curl/curl/issues/11447)
+
+## Rearrange request header order
+
+Server implementers often make an effort to detect browsers and to reject
+clients they detect as not matching one. One of the last details we cannot yet
+control in libcurl's HTTP requests, which also can be exploited to detect that
+libcurl is in fact used even when it tries to impersonate a browser, is the
+order of the request headers. I propose that we introduce a new option in
+which you give headers a value, and then when the HTTP request is built it
+sorts the headers based on that number. We could then have internally created
+headers use a default value so only headers that need to be moved have to be
+specified.
+
+## Allow SAN names in HTTP/2 server push
+
+curl only allows HTTP/2 push promise if the provided :authority header value
+exactly matches the hostname given in the URL. It could be extended to allow
+any name that would match the Subject Alternative Names in the server's TLS
+certificate.
+
+See [curl pull request 3581](https://github.com/curl/curl/pull/3581)
+
+## `auth=` in URLs
+
+Add the ability to specify the preferred authentication mechanism to use by
+using `;auth=<mech>` in the login part of the URL.
+
+For example:
+
+`http://test:pass;auth=NTLM@example.com` would be equivalent to specifying
+`--user test:pass;auth=NTLM` or `--user test:pass --ntlm` from the command
+line.
+
+Additionally this should be implemented for proxy base URLs as well.
+
+## alt-svc should fallback if alt-svc does not work
+
+The `alt-svc:` header provides a set of alternative services for curl to use
+instead of the original. If the first attempted one fails, it should try the
+next etc and if all alternatives fail go back to the original.
+
+See [curl issue 4908](https://github.com/curl/curl/issues/4908)
+
+## Require HTTP version X or higher
+
+curl and libcurl provide options for trying higher HTTP versions (for example
+HTTP/2) but then still allow the server to pick version 1.1. We could
+consider adding a way to require a minimum version.
+
+See [curl issue 7980](https://github.com/curl/curl/issues/7980)
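+
+Today an application can only express a preference, not a requirement. A
+sketch of the current behavior, with a hypothetical minimum-version option
+shown as a comment:
+
+    #include <curl/curl.h>
+
+    int main(void)
+    {
+      CURL *curl = curl_easy_init();
+      if(curl) {
+        curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
+        /* ask for HTTP/2 over TLS, but the server may still pick 1.1 */
+        curl_easy_setopt(curl, CURLOPT_HTTP_VERSION,
+                         CURL_HTTP_VERSION_2TLS);
+        /* hypothetical, does not exist today:
+           curl_easy_setopt(curl, CURLOPT_HTTP_VERSION_MIN, ...); */
+        curl_easy_perform(curl);
+        curl_easy_cleanup(curl);
+      }
+      return 0;
+    }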
+
+# TELNET
+
+## ditch stdin
+
+Reading input (to send to the remote server) on stdin is a crappy solution for
+library purposes. We need to invent a good way for the application to be able
+to provide the data to send.
+
+## ditch telnet-specific select
+
+Make the telnet support's network `select()` loop go away and merge the code
+into the main transfer loop. Until this is done, the multi interface does not
+work for telnet.
+
+## feature negotiation debug data
+
+Add telnet feature negotiation data to the debug callback as header data.
+
+## exit immediately upon connection if stdin is /dev/null
+
+If it did, curl could be used to probe if there is a server there listening
+on a specific port. That is, the following command would exit immediately
+after the connection is established with exit code 0:
+
+    curl -s --connect-timeout 2 telnet://example.com:80 </dev/null
+
+# SMTP
+
+## Pass NOTIFY option to CURLOPT_MAIL_RCPT
+
+Is there a way to pass the NOTIFY option to the CURLOPT_MAIL_RCPT option? For
+instance by setting a string that already contains a bracket, something like:
+`curl_slist_append(recipients, "<foo@bar> NOTIFY=SUCCESS,FAILURE");`.
+
+[curl issue 8232](https://github.com/curl/curl/issues/8232)
+
+## Enhanced capability support
+
+Add the ability, for an application that uses libcurl, to obtain the list of
+capabilities returned from the EHLO command.
+
+## Add `CURLOPT_MAIL_CLIENT` option
+
+Rather than use the URL to specify the mail client string to present in the
+`HELO` and `EHLO` commands, libcurl should support a new `CURLOPT`
+specifically for specifying this data as the URL is non-standard and to be
+honest a bit of a hack.
+
+Please see the following thread for more information:
+https://curl.se/mail/lib-2012-05/0178.html
+
+
+# POP3
+
+## Enhanced capability support
+
+Add the ability, for an application that uses libcurl, to obtain the list of
+capabilities returned from the CAPA command.
+
+# IMAP
+
+## Enhanced capability support
+
+Add the ability, for an application that uses libcurl, to obtain the list of
+capabilities returned from the CAPABILITY command.
+
+# LDAP
+
+## SASL based authentication mechanisms
+
+Currently the LDAP module only supports ldap_simple_bind_s() in order to bind
+to an LDAP server. However, this function sends username and password details
+using the simple authentication mechanism (as clear text). Instead, it should
+be possible to use ldap_bind_s(), specifying the security context
+information ourselves.
+
+## `CURLOPT_SSL_CTX_FUNCTION` for LDAPS
+
+`CURLOPT_SSL_CTX_FUNCTION` works perfectly for HTTPS and email protocols, but
+it has no effect for LDAPS connections.
+
+[curl issue 4108](https://github.com/curl/curl/issues/4108)
+
+## Paged searches on LDAP server
+
+[curl issue 4452](https://github.com/curl/curl/issues/4452)
+
+## Certificate-Based Authentication
+
+LDAPS not possible with macOS and Windows with Certificate-Based Authentication
+
+[curl issue 9641](https://github.com/curl/curl/issues/9641)
+
+# SMB
+
+## Support modern versions
+
+curl only supports version 1, which barely anyone is using anymore.
+
+## File listing support
+
+Add support for listing the contents of an SMB share. The output should
+probably be the same as/similar to FTP.
+
+## Honor file timestamps
+
+The timestamp of the transferred file should reflect that of the original
+file.
+
+## Use NTLMv2
+
+Currently the SMB authentication uses NTLMv1.
+
+## Create remote directories
+
+Support for creating remote directories when uploading a file to a directory
+that does not exist on the server, just like `--ftp-create-dirs`.
+
+# FILE
+
+## Directory listing on non-POSIX
+
+Listing the contents of a directory accessed with FILE only works on platforms
+with `opendir()`. Support could be added for more systems, like Windows.
+
+# TLS
+
+## `TLS-PSK` with OpenSSL
+
+Transport Layer Security pre-shared key cipher suites (`TLS-PSK`) is a set of
+cryptographic protocols that provide secure communication based on pre-shared
+keys (`PSK`). These pre-shared keys are symmetric keys shared in advance among
+the communicating parties.
+
+[curl issue 5081](https://github.com/curl/curl/issues/5081)
+
+## TLS channel binding
+
+TLS 1.2 and 1.3 provide the ability to extract some secret data from the TLS
+connection and use it in the client request (usually in some sort of
+authentication) to ensure that the data sent is bound to the specific TLS
+connection and cannot be successfully intercepted by a proxy. This
+functionality can be used in a standard authentication mechanism such as
+GSS-API or SCRAM, or in custom approaches like custom HTTP Authentication
+headers.
+
+For TLS 1.2, the binding type is usually `tls-unique`, and for TLS 1.3 it is
+`tls-exporter`.
+
+- https://datatracker.ietf.org/doc/html/rfc5929
+- https://datatracker.ietf.org/doc/html/rfc9266
+- [curl issue 9226](https://github.com/curl/curl/issues/9226)
+
+## Defeat TLS fingerprinting
+
+By changing the order of TLS extensions provided in the TLS handshake, it is
+sometimes possible to circumvent TLS fingerprinting by servers. The TLS
+extension order is of course not the only way to fingerprint a client.
+
+## Consider OCSP stapling by default
+
+Treat a negative response as a reason for aborting the connection. Since OCSP
+stapling is presumed to get used much less in the future when Let's Encrypt
+drops the OCSP support, the benefit of this might however be limited.
+
+[curl issue 15483](https://github.com/curl/curl/issues/15483)
+
+## Provide callback for cert verification
+
+OpenSSL supports a callback for customized verification of the peer
+certificate, but this does not seem to be exposed in the libcurl APIs. Could
+it be? There is so much that could be done if it were.
+
+## Less memory massaging with Schannel
+
+The Schannel backend does a lot of custom memory management we would rather
+avoid: the repeated allocation + free in sends and the custom memory + realloc
+system for encrypted and decrypted data. That should be avoided and reduced
+for 1) efficiency and 2) safety.
+
+## Support DANE
+
+[DNS-Based Authentication of Named Entities
+(DANE)](https://www.rfc-editor.org/rfc/rfc6698.txt) is a way to provide SSL
+keys and certs over DNS using DNSSEC as an alternative to the CA model.
+
+A patch was posted on March 7 2013
+(https://curl.se/mail/lib-2013-03/0075.html) but it was a too simple approach.
+See Daniel's comments: https://curl.se/mail/lib-2013-03/0103.html
+
+Björn Stenberg once wrote a separate initial take on DANE that was never
+completed.
+
+## TLS record padding
+
+TLS (1.3) offers optional record padding and OpenSSL provides an API for it.
+It could make sense for libcurl to offer this ability to applications to make
+traffic patterns harder to figure out by network traffic observers.
+
+See [curl issue 5398](https://github.com/curl/curl/issues/5398)
+
+## Support Authority Information Access certificate extension (AIA)
+
+AIA can provide various things like certificate revocation lists but more
+importantly information about intermediate CA certificates that can allow the
+validation path to be completed when the HTTPS server does not itself provide
+them.
+
+Since AIA is about downloading certs on demand to complete a TLS handshake, it
+is probably a bit tricky to get done right and a serious privacy leak.
+
+See [curl issue 2793](https://github.com/curl/curl/issues/2793)
+
+## Some TLS options are not offered for HTTPS proxies
+
+Some TLS related options to the command line tool and libcurl are only
+provided for the server and not for HTTPS proxies. `--proxy-tls-max`,
+`--proxy-tlsv1.3`, `--proxy-curves` and a few more. For more documentation on
+this, see: https://curl.se/libcurl/c/tls-options.html
+
+[curl issue 12286](https://github.com/curl/curl/issues/12286)
+
+## Make sure we forbid TLS 1.3 post-handshake authentication
+
+RFC 8740 explains how using HTTP/2 must forbid the use of TLS 1.3
+post-handshake authentication. We should make sure to live up to that.
+
+See [curl issue 5396](https://github.com/curl/curl/issues/5396)
+
+## Support the `clienthello` extension
+
+Certain stupid networks and middle boxes have a problem with SSL handshake
+packets that are within a certain size range because of how that sets some
+bits that previously (in older TLS versions) were not set. The `clienthello`
+extension adds padding to avoid that size range.
+
+- https://datatracker.ietf.org/doc/html/rfc7685
+- [curl issue 2299](https://github.com/curl/curl/issues/2299)
+
+## Share the CA cache
+
+For TLS backends that support CA caching, it makes sense to allow the share
+object to be used to store the CA cache as well via the share API. That would
+allow multiple easy handles to reuse the CA cache and save themselves a lot of
+extra processing overhead.
+
+## Add missing features to TLS backends
+
+The feature matrix at https://curl.se/libcurl/c/tls-options.html shows which
+features are supported by which TLS backends, and thus also where there are
+feature gaps.
+
+# Proxy
+
+## Retry SOCKS handshake on address type not supported
+
+When curl resolves a hostname, it might get a mix of IPv6 and IPv4 returned.
+curl might then use an IPv6 address with a SOCKS5 proxy, which - if it does
+not support IPv6 - returns "Address type not supported" and curl exits with
+that error.
+
+Perhaps it would be preferable if curl, in this situation, instead retried the
+SOCKS handshake using one of the IPv4 addresses for the target host.
+
+See [curl issue 17222](https://github.com/curl/curl/issues/17222)
+
+# Schannel
+
+## Extend support for client certificate authentication
+
+The existing support for the `-E`/`--cert` and `--key` options could be
+extended by supplying a custom certificate and key in PEM format, see:
+[Getting a Certificate for
+Schannel](https://learn.microsoft.com/windows/win32/secauthn/getting-a-certificate-for-schannel)
+
+## Extend support for the `--ciphers` option
+
+The existing support for the `--ciphers` option could be extended by mapping
+the OpenSSL/GnuTLS cipher suites to the Schannel APIs, see [Specifying
+Schannel Ciphers and Cipher
+Strengths](https://learn.microsoft.com/windows/win32/secauthn/specifying-schannel-ciphers-and-cipher-strengths).
+
+## Add option to allow abrupt server closure
+
+libcurl with Schannel errors out when a transfer ends without a known
+termination point from the server (such as the length of the transfer, or an
+SSL "close notify" alert), to protect against truncation attacks. Really old
+servers may neglect to send any termination point. An option could be added to
+ignore such abrupt closures.
+
+[curl issue 4427](https://github.com/curl/curl/issues/4427)
+
+# SASL
+
+## Other authentication mechanisms
+
+Add support for other authentication mechanisms such as `OLP`, `GSS-SPNEGO`
+and others.
+
+## Add `QOP` support to GSSAPI authentication
+
+Currently the GSSAPI authentication only supports the default `QOP` of auth
+(Authentication), whilst Kerberos V5 supports both `auth-int` (Authentication
+with integrity protection) and `auth-conf` (Authentication with integrity and
+privacy protection).
+
+# SSH protocols
+
+## Multiplexing
+
+SSH is a perfectly fine multiplexed protocol which would allow libcurl to do
+multiple parallel transfers from the same host using the same connection, much
+in the same spirit as HTTP/2 does. libcurl however does not take advantage of
+that ability but does instead always create a new connection for new transfers
+even if an existing connection already exists to the host.
+
+To fix this, libcurl would have to detect an existing connection and "attach"
+the new transfer to the existing one.
+
+## Handle growing SFTP files
+
+The SFTP code in libcurl checks the file size *before* a transfer starts and
+then proceeds to transfer exactly that amount of data. If the remote file
+grows while the transfer is in progress libcurl does not notice and does not
+adapt. The OpenSSH SFTP command line tool does, and libcurl could also just
+attempt to download more to see if there is more to get...
+
+[curl issue 4344](https://github.com/curl/curl/issues/4344)
+
+## Read keys from `~/.ssh/id_ecdsa`, `id_ed25519`
+
+The libssh2 backend in curl is limited to only reading keys from `id_rsa` and
+`id_dsa`, which makes it fail connecting to servers that use more modern key
+types.
+
+[curl issue 8586](https://github.com/curl/curl/issues/8586)
+
+## Support `CURLOPT_PREQUOTE`
+
+The two other `QUOTE` options are supported for SFTP, but this was left out
+for unknown reasons.
+
+## SSH over HTTPS proxy for libssh
+
+The SSH based protocols SFTP and SCP did not work over HTTPS proxy at all
+until [curl pull request 6021](https://github.com/curl/curl/pull/6021) brought
+the functionality with the libssh2 backend. Presumably, this support can/could
+be added for the libssh backend as well.
+
+## SFTP with `SCP://`
+
+OpenSSH 9 switched their `scp` tool to speak SFTP under the hood. Going
+forward it might be worth having curl or libcurl attempt SFTP if SCP fails,
+to follow suit.
+
+# Command line tool
+
+## sync
+
+`curl --sync http://example.com/feed[1-100].rss` or
+`curl --sync http://example.net/{index,calendar,history}.html`
+
+Downloads a range or set of URLs using the remote name, but only if the remote
+file is newer than the local file. A `Last-Modified` HTTP date header should
+also be used to set the mod date on the downloaded file.
+
+## glob posts
+
+Globbing support for `-d` and `-F`, as in `curl -d "name=foo[0-9]" URL`. This
+is easily scripted though.
+
+## `--proxycommand`
+
+Allow the user to make curl run a command and use its stdio to make requests
+and not do any network connection by itself. Example:
+
+    curl --proxycommand 'ssh pi@raspberrypi.local -W 10.1.1.75 80' \
+      http://some/otherwise/unavailable/service.php
+
+See [curl issue 4941](https://github.com/curl/curl/issues/4941)
+
+## UTF-8 filenames in Content-Disposition
+
+RFC 6266 documents how UTF-8 names can be passed to a client in the
+`Content-Disposition` header, and curl does not support this.
+
+[curl issue 1888](https://github.com/curl/curl/issues/1888)
+
+## Option to make `-Z` merge line-based outputs on stdout
+
+When a user requests multiple line-based files using `-Z` and sends them to
+stdout, curl does not *merge* them into complete lines but may send partial
+lines from several sources.
+
+[curl issue 5175](https://github.com/curl/curl/issues/5175)
+
+## specify which response codes make `-f`/`--fail` return error
+
+Allow a user to specify exactly which response code(s) are fine and which are
+errors for their specific use cases.
+
+## Choose the name of file in braces for complex URLs
+
+When using braces to download a list of URLs with complicated names in the
+list of alternatives, it could be handy to allow curl to use other names when
+saving.
+
+Consider a way to offer that. Possibly like
+`{partURL1:name1,partURL2:name2,partURL3:name3}` where the name following the
+colon is the output name.
+
+See [curl issue 221](https://github.com/curl/curl/issues/221)
+
+## improve how curl works in a Windows console window
+
+If you pull the scroll bar when transferring with curl in a Windows console
+window, the transfer is interrupted and can get disconnected. This can
+probably be improved. See [curl issue 322](https://github.com/curl/curl/issues/322)
+
+## Windows: set attribute 'archive' for completed downloads
+
+The archive bit (`FILE_ATTRIBUTE_ARCHIVE`, 0x20) separates files that shall be
+backed up from those that are either not ready or have not changed.
+
+Downloads in progress are neither ready to be backed up, nor should they be
+opened by a different process. Only after a download has been completed is it
+sensible to include it in any integral snapshot or backup of the system.
+
+See [curl issue 3354](https://github.com/curl/curl/issues/3354)
+
+## keep running, read instructions from pipe/socket
+
+Provide an option that makes curl not exit after the last URL (or even work
+without a given URL), and then make it read further instructions passed on a
+pipe or over a socket, so that a subsequent curl invoke can talk to the still
+running instance and ask for transfers to get done, and thus maintain its
+connection pool, DNS cache and more.
+
+## Acknowledge `Ratelimit` headers
+
+Consider a command line option that can make curl do multiple serial requests
+while acknowledging server specified [rate
+limits](https://datatracker.ietf.org/doc/draft-ietf-httpapi-ratelimit-headers/).
+
+See [curl issue 5406](https://github.com/curl/curl/issues/5406)
+
+## `--dry-run`
+
+A command line option that makes curl show exactly what it would do and send
+if it would run for real.
+
+See [curl issue 5426](https://github.com/curl/curl/issues/5426)
+
+## `--retry` should resume
+
+When `--retry` is used and curl actually retries a transfer, it should use the
+already transferred data and do a resumed transfer for the rest (when
+possible) so that it does not have to transfer the same data again that was
+already transferred before the retry.
+
+See [curl issue 1084](https://github.com/curl/curl/issues/1084)
+
+## consider filename from the redirected URL with `-O` ?
+
+When a user gives a URL and uses `-O`, and curl follows a redirect to a new
+URL, the filename is not extracted and used from the newly redirected-to URL
+even if the new URL may have a much more sensible filename.
+
+This is clearly documented and helps security since there is no surprise to
+users about which filename might get overwritten, but maybe a new option
+could allow for this or maybe `-J` should imply such a treatment as well as
+`-J` already allows for the server to decide what filename to use so it
+already provides the "may overwrite any file" risk.
+
+This is extra tricky if the original URL has no filename part at all since
+then the current code path does error out with an error message, and we cannot
+*know* already at that point if curl is redirected to a URL that has a
+filename...
+
+See [curl issue 1241](https://github.com/curl/curl/issues/1241)
+
+## retry on network is unreachable
+
+The `--retry` option retries transfers on *transient failures*. We later added
+`--retry-connrefused` to also retry for *connection refused* errors.
+
+Suggestions have been brought up to also allow retry on *network is unreachable*
+errors and while totally reasonable, maybe we should consider a way to make
+this more configurable than to add a new option for every new error people
+want to retry for?
+
+[curl issue 1603](https://github.com/curl/curl/issues/1603)
+
+## hostname sections in config files
+
+Config files would be more powerful if they could set different configurations
+depending on the used URL, hostname or possibly origin. Then a default
+`.curlrc` could use a specific user-agent only when doing requests against a
+certain site.
+
+## retry on the redirected-to URL
+
+When curl is told to `--retry` a failed transfer and follows redirects, it
+might get an HTTP 429 response from the redirected-to URL and not the original
+one, which could then make curl decide to rather retry the transfer on that
+URL only instead of redoing the original operation on the original URL.
+
+This is perhaps extra relevant if the original transfer is a large POST that
+redirects to a separate GET, and that GET is what gets the 429.
+
+See [curl issue 5462](https://github.com/curl/curl/issues/5462)
+
+## Set the modification date on an uploaded file
+
+For SFTP and possibly FTP, curl could offer an option to set the modification
+time for the uploaded file.
+
+See [curl issue 5768](https://github.com/curl/curl/issues/5768)
+
+## Use multiple parallel transfers for a single download
+
+To enhance transfer speed, downloading a single URL can be split up into
+multiple separate range downloads that get combined into a single final
+result.
+
+An ideal implementation would not use a specified number of parallel
+transfers, but curl could:
+- First start getting the full file as transfer A
+- If, after N seconds have passed, the transfer is expected to continue for
+  M seconds or more, add a new transfer (B) that asks for the second half of
+  A's content (and stop A at the middle).
+- If splitting up the work improves the transfer rate, it could then be done
+  again. Then again, etc up to a limit.
+
+This way, if transfer B fails (because Range: is not supported) it lets
+transfer A remain the single one. N and M could be set to some sensible
+defaults.
+
+See [curl issue 5774](https://github.com/curl/curl/issues/5774)
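+
+The main building block already exists: a second easy handle can request the
+latter part of the same resource with `CURLOPT_RANGE`. A rough sketch where
+the URL and the byte offset are just examples and the split would really be
+computed from the reported content size:
+
+    #include <curl/curl.h>
+
+    int main(void)
+    {
+      CURLM *multi = curl_multi_init();
+      CURL *a = curl_easy_init(); /* transfer A: starts on the full file */
+      CURL *b = curl_easy_init(); /* transfer B: asks for the second half */
+      if(multi && a && b) {
+        curl_easy_setopt(a, CURLOPT_URL, "https://example.com/big.bin");
+        curl_easy_setopt(b, CURLOPT_URL, "https://example.com/big.bin");
+        /* a real implementation would stop A at this offset and write the
+           two byte ranges to the right places in the output file */
+        curl_easy_setopt(b, CURLOPT_RANGE, "50000000-");
+        curl_multi_add_handle(multi, a);
+        curl_multi_add_handle(multi, b);
+        /* drive both transfers with curl_multi_perform()/curl_multi_poll(),
+           then clean up the easy handles and the multi handle */
+      }
+      return 0;
+    }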
+
+## Prevent terminal injection when writing to terminal
+
+curl could offer an option to make escape sequences either non-functional or
+avoid cursor moves or similar, to reduce the risk of a user getting tricked by
+clever terminal output.
+
+See [curl issue 6150](https://github.com/curl/curl/issues/6150)
+
+## `-J` and `-O` with %-encoded filenames
+
+`-J`/`--remote-header-name` does not decode %-encoded filenames. RFC 6266
+details how it should be done. The can of worms is basically that we have no
+charset handling in curl and ASCII >=128 is a challenge for us. Not to mention
+that decoding also means that we need to check for nastiness that is
+attempted, like `../` sequences and the like. Probably everything to the left
+of any embedded slashes should be cut off. See
+https://curl.se/bug/view.cgi?id=1294
+
+`-O` also does not decode %-encoded names, and while it has even less
+information about the charset involved the process is similar to the `-J`
+case.
+
+Note that we do not decode `-O` without the user asking for it with some other
+means, since `-O` has always been documented to use the name exactly as
+specified in the URL.
+
+## `-J` with `-C -`
+
+When using `-J` (with `-O`), automatically resumed downloading together with
+`-C -` fails. Without `-J` the same command line works. This happens because
+the resume logic is worked out before the target filename (and thus its
+pre-transfer size) has been figured out. This can be improved.
+
+https://curl.se/bug/view.cgi?id=1169
+
+## `--retry` and transfer timeouts
+
+If using `--retry` and the transfer times out (possibly due to using `-m` or
+`-y`/`-Y`), the next attempt does not resume the transfer properly from what
+was downloaded in the previous attempt but truncates and restarts at the
+original position it was at before the previous failed attempt. See
+https://curl.se/mail/lib-2008-01/0080.html
+
+# Build
+
+## Enable `PIE` and `RELRO` by default
+
+Especially when having programs that execute curl via the command line, `PIE`
+renders the exploitation of memory corruption vulnerabilities a lot more
+difficult. This can be attributed to the additional information leaks being
+required to conduct a successful attack. `RELRO`, on the other hand, marks
+different binary sections like the `GOT` as read-only and thus kills a handful
+of techniques that come in handy when attackers are able to arbitrarily
+overwrite memory. A few tests showed that enabling these features had close to
+no impact, neither on the performance nor on the general functionality of
+curl.
+
+## Do not use GNU libtool on OpenBSD
+
+When compiling curl on OpenBSD with `--enable-debug` it gives linking errors
+when you use GNU libtool. This can be fixed by using the libtool provided by
+OpenBSD itself. However for this the user always needs to invoke make with
+`LIBTOOL=/usr/bin/libtool`. It would be nice if the script could have some
+magic to detect if this system is an OpenBSD host and then use the OpenBSD
+libtool instead.
+
+See [curl issue 5862](https://github.com/curl/curl/issues/5862)
+
+## Package curl for Windows in a signed installer
+
+See [curl issue 5424](https://github.com/curl/curl/issues/5424)
+
+## make configure use `--cache-file` more and better
+
+The configure script can be improved to cache more values so that repeated
+invokes run much faster.
+
+See [curl issue 7753](https://github.com/curl/curl/issues/7753)
+
+# Test suite
+
+## SSL tunnel
+
+Make our own version of stunnel for simple port forwarding to enable HTTPS and
+FTP-SSL tests without the stunnel dependency, and it could allow us to provide
+test tools built with either OpenSSL or GnuTLS.
+
+## more protocols supported
+
+Extend the test suite to include more protocols. The telnet tests could just
+do FTP or HTTP operations (for which we have test servers).
+
+## more platforms supported
+
+Make the test suite work on more platforms, such as OpenBSD and macOS.
+Removing fork()s should make it even more portable.
+
+## write an SMB test server to replace impacket
+
+This would allow us to run SMB tests on more platforms and do better and more
+covering tests.
+
+See [curl issue 15697](https://github.com/curl/curl/issues/15697)
+
+## Use the RFC 6265 test suite
+
+A test suite made for HTTP cookies (RFC 6265) by Adam Barth [is
+available](https://github.com/abarth/http-state/tree/master/tests).
+
+It would be good if someone would write a script/setup that would run curl
+with that test suite and detect deviations. Ideally, that would even be
+incorporated into our regular test suite.
+
+## Run web-platform-tests URL tests
+
+Run web-platform-tests URL tests and compare results with browsers on
+`wpt.fyi`.
+
+It would help us find issues to fix and help us document where our parser
+differs from the WHATWG URL spec parsers.
+
+See [curl issue 4477](https://github.com/curl/curl/issues/4477)
+
+# MQTT
+
+## Support rate-limiting
+
+The rate-limiting logic is done in the PERFORMING state in multi.c but MQTT is
+not (yet) implemented to use that.
+
+## Support MQTTS
+
+## Handle network blocks
+
+Running the test suite with `CURL_DBG_SOCK_WBLOCK=90 ./runtests.pl -a mqtt`
+makes several MQTT test cases fail where they should not.
+
+## large payloads
+
+libcurl unnecessarily allocates heap memory to hold the entire payload to get
+sent, when the data is already perfectly accessible where it is when
+`CURLOPT_POSTFIELDS` is used. This is highly inefficient for larger payloads.
+Additionally, libcurl does not support using the read callback for sending
+MQTT, which is yet another way to avoid having to hold a large payload in
+memory.
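+
+For illustration, this is the `CURLOPT_POSTFIELDS` style publish that exists
+today and that triggers the extra copy; the hostname, topic and payload are
+just examples:
+
+    #include <curl/curl.h>
+    #include <string.h>
+
+    int main(void)
+    {
+      static const char payload[] = "temperature=21";
+      CURL *curl = curl_easy_init();
+      if(curl) {
+        /* mqtt://host/topic publishes the POSTFIELDS data to the topic */
+        curl_easy_setopt(curl, CURLOPT_URL, "mqtt://example.com/home/sensor");
+        curl_easy_setopt(curl, CURLOPT_POSTFIELDS, payload);
+        curl_easy_setopt(curl, CURLOPT_POSTFIELDSIZE, (long)strlen(payload));
+        /* a CURLOPT_READFUNCTION based upload would avoid holding the whole
+           payload in memory, but that is not supported for MQTT today */
+        curl_easy_perform(curl);
+        curl_easy_cleanup(curl);
+      }
+      return 0;
+    }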
+
+# TFTP
+
+## TFTP does not convert LF to CRLF for `mode=netascii`
+
+RFC 3617 defines that a TFTP transfer can be done using `netascii` mode. curl
+does not support extracting that mode from the URL nor does it treat such
+transfers specifically. It should probably do LF to CRLF translations for
+them.
+
+See [curl issue 12655](https://github.com/curl/curl/issues/12655)
+
+# Gopher
+
+## Handle network blocks
+
+Running the test suite with `CURL_DBG_SOCK_WBLOCK=90 ./runtests.pl -a 1200 to
+1300` makes several Gopher test cases fail where they should not.
diff --git a/packages/OS400/makefile.sh b/packages/OS400/makefile.sh
index df47a74..2b688a4 100755
--- a/packages/OS400/makefile.sh
+++ b/packages/OS400/makefile.sh
@@ -67,7 +67,7 @@
 #       Copy some documentation files if needed.
 
 for TEXT in "${TOPDIR}/COPYING" "${SCRIPTDIR}/README.OS400"             \
-    "${TOPDIR}/CHANGES.md" "${TOPDIR}/docs/THANKS" "${TOPDIR}/docs/FAQ"    \
+    "${TOPDIR}/CHANGES.md" "${TOPDIR}/docs/THANKS" "${TOPDIR}/docs/FAQ.md"    \
     "${TOPDIR}/docs/FEATURES" "${TOPDIR}/docs/SSLCERTS.md"              \
     "${TOPDIR}/docs/RESOURCES" "${TOPDIR}/docs/VERSIONS.md"             \
     "${TOPDIR}/docs/HISTORY.md"
diff --git a/scripts/mdlinkcheck b/scripts/mdlinkcheck
index dfeeac0..bce3ca3 100755
--- a/scripts/mdlinkcheck
+++ b/scripts/mdlinkcheck
@@ -87,7 +87,7 @@
 my %flink;
 
 # list all .md files in the repo
-my @files=`git ls-files '**.md' docs/TODO docs/KNOWN_BUGS docs/FAQ`;
+my @files=`git ls-files '**.md'`;
 
 sub storelink {
     my ($f, $line, $link) = @_;