To understand how web cache poisoning vulnerabilities arise, it is important to have a basic understanding of how web caches work.

If a server had to send a new response to every single HTTP request separately, this would likely overload the server, resulting in latency issues and a poor user experience, especially during busy periods. Caching is primarily a means of reducing such issues. The cache sits between the server and the user, where it saves (caches) the responses to particular requests, usually for a fixed amount of time. If another user then sends an equivalent request, the cache simply serves a copy of the cached response directly to the user, without any interaction from the back-end. This greatly eases the load on the server by reducing the number of duplicate requests it has to handle.

When the cache receives an HTTP request, it first has to determine whether there is a cached response that it can serve directly, or whether it has to forward the request for handling by the back-end server. Caches identify equivalent requests by comparing a predefined subset of the request's components, known collectively as the "cache key". Typically, this contains the request line and Host header. Components of the request that are not included in the cache key are said to be "unkeyed". If the cache key of an incoming request matches the key of a previous request, then the cache considers them to be equivalent.

Related topics:
- Web Cache Entanglement: Novel Pathways to Poisoning
- Chaining web cache poisoning vulnerabilities
- Responses that expose too much information
- Exploiting cookie-handling vulnerabilities
- Exploiting unsafe handling of resource imports
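The lookup logic can be illustrated with a minimal sketch. This is a hypothetical toy cache, not any real proxy's implementation: it keys only on the request method, path, and Host header, so any other header is unkeyed and two requests differing only in unkeyed components are treated as equivalent.

```python
# Toy web cache sketch (hypothetical; real caches vary in which
# request components they include in the cache key).

responses = {}  # cache store: cache key -> cached response

def cache_key(method, path, headers):
    # Only the request line and Host header are keyed;
    # everything else (e.g. X-Forwarded-Host) is unkeyed.
    return (method, path, headers.get("Host"))

def handle(method, path, headers, origin):
    key = cache_key(method, path, headers)
    if key in responses:
        # Equivalent request seen before: serve the cached copy
        # without any interaction with the back-end.
        return responses[key]
    # Cache miss: forward to the back-end and store the response.
    response = origin(method, path, headers)
    responses[key] = response
    return response
```

Note that a second request carrying a different value in an unkeyed header still hits the cached entry, which is exactly why unkeyed inputs matter for cache poisoning: whatever influence they had on the first response is served to every later user with the same key.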