mirror of https://github.com/fafhrd91/actix-web synced 2025-07-03 01:34:32 +02:00

Compare commits


177 Commits

Author SHA1 Message Date
e35ec28cd2 prepare actix-web release 4.3.1 2023-02-26 03:44:34 +00:00
35006e9cae prepare actix-web-codegen release 4.2.0 2023-02-26 03:42:27 +00:00
115701eb86 prepare awc release 3.1.1 2023-02-26 03:34:47 +00:00
e2fed91efd format markdown with prettier 2023-02-26 03:26:51 +00:00
d4b833ccf0 actix-multipart: Feature: Add typed multipart form extractor (#2883)
Co-authored-by: Rob Ede <robjtede@icloud.com>
2023-02-26 03:26:06 +00:00
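Note: for readers unfamiliar with the typed extractor added in #2883, a minimal sketch of the derive-based API (struct and field names are illustrative, not taken from the PR; assumes the default `derive` feature of actix-multipart 0.5):

```rust
use actix_multipart::form::{tempfile::TempFile, text::Text, MultipartForm};
use actix_web::{post, HttpResponse, Responder};

// hypothetical form; `Text<T>` and `TempFile` are wrapper types from the new `form` module
#[derive(MultipartForm)]
struct UploadForm {
    description: Text<String>,
    file: TempFile,
}

#[post("/upload")]
async fn upload(MultipartForm(form): MultipartForm<UploadForm>) -> impl Responder {
    HttpResponse::Ok().body(format!(
        "received `{}` ({} bytes)",
        form.description.0,
        form.file.size
    ))
}
```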
358c1cf85b improve docs for app_config methods 2023-02-22 23:06:23 +00:00
42193bee29 fix typos (#2982) 2023-02-20 08:11:16 +00:00
dc08ea044b clippy 2023-02-13 21:09:28 +00:00
85d88ffada Fix minor typo in Markdown (#2977) 2023-02-12 02:47:42 +00:00
bf19a0e761 added body manipulation example for error handlers (#2973)
Closes https://github.com/actix/actix-web/issues/2856
2023-02-09 20:37:01 +00:00
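The pattern that example demonstrates, sketched from the middleware's public API (body text and handler name are invented): replace the response body inside an `ErrorHandlers` handler, then box it and move it into the "right" slot of `EitherBody`.

```rust
use actix_web::{dev::ServiceResponse, middleware::ErrorHandlerResponse, Result};

fn add_error_body<B>(res: ServiceResponse<B>) -> Result<ErrorHandlerResponse<B>> {
    // split the response into its request and response parts
    let (req, res) = res.into_parts();

    // swap in a new body, then box it and place it in the "right" slot of EitherBody
    let res = res.set_body("something went wrong");
    let res = ServiceResponse::new(req, res)
        .map_into_boxed_body()
        .map_into_right_body();

    Ok(ErrorHandlerResponse::Response(res))
}

// registered with something like:
// App::new().wrap(ErrorHandlers::new().handler(StatusCode::INTERNAL_SERVER_ERROR, add_error_body))
```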
bf1f169be2 [awc] change client::Connect to be public (#2690) 2023-02-09 09:32:04 +00:00
359d5d5c80 refactor codegen route guards 2023-02-06 17:06:47 +00:00
65c0545a7a add support for creating custom methods with route macro (#2969)
Co-authored-by: Rob Ede <robjtede@icloud.com>
Closes https://github.com/actix/actix-web/issues/2893
2023-02-06 12:40:41 +00:00
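A hedged sketch of what #2969 enables (path, handler, and method name are made up): the `method` argument of `#[route]` now accepts non-standard HTTP methods.

```rust
use actix_web::{route, HttpResponse, Responder};

// a WebDAV-style custom method; previously the macro only accepted standard methods
#[route("/resource", method = "LOCK")]
async fn lock_resource() -> impl Responder {
    HttpResponse::Ok().body("locked")
}
```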
b933ed4456 add tests for files_listing_renderer 2023-02-03 21:04:07 -05:00
4bff1d0abe require safe tokio version range
see https://rustsec.org/advisories/RUSTSEC-2023-0005
2023-02-03 20:35:19 -05:00
fa106da555 refactor: move Host guard into own module 2023-01-30 11:36:12 -05:00
c15016dafb prepare actix-files release 0.6.3 2023-01-21 19:03:19 +00:00
74688843ba prepare actix-http-test release 3.1.0 2023-01-21 19:01:14 +00:00
845156da85 prepare actix-web-actors release 4.2.0 2023-01-21 19:01:08 +00:00
98752c053c prepare actix-multipart release 0.5.0 2023-01-21 18:59:13 +00:00
df6fde883c prepare actix-web release 4.3.0 2023-01-21 18:57:42 +00:00
8d4cb8c69a prepare awc release 3.1.0 2023-01-21 18:54:58 +00:00
dd9ac4d9b8 prepare actix-http release 3.3.0 2023-01-21 18:52:57 +00:00
72c80f9107 update tokio-uring support to 0.4 2023-01-21 18:46:44 +00:00
b00fe72cf6 Update base64 to 0.21 (#2966)
Co-authored-by: Rob Ede <robjtede@icloud.com>
2023-01-21 01:36:08 +00:00
2f0b8a264a fix non-empty body of http2 HEAD response (#2920)
Co-authored-by: Rob Ede <robjtede@icloud.com>
2023-01-21 00:51:49 +00:00
b9f0faafde add cache-status and cdn-cache-control header names (#2968)
* add cache-status and cdn-cache-control header names

* fix changelog

* update docs with rfc numbers
2023-01-21 00:02:54 +00:00
6627109984 Add fallible versions of test_utils helpers to actix-test (#2961)
Co-authored-by: Rob Ede <robjtede@icloud.com>
2023-01-11 11:43:51 +00:00
b9f54c8796 use secure tokio version range
see RUSTSEC-2023-0001

fixes #2962
2023-01-10 08:58:38 +00:00
cfd40b4f15 Implement MessageBody for Cow<'static, {[u8], str}> (#2959) 2023-01-06 21:56:16 +00:00
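Roughly what that impl allows, as an illustrative sketch (assuming actix-web picks up the new impl from actix-http): a `Cow` body can be handed to a response builder directly, whichever variant it holds.

```rust
use std::borrow::Cow;
use actix_web::HttpResponse;

fn respond(cached: Option<&'static str>, generated: String) -> HttpResponse {
    // both Cow variants now implement MessageBody, so neither path forces a copy
    let body: Cow<'static, str> = match cached {
        Some(s) => Cow::Borrowed(s),
        None => Cow::Owned(generated),
    };
    HttpResponse::Ok().body(body)
}
```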
08c2cdf641 http service finalizer for automatic h2c detection (#2957)
* http service finalizer for automatic h2c detection

* update changelog

* add h2c auto test
2023-01-03 14:43:02 +00:00
fbd0e5dd0a add headermap::retain (#2955)
* add headermap::retain

* update changelog and docs

* fix retain doc test
2023-01-02 13:38:07 +00:00
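For reference, a small sketch of how the new `HeaderMap::retain` might be used (assuming the usual retain idiom of a closure over the name and value returning `bool`; header choices are illustrative):

```rust
use actix_http::header::{self, HeaderMap, HeaderValue};

fn main() {
    let mut map = HeaderMap::new();
    map.insert(header::CONTENT_TYPE, HeaderValue::from_static("text/plain"));
    map.insert(header::CONNECTION, HeaderValue::from_static("close"));

    // keep everything except connection-oriented headers
    map.retain(|name, _value| *name != header::CONNECTION);

    assert_eq!(map.len(), 1);
}
```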
7b936bc443 add some useful header name constants (#2956) 2023-01-02 13:33:31 +00:00
d2364c80c4 improve error handling on new example 2023-01-02 00:16:59 +00:00
77459ec415 add h2c example 2023-01-02 00:14:25 +00:00
6f0a6bd1bb address clippy lints
For intrepid commit message readers:
The choice to add allows for the inlined format args lint instead of actually
inlining them is not very clear because our actual real world MSRV is not clear.
We currently claim 1.60 is our MSRV but this is mainly due to dependencies. I'm
fairly sure that we could support < 1.58 if those deps are outdated in a user's
lockfile. We'll remove these allows again at some point soon.
2023-01-01 20:56:34 +00:00
06c3513bc0 add Allow header to resource's default 405 handler (#2949) 2022-12-21 20:28:45 +00:00
29bd6a1dd5 fix version requirement for futures_util 2022-12-18 01:34:48 +00:00
17f7cd2aae bump zstd to 0.12 2022-12-18 01:31:06 +00:00
ede645ee4e bump criterion to 0.4 2022-12-18 01:11:04 +00:00
6d48593a60 fix doc tests 2022-11-25 23:28:31 +00:00
3c69d078b2 add redirect service (#1961) 2022-11-25 21:44:52 +00:00
e7c34f2e45 tweak form docs 2022-11-25 21:38:57 +00:00
d708a4de6d add acceptable guard (#2265) 2022-11-25 21:04:24 +00:00
d97bd7ec17 fix msrv CI 2022-11-25 17:37:23 +00:00
fcd06c9896 workaround zstd msrv issue 2022-11-25 17:28:06 +00:00
1065043528 ci: use dtolnay's rust-toolchain action 2022-11-25 17:00:59 +00:00
45b77c6819 GitHub Workflows security hardening (#2923) 2022-11-04 00:42:22 +00:00
a2e2c30d59 use tokio-util deps directly where possible 2022-10-30 19:47:49 +00:00
83cd061c86 remove fakeshadow from author lists (#2921) 2022-10-25 16:37:04 +01:00
068909f1b3 Replace deprecated twoway with memchr (#2909) 2022-10-14 11:52:13 +00:00
f8cb71e789 remove incomplete doc comment 2022-10-14 13:20:38 +02:00
73b94e902d fix xhtml pages' content-disposition (#2903)
Co-authored-by: Yuki Okushi <jtitor@2k36.org>
2022-10-09 12:44:10 +01:00
ad7e67f940 add middleware::logger::custom_response_replace (#2631)
Co-authored-by: Rob Ede <robjtede@icloud.com>
2022-09-26 18:44:51 +00:00
1519ae7772 clarify tokio::main docs 2022-09-26 12:29:57 +01:00
cc7145d41d rust 1.64 clippy run (#2891) 2022-09-25 20:54:17 +01:00
172c4c7a0a use noop hasher in extensions (#2890) 2022-09-25 15:32:26 +01:00
fd63305859 Fix actix-multipart field content_type() to return an Option (#2885)
Co-authored-by: Rob Ede <robjtede@icloud.com>
2022-09-23 17:06:40 +00:00
ef64d6a27c update derive_more dependency to 0.99.8 (#2888) 2022-09-23 12:39:18 +00:00
4d3689db5e Remove unnecessary clones in extractor configs (#2884)
Co-authored-by: erhodes <erik@space-nav.com>
Co-authored-by: Rob Ede <robjtede@icloud.com>
2022-09-20 23:17:58 +00:00
894effb856 prepare actix-router release 0.5.1 2022-09-19 18:52:16 +01:00
07a7290432 Fix typo in error string for i32 parse in router deserialization (#2876)
* fix typo in error string for i32 parse

* update actix-router changelog for #2876

* Update CHANGES.md

Co-authored-by: Rob Ede <robjtede@icloud.com>
2022-09-19 18:44:52 +01:00
bd5c0af0a6 Add ability to set default error handlers to the ErrorHandler middleware (#2784)
Co-authored-by: erhodes <erik@space-nav.com>
Co-authored-by: Rob Ede <robjtede@icloud.com>
2022-09-15 13:06:34 +00:00
c73fba16ce implement MessageBody for mut B (#2868) 2022-09-14 11:23:22 +01:00
909461087c add ContentDisposition::attachment constructor (#2867) 2022-09-13 01:19:25 +01:00
40f7ab38d2 prepare actix-web release 4.2.1 2022-09-12 10:43:03 +01:00
a9e44bcf07 fix -http version to 3.2.2 (#2871)
fixes #2869
2022-09-12 10:42:22 +01:00
7767cf3071 prepare actix-web release 4.2.0 2022-09-11 16:44:46 +01:00
b59a96d9d7 prepare actix-web-codegen release 4.1.0 2022-09-11 16:42:28 +01:00
037740bf62 prepare actix-http release 3.2.2 2022-09-11 16:41:29 +01:00
386258c285 clarify worker_max_blocking_threads default 2022-09-06 10:13:10 +01:00
99bf774e94 update gh-pages deploy action 2022-09-03 22:15:59 +01:00
35b0fd1a85 specify branch in doc job 2022-09-03 22:05:28 +01:00
0b5b4dcbf3 reduce size of docs branch 2022-09-03 21:56:37 +01:00
c993055fc8 replace askama_escape in favor of v_htmlescape (#2824) 2022-08-30 09:34:46 +01:00
679f61cf37 bump msrv to 1.59 2022-08-27 13:14:16 +01:00
056de320f0 fix scope doc example
fixes #2843
2022-08-25 03:17:48 +01:00
f220719fae prepare awc release 3.0.1 2022-08-25 03:13:31 +01:00
c9f91796df awc: correctly handle redirections that begin with // (#2840) 2022-08-25 03:12:58 +01:00
ea764b1d57 add feature annotations to docs 2022-07-31 23:40:09 +01:00
19aa14a9d6 re-order HttpServer methods for better docs 2022-07-31 22:10:51 +01:00
10746fb2fb improve HttpServer docs 2022-07-31 21:58:15 +01:00
4bbe60b609 document h2 ping-pong 2022-07-24 16:42:35 +01:00
8ff489aa90 apply fix from #2369 2022-07-24 16:35:00 +01:00
e0a88cea8d remove unwindsafe assertions 2022-07-24 02:47:12 +01:00
d78ff283af prepare actix-test release 0.1.0 2022-07-24 02:13:46 +01:00
ce6d520215 prepare actix-http-test release 3.0.0 2022-07-24 02:11:21 +01:00
3e25742a41 prepare actix-files release 0.6.2 2022-07-23 16:37:59 +01:00
20f4cfe6b5 fix partial ranges for video content (#2817)
fixes #2815
2022-07-23 16:27:01 +01:00
6408291ab0 appease clippy by deriving Eq on a bunch of items (#2818) 2022-07-23 16:26:48 +01:00
8d260e599f clippy 2022-07-23 02:48:28 +01:00
14bcf72ec1 web utilizes const header names 2022-07-22 20:21:58 +01:00
6485434a33 update bump script 2022-07-22 20:19:15 +01:00
16c7c16463 reduce scope of once_cell change 2022-07-22 20:19:02 +01:00
9b0fdca6e9 Remove some unnecessary uses of once_cell::sync::Lazy (#2816) 2022-07-22 20:18:38 +01:00
8759d79b03 routes macro allowing multiple paths per handler (#2718)
* WIP: basic implementation for `routes` macro

* chore: changelog, docs, tests

* error on missing methods

* Apply suggestions from code review

Co-authored-by: Igor Aleksanov <popzxc@yandex.ru>

* update test stderr expectation

* add additional tests

* fix stderr output

* remove useless ResourceType

this is dead code from back when .to and .to_async were different ways to add a service

Co-authored-by: Igor Aleksanov <popzxc@yandex.ru>
Co-authored-by: Rob Ede <robjtede@icloud.com>
2022-07-04 04:31:49 +00:00
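The resulting syntax, sketched (paths are invented): a single handler registered for several path/method combinations by stacking method attributes under `#[routes]`.

```rust
use actix_web::{routes, HttpResponse, Responder};

// one handler, three registrations; the inner attributes are consumed by `#[routes]`
#[routes]
#[get("/")]
#[get("/index.html")]
#[post("/")]
async fn index() -> impl Responder {
    HttpResponse::Ok().body("hello")
}
```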
c0d5d7bdb5 add octal-ish CL test 2022-07-02 21:04:37 +01:00
40eab1f091 simplify simple decoder tests 2022-07-02 20:07:27 +01:00
75517cce82 install cargo hack in CI faster 2022-07-02 20:00:59 +01:00
9b51624b27 update cargo-cache to 0.8.2 2022-07-02 18:43:19 +01:00
8e2ae8cd40 install nextest faster 2022-07-02 18:38:08 +01:00
9a2f8450e0 install older cargo-edit 2022-07-02 17:40:03 +01:00
23ef51609e s/cargo-add/cargo-edit 2022-07-02 17:29:06 +01:00
f7d629a61a fix cargo-add in CI 2022-07-02 17:20:46 +01:00
e0845d9ad9 add msrv workarounds to ci 2022-07-02 17:12:24 +01:00
2f79daec16 only run tests on stable 2022-07-02 17:05:48 +01:00
f3f41a0cc7 prepare actix-http release 3.2.1 2022-07-02 16:50:54 +01:00
987067698b use sparse registry in CI 2022-07-01 12:45:26 +01:00
b62f1b4ef7 migrate deprecated method in docs 2022-07-01 12:40:00 +01:00
df5257c373 update trust dns resolver 2022-07-01 10:21:46 +01:00
226ea696ce update dev deps 2022-07-01 10:19:28 +01:00
e524fc86ea add HTTP/0.9 rejection test 2022-07-01 09:03:57 +01:00
7e990e423f add http/1.0 GET parsing tests 2022-07-01 08:24:45 +01:00
8f9a12ed5d fix parsing ambiguities for HTTP/1.0 requests (#2794)
* fix HRS vuln when first CL header is 0

* ignore TE headers in http/1.0 reqs

* update changelog

* disallow HTTP/1.0 requests without a CL header

* fix test

* broken fix for http1.0 post requests
2022-07-01 08:23:40 +01:00
c6eba2da9b prepare actix-http release 3.2.0 (#2801) 2022-07-01 06:16:17 +01:00
06c7945801 retain previously set vary headers when using compress (#2798)
* retain previously set vary headers when using compress
2022-06-30 09:19:16 +01:00
0dba6310c6 Expose option for setting TLS handshake timeout (#2752)
Co-authored-by: Rob Ede <robjtede@icloud.com>
2022-06-27 02:57:21 +00:00
f7d7d92984 address clippy lints 2022-06-27 03:12:36 +01:00
3d6ea7fe9b Improve documentation for actix-web-actors (#2788) 2022-06-26 16:45:02 +00:00
8dbf7da89f Fix common grammar mistakes and add small documentation for AppConfig's Default implementation (#2793) 2022-06-25 14:01:06 +00:00
de92b3be2e fix unrecoverable Err(Overflow) in websocket frame parser (#2790) 2022-06-24 03:46:17 +00:00
5d0e8138ee Add getters for &ServiceRequest (#2786) 2022-06-22 21:02:03 +01:00
6b7196225e Bump up MSRV to 1.57 (#2789) 2022-06-22 12:08:06 +01:00
265fa0d050 Add link to MongoDB example in README (#2783) 2022-06-15 22:38:10 +01:00
062127a210 Revert "actix-http: Pull actix-web dev-dep from Git repo"
This reverts commit 3926416580.
2022-06-12 00:55:06 +09:00
3926416580 actix-http: Pull actix-web dev-dep from Git repo
The published version of actix-web depends on a buggy version of zstd crate,
temporarily use actix-web on git repo to avoid the build failure.

Signed-off-by: Yuki Okushi <jtitor@2k36.org>
2022-06-12 00:48:08 +09:00
43671ae4aa release 4.1 group (#2781) 2022-06-12 00:15:43 +09:00
264a703d94 revert broken fix in #2624 (#2779)
* revert broken fix in #2624

* update changelog
2022-06-11 13:43:13 +01:00
498fb954b3 migrate from deprecated sha-1 to sha1 (#2780)
closes #2778
2022-06-11 04:53:58 +01:00
2253eae2bb update msrv to 1.56 (#2777)
* update msrv to 1.56

* remove transitive dashmap dependency

closes #2747
2022-06-11 04:03:26 +01:00
8e76a1c775 Allow a path as a guard in route handler macro (#2771)
* Allow a path as a guard in route handler macro

* Update CHANGES.md

Co-authored-by: Rob Ede <robjtede@icloud.com>
2022-06-06 18:53:23 +01:00
dce57a79c9 Implement ResponseError for Infallible (#2769) 2022-05-30 20:52:48 +01:00
6a5b370206 fix some typos (#2744)
Co-authored-by: Rob Ede <robjtede@icloud.com>
2022-04-24 22:01:20 +00:00
b1c85ba85b Add ServiceConfig::default_service (#2743)
* Add `ServiceConfig::default_service`

based on https://github.com/actix/actix-web/pull/2338

* update changelog
2022-04-23 22:11:45 +01:00
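A minimal sketch of how `ServiceConfig::default_service` might be used (route and handlers are invented): a fallback handler scoped to a `configure` block rather than the whole app.

```rust
use actix_web::{web, App, HttpResponse, HttpServer};

fn api(cfg: &mut web::ServiceConfig) {
    cfg.service(web::resource("/ping").to(|| async { HttpResponse::Ok().body("pong") }))
        // fallback for requests that match nothing registered in this config
        .default_service(web::to(|| async { HttpResponse::NotFound().finish() }));
}

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    HttpServer::new(|| App::new().configure(api))
        .bind(("127.0.0.1", 8080))?
        .run()
        .await
}
```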
9aab911600 Improve documentation for FromRequest::Future (#2734)
Co-authored-by: Rob Ede <robjtede@icloud.com>
2022-04-23 20:57:11 +00:00
017e40f733 update optional extractor impl docs 2022-04-23 21:02:24 +01:00
45592b37b6 add Route::wrap (#2725)
* add `Route::wrap`

* add tests

* fix clippy

* fix doctests
2022-04-23 21:01:55 +01:00
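A hedged sketch of `Route::wrap` in use (middleware and path are arbitrary choices): middleware applied to a single route instead of a whole app or scope.

```rust
use actix_web::{middleware::DefaultHeaders, web, App, HttpResponse, HttpServer};

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    HttpServer::new(|| {
        App::new().route(
            "/hello",
            web::get()
                // scoped to this one route only
                .wrap(DefaultHeaders::new().add(("X-Hello", "world")))
                .to(|| async { HttpResponse::Ok().body("hi") }),
        )
    })
    .bind(("127.0.0.1", 8080))?
    .run()
    .await
}
```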
8abcb94512 fix tokio-uring version 2022-04-23 14:37:03 +01:00
f2cacc4c9d clear conn_data on HttpRequest drop (#2742)
* clear conn_data on HttpRequest drop

fixes #2740

* update changelog

* fix doc test
2022-04-23 13:35:41 +01:00
56b9c0d08e remove payload unwindsafe impl assert 2022-04-23 12:31:32 +01:00
de9e41484a Add ServiceRequest::extract (#2647)
Co-authored-by: Rob Ede <robjtede@icloud.com>
2022-04-02 19:46:26 +01:00
2fed978597 remove -http TestRequest doc test 2022-03-28 22:44:32 +01:00
40048a5811 rework actix_router::Quoter (#2709)
Co-authored-by: Rob Ede <robjtede@icloud.com>
2022-03-28 20:58:35 +00:00
e942d3e3b1 update migration guide 2022-03-26 13:26:12 +00:00
09cffc093c Bump zstd to 0.11 (#2694)
Co-authored-by: Rob Ede <robjtede@icloud.com>
2022-03-22 15:30:06 +00:00
c58f287044 Removed random superfluous whitespace (#2705) 2022-03-20 21:36:19 +00:00
7b27493e4c move coverage to own workflow 2022-03-10 16:17:49 +00:00
478b33b8a3 remove nightly io-uring job 2022-03-10 16:00:15 +00:00
592b40f914 move io-uring tests to own job 2022-03-10 15:03:55 +00:00
fe5279c77a use tracing in actix-router 2022-03-10 03:14:14 +00:00
80d222aa78 use tracing in actix-http 2022-03-10 03:12:29 +00:00
a03a2a0076 deprecate NamedFile::set_status_code 2022-03-10 02:54:06 +00:00
745e738955 fix negative impl assertion on 1.60+
see https://github.com/rust-lang/rust/issues/94791
2022-03-10 02:36:57 +00:00
1fd90f0b10 Implement getters for named file fields (#2689)
Co-authored-by: Janis Goldschmidt <github@aberrat.io>
2022-03-10 01:29:26 +00:00
a35804b89f update files tokio-uring to 0.3 2022-03-10 01:05:03 +00:00
5611b98c0d prepare actix-http release 3.0.4 2022-03-09 18:13:39 +00:00
dce9438518 document with ws feature 2022-03-09 18:11:12 +00:00
be986d96b3 bump regex requirement to 1.5.5 due to security advisory (#2687) 2022-03-08 17:42:42 +00:00
8ddb24b49b prepare awc release 3.0.0 (#2684) 2022-03-08 16:51:40 +00:00
87f627cd5d improve servicerequest docs 2022-03-07 16:48:04 +00:00
03456b8a33 update actix-web-in-http example 2022-03-05 23:43:31 +00:00
8c2fad3164 align hello-world examples 2022-03-05 23:15:33 +00:00
62fbd225bc prepare actix-http release 3.0.2 2022-03-05 22:26:19 +00:00
0fa4d999d9 fix(actix-http): encode correctly camel case header with n+2 hyphens (#2683)
Co-authored-by: Rob Ede <robjtede@icloud.com>
2022-03-05 22:24:21 +00:00
da4c849f62 prepare actix-http release 3.0.1 2022-03-04 03:16:02 +00:00
49cd303c3b fix dispatcher panic when combining pipelining and keepalive
fixes #2678
2022-03-04 03:12:38 +00:00
955c3ac0c4 Add support for audio files streaming (#2645) 2022-03-03 00:29:59 +00:00
56e5c19b85 add actix 0.13 support (#2675) 2022-03-02 17:53:47 +00:00
3f03af1c59 clippy 2022-03-02 03:25:30 +00:00
25c0673278 Update MIGRATION-4.0.md 2022-03-02 02:20:48 +00:00
e7a05f9892 fix(docs): TestRequest example fixed (#2643)
Co-authored-by: Rob Ede <robjtede@icloud.com>
2022-03-01 00:02:08 +00:00
2f13e5f675 Update MIGRATION-4.0.md 2022-02-26 17:13:42 +00:00
9f964751f6 tweak migration doc 2022-02-25 21:40:23 +00:00
fcca515387 prepare actix-multipart release 0.4.0 2022-02-25 20:41:57 +00:00
075932d823 prepare actix-web-actors release 4.0.0 2022-02-25 20:41:33 +00:00
cb379c0e0c prepare actix-files release 0.6.0 2022-02-25 20:36:16 +00:00
d4a5d450de prepare actix-web release 4.0.1 2022-02-25 20:31:46 +00:00
230 changed files with 8091 additions and 2890 deletions

View File

@ -3,34 +3,40 @@ name: Bug Report
about: Create a bug report.
---
Your issue may already be reported!
Please search on the [Actix Web issue tracker](https://github.com/actix/actix-web/issues) before creating one.
Your issue may already be reported! Please search on the [Actix Web issue tracker](https://github.com/actix/actix-web/issues) before creating one.
## Expected Behavior
<!--- If you're describing a bug, tell us what should happen -->
<!--- If you're suggesting a change/improvement, tell us how it should work -->
## Current Behavior
<!--- If describing a bug, tell us what happens instead of the expected behavior -->
<!--- If suggesting a change/improvement, explain the difference from current behavior -->
## Possible Solution
<!--- Not obligatory, but suggest a fix/reason for the bug, -->
<!--- or ideas how to implement the addition or change -->
## Steps to Reproduce (for bugs)
<!--- Provide a link to a live example, or an unambiguous set of steps to -->
<!--- reproduce this bug. Include code to reproduce, if relevant -->
1.
2.
3.
4.
## Context
<!--- How has this issue affected you? What are you trying to accomplish? -->
<!--- Providing context helps us come up with a solution that is most useful in the real world -->
## Your Environment
<!--- Include as many relevant details about the environment you experienced the bug in -->
- Rust Version (I.e, output of `rustc -V`):

View File

@ -2,12 +2,14 @@
<!-- Please fill out the following to get your PR reviewed quicker. -->
## PR Type
<!-- What kind of change does this PR make? -->
<!-- Bug Fix / Feature / Refactor / Code Style / Other -->
PR_TYPE
## PR Checklist
<!-- Check your PR fulfills the following items. -->
<!-- For draft PRs check the boxes as you complete them. -->
@ -17,11 +19,10 @@ PR_TYPE
- [ ] Format code with the latest stable rustfmt.
- [ ] (Team) Label with affected crates and semver status.
## Overview
<!-- Describe the current and new behavior. -->
<!-- Emphasize any breaking changes. -->
<!-- If this PR fixes or closes an issue, reference it here. -->
<!-- Closes #000 -->

View File

@ -5,6 +5,9 @@ on:
branches:
- master
permissions:
contents: read # to fetch code (actions/checkout)
jobs:
check_benchmark:
runs-on: ubuntu-latest

View File

@ -4,6 +4,9 @@ on:
push:
branches: [master]
permissions:
contents: read # to fetch code (actions/checkout)
jobs:
build_and_test_nightly:
strategy:
@ -23,6 +26,7 @@ jobs:
CI: 1
CARGO_INCREMENTAL: 0
VCPKGRS_DYNAMIC: 1
CARGO_UNSTABLE_SPARSE_REGISTRY: true
steps:
- uses: actions/checkout@v2
@ -44,18 +48,15 @@ jobs:
profile: minimal
override: true
- name: Install cargo-hack
uses: taiki-e/install-action@cargo-hack
- name: Generate Cargo.lock
uses: actions-rs/cargo@v1
with: { command: generate-lockfile }
- name: Cache Dependencies
uses: Swatinem/rust-cache@v1.2.0
- name: Install cargo-hack
uses: actions-rs/cargo@v1
with:
command: install
args: cargo-hack
- name: check minimal
uses: actions-rs/cargo@v1
with: { command: ci-check-min }
@ -78,108 +79,58 @@ jobs:
cargo test --lib --tests -p=actix-multipart --all-features
cargo test --lib --tests -p=actix-web-actors --all-features
- name: tests (io-uring)
if: matrix.target.os == 'ubuntu-latest'
timeout-minutes: 60
run: >
sudo bash -c "ulimit -Sl 512
&& ulimit -Hl 512
&& PATH=$PATH:/usr/share/rust/.cargo/bin
&& RUSTUP_TOOLCHAIN=${{ matrix.version }} cargo test --lib --tests -p=actix-files --all-features"
- name: Clear the cargo caches
run: |
cargo install cargo-cache --version 0.6.3 --no-default-features --features ci-autoclean
cargo install cargo-cache --version 0.8.2 --no-default-features --features ci-autoclean
cargo-cache
ci_feature_powerset_check:
name: Verify Feature Combinations
runs-on: ubuntu-latest
env:
CI: 1
CARGO_INCREMENTAL: 0
steps:
- uses: actions/checkout@v2
- name: Install stable
uses: actions-rs/toolchain@v1
with:
toolchain: stable-x86_64-unknown-linux-gnu
profile: minimal
override: true
- uses: dtolnay/rust-toolchain@stable
- name: Install cargo-hack
uses: taiki-e/install-action@cargo-hack
- name: Generate Cargo.lock
uses: actions-rs/cargo@v1
with: { command: generate-lockfile }
run: cargo generate-lockfile
- name: Cache Dependencies
uses: Swatinem/rust-cache@v1.2.0
- name: Install cargo-hack
uses: actions-rs/cargo@v1
with:
command: install
args: cargo-hack
- name: check feature combinations
run: cargo ci-check-all-feature-powerset
- name: check feature combinations
uses: actions-rs/cargo@v1
with: { command: ci-check-all-feature-powerset }
- name: check feature combinations
uses: actions-rs/cargo@v1
with: { command: ci-check-all-feature-powerset-linux }
# job currently (1st Feb 2022) segfaults
# coverage:
# name: coverage
# runs-on: ubuntu-latest
# steps:
# - uses: actions/checkout@v2
# - name: Install stable
# uses: actions-rs/toolchain@v1
# with:
# toolchain: stable-x86_64-unknown-linux-gnu
# profile: minimal
# override: true
# - name: Generate Cargo.lock
# uses: actions-rs/cargo@v1
# with: { command: generate-lockfile }
# - name: Cache Dependencies
# uses: Swatinem/rust-cache@v1.2.0
# - name: Generate coverage file
# run: |
# cargo install cargo-tarpaulin --vers "^0.13"
# cargo tarpaulin --workspace --features=rustls,openssl --out Xml --verbose
# - name: Upload to Codecov
# uses: codecov/codecov-action@v1
# with: { file: cobertura.xml }
run: cargo ci-check-all-feature-powerset-linux
nextest:
name: nextest
runs-on: ubuntu-latest
env:
CI: 1
CARGO_INCREMENTAL: 0
steps:
- uses: actions/checkout@v2
- name: Install Rust
uses: actions-rs/toolchain@v1
with:
toolchain: stable
profile: minimal
override: true
- uses: dtolnay/rust-toolchain@stable
- name: Install nextest
uses: taiki-e/install-action@nextest
- name: Generate Cargo.lock
uses: actions-rs/cargo@v1
with: { command: generate-lockfile }
run: cargo generate-lockfile
- name: Cache Dependencies
uses: Swatinem/rust-cache@v1.3.0
- name: Install cargo-nextest
uses: actions-rs/cargo@v1
with:
command: install
args: cargo-nextest
- name: Test with cargo-nextest
uses: actions-rs/cargo@v1
with:
command: nextest
args: run
run: cargo nextest run

View File

@ -6,6 +6,9 @@ on:
push:
branches: [master]
permissions:
contents: read # to fetch code (actions/checkout)
jobs:
build_and_test:
strategy:
@ -16,7 +19,7 @@ jobs:
- { name: macOS, os: macos-latest, triple: x86_64-apple-darwin }
- { name: Windows, os: windows-2022, triple: x86_64-pc-windows-msvc }
version:
- 1.54.0 # MSRV
- 1.59.0 # MSRV
- stable
name: ${{ matrix.target.name }} / ${{ matrix.version }}
@ -47,17 +50,26 @@ jobs:
profile: minimal
override: true
- name: Install cargo-hack
uses: taiki-e/install-action@cargo-hack
- name: workaround MSRV issues
if: matrix.version != 'stable'
run: |
cargo install cargo-edit --version=0.8.0
cargo add const-str@0.3 --dev -p=actix-web
cargo add const-str@0.3 --dev -p=awc
- name: Generate Cargo.lock
uses: actions-rs/cargo@v1
with: { command: generate-lockfile }
- name: Cache Dependencies
uses: Swatinem/rust-cache@v1.2.0
- name: Install cargo-hack
uses: actions-rs/cargo@v1
with:
command: install
args: cargo-hack
- name: workaround MSRV issues
if: matrix.version != 'stable'
run: |
cargo update -p=zstd-sys --precise=2.0.1+zstd.1.5.2
- name: check minimal
uses: actions-rs/cargo@v1
@ -81,19 +93,31 @@ jobs:
cargo test --lib --tests -p=actix-multipart --all-features
cargo test --lib --tests -p=actix-web-actors --all-features
- name: Clear the cargo caches
run: |
cargo install cargo-cache --version 0.8.2 --no-default-features --features ci-autoclean
cargo-cache
io-uring:
name: io-uring tests
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- uses: dtolnay/rust-toolchain@stable
- name: Generate Cargo.lock
run: cargo generate-lockfile
- name: Cache Dependencies
uses: Swatinem/rust-cache@v1.3.0
- name: tests (io-uring)
if: matrix.target.os == 'ubuntu-latest'
timeout-minutes: 60
run: >
sudo bash -c "ulimit -Sl 512
&& ulimit -Hl 512
&& PATH=$PATH:/usr/share/rust/.cargo/bin
&& RUSTUP_TOOLCHAIN=${{ matrix.version }} cargo test --lib --tests -p=actix-files --all-features"
- name: Clear the cargo caches
run: |
cargo install cargo-cache --version 0.6.3 --no-default-features --features ci-autoclean
cargo-cache
&& RUSTUP_TOOLCHAIN=stable cargo test --lib --tests -p=actix-files --all-features"
rustdoc:
name: doc tests
@ -101,20 +125,13 @@ jobs:
steps:
- uses: actions/checkout@v2
- name: Install Rust (nightly)
uses: actions-rs/toolchain@v1
with:
toolchain: nightly-x86_64-unknown-linux-gnu
profile: minimal
override: true
- uses: dtolnay/rust-toolchain@nightly
- name: Generate Cargo.lock
uses: actions-rs/cargo@v1
with: { command: generate-lockfile }
run: cargo generate-lockfile
- name: Cache Dependencies
uses: Swatinem/rust-cache@v1.3.0
- name: doc tests
uses: actions-rs/cargo@v1
run: cargo ci-doctest
timeout-minutes: 60
with: { command: ci-doctest }

View File

@ -9,54 +9,37 @@ jobs:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- name: Install Rust
uses: actions-rs/toolchain@v1
with:
toolchain: stable
profile: minimal
components: rustfmt
- name: Check with rustfmt
uses: actions-rs/cargo@v1
with:
command: fmt
args: --all -- --check
- uses: dtolnay/rust-toolchain@nightly
with: { components: rustfmt }
- run: cargo fmt --all -- --check
clippy:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- name: Install Rust
uses: actions-rs/toolchain@v1
with:
toolchain: stable
profile: minimal
components: clippy
override: true
- uses: dtolnay/rust-toolchain@stable
with: { components: clippy }
- name: Generate Cargo.lock
uses: actions-rs/cargo@v1
with: { command: generate-lockfile }
run: cargo generate-lockfile
- name: Cache Dependencies
uses: Swatinem/rust-cache@v1.2.0
- name: Check with Clippy
uses: actions-rs/clippy-check@v1
with:
token: ${{ secrets.GITHUB_TOKEN }}
args: --workspace --tests --examples --all-features
token: ${{ secrets.GITHUB_TOKEN }}
lint-docs:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- name: Install Rust
uses: actions-rs/toolchain@v1
with:
toolchain: stable
profile: minimal
components: rust-docs
- uses: dtolnay/rust-toolchain@stable
with: { components: rust-docs }
- name: Check for broken intra-doc links
uses: actions-rs/cargo@v1
env:

.github/workflows/coverage.yml vendored Normal file
View File

@ -0,0 +1,36 @@
# disabled because `cargo tarpaulin` currently segfaults
name: Coverage
on:
push:
branches: [master]
jobs:
# job currently (1st Feb 2022) segfaults
coverage:
name: coverage
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- name: Install stable
uses: actions-rs/toolchain@v1
with:
toolchain: stable-x86_64-unknown-linux-gnu
profile: minimal
override: true
- name: Generate Cargo.lock
uses: actions-rs/cargo@v1
with: { command: generate-lockfile }
- name: Cache Dependencies
uses: Swatinem/rust-cache@v1.2.0
- name: Generate coverage file
run: |
cargo install cargo-tarpaulin --vers "^0.13"
cargo tarpaulin --workspace --features=rustls,openssl --out Xml --verbose
- name: Upload to Codecov
uses: codecov/codecov-action@v1
with: { file: cobertura.xml }

View File

@ -4,32 +4,29 @@ on:
push:
branches: [master]
permissions: {}
jobs:
build:
permissions:
contents: write # to push changes in repo (jamesives/github-pages-deploy-action)
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- name: Install Rust
uses: actions-rs/toolchain@v1
with:
toolchain: nightly-x86_64-unknown-linux-gnu
profile: minimal
override: true
- uses: dtolnay/rust-toolchain@nightly
- name: Build Docs
uses: actions-rs/cargo@v1
with:
command: doc
args: --workspace --all-features --no-deps
run: cargo +nightly doc --no-deps --workspace --all-features
env:
RUSTDOCFLAGS: --cfg=docsrs
- name: Tweak HTML
run: echo '<meta http-equiv="refresh" content="0;url=actix_web/index.html">' > target/doc/index.html
- name: Deploy to GitHub Pages
uses: JamesIves/github-pages-deploy-action@3.7.1
uses: JamesIves/github-pages-deploy-action@v4.4.1
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
BRANCH: gh-pages
FOLDER: target/doc
folder: target/doc
single-commit: true

View File

@ -1,3 +0,0 @@
{
"proseWrap": "never"
}

.prettierrc.yaml Normal file
View File

@ -0,0 +1 @@
proseWrap: never

View File

@ -5,6 +5,7 @@ members = [
"actix-http-test",
"actix-http",
"actix-multipart",
"actix-multipart-derive",
"actix-router",
"actix-test",
"actix-web-actors",
@ -27,6 +28,7 @@ actix-files = { path = "actix-files" }
actix-http = { path = "actix-http" }
actix-http-test = { path = "actix-http-test" }
actix-multipart = { path = "actix-multipart" }
actix-multipart-derive = { path = "actix-multipart-derive" }
actix-router = { path = "actix-router" }
actix-test = { path = "actix-test" }
actix-web = { path = "actix-web" }

View File

@ -1,43 +1,72 @@
# Changes
## Unreleased - 2021-xx-xx
## Unreleased - 2022-xx-xx
## 0.6.3 - 2023-01-21
- XHTML files now use `Content-Disposition: inline` instead of `attachment`. [#2903]
- Minimum supported Rust version (MSRV) is now 1.59 due to transitive `time` dependency.
- Update `tokio-uring` dependency to `0.4`.
[#2903]: https://github.com/actix/actix-web/pull/2903
## 0.6.2 - 2022-07-23
- Allow partial range responses for video content to start streaming sooner. [#2817]
- Minimum supported Rust version (MSRV) is now 1.57 due to transitive `time` dependency.
[#2817]: https://github.com/actix/actix-web/pull/2817
## 0.6.1 - 2022-06-11
- Add `NamedFile::{modified, metadata, content_type, content_disposition, encoding}()` getters. [#2021]
- Update `tokio-uring` dependency to `0.3`.
- Audio files now use `Content-Disposition: inline` instead of `attachment`. [#2645]
- Minimum supported Rust version (MSRV) is now 1.56 due to transitive `hashbrown` dependency.
[#2021]: https://github.com/actix/actix-web/pull/2021
[#2645]: https://github.com/actix/actix-web/pull/2645
## 0.6.0 - 2022-02-25
- No significant changes since `0.6.0-beta.16`.
## 0.6.0-beta.16 - 2022-01-31
- No significant changes since `0.6.0-beta.15`.
## 0.6.0-beta.15 - 2022-01-21
- No significant changes since `0.6.0-beta.14`.
## 0.6.0-beta.14 - 2022-01-14
- The `prefer_utf8` option introduced in `0.4.0` is now true by default. [#2583]
[#2583]: https://github.com/actix/actix-web/pull/2583
## 0.6.0-beta.13 - 2022-01-04
- The `Files` service now rejects requests with URL paths that include `%2F` (decoded: `/`). [#2398]
- The `Files` service now correctly decodes `%25` in the URL path to `%` for the file path. [#2398]
- Minimum supported Rust version (MSRV) is now 1.54.
[#2398]: https://github.com/actix/actix-web/pull/2398
## 0.6.0-beta.12 - 2021-12-29
- No significant changes since `0.6.0-beta.11`.
## 0.6.0-beta.11 - 2021-12-27
- No significant changes since `0.6.0-beta.10`.
## 0.6.0-beta.10 - 2021-12-11
- No significant changes since `0.6.0-beta.9`.
## 0.6.0-beta.9 - 2021-11-22
- Add crate feature `experimental-io-uring`, enabling async file I/O to be utilized. This feature is only available on Linux OSes with recent kernel versions. This feature is semver-exempt. [#2408]
- Add `NamedFile::open_async`. [#2408]
- Fix 304 Not Modified responses to omit the Content-Length header, as per the spec. [#2453]
@ -48,24 +77,24 @@
[#2408]: https://github.com/actix/actix-web/pull/2408
[#2453]: https://github.com/actix/actix-web/pull/2453
## 0.6.0-beta.8 - 2021-10-20
- Minimum supported Rust version (MSRV) is now 1.52.
## 0.6.0-beta.7 - 2021-09-09
- Minimum supported Rust version (MSRV) is now 1.51.
## 0.6.0-beta.6 - 2021-06-26
- Added `Files::path_filter()`. [#2274]
- `Files::show_files_listing()` can now be used with `Files::index_file()` to show files listing as a fallback when the index file is not found. [#2228]
[#2274]: https://github.com/actix/actix-web/pull/2274
[#2228]: https://github.com/actix/actix-web/pull/2228
## 0.6.0-beta.5 - 2021-06-17
- `NamedFile` now implements `ServiceFactory` and `HttpServiceFactory` making it much more useful in routing. For example, it can be used directly as a default service. [#2135]
- For symbolic links, `Content-Disposition` header no longer shows the filename of the original file. [#2156]
- `Files::redirect_to_slash_directory()` now works as expected when used with `Files::show_files_listing()`. [#2225]
@ -76,58 +105,58 @@
[#2225]: https://github.com/actix/actix-web/pull/2225
[#2257]: https://github.com/actix/actix-web/pull/2257
## 0.6.0-beta.4 - 2021-04-02
- Add support for `.guard` in `Files` to selectively filter `Files` services. [#2046]
[#2046]: https://github.com/actix/actix-web/pull/2046
## 0.6.0-beta.3 - 2021-03-09
- No notable changes.
## 0.6.0-beta.2 - 2021-02-10
- Fix If-Modified-Since and If-Unmodified-Since to not compare using sub-second timestamps. [#1887]
- Replace `v_htmlescape` with `askama_escape`. [#1953]
[#1887]: https://github.com/actix/actix-web/pull/1887
[#1953]: https://github.com/actix/actix-web/pull/1953
## 0.6.0-beta.1 - 2021-01-07
- `HttpRange::parse` now has its own error type.
- Update `bytes` to `1.0`. [#1813]
[#1813]: https://github.com/actix/actix-web/pull/1813
## 0.5.0 - 2020-12-26
- Optionally support hidden files/directories. [#1811]
[#1811]: https://github.com/actix/actix-web/pull/1811
## 0.4.1 - 2020-11-24
- Clarify order of parameters in `Files::new` and improve docs.
## 0.4.0 - 2020-10-06
- Add `Files::prefer_utf8` option that adds UTF-8 charset on certain response types. [#1714]
[#1714]: https://github.com/actix/actix-web/pull/1714
## 0.3.0 - 2020-09-11
- No significant changes from 0.3.0-beta.1.
## 0.3.0-beta.1 - 2020-07-15
- Update `v_htmlescape` to 0.10
- Update `actix-web` and `actix-http` dependencies to beta.1
## 0.3.0-alpha.1 - 2020-05-23
- Update `actix-web` and `actix-http` dependencies to alpha
- Fix some typos in the docs
- Bump minimum supported Rust version to 1.40
@ -135,73 +164,73 @@
[#1384]: https://github.com/actix/actix-web/pull/1384
## 0.2.1 - 2019-12-22
- Use the same format for file URLs regardless of platforms
## 0.2.0 - 2019-12-20
- Fix BodyEncoding trait import #1220
## 0.2.0-alpha.1 - 2019-12-07
- Migrate to `std::future`
## 0.1.7 - 2019-11-06
- Add an additional `filename*` param in the `Content-Disposition` header of
`actix_files::NamedFile` to be more compatible. (#1151)
- Add an additional `filename*` param in the `Content-Disposition` header of `actix_files::NamedFile` to be more compatible. (#1151)
## 0.1.6 - 2019-10-14
- Add option to redirect to a slash-ended path `Files` #1132
## 0.1.5 - 2019-10-08
- Bump up `mime_guess` crate version to 2.0.1
- Bump up `percent-encoding` crate version to 2.1
- Allow user defined request guards for `Files` #1113
## 0.1.4 - 2019-07-20
- Allow to disable `Content-Disposition` header #686
## 0.1.3 - 2019-06-28
- Do not set `Content-Length` header, let actix-http set it #930
## 0.1.2 - 2019-06-13
- Content-Length is 0 for NamedFile HEAD request #914
- Fix ring dependency from actix-web default features for #741
## 0.1.1 - 2019-06-01
- Static files are incorrectly served as both chunked and with length #812
## 0.1.0 - 2019-05-25
- NamedFile last-modified check always fails due to nano-seconds in file modified date #820
## 0.1.0-beta.4 - 2019-05-12
- Update actix-web to beta.4
## 0.1.0-beta.1 - 2019-04-20
- Update actix-web to beta.1
## 0.1.0-alpha.6 - 2019-04-14
- Update actix-web to alpha6
## 0.1.0-alpha.4 - 2019-04-08
- Update actix-web to alpha4
## 0.1.0-alpha.2 - 2019-04-02
- Add default handler support
## 0.1.0-alpha.1 - 2019-03-28
- Initial impl

View File

@ -1,9 +1,8 @@
[package]
name = "actix-files"
version = "0.6.0-beta.16"
version = "0.6.3"
authors = [
"Nikolay Kim <fafhrd91@gmail.com>",
"fakeshadow <24548779@qq.com>",
"Rob Ede <robjtede@icloud.com>",
]
description = "Static file serving for Actix Web"
@ -22,27 +21,30 @@ path = "src/lib.rs"
experimental-io-uring = ["actix-web/experimental-io-uring", "tokio-uring"]
[dependencies]
actix-http = "3.0.0"
actix-http = "3"
actix-service = "2"
actix-utils = "3"
actix-web = { version = "4.0.0", default-features = false }
actix-web = { version = "4", default-features = false }
askama_escape = "0.10"
bitflags = "1"
bytes = "1"
derive_more = "0.99.5"
futures-core = { version = "0.3.7", default-features = false, features = ["alloc"] }
futures-core = { version = "0.3.17", default-features = false, features = ["alloc"] }
http-range = "0.1.4"
log = "0.4"
mime = "0.3"
mime_guess = "2.0.1"
percent-encoding = "2.1"
pin-project-lite = "0.2.7"
v_htmlescape= "0.15"
tokio-uring = { version = "0.2", optional = true, features = ["bytes"] }
# experimental-io-uring
[target.'cfg(target_os = "linux")'.dependencies]
tokio-uring = { version = "0.4", optional = true, features = ["bytes"] }
actix-server = { version = "2.2", optional = true } # ensure matching tokio-uring versions
[dev-dependencies]
actix-rt = "2.2"
actix-test = "0.1.0-beta.13"
actix-web = "4.0.0"
actix-rt = "2.7"
actix-test = "0.1"
actix-web = "4"
tempfile = "3.2"

View File

@ -3,11 +3,11 @@
> Static file serving for Actix Web
[![crates.io](https://img.shields.io/crates/v/actix-files?label=latest)](https://crates.io/crates/actix-files)
[![Documentation](https://docs.rs/actix-files/badge.svg?version=0.6.0-beta.16)](https://docs.rs/actix-files/0.6.0-beta.16)
[![Version](https://img.shields.io/badge/rustc-1.54+-ab6000.svg)](https://blog.rust-lang.org/2021/05/06/Rust-1.54.0.html)
[![Documentation](https://docs.rs/actix-files/badge.svg?version=0.6.3)](https://docs.rs/actix-files/0.6.3)
![Version](https://img.shields.io/badge/rustc-1.59+-ab6000.svg)
![License](https://img.shields.io/crates/l/actix-files.svg)
<br />
[![dependency status](https://deps.rs/crate/actix-files/0.6.0-beta.16/status.svg)](https://deps.rs/crate/actix-files/0.6.0-beta.16)
[![dependency status](https://deps.rs/crate/actix-files/0.6.3/status.svg)](https://deps.rs/crate/actix-files/0.6.3)
[![Download](https://img.shields.io/crates/d/actix-files.svg)](https://crates.io/crates/actix-files)
[![Chat on Discord](https://img.shields.io/discord/771444961383153695?label=chat&logo=discord)](https://discord.gg/NWpN5mmg3x)
@ -15,4 +15,4 @@
- [API Documentation](https://docs.rs/actix-files)
- [Example Project](https://github.com/actix/examples/tree/master/basics/static-files)
- Minimum Supported Rust Version (MSRV): 1.54
- Minimum Supported Rust Version (MSRV): 1.59

View File

@ -1,8 +1,8 @@
use std::{fmt::Write, fs::DirEntry, io, path::Path, path::PathBuf};
use actix_web::{dev::ServiceResponse, HttpRequest, HttpResponse};
use askama_escape::{escape as escape_html_entity, Html};
use percent_encoding::{utf8_percent_encode, CONTROLS};
use v_htmlescape::escape as escape_html_entity;
/// A directory; responds with the generated directory listing.
#[derive(Debug)]
@ -59,7 +59,7 @@ macro_rules! encode_file_url {
/// ```
macro_rules! encode_file_name {
($entry:ident) => {
escape_html_entity(&$entry.file_name().to_string_lossy(), Html)
escape_html_entity(&$entry.file_name().to_string_lossy())
};
}

View File

@ -2,7 +2,7 @@ use actix_web::{http::StatusCode, ResponseError};
use derive_more::Display;
/// Errors which can occur when serving static files.
#[derive(Display, Debug, PartialEq)]
#[derive(Debug, PartialEq, Eq, Display)]
pub enum FilesError {
/// Path is not a directory
#[allow(dead_code)]
@ -22,7 +22,7 @@ impl ResponseError for FilesError {
}
#[allow(clippy::enum_variant_names)]
#[derive(Display, Debug, PartialEq)]
#[derive(Debug, PartialEq, Eq, Display)]
#[non_exhaustive]
pub enum UriSegmentError {
/// The segment started with the wrapped invalid character.

View File

@ -142,7 +142,7 @@ impl Files {
self
}
/// Set custom directory renderer
/// Set custom directory renderer.
pub fn files_listing_renderer<F>(mut self, f: F) -> Self
where
for<'r, 's> F:
@ -152,7 +152,7 @@ impl Files {
self
}
/// Specifies mime override callback
/// Specifies MIME override callback.
pub fn mime_override<F>(mut self, f: F) -> Self
where
F: Fn(&mime::Name<'_>) -> DispositionType + 'static,
@ -390,3 +390,42 @@ impl ServiceFactory<ServiceRequest> for Files {
}
}
}
#[cfg(test)]
mod tests {
use actix_web::{
http::StatusCode,
test::{self, TestRequest},
App, HttpResponse,
};
use super::*;
#[actix_web::test]
async fn custom_files_listing_renderer() {
let srv = test::init_service(
App::new().service(
Files::new("/", "./tests")
.show_files_listing()
.files_listing_renderer(|dir, req| {
Ok(ServiceResponse::new(
req.clone(),
HttpResponse::Ok().body(dir.path.to_str().unwrap().to_owned()),
))
}),
),
)
.await;
let req = TestRequest::with_uri("/").to_request();
let res = test::call_service(&srv, req).await;
assert_eq!(res.status(), StatusCode::OK);
let body = test::read_body(res).await;
assert!(
body.ends_with(b"actix-files/tests/"),
"body {:?} does not end with `actix-files/tests/`",
body
);
}
}

View File

@ -13,6 +13,7 @@
#![deny(rust_2018_idioms, nonstandard_style)]
#![warn(future_incompatible, missing_docs, missing_debug_implementations)]
#![allow(clippy::uninlined_format_args)]
use actix_service::boxed::{BoxService, BoxServiceFactory};
use actix_web::{
@ -364,20 +365,43 @@ mod tests {
);
}
#[allow(deprecated)]
#[actix_rt::test]
async fn test_named_file_status_code_text() {
let mut file = NamedFile::open_async("Cargo.toml")
async fn status_code_customize_same_output() {
let file1 = NamedFile::open_async("Cargo.toml")
.await
.unwrap()
.set_status_code(StatusCode::NOT_FOUND);
let file2 = NamedFile::open_async("Cargo.toml")
.await
.unwrap()
.customize()
.with_status(StatusCode::NOT_FOUND);
let req = TestRequest::default().to_http_request();
let res1 = file1.respond_to(&req);
let res2 = file2.respond_to(&req);
assert_eq!(res1.status(), StatusCode::NOT_FOUND);
assert_eq!(res2.status(), StatusCode::NOT_FOUND);
}
#[actix_rt::test]
async fn test_named_file_status_code_text() {
let mut file = NamedFile::open_async("Cargo.toml").await.unwrap();
{
file.file();
let _f: &File = &file;
}
{
let _f: &mut File = &mut file;
}
let file = file.customize().with_status(StatusCode::NOT_FOUND);
let req = TestRequest::default().to_http_request();
let resp = file.respond_to(&req);
assert_eq!(

View File

@ -23,6 +23,7 @@ use actix_web::{
use bitflags::bitflags;
use derive_more::{Deref, DerefMut};
use futures_core::future::LocalBoxFuture;
use mime::Mime;
use mime_guess::from_path;
use crate::{encoding::equiv_utf8_text, range::HttpRange};
@ -76,8 +77,8 @@ pub struct NamedFile {
pub(crate) md: Metadata,
pub(crate) flags: Flags,
pub(crate) status_code: StatusCode,
pub(crate) content_type: mime::Mime,
pub(crate) content_disposition: header::ContentDisposition,
pub(crate) content_type: Mime,
pub(crate) content_disposition: ContentDisposition,
pub(crate) encoding: Option<ContentEncoding>,
}
@ -128,10 +129,10 @@ impl NamedFile {
let ct = from_path(&path).first_or_octet_stream();
let disposition = match ct.type_() {
mime::IMAGE | mime::TEXT | mime::VIDEO => DispositionType::Inline,
mime::IMAGE | mime::TEXT | mime::AUDIO | mime::VIDEO => DispositionType::Inline,
mime::APPLICATION => match ct.subtype() {
mime::JAVASCRIPT | mime::JSON => DispositionType::Inline,
name if name == "wasm" => DispositionType::Inline,
name if name == "wasm" || name == "xhtml" => DispositionType::Inline,
_ => DispositionType::Attachment,
},
_ => DispositionType::Attachment,
@ -209,11 +210,10 @@ impl NamedFile {
Self::from_file(file, path)
}
#[allow(rustdoc::broken_intra_doc_links)]
/// Attempts to open a file asynchronously in read-only mode.
///
/// When the `experimental-io-uring` crate feature is enabled, this will be async.
/// Otherwise, it will be just like [`open`][Self::open].
/// When the `experimental-io-uring` crate feature is enabled, this will be async. Otherwise, it
/// will behave just like `open`.
///
/// # Examples
/// ```
@ -238,13 +238,13 @@ impl NamedFile {
Self::from_file(file, path)
}
/// Returns reference to the underlying `File` object.
/// Returns reference to the underlying file object.
#[inline]
pub fn file(&self) -> &File {
&self.file
}
/// Retrieve the path of this file.
/// Returns the filesystem path to this file.
///
/// # Examples
/// ```
@ -262,16 +262,53 @@ impl NamedFile {
self.path.as_path()
}
/// Set response **Status Code**
/// Returns the time the file was last modified.
///
/// Returns `None` only on unsupported platforms; see [`std::fs::Metadata::modified()`].
/// Therefore, it is usually safe to unwrap this.
#[inline]
pub fn modified(&self) -> Option<SystemTime> {
self.modified
}
/// Returns the filesystem metadata associated with this file.
#[inline]
pub fn metadata(&self) -> &Metadata {
&self.md
}
/// Returns the `Content-Type` header that will be used when serving this file.
#[inline]
pub fn content_type(&self) -> &Mime {
&self.content_type
}
/// Returns the `Content-Disposition` that will be used when serving this file.
#[inline]
pub fn content_disposition(&self) -> &ContentDisposition {
&self.content_disposition
}
/// Returns the `Content-Encoding` that will be used when serving this file.
///
/// A return value of `None` indicates that the content is not already using a compressed
/// representation and may be subject to compression downstream.
#[inline]
pub fn content_encoding(&self) -> Option<ContentEncoding> {
self.encoding
}
/// Set response status code.
#[deprecated(since = "0.7.0", note = "Prefer `Responder::customize()`.")]
pub fn set_status_code(mut self, status: StatusCode) -> Self {
self.status_code = status;
self
}
/// Set the MIME Content-Type for serving this file. By default the Content-Type is inferred
/// from the filename extension.
/// Sets the `Content-Type` header that will be used when serving this file. By default the
/// `Content-Type` is inferred from the filename extension.
#[inline]
pub fn set_content_type(mut self, mime_type: mime::Mime) -> Self {
pub fn set_content_type(mut self, mime_type: Mime) -> Self {
self.content_type = mime_type;
self
}
@ -284,15 +321,15 @@ impl NamedFile {
/// filename is taken from the path provided in the `open` method after converting it to UTF-8
/// (using `to_string_lossy`).
#[inline]
pub fn set_content_disposition(mut self, cd: header::ContentDisposition) -> Self {
pub fn set_content_disposition(mut self, cd: ContentDisposition) -> Self {
self.content_disposition = cd;
self.flags.insert(Flags::CONTENT_DISPOSITION);
self
}
/// Disable `Content-Disposition` header.
/// Disables `Content-Disposition` header.
///
/// By default Content-Disposition` header is enabled.
/// By default, the `Content-Disposition` header is sent.
#[inline]
pub fn disable_content_disposition(mut self) -> Self {
self.flags.remove(Flags::CONTENT_DISPOSITION);
@ -491,11 +528,26 @@ impl NamedFile {
length = ranges[0].length;
offset = ranges[0].start;
// don't allow compression middleware to modify partial content
res.insert_header((
header::CONTENT_ENCODING,
HeaderValue::from_static("identity"),
));
// When a Content-Encoding header is present in a 206 partial content response
// for video content, it prevents browser video players from starting playback
// before loading the whole video and also prevents seeking.
//
// See: https://github.com/actix/actix-web/issues/2815
//
// The assumption of this fix is that the video player knows to not send an
// Accept-Encoding header for this request and that downstream middleware will
// not attempt compression for requests without it.
//
// TODO: Solve question around what to do if self.encoding is set and partial
// range is requested. Reject request? Ignoring self.encoding seems wrong, too.
// In practice, it should not come up.
if req.headers().contains_key(&header::ACCEPT_ENCODING) {
// don't allow compression middleware to modify partial content
res.insert_header((
header::CONTENT_ENCODING,
HeaderValue::from_static("identity"),
));
}
res.insert_header((
header::CONTENT_RANGE,

View File

@ -30,7 +30,7 @@ impl PathBufWrap {
let mut segment_count = path.matches('/').count() + 1;
// we can decode the whole path here (instead of per-segment decoding)
// because we will reject `%2F` in paths using `segement_count`.
// because we will reject `%2F` in paths using `segment_count`.
let path = percent_encoding::percent_decode_str(path)
.decode_utf8()
.map_err(|_| UriSegmentError::NotValidUtf8)?;

View File

@ -23,7 +23,7 @@ impl Deref for FilesService {
type Target = FilesServiceInner;
fn deref(&self) -> &Self::Target {
&*self.0
&self.0
}
}

View File

@ -1,11 +1,11 @@
use actix_files::Files;
use actix_files::{Files, NamedFile};
use actix_web::{
http::{
header::{self, HeaderValue},
StatusCode,
},
test::{self, TestRequest},
App,
web, App,
};
#[actix_web::test]
@ -36,3 +36,31 @@ async fn test_utf8_file_contents() {
Some(&HeaderValue::from_static("text/plain")),
);
}
#[actix_web::test]
async fn partial_range_response_encoding() {
let srv = test::init_service(App::new().default_service(web::to(|| async {
NamedFile::open_async("./tests/test.binary").await.unwrap()
})))
.await;
// range request without accept-encoding returns no content-encoding header
let req = TestRequest::with_uri("/")
.append_header((header::RANGE, "bytes=10-20"))
.to_request();
let res = test::call_service(&srv, req).await;
assert_eq!(res.status(), StatusCode::PARTIAL_CONTENT);
assert!(!res.headers().contains_key(header::CONTENT_ENCODING));
// range request with accept-encoding returns a content-encoding header
let req = TestRequest::with_uri("/")
.append_header((header::RANGE, "bytes=10-20"))
.append_header((header::ACCEPT_ENCODING, "identity"))
.to_request();
let res = test::call_service(&srv, req).await;
assert_eq!(res.status(), StatusCode::PARTIAL_CONTENT);
assert_eq!(
res.headers().get(header::CONTENT_ENCODING).unwrap(),
"identity"
);
}

View File

@ -1,75 +1,97 @@
# Changes
## Unreleased - 2021-xx-xx
## Unreleased - 2022-xx-xx
## 3.1.0 - 2023-01-21
- Minimum supported Rust version (MSRV) is now 1.59.
## 3.0.0 - 2022-07-24
- `TestServer::stop` is now async and will wait for the server and system to shutdown. [#2442]
- Added `TestServer::client_headers` method. [#2097]
- Update `actix-server` dependency to `2`.
- Update `actix-tls` dependency to `3`.
- Update `bytes` to `1.0`. [#1813]
- Minimum supported Rust version (MSRV) is now 1.57.
[#2442]: https://github.com/actix/actix-web/pull/2442
[#2097]: https://github.com/actix/actix-web/pull/2097
[#1813]: https://github.com/actix/actix-web/pull/1813
<details>
<summary>3.0.0 Pre-Releases</summary>
## 3.0.0-beta.13 - 2022-02-16
- No significant changes since `3.0.0-beta.12`.
## 3.0.0-beta.12 - 2022-01-31
- No significant changes since `3.0.0-beta.11`.
## 3.0.0-beta.11 - 2022-01-04
- Minimum supported Rust version (MSRV) is now 1.54.
## 3.0.0-beta.10 - 2021-12-27
- Update `actix-server` to `2.0.0-rc.2`. [#2550]
[#2550]: https://github.com/actix/actix-web/pull/2550
## 3.0.0-beta.9 - 2021-12-11
- No significant changes since `3.0.0-beta.8`.
## 3.0.0-beta.8 - 2021-11-30
- Update `actix-tls` to `3.0.0-rc.1`. [#2474]
[#2474]: https://github.com/actix/actix-web/pull/2474
## 3.0.0-beta.7 - 2021-11-22
- Fix compatibility with experimental `io-uring` feature of `actix-rt`. [#2408]
[#2408]: https://github.com/actix/actix-web/pull/2408
## 3.0.0-beta.6 - 2021-11-15
- `TestServer::stop` is now async and will wait for the server and system to shutdown. [#2442]
- Update `actix-server` to `2.0.0-beta.9`. [#2442]
- Minimum supported Rust version (MSRV) is now 1.52.
[#2442]: https://github.com/actix/actix-web/pull/2442
## 3.0.0-beta.5 - 2021-09-09
- Minimum supported Rust version (MSRV) is now 1.51.
## 3.0.0-beta.4 - 2021-04-02
- Added `TestServer::client_headers` method. [#2097]
[#2097]: https://github.com/actix/actix-web/pull/2097
## 3.0.0-beta.3 - 2021-03-09
- No notable changes.
- No notable changes.
## 3.0.0-beta.2 - 2021-02-10
- No notable changes.
## 3.0.0-beta.1 - 2021-01-07
- Update `bytes` to `1.0`. [#1813]
[#1813]: https://github.com/actix/actix-web/pull/1813
</details>
## 2.1.0 - 2020-11-25
- Add ability to set address for `TestServer`. [#1645]
- Upgrade `base64` to `0.13`.
- Upgrade `serde_urlencoded` to `0.7`. [#1773]
@ -77,12 +99,12 @@
[#1773]: https://github.com/actix/actix-web/pull/1773
[#1645]: https://github.com/actix/actix-web/pull/1645
## 2.0.0 - 2020-09-11
- Update actix-codec and actix-utils dependencies.
## 2.0.0-alpha.1 - 2020-05-23
- Update the `time` dependency to 0.2.7
- Update `actix-connect` dependency to 2.0.0-alpha.2
- Make `test_server` `async` fn.
@ -92,55 +114,56 @@
- Update `env_logger` dependency to 0.7
## 1.0.0 - 2019-12-13
- Replaced `TestServer::start()` with `test_server()`
## 1.0.0-alpha.3 - 2019-12-07
- Migrate to `std::future`
## 0.2.5 - 2019-09-17
- Update serde_urlencoded to "0.6.1"
- Increase TestServerRuntime timeouts from 500ms to 3000ms
- Do not override current `System`
## 0.2.4 - 2019-07-18
- Update actix-server to 0.6
## 0.2.3 - 2019-07-16
- Add `delete`, `options`, `patch` methods to `TestServerRunner`
## 0.2.2 - 2019-06-16
- Add .put() and .sput() methods
## 0.2.1 - 2019-06-05
- Add license files
## 0.2.0 - 2019-05-12
- Update awc and actix-http deps
## 0.1.1 - 2019-04-24
- Always make new connection for http client
## 0.1.0 - 2019-04-16
- No changes
## 0.1.0-alpha.3 - 2019-04-02
- Request functions accept path #743
## 0.1.0-alpha.2 - 2019-03-29
- Added TestServerRuntime::load_body() method
- Update actix-http and awc libraries
## 0.1.0-alpha.1 - 2019-03-28
- Initial impl

View File

@ -1,6 +1,6 @@
[package]
name = "actix-http-test"
version = "3.0.0-beta.13"
version = "3.1.0"
authors = ["Nikolay Kim <fafhrd91@gmail.com>"]
description = "Various helpers for Actix applications to use during testing"
keywords = ["http", "web", "framework", "async", "futures"]
@ -29,17 +29,16 @@ default = []
openssl = ["tls-openssl", "awc/openssl"]
[dependencies]
actix-service = "2.0.0"
actix-service = "2"
actix-codec = "0.5"
actix-tls = "3"
actix-utils = "3.0.0"
actix-utils = "3"
actix-rt = "2.2"
actix-server = "2"
awc = { version = "3.0.0-beta.21", default-features = false }
awc = { version = "3", default-features = false }
base64 = "0.13"
bytes = "1"
futures-core = { version = "0.3.7", default-features = false }
futures-core = { version = "0.3.17", default-features = false }
http = "0.2.5"
log = "0.4"
socket2 = "0.4"
@ -48,8 +47,8 @@ serde_json = "1.0"
slab = "0.4"
serde_urlencoded = "0.7"
tls-openssl = { version = "0.10.9", package = "openssl", optional = true }
tokio = { version = "1.8.4", features = ["sync"] }
tokio = { version = "1.18.5", features = ["sync"] }
[dev-dependencies]
actix-web = { version = "4.0.0", default-features = false, features = ["cookies"] }
actix-http = "3.0.0"
actix-web = { version = "4", default-features = false, features = ["cookies"] }
actix-http = "3"

View File

@ -3,15 +3,15 @@
> Various helpers for Actix applications to use during testing.
[![crates.io](https://img.shields.io/crates/v/actix-http-test?label=latest)](https://crates.io/crates/actix-http-test)
[![Documentation](https://docs.rs/actix-http-test/badge.svg?version=3.0.0-beta.13)](https://docs.rs/actix-http-test/3.0.0-beta.13)
[![Version](https://img.shields.io/badge/rustc-1.54+-ab6000.svg)](https://blog.rust-lang.org/2021/05/06/Rust-1.54.0.html)
[![Documentation](https://docs.rs/actix-http-test/badge.svg?version=3.1.0)](https://docs.rs/actix-http-test/3.1.0)
![Version](https://img.shields.io/badge/rustc-1.59+-ab6000.svg)
![MIT or Apache 2.0 licensed](https://img.shields.io/crates/l/actix-http-test)
<br>
[![Dependency Status](https://deps.rs/crate/actix-http-test/3.0.0-beta.13/status.svg)](https://deps.rs/crate/actix-http-test/3.0.0-beta.13)
[![Dependency Status](https://deps.rs/crate/actix-http-test/3.1.0/status.svg)](https://deps.rs/crate/actix-http-test/3.1.0)
[![Download](https://img.shields.io/crates/d/actix-http-test.svg)](https://crates.io/crates/actix-http-test)
[![Chat on Discord](https://img.shields.io/discord/771444961383153695?label=chat&logo=discord)](https://discord.gg/NWpN5mmg3x)
## Documentation & Resources
- [API Documentation](https://docs.rs/actix-http-test)
- Minimum Supported Rust Version (MSRV): 1.54
- Minimum Supported Rust Version (MSRV): 1.59

View File

@ -2,6 +2,7 @@
#![deny(rust_2018_idioms, nonstandard_style)]
#![warn(future_incompatible)]
#![allow(clippy::uninlined_format_args)]
#![doc(html_logo_url = "https://actix.rs/img/logo.png")]
#![doc(html_favicon_url = "https://actix.rs/favicon.ico")]
@ -87,6 +88,7 @@ pub async fn test_server_with_addr<F: ServerServiceFactory<TcpStream>>(
// notify TestServer that server and system have shut down
// all thread managed resources should be dropped at this point
#[allow(clippy::let_underscore_future)]
let _ = thread_stop_tx.send(());
});
@ -294,6 +296,7 @@ impl Drop for TestServer {
// without needing to await anything
// signal server to stop
#[allow(clippy::let_underscore_future)]
let _ = self.server.stop(true);
// signal system to stop

File diff suppressed because it is too large


View File

@ -1,6 +1,6 @@
[package]
name = "actix-http"
version = "3.0.0"
version = "3.3.0"
authors = [
"Nikolay Kim <fafhrd91@gmail.com>",
"Rob Ede <robjtede@icloud.com>",
@ -20,7 +20,7 @@ edition = "2018"
[package.metadata.docs.rs]
# features that docs.rs will build with
features = ["http2", "openssl", "rustls", "compress-brotli", "compress-gzip", "compress-zstd"]
features = ["http2", "ws", "openssl", "rustls", "compress-brotli", "compress-gzip", "compress-zstd"]
[lib]
name = "actix_http"
@ -37,7 +37,7 @@ ws = [
"local-channel",
"base64",
"rand",
"sha-1",
"sha1",
]
# TLS via OpenSSL
@ -67,26 +67,28 @@ bytes = "1"
bytestring = "1"
derive_more = "0.99.5"
encoding_rs = "0.8"
futures-core = { version = "0.3.7", default-features = false, features = ["alloc"] }
futures-core = { version = "0.3.17", default-features = false, features = ["alloc"] }
http = "0.2.5"
httparse = "1.5.1"
httpdate = "1.0.1"
itoa = "1"
language-tags = "0.3"
log = "0.4"
mime = "0.3"
percent-encoding = "2.1"
pin-project-lite = "0.2"
smallvec = "1.6.1"
tokio = { version = "1.18.5", features = [] }
tokio-util = { version = "0.7", features = ["io", "codec"] }
tracing = { version = "0.1.30", default-features = false, features = ["log"] }
# http2
h2 = { version = "0.3.9", optional = true }
# websockets
local-channel = { version = "0.1", optional = true }
base64 = { version = "0.13", optional = true }
base64 = { version = "0.21", optional = true }
rand = { version = "0.8", optional = true }
sha-1 = { version = "0.10", optional = true }
sha1 = { version = "0.10", optional = true }
# openssl/rustls
actix-tls = { version = "3", default-features = false, optional = true }
@ -94,33 +96,34 @@ actix-tls = { version = "3", default-features = false, optional = true }
# compress-*
brotli = { version = "3.3.3", optional = true }
flate2 = { version = "1.0.13", optional = true }
zstd = { version = "0.10", optional = true }
zstd = { version = "0.12", optional = true }
[dev-dependencies]
actix-http-test = { version = "3.0.0-beta.13", features = ["openssl"] }
actix-http-test = { version = "3", features = ["openssl"] }
actix-server = "2"
actix-tls = { version = "3", features = ["openssl"] }
actix-web = "4.0.0"
actix-web = "4"
async-stream = "0.3"
criterion = { version = "0.3", features = ["html_reports"] }
criterion = { version = "0.4", features = ["html_reports"] }
env_logger = "0.9"
futures-util = { version = "0.3.7", default-features = false, features = ["alloc"] }
futures-util = { version = "0.3.17", default-features = false, features = ["alloc"] }
memchr = "2.4"
once_cell = "1.9"
rcgen = "0.8"
rcgen = "0.9"
regex = "1.3"
rustls-pemfile = "0.2"
rustversion = "1"
rustls-pemfile = "1"
serde = { version = "1.0", features = ["derive"] }
serde_json = "1.0"
static_assertions = "1"
tls-openssl = { package = "openssl", version = "0.10.9" }
tls-rustls = { package = "rustls", version = "0.20.0" }
tokio = { version = "1.8.4", features = ["net", "rt", "macros"] }
tokio = { version = "1.18.5", features = ["net", "rt", "macros"] }
[[example]]
name = "ws"
required-features = ["rustls"]
required-features = ["ws", "rustls"]
[[bench]]
name = "write-camel-case"

View File

@ -3,18 +3,18 @@
> HTTP primitives for the Actix ecosystem.
[![crates.io](https://img.shields.io/crates/v/actix-http?label=latest)](https://crates.io/crates/actix-http)
[![Documentation](https://docs.rs/actix-http/badge.svg?version=3.0.0)](https://docs.rs/actix-http/3.0.0)
[![Version](https://img.shields.io/badge/rustc-1.54+-ab6000.svg)](https://blog.rust-lang.org/2021/05/06/Rust-1.54.0.html)
[![Documentation](https://docs.rs/actix-http/badge.svg?version=3.3.0)](https://docs.rs/actix-http/3.3.0)
![Version](https://img.shields.io/badge/rustc-1.59+-ab6000.svg)
![MIT or Apache 2.0 licensed](https://img.shields.io/crates/l/actix-http.svg)
<br />
[![dependency status](https://deps.rs/crate/actix-http/3.0.0/status.svg)](https://deps.rs/crate/actix-http/3.0.0)
[![dependency status](https://deps.rs/crate/actix-http/3.3.0/status.svg)](https://deps.rs/crate/actix-http/3.3.0)
[![Download](https://img.shields.io/crates/d/actix-http.svg)](https://crates.io/crates/actix-http)
[![Chat on Discord](https://img.shields.io/discord/771444961383153695?label=chat&logo=discord)](https://discord.gg/NWpN5mmg3x)
## Documentation & Resources
- [API Documentation](https://docs.rs/actix-http)
- Minimum Supported Rust Version (MSRV): 1.54
- Minimum Supported Rust Version (MSRV): 1.59
## Example
@ -25,7 +25,7 @@ use actix_http::{HttpService, Response};
use actix_server::Server;
use futures_util::future;
use http::header::HeaderValue;
use log::info;
use tracing::info;
#[actix_rt::main]
async fn main() -> io::Result<()> {
@ -49,18 +49,3 @@ async fn main() -> io::Result<()> {
.await
}
```
## License
This project is licensed under either of
- Apache License, Version 2.0, ([LICENSE-APACHE](LICENSE-APACHE) or [http://www.apache.org/licenses/LICENSE-2.0](http://www.apache.org/licenses/LICENSE-2.0))
- MIT license ([LICENSE-MIT](LICENSE-MIT) or [http://opensource.org/licenses/MIT](http://opensource.org/licenses/MIT))
at your option.
## Code of Conduct
Contribution to the actix-http crate is organized under the terms of the
Contributor Covenant, the maintainer of actix-http, @fafhrd91, promises to
intervene to uphold that code of conduct.

View File

@ -1,3 +1,5 @@
#![allow(clippy::uninlined_format_args)]
use criterion::{criterion_group, criterion_main, BenchmarkId, Criterion};
const CODES: &[u16] = &[0, 1000, 201, 800, 550];

View File

@ -114,11 +114,12 @@ mod _original {
use std::mem::MaybeUninit;
pub fn parse_headers(src: &mut BytesMut) -> usize {
#![allow(clippy::uninit_assumed_init)]
#![allow(invalid_value, clippy::uninit_assumed_init)]
let mut headers: [HeaderIndex; MAX_HEADERS] =
unsafe { MaybeUninit::uninit().assume_init() };
#[allow(invalid_value)]
let mut parsed: [httparse::Header<'_>; MAX_HEADERS] =
unsafe { MaybeUninit::uninit().assume_init() };

View File

@ -18,7 +18,8 @@ async fn main() -> std::io::Result<()> {
HttpService::build()
// pass the app to service builder
// map_config is used to map App's configuration to ServiceBuilder
.finish(map_config(app, |_| AppConfig::default()))
// h1 will configure server to only use HTTP/1.1
.h1(map_config(app, |_| AppConfig::default()))
.tcp()
})?
.run()

View File

@ -5,6 +5,7 @@ use actix_server::Server;
use bytes::BytesMut;
use futures_util::StreamExt as _;
use http::header::HeaderValue;
use tracing::info;
#[actix_rt::main]
async fn main() -> io::Result<()> {
@ -22,7 +23,7 @@ async fn main() -> io::Result<()> {
body.extend_from_slice(&item?);
}
log::info!("request body: {:?}", body);
info!("request body: {:?}", body);
let res = Response::build(StatusCode::OK)
.insert_header(("x-head", HeaderValue::from_static("dummy value!")))

View File

@ -0,0 +1,29 @@
//! An example that supports automatic selection of plaintext h1/h2c connections.
//!
//! Notably, both the following commands will work.
//! ```console
//! $ curl --http1.1 'http://localhost:8080/'
//! $ curl --http2-prior-knowledge 'http://localhost:8080/'
//! ```
use std::{convert::Infallible, io};
use actix_http::{HttpService, Request, Response, StatusCode};
use actix_server::Server;
#[tokio::main(flavor = "current_thread")]
async fn main() -> io::Result<()> {
env_logger::init_from_env(env_logger::Env::new().default_filter_or("info"));
Server::build()
.bind("h2c-detect", ("127.0.0.1", 8080), || {
HttpService::build()
.finish(|_req: Request| async move {
Ok::<_, Infallible>(Response::build(StatusCode::OK).body("Hello!"))
})
.tcp_auto_h2c()
})?
.workers(2)
.run()
.await
}

View File

@ -1,9 +1,8 @@
use std::{convert::Infallible, io, time::Duration};
use actix_http::{
header::HeaderValue, HttpMessage, HttpService, Request, Response, StatusCode,
};
use actix_http::{header::HeaderValue, HttpService, Request, Response, StatusCode};
use actix_server::Server;
use tracing::info;
#[actix_rt::main]
async fn main() -> io::Result<()> {
@ -18,12 +17,12 @@ async fn main() -> io::Result<()> {
ext.insert(42u32);
})
.finish(|req: Request| async move {
log::info!("{:?}", req);
info!("{:?}", req);
let mut res = Response::build(StatusCode::OK);
res.insert_header(("x-head", HeaderValue::from_static("dummy value!")));
let forty_two = req.extensions().get::<u32>().unwrap().to_string();
let forty_two = req.conn_data::<u32>().unwrap().to_string();
res.insert_header((
"x-forty-two",
HeaderValue::from_str(&forty_two).unwrap(),

View File

@ -12,6 +12,7 @@ use actix_http::{body::BodyStream, HttpService, Response};
use actix_server::Server;
use async_stream::stream;
use bytes::Bytes;
use tracing::info;
#[actix_rt::main]
async fn main() -> io::Result<()> {
@ -21,7 +22,7 @@ async fn main() -> io::Result<()> {
.bind("streaming-error", ("127.0.0.1", 8080), || {
HttpService::build()
.finish(|req| async move {
log::info!("{:?}", req);
info!("{:?}", req);
let res = Response::ok();
Ok::<_, Infallible>(res.set_body(BodyStream::new(stream! {

View File

@ -10,13 +10,14 @@ use std::{
time::Duration,
};
use actix_codec::Encoder;
use actix_http::{body::BodyStream, error::Error, ws, HttpService, Request, Response};
use actix_rt::time::{interval, Interval};
use actix_server::Server;
use bytes::{Bytes, BytesMut};
use bytestring::ByteString;
use futures_core::{ready, Stream};
use tokio_util::codec::Encoder;
use tracing::{info, trace};
#[actix_rt::main]
async fn main() -> io::Result<()> {
@ -34,13 +35,13 @@ async fn main() -> io::Result<()> {
}
async fn handler(req: Request) -> Result<Response<BodyStream<Heartbeat>>, Error> {
log::info!("handshaking");
info!("handshaking");
let mut res = ws::handshake(req.head())?;
// handshake will always fail under HTTP/2
log::info!("responding");
Ok(res.message_body(BodyStream::new(Heartbeat::new(ws::Codec::new())))?)
info!("responding");
res.message_body(BodyStream::new(Heartbeat::new(ws::Codec::new())))
}
struct Heartbeat {
@ -61,7 +62,7 @@ impl Stream for Heartbeat {
type Item = Result<Bytes, Error>;
fn poll_next(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Option<Self::Item>> {
log::trace!("poll");
trace!("poll");
ready!(self.as_mut().interval.poll_tick(cx));

View File

@ -120,8 +120,28 @@ pub trait MessageBody {
}
mod foreign_impls {
use std::{borrow::Cow, ops::DerefMut};
use super::*;
impl<B> MessageBody for &mut B
where
B: MessageBody + Unpin + ?Sized,
{
type Error = B::Error;
fn size(&self) -> BodySize {
(**self).size()
}
fn poll_next(
mut self: Pin<&mut Self>,
cx: &mut Context<'_>,
) -> Poll<Option<Result<Bytes, Self::Error>>> {
Pin::new(&mut **self).poll_next(cx)
}
}
impl MessageBody for Infallible {
type Error = Infallible;
@ -179,8 +199,9 @@ mod foreign_impls {
}
}
impl<B> MessageBody for Pin<Box<B>>
impl<T, B> MessageBody for Pin<T>
where
T: DerefMut<Target = B> + Unpin,
B: MessageBody + ?Sized,
{
type Error = B::Error;
@ -303,6 +324,39 @@ mod foreign_impls {
}
}
impl MessageBody for Cow<'static, [u8]> {
type Error = Infallible;
#[inline]
fn size(&self) -> BodySize {
BodySize::Sized(self.len() as u64)
}
#[inline]
fn poll_next(
self: Pin<&mut Self>,
_cx: &mut Context<'_>,
) -> Poll<Option<Result<Bytes, Self::Error>>> {
if self.is_empty() {
Poll::Ready(None)
} else {
let bytes = match mem::take(self.get_mut()) {
Cow::Borrowed(b) => Bytes::from_static(b),
Cow::Owned(b) => Bytes::from(b),
};
Poll::Ready(Some(Ok(bytes)))
}
}
#[inline]
fn try_into_bytes(self) -> Result<Bytes, Self> {
match self {
Cow::Borrowed(b) => Ok(Bytes::from_static(b)),
Cow::Owned(b) => Ok(Bytes::from(b)),
}
}
}
impl MessageBody for &'static str {
type Error = Infallible;
@ -358,6 +412,39 @@ mod foreign_impls {
}
}
impl MessageBody for Cow<'static, str> {
type Error = Infallible;
#[inline]
fn size(&self) -> BodySize {
BodySize::Sized(self.len() as u64)
}
#[inline]
fn poll_next(
self: Pin<&mut Self>,
_cx: &mut Context<'_>,
) -> Poll<Option<Result<Bytes, Self::Error>>> {
if self.is_empty() {
Poll::Ready(None)
} else {
let bytes = match mem::take(self.get_mut()) {
Cow::Borrowed(s) => Bytes::from_static(s.as_bytes()),
Cow::Owned(s) => Bytes::from(s.into_bytes()),
};
Poll::Ready(Some(Ok(bytes)))
}
}
#[inline]
fn try_into_bytes(self) -> Result<Bytes, Self> {
match self {
Cow::Borrowed(s) => Ok(Bytes::from_static(s.as_bytes())),
Cow::Owned(s) => Ok(Bytes::from(s.into_bytes())),
}
}
}
impl MessageBody for bytestring::ByteString {
type Error = Infallible;
@ -445,6 +532,7 @@ mod tests {
use actix_rt::pin;
use actix_utils::future::poll_fn;
use bytes::{Bytes, BytesMut};
use futures_util::stream;
use super::*;
use crate::body::{self, EitherBody};
@ -481,6 +569,35 @@ mod tests {
assert_poll_next_none!(pl);
}
#[actix_rt::test]
async fn mut_equivalence() {
assert_eq!(().size(), BodySize::Sized(0));
assert_eq!(().size(), (&(&mut ())).size());
let pl = &mut ();
pin!(pl);
assert_poll_next_none!(pl);
let pl = &mut Box::new(());
pin!(pl);
assert_poll_next_none!(pl);
let mut body = body::SizedStream::new(
8,
stream::iter([
Ok::<_, std::io::Error>(Bytes::from("1234")),
Ok(Bytes::from("5678")),
]),
);
let body = &mut body;
assert_eq!(body.size(), BodySize::Sized(8));
pin!(body);
assert_poll_next!(body, Bytes::from_static(b"1234"));
assert_poll_next!(body, Bytes::from_static(b"5678"));
assert_poll_next_none!(body);
}
#[allow(clippy::let_unit_value)]
#[actix_rt::test]
async fn test_unit() {
let pl = ();
@ -606,4 +723,18 @@ mod tests {
let not_body = resp_body.downcast_ref::<()>();
assert!(not_body.is_none());
}
#[actix_rt::test]
async fn non_owning_to_bytes() {
let mut body = BoxBody::new(());
let bytes = body::to_bytes(&mut body).await.unwrap();
assert_eq!(bytes, Bytes::new());
let mut body = body::BodyStream::new(stream::iter([
Ok::<_, std::io::Error>(Bytes::from("1234")),
Ok(Bytes::from("5678")),
]));
let bytes = body::to_bytes(&mut body).await.unwrap();
assert_eq!(bytes, Bytes::from_static(b"12345678"));
}
}
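Taken together, the new `Cow` impls above mean handlers built directly on actix-http can return borrowed or owned string data without boxing; a minimal sketch, with the helper name and branching purely illustrative:

```rust
use std::borrow::Cow;

use actix_http::Response;

// Illustrative only: pick a static greeting for the common case and an owned
// String otherwise; both arms satisfy MessageBody under the new Cow impls.
fn greeting(name: Option<&str>) -> Response<Cow<'static, str>> {
    let body: Cow<'static, str> = match name {
        Some(name) => Cow::Owned(format!("Hello, {name}!")),
        None => Cow::Borrowed("Hello!"),
    };

    Response::ok().set_body(body)
}
```

The `&mut B` impl plays a similar role for readers: `body::to_bytes(&mut body)` can now drain a body without consuming it, as exercised by the `non_owning_to_bytes` test above.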

View File

@ -44,7 +44,7 @@ where
#[inline]
fn size(&self) -> BodySize {
BodySize::Sized(self.size as u64)
BodySize::Sized(self.size)
}
/// Attempts to pull out the next value of the underlying [`Stream`].

View File

@ -42,7 +42,7 @@ pub async fn to_bytes<B: MessageBody>(body: B) -> Result<Bytes, B::Error> {
let body = body.as_mut();
match ready!(body.poll_next(cx)) {
Some(Ok(bytes)) => buf.extend_from_slice(&*bytes),
Some(Ok(bytes)) => buf.extend_from_slice(&bytes),
None => return Poll::Ready(Ok(())),
Some(Err(err)) => return Poll::Ready(Err(err)),
}

View File

@ -186,7 +186,7 @@ where
self
}
/// Finish service configuration and create a HTTP Service for HTTP/1 protocol.
/// Finish service configuration and create a service for the HTTP/1 protocol.
pub fn h1<F, B>(self, service: F) -> H1Service<T, S, B, X, U>
where
B: MessageBody,
@ -209,8 +209,9 @@ where
.on_connect_ext(self.on_connect_ext)
}
/// Finish service configuration and create a HTTP service for HTTP/2 protocol.
/// Finish service configuration and create a service for the HTTP/2 protocol.
#[cfg(feature = "http2")]
#[cfg_attr(docsrs, doc(cfg(feature = "http2")))]
pub fn h2<F, B>(self, service: F) -> crate::h2::H2Service<T, S, B>
where
F: IntoServiceFactory<S, Request>,

View File

@ -35,7 +35,7 @@ impl Default for ServiceConfig {
}
impl ServiceConfig {
/// Create instance of `ServiceConfig`
/// Create instance of `ServiceConfig`.
pub fn new(
keep_alive: KeepAlive,
client_request_timeout: Duration,

View File

@ -17,6 +17,7 @@ use pin_project_lite::pin_project;
#[cfg(feature = "compress-gzip")]
use flate2::write::{GzEncoder, ZlibEncoder};
use tracing::trace;
#[cfg(feature = "compress-zstd")]
use zstd::stream::write::Encoder as ZstdEncoder;
@ -256,7 +257,7 @@ fn update_head(encoding: ContentEncoding, head: &mut ResponseHead) {
head.headers_mut()
.insert(header::CONTENT_ENCODING, encoding.to_header_value());
head.headers_mut()
.insert(header::VARY, HeaderValue::from_static("accept-encoding"));
.append(header::VARY, HeaderValue::from_static("accept-encoding"));
head.no_chunking(false);
}
@ -356,7 +357,7 @@ impl ContentEncoder {
ContentEncoder::Brotli(ref mut encoder) => match encoder.write_all(data) {
Ok(_) => Ok(()),
Err(err) => {
log::trace!("Error decoding br encoding: {}", err);
trace!("Error decoding br encoding: {}", err);
Err(err)
}
},
@ -365,7 +366,7 @@ impl ContentEncoder {
ContentEncoder::Gzip(ref mut encoder) => match encoder.write_all(data) {
Ok(_) => Ok(()),
Err(err) => {
log::trace!("Error decoding gzip encoding: {}", err);
trace!("Error decoding gzip encoding: {}", err);
Err(err)
}
},
@ -374,7 +375,7 @@ impl ContentEncoder {
ContentEncoder::Deflate(ref mut encoder) => match encoder.write_all(data) {
Ok(_) => Ok(()),
Err(err) => {
log::trace!("Error decoding deflate encoding: {}", err);
trace!("Error decoding deflate encoding: {}", err);
Err(err)
}
},
@ -383,7 +384,7 @@ impl ContentEncoder {
ContentEncoder::Zstd(ref mut encoder) => match encoder.write_all(data) {
Ok(_) => Ok(()),
Err(err) => {
log::trace!("Error decoding ztsd encoding: {}", err);
trace!("Error decoding ztsd encoding: {}", err);
Err(err)
}
},

View File

@ -294,6 +294,7 @@ impl std::error::Error for PayloadError {
PayloadError::Overflow => None,
PayloadError::UnknownLength => None,
#[cfg(feature = "http2")]
#[cfg_attr(docsrs, doc(cfg(feature = "http2")))]
PayloadError::Http2Payload(err) => Some(err),
PayloadError::Io(err) => Some(err),
}
@ -351,6 +352,7 @@ pub enum DispatchError {
/// HTTP/2 error.
#[display(fmt = "{}", _0)]
#[cfg(feature = "http2")]
#[cfg_attr(docsrs, doc(cfg(feature = "http2")))]
H2(h2::Error),
/// The first request did not complete within the specified timeout.
@ -388,7 +390,7 @@ impl StdError for DispatchError {
/// A set of error that can occur during parsing content type.
#[derive(Debug, Display, Error)]
#[cfg_attr(test, derive(PartialEq))]
#[cfg_attr(test, derive(PartialEq, Eq))]
#[non_exhaustive]
pub enum ContentTypeError {
/// Can not parse content type

View File

@ -1,9 +1,30 @@
use std::{
any::{Any, TypeId},
collections::HashMap,
fmt,
hash::{BuildHasherDefault, Hasher},
};
use ahash::AHashMap;
/// A hasher for `TypeId`s that takes advantage of its known characteristics.
///
/// The author of the `anymap` crate has done research on the topic:
/// https://github.com/chris-morgan/anymap/blob/2e9a5704/src/lib.rs#L599
#[derive(Debug, Default)]
struct NoOpHasher(u64);
impl Hasher for NoOpHasher {
fn write(&mut self, _bytes: &[u8]) {
unimplemented!("This NoOpHasher can only handle u64s")
}
fn write_u64(&mut self, i: u64) {
self.0 = i;
}
fn finish(&self) -> u64 {
self.0
}
}
/// A type map for request extensions.
///
@ -11,7 +32,7 @@ use ahash::AHashMap;
#[derive(Default)]
pub struct Extensions {
/// Use AHasher with a std HashMap with for faster lookups on the small `TypeId` keys.
map: AHashMap<TypeId, Box<dyn Any>>,
map: HashMap<TypeId, Box<dyn Any>, BuildHasherDefault<NoOpHasher>>,
}
impl Extensions {
@ -19,7 +40,7 @@ impl Extensions {
#[inline]
pub fn new() -> Extensions {
Extensions {
map: AHashMap::new(),
map: HashMap::default(),
}
}

View File

@ -1,6 +1,7 @@
use std::{io, task::Poll};
use bytes::{Buf as _, Bytes, BytesMut};
use tracing::{debug, trace};
macro_rules! byte (
($rdr:ident) => ({
@ -14,7 +15,7 @@ macro_rules! byte (
})
);
#[derive(Debug, PartialEq, Clone)]
#[derive(Debug, Clone, PartialEq, Eq)]
pub(super) enum ChunkedState {
Size,
SizeLws,
@ -70,13 +71,13 @@ impl ChunkedState {
match size.checked_mul(radix) {
Some(n) => {
*size = n as u64;
*size = n;
*size += rem as u64;
Poll::Ready(Ok(ChunkedState::Size))
}
None => {
log::debug!("chunk size would overflow u64");
debug!("chunk size would overflow u64");
Poll::Ready(Err(io::Error::new(
io::ErrorKind::InvalidInput,
"Invalid chunk size line: Size is too big",
@ -124,7 +125,7 @@ impl ChunkedState {
rem: &mut u64,
buf: &mut Option<Bytes>,
) -> Poll<Result<ChunkedState, io::Error>> {
log::trace!("Chunked read, remaining={:?}", rem);
trace!("Chunked read, remaining={:?}", rem);
let len = rdr.len() as u64;
if len == 0 {

View File

@ -1,9 +1,9 @@
use std::{fmt, io};
use actix_codec::{Decoder, Encoder};
use bitflags::bitflags;
use bytes::{Bytes, BytesMut};
use http::{Method, Version};
use tokio_util::codec::{Decoder, Encoder};
use super::{
decoder::{self, PayloadDecoder, PayloadItem, PayloadType},

View File

@ -1,9 +1,9 @@
use std::{fmt, io};
use actix_codec::{Decoder, Encoder};
use bitflags::bitflags;
use bytes::BytesMut;
use http::{Method, Version};
use tokio_util::codec::{Decoder, Encoder};
use super::{
decoder::{self, PayloadDecoder, PayloadItem, PayloadType},

View File

@ -6,7 +6,7 @@ use http::{
header::{self, HeaderName, HeaderValue},
Method, StatusCode, Uri, Version,
};
use log::{debug, error, trace};
use tracing::{debug, error, trace};
use super::chunked::ChunkedState;
use crate::{error::ParseError, header::HeaderMap, ConnectionType, Request, ResponseHead};
@ -46,6 +46,23 @@ pub(crate) enum PayloadLength {
None,
}
impl PayloadLength {
/// Returns true if variant is `None`.
fn is_none(&self) -> bool {
matches!(self, Self::None)
}
/// Returns true if variant represents a zero-length (not none) payload.
fn is_zero(&self) -> bool {
matches!(
self,
PayloadLength::Payload(PayloadType::Payload(PayloadDecoder {
kind: Kind::Length(0)
}))
)
}
}
pub(crate) trait MessageType: Sized {
fn set_connection_type(&mut self, conn_type: Option<ConnectionType>);
@ -59,6 +76,7 @@ pub(crate) trait MessageType: Sized {
&mut self,
slice: &Bytes,
raw_headers: &[HeaderIndex],
version: Version,
) -> Result<PayloadLength, ParseError> {
let mut ka = None;
let mut has_upgrade_websocket = false;
@ -87,21 +105,23 @@ pub(crate) trait MessageType: Sized {
return Err(ParseError::Header);
}
header::CONTENT_LENGTH => match value.to_str() {
Ok(s) if s.trim().starts_with('+') => {
debug!("illegal Content-Length: {:?}", s);
header::CONTENT_LENGTH => match value.to_str().map(str::trim) {
Ok(val) if val.starts_with('+') => {
debug!("illegal Content-Length: {:?}", val);
return Err(ParseError::Header);
}
Ok(s) => {
if let Ok(len) = s.parse::<u64>() {
if len != 0 {
content_length = Some(len);
}
Ok(val) => {
if let Ok(len) = val.parse::<u64>() {
// accept 0 lengths here and remove them in `decode` after all
// headers have been processed to prevent request smuggling issues
content_length = Some(len);
} else {
debug!("illegal Content-Length: {:?}", s);
debug!("illegal Content-Length: {:?}", val);
return Err(ParseError::Header);
}
}
Err(_) => {
debug!("illegal Content-Length: {:?}", value);
return Err(ParseError::Header);
@ -114,22 +134,23 @@ pub(crate) trait MessageType: Sized {
return Err(ParseError::Header);
}
header::TRANSFER_ENCODING => {
header::TRANSFER_ENCODING if version == Version::HTTP_11 => {
seen_te = true;
if let Ok(s) = value.to_str().map(str::trim) {
if s.eq_ignore_ascii_case("chunked") {
if let Ok(val) = value.to_str().map(str::trim) {
if val.eq_ignore_ascii_case("chunked") {
chunked = true;
} else if s.eq_ignore_ascii_case("identity") {
} else if val.eq_ignore_ascii_case("identity") {
// allow silently since multiple TE headers are already checked
} else {
debug!("illegal Transfer-Encoding: {:?}", s);
debug!("illegal Transfer-Encoding: {:?}", val);
return Err(ParseError::Header);
}
} else {
return Err(ParseError::Header);
}
}
// connection keep-alive state
header::CONNECTION => {
ka = if let Ok(conn) = value.to_str().map(str::trim) {
@ -146,6 +167,7 @@ pub(crate) trait MessageType: Sized {
None
};
}
header::UPGRADE => {
if let Ok(val) = value.to_str().map(str::trim) {
if val.eq_ignore_ascii_case("websocket") {
@ -153,19 +175,23 @@ pub(crate) trait MessageType: Sized {
}
}
}
header::EXPECT => {
let bytes = value.as_bytes();
if bytes.len() >= 4 && &bytes[0..4] == b"100-" {
expect = true;
}
}
_ => {}
}
headers.append(name, value);
}
}
self.set_connection_type(ka);
if expect {
self.set_expect()
}
@ -249,7 +275,22 @@ impl MessageType for Request {
let mut msg = Request::new();
// convert headers
let length = msg.set_headers(&src.split_to(len).freeze(), &headers[..h_len])?;
let mut length =
msg.set_headers(&src.split_to(len).freeze(), &headers[..h_len], ver)?;
// disallow HTTP/1.0 POST requests that do not contain a Content-Length header
// see https://datatracker.ietf.org/doc/html/rfc1945#section-7.2.2
if ver == Version::HTTP_10 && method == Method::POST && length.is_none() {
debug!("no Content-Length specified for HTTP/1.0 POST request");
return Err(ParseError::Header);
}
// Remove CL value if 0 now that all headers and HTTP/1.0 special cases are processed.
// Protects against some request smuggling attacks.
// See https://github.com/actix/actix-web/issues/2767.
if length.is_zero() {
length = PayloadLength::None;
}
// payload decoder
let decoder = match length {
@ -293,22 +334,35 @@ impl MessageType for ResponseHead {
let mut headers: [HeaderIndex; MAX_HEADERS] = EMPTY_HEADER_INDEX_ARRAY;
let (len, ver, status, h_len) = {
let mut parsed: [httparse::Header<'_>; MAX_HEADERS] = EMPTY_HEADER_ARRAY;
// SAFETY:
// Create an uninitialized array of `MaybeUninit`. The `assume_init` is safe because the
// type we are claiming to have initialized here is a bunch of `MaybeUninit`s, which
// do not require initialization.
let mut parsed = unsafe {
MaybeUninit::<[MaybeUninit<httparse::Header<'_>>; MAX_HEADERS]>::uninit()
.assume_init()
};
let mut res = httparse::Response::new(&mut parsed);
match res.parse(src)? {
let mut res = httparse::Response::new(&mut []);
let mut config = httparse::ParserConfig::default();
config.allow_spaces_after_header_name_in_responses(true);
match config.parse_response_with_uninit_headers(&mut res, src, &mut parsed)? {
httparse::Status::Complete(len) => {
let version = if res.version.unwrap() == 1 {
Version::HTTP_11
} else {
Version::HTTP_10
};
let status = StatusCode::from_u16(res.code.unwrap())
.map_err(|_| ParseError::Status)?;
HeaderIndex::record(src, res.headers, &mut headers);
(len, version, status, res.headers.len())
}
httparse::Status::Partial => {
return if src.len() >= MAX_BUFFER_SIZE {
error!("MAX_BUFFER_SIZE unprocessed data reached, closing");
@ -324,7 +378,15 @@ impl MessageType for ResponseHead {
msg.version = ver;
// convert headers
let length = msg.set_headers(&src.split_to(len).freeze(), &headers[..h_len])?;
let mut length =
msg.set_headers(&src.split_to(len).freeze(), &headers[..h_len], ver)?;
// Remove CL value if 0 now that all headers and HTTP/1.0 special cases are processed.
// Protects against some request smuggling attacks.
// See https://github.com/actix/actix-web/issues/2767.
if length.is_zero() {
length = PayloadLength::None;
}
// message payload
let decoder = if let PayloadLength::Payload(pl) = length {
@ -360,9 +422,6 @@ pub(crate) const EMPTY_HEADER_INDEX: HeaderIndex = HeaderIndex {
pub(crate) const EMPTY_HEADER_INDEX_ARRAY: [HeaderIndex; MAX_HEADERS] =
[EMPTY_HEADER_INDEX; MAX_HEADERS];
pub(crate) const EMPTY_HEADER_ARRAY: [httparse::Header<'static>; MAX_HEADERS] =
[httparse::EMPTY_HEADER; MAX_HEADERS];
impl HeaderIndex {
pub(crate) fn record(
bytes: &[u8],
@ -381,7 +440,7 @@ impl HeaderIndex {
}
}
#[derive(Debug, Clone, PartialEq)]
#[derive(Debug, Clone, PartialEq, Eq)]
/// Chunk type yielded while decoding a payload.
pub enum PayloadItem {
Chunk(Bytes),
@ -391,7 +450,7 @@ pub enum PayloadItem {
/// Decoder that can handle different payload types.
///
/// If a message body does not use `Transfer-Encoding`, it should include a `Content-Length`.
#[derive(Debug, Clone, PartialEq)]
#[derive(Debug, Clone, PartialEq, Eq)]
pub struct PayloadDecoder {
kind: Kind,
}
@ -417,7 +476,7 @@ impl PayloadDecoder {
}
}
#[derive(Debug, Clone, PartialEq)]
#[derive(Debug, Clone, PartialEq, Eq)]
enum Kind {
/// A reader used when a `Content-Length` header is passed with a positive integer.
Length(u64),
@ -596,14 +655,100 @@ mod tests {
}
#[test]
fn test_parse_post() {
let mut buf = BytesMut::from("POST /test2 HTTP/1.0\r\n\r\n");
fn parse_h09_reject() {
let mut buf = BytesMut::from(
"GET /test1 HTTP/0.9\r\n\
\r\n",
);
let mut reader = MessageDecoder::<Request>::default();
reader.decode(&mut buf).unwrap_err();
let mut buf = BytesMut::from(
"POST /test2 HTTP/0.9\r\n\
Content-Length: 3\r\n\
\r\n
abc",
);
let mut reader = MessageDecoder::<Request>::default();
reader.decode(&mut buf).unwrap_err();
}
#[test]
fn parse_h10_get() {
let mut buf = BytesMut::from(
"GET /test1 HTTP/1.0\r\n\
\r\n",
);
let mut reader = MessageDecoder::<Request>::default();
let (req, _) = reader.decode(&mut buf).unwrap().unwrap();
assert_eq!(req.version(), Version::HTTP_10);
assert_eq!(*req.method(), Method::GET);
assert_eq!(req.path(), "/test1");
let mut buf = BytesMut::from(
"GET /test2 HTTP/1.0\r\n\
Content-Length: 0\r\n\
\r\n",
);
let mut reader = MessageDecoder::<Request>::default();
let (req, _) = reader.decode(&mut buf).unwrap().unwrap();
assert_eq!(req.version(), Version::HTTP_10);
assert_eq!(*req.method(), Method::GET);
assert_eq!(req.path(), "/test2");
let mut buf = BytesMut::from(
"GET /test3 HTTP/1.0\r\n\
Content-Length: 3\r\n\
\r\n
abc",
);
let mut reader = MessageDecoder::<Request>::default();
let (req, _) = reader.decode(&mut buf).unwrap().unwrap();
assert_eq!(req.version(), Version::HTTP_10);
assert_eq!(*req.method(), Method::GET);
assert_eq!(req.path(), "/test3");
}
#[test]
fn parse_h10_post() {
let mut buf = BytesMut::from(
"POST /test1 HTTP/1.0\r\n\
Content-Length: 3\r\n\
\r\n\
abc",
);
let mut reader = MessageDecoder::<Request>::default();
let (req, _) = reader.decode(&mut buf).unwrap().unwrap();
assert_eq!(req.version(), Version::HTTP_10);
assert_eq!(*req.method(), Method::POST);
assert_eq!(req.path(), "/test1");
let mut buf = BytesMut::from(
"POST /test2 HTTP/1.0\r\n\
Content-Length: 0\r\n\
\r\n",
);
let mut reader = MessageDecoder::<Request>::default();
let (req, _) = reader.decode(&mut buf).unwrap().unwrap();
assert_eq!(req.version(), Version::HTTP_10);
assert_eq!(*req.method(), Method::POST);
assert_eq!(req.path(), "/test2");
let mut buf = BytesMut::from(
"POST /test3 HTTP/1.0\r\n\
\r\n",
);
let mut reader = MessageDecoder::<Request>::default();
let err = reader.decode(&mut buf).unwrap_err();
assert!(err.to_string().contains("Header"))
}
#[test]
@ -699,121 +844,98 @@ mod tests {
#[test]
fn test_conn_default_1_0() {
let mut buf = BytesMut::from("GET /test HTTP/1.0\r\n\r\n");
let req = parse_ready!(&mut buf);
let req = parse_ready!(&mut BytesMut::from("GET /test HTTP/1.0\r\n\r\n"));
assert_eq!(req.head().connection_type(), ConnectionType::Close);
}
#[test]
fn test_conn_default_1_1() {
let mut buf = BytesMut::from("GET /test HTTP/1.1\r\n\r\n");
let req = parse_ready!(&mut buf);
let req = parse_ready!(&mut BytesMut::from("GET /test HTTP/1.1\r\n\r\n"));
assert_eq!(req.head().connection_type(), ConnectionType::KeepAlive);
}
#[test]
fn test_conn_close() {
let mut buf = BytesMut::from(
let req = parse_ready!(&mut BytesMut::from(
"GET /test HTTP/1.1\r\n\
connection: close\r\n\r\n",
);
let req = parse_ready!(&mut buf);
));
assert_eq!(req.head().connection_type(), ConnectionType::Close);
let mut buf = BytesMut::from(
let req = parse_ready!(&mut BytesMut::from(
"GET /test HTTP/1.1\r\n\
connection: Close\r\n\r\n",
);
let req = parse_ready!(&mut buf);
));
assert_eq!(req.head().connection_type(), ConnectionType::Close);
}
#[test]
fn test_conn_close_1_0() {
let mut buf = BytesMut::from(
let req = parse_ready!(&mut BytesMut::from(
"GET /test HTTP/1.0\r\n\
connection: close\r\n\r\n",
);
let req = parse_ready!(&mut buf);
));
assert_eq!(req.head().connection_type(), ConnectionType::Close);
}
#[test]
fn test_conn_keep_alive_1_0() {
let mut buf = BytesMut::from(
let req = parse_ready!(&mut BytesMut::from(
"GET /test HTTP/1.0\r\n\
connection: keep-alive\r\n\r\n",
);
let req = parse_ready!(&mut buf);
));
assert_eq!(req.head().connection_type(), ConnectionType::KeepAlive);
let mut buf = BytesMut::from(
let req = parse_ready!(&mut BytesMut::from(
"GET /test HTTP/1.0\r\n\
connection: Keep-Alive\r\n\r\n",
);
let req = parse_ready!(&mut buf);
));
assert_eq!(req.head().connection_type(), ConnectionType::KeepAlive);
}
#[test]
fn test_conn_keep_alive_1_1() {
let mut buf = BytesMut::from(
let req = parse_ready!(&mut BytesMut::from(
"GET /test HTTP/1.1\r\n\
connection: keep-alive\r\n\r\n",
);
let req = parse_ready!(&mut buf);
));
assert_eq!(req.head().connection_type(), ConnectionType::KeepAlive);
}
#[test]
fn test_conn_other_1_0() {
let mut buf = BytesMut::from(
let req = parse_ready!(&mut BytesMut::from(
"GET /test HTTP/1.0\r\n\
connection: other\r\n\r\n",
);
let req = parse_ready!(&mut buf);
));
assert_eq!(req.head().connection_type(), ConnectionType::Close);
}
#[test]
fn test_conn_other_1_1() {
let mut buf = BytesMut::from(
let req = parse_ready!(&mut BytesMut::from(
"GET /test HTTP/1.1\r\n\
connection: other\r\n\r\n",
);
let req = parse_ready!(&mut buf);
));
assert_eq!(req.head().connection_type(), ConnectionType::KeepAlive);
}
#[test]
fn test_conn_upgrade() {
let mut buf = BytesMut::from(
let req = parse_ready!(&mut BytesMut::from(
"GET /test HTTP/1.1\r\n\
upgrade: websockets\r\n\
connection: upgrade\r\n\r\n",
);
let req = parse_ready!(&mut buf);
));
assert!(req.upgrade());
assert_eq!(req.head().connection_type(), ConnectionType::Upgrade);
let mut buf = BytesMut::from(
let req = parse_ready!(&mut BytesMut::from(
"GET /test HTTP/1.1\r\n\
upgrade: Websockets\r\n\
connection: Upgrade\r\n\r\n",
);
let req = parse_ready!(&mut buf);
));
assert!(req.upgrade());
assert_eq!(req.head().connection_type(), ConnectionType::Upgrade);
@ -821,59 +943,62 @@ mod tests {
#[test]
fn test_conn_upgrade_connect_method() {
let mut buf = BytesMut::from(
let req = parse_ready!(&mut BytesMut::from(
"CONNECT /test HTTP/1.1\r\n\
content-type: text/plain\r\n\r\n",
);
let req = parse_ready!(&mut buf);
));
assert!(req.upgrade());
}
#[test]
fn test_headers_content_length_err_1() {
let mut buf = BytesMut::from(
fn test_headers_bad_content_length() {
// string CL
expect_parse_err!(&mut BytesMut::from(
"GET /test HTTP/1.1\r\n\
content-length: line\r\n\r\n",
);
));
expect_parse_err!(&mut buf)
// negative CL
expect_parse_err!(&mut BytesMut::from(
"GET /test HTTP/1.1\r\n\
content-length: -1\r\n\r\n",
));
}
#[test]
fn test_headers_content_length_err_2() {
fn octal_ish_cl_parsed_as_decimal() {
let mut buf = BytesMut::from(
"GET /test HTTP/1.1\r\n\
content-length: -1\r\n\r\n",
"POST /test HTTP/1.1\r\n\
content-length: 011\r\n\r\n",
);
expect_parse_err!(&mut buf);
let mut reader = MessageDecoder::<Request>::default();
let (_req, pl) = reader.decode(&mut buf).unwrap().unwrap();
assert!(matches!(
pl,
PayloadType::Payload(pl) if pl == PayloadDecoder::length(11)
));
}
#[test]
fn test_invalid_header() {
let mut buf = BytesMut::from(
expect_parse_err!(&mut BytesMut::from(
"GET /test HTTP/1.1\r\n\
test line\r\n\r\n",
);
expect_parse_err!(&mut buf);
));
}
#[test]
fn test_invalid_name() {
let mut buf = BytesMut::from(
expect_parse_err!(&mut BytesMut::from(
"GET /test HTTP/1.1\r\n\
test[]: line\r\n\r\n",
);
expect_parse_err!(&mut buf);
));
}
#[test]
fn test_http_request_bad_status_line() {
let mut buf = BytesMut::from("getpath \r\n\r\n");
expect_parse_err!(&mut buf);
expect_parse_err!(&mut BytesMut::from("getpath \r\n\r\n"));
}
#[test]
@ -913,11 +1038,10 @@ mod tests {
#[test]
fn test_http_request_parser_utf8() {
let mut buf = BytesMut::from(
let req = parse_ready!(&mut BytesMut::from(
"GET /test HTTP/1.1\r\n\
x-test: тест\r\n\r\n",
);
let req = parse_ready!(&mut buf);
));
assert_eq!(
req.headers().get("x-test").unwrap().as_bytes(),
@ -927,24 +1051,18 @@ mod tests {
#[test]
fn test_http_request_parser_two_slashes() {
let mut buf = BytesMut::from("GET //path HTTP/1.1\r\n\r\n");
let req = parse_ready!(&mut buf);
let req = parse_ready!(&mut BytesMut::from("GET //path HTTP/1.1\r\n\r\n"));
assert_eq!(req.path(), "//path");
}
#[test]
fn test_http_request_parser_bad_method() {
let mut buf = BytesMut::from("!12%()+=~$ /get HTTP/1.1\r\n\r\n");
expect_parse_err!(&mut buf);
expect_parse_err!(&mut BytesMut::from("!12%()+=~$ /get HTTP/1.1\r\n\r\n"));
}
#[test]
fn test_http_request_parser_bad_version() {
let mut buf = BytesMut::from("GET //get HT/11\r\n\r\n");
expect_parse_err!(&mut buf);
expect_parse_err!(&mut BytesMut::from("GET //get HT/11\r\n\r\n"));
}
#[test]
@ -961,29 +1079,66 @@ mod tests {
#[test]
fn hrs_multiple_content_length() {
let mut buf = BytesMut::from(
expect_parse_err!(&mut BytesMut::from(
"GET / HTTP/1.1\r\n\
Host: example.com\r\n\
Content-Length: 4\r\n\
Content-Length: 2\r\n\
\r\n\
abcd",
);
));
expect_parse_err!(&mut buf);
expect_parse_err!(&mut BytesMut::from(
"GET / HTTP/1.1\r\n\
Host: example.com\r\n\
Content-Length: 0\r\n\
Content-Length: 2\r\n\
\r\n\
ab",
));
}
#[test]
fn hrs_content_length_plus() {
let mut buf = BytesMut::from(
expect_parse_err!(&mut BytesMut::from(
"GET / HTTP/1.1\r\n\
Host: example.com\r\n\
Content-Length: +3\r\n\
\r\n\
000",
));
}
#[test]
fn hrs_te_http10() {
// in HTTP/1.0, transfer encoding is ignored, so the request must contain a CL header
expect_parse_err!(&mut BytesMut::from(
"POST / HTTP/1.0\r\n\
Host: example.com\r\n\
Transfer-Encoding: chunked\r\n\
\r\n\
3\r\n\
aaa\r\n\
0\r\n\
",
));
}
#[test]
fn hrs_cl_and_te_http10() {
// in HTTP/1.0, transfer encoding is simply ignored, so it's fine to have both
let mut buf = BytesMut::from(
"GET / HTTP/1.0\r\n\
Host: example.com\r\n\
Content-Length: 3\r\n\
Transfer-Encoding: chunked\r\n\
\r\n\
000",
);
expect_parse_err!(&mut buf);
parse_ready!(&mut buf);
}
#[test]

View File

@ -8,20 +8,23 @@ use std::{
task::{Context, Poll},
};
use actix_codec::{AsyncRead, AsyncWrite, Decoder as _, Encoder as _, Framed, FramedParts};
use actix_codec::{Framed, FramedParts};
use actix_rt::time::sleep_until;
use actix_service::Service;
use bitflags::bitflags;
use bytes::{Buf, BytesMut};
use futures_core::ready;
use pin_project_lite::pin_project;
use tokio::io::{AsyncRead, AsyncWrite};
use tokio_util::codec::{Decoder as _, Encoder as _};
use tracing::{error, trace};
use crate::{
body::{BodySize, BoxBody, MessageBody},
config::ServiceConfig,
error::{DispatchError, ParseError, PayloadError},
service::HttpFlow,
ConnectionType, Error, Extensions, OnConnectData, Request, Response, StatusCode,
Error, Extensions, OnConnectData, Request, Response, StatusCode,
};
use super::{
@ -336,7 +339,7 @@ where
while written < len {
match io.as_mut().poll_write(cx, &write_buf[written..])? {
Poll::Ready(0) => {
log::error!("write zero; closing");
error!("write zero; closing");
return Poll::Ready(Err(io::Error::new(io::ErrorKind::WriteZero, "")));
}
@ -375,8 +378,6 @@ where
DispatchError::Io(err)
})?;
this.flags.set(Flags::KEEP_ALIVE, this.codec.keep_alive());
Ok(size)
}
@ -459,7 +460,12 @@ where
}
// all messages are dealt with
None => return Ok(PollResponse::DoNothing),
None => {
// start keep-alive if last request allowed it
this.flags.set(Flags::KEEP_ALIVE, this.codec.keep_alive());
return Ok(PollResponse::DoNothing);
}
},
StateProj::ServiceCall { fut } => {
@ -565,7 +571,7 @@ where
}
StateProj::ExpectCall { fut } => {
log::trace!(" calling expect service");
trace!(" calling expect service");
match fut.poll(cx) {
// expect resolved. write continue to buffer and set InnerDispatcher state
@ -687,76 +693,15 @@ where
let can_not_read = !self.can_read(cx);
// limit amount of non-processed requests
if pipeline_queue_full {
if pipeline_queue_full || can_not_read {
return Ok(false);
}
let mut this = self.as_mut().project();
if can_not_read {
log::debug!("cannot read request payload");
if let Some(sender) = &this.payload {
// ...maybe handler does not want to read any more payload...
if let PayloadStatus::Dropped = sender.need_read(cx) {
log::debug!("handler dropped payload early; attempt to clean connection");
// ...in which case poll request payload a few times
loop {
match this.codec.decode(this.read_buf)? {
Some(msg) => {
match msg {
// payload decoded did not yield EOF yet
Message::Chunk(Some(_)) => {
// if non-clean connection, next loop iter will detect empty
// read buffer and close connection
}
// connection is in clean state for next request
Message::Chunk(None) => {
log::debug!("connection successfully cleaned");
// reset dispatcher state
let _ = this.payload.take();
this.state.set(State::None);
// break out of payload decode loop
break;
}
// Either whole payload is read and loop is broken or more data
// was expected in which case connection is closed. In both
// situations dispatcher cannot get here.
Message::Item(_) => {
unreachable!("dispatcher is in payload receive state")
}
}
}
// not enough info to decide if connection is going to be clean or not
None => {
log::error!(
"handler did not read whole payload and dispatcher could not \
drain read buf; return 500 and close connection"
);
this.flags.insert(Flags::SHUTDOWN);
let mut res = Response::internal_server_error().drop_body();
res.head_mut().set_connection_type(ConnectionType::Close);
this.messages.push_back(DispatcherMessage::Error(res));
*this.error = Some(DispatchError::HandlerDroppedPayload);
return Ok(true);
}
}
}
}
} else {
// can_not_read and no request payload
return Ok(false);
}
}
let mut updated = false;
// decode from read buf as many full requests as possible
loop {
match this.codec.decode(this.read_buf) {
Ok(Some(msg)) => {
@ -809,7 +754,7 @@ where
if let Some(ref mut payload) = this.payload {
payload.feed_data(chunk);
} else {
log::error!("Internal server error: unexpected payload chunk");
error!("Internal server error: unexpected payload chunk");
this.flags.insert(Flags::READ_DISCONNECT);
this.messages.push_back(DispatcherMessage::Error(
Response::internal_server_error().drop_body(),
@ -823,7 +768,7 @@ where
if let Some(mut payload) = this.payload.take() {
payload.feed_eof();
} else {
log::error!("Internal server error: unexpected eof");
error!("Internal server error: unexpected eof");
this.flags.insert(Flags::READ_DISCONNECT);
this.messages.push_back(DispatcherMessage::Error(
Response::internal_server_error().drop_body(),
@ -840,7 +785,7 @@ where
Ok(None) => break,
Err(ParseError::Io(err)) => {
log::trace!("I/O error: {}", &err);
trace!("I/O error: {}", &err);
self.as_mut().client_disconnected();
this = self.as_mut().project();
*this.error = Some(DispatchError::Io(err));
@ -848,7 +793,7 @@ where
}
Err(ParseError::TooLarge) => {
log::trace!("request head was too big; returning 431 response");
trace!("request head was too big; returning 431 response");
if let Some(mut payload) = this.payload.take() {
payload.set_error(PayloadError::Overflow);
@ -868,7 +813,7 @@ where
}
Err(err) => {
log::trace!("parse error {}", &err);
trace!("parse error {}", &err);
if let Some(mut payload) = this.payload.take() {
payload.set_error(PayloadError::EncodingCorrupted);
@ -899,10 +844,7 @@ where
if timer.as_mut().poll(cx).is_ready() {
// timeout on first request (slow request) return 408
log::trace!(
"timed out on slow request; \
replying with 408 and closing connection"
);
trace!("timed out on slow request; replying with 408 and closing connection");
let _ = self.as_mut().send_error_response(
Response::with_body(StatusCode::REQUEST_TIMEOUT, ()),
@ -945,7 +887,7 @@ where
// keep-alive timer has timed out
if timer.as_mut().poll(cx).is_ready() {
// no tasks at hand
log::trace!("timer timed out; closing connection");
trace!("timer timed out; closing connection");
this.flags.insert(Flags::SHUTDOWN);
if let Some(deadline) = this.config.client_disconnect_deadline() {
@ -975,7 +917,7 @@ where
// timed-out during shutdown; drop connection
if timer.as_mut().poll(cx).is_ready() {
log::trace!("timed-out during shutdown");
trace!("timed-out during shutdown");
return Err(DispatchError::DisconnectTimeout);
}
}
@ -1036,9 +978,11 @@ where
//
// A Request head too large to parse is only checked on `httparse::Status::Partial`.
if this.payload.is_none() {
// When dispatcher has a payload the responsibility of wake up it would be shift
// to h1::payload::Payload.
match this.payload {
// When dispatcher has a payload the responsibility of wake ups is shifted to
// `h1::payload::Payload` unless the payload needs a read, in which case it
// might not have access to the waker and could result in the dispatcher
// getting stuck until timeout.
//
// Reason:
// Self wake up when there is payload would waste poll and/or result in
@ -1049,7 +993,8 @@ where
// read anymore. At this case read_buf could always remain beyond
// MAX_BUFFER_SIZE and self wake up would be busy poll dispatcher and
// waste resources.
cx.waker().wake_by_ref();
Some(ref p) if p.need_read(cx) != PayloadStatus::Read => {}
_ => cx.waker().wake_by_ref(),
}
return Ok(false);
@ -1061,7 +1006,7 @@ where
this.read_buf.reserve(HW_BUFFER_SIZE - remaining);
}
match actix_codec::poll_read_buf(io.as_mut(), cx, this.read_buf) {
match tokio_util::io::poll_read_buf(io.as_mut(), cx, this.read_buf) {
Poll::Ready(Ok(n)) => {
this.flags.remove(Flags::FINISHED);
@ -1134,12 +1079,12 @@ where
match this.inner.project() {
DispatcherStateProj::Upgrade { fut: upgrade } => upgrade.poll(cx).map_err(|err| {
log::error!("Upgrade handler error: {}", err);
error!("Upgrade handler error: {}", err);
DispatchError::Upgrade
}),
DispatcherStateProj::Normal { mut inner } => {
log::trace!("start flags: {:?}", &inner.flags);
trace!("start flags: {:?}", &inner.flags);
trace_timer_states(
"start",
@ -1246,7 +1191,7 @@ where
// client is gone
if inner.flags.contains(Flags::WRITE_DISCONNECT) {
log::trace!("client is gone; disconnecting");
trace!("client is gone; disconnecting");
return Poll::Ready(Ok(()));
}
@ -1255,14 +1200,14 @@ where
// read half is closed; we do not process any responses
if inner_p.flags.contains(Flags::READ_DISCONNECT) && state_is_none {
log::trace!("read half closed; start shutdown");
trace!("read half closed; start shutdown");
inner_p.flags.insert(Flags::SHUTDOWN);
}
// keep-alive and stream errors
if state_is_none && inner_p.write_buf.is_empty() {
if let Some(err) = inner_p.error.take() {
log::error!("stream error: {}", &err);
error!("stream error: {}", &err);
return Poll::Ready(Err(err));
}
@ -1291,7 +1236,7 @@ where
Poll::Pending
};
log::trace!("end flags: {:?}", &inner.flags);
trace!("end flags: {:?}", &inner.flags);
poll
}
@ -1306,17 +1251,17 @@ fn trace_timer_states(
ka_timer: &TimerState,
shutdown_timer: &TimerState,
) {
log::trace!("{} timers:", label);
trace!("{} timers:", label);
if head_timer.is_enabled() {
log::trace!(" head {}", &head_timer);
trace!(" head {}", &head_timer);
}
if ka_timer.is_enabled() {
log::trace!(" keep-alive {}", &ka_timer);
trace!(" keep-alive {}", &ka_timer);
}
if shutdown_timer.is_enabled() {
log::trace!(" shutdown {}", &shutdown_timer);
trace!(" shutdown {}", &shutdown_timer);
}
}

View File

@ -64,7 +64,7 @@ fn drop_payload_service(
fn echo_payload_service() -> impl Service<Request, Response = Response<Bytes>, Error = Error> {
fn_service(|mut req: Request| {
Box::pin(async move {
use futures_util::stream::StreamExt as _;
use futures_util::StreamExt as _;
let mut pl = req.take_payload();
let mut body = BytesMut::new();
@ -637,7 +637,7 @@ async fn expect_handling() {
if let DispatcherState::Normal { ref inner } = h1.inner {
let io = inner.io.as_ref().unwrap();
let mut res = (&io.write_buf()[..]).to_owned();
let mut res = io.write_buf()[..].to_owned();
stabilize_date_header(&mut res);
assert_eq!(
@ -699,7 +699,7 @@ async fn expect_eager() {
if let DispatcherState::Normal { ref inner } = h1.inner {
let io = inner.io.as_ref().unwrap();
let mut res = (&io.write_buf()[..]).to_owned();
let mut res = io.write_buf()[..].to_owned();
stabilize_date_header(&mut res);
// Despite the content-length header and even though the request payload has not
@ -783,6 +783,9 @@ async fn upgrade_handling() {
.await;
}
// fix in #2624 reverted temporarily
// complete fix tracked in #2745
#[ignore]
#[actix_rt::test]
async fn handler_drop_payload() {
let _ = env_logger::try_init();
@ -929,7 +932,6 @@ fn http_msg(msg: impl AsRef<str>) -> BytesMut {
.as_ref()
.trim()
.split('\n')
.into_iter()
.map(|line| [line.trim_start(), "\r"].concat())
.collect::<Vec<_>>()
.join("\n");

View File

@ -450,7 +450,7 @@ impl TransferEncoding {
buf.extend_from_slice(&msg[..len as usize]);
*remaining -= len as u64;
*remaining -= len;
Ok(*remaining == 0)
} else {
Ok(true)
@ -517,6 +517,7 @@ unsafe fn write_camel_case(value: &[u8], buf: *mut u8, len: usize) {
if let Some(c @ b'a'..=b'z') = iter.next() {
buffer[index] = c & 0b1101_1111;
}
index += 1;
}
index += 1;
@ -528,7 +529,7 @@ mod tests {
use std::rc::Rc;
use bytes::Bytes;
use http::header::AUTHORIZATION;
use http::header::{AUTHORIZATION, UPGRADE_INSECURE_REQUESTS};
use super::*;
use crate::{
@ -559,6 +560,9 @@ mod tests {
head.headers
.insert(CONTENT_TYPE, HeaderValue::from_static("plain/text"));
head.headers
.insert(UPGRADE_INSECURE_REQUESTS, HeaderValue::from_static("1"));
let mut head = RequestHeadType::Owned(head);
let _ = head.encode_headers(
@ -574,6 +578,7 @@ mod tests {
assert!(data.contains("Connection: close\r\n"));
assert!(data.contains("Content-Type: plain/text\r\n"));
assert!(data.contains("Date: date\r\n"));
assert!(data.contains("Upgrade-Insecure-Requests: 1\r\n"));
let _ = head.encode_headers(
&mut bytes,

View File

@ -16,7 +16,7 @@ use crate::error::PayloadError;
/// max buffer size 32k
pub(crate) const MAX_BUFFER_SIZE: usize = 32_768;
#[derive(Debug, PartialEq)]
#[derive(Debug, PartialEq, Eq)]
pub enum PayloadStatus {
Read,
Pause,
@ -252,18 +252,15 @@ impl Inner {
#[cfg(test)]
mod tests {
use std::panic::{RefUnwindSafe, UnwindSafe};
use actix_utils::future::poll_fn;
use static_assertions::{assert_impl_all, assert_not_impl_any};
use super::*;
assert_impl_all!(Payload: Unpin);
assert_not_impl_any!(Payload: Send, Sync, UnwindSafe, RefUnwindSafe);
assert_not_impl_any!(Payload: Send, Sync);
assert_impl_all!(Inner: Unpin, Send, Sync);
assert_not_impl_any!(Inner: UnwindSafe, RefUnwindSafe);
#[actix_rt::test]
async fn test_unread_data() {

View File

@ -13,6 +13,7 @@ use actix_service::{
};
use actix_utils::future::ready;
use futures_core::future::LocalBoxFuture;
use tracing::error;
use crate::{
body::{BoxBody, MessageBody},
@ -133,6 +134,7 @@ mod openssl {
U::InitError: fmt::Debug,
{
/// Create OpenSSL based service.
#[cfg_attr(docsrs, doc(cfg(feature = "openssl")))]
pub fn openssl(
self,
acceptor: SslAcceptor,
@ -195,6 +197,7 @@ mod rustls {
U::InitError: fmt::Debug,
{
/// Create Rustls based service.
#[cfg_attr(docsrs, doc(cfg(feature = "rustls")))]
pub fn rustls(
self,
config: ServerConfig,
@ -305,13 +308,13 @@ where
Box::pin(async move {
let expect = expect
.await
.map_err(|e| log::error!("Init http expect service error: {:?}", e))?;
.map_err(|e| error!("Init http expect service error: {:?}", e))?;
let upgrade = match upgrade {
Some(upgrade) => {
let upgrade = upgrade
.await
.map_err(|e| log::error!("Init http upgrade service error: {:?}", e))?;
.map_err(|e| error!("Init http upgrade service error: {:?}", e))?;
Some(upgrade)
}
None => None,
@ -319,7 +322,7 @@ where
let service = service
.await
.map_err(|e| log::error!("Init http service error: {:?}", e))?;
.map_err(|e| error!("Init http service error: {:?}", e))?;
Ok(H1ServiceHandler::new(
cfg,
@ -357,7 +360,7 @@ where
fn poll_ready(&self, cx: &mut Context<'_>) -> Poll<Result<(), Self::Error>> {
self._poll_ready(cx).map_err(|err| {
log::error!("HTTP/1 service readiness error: {:?}", err);
error!("HTTP/1 service readiness error: {:?}", err);
DispatchError::Service(err)
})
}

View File

@ -1,6 +1,7 @@
use std::{fmt, future::Future, pin::Pin, task::Context};
use actix_rt::time::{Instant, Sleep};
use tracing::trace;
#[derive(Debug)]
pub(super) enum TimerState {
@ -24,7 +25,7 @@ impl TimerState {
pub(super) fn set(&mut self, timer: Sleep, line: u32) {
if matches!(self, Self::Disabled) {
log::trace!("setting disabled timer from line {}", line);
trace!("setting disabled timer from line {}", line);
}
*self = Self::Active {
@ -39,11 +40,11 @@ impl TimerState {
pub(super) fn clear(&mut self, line: u32) {
if matches!(self, Self::Disabled) {
log::trace!("trying to clear a disabled timer from line {}", line);
trace!("trying to clear a disabled timer from line {}", line);
}
if matches!(self, Self::Inactive) {
log::trace!("trying to clear an inactive timer from line {}", line);
trace!("trying to clear an inactive timer from line {}", line);
}
*self = Self::Inactive;

View File

@ -19,8 +19,8 @@ use h2::{
server::{Connection, SendResponse},
Ping, PingPong,
};
use log::{error, trace};
use pin_project_lite::pin_project;
use tracing::{error, trace, warn};
use crate::{
body::{BodySize, BoxBody, MessageBody},
@ -29,7 +29,7 @@ use crate::{
HeaderName, HeaderValue, CONNECTION, CONTENT_LENGTH, DATE, TRANSFER_ENCODING, UPGRADE,
},
service::HttpFlow,
Extensions, OnConnectData, Payload, Request, Response, ResponseHead,
Extensions, Method, OnConnectData, Payload, Request, Response, ResponseHead,
};
const CHUNK_SIZE: usize = 16_384;
@ -67,7 +67,7 @@ where
timer
})
.unwrap_or_else(|| Box::pin(sleep(dur))),
on_flight: false,
in_flight: false,
ping_pong: conn.ping_pong().unwrap(),
});
@ -84,9 +84,14 @@ where
}
struct H2PingPong {
timer: Pin<Box<Sleep>>,
on_flight: bool,
/// Handle to send ping frames to the peer.
ping_pong: PingPong,
/// True when a ping has been sent and is waiting for a reply.
in_flight: bool,
/// Timeout for pong response.
timer: Pin<Box<Sleep>>,
}
impl<T, S, B, X, U> Future for Dispatcher<T, S, B, X, U>
@ -113,6 +118,7 @@ where
let payload = crate::h2::Payload::new(body);
let pl = Payload::H2 { payload };
let mut req = Request::with_payload(pl);
let head_req = parts.method == Method::HEAD;
let head = req.head_mut();
head.uri = parts.uri;
@ -130,10 +136,10 @@ where
actix_rt::spawn(async move {
// resolve service call and send response.
let res = match fut.await {
Ok(res) => handle_response(res.into(), tx, config).await,
Ok(res) => handle_response(res.into(), tx, config, head_req).await,
Err(err) => {
let res: Response<BoxBody> = err.into();
handle_response(res, tx, config).await
handle_response(res, tx, config, head_req).await
}
};
@ -143,7 +149,7 @@ where
DispatchError::SendResponse(err) => {
trace!("Error sending HTTP/2 response: {:?}", err)
}
DispatchError::SendData(err) => log::warn!("{:?}", err),
DispatchError::SendData(err) => warn!("{:?}", err),
DispatchError::ResponseBody(err) => {
error!("Response payload stream error: {:?}", err)
}
@ -152,26 +158,28 @@ where
});
}
Poll::Ready(None) => return Poll::Ready(Ok(())),
Poll::Pending => match this.ping_pong.as_mut() {
Some(ping_pong) => loop {
if ping_pong.on_flight {
// When have on flight ping pong. poll pong and and keep alive timer.
// on success pong received update keep alive timer to determine the next timing of
// ping pong.
if ping_pong.in_flight {
// When there is an in-flight ping-pong, poll the pong and the keep-alive
// timer. On a successful pong, update the keep-alive timer to determine
// the next ping-pong timing.
match ping_pong.ping_pong.poll_pong(cx)? {
Poll::Ready(_) => {
ping_pong.on_flight = false;
ping_pong.in_flight = false;
let dead_line = this.config.keep_alive_deadline().unwrap();
ping_pong.timer.as_mut().reset(dead_line.into());
}
Poll::Pending => {
return ping_pong.timer.as_mut().poll(cx).map(|_| Ok(()))
return ping_pong.timer.as_mut().poll(cx).map(|_| Ok(()));
}
}
} else {
// When there is no on flight ping pong. keep alive timer is used to wait for next
// timing of ping pong. Therefore at this point it serves as an interval instead.
// When there is no in-flight ping-pong, keep-alive timer is used to
// wait for next timing of ping-pong. Therefore, at this point it serves
// as an interval instead.
ready!(ping_pong.timer.as_mut().poll(cx));
ping_pong.ping_pong.send_ping(Ping::opaque())?;
@ -179,7 +187,7 @@ where
let dead_line = this.config.keep_alive_deadline().unwrap();
ping_pong.timer.as_mut().reset(dead_line.into());
ping_pong.on_flight = true;
ping_pong.in_flight = true;
}
},
None => return Poll::Pending,
@ -199,6 +207,7 @@ async fn handle_response<B>(
res: Response<B>,
mut tx: SendResponse<Bytes>,
config: ServiceConfig,
head_req: bool,
) -> Result<(), DispatchError>
where
B: MessageBody,
@ -208,14 +217,14 @@ where
// prepare response.
let mut size = body.size();
let res = prepare_response(config, res.head(), &mut size);
let eof = size.is_eof();
let eof_or_head = size.is_eof() || head_req;
// send response head and return on eof.
let mut stream = tx
.send_response(res, eof)
.send_response(res, eof_or_head)
.map_err(DispatchError::SendResponse)?;
if eof {
if eof_or_head {
return Ok(());
}
@ -287,13 +296,13 @@ fn prepare_response(
_ => {}
}
let _ = match size {
BodySize::None | BodySize::Stream => None,
match size {
BodySize::None | BodySize::Stream => {}
BodySize::Sized(0) => {
#[allow(clippy::declare_interior_mutable_const)]
const HV_ZERO: HeaderValue = HeaderValue::from_static("0");
res.headers_mut().insert(CONTENT_LENGTH, HV_ZERO)
res.headers_mut().insert(CONTENT_LENGTH, HV_ZERO);
}
BodySize::Sized(len) => {
@ -302,7 +311,7 @@ fn prepare_response(
res.headers_mut().insert(
CONTENT_LENGTH,
HeaderValue::from_str(buf.format(*len)).unwrap(),
)
);
}
};

View File

@ -103,11 +103,9 @@ where
#[cfg(test)]
mod tests {
use std::panic::{RefUnwindSafe, UnwindSafe};
use static_assertions::assert_impl_all;
use super::*;
assert_impl_all!(Payload: Unpin, Send, Sync, UnwindSafe, RefUnwindSafe);
assert_impl_all!(Payload: Unpin, Send, Sync);
}

View File

@ -14,7 +14,7 @@ use actix_service::{
};
use actix_utils::future::ready;
use futures_core::{future::LocalBoxFuture, ready};
use log::error;
use tracing::{error, trace};
use crate::{
body::{BoxBody, MessageBody},
@ -117,6 +117,7 @@ mod openssl {
B: MessageBody + 'static,
{
/// Create OpenSSL based service.
#[cfg_attr(docsrs, doc(cfg(feature = "openssl")))]
pub fn openssl(
self,
acceptor: SslAcceptor,
@ -164,6 +165,7 @@ mod rustls {
B: MessageBody + 'static,
{
/// Create Rustls based service.
#[cfg_attr(docsrs, doc(cfg(feature = "rustls")))]
pub fn rustls(
self,
mut config: ServerConfig,
@ -355,7 +357,7 @@ where
}
Err(err) => {
log::trace!("H2 handshake error: {}", err);
trace!("H2 handshake error: {}", err);
Poll::Ready(Err(err))
}
},

View File

@ -0,0 +1,51 @@
//! Common header names not defined in [`http`].
//!
//! Any headers added to this file will need to be re-exported from the list at `crate::header`.
use http::header::HeaderName;
/// Response header field that indicates how caches have handled that response and its corresponding
/// request.
///
/// See [RFC 9211](https://www.rfc-editor.org/rfc/rfc9211) for full semantics.
pub const CACHE_STATUS: HeaderName = HeaderName::from_static("cache-status");
/// Response header field that allows origin servers to control the behavior of CDN caches
/// interposed between them and clients separately from other caches that might handle the response.
///
/// See [RFC 9213](https://www.rfc-editor.org/rfc/rfc9213) for full semantics.
pub const CDN_CACHE_CONTROL: HeaderName = HeaderName::from_static("cdn-cache-control");
/// Response header that prevents a document from loading any cross-origin resources that don't
/// explicitly grant the document permission (using [CORP] or [CORS]).
///
/// [CORP]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Cross-Origin_Resource_Policy_(CORP)
/// [CORS]: https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS
pub const CROSS_ORIGIN_EMBEDDER_POLICY: HeaderName =
HeaderName::from_static("cross-origin-embedder-policy");
/// Response header that allows you to ensure a top-level document does not share a browsing context
/// group with cross-origin documents.
pub const CROSS_ORIGIN_OPENER_POLICY: HeaderName =
HeaderName::from_static("cross-origin-opener-policy");
/// Response header that conveys a desire that the browser blocks no-cors cross-origin/cross-site
/// requests to the given resource.
pub const CROSS_ORIGIN_RESOURCE_POLICY: HeaderName =
HeaderName::from_static("cross-origin-resource-policy");
/// Response header that provides a mechanism to allow and deny the use of browser features in a
/// document or within any `<iframe>` elements in the document.
pub const PERMISSIONS_POLICY: HeaderName = HeaderName::from_static("permissions-policy");
/// Request header (de-facto standard) for identifying the originating IP address of a client
/// connecting to a web server through a proxy server.
pub const X_FORWARDED_FOR: HeaderName = HeaderName::from_static("x-forwarded-for");
/// Request header (de-facto standard) for identifying the original host requested by the client in
/// the `Host` HTTP request header.
pub const X_FORWARDED_HOST: HeaderName = HeaderName::from_static("x-forwarded-host");
/// Request header (de-facto standard) for identifying the protocol that a client used to connect to
/// your proxy or load balancer.
pub const X_FORWARDED_PROTO: HeaderName = HeaderName::from_static("x-forwarded-proto");
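
As an aside (not part of this diff), the new constants behave like any other re-exported `HeaderName` and can be used with `HeaderMap` directly; the header values below are purely illustrative:

```rust
use actix_http::header::{self, HeaderMap, HeaderValue};

fn main() {
    let mut headers = HeaderMap::new();

    // the new constants are ordinary `HeaderName`s
    headers.insert(
        header::CACHE_STATUS,
        HeaderValue::from_static("ExampleCache; hit"),
    );
    headers.insert(
        header::CDN_CACHE_CONTROL,
        HeaderValue::from_static("max-age=3600"),
    );
    headers.insert(header::X_FORWARDED_PROTO, HeaderValue::from_static("https"));

    assert!(headers.contains_key(header::CACHE_STATUS));
    assert_eq!(headers.len(), 3);
}
```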

View File

@ -150,9 +150,7 @@ impl HeaderMap {
/// assert_eq!(map.len(), 3);
/// ```
pub fn len(&self) -> usize {
self.inner
.iter()
.fold(0, |acc, (_, values)| acc + values.len())
self.inner.values().map(|vals| vals.len()).sum()
}
/// Returns the number of _keys_ stored in the map.
@ -309,7 +307,7 @@ impl HeaderMap {
pub fn get_all(&self, key: impl AsHeaderName) -> std::slice::Iter<'_, HeaderValue> {
match self.get_value(key) {
Some(value) => value.iter(),
None => (&[]).iter(),
None => [].iter(),
}
}
@ -552,6 +550,39 @@ impl HeaderMap {
Keys(self.inner.keys())
}
/// Retains only the headers specified by the predicate.
///
/// In other words, removes all headers `(name, val)` for which `retain_fn(&name, &mut val)`
/// returns false.
///
/// The order in which headers are visited should be considered arbitrary.
///
/// # Examples
/// ```
/// # use actix_http::header::{self, HeaderMap, HeaderValue};
/// let mut map = HeaderMap::new();
///
/// map.append(header::HOST, HeaderValue::from_static("duck.com"));
/// map.append(header::SET_COOKIE, HeaderValue::from_static("one=1"));
/// map.append(header::SET_COOKIE, HeaderValue::from_static("two=2"));
///
/// map.retain(|name, val| val.as_bytes().starts_with(b"one"));
///
/// assert_eq!(map.len(), 1);
/// assert!(map.contains_key(&header::SET_COOKIE));
/// ```
pub fn retain<F>(&mut self, mut retain_fn: F)
where
F: FnMut(&HeaderName, &mut HeaderValue) -> bool,
{
self.inner.retain(|name, vals| {
vals.inner.retain(|val| retain_fn(name, val));
// invariant: make sure newly empty value lists are removed
!vals.is_empty()
})
}
/// Clears the map, returning all name-value sets as an iterator.
///
/// Header names will only be yielded for the first value in each set. All items that are
@ -943,6 +974,55 @@ mod tests {
assert!(map.is_empty());
}
#[test]
fn retain() {
let mut map = HeaderMap::new();
map.append(header::LOCATION, HeaderValue::from_static("/test"));
map.append(header::HOST, HeaderValue::from_static("duck.com"));
map.append(header::COOKIE, HeaderValue::from_static("one=1"));
map.append(header::COOKIE, HeaderValue::from_static("two=2"));
assert_eq!(map.len(), 4);
// by value
map.retain(|_, val| !val.as_bytes().contains(&b'/'));
assert_eq!(map.len(), 3);
// by name
map.retain(|name, _| name.as_str() != "cookie");
assert_eq!(map.len(), 1);
// keep but mutate value
map.retain(|_, val| {
*val = HeaderValue::from_static("replaced");
true
});
assert_eq!(map.len(), 1);
assert_eq!(map.get("host").unwrap(), "replaced");
}
#[test]
fn retain_removes_empty_value_lists() {
let mut map = HeaderMap::with_capacity(3);
map.append(header::HOST, HeaderValue::from_static("duck.com"));
map.append(header::HOST, HeaderValue::from_static("duck.com"));
assert_eq!(map.len(), 2);
assert_eq!(map.len_keys(), 1);
assert_eq!(map.inner.len(), 1);
assert_eq!(map.capacity(), 3);
// remove everything
map.retain(|_n, _v| false);
assert_eq!(map.len(), 0);
assert_eq!(map.len_keys(), 0);
assert_eq!(map.inner.len(), 0);
assert_eq!(map.capacity(), 3);
}
#[test]
fn entries_into_iter() {
let mut map = HeaderMap::new();

View File

@ -1,14 +1,18 @@
//! Pre-defined `HeaderName`s, traits for parsing and conversion, and other header utility methods.
// declaring new header consts will yield this error
#![allow(clippy::declare_interior_mutable_const)]
use percent_encoding::{AsciiSet, CONTROLS};
// re-export from http except header map related items
pub use http::header::{
pub use ::http::header::{
HeaderName, HeaderValue, InvalidHeaderName, InvalidHeaderValue, ToStrError,
};
// re-export const header names
pub use http::header::{
// re-export const header names, list is explicit so that any updates to `common` module do not
// conflict with this set
pub use ::http::header::{
ACCEPT, ACCEPT_CHARSET, ACCEPT_ENCODING, ACCEPT_LANGUAGE, ACCEPT_RANGES,
ACCESS_CONTROL_ALLOW_CREDENTIALS, ACCESS_CONTROL_ALLOW_HEADERS,
ACCESS_CONTROL_ALLOW_METHODS, ACCESS_CONTROL_ALLOW_ORIGIN, ACCESS_CONTROL_EXPOSE_HEADERS,
@ -30,22 +34,30 @@ pub use http::header::{
use crate::{error::ParseError, HttpMessage};
mod as_name;
mod common;
mod into_pair;
mod into_value;
pub mod map;
mod shared;
mod utils;
pub use self::as_name::AsHeaderName;
pub use self::into_pair::TryIntoHeaderPair;
pub use self::into_value::TryIntoHeaderValue;
pub use self::map::HeaderMap;
pub use self::shared::{
parse_extended_value, q, Charset, ContentEncoding, ExtendedValue, HttpDate, LanguageTag,
Quality, QualityItem,
pub use self::{
as_name::AsHeaderName,
into_pair::TryIntoHeaderPair,
into_value::TryIntoHeaderValue,
map::HeaderMap,
shared::{
parse_extended_value, q, Charset, ContentEncoding, ExtendedValue, HttpDate,
LanguageTag, Quality, QualityItem,
},
utils::{fmt_comma_delimited, from_comma_delimited, from_one_raw_str, http_percent_encode},
};
pub use self::utils::{
fmt_comma_delimited, from_comma_delimited, from_one_raw_str, http_percent_encode,
// re-export list is explicit so that any updates to `http` do not conflict with this set
pub use self::common::{
CACHE_STATUS, CDN_CACHE_CONTROL, CROSS_ORIGIN_EMBEDDER_POLICY, CROSS_ORIGIN_OPENER_POLICY,
CROSS_ORIGIN_RESOURCE_POLICY, PERMISSIONS_POLICY, X_FORWARDED_FOR, X_FORWARDED_HOST,
X_FORWARDED_PROTO,
};
/// An interface for types that already represent a valid header.

View File

@ -12,7 +12,7 @@ use crate::header::{Charset, HTTP_VALUE};
/// - A character sequence representing the actual value (`value`), separated by single quotes.
///
/// It is defined in [RFC 5987 §3.2](https://datatracker.ietf.org/doc/html/rfc5987#section-3.2).
#[derive(Clone, Debug, PartialEq)]
#[derive(Clone, Debug, PartialEq, Eq)]
pub struct ExtendedValue {
/// The character set that is used to encode the `value` to a string.
pub charset: Charset,

View File

@ -147,7 +147,7 @@ mod tests {
// copy of encoding from actix-web headers
#[allow(clippy::enum_variant_names)] // allow Encoding prefix on EncodingExt
#[derive(Clone, PartialEq, Debug)]
#[derive(Debug, Clone, PartialEq, Eq)]
pub enum Encoding {
Chunked,
Brotli,

View File

@ -21,10 +21,12 @@
#![allow(
clippy::type_complexity,
clippy::too_many_arguments,
clippy::borrow_interior_mutable_const
clippy::borrow_interior_mutable_const,
clippy::uninlined_format_args
)]
#![doc(html_logo_url = "https://actix.rs/img/logo.png")]
#![doc(html_favicon_url = "https://actix.rs/favicon.ico")]
#![cfg_attr(docsrs, feature(doc_cfg))]
pub use ::http::{uri, uri::Uri};
pub use ::http::{Method, StatusCode, Version};
@ -39,6 +41,7 @@ pub mod error;
mod extensions;
pub mod h1;
#[cfg(feature = "http2")]
#[cfg_attr(docsrs, doc(cfg(feature = "http2")))]
pub mod h2;
pub mod header;
mod helpers;
@ -53,6 +56,7 @@ mod responses;
mod service;
pub mod test;
#[cfg(feature = "ws")]
#[cfg_attr(docsrs, doc(cfg(feature = "ws")))]
pub mod ws;
pub use self::builder::HttpServiceBuilder;
@ -69,6 +73,9 @@ pub use self::payload::{BoxedPayloadStream, Payload, PayloadStream};
pub use self::requests::{Request, RequestHead, RequestHeadType};
pub use self::responses::{Response, ResponseBuilder, ResponseHead};
pub use self::service::HttpService;
#[cfg(any(feature = "openssl", feature = "rustls"))]
#[cfg_attr(docsrs, doc(cfg(any(feature = "openssl", feature = "rustls"))))]
pub use self::service::TlsAcceptorConfig;
/// A major HTTP protocol version.
#[derive(Copy, Clone, Debug, PartialEq, Eq, Hash)]

View File

@ -3,7 +3,7 @@ use std::{cell::RefCell, ops, rc::Rc};
use bitflags::bitflags;
/// Represents various types of connection
#[derive(Copy, Clone, PartialEq, Debug)]
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum ConnectionType {
/// Close connection after response.
Close,

View File

@ -13,7 +13,8 @@ use crate::error::PayloadError;
/// A boxed payload stream.
pub type BoxedPayloadStream = Pin<Box<dyn Stream<Item = Result<Bytes, PayloadError>>>>;
#[deprecated(since = "4.0.0", note = "Renamed to `BoxedPayloadStream`.")]
#[doc(hidden)]
#[deprecated(since = "3.0.0", note = "Renamed to `BoxedPayloadStream`.")]
pub type PayloadStream = BoxedPayloadStream;
#[cfg(not(feature = "http2"))]
@ -96,12 +97,10 @@ where
#[cfg(test)]
mod tests {
use std::panic::{RefUnwindSafe, UnwindSafe};
use static_assertions::{assert_impl_all, assert_not_impl_any};
use super::*;
assert_impl_all!(Payload: Unpin);
assert_not_impl_any!(Payload: Send, Sync, UnwindSafe, RefUnwindSafe);
assert_not_impl_any!(Payload: Send, Sync);
}

View File

@ -113,14 +113,14 @@ impl<P> Request<P> {
#[inline]
/// Http message part of the request
pub fn head(&self) -> &RequestHead {
&*self.head
&self.head
}
#[inline]
#[doc(hidden)]
/// Mutable reference to a HTTP message part of the request
pub fn head_mut(&mut self) -> &mut RequestHead {
&mut *self.head
&mut self.head
}
/// Mutable reference to the message's headers.

View File

@ -144,7 +144,7 @@ impl ResponseBuilder {
self
}
/// Set connection type to Upgrade
/// Set connection type to `Upgrade`.
#[inline]
pub fn upgrade<V>(&mut self, value: V) -> &mut Self
where
@ -161,7 +161,7 @@ impl ResponseBuilder {
self
}
/// Force close connection, even if it is marked as keep-alive
/// Force-close connection, even if it is marked as keep-alive.
#[inline]
pub fn force_close(&mut self) -> &mut Self {
if let Some(parts) = self.inner() {

View File

@ -237,7 +237,7 @@ mod tests {
.await;
let mut stream = net::TcpStream::connect(srv.addr()).unwrap();
let _ = stream
stream
.write_all(b"GET /camel HTTP/1.1\r\nConnection: Close\r\n\r\n")
.unwrap();
let mut data = vec![];
@ -251,7 +251,7 @@ mod tests {
assert!(memmem::find(&data, b"content-length").is_none());
let mut stream = net::TcpStream::connect(srv.addr()).unwrap();
let _ = stream
stream
.write_all(b"GET /lower HTTP/1.1\r\nConnection: Close\r\n\r\n")
.unwrap();
let mut data = vec![];

View File

@ -83,13 +83,13 @@ impl<B> Response<B> {
/// Returns a reference to the head of this response.
#[inline]
pub fn head(&self) -> &ResponseHead {
&*self.head
&self.head
}
/// Returns a mutable reference to the head of this response.
#[inline]
pub fn head_mut(&mut self) -> &mut ResponseHead {
&mut *self.head
&mut self.head
}
/// Returns the status code of this response.

View File

@ -15,6 +15,7 @@ use actix_service::{
};
use futures_core::{future::LocalBoxFuture, ready};
use pin_project_lite::pin_project;
use tracing::error;
use crate::{
body::{BoxBody, MessageBody},
@ -23,7 +24,39 @@ use crate::{
h1, ConnectCallback, OnConnectData, Protocol, Request, Response, ServiceConfig,
};
/// A `ServiceFactory` for HTTP/1.1 or HTTP/2 protocol.
/// A [`ServiceFactory`] for HTTP/1.1 and HTTP/2 connections.
///
/// Use [`build`](Self::build) to begin constructing the service. Also see [`HttpServiceBuilder`].
///
/// # Automatic HTTP Version Selection
/// There are two ways to select the HTTP version of an incoming connection:
/// - One is to rely on the ALPN information that is provided when using TLS (HTTPS); both
/// versions are supported automatically when using either of the `.rustls()` or `.openssl()`
/// finalizing methods.
/// - The other is to read the first few bytes of the TCP stream. This is the only viable approach
/// for supporting H2C, which allows the HTTP/2 protocol to work over plaintext connections. Use
/// the `.tcp_auto_h2c()` finalizing method to enable this behavior.
///
/// # Examples
/// ```
/// # use std::convert::Infallible;
/// use actix_http::{HttpService, Request, Response, StatusCode};
///
/// // this service would be constructed in an actix_server::Server
///
/// # actix_rt::System::new().block_on(async {
/// HttpService::build()
/// // the builder finalizing method, other finalizers would not return an `HttpService`
/// .finish(|_req: Request| async move {
/// Ok::<_, Infallible>(
/// Response::build(StatusCode::OK).body("Hello!")
/// )
/// })
/// // the service finalizing method
/// // you can use `.tcp_auto_h2c()`, `.rustls()`, or `.openssl()` instead of `.tcp()`
/// .tcp();
/// # })
/// ```
pub struct HttpService<T, S, B, X = h1::ExpectHandler, U = h1::UpgradeHandler> {
srv: S,
cfg: ServiceConfig,
@ -162,7 +195,9 @@ where
U::Error: fmt::Display + Into<Response<BoxBody>>,
U::InitError: fmt::Debug,
{
/// Create simple tcp stream service
/// Creates TCP stream service from HTTP service.
///
/// The resulting service only supports HTTP/1.x.
pub fn tcp(
self,
) -> impl ServiceFactory<
@ -178,6 +213,61 @@ where
})
.and_then(self)
}
/// Creates TCP stream service from HTTP service that automatically selects HTTP/1.x or HTTP/2
/// on plaintext connections.
#[cfg(feature = "http2")]
#[cfg_attr(docsrs, doc(cfg(feature = "http2")))]
pub fn tcp_auto_h2c(
self,
) -> impl ServiceFactory<
TcpStream,
Config = (),
Response = (),
Error = DispatchError,
InitError = (),
> {
fn_service(move |io: TcpStream| async move {
// subset of HTTP/2 preface defined by RFC 9113 §3.4
// this subset was chosen to maximize the likelihood that peeking only once will allow us to
// reliably determine the version; otherwise it falls back to h1 and fails quickly if data
// on the wire is junk
const H2_PREFACE: &[u8] = b"PRI * HTTP/2";
let mut buf = [0; 12];
io.peek(&mut buf).await?;
let proto = if buf == H2_PREFACE {
Protocol::Http2
} else {
Protocol::Http1
};
let peer_addr = io.peer_addr().ok();
Ok((io, proto, peer_addr))
})
.and_then(self)
}
}
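
As a rough standalone sketch (not part of this diff), the detection above amounts to comparing the peeked bytes against the HTTP/2 preface prefix; anything else is treated as HTTP/1.x:

```rust
// illustrative only: mirrors the peek comparison used by `tcp_auto_h2c` above
const H2_PREFACE: &[u8] = b"PRI * HTTP/2";

fn looks_like_h2(peeked: &[u8]) -> bool {
    peeked.starts_with(H2_PREFACE)
}

fn main() {
    assert!(looks_like_h2(b"PRI * HTTP/2.0\r\n\r\nSM\r\n\r\n"));
    assert!(!looks_like_h2(b"GET / HTTP/1.1\r\n"));
}
```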
/// Configuration options used when accepting TLS connections.
#[cfg(any(feature = "openssl", feature = "rustls"))]
#[cfg_attr(docsrs, doc(cfg(any(feature = "openssl", feature = "rustls"))))]
#[derive(Debug, Default)]
pub struct TlsAcceptorConfig {
pub(crate) handshake_timeout: Option<std::time::Duration>,
}
#[cfg(any(feature = "openssl", feature = "rustls"))]
impl TlsAcceptorConfig {
/// Set TLS handshake timeout duration.
pub fn handshake_timeout(self, dur: std::time::Duration) -> Self {
Self {
handshake_timeout: Some(dur),
// ..self
}
}
}
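
A small usage sketch (mirroring the test updates later in this diff; requires one of the `openssl`/`rustls` features to be enabled): build the config with the builder-style setter and hand it to a `*_with_config` finalizer instead of the plain one.

```rust
use std::time::Duration;

use actix_http::TlsAcceptorConfig;

fn main() {
    // 5 second TLS handshake timeout; pass this to `.rustls_with_config(..)`
    // or `.openssl_with_config(..)` in place of `.rustls(..)` / `.openssl(..)`
    let _config = TlsAcceptorConfig::default().handshake_timeout(Duration::from_secs(5));
}
```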
#[cfg(feature = "openssl")]
@ -219,6 +309,7 @@ mod openssl {
U::InitError: fmt::Debug,
{
/// Create OpenSSL based service.
#[cfg_attr(docsrs, doc(cfg(feature = "openssl")))]
pub fn openssl(
self,
acceptor: SslAcceptor,
@ -229,7 +320,29 @@ mod openssl {
Error = TlsError<SslError, DispatchError>,
InitError = (),
> {
Acceptor::new(acceptor)
self.openssl_with_config(acceptor, TlsAcceptorConfig::default())
}
/// Create OpenSSL based service with custom TLS acceptor configuration.
#[cfg_attr(docsrs, doc(cfg(feature = "openssl")))]
pub fn openssl_with_config(
self,
acceptor: SslAcceptor,
tls_acceptor_config: TlsAcceptorConfig,
) -> impl ServiceFactory<
TcpStream,
Config = (),
Response = (),
Error = TlsError<SslError, DispatchError>,
InitError = (),
> {
let mut acceptor = Acceptor::new(acceptor);
if let Some(handshake_timeout) = tls_acceptor_config.handshake_timeout {
acceptor.set_handshake_timeout(handshake_timeout);
}
acceptor
.map_init_err(|_| {
unreachable!("TLS acceptor service factory does not error on init")
})
@ -291,9 +404,26 @@ mod rustls {
U::InitError: fmt::Debug,
{
/// Create Rustls based service.
#[cfg_attr(docsrs, doc(cfg(feature = "rustls")))]
pub fn rustls(
self,
config: ServerConfig,
) -> impl ServiceFactory<
TcpStream,
Config = (),
Response = (),
Error = TlsError<io::Error, DispatchError>,
InitError = (),
> {
self.rustls_with_config(config, TlsAcceptorConfig::default())
}
/// Create Rustls based service with custom TLS acceptor configuration.
#[cfg_attr(docsrs, doc(cfg(feature = "rustls")))]
pub fn rustls_with_config(
self,
mut config: ServerConfig,
tls_acceptor_config: TlsAcceptorConfig,
) -> impl ServiceFactory<
TcpStream,
Config = (),
@ -305,7 +435,13 @@ mod rustls {
protos.extend_from_slice(&config.alpn_protocols);
config.alpn_protocols = protos;
Acceptor::new(config)
let mut acceptor = Acceptor::new(config);
if let Some(handshake_timeout) = tls_acceptor_config.handshake_timeout {
acceptor.set_handshake_timeout(handshake_timeout);
}
acceptor
.map_init_err(|_| {
unreachable!("TLS acceptor service factory does not error on init")
})
@ -369,13 +505,13 @@ where
Box::pin(async move {
let expect = expect
.await
.map_err(|e| log::error!("Init http expect service error: {:?}", e))?;
.map_err(|e| error!("Init http expect service error: {:?}", e))?;
let upgrade = match upgrade {
Some(upgrade) => {
let upgrade = upgrade
.await
.map_err(|e| log::error!("Init http upgrade service error: {:?}", e))?;
.map_err(|e| error!("Init http upgrade service error: {:?}", e))?;
Some(upgrade)
}
None => None,
@ -383,7 +519,7 @@ where
let service = service
.await
.map_err(|e| log::error!("Init http service error: {:?}", e))?;
.map_err(|e| error!("Init http service error: {:?}", e))?;
Ok(HttpServiceHandler::new(
cfg,
@ -490,7 +626,7 @@ where
fn poll_ready(&self, cx: &mut Context<'_>) -> Poll<Result<(), Self::Error>> {
self._poll_ready(cx).map_err(|err| {
log::error!("HTTP service readiness error: {:?}", err);
error!("HTTP service readiness error: {:?}", err);
DispatchError::Service(err)
})
}
@ -666,7 +802,7 @@ where
self.poll(cx)
}
Err(err) => {
log::trace!("H2 handshake error: {}", err);
tracing::trace!("H2 handshake error: {}", err);
Poll::Ready(Err(err))
}
}

View File

@ -19,29 +19,7 @@ use crate::{
Request,
};
/// Test `Request` builder
///
/// ```ignore
/// # use http::{header, StatusCode};
/// # use actix_web::*;
/// use actix_web::test::TestRequest;
///
/// fn index(req: &HttpRequest) -> Response {
/// if let Some(hdr) = req.headers().get(header::CONTENT_TYPE) {
/// Response::Ok().into()
/// } else {
/// Response::BadRequest().into()
/// }
/// }
///
/// let resp = TestRequest::default().insert_header("content-type", "text/plain")
/// .run(&index)
/// .unwrap();
/// assert_eq!(resp.status(), StatusCode::OK);
///
/// let resp = TestRequest::default().run(&index).unwrap();
/// assert_eq!(resp.status(), StatusCode::BAD_REQUEST);
/// ```
/// Test `Request` builder.
pub struct TestRequest(Option<Inner>);
struct Inner {

View File

@ -1,7 +1,8 @@
use actix_codec::{Decoder, Encoder};
use bitflags::bitflags;
use bytes::{Bytes, BytesMut};
use bytestring::ByteString;
use tokio_util::codec::{Decoder, Encoder};
use tracing::error;
use super::{
frame::Parser,
@ -10,7 +11,7 @@ use super::{
};
/// A WebSocket message.
#[derive(Debug, PartialEq)]
#[derive(Debug, PartialEq, Eq)]
pub enum Message {
/// Text message.
Text(ByteString),
@ -35,7 +36,7 @@ pub enum Message {
}
/// A WebSocket frame.
#[derive(Debug, PartialEq)]
#[derive(Debug, PartialEq, Eq)]
pub enum Frame {
/// Text frame. Note that the codec does not validate UTF-8 encoding.
Text(Bytes),
@ -57,7 +58,7 @@ pub enum Frame {
}
/// A WebSocket continuation item.
#[derive(Debug, PartialEq)]
#[derive(Debug, PartialEq, Eq)]
pub enum Item {
FirstText(Bytes),
FirstBinary(Bytes),
@ -253,7 +254,7 @@ impl Decoder for Codec {
}
}
_ => {
log::error!("Unfinished fragment {:?}", opcode);
error!("Unfinished fragment {:?}", opcode);
Err(ProtocolError::ContinuationFragment(opcode))
}
};

View File

@ -73,10 +73,12 @@ mod inner {
use actix_service::{IntoService, Service};
use futures_core::stream::Stream;
use local_channel::mpsc;
use log::debug;
use pin_project_lite::pin_project;
use tracing::debug;
use actix_codec::{AsyncRead, AsyncWrite, Decoder, Encoder, Framed};
use actix_codec::Framed;
use tokio::io::{AsyncRead, AsyncWrite};
use tokio_util::codec::{Decoder, Encoder};
use crate::{body::BoxBody, Response};

View File

@ -1,7 +1,7 @@
use std::convert::TryFrom;
use bytes::{Buf, BufMut, BytesMut};
use log::debug;
use tracing::debug;
use super::{
mask::apply_mask,
@ -17,7 +17,6 @@ impl Parser {
fn parse_metadata(
src: &[u8],
server: bool,
max_size: usize,
) -> Result<Option<(usize, bool, OpCode, usize, Option<[u8; 4]>)>, ProtocolError> {
let chunk_len = src.len();
@ -60,20 +59,12 @@ impl Parser {
return Ok(None);
}
let len = u64::from_be_bytes(TryFrom::try_from(&src[idx..idx + 8]).unwrap());
if len > max_size as u64 {
return Err(ProtocolError::Overflow);
}
idx += 8;
len as usize
} else {
len as usize
};
// check for max allowed size
if length > max_size {
return Err(ProtocolError::Overflow);
}
let mask = if server {
if chunk_len < idx + 4 {
return Ok(None);
@ -98,11 +89,10 @@ impl Parser {
max_size: usize,
) -> Result<Option<(bool, OpCode, Option<BytesMut>)>, ProtocolError> {
// try to parse ws frame metadata
let (idx, finished, opcode, length, mask) =
match Parser::parse_metadata(src, server, max_size)? {
None => return Ok(None),
Some(res) => res,
};
let (idx, finished, opcode, length, mask) = match Parser::parse_metadata(src, server)? {
None => return Ok(None),
Some(res) => res,
};
// not enough data
if src.len() < idx + length {
@ -112,6 +102,13 @@ impl Parser {
// remove prefix
src.advance(idx);
// check for max allowed size
if length > max_size {
// drop the payload
src.advance(length);
return Err(ProtocolError::Overflow);
}
// no need for body
if length == 0 {
return Ok(Some((finished, opcode, None)));
@ -316,7 +313,7 @@ mod tests {
#[test]
fn test_parse_frame_no_mask() {
let mut buf = BytesMut::from(&[0b0000_0001u8, 0b0000_0001u8][..]);
buf.extend(&[1u8]);
buf.extend([1u8]);
assert!(Parser::parse(&mut buf, true, 1024).is_err());
@ -329,7 +326,7 @@ mod tests {
#[test]
fn test_parse_frame_max_size() {
let mut buf = BytesMut::from(&[0b0000_0001u8, 0b0000_0010u8][..]);
buf.extend(&[1u8, 1u8]);
buf.extend([1u8, 1u8]);
assert!(Parser::parse(&mut buf, true, 1).is_err());
@ -339,6 +336,30 @@ mod tests {
}
}
#[test]
fn test_parse_frame_max_size_recoverability() {
let mut buf = BytesMut::new();
// The first text frame with length == 2, payload doesn't matter.
buf.extend([0b0000_0001u8, 0b0000_0010u8, 0b0000_0000u8, 0b0000_0000u8]);
// Next binary frame with length == 2 and payload == `[0b1111_1111u8, 0b1111_1111u8]`.
buf.extend([0b0000_0010u8, 0b0000_0010u8, 0b1111_1111u8, 0b1111_1111u8]);
assert_eq!(buf.len(), 8);
assert!(matches!(
Parser::parse(&mut buf, false, 1),
Err(ProtocolError::Overflow)
));
assert_eq!(buf.len(), 4);
let frame = extract(Parser::parse(&mut buf, false, 2));
assert!(!frame.finished);
assert_eq!(frame.opcode, OpCode::Binary);
assert_eq!(
frame.payload,
Bytes::from(vec![0b1111_1111u8, 0b1111_1111u8])
);
assert_eq!(buf.len(), 0);
}
#[test]
fn test_ping_frame() {
let mut buf = BytesMut::new();

View File

@ -67,7 +67,7 @@ pub enum ProtocolError {
}
/// WebSocket handshake errors
#[derive(Debug, Clone, Copy, PartialEq, Display, Error)]
#[derive(Debug, Clone, Copy, PartialEq, Eq, Display, Error)]
pub enum HandshakeError {
/// Only get method is allowed.
#[display(fmt = "Method not allowed.")]

View File

@ -3,6 +3,9 @@ use std::{
fmt,
};
use base64::prelude::*;
use tracing::error;
/// Operation codes defined in [RFC 6455 §11.8].
///
/// [RFC 6455]: https://datatracker.ietf.org/doc/html/rfc6455#section-11.8
@ -58,7 +61,7 @@ impl From<OpCode> for u8 {
Ping => 9,
Pong => 10,
Bad => {
log::error!("Attempted to convert invalid opcode to u8. This is a bug.");
error!("Attempted to convert invalid opcode to u8. This is a bug.");
8 // if this somehow happens, a close frame will help us tear down quickly
}
}
@ -242,7 +245,7 @@ pub fn hash_key(key: &[u8]) -> [u8; 28] {
};
let mut hash_b64 = [0; 28];
let n = base64::encode_config_slice(&hash, base64::STANDARD, &mut hash_b64);
let n = BASE64_STANDARD.encode_slice(hash, &mut hash_b64).unwrap();
assert_eq!(n, 28);
hash_b64
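
For reference (not part of this diff), the base64 0.21 engine API that replaces the old `encode_config_slice` free function works roughly as follows:

```rust
use base64::prelude::*;

fn main() {
    // allocating variant
    assert_eq!(BASE64_STANDARD.encode(b"actix"), "YWN0aXg=");

    // fixed-buffer variant, as used by `hash_key` above
    let mut buf = [0u8; 8];
    let n = BASE64_STANDARD.encode_slice(b"actix", &mut buf).unwrap();
    assert_eq!(&buf[..n], b"YWN0aXg=");
}
```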

View File

@ -1,14 +1,15 @@
#![cfg(feature = "openssl")]
#![allow(clippy::uninlined_format_args)]
extern crate tls_openssl as openssl;
use std::{convert::Infallible, io};
use std::{convert::Infallible, io, time::Duration};
use actix_http::{
body::{BodyStream, BoxBody, SizedStream},
error::PayloadError,
header::{self, HeaderValue},
Error, HttpService, Method, Request, Response, StatusCode, Version,
Error, HttpService, Method, Request, Response, StatusCode, TlsAcceptorConfig, Version,
};
use actix_http_test::test_server;
use actix_service::{fn_service, ServiceFactoryExt};
@ -16,7 +17,7 @@ use actix_utils::future::{err, ok, ready};
use bytes::{Bytes, BytesMut};
use derive_more::{Display, Error};
use futures_core::Stream;
use futures_util::stream::{once, StreamExt as _};
use futures_util::{stream::once, StreamExt as _};
use openssl::{
pkey::PKey,
ssl::{SslAcceptor, SslMethod},
@ -89,7 +90,10 @@ async fn h2_1() -> io::Result<()> {
assert_eq!(req.version(), Version::HTTP_2);
ok::<_, Error>(Response::ok())
})
.openssl(tls_config())
.openssl_with_config(
tls_config(),
TlsAcceptorConfig::default().handshake_timeout(Duration::from_secs(5)),
)
.map_err(|_| ())
})
.await;

View File

@ -1,4 +1,5 @@
#![cfg(feature = "rustls")]
#![allow(clippy::uninlined_format_args)]
extern crate tls_rustls as rustls;
@ -8,13 +9,14 @@ use std::{
net::{SocketAddr, TcpStream as StdTcpStream},
sync::Arc,
task::Poll,
time::Duration,
};
use actix_http::{
body::{BodyStream, BoxBody, SizedStream},
error::PayloadError,
header::{self, HeaderName, HeaderValue},
Error, HttpService, Method, Request, Response, StatusCode, Version,
Error, HttpService, Method, Request, Response, StatusCode, TlsAcceptorConfig, Version,
};
use actix_http_test::test_server;
use actix_rt::pin;
@ -40,7 +42,7 @@ where
let body = stream.as_mut();
match ready!(body.poll_next(cx)) {
Some(Ok(bytes)) => buf.extend_from_slice(&*bytes),
Some(Ok(bytes)) => buf.extend_from_slice(&bytes),
None => return Poll::Ready(Ok(())),
Some(Err(err)) => return Poll::Ready(Err(err)),
}
@ -160,7 +162,10 @@ async fn h2_1() -> io::Result<()> {
assert_eq!(req.version(), Version::HTTP_2);
ok::<_, Error>(Response::ok())
})
.rustls(tls_config())
.rustls_with_config(
tls_config(),
TlsAcceptorConfig::default().handshake_timeout(Duration::from_secs(5)),
)
})
.await;
@ -212,6 +217,7 @@ async fn h2_content_length() {
let value = HeaderValue::from_static("0");
{
#[allow(clippy::single_element_loop)]
for &i in &[0] {
let req = srv
.request(Method::HEAD, srv.surl(&format!("/{}", i)))
@ -226,6 +232,7 @@ async fn h2_content_length() {
// assert_eq!(response.headers().get(&header), None);
}
#[allow(clippy::single_element_loop)]
for &i in &[1] {
let req = srv
.request(Method::GET, srv.surl(&format!("/{}", i)))

View File

@ -1,3 +1,5 @@
#![allow(clippy::uninlined_format_args)]
use std::{
convert::Infallible,
io::{Read, Write},
@ -7,18 +9,15 @@ use std::{
use actix_http::{
body::{self, BodyStream, BoxBody, SizedStream},
header, Error, HttpService, KeepAlive, Request, Response, StatusCode,
header, Error, HttpService, KeepAlive, Request, Response, StatusCode, Version,
};
use actix_http_test::test_server;
use actix_rt::time::sleep;
use actix_rt::{net::TcpStream, time::sleep};
use actix_service::fn_service;
use actix_utils::future::{err, ok, ready};
use bytes::Bytes;
use derive_more::{Display, Error};
use futures_util::{
stream::{once, StreamExt as _},
FutureExt as _,
};
use futures_util::{stream::once, FutureExt as _, StreamExt as _};
use regex::Regex;
#[actix_rt::test]
@ -858,3 +857,44 @@ async fn not_modified_spec_h1() {
srv.stop().await;
}
#[actix_rt::test]
async fn h2c_auto() {
let mut srv = test_server(|| {
HttpService::build()
.keep_alive(KeepAlive::Disabled)
.finish(|req: Request| {
let body = match req.version() {
Version::HTTP_11 => "h1",
Version::HTTP_2 => "h2",
_ => unreachable!(),
};
ok::<_, Infallible>(Response::ok().set_body(body))
})
.tcp_auto_h2c()
})
.await;
let req = srv.get("/");
assert_eq!(req.get_version(), &Version::HTTP_11);
let mut res = req.send().await.unwrap();
assert!(res.status().is_success());
assert_eq!(res.body().await.unwrap(), &b"h1"[..]);
// awc doesn't support forcing the version to http/2 so use h2 manually
let tcp = TcpStream::connect(srv.addr()).await.unwrap();
let (h2, connection) = h2::client::handshake(tcp).await.unwrap();
tokio::spawn(async move { connection.await.unwrap() });
let mut h2 = h2.ready().await.unwrap();
let request = ::http::Request::new(());
let (response, _) = h2.send_request(request, true).unwrap();
let (head, mut body) = response.await.unwrap().into_parts();
let body = body.data().await.unwrap().unwrap();
assert!(head.status.is_success());
assert_eq!(body, &b"h2"[..]);
srv.stop().await;
}

View File

@ -1,3 +1,5 @@
#![allow(clippy::uninlined_format_args)]
use std::{
cell::Cell,
convert::Infallible,

View File

@ -0,0 +1,26 @@
[package]
name = "actix-multipart-derive"
version = "0.5.0"
authors = ["Jacob Halsey <jacob@jhalsey.com>"]
description = "Multipart form derive macro for Actix Web"
keywords = ["http", "web", "framework", "async", "futures"]
homepage = "https://actix.rs"
repository = "https://github.com/actix/actix-web.git"
license = "MIT OR Apache-2.0"
edition = "2018"
[lib]
proc-macro = true
[dependencies]
darling = "0.14"
parse-size = "1"
proc-macro2 = "1"
quote = "1"
syn = "1"
[dev-dependencies]
actix-multipart = "0.5"
actix-web = "4"
rustversion = "1"
trybuild = "1"

View File

@ -0,0 +1 @@
../LICENSE-APACHE

View File

@ -0,0 +1 @@
../LICENSE-MIT

View File

@ -0,0 +1,3 @@
# actix-multipart-derive
> The derive macro implementation for actix-multipart.

View File

@ -0,0 +1,315 @@
//! Multipart form derive macro for Actix Web.
//!
//! See [`macro@MultipartForm`] for usage examples.
#![deny(rust_2018_idioms, nonstandard_style)]
#![warn(future_incompatible)]
#![doc(html_logo_url = "https://actix.rs/img/logo.png")]
#![doc(html_favicon_url = "https://actix.rs/favicon.ico")]
#![cfg_attr(docsrs, feature(doc_cfg))]
use std::{collections::HashSet, convert::TryFrom as _};
use darling::{FromDeriveInput, FromField, FromMeta};
use parse_size::parse_size;
use proc_macro::TokenStream;
use proc_macro2::Ident;
use quote::quote;
use syn::{parse_macro_input, Type};
#[derive(FromMeta)]
enum DuplicateField {
Ignore,
Deny,
Replace,
}
impl Default for DuplicateField {
fn default() -> Self {
Self::Ignore
}
}
#[derive(FromDeriveInput, Default)]
#[darling(attributes(multipart), default)]
struct MultipartFormAttrs {
deny_unknown_fields: bool,
duplicate_field: DuplicateField,
}
#[derive(FromField, Default)]
#[darling(attributes(multipart), default)]
struct FieldAttrs {
rename: Option<String>,
limit: Option<String>,
}
struct ParsedField<'t> {
serialization_name: String,
rust_name: &'t Ident,
limit: Option<usize>,
ty: &'t Type,
}
/// Implements `MultipartCollect` for a struct so that it can be used with the `MultipartForm`
/// extractor.
///
/// # Basic Use
///
/// Each field type should implement the `FieldReader` trait:
///
/// ```
/// use actix_multipart::form::{tempfile::TempFile, text::Text, MultipartForm};
///
/// #[derive(MultipartForm)]
/// struct ImageUpload {
/// description: Text<String>,
/// timestamp: Text<i64>,
/// image: TempFile,
/// }
/// ```
///
/// # Optional and List Fields
///
/// You can also use `Vec<T>` and `Option<T>` provided that `T: FieldReader`.
///
/// A [`Vec`] field corresponds to an upload with multiple parts under the [same field
/// name](https://www.rfc-editor.org/rfc/rfc7578#section-4.3).
///
/// ```
/// use actix_multipart::form::{tempfile::TempFile, text::Text, MultipartForm};
///
/// #[derive(MultipartForm)]
/// struct Form {
/// category: Option<Text<String>>,
/// files: Vec<TempFile>,
/// }
/// ```
///
/// # Field Renaming
///
/// You can use the `#[multipart(rename = "foo")]` attribute to receive a field by a different name.
///
/// ```
/// use actix_multipart::form::{tempfile::TempFile, MultipartForm};
///
/// #[derive(MultipartForm)]
/// struct Form {
/// #[multipart(rename = "files[]")]
/// files: Vec<TempFile>,
/// }
/// ```
///
/// # Field Limits
///
/// You can use the `#[multipart(limit = "<size>")]` attribute to set field level limits. The limit
/// string is parsed using [parse_size].
///
/// Note: the form is also subject to the global limits configured using `MultipartFormConfig`.
///
/// ```
/// use actix_multipart::form::{tempfile::TempFile, text::Text, MultipartForm};
///
/// #[derive(MultipartForm)]
/// struct Form {
/// #[multipart(limit = "2 KiB")]
/// description: Text<String>,
///
/// #[multipart(limit = "512 MiB")]
/// files: Vec<TempFile>,
/// }
/// ```
///
/// # Unknown Fields
///
/// By default fields with an unknown name are ignored. They can be rejected using the
/// `#[multipart(deny_unknown_fields)]` attribute:
///
/// ```
/// # use actix_multipart::form::MultipartForm;
/// #[derive(MultipartForm)]
/// #[multipart(deny_unknown_fields)]
/// struct Form { }
/// ```
///
/// # Duplicate Fields
///
/// The behaviour for when multiple fields with the same name are received can be changed using the
/// `#[multipart(duplicate_field = "<behavior>")]` attribute:
///
/// - "ignore": (default) Extra fields are ignored. I.e., the first one is persisted.
/// - "deny": A `MultipartError::UnsupportedField` error response is returned.
/// - "replace": Each field is processed, but only the last one is persisted.
///
/// Note that `Vec` fields will ignore this option.
///
/// ```
/// # use actix_multipart::form::MultipartForm;
/// #[derive(MultipartForm)]
/// #[multipart(duplicate_field = "deny")]
/// struct Form { }
/// ```
///
/// [parse_size]: https://docs.rs/parse-size/1/parse_size
#[proc_macro_derive(MultipartForm, attributes(multipart))]
pub fn impl_multipart_form(input: proc_macro::TokenStream) -> proc_macro::TokenStream {
let input: syn::DeriveInput = parse_macro_input!(input);
let name = &input.ident;
let data_struct = match &input.data {
syn::Data::Struct(data_struct) => data_struct,
_ => {
return compile_err(syn::Error::new(
input.ident.span(),
"`MultipartForm` can only be derived for structs",
))
}
};
let fields = match &data_struct.fields {
syn::Fields::Named(fields_named) => fields_named,
_ => {
return compile_err(syn::Error::new(
input.ident.span(),
"`MultipartForm` can only be derived for a struct with named fields",
))
}
};
let attrs = match MultipartFormAttrs::from_derive_input(&input) {
Ok(attrs) => attrs,
Err(err) => return err.write_errors().into(),
};
// Parse the field attributes
let parsed = match fields
.named
.iter()
.map(|field| {
let rust_name = field.ident.as_ref().unwrap();
let attrs = FieldAttrs::from_field(field).map_err(|err| err.write_errors())?;
let serialization_name = attrs.rename.unwrap_or_else(|| rust_name.to_string());
let limit = match attrs.limit.map(|limit| match parse_size(&limit) {
Ok(size) => Ok(usize::try_from(size).unwrap()),
Err(err) => Err(syn::Error::new(
field.ident.as_ref().unwrap().span(),
format!("Could not parse size limit `{}`: {}", limit, err),
)),
}) {
Some(Err(err)) => return Err(compile_err(err)),
limit => limit.map(Result::unwrap),
};
Ok(ParsedField {
serialization_name,
rust_name,
limit,
ty: &field.ty,
})
})
.collect::<Result<Vec<_>, TokenStream>>()
{
Ok(attrs) => attrs,
Err(err) => return err,
};
// Check that field names are unique
let mut set = HashSet::new();
for field in &parsed {
if !set.insert(field.serialization_name.clone()) {
return compile_err(syn::Error::new(
field.rust_name.span(),
format!("Multiple fields named: `{}`", field.serialization_name),
));
}
}
// Return value when a field name is not supported by the form
let unknown_field_result = if attrs.deny_unknown_fields {
quote!(::std::result::Result::Err(
::actix_multipart::MultipartError::UnsupportedField(field.name().to_string())
))
} else {
quote!(::std::result::Result::Ok(()))
};
// Value for duplicate action
let duplicate_field = match attrs.duplicate_field {
DuplicateField::Ignore => quote!(::actix_multipart::form::DuplicateField::Ignore),
DuplicateField::Deny => quote!(::actix_multipart::form::DuplicateField::Deny),
DuplicateField::Replace => quote!(::actix_multipart::form::DuplicateField::Replace),
};
// limit() implementation
let mut limit_impl = quote!();
for field in &parsed {
let name = &field.serialization_name;
if let Some(value) = field.limit {
limit_impl.extend(quote!(
#name => ::std::option::Option::Some(#value),
));
}
}
// handle_field() implementation
let mut handle_field_impl = quote!();
for field in &parsed {
let name = &field.serialization_name;
let ty = &field.ty;
handle_field_impl.extend(quote!(
#name => ::std::boxed::Box::pin(
<#ty as ::actix_multipart::form::FieldGroupReader>::handle_field(req, field, limits, state, #duplicate_field)
),
));
}
// from_state() implementation
let mut from_state_impl = quote!();
for field in &parsed {
let name = &field.serialization_name;
let rust_name = &field.rust_name;
let ty = &field.ty;
from_state_impl.extend(quote!(
#rust_name: <#ty as ::actix_multipart::form::FieldGroupReader>::from_state(#name, &mut state)?,
));
}
let gen = quote! {
impl ::actix_multipart::form::MultipartCollect for #name {
fn limit(field_name: &str) -> ::std::option::Option<usize> {
match field_name {
#limit_impl
_ => None,
}
}
fn handle_field<'t>(
req: &'t ::actix_web::HttpRequest,
field: ::actix_multipart::Field,
limits: &'t mut ::actix_multipart::form::Limits,
state: &'t mut ::actix_multipart::form::State,
) -> ::std::pin::Pin<::std::boxed::Box<dyn ::std::future::Future<Output = ::std::result::Result<(), ::actix_multipart::MultipartError>> + 't>> {
match field.name() {
#handle_field_impl
_ => return ::std::boxed::Box::pin(::std::future::ready(#unknown_field_result)),
}
}
fn from_state(mut state: ::actix_multipart::form::State) -> ::std::result::Result<Self, ::actix_multipart::MultipartError> {
Ok(Self {
#from_state_impl
})
}
}
};
gen.into()
}
/// Transform a syn error into a token stream for returning.
fn compile_err(err: syn::Error) -> TokenStream {
TokenStream::from(err.to_compile_error())
}

View File

@ -0,0 +1,16 @@
#[rustversion::stable(1.59)] // MSRV
#[test]
fn compile_macros() {
let t = trybuild::TestCases::new();
t.pass("tests/trybuild/all-required.rs");
t.pass("tests/trybuild/optional-and-list.rs");
t.pass("tests/trybuild/rename.rs");
t.pass("tests/trybuild/deny-unknown.rs");
t.pass("tests/trybuild/deny-duplicates.rs");
t.compile_fail("tests/trybuild/deny-parse-fail.rs");
t.pass("tests/trybuild/size-limits.rs");
t.compile_fail("tests/trybuild/size-limit-parse-fail.rs");
}

View File

@ -0,0 +1,19 @@
use actix_web::{web, App, Responder};
use actix_multipart::form::{tempfile::TempFile, text::Text, MultipartForm};
#[derive(Debug, MultipartForm)]
struct ImageUpload {
description: Text<String>,
timestamp: Text<i64>,
image: TempFile,
}
async fn handler(_form: MultipartForm<ImageUpload>) -> impl Responder {
"Hello World!"
}
#[actix_web::main]
async fn main() {
App::new().default_service(web::to(handler));
}

View File

@ -0,0 +1,16 @@
use actix_web::{web, App, Responder};
use actix_multipart::form::MultipartForm;
#[derive(MultipartForm)]
#[multipart(duplicate_field = "deny")]
struct Form {}
async fn handler(_form: MultipartForm<Form>) -> impl Responder {
"Hello World!"
}
#[actix_web::main]
async fn main() {
App::new().default_service(web::to(handler));
}

View File

@ -0,0 +1,7 @@
use actix_multipart::form::MultipartForm;
#[derive(MultipartForm)]
#[multipart(duplicate_field = "no")]
struct Form {}
fn main() {}

View File

@ -0,0 +1,5 @@
error: Unknown literal value `no`
--> tests/trybuild/deny-parse-fail.rs:4:31
|
4 | #[multipart(duplicate_field = "no")]
| ^^^^

View File

@ -0,0 +1,16 @@
use actix_web::{web, App, Responder};
use actix_multipart::form::MultipartForm;
#[derive(MultipartForm)]
#[multipart(deny_unknown_fields)]
struct Form {}
async fn handler(_form: MultipartForm<Form>) -> impl Responder {
"Hello World!"
}
#[actix_web::main]
async fn main() {
App::new().default_service(web::to(handler));
}

View File

@ -0,0 +1,18 @@
use actix_web::{web, App, Responder};
use actix_multipart::form::{tempfile::TempFile, text::Text, MultipartForm};
#[derive(MultipartForm)]
struct Form {
category: Option<Text<String>>,
files: Vec<TempFile>,
}
async fn handler(_form: MultipartForm<Form>) -> impl Responder {
"Hello World!"
}
#[actix_web::main]
async fn main() {
App::new().default_service(web::to(handler));
}

View File

@ -0,0 +1,18 @@
use actix_web::{web, App, Responder};
use actix_multipart::form::{tempfile::TempFile, MultipartForm};
#[derive(MultipartForm)]
struct Form {
#[multipart(rename = "files[]")]
files: Vec<TempFile>,
}
async fn handler(_form: MultipartForm<Form>) -> impl Responder {
"Hello World!"
}
#[actix_web::main]
async fn main() {
App::new().default_service(web::to(handler));
}

View File

@ -0,0 +1,21 @@
use actix_multipart::form::{text::Text, MultipartForm};
#[derive(MultipartForm)]
struct Form {
#[multipart(limit = "2 bytes")]
description: Text<String>,
}
#[derive(MultipartForm)]
struct Form2 {
#[multipart(limit = "2 megabytes")]
description: Text<String>,
}
#[derive(MultipartForm)]
struct Form3 {
#[multipart(limit = "four meters")]
description: Text<String>,
}
fn main() {}

View File

@ -0,0 +1,17 @@
error: Could not parse size limit `2 bytes`: invalid digit found in string
--> tests/trybuild/size-limit-parse-fail.rs:6:5
|
6 | description: Text<String>,
| ^^^^^^^^^^^
error: Could not parse size limit `2 megabytes`: invalid digit found in string
--> tests/trybuild/size-limit-parse-fail.rs:12:5
|
12 | description: Text<String>,
| ^^^^^^^^^^^
error: Could not parse size limit `four meters`: invalid digit found in string
--> tests/trybuild/size-limit-parse-fail.rs:18:5
|
18 | description: Text<String>,
| ^^^^^^^^^^^

View File

@ -0,0 +1,21 @@
use actix_web::{web, App, Responder};
use actix_multipart::form::{tempfile::TempFile, text::Text, MultipartForm};
#[derive(MultipartForm)]
struct Form {
#[multipart(limit = "2 KiB")]
description: Text<String>,
#[multipart(limit = "512 MiB")]
files: Vec<TempFile>,
}
async fn handler(_form: MultipartForm<Form>) -> impl Responder {
"Hello World!"
}
#[actix_web::main]
async fn main() {
App::new().default_service(web::to(handler));
}

View File

@ -1,31 +1,46 @@
# Changes
## Unreleased - 2021-xx-xx
## Unreleased - 2022-xx-xx
- Added `MultipartForm` typed data extractor. [#2883]
[#2883]: https://github.com/actix/actix-web/pull/2883
## 0.5.0 - 2023-01-21
- `Field::content_type()` now returns `Option<&mime::Mime>`. [#2885]
- Minimum supported Rust version (MSRV) is now 1.59 due to transitive `time` dependency.
[#2885]: https://github.com/actix/actix-web/pull/2885
## 0.4.0 - 2022-02-25
- No significant changes since `0.4.0-beta.13`.
## 0.4.0-beta.13 - 2022-01-31
- No significant changes since `0.4.0-beta.12`.
## 0.4.0-beta.12 - 2022-01-04
- Minimum supported Rust version (MSRV) is now 1.54.
## 0.4.0-beta.11 - 2021-12-27
- No significant changes since `0.4.0-beta.10`.
## 0.4.0-beta.10 - 2021-12-11
- No significant changes since `0.4.0-beta.9`.
## 0.4.0-beta.9 - 2021-12-01
- Polling `Field` after dropping `Multipart` now fails immediately instead of hanging forever. [#2463]
[#2463]: https://github.com/actix/actix-web/pull/2463
## 0.4.0-beta.8 - 2021-11-22
- Ensure a correct Content-Disposition header is included in every part of a multipart message. [#2451]
- Added `MultipartError::NoContentDisposition` variant. [#2451]
- Since Content-Disposition is now ensured, `Field::content_disposition` is now infallible. [#2451]
@ -35,52 +50,52 @@
[#2451]: https://github.com/actix/actix-web/pull/2451
## 0.4.0-beta.7 - 2021-10-20
- Minimum supported Rust version (MSRV) is now 1.52.
## 0.4.0-beta.6 - 2021-09-09
- Minimum supported Rust version (MSRV) is now 1.51.
## 0.4.0-beta.5 - 2021-06-17
- No notable changes.
## 0.4.0-beta.4 - 2021-04-02
- No notable changes.
## 0.4.0-beta.3 - 2021-03-09
- No notable changes.
## 0.4.0-beta.2 - 2021-02-10
- No notable changes.
## 0.4.0-beta.1 - 2021-01-07
- Fix multipart consuming payload before header checks. [#1513]
- Update `bytes` to `1.0`. [#1813]
[#1813]: https://github.com/actix/actix-web/pull/1813
[#1513]: https://github.com/actix/actix-web/pull/1513
## 0.3.0 - 2020-09-11
- No significant changes from `0.3.0-beta.2`.
## 0.3.0-beta.2 - 2020-09-10
- Update `actix-*` dependencies to latest versions.
## 0.3.0-beta.1 - 2020-07-15
- Update `actix-web` to 3.0.0-beta.1
## 0.3.0-alpha.1 - 2020-05-25
- Update `actix-web` to 3.0.0-alpha.3
- Bump minimum supported Rust version to 1.40
- Minimize `futures` dependencies

View File

@ -1,7 +1,10 @@
[package]
name = "actix-multipart"
version = "0.4.0-beta.13"
authors = ["Nikolay Kim <fafhrd91@gmail.com>"]
version = "0.5.0"
authors = [
"Nikolay Kim <fafhrd91@gmail.com>",
"Jacob Halsey <jacob@jhalsey.com>",
]
description = "Multipart form support for Actix Web"
keywords = ["http", "web", "framework", "async", "futures"]
homepage = "https://actix.rs"
@ -9,26 +12,46 @@ repository = "https://github.com/actix/actix-web.git"
license = "MIT OR Apache-2.0"
edition = "2018"
[package.metadata.docs.rs]
rustdoc-args = ["--cfg", "docsrs"]
all-features = true
[features]
default = ["tempfile", "derive"]
derive = ["actix-multipart-derive"]
tempfile = ["tempfile-dep", "tokio/fs"]
[lib]
name = "actix_multipart"
path = "src/lib.rs"
[dependencies]
actix-utils = "3.0.0"
actix-web = { version = "4.0.0", default-features = false }
actix-multipart-derive = { version = "=0.5.0", optional = true }
actix-utils = "3"
actix-web = { version = "4", default-features = false }
bytes = "1"
derive_more = "0.99.5"
futures-core = { version = "0.3.7", default-features = false, features = ["alloc"] }
futures-core = { version = "0.3.17", default-features = false, features = ["alloc"] }
futures-util = { version = "0.3.17", default-features = false, features = ["alloc"] }
httparse = "1.3"
local-waker = "0.1"
log = "0.4"
memchr = "2.5"
mime = "0.3"
twoway = "0.2"
serde = "1"
serde_json = "1"
serde_plain = "1"
# TODO(MSRV 1.60): replace with dep: prefix
tempfile-dep = { package = "tempfile", version = "3.4", optional = true }
tokio = { version = "1.18.5", features = ["sync"] }
[dev-dependencies]
actix-http = "3"
actix-multipart-rfc7578 = "0.10"
actix-rt = "2.2"
actix-http = "3.0.0"
futures-util = { version = "0.3.7", default-features = false, features = ["alloc"] }
tokio = { version = "1.8.4", features = ["sync"] }
actix-test = "0.1"
awc = "3"
futures-util = { version = "0.3.17", default-features = false, features = ["alloc"] }
tokio = { version = "1.18.5", features = ["sync"] }
tokio-stream = "0.1"

View File

@ -3,15 +3,15 @@
> Multipart form support for Actix Web.
[![crates.io](https://img.shields.io/crates/v/actix-multipart?label=latest)](https://crates.io/crates/actix-multipart)
[![Documentation](https://docs.rs/actix-multipart/badge.svg?version=0.4.0-beta.13)](https://docs.rs/actix-multipart/0.4.0-beta.13)
[![Version](https://img.shields.io/badge/rustc-1.54+-ab6000.svg)](https://blog.rust-lang.org/2021/05/06/Rust-1.54.0.html)
[![Documentation](https://docs.rs/actix-multipart/badge.svg?version=0.5.0)](https://docs.rs/actix-multipart/0.5.0)
![Version](https://img.shields.io/badge/rustc-1.59+-ab6000.svg)
![MIT or Apache 2.0 licensed](https://img.shields.io/crates/l/actix-multipart.svg)
<br />
[![dependency status](https://deps.rs/crate/actix-multipart/0.4.0-beta.13/status.svg)](https://deps.rs/crate/actix-multipart/0.4.0-beta.13)
[![dependency status](https://deps.rs/crate/actix-multipart/0.5.0/status.svg)](https://deps.rs/crate/actix-multipart/0.5.0)
[![Download](https://img.shields.io/crates/d/actix-multipart.svg)](https://crates.io/crates/actix-multipart)
[![Chat on Discord](https://img.shields.io/discord/771444961383153695?label=chat&logo=discord)](https://discord.gg/NWpN5mmg3x)
## Documentation & Resources
- [API Documentation](https://docs.rs/actix-multipart)
- Minimum Supported Rust Version (MSRV): 1.54
- Minimum Supported Rust Version (MSRV): 1.59

Some files were not shown because too many files have changed in this diff.